diff --git a/versions/unreleased/cloud/index.html b/versions/unreleased/cloud/index.html index a7d1056b5..b32118ba0 100644 --- a/versions/unreleased/cloud/index.html +++ b/versions/unreleased/cloud/index.html @@ -6483,7 +6483,7 @@

Welcome to Prefect Cloud

-

Prefect Cloud is a workflow coordination-as-a-service platform. Prefect Cloud provides all the capabilities of the Prefect server and UI in a hosted environment, plus additional features such as automations, workspaces, and organizations.

+

Prefect Cloud is a workflow orchestration platform. Prefect Cloud provides all the capabilities of the Prefect server and UI in a hosted environment, plus additional features such as automations, workspaces, and organizations.

Getting Started with Prefect Cloud

Ready to jump right in and start running with Prefect Cloud? See the Cloud Getting Started Guide to create a workspace, configure a local execution environment, and write your first Prefect Cloud-monitored flow run.

diff --git a/versions/unreleased/search/search_index.json b/versions/unreleased/search/search_index.json index 19a903e68..d40fc3f1d 100644 --- a/versions/unreleased/search/search_index.json +++ b/versions/unreleased/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"],"fields":{"title":{"boost":1000.0},"text":{"boost":1.0},"tags":{"boost":1000000.0}}},"docs":[{"location":"","title":"Welcome to Prefect","text":"

Prefect is a workflow orchestration tool empowering developers to build, observe, and react to data pipelines.

It's the easiest way to transform any Python function into a unit of work that can be observed and orchestrated. Just bring your Python code, sprinkle in a few decorators, and go!

With Prefect you gain:

Prefect UI","tags":["getting started","quick start","overview"],"boost":2},{"location":"#new-to-prefect","title":"New to Prefect?","text":"

Start with the tutorial, then check out the concepts for more in-depth information. For deeper dives into specific scenarios, explore our guides for common use cases.

Concepts Tutorial Guides

Alternatively, read on for the Prefect quickstart below.

","tags":["getting started","quick start","overview"],"boost":2},{"location":"#prefect-quickstart","title":"Prefect Quickstart","text":"","tags":["getting started","quick start","overview"],"boost":2},{"location":"#step-1-install-prefect","title":"Step 1: Install Prefect","text":"
pip install -U prefect\n

See the install guide for more detailed installation instructions.

","tags":["getting started","quick start","overview"],"boost":2},{"location":"#step-2-connect-to-prefects-api","title":"Step 2: Connect to Prefect's API","text":"

Sign up for a forever-free Prefect Cloud account or, alternatively, host your own server, which offers a subset of Prefect Cloud's features.

  1. If using Prefect Cloud, sign in with an existing account or create a new account at https://app.prefect.cloud/.
  2. If setting up a new account, create a workspace for your account.
  3. Use the prefect cloud login CLI command to log in to Prefect Cloud from your environment.

    prefect cloud login\n
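In non-interactive environments such as CI pipelines or containers, you can skip the browser prompt by passing an API key and workspace directly. As a sketch, with placeholder values:

prefect cloud login --key pnu_0123456789abcdef --workspace "my-account/my-workspace"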

","tags":["getting started","quick start","overview"],"boost":2},{"location":"#step-3-create-a-flow","title":"Step 3: Create a flow","text":"

The fastest way to get started with Prefect is to add a @flow decorator to any Python function.

Rules of Thumb

Here is an example flow named Repo Info that contains two tasks:

# my_flow.py\nimport httpx\nfrom prefect import flow, task\n\n@task(retries=2)\ndef get_repo_info(repo_owner: str, repo_name: str):\n\"\"\" Get info about a repo - will retry twice after failing \"\"\"\n    url = f\"https://api.github.com/repos/{repo_owner}/{repo_name}\"\n    api_response = httpx.get(url)\n    api_response.raise_for_status()\n    repo_info = api_response.json()\n    return repo_info\n\n\n@task\ndef get_contributors(repo_info: dict):\n    contributors_url = repo_info[\"contributors_url\"]\n    response = httpx.get(contributors_url)\n    response.raise_for_status()\n    contributors = response.json()\n    return contributors\n\n@flow(name=\"Repo Info\", log_prints=True)  \ndef repo_info(\n    repo_owner: str = \"PrefectHQ\", repo_name: str = \"prefect\"\n):\n    # call our `get_repo_info` task\n    repo_info = get_repo_info(repo_owner, repo_name)\n    print(f\"Stars \ud83c\udf20 : {repo_info['stargazers_count']}\")\n\n    # call our `get_contributors` task, \n    # passing in the upstream result\n    contributors = get_contributors(repo_info)\n    print(\n        f\"Number of contributors \ud83d\udc77: {len(contributors)}\"\n    )\n\n\nif __name__ == \"__main__\":\n    # Call a flow function for a local flow run!\n    repo_info()\n

","tags":["getting started","quick start","overview"],"boost":2},{"location":"#step-4-run-your-flow-locally","title":"Step 4: Run your flow locally","text":"

Call any function that you've decorated with a @flow decorator to see a local instance of a flow run.

python my_flow.py\n
18:21:40.235 | INFO    | prefect.engine - Created flow run 'fine-gorilla' for flow 'Repo Info'\n18:21:40.237 | INFO    | Flow run 'fine-gorilla' - View at https://app.prefect.cloud/account/0ff44498-d380-4d7b-bd68-9b52da03823f/workspace/c859e5b6-1539-4c77-81e0-444c2ddcaafe/flow-runs/flow-run/c5a85193-69ea-4577-aa0e-e11adbf1f659\n18:21:40.837 | INFO    | Flow run 'fine-gorilla' - Created task run 'get_repo_info-0' for task 'get_repo_info'\n18:21:40.838 | INFO    | Flow run 'fine-gorilla' - Executing 'get_repo_info-0' immediately...\n18:21:41.468 | INFO    | Task run 'get_repo_info-0' - Finished in state Completed()\n18:21:41.477 | INFO    | Flow run 'fine-gorilla' - Stars \ud83c\udf20 : 12340\n18:21:41.606 | INFO    | Flow run 'fine-gorilla' - Created task run 'get_contributors-0' for task 'get_contributors'\n18:21:41.607 | INFO    | Flow run 'fine-gorilla' - Executing 'get_contributors-0' immediately...\n18:21:42.225 | INFO    | Task run 'get_contributors-0' - Finished in state Completed()\n18:21:42.232 | INFO    | Flow run 'fine-gorilla' - Number of contributors \ud83d\udc77: 30\n18:21:42.383 | INFO    | Flow run 'fine-gorilla' - Finished in state Completed('All states completed.')\n

You'll find a link to the flow run page conveniently positioned at the top of your flow logs.

Local flow run execution is great for development and testing, but to schedule flow runs or trigger them based on events, you\u2019ll need to deploy your flows.

","tags":["getting started","quick start","overview"],"boost":2},{"location":"#step-5-deploy-the-flow","title":"Step 5: Deploy the flow","text":"

Deploying your flows informs the Prefect API of where, how, and when to run your flows.

Always run prefect deploy commands from the root level of your repo!

When you run the deploy command, Prefect will automatically detect any flows defined in your repository. Select the one you wish to deploy. Then, follow the \ud83e\uddd9 wizard to name your deployment, add an optional schedule, create a work pool, optionally configure remote flow code storage, and more!

prefect deploy\n

We recommend saving the configuration for the deployment.

Saving the configuration for your deployment will result in a prefect.yaml file populated with your first deployment. You can use this YAML file to edit and define multiple deployments for this repo.

","tags":["getting started","quick start","overview"],"boost":2},{"location":"#step-5-start-a-worker-and-run-the-deployed-flow","title":"Step 5: Start a worker and run the deployed flow","text":"

Start a worker to manage local flow execution. Each worker polls its assigned work pool.

In a new terminal, run:

prefect worker start --pool '<work-pool-name>'\n

Now that your worker is running, you are ready to kick off deployed flow runs from the UI or by running:

prefect deployment run '<flow-name>/<deployment-name>'\n

Check out this flow run's logs from the Flow Runs page in the UI or from the worker logs. Congrats on your first successfully deployed flow run! \ud83c\udf89

You've seen:

","tags":["getting started","quick start","overview"],"boost":2},{"location":"#next-steps","title":"Next Steps","text":"

To learn more, try our tutorial and guides, or go deeper with concepts.

Need help?

Get your questions answered with a Prefect product advocate by Booking A Rubber Duck!

","tags":["getting started","quick start","overview"],"boost":2},{"location":"#community","title":"Community","text":"

Changing from 'Orion'

With the 2.8.1 release, we removed references to \"Orion\" and replaced them with more explicit, conventional nomenclature throughout the codebase. These changes clarify the function of various components, commands, and variables. See the Release Notes for details.

Looking for Prefect 1 Core and Server?

Prefect 2 is used by thousands of teams in production. It is highly recommended that you use Prefect 2 for all new projects.

If you are looking for the Prefect 1 Core and Server documentation, it is still available at http://docs-v1.prefect.io/.

","tags":["getting started","quick start","overview"],"boost":2},{"location":"faq/","title":"Frequently Asked Questions","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#prefect","title":"Prefect","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#how-is-prefect-licensed","title":"How is Prefect licensed?","text":"

Prefect is licensed under the Apache 2.0 License, an OSI approved open-source license. If you have any questions about licensing, please contact us.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#is-the-prefect-v2-cloud-url-different-than-the-prefect-v1-cloud-url","title":"Is the Prefect v2 Cloud URL different than the Prefect v1 Cloud URL?","text":"

Yes. Prefect Cloud for v2 is at app.prefect.cloud, while Prefect Cloud for v1 is at cloud.prefect.io.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#the-prefect-orchestration-engine","title":"The Prefect Orchestration Engine","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#why-was-the-prefect-orchestration-engine-created","title":"Why was the Prefect orchestration engine created?","text":"

The Prefect orchestration engine has three major objectives:

As Prefect has matured, so has the modern data stack. The on-demand, dynamic, highly scalable workflows that used to exist principally in the domain of data science and analytics are now prevalent throughout all of data engineering. Few companies have workflows that don\u2019t deal with streaming data, uncertain timing, runtime logic, complex dependencies, versioning, or custom scheduling.

This means that the current generation of workflow managers is built around the wrong abstraction: the directed acyclic graph (DAG). DAGs are an increasingly arcane, constrained way of representing the dynamic, heterogeneous range of modern data and computation patterns.

Furthermore, as workflows have become more complex, it has become even more important to focus on the developer experience of building, testing, and monitoring them. Faced with an explosion of available tools, it is more important than ever for development teams to seek orchestration tools that will be compatible with any code, tools, or services they may require in the future.

And finally, this additional complexity means that providing clear and consistent insight into the behavior of the orchestration engine and any decisions it makes is critically important.

The Prefect orchestration engine represents a unified solution to these three problems.

The Prefect orchestration engine is capable of governing any code through a well-defined series of state transitions designed to maximize the user's understanding of what happened during execution. It's popular to describe \"workflows as code\" or \"orchestration as code,\" but the Prefect engine represents \"code as workflows\": rather than ask users to change how they work to meet the requirements of the orchestrator, we've defined an orchestrator that adapts to how our users work.

To achieve this, we've leveraged the familiar tools of native Python: first class functions, type annotations, and async support. Users are free to implement as much \u2014 or as little \u2014 of the Prefect engine as is useful for their objectives.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#if-im-using-prefect-cloud-2-do-i-still-need-to-run-a-prefect-server-locally","title":"If I\u2019m using Prefect Cloud 2, do I still need to run a Prefect server locally?","text":"

No, Prefect Cloud hosts an instance of the Prefect API for you. In fact, each workspace in Prefect Cloud corresponds directly to a single instance of the Prefect orchestration engine. See the Prefect Cloud Overview for more information.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#features","title":"Features","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#does-prefect-support-mapping","title":"Does Prefect support mapping?","text":"

Yes! For more information, see the Task.map API reference.

@flow\ndef my_flow():\n\n    # map over a constant\n    for i in range(10):\n        my_mapped_task(i)\n\n    # map over a task's output\n    l = list_task()\n    for i in l.wait().result():\n        my_mapped_task_2(i)\n

Note that when tasks are called on constant values, they cannot detect their upstream edges automatically. In this example, my_mapped_task_2 does not know that it is downstream from list_task(). Prefect will have convenience functions for detecting these associations, and Prefect's .map() operator will automatically track them.
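As a minimal sketch of the .map() operator mentioned above (the task and flow names here are illustrative, not taken from the reference):

from prefect import flow, task

@task
def add_one(x: int) -> int:
    # runs once for each element of the mapped iterable
    return x + 1

@flow
def mapped_flow():
    # .map submits one task run per element and returns a list of futures
    futures = add_one.map([1, 2, 3])
    return [future.result() for future in futures]

if __name__ == "__main__":
    print(mapped_flow())  # [2, 3, 4]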

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-i-enforce-ordering-between-tasks-that-dont-share-data","title":"Can I enforce ordering between tasks that don't share data?","text":"

Yes! For more information, see the Tasks section.
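One common approach is the wait_for keyword, which creates a state dependency between tasks that exchange no data. A minimal sketch with illustrative task names:

from prefect import flow, task

@task
def create_table():
    print("creating table")

@task
def load_data():
    print("loading data")

@flow
def etl():
    table = create_table.submit()
    # load_data receives nothing from create_table,
    # but will not start until it finishes
    load_data.submit(wait_for=[table])

if __name__ == "__main__":
    etl()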

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#does-prefect-support-proxies","title":"Does Prefect support proxies?","text":"

Yes!

Prefect supports communicating via proxies through the use of environment variables. You can read more about this in the Installation documentation and the article Using Prefect Cloud with proxies.
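As an illustrative sketch, the proxy is typically supplied through the standard environment variables before invoking Prefect (the proxy URL below is a placeholder):

export HTTPS_PROXY=http://proxy.example.com:3128
export HTTP_PROXY=http://proxy.example.com:3128
prefect cloud login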

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-i-run-prefect-flows-on-linux","title":"Can I run Prefect flows on Linux?","text":"

Yes!

See the Installation documentation and Linux installation notes for details on getting started with Prefect on Linux.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-i-run-prefect-flows-on-windows","title":"Can I run Prefect flows on Windows?","text":"

Yes!

See the Installation documentation and Windows installation notes for details on getting started with Prefect on Windows.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#what-external-requirements-does-prefect-have","title":"What external requirements does Prefect have?","text":"

Prefect does not have any additional requirements besides those installed by pip install --pre prefect. The entire system, including the UI and services, can be run in a single process via prefect server start and does not require Docker.

Prefect Cloud users do not need to worry about the Prefect database. Prefect Cloud uses PostgreSQL on GCP behind the scenes. To use PostgreSQL with a self-hosted Prefect server, users must provide the connection string for a running database via the PREFECT_API_DATABASE_CONNECTION_URL environment variable.
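For example, a self-hosted server can be pointed at PostgreSQL roughly like this (the credentials, host, and database name are placeholders):

export PREFECT_API_DATABASE_CONNECTION_URL="postgresql+asyncpg://postgres:yourpassword@localhost:5432/prefect"
prefect server start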

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#what-databases-does-prefect-support","title":"What databases does Prefect support?","text":"

A self-hosted Prefect server can work with SQLite and PostgreSQL. New Prefect installs default to a SQLite database hosted at ~/.prefect/prefect.db on Mac or Linux machines. SQLite and PostgreSQL are not installed by Prefect.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#how-do-i-choose-between-sqlite-and-postgres","title":"How do I choose between SQLite and Postgres?","text":"

SQLite generally works well for getting started and exploring Prefect. We have tested it with up to hundreds of thousands of task runs. Many users may be able to stay on SQLite for some time. However, for production uses, Prefect Cloud or self-hosted PostgreSQL is highly recommended. Under write-heavy workloads, SQLite performance can begin to suffer. Users running many flows with high degrees of parallelism or concurrency should use PostgreSQL.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#relationship-with-other-prefect-products","title":"Relationship with other Prefect products","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-a-flow-written-with-prefect-1-be-orchestrated-with-prefect-2-and-vice-versa","title":"Can a flow written with Prefect 1 be orchestrated with Prefect 2 and vice versa?","text":"

No. Flows written with the Prefect 1 client must be rewritten with the Prefect 2 client. For most flows, this should take just a few minutes. See our migration guide and our Upgrade to Prefect 2 post for more information.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-a-use-prefect-1-and-prefect-2-at-the-same-time-on-my-local-machine","title":"Can a use Prefect 1 and Prefect 2 at the same time on my local machine?","text":"

Yes. Just use different virtual environments.
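A sketch of one way to set this up, with illustrative paths:

python -m venv ~/venvs/prefect1
~/venvs/prefect1/bin/pip install "prefect<2"
python -m venv ~/venvs/prefect2
~/venvs/prefect2/bin/pip install -U prefect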

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"api-ref/","title":"API References","text":"

Prefect provides several APIs.

","tags":["API","Prefect API","Prefect Cloud","REST API","development","orchestration"]},{"location":"api-ref/rest-api-reference/","title":"Prefect server REST API reference","text":"","tags":["REST API","Prefect server"]},{"location":"api-ref/rest-api-reference/#the-prefect-server-rest-api","title":"The Prefect server REST API","text":"

The REST API documentation for a locally hosted open-source Prefect server is available below.
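As a quick sketch of calling that API directly, assuming a server started locally with prefect server start on the default port and an unchanged health route:

import httpx

# Default API address for a locally hosted Prefect server
api_url = "http://127.0.0.1:4200/api"

# Simple liveness check against the REST API
response = httpx.get(f"{api_url}/health")
response.raise_for_status()
print(response.json())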

","tags":["REST API","Prefect server"]},{"location":"api-ref/prefect/agent/","title":"prefect.agent","text":"","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent","title":"prefect.agent","text":"

The agent is responsible for checking for flow runs that are ready to run and starting their execution.

","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.OrionAgent","title":"OrionAgent","text":"

Bases: PrefectAgent

Deprecated. Use PrefectAgent instead.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/agent.py
@deprecated_callable(start_date=\"Feb 2023\", help=\"Use `PrefectAgent` instead.\")\nclass OrionAgent(PrefectAgent):\n\"\"\"\n    Deprecated. Use `PrefectAgent` instead.\n    \"\"\"\n
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent","title":"PrefectAgent","text":"Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/agent.py
class PrefectAgent:\n    @experimental_parameter(\n        \"work_pool_name\", group=\"work_pools\", when=lambda y: y is not None\n    )\n    def __init__(\n        self,\n        work_queues: List[str] = None,\n        work_queue_prefix: Union[str, List[str]] = None,\n        work_pool_name: str = None,\n        prefetch_seconds: int = None,\n        default_infrastructure: Infrastructure = None,\n        default_infrastructure_document_id: UUID = None,\n        limit: Optional[int] = None,\n    ) -> None:\n        if default_infrastructure and default_infrastructure_document_id:\n            raise ValueError(\n                \"Provide only one of 'default_infrastructure' and\"\n                \" 'default_infrastructure_document_id'.\"\n            )\n\n        self.work_queues: Set[str] = set(work_queues) if work_queues else set()\n        self.work_pool_name = work_pool_name\n        self.prefetch_seconds = prefetch_seconds\n        self.submitting_flow_run_ids = set()\n        self.cancelling_flow_run_ids = set()\n        self.scheduled_task_scopes = set()\n        self.started = False\n        self.logger = get_logger(\"agent\")\n        self.task_group: Optional[anyio.abc.TaskGroup] = None\n        self.limit: Optional[int] = limit\n        self.limiter: Optional[anyio.CapacityLimiter] = None\n        self.client: Optional[PrefectClient] = None\n\n        if isinstance(work_queue_prefix, str):\n            work_queue_prefix = [work_queue_prefix]\n        self.work_queue_prefix = work_queue_prefix\n\n        self._work_queue_cache_expiration: pendulum.DateTime = None\n        self._work_queue_cache: List[WorkQueue] = []\n\n        if default_infrastructure:\n            self.default_infrastructure_document_id = (\n                default_infrastructure._block_document_id\n            )\n            self.default_infrastructure = default_infrastructure\n        elif default_infrastructure_document_id:\n            self.default_infrastructure_document_id = default_infrastructure_document_id\n            self.default_infrastructure = None\n        else:\n            self.default_infrastructure = Process()\n            self.default_infrastructure_document_id = None\n\n    async def update_matched_agent_work_queues(self):\n        if self.work_queue_prefix:\n            if self.work_pool_name:\n                matched_queues = await self.client.read_work_queues(\n                    work_pool_name=self.work_pool_name,\n                    work_queue_filter=WorkQueueFilter(\n                        name=WorkQueueFilterName(startswith_=self.work_queue_prefix)\n                    ),\n                )\n            else:\n                matched_queues = await self.client.match_work_queues(\n                    self.work_queue_prefix\n                )\n            matched_queues = set(q.name for q in matched_queues)\n            if matched_queues != self.work_queues:\n                new_queues = matched_queues - self.work_queues\n                removed_queues = self.work_queues - matched_queues\n                if new_queues:\n                    self.logger.info(\n                        f\"Matched new work queues: {', '.join(new_queues)}\"\n                    )\n                if removed_queues:\n                    self.logger.info(\n                        f\"Work queues no longer matched: {', '.join(removed_queues)}\"\n                    )\n            self.work_queues = matched_queues\n\n    async def get_work_queues(self) -> AsyncIterator[WorkQueue]:\n\"\"\"\n        Loads the 
work queue objects corresponding to the agent's target work\n        queues. If any of them don't exist, they are created.\n        \"\"\"\n\n        # if the queue cache has not expired, yield queues from the cache\n        now = pendulum.now(\"UTC\")\n        if (self._work_queue_cache_expiration or now) > now:\n            for queue in self._work_queue_cache:\n                yield queue\n            return\n\n        # otherwise clear the cache, set the expiration for 30 seconds, and\n        # reload the work queues\n        self._work_queue_cache.clear()\n        self._work_queue_cache_expiration = now.add(seconds=30)\n\n        await self.update_matched_agent_work_queues()\n\n        for name in self.work_queues:\n            try:\n                work_queue = await self.client.read_work_queue_by_name(\n                    work_pool_name=self.work_pool_name, name=name\n                )\n            except ObjectNotFound:\n                # if the work queue wasn't found, create it\n                if not self.work_queue_prefix:\n                    # do not attempt to create work queues if the agent is polling for\n                    # queues using a regex\n                    try:\n                        work_queue = await self.client.create_work_queue(\n                            work_pool_name=self.work_pool_name, name=name\n                        )\n                        if self.work_pool_name:\n                            self.logger.info(\n                                f\"Created work queue {name!r} in work pool\"\n                                f\" {self.work_pool_name!r}.\"\n                            )\n                        else:\n                            self.logger.info(f\"Created work queue '{name}'.\")\n\n                    # if creating it raises an exception, it was probably just\n                    # created by some other agent; rather than entering a re-read\n                    # loop with new error handling, we log the exception and\n                    # continue.\n                    except Exception:\n                        self.logger.exception(f\"Failed to create work queue {name!r}.\")\n                        continue\n\n            self._work_queue_cache.append(work_queue)\n            yield work_queue\n\n    async def get_and_submit_flow_runs(self) -> List[FlowRun]:\n\"\"\"\n        The principle method on agents. Queries for scheduled flow runs and submits\n        them for execution in parallel.\n        \"\"\"\n        if not self.started:\n            raise RuntimeError(\n                \"Agent is not started. 
Use `async with PrefectAgent()...`\"\n            )\n\n        self.logger.debug(\"Checking for scheduled flow runs...\")\n\n        before = pendulum.now(\"utc\").add(\n            seconds=self.prefetch_seconds or PREFECT_AGENT_PREFETCH_SECONDS.value()\n        )\n\n        submittable_runs: List[FlowRun] = []\n\n        if self.work_pool_name:\n            responses = await self.client.get_scheduled_flow_runs_for_work_pool(\n                work_pool_name=self.work_pool_name,\n                work_queue_names=[wq.name async for wq in self.get_work_queues()],\n                scheduled_before=before,\n            )\n            submittable_runs.extend([response.flow_run for response in responses])\n\n        else:\n            # load runs from each work queue\n            async for work_queue in self.get_work_queues():\n                # print a nice message if the work queue is paused\n                if work_queue.is_paused:\n                    self.logger.info(\n                        f\"Work queue {work_queue.name!r} ({work_queue.id}) is paused.\"\n                    )\n\n                else:\n                    try:\n                        queue_runs = await self.client.get_runs_in_work_queue(\n                            id=work_queue.id, limit=10, scheduled_before=before\n                        )\n                        submittable_runs.extend(queue_runs)\n                    except ObjectNotFound:\n                        self.logger.error(\n                            f\"Work queue {work_queue.name!r} ({work_queue.id}) not\"\n                            \" found.\"\n                        )\n                    except Exception as exc:\n                        self.logger.exception(exc)\n\n            submittable_runs.sort(key=lambda run: run.next_scheduled_start_time)\n\n        for flow_run in submittable_runs:\n            # don't resubmit a run\n            if flow_run.id in self.submitting_flow_run_ids:\n                continue\n\n            try:\n                if self.limiter:\n                    self.limiter.acquire_on_behalf_of_nowait(flow_run.id)\n            except anyio.WouldBlock:\n                self.logger.info(\n                    f\"Flow run limit reached; {self.limiter.borrowed_tokens} flow runs\"\n                    \" in progress.\"\n                )\n                break\n            else:\n                self.logger.info(f\"Submitting flow run '{flow_run.id}'\")\n                self.submitting_flow_run_ids.add(flow_run.id)\n                self.task_group.start_soon(\n                    self.submit_run,\n                    flow_run,\n                )\n\n        return list(\n            filter(lambda run: run.id in self.submitting_flow_run_ids, submittable_runs)\n        )\n\n    async def check_for_cancelled_flow_runs(self):\n        if not self.started:\n            raise RuntimeError(\n                \"Agent is not started. 
Use `async with PrefectAgent()...`\"\n            )\n\n        self.logger.debug(\"Checking for cancelled flow runs...\")\n\n        work_queue_filter = (\n            WorkQueueFilter(name=WorkQueueFilterName(any_=list(self.work_queues)))\n            if self.work_queues\n            else None\n        )\n\n        work_pool_filter = (\n            WorkPoolFilter(name=WorkPoolFilterName(any_=[self.work_pool_name]))\n            if self.work_pool_name\n            else WorkPoolFilter(name=WorkPoolFilterName(any_=[\"default-agent-pool\"]))\n        )\n        named_cancelling_flow_runs = await self.client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLED]),\n                    name=FlowRunFilterStateName(any_=[\"Cancelling\"]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(not_any_=list(self.cancelling_flow_run_ids)),\n            ),\n            work_pool_filter=work_pool_filter,\n            work_queue_filter=work_queue_filter,\n        )\n\n        typed_cancelling_flow_runs = await self.client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLING]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(not_any_=list(self.cancelling_flow_run_ids)),\n            ),\n            work_pool_filter=work_pool_filter,\n            work_queue_filter=work_queue_filter,\n        )\n\n        cancelling_flow_runs = named_cancelling_flow_runs + typed_cancelling_flow_runs\n\n        if cancelling_flow_runs:\n            self.logger.info(\n                f\"Found {len(cancelling_flow_runs)} flow runs awaiting cancellation.\"\n            )\n\n        for flow_run in cancelling_flow_runs:\n            self.cancelling_flow_run_ids.add(flow_run.id)\n            self.task_group.start_soon(self.cancel_run, flow_run)\n\n        return cancelling_flow_runs\n\n    async def cancel_run(self, flow_run: FlowRun) -> None:\n\"\"\"\n        Cancel a flow run by killing its infrastructure\n        \"\"\"\n        if not flow_run.infrastructure_pid:\n            self.logger.error(\n                f\"Flow run '{flow_run.id}' does not have an infrastructure pid\"\n                \" attached. Cancellation cannot be guaranteed.\"\n            )\n            await self._mark_flow_run_as_cancelled(\n                flow_run,\n                state_updates={\n                    \"message\": (\n                        \"This flow run is missing infrastructure tracking information\"\n                        \" and cancellation cannot be guaranteed.\"\n                    )\n                },\n            )\n            return\n\n        try:\n            infrastructure = await self.get_infrastructure(flow_run)\n        except Exception:\n            self.logger.exception(\n                f\"Failed to get infrastructure for flow run '{flow_run.id}'. 
\"\n                \"Flow run cannot be cancelled.\"\n            )\n            # Note: We leave this flow run in the cancelling set because it cannot be\n            #       cancelled and this will prevent additional attempts.\n            return\n\n        if not hasattr(infrastructure, \"kill\"):\n            self.logger.error(\n                f\"Flow run '{flow_run.id}' infrastructure {infrastructure.type!r} \"\n                \"does not support killing created infrastructure. \"\n                \"Cancellation cannot be guaranteed.\"\n            )\n            return\n\n        self.logger.info(\n            f\"Killing {infrastructure.type} {flow_run.infrastructure_pid} for flow run \"\n            f\"'{flow_run.id}'...\"\n        )\n        try:\n            await infrastructure.kill(flow_run.infrastructure_pid)\n        except InfrastructureNotFound as exc:\n            self.logger.warning(f\"{exc} Marking flow run as cancelled.\")\n            await self._mark_flow_run_as_cancelled(flow_run)\n        except InfrastructureNotAvailable as exc:\n            self.logger.warning(f\"{exc} Flow run cannot be cancelled by this agent.\")\n        except Exception:\n            self.logger.exception(\n                \"Encountered exception while killing infrastructure for flow run \"\n                f\"'{flow_run.id}'. Flow run may not be cancelled.\"\n            )\n            # We will try again on generic exceptions\n            self.cancelling_flow_run_ids.remove(flow_run.id)\n            return\n        else:\n            await self._mark_flow_run_as_cancelled(flow_run)\n            self.logger.info(f\"Cancelled flow run '{flow_run.id}'!\")\n\n    async def _mark_flow_run_as_cancelled(\n        self, flow_run: FlowRun, state_updates: Optional[dict] = None\n    ) -> None:\n        state_updates = state_updates or {}\n        state_updates.setdefault(\"name\", \"Cancelled\")\n        state_updates.setdefault(\"type\", StateType.CANCELLED)\n        state = flow_run.state.copy(update=state_updates)\n\n        await self.client.set_flow_run_state(flow_run.id, state, force=True)\n\n        # Do not remove the flow run from the cancelling set immediately because\n        # the API caches responses for the `read_flow_runs` and we do not want to\n        # duplicate cancellations.\n        await self._schedule_task(\n            60 * 10, self.cancelling_flow_run_ids.remove, flow_run.id\n        )\n\n    async def get_infrastructure(self, flow_run: FlowRun) -> Infrastructure:\n        deployment = await self.client.read_deployment(flow_run.deployment_id)\n\n        flow = await self.client.read_flow(deployment.flow_id)\n\n        # overrides only apply when configuring known infra blocks\n        if not deployment.infrastructure_document_id:\n            if self.default_infrastructure:\n                infra_block = self.default_infrastructure\n            else:\n                infra_document = await self.client.read_block_document(\n                    self.default_infrastructure_document_id\n                )\n                infra_block = Block._from_block_document(infra_document)\n\n            # Add flow run metadata to the infrastructure\n            prepared_infrastructure = infra_block.prepare_for_flow_run(\n                flow_run, deployment=deployment, flow=flow\n            )\n            return prepared_infrastructure\n\n        ## get infra\n        infra_document = await self.client.read_block_document(\n            deployment.infrastructure_document_id\n        )\n\n     
   # this piece of logic applies any overrides that may have been set on the\n        # deployment; overrides are defined as dot.delimited paths on possibly nested\n        # attributes of the infrastructure block\n        doc_dict = infra_document.dict()\n        infra_dict = doc_dict.get(\"data\", {})\n        for override, value in (deployment.infra_overrides or {}).items():\n            nested_fields = override.split(\".\")\n            data = infra_dict\n            for field in nested_fields[:-1]:\n                data = data[field]\n\n            # once we reach the end, set the value\n            data[nested_fields[-1]] = value\n\n        # reconstruct the infra block\n        doc_dict[\"data\"] = infra_dict\n        infra_document = BlockDocument(**doc_dict)\n        infrastructure_block = Block._from_block_document(infra_document)\n\n        # TODO: Here the agent may update the infrastructure with agent-level settings\n\n        # Add flow run metadata to the infrastructure\n        prepared_infrastructure = infrastructure_block.prepare_for_flow_run(\n            flow_run, deployment=deployment, flow=flow\n        )\n\n        return prepared_infrastructure\n\n    async def submit_run(self, flow_run: FlowRun) -> None:\n\"\"\"\n        Submit a flow run to the infrastructure\n        \"\"\"\n        ready_to_submit = await self._propose_pending_state(flow_run)\n\n        if ready_to_submit:\n            try:\n                infrastructure = await self.get_infrastructure(flow_run)\n            except Exception as exc:\n                self.logger.exception(\n                    f\"Failed to get infrastructure for flow run '{flow_run.id}'.\"\n                )\n                await self._propose_failed_state(flow_run, exc)\n                if self.limiter:\n                    self.limiter.release_on_behalf_of(flow_run.id)\n            else:\n                # Wait for submission to be completed. Note that the submission function\n                # may continue to run in the background after this exits.\n                readiness_result = await self.task_group.start(\n                    self._submit_run_and_capture_errors, flow_run, infrastructure\n                )\n\n                if readiness_result and not isinstance(readiness_result, Exception):\n                    try:\n                        await self.client.update_flow_run(\n                            flow_run_id=flow_run.id,\n                            infrastructure_pid=str(readiness_result),\n                        )\n                    except Exception:\n                        self.logger.exception(\n                            \"An error occured while setting the `infrastructure_pid`\"\n                            f\" on flow run {flow_run.id!r}. 
The flow run will not be\"\n                            \" cancellable.\"\n                        )\n\n                self.logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n\n        else:\n            # If the run is not ready to submit, release the concurrency slot\n            if self.limiter:\n                self.limiter.release_on_behalf_of(flow_run.id)\n\n        self.submitting_flow_run_ids.remove(flow_run.id)\n\n    async def _submit_run_and_capture_errors(\n        self,\n        flow_run: FlowRun,\n        infrastructure: Infrastructure,\n        task_status: anyio.abc.TaskStatus = None,\n    ) -> Union[InfrastructureResult, Exception]:\n        # Note: There is not a clear way to determine if task_status.started() has been\n        #       called without peeking at the internal `_future`. Ideally we could just\n        #       check if the flow run id has been removed from `submitting_flow_run_ids`\n        #       but it is not so simple to guarantee that this coroutine yields back\n        #       to `submit_run` to execute that line when exceptions are raised during\n        #       submission.\n        try:\n            result = await infrastructure.run(task_status=task_status)\n        except Exception as exc:\n            if not task_status._future.done():\n                # This flow run was being submitted and did not start successfully\n                self.logger.exception(\n                    f\"Failed to submit flow run '{flow_run.id}' to infrastructure.\"\n                )\n                # Mark the task as started to prevent agent crash\n                task_status.started(exc)\n                await self._propose_crashed_state(\n                    flow_run, \"Flow run could not be submitted to infrastructure\"\n                )\n            else:\n                self.logger.exception(\n                    f\"An error occured while monitoring flow run '{flow_run.id}'. \"\n                    \"The flow run will not be marked as failed, but an issue may have \"\n                    \"occurred.\"\n                )\n            return exc\n        finally:\n            if self.limiter:\n                self.limiter.release_on_behalf_of(flow_run.id)\n\n        if not task_status._future.done():\n            self.logger.error(\n                f\"Infrastructure returned without reporting flow run '{flow_run.id}' \"\n                \"as started or raising an error. This behavior is not expected and \"\n                \"generally indicates improper implementation of infrastructure. The \"\n                \"flow run will not be marked as failed, but an issue may have occurred.\"\n            )\n            # Mark the task as started to prevent agent crash\n            task_status.started()\n\n        if result.status_code != 0:\n            await self._propose_crashed_state(\n                flow_run,\n                (\n                    \"Flow run infrastructure exited with non-zero status code\"\n                    f\" {result.status_code}.\"\n                ),\n            )\n\n        return result\n\n    async def _propose_pending_state(self, flow_run: FlowRun) -> bool:\n        state = flow_run.state\n        try:\n            state = await propose_state(self.client, Pending(), flow_run_id=flow_run.id)\n        except Abort as exc:\n            self.logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}'. 
\"\n                    f\"Server sent an abort signal: {exc}\"\n                ),\n            )\n            return False\n        except Exception:\n            self.logger.error(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n                exc_info=True,\n            )\n            return False\n\n        if not state.is_pending():\n            self.logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}': \"\n                    f\"Server returned a non-pending state {state.type.value!r}\"\n                ),\n            )\n            return False\n\n        return True\n\n    async def _propose_failed_state(self, flow_run: FlowRun, exc: Exception) -> None:\n        try:\n            await propose_state(\n                self.client,\n                await exception_to_failed_state(message=\"Submission failed.\", exc=exc),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # We've already failed, no need to note the abort but we don't want it to\n            # raise in the agent process\n            pass\n        except Exception:\n            self.logger.error(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n                exc_info=True,\n            )\n\n    async def _propose_crashed_state(self, flow_run: FlowRun, message: str) -> None:\n        try:\n            state = await propose_state(\n                self.client,\n                Crashed(message=message),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # Flow run already marked as failed\n            pass\n        except Exception:\n            self.logger.exception(f\"Failed to update state of flow run '{flow_run.id}'\")\n        else:\n            if state.is_crashed():\n                self.logger.info(\n                    f\"Reported flow run '{flow_run.id}' as crashed: {message}\"\n                )\n\n    async def _schedule_task(self, __in_seconds: int, fn, *args, **kwargs):\n\"\"\"\n        Schedule a background task to start after some time.\n\n        These tasks will be run immediately when the agent exits instead of waiting.\n\n        The function may be async or sync. 
Async functions will be awaited.\n        \"\"\"\n\n        async def wrapper(task_status):\n            # If we are shutting down, do not sleep; otherwise sleep until the scheduled\n            # time or shutdown\n            if self.started:\n                with anyio.CancelScope() as scope:\n                    self.scheduled_task_scopes.add(scope)\n                    task_status.started()\n                    await anyio.sleep(__in_seconds)\n\n                self.scheduled_task_scopes.remove(scope)\n            else:\n                task_status.started()\n\n            result = fn(*args, **kwargs)\n            if inspect.iscoroutine(result):\n                await result\n\n        await self.task_group.start(wrapper)\n\n    # Context management ---------------------------------------------------------------\n\n    async def start(self):\n        self.started = True\n        self.task_group = anyio.create_task_group()\n        self.limiter = (\n            anyio.CapacityLimiter(self.limit) if self.limit is not None else None\n        )\n        self.client = get_client()\n        await self.client.__aenter__()\n        await self.task_group.__aenter__()\n\n    async def shutdown(self, *exc_info):\n        self.started = False\n        # We must cancel scheduled task scopes before closing the task group\n        for scope in self.scheduled_task_scopes:\n            scope.cancel()\n        await self.task_group.__aexit__(*exc_info)\n        await self.client.__aexit__(*exc_info)\n        self.task_group = None\n        self.client = None\n        self.submitting_flow_run_ids.clear()\n        self.cancelling_flow_run_ids.clear()\n        self.scheduled_task_scopes.clear()\n        self._work_queue_cache_expiration = None\n        self._work_queue_cache = []\n\n    async def __aenter__(self):\n        await self.start()\n        return self\n\n    async def __aexit__(self, *exc_info):\n        await self.shutdown(*exc_info)\n
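Following the async-context-manager usage hinted at in the error messages above, a minimal polling loop might look like the sketch below (the work queue name, prefetch window, and poll interval are illustrative):

import anyio
from prefect.agent import PrefectAgent

async def main():
    # Entering the context starts the agent and opens its API client
    async with PrefectAgent(work_queues=["default"], prefetch_seconds=10) as agent:
        while True:
            # Query for scheduled runs and submit them to infrastructure
            await agent.get_and_submit_flow_runs()
            await anyio.sleep(15)

if __name__ == "__main__":
    anyio.run(main)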
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.cancel_run","title":"cancel_run async","text":"

Cancel a flow run by killing its infrastructure

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/agent.py
async def cancel_run(self, flow_run: FlowRun) -> None:\n\"\"\"\n    Cancel a flow run by killing its infrastructure\n    \"\"\"\n    if not flow_run.infrastructure_pid:\n        self.logger.error(\n            f\"Flow run '{flow_run.id}' does not have an infrastructure pid\"\n            \" attached. Cancellation cannot be guaranteed.\"\n        )\n        await self._mark_flow_run_as_cancelled(\n            flow_run,\n            state_updates={\n                \"message\": (\n                    \"This flow run is missing infrastructure tracking information\"\n                    \" and cancellation cannot be guaranteed.\"\n                )\n            },\n        )\n        return\n\n    try:\n        infrastructure = await self.get_infrastructure(flow_run)\n    except Exception:\n        self.logger.exception(\n            f\"Failed to get infrastructure for flow run '{flow_run.id}'. \"\n            \"Flow run cannot be cancelled.\"\n        )\n        # Note: We leave this flow run in the cancelling set because it cannot be\n        #       cancelled and this will prevent additional attempts.\n        return\n\n    if not hasattr(infrastructure, \"kill\"):\n        self.logger.error(\n            f\"Flow run '{flow_run.id}' infrastructure {infrastructure.type!r} \"\n            \"does not support killing created infrastructure. \"\n            \"Cancellation cannot be guaranteed.\"\n        )\n        return\n\n    self.logger.info(\n        f\"Killing {infrastructure.type} {flow_run.infrastructure_pid} for flow run \"\n        f\"'{flow_run.id}'...\"\n    )\n    try:\n        await infrastructure.kill(flow_run.infrastructure_pid)\n    except InfrastructureNotFound as exc:\n        self.logger.warning(f\"{exc} Marking flow run as cancelled.\")\n        await self._mark_flow_run_as_cancelled(flow_run)\n    except InfrastructureNotAvailable as exc:\n        self.logger.warning(f\"{exc} Flow run cannot be cancelled by this agent.\")\n    except Exception:\n        self.logger.exception(\n            \"Encountered exception while killing infrastructure for flow run \"\n            f\"'{flow_run.id}'. Flow run may not be cancelled.\"\n        )\n        # We will try again on generic exceptions\n        self.cancelling_flow_run_ids.remove(flow_run.id)\n        return\n    else:\n        await self._mark_flow_run_as_cancelled(flow_run)\n        self.logger.info(f\"Cancelled flow run '{flow_run.id}'!\")\n
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.get_and_submit_flow_runs","title":"get_and_submit_flow_runs async","text":"

The principal method on agents. Queries for scheduled flow runs and submits them for execution in parallel.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/agent.py
async def get_and_submit_flow_runs(self) -> List[FlowRun]:\n\"\"\"\n    The principle method on agents. Queries for scheduled flow runs and submits\n    them for execution in parallel.\n    \"\"\"\n    if not self.started:\n        raise RuntimeError(\n            \"Agent is not started. Use `async with PrefectAgent()...`\"\n        )\n\n    self.logger.debug(\"Checking for scheduled flow runs...\")\n\n    before = pendulum.now(\"utc\").add(\n        seconds=self.prefetch_seconds or PREFECT_AGENT_PREFETCH_SECONDS.value()\n    )\n\n    submittable_runs: List[FlowRun] = []\n\n    if self.work_pool_name:\n        responses = await self.client.get_scheduled_flow_runs_for_work_pool(\n            work_pool_name=self.work_pool_name,\n            work_queue_names=[wq.name async for wq in self.get_work_queues()],\n            scheduled_before=before,\n        )\n        submittable_runs.extend([response.flow_run for response in responses])\n\n    else:\n        # load runs from each work queue\n        async for work_queue in self.get_work_queues():\n            # print a nice message if the work queue is paused\n            if work_queue.is_paused:\n                self.logger.info(\n                    f\"Work queue {work_queue.name!r} ({work_queue.id}) is paused.\"\n                )\n\n            else:\n                try:\n                    queue_runs = await self.client.get_runs_in_work_queue(\n                        id=work_queue.id, limit=10, scheduled_before=before\n                    )\n                    submittable_runs.extend(queue_runs)\n                except ObjectNotFound:\n                    self.logger.error(\n                        f\"Work queue {work_queue.name!r} ({work_queue.id}) not\"\n                        \" found.\"\n                    )\n                except Exception as exc:\n                    self.logger.exception(exc)\n\n        submittable_runs.sort(key=lambda run: run.next_scheduled_start_time)\n\n    for flow_run in submittable_runs:\n        # don't resubmit a run\n        if flow_run.id in self.submitting_flow_run_ids:\n            continue\n\n        try:\n            if self.limiter:\n                self.limiter.acquire_on_behalf_of_nowait(flow_run.id)\n        except anyio.WouldBlock:\n            self.logger.info(\n                f\"Flow run limit reached; {self.limiter.borrowed_tokens} flow runs\"\n                \" in progress.\"\n            )\n            break\n        else:\n            self.logger.info(f\"Submitting flow run '{flow_run.id}'\")\n            self.submitting_flow_run_ids.add(flow_run.id)\n            self.task_group.start_soon(\n                self.submit_run,\n                flow_run,\n            )\n\n    return list(\n        filter(lambda run: run.id in self.submitting_flow_run_ids, submittable_runs)\n    )\n
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.get_work_queues","title":"get_work_queues async","text":"

Loads the work queue objects corresponding to the agent's target work queues. If any of them don't exist, they are created.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/agent.py
async def get_work_queues(self) -> AsyncIterator[WorkQueue]:\n\"\"\"\n    Loads the work queue objects corresponding to the agent's target work\n    queues. If any of them don't exist, they are created.\n    \"\"\"\n\n    # if the queue cache has not expired, yield queues from the cache\n    now = pendulum.now(\"UTC\")\n    if (self._work_queue_cache_expiration or now) > now:\n        for queue in self._work_queue_cache:\n            yield queue\n        return\n\n    # otherwise clear the cache, set the expiration for 30 seconds, and\n    # reload the work queues\n    self._work_queue_cache.clear()\n    self._work_queue_cache_expiration = now.add(seconds=30)\n\n    await self.update_matched_agent_work_queues()\n\n    for name in self.work_queues:\n        try:\n            work_queue = await self.client.read_work_queue_by_name(\n                work_pool_name=self.work_pool_name, name=name\n            )\n        except ObjectNotFound:\n            # if the work queue wasn't found, create it\n            if not self.work_queue_prefix:\n                # do not attempt to create work queues if the agent is polling for\n                # queues using a regex\n                try:\n                    work_queue = await self.client.create_work_queue(\n                        work_pool_name=self.work_pool_name, name=name\n                    )\n                    if self.work_pool_name:\n                        self.logger.info(\n                            f\"Created work queue {name!r} in work pool\"\n                            f\" {self.work_pool_name!r}.\"\n                        )\n                    else:\n                        self.logger.info(f\"Created work queue '{name}'.\")\n\n                # if creating it raises an exception, it was probably just\n                # created by some other agent; rather than entering a re-read\n                # loop with new error handling, we log the exception and\n                # continue.\n                except Exception:\n                    self.logger.exception(f\"Failed to create work queue {name!r}.\")\n                    continue\n\n        self._work_queue_cache.append(work_queue)\n        yield work_queue\n
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.submit_run","title":"submit_run async","text":"

Submit a flow run to the infrastructure

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/agent.py
async def submit_run(self, flow_run: FlowRun) -> None:\n\"\"\"\n    Submit a flow run to the infrastructure\n    \"\"\"\n    ready_to_submit = await self._propose_pending_state(flow_run)\n\n    if ready_to_submit:\n        try:\n            infrastructure = await self.get_infrastructure(flow_run)\n        except Exception as exc:\n            self.logger.exception(\n                f\"Failed to get infrastructure for flow run '{flow_run.id}'.\"\n            )\n            await self._propose_failed_state(flow_run, exc)\n            if self.limiter:\n                self.limiter.release_on_behalf_of(flow_run.id)\n        else:\n            # Wait for submission to be completed. Note that the submission function\n            # may continue to run in the background after this exits.\n            readiness_result = await self.task_group.start(\n                self._submit_run_and_capture_errors, flow_run, infrastructure\n            )\n\n            if readiness_result and not isinstance(readiness_result, Exception):\n                try:\n                    await self.client.update_flow_run(\n                        flow_run_id=flow_run.id,\n                        infrastructure_pid=str(readiness_result),\n                    )\n                except Exception:\n                    self.logger.exception(\n                        \"An error occured while setting the `infrastructure_pid`\"\n                        f\" on flow run {flow_run.id!r}. The flow run will not be\"\n                        \" cancellable.\"\n                    )\n\n            self.logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n\n    else:\n        # If the run is not ready to submit, release the concurrency slot\n        if self.limiter:\n            self.limiter.release_on_behalf_of(flow_run.id)\n\n    self.submitting_flow_run_ids.remove(flow_run.id)\n
","tags":["Python API","agents"]},{"location":"api-ref/prefect/artifacts/","title":"prefect.artifacts","text":"","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts","title":"prefect.artifacts","text":"

Interface for creating and reading artifacts.

","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.create_link_artifact","title":"create_link_artifact async","text":"

Create a link artifact.

Parameters:

- link (str): The link to create. Required.
- link_text (Optional[str]): The link text. Default: None.
- key (Optional[str]): A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. Default: None.
- description (Optional[str]): A user-specified description of the artifact. Default: None.

Returns:

- UUID: The link artifact ID.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/artifacts.py
@sync_compatible\nasync def create_link_artifact(\n    link: str,\n    link_text: Optional[str] = None,\n    key: Optional[str] = None,\n    description: Optional[str] = None,\n) -> UUID:\n\"\"\"\n    Create a link artifact.\n\n    Arguments:\n        link: The link to create.\n        link_text: The link text.\n        key: A user-provided string identifier.\n          Required for the artifact to show in the Artifacts page in the UI.\n          The key must only contain lowercase letters, numbers, and dashes.\n        description: A user-specified description of the artifact.\n\n\n    Returns:\n        The table artifact ID.\n    \"\"\"\n    formatted_link = f\"[{link_text}]({link})\" if link_text else f\"[{link}]({link})\"\n    artifact = await _create_artifact(\n        key=key,\n        type=\"markdown\",\n        description=description,\n        data=formatted_link,\n    )\n\n    return artifact.id\n
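A minimal usage sketch based on the signature above; the flow name, URL, and key are placeholders, not part of the source.

from prefect import flow
from prefect.artifacts import create_link_artifact

@flow
def report_links():
    # create_link_artifact is sync-compatible, so it can be called directly
    # from a synchronous flow or task without awaiting
    artifact_id = create_link_artifact(
        link="https://example.com/report.html",  # placeholder URL
        link_text="Nightly report",
        key="nightly-report",  # lowercase letters, numbers, and dashes only
        description="Link to the rendered nightly report",
    )
    return artifact_id

if __name__ == "__main__":
    report_links()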
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.create_markdown_artifact","title":"create_markdown_artifact async","text":"

Create a markdown artifact.

Parameters:

- markdown (str): The markdown to create. Required.
- key (Optional[str]): A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. Default: None.
- description (Optional[str]): A user-specified description of the artifact. Default: None.

Returns:

- UUID: The markdown artifact ID.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/artifacts.py
@sync_compatible\nasync def create_markdown_artifact(\n    markdown: str,\n    key: Optional[str] = None,\n    description: Optional[str] = None,\n) -> UUID:\n\"\"\"\n    Create a markdown artifact.\n\n    Arguments:\n        markdown: The markdown to create.\n        key: A user-provided string identifier.\n          Required for the artifact to show in the Artifacts page in the UI.\n          The key must only contain lowercase letters, numbers, and dashes.\n        description: A user-specified description of the artifact.\n\n    Returns:\n        The table artifact ID.\n    \"\"\"\n    artifact = await _create_artifact(\n        key=key,\n        type=\"markdown\",\n        description=description,\n        data=markdown,\n    )\n\n    return artifact.id\n
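A short sketch of calling create_markdown_artifact from a task; the task, key, and markdown content are illustrative only.

from prefect import task
from prefect.artifacts import create_markdown_artifact

@task
def publish_summary(success_count: int, failure_count: int):
    # The key is required for the artifact to appear on the Artifacts page
    create_markdown_artifact(
        key="run-summary",
        markdown=(
            "# Run summary\n\n"
            f"- Succeeded: {success_count}\n"
            f"- Failed: {failure_count}\n"
        ),
        description="Counts of processed records",
    )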
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.create_table_artifact","title":"create_table_artifact async","text":"

Create a table artifact.

Parameters:

- table (Union[Dict[str, List[Any]], List[Dict[str, Any]], List[List[Any]]]): The table to create. Required.
- key (Optional[str]): A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. Default: None.
- description (Optional[str]): A user-specified description of the artifact. Default: None.

Returns:

- UUID: The table artifact ID.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/artifacts.py
@sync_compatible\nasync def create_table_artifact(\n    table: Union[Dict[str, List[Any]], List[Dict[str, Any]], List[List[Any]]],\n    key: Optional[str] = None,\n    description: Optional[str] = None,\n) -> UUID:\n\"\"\"\n    Create a table artifact.\n\n    Arguments:\n        table: The table to create.\n        key: A user-provided string identifier.\n          Required for the artifact to show in the Artifacts page in the UI.\n          The key must only contain lowercase letters, numbers, and dashes.\n        description: A user-specified description of the artifact.\n\n    Returns:\n        The table artifact ID.\n    \"\"\"\n\n    def _sanitize_nan_values(item):\n\"\"\"\n        Sanitize NaN values in a given item. The item can be a dict, list or float.\n        \"\"\"\n\n        if isinstance(item, list):\n            return [_sanitize_nan_values(sub_item) for sub_item in item]\n\n        elif isinstance(item, dict):\n            return {k: _sanitize_nan_values(v) for k, v in item.items()}\n\n        elif isinstance(item, float) and math.isnan(item):\n            return None\n\n        else:\n            return item\n\n    sanitized_table = _sanitize_nan_values(table)\n\n    if isinstance(table, dict) and all(isinstance(v, list) for v in table.values()):\n        pass\n    elif isinstance(table, list) and all(isinstance(v, (list, dict)) for v in table):\n        pass\n    else:\n        raise TypeError(INVALID_TABLE_TYPE_ERROR)\n\n    formatted_table = json.dumps(sanitized_table)\n\n    artifact = await _create_artifact(\n        key=key,\n        type=\"table\",\n        description=description,\n        data=formatted_table,\n    )\n\n    return artifact.id\n
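A sketch of the list-of-dicts table shape accepted above; the flow name and data are made up for illustration.

from prefect import flow
from prefect.artifacts import create_table_artifact

@flow
def publish_metrics():
    # A list of row dictionaries is one of the accepted table shapes;
    # NaN values are sanitized to null before the table is stored
    rows = [
        {"customer": "a", "orders": 3},
        {"customer": "b", "orders": 7},
    ]
    create_table_artifact(
        key="order-counts",
        table=rows,
        description="Orders per customer",
    )

if __name__ == "__main__":
    publish_metrics()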
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/context/","title":"prefect.context","text":"","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context","title":"prefect.context","text":"

Async and thread-safe models for passing runtime context data.

These contexts should never be directly mutated by the user.

For more user-accessible information about the current run, see prefect.runtime.

","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.ContextModel","title":"ContextModel","text":"

Bases: BaseModel

A base model for context data that forbids mutation and extra data while providing a context manager

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
class ContextModel(BaseModel):\n\"\"\"\n    A base model for context data that forbids mutation and extra data while providing\n    a context manager\n    \"\"\"\n\n    # The context variable for storing data must be defined by the child class\n    __var__: ContextVar\n    _token: Token = PrivateAttr(None)\n\n    class Config:\n        allow_mutation = False\n        arbitrary_types_allowed = True\n        extra = \"forbid\"\n\n    def __enter__(self):\n        if self._token is not None:\n            raise RuntimeError(\n                \"Context already entered. Context enter calls cannot be nested.\"\n            )\n        self._token = self.__var__.set(self)\n        return self\n\n    def __exit__(self, *_):\n        if not self._token:\n            raise RuntimeError(\n                \"Asymmetric use of context. Context exit called without an enter.\"\n            )\n        self.__var__.reset(self._token)\n        self._token = None\n\n    @classmethod\n    def get(cls: Type[T]) -> Optional[T]:\n        return cls.__var__.get(None)\n\n    def copy(self, **kwargs):\n\"\"\"\n        Duplicate the context model, optionally choosing which fields to include, exclude, or change.\n\n        Attributes:\n            include: Fields to include in new model.\n            exclude: Fields to exclude from new model, as with values this takes precedence over include.\n            update: Values to change/add in the new model. Note: the data is not validated before creating\n                the new model - you should trust this data.\n            deep: Set to `True` to make a deep copy of the model.\n\n        Returns:\n            A new model instance.\n        \"\"\"\n        # Remove the token on copy to avoid re-entrance errors\n        new = super().copy(**kwargs)\n        new._token = None\n        return new\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.ContextModel.copy","title":"copy","text":"

Duplicate the context model, optionally choosing which fields to include, exclude, or change.

Attributes:

Name Type Description include

Fields to include in new model.

exclude

Fields to exclude from new model, as with values this takes precedence over include.

update

Values to change/add in the new model. Note: the data is not validated before creating the new model - you should trust this data.

deep

Set to True to make a deep copy of the model.

Returns:

Type Description

A new model instance.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
def copy(self, **kwargs):\n\"\"\"\n    Duplicate the context model, optionally choosing which fields to include, exclude, or change.\n\n    Attributes:\n        include: Fields to include in new model.\n        exclude: Fields to exclude from new model, as with values this takes precedence over include.\n        update: Values to change/add in the new model. Note: the data is not validated before creating\n            the new model - you should trust this data.\n        deep: Set to `True` to make a deep copy of the model.\n\n    Returns:\n        A new model instance.\n    \"\"\"\n    # Remove the token on copy to avoid re-entrance errors\n    new = super().copy(**kwargs)\n    new._token = None\n    return new\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.FlowRunContext","title":"FlowRunContext","text":"

Bases: RunContext

The context for a flow run. Data in this context is only available from within a flow run function.

Attributes:

Name Type Description flow Flow

The flow instance associated with the run

flow_run FlowRun

The API metadata for the flow run

task_runner BaseTaskRunner

The task runner instance being used for the flow run

task_run_futures List[PrefectFuture]

A list of futures for task runs submitted within this flow run

task_run_states List[State]

A list of states for task runs created within this flow run

task_run_results Dict[int, State]

A mapping of result ids to task run states for this flow run

flow_run_states List[State]

A list of states for flow runs created within this flow run

sync_portal Optional[anyio.abc.BlockingPortal]

A blocking portal for sync task/flow runs in an async flow

timeout_scope Optional[anyio.abc.CancelScope]

The cancellation scope for flow level timeouts

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
class FlowRunContext(RunContext):\n\"\"\"\n    The context for a flow run. Data in this context is only available from within a\n    flow run function.\n\n    Attributes:\n        flow: The flow instance associated with the run\n        flow_run: The API metadata for the flow run\n        task_runner: The task runner instance being used for the flow run\n        task_run_futures: A list of futures for task runs submitted within this flow run\n        task_run_states: A list of states for task runs created within this flow run\n        task_run_results: A mapping of result ids to task run states for this flow run\n        flow_run_states: A list of states for flow runs created within this flow run\n        sync_portal: A blocking portal for sync task/flow runs in an async flow\n        timeout_scope: The cancellation scope for flow level timeouts\n    \"\"\"\n\n    flow: \"Flow\"\n    flow_run: FlowRun\n    task_runner: BaseTaskRunner\n    log_prints: bool = False\n    parameters: Dict[str, Any]\n\n    # Result handling\n    result_factory: ResultFactory\n\n    # Counter for task calls allowing unique\n    task_run_dynamic_keys: Dict[str, int] = Field(default_factory=dict)\n\n    # Counter for flow pauses\n    observed_flow_pauses: Dict[str, int] = Field(default_factory=dict)\n\n    # Tracking for objects created by this flow run\n    task_run_futures: List[PrefectFuture] = Field(default_factory=list)\n    task_run_states: List[State] = Field(default_factory=list)\n    task_run_results: Dict[int, State] = Field(default_factory=dict)\n    flow_run_states: List[State] = Field(default_factory=list)\n\n    # The synchronous portal is only created for async flows for creating engine calls\n    # from synchronous task and subflow calls\n    sync_portal: Optional[anyio.abc.BlockingPortal] = None\n    timeout_scope: Optional[anyio.abc.CancelScope] = None\n\n    # Task group that can be used for background tasks during the flow run\n    background_tasks: anyio.abc.TaskGroup\n\n    # Events worker to emit events to Prefect Cloud\n    events: Optional[EventsWorker] = None\n\n    __var__ = ContextVar(\"flow_run\")\n
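A read-only sketch of inspecting the flow run context from inside a flow; the flow name is a placeholder and the attributes follow the class definition above.

from prefect import flow
from prefect.context import FlowRunContext

@flow
def introspect():
    # Only populated while a flow run function is executing;
    # FlowRunContext.get() returns None outside of a flow run
    ctx = FlowRunContext.get()
    print(ctx.flow.name)
    print(ctx.flow_run.id)
    print(type(ctx.task_runner).__name__)

if __name__ == "__main__":
    introspect()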
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.PrefectObjectRegistry","title":"PrefectObjectRegistry","text":"

Bases: ContextModel

A context that acts as a registry for all Prefect objects that are registered during load and execution.

Attributes:

Name Type Description start_time DateTimeTZ

The time the object registry was created.

block_code_execution bool

If set, flow calls will be ignored.

capture_failures bool

If set, failures during __init__ will be silenced and tracked.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
class PrefectObjectRegistry(ContextModel):\n\"\"\"\n    A context that acts as a registry for all Prefect objects that are\n    registered during load and execution.\n\n    Attributes:\n        start_time: The time the object registry was created.\n        block_code_execution: If set, flow calls will be ignored.\n        capture_failures: If set, failures during __init__ will be silenced and tracked.\n    \"\"\"\n\n    start_time: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n\n    _instance_registry: Dict[Type[T], List[T]] = PrivateAttr(\n        default_factory=lambda: defaultdict(list)\n    )\n\n    # Failures will be a tuple of (exception, instance, args, kwargs)\n    _instance_init_failures: Dict[Type[T], List[Tuple[Exception, T, Tuple, Dict]]] = (\n        PrivateAttr(default_factory=lambda: defaultdict(list))\n    )\n\n    block_code_execution: bool = False\n    capture_failures: bool = False\n\n    __var__ = ContextVar(\"object_registry\")\n\n    def get_instances(self, type_: Type[T]) -> List[T]:\n        instances = []\n        for registered_type, type_instances in self._instance_registry.items():\n            if type_ in registered_type.mro():\n                instances.extend(type_instances)\n        return instances\n\n    def get_instance_failures(\n        self, type_: Type[T]\n    ) -> List[Tuple[Exception, T, Tuple, Dict]]:\n        failures = []\n        for type__ in type_.mro():\n            failures.extend(self._instance_init_failures[type__])\n        return failures\n\n    def register_instance(self, object):\n        # TODO: Consider using a 'Set' to avoid duplicate entries\n        self._instance_registry[type(object)].append(object)\n\n    def register_init_failure(\n        self, exc: Exception, object: Any, init_args: Tuple, init_kwargs: Dict\n    ):\n        self._instance_init_failures[type(object)].append(\n            (exc, object, init_args, init_kwargs)\n        )\n\n    @classmethod\n    def register_instances(cls, type_: Type):\n\"\"\"\n        Decorator for a class that adds registration to the `PrefectObjectRegistry`\n        on initialization of instances.\n        \"\"\"\n        __init__ = type_.__init__\n\n        def __register_init__(__self__, *args, **kwargs):\n            registry = cls.get()\n            try:\n                __init__(__self__, *args, **kwargs)\n            except Exception as exc:\n                if not registry or not registry.capture_failures:\n                    raise\n                else:\n                    registry.register_init_failure(exc, __self__, args, kwargs)\n            else:\n                if registry:\n                    registry.register_instance(__self__)\n\n        type_.__init__ = __register_init__\n        return type_\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.PrefectObjectRegistry.register_instances","title":"register_instances classmethod","text":"

Decorator for a class that adds registration to the PrefectObjectRegistry on initialization of instances.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
@classmethod\ndef register_instances(cls, type_: Type):\n\"\"\"\n    Decorator for a class that adds registration to the `PrefectObjectRegistry`\n    on initialization of instances.\n    \"\"\"\n    __init__ = type_.__init__\n\n    def __register_init__(__self__, *args, **kwargs):\n        registry = cls.get()\n        try:\n            __init__(__self__, *args, **kwargs)\n        except Exception as exc:\n            if not registry or not registry.capture_failures:\n                raise\n            else:\n                registry.register_init_failure(exc, __self__, args, kwargs)\n        else:\n            if registry:\n                registry.register_instance(__self__)\n\n    type_.__init__ = __register_init__\n    return type_\n
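An illustrative sketch of the registration mechanism; Widget is a hypothetical class, not part of Prefect.

from prefect.context import PrefectObjectRegistry

@PrefectObjectRegistry.register_instances
class Widget:  # hypothetical class, used only to show registration
    def __init__(self, name: str):
        self.name = name

# While a registry context is active, every Widget(...) call is recorded
with PrefectObjectRegistry() as registry:
    Widget("a")
    Widget("b")
    print(len(registry.get_instances(Widget)))  # 2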
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.RunContext","title":"RunContext","text":"

Bases: ContextModel

The base context for a flow or task run. Data in this context will always be available when get_run_context is called.

Attributes:

Name Type Description start_time DateTimeTZ

The time the run context was entered

client PrefectClient

The Prefect client instance being used for API communication

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
class RunContext(ContextModel):\n\"\"\"\n    The base context for a flow or task run. Data in this context will always be\n    available when `get_run_context` is called.\n\n    Attributes:\n        start_time: The time the run context was entered\n        client: The Prefect client instance being used for API communication\n    \"\"\"\n\n    start_time: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n    client: PrefectClient\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.SettingsContext","title":"SettingsContext","text":"

Bases: ContextModel

The context for Prefect settings.

This allows for safe concurrent access and modification of settings.

Attributes:

Name Type Description profile Profile

The profile that is in use.

settings Settings

The complete settings model.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
class SettingsContext(ContextModel):\n\"\"\"\n    The context for a Prefect settings.\n\n    This allows for safe concurrent access and modification of settings.\n\n    Attributes:\n        profile: The profile that is in use.\n        settings: The complete settings model.\n    \"\"\"\n\n    profile: Profile\n    settings: Settings\n\n    __var__ = ContextVar(\"settings\")\n\n    def __hash__(self) -> int:\n        return hash(self.settings)\n\n    def __enter__(self):\n\"\"\"\n        Upon entrance, we ensure the home directory for the profile exists.\n        \"\"\"\n        return_value = super().__enter__()\n\n        try:\n            prefect_home = Path(self.settings.value_of(PREFECT_HOME))\n            prefect_home.mkdir(mode=0o0700, exist_ok=True)\n        except OSError:\n            warnings.warn(\n                (\n                    \"Failed to create the Prefect home directory at \"\n                    f\"{self.settings.value_of(PREFECT_HOME)}\"\n                ),\n                stacklevel=2,\n            )\n\n        return return_value\n\n    @classmethod\n    def get(cls) -> \"SettingsContext\":\n        # Return the global context instead of `None` if no context exists\n        return super().get() or GLOBAL_SETTINGS_CONTEXT\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.TagsContext","title":"TagsContext","text":"

Bases: ContextModel

The context for prefect.tags management.

Attributes:

Name Type Description current_tags Set[str]

A set of current tags in the context

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
class TagsContext(ContextModel):\n\"\"\"\n    The context for `prefect.tags` management.\n\n    Attributes:\n        current_tags: A set of current tags in the context\n    \"\"\"\n\n    current_tags: Set[str] = Field(default_factory=set)\n\n    @classmethod\n    def get(cls) -> \"TagsContext\":\n        # Return an empty `TagsContext` instead of `None` if no context exists\n        return cls.__var__.get(TagsContext())\n\n    __var__ = ContextVar(\"tags\")\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.TaskRunContext","title":"TaskRunContext","text":"

Bases: RunContext

The context for a task run. Data in this context is only available from within a task run function.

Attributes:

Name Type Description task Task

The task instance associated with the task run

task_run TaskRun

The API metadata for this task run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
class TaskRunContext(RunContext):\n\"\"\"\n    The context for a task run. Data in this context is only available from within a\n    task run function.\n\n    Attributes:\n        task: The task instance associated with the task run\n        task_run: The API metadata for this task run\n    \"\"\"\n\n    task: \"Task\"\n    task_run: TaskRun\n    log_prints: bool = False\n    parameters: Dict[str, Any]\n\n    # Result handling\n    result_factory: ResultFactory\n\n    __var__ = ContextVar(\"task_run\")\n
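A small sketch of reading the task run context from inside a task; the task and flow names are placeholders.

from prefect import flow, task
from prefect.context import TaskRunContext

@task
def inspect_task_run():
    # Only populated while a task run function is executing
    ctx = TaskRunContext.get()
    print(ctx.task.name)
    print(ctx.task_run.id)

@flow
def my_flow():
    inspect_task_run()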
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.get_run_context","title":"get_run_context","text":"

Get the current run context from within a task or flow function.

Returns:

Type Description Union[FlowRunContext, TaskRunContext]

A FlowRunContext or TaskRunContext depending on the function type.

Raises: RuntimeError if called outside of a flow or task run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
def get_run_context() -> Union[FlowRunContext, TaskRunContext]:\n\"\"\"\n    Get the current run context from within a task or flow function.\n\n    Returns:\n        A `FlowRunContext` or `TaskRunContext` depending on the function type.\n\n    Raises\n        RuntimeError: If called outside of a flow or task run.\n    \"\"\"\n    task_run_ctx = TaskRunContext.get()\n    if task_run_ctx:\n        return task_run_ctx\n\n    flow_run_ctx = FlowRunContext.get()\n    if flow_run_ctx:\n        return flow_run_ctx\n\n    raise MissingContextError(\n        \"No run context available. You are not in a flow or task run context.\"\n    )\n
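A sketch showing that the same call returns different context types depending on where it is used; the flow and task names are placeholders.

from prefect import flow, task
from prefect.context import FlowRunContext, TaskRunContext, get_run_context

@task
def in_a_task():
    # Inside a task run function, the task run context is returned
    assert isinstance(get_run_context(), TaskRunContext)

@flow
def in_a_flow():
    # Inside a flow run function, the flow run context is returned
    assert isinstance(get_run_context(), FlowRunContext)
    in_a_task()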
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.get_settings_context","title":"get_settings_context","text":"

Get the current settings context which contains profile information and the settings that are being used.

Generally, the settings that are being used are a combination of values from the profile and environment. See prefect.context.use_profile for more details.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
def get_settings_context() -> SettingsContext:\n\"\"\"\n    Get the current settings context which contains profile information and the\n    settings that are being used.\n\n    Generally, the settings that are being used are a combination of values from the\n    profile and environment. See `prefect.context.use_profile` for more details.\n    \"\"\"\n    settings_ctx = SettingsContext.get()\n\n    if not settings_ctx:\n        raise MissingContextError(\"No settings context found.\")\n\n    return settings_ctx\n
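A sketch of reading the active profile and a single setting; PREFECT_API_URL is a standard Prefect setting used here only as an example.

from prefect.context import get_settings_context
from prefect.settings import PREFECT_API_URL

settings_ctx = get_settings_context()
print(settings_ctx.profile.name)
# Individual values can be read from the settings model
print(settings_ctx.settings.value_of(PREFECT_API_URL))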
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.registry_from_script","title":"registry_from_script","text":"

Return a fresh registry with instances populated from execution of a script.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
def registry_from_script(\n    path: str,\n    block_code_execution: bool = True,\n    capture_failures: bool = True,\n) -> PrefectObjectRegistry:\n\"\"\"\n    Return a fresh registry with instances populated from execution of a script.\n    \"\"\"\n    with PrefectObjectRegistry(\n        block_code_execution=block_code_execution,\n        capture_failures=capture_failures,\n    ) as registry:\n        load_script_as_module(path)\n\n    return registry\n
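A sketch of loading a script without running its flows and then inspecting what it defines; "my_flows.py" is a placeholder path.

from prefect.context import registry_from_script
from prefect.flows import Flow

registry = registry_from_script("my_flows.py")  # placeholder path

# Flow calls in the script were blocked, but the Flow objects it defined
# were captured by the registry
for flow in registry.get_instances(Flow):
    print(flow.name)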
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.root_settings_context","title":"root_settings_context","text":"

Return the settings context that will exist as the root context for the module.

The profile to use is determined with the following precedence:

- Command line via 'prefect --profile <name>'
- Environment variable via 'PREFECT_PROFILE'
- Profiles file via the 'active' key

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py

def root_settings_context():\n\"\"\"\n    Return the settings context that will exist as the root context for the module.\n\n    The profile to use is determined with the following precedence\n    - Command line via 'prefect --profile <name>'\n    - Environment variable via 'PREFECT_PROFILE'\n    - Profiles file via the 'active' key\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    active_name = profiles.active_name\n    profile_source = \"in the profiles file\"\n\n    if \"PREFECT_PROFILE\" in os.environ:\n        active_name = os.environ[\"PREFECT_PROFILE\"]\n        profile_source = \"by environment variable\"\n\n    if (\n        sys.argv[0].endswith(\"/prefect\")\n        and len(sys.argv) >= 3\n        and sys.argv[1] == \"--profile\"\n    ):\n        active_name = sys.argv[2]\n        profile_source = \"by command line argument\"\n\n    if active_name not in profiles.names:\n        print(\n            (\n                f\"WARNING: Active profile {active_name!r} set {profile_source} not \"\n                \"found. The default profile will be used instead. \"\n            ),\n            file=sys.stderr,\n        )\n        active_name = \"default\"\n\n    with use_profile(\n        profiles[active_name],\n        # Override environment variables if the profile was set by the CLI\n        override_environment_variables=profile_source == \"by command line argument\",\n    ) as settings_context:\n        return settings_context\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.tags","title":"tags","text":"

Context manager to add tags to flow and task run calls.

Tags are always combined with any existing tags.

Yields:

Type Description Set[str]

The current set of tags

Examples:

>>> from prefect import tags, task, flow\n>>> @task\n>>> def my_task():\n>>>     pass\n

Run a task with tags

>>> @flow\n>>> def my_flow():\n>>>     with tags(\"a\", \"b\"):\n>>>         my_task()  # has tags: a, b\n

Run a flow with tags

>>> @flow\n>>> def my_flow():\n>>>     pass\n>>> with tags(\"a\", \"b\"):\n>>>     my_flow()  # has tags: a, b\n

Run a task with nested tag contexts

>>> @flow\n>>> def my_flow():\n>>>     with tags(\"a\", \"b\"):\n>>>         with tags(\"c\", \"d\"):\n>>>             my_task()  # has tags: a, b, c, d\n>>>         my_task()  # has tags: a, b\n

Inspect the current tags

>>> @flow\n>>> def my_flow():\n>>>     with tags(\"c\", \"d\"):\n>>>         with tags(\"e\", \"f\") as current_tags:\n>>>              print(current_tags)\n>>> with tags(\"a\", \"b\"):\n>>>     my_flow()\n{\"a\", \"b\", \"c\", \"d\", \"e\", \"f\"}\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
@contextmanager\ndef tags(*new_tags: str) -> Set[str]:\n\"\"\"\n    Context manager to add tags to flow and task run calls.\n\n    Tags are always combined with any existing tags.\n\n    Yields:\n        The current set of tags\n\n    Examples:\n        >>> from prefect import tags, task, flow\n        >>> @task\n        >>> def my_task():\n        >>>     pass\n\n        Run a task with tags\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     with tags(\"a\", \"b\"):\n        >>>         my_task()  # has tags: a, b\n\n        Run a flow with tags\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     pass\n        >>> with tags(\"a\", \"b\"):\n        >>>     my_flow()  # has tags: a, b\n\n        Run a task with nested tag contexts\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     with tags(\"a\", \"b\"):\n        >>>         with tags(\"c\", \"d\"):\n        >>>             my_task()  # has tags: a, b, c, d\n        >>>         my_task()  # has tags: a, b\n\n        Inspect the current tags\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     with tags(\"c\", \"d\"):\n        >>>         with tags(\"e\", \"f\") as current_tags:\n        >>>              print(current_tags)\n        >>> with tags(\"a\", \"b\"):\n        >>>     my_flow()\n        {\"a\", \"b\", \"c\", \"d\", \"e\", \"f\"}\n    \"\"\"\n    current_tags = TagsContext.get().current_tags\n    new_tags = current_tags.union(new_tags)\n    with TagsContext(current_tags=new_tags):\n        yield new_tags\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.use_profile","title":"use_profile","text":"

Switch to a profile for the duration of this context.

Profile contexts are confined to an async context in a single thread.

Parameters:

- profile (Union[Profile, str]): The name of the profile to load or an instance of a Profile. Required.
- override_environment_variables (bool): If set, variables in the profile will take precedence over current environment variables. By default, environment variables override profile settings. Default: False.
- include_current_context (bool): If set, the new settings will be constructed with the current settings context as a base. If not set, the settings will be loaded from the environment and defaults. Default: True.

Yields:

- The created SettingsContext object.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
@contextmanager\ndef use_profile(\n    profile: Union[Profile, str],\n    override_environment_variables: bool = False,\n    include_current_context: bool = True,\n):\n\"\"\"\n    Switch to a profile for the duration of this context.\n\n    Profile contexts are confined to an async context in a single thread.\n\n    Args:\n        profile: The name of the profile to load or an instance of a Profile.\n        override_environment_variable: If set, variables in the profile will take\n            precedence over current environment variables. By default, environment\n            variables will override profile settings.\n        include_current_context: If set, the new settings will be constructed\n            with the current settings context as a base. If not set, the use_base settings\n            will be loaded from the environment and defaults.\n\n    Yields:\n        The created `SettingsContext` object\n    \"\"\"\n    if isinstance(profile, str):\n        profiles = prefect.settings.load_profiles()\n        profile = profiles[profile]\n\n    if not isinstance(profile, Profile):\n        raise TypeError(\n            f\"Unexpected type {type(profile).__name__!r} for `profile`. \"\n            \"Expected 'str' or 'Profile'.\"\n        )\n\n    # Create a copy of the profiles settings as we will mutate it\n    profile_settings = profile.settings.copy()\n\n    existing_context = SettingsContext.get()\n    if existing_context and include_current_context:\n        settings = existing_context.settings\n    else:\n        settings = prefect.settings.get_settings_from_env()\n\n    if not override_environment_variables:\n        for key in os.environ:\n            if key in prefect.settings.SETTING_VARIABLES:\n                profile_settings.pop(prefect.settings.SETTING_VARIABLES[key], None)\n\n    new_settings = settings.copy_with_update(updates=profile_settings)\n\n    with SettingsContext(profile=profile, settings=new_settings) as ctx:\n        yield ctx\n
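A sketch of temporarily switching profiles; "staging" is a placeholder profile name that must already exist in your profiles file, and PREFECT_API_URL is used only as an example setting to read.

from prefect.context import use_profile
from prefect.settings import PREFECT_API_URL

with use_profile("staging") as ctx:  # placeholder profile name
    print(ctx.profile.name)
    print(ctx.settings.value_of(PREFECT_API_URL))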
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/engine/","title":"prefect.engine","text":"","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine","title":"prefect.engine","text":"

Client-side execution and orchestration of flows and tasks.

","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine--engine-process-overview","title":"Engine process overview","text":"","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine--flows","title":"Flows","text":"","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine--tasks","title":"Tasks","text":"","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.begin_flow_run","title":"begin_flow_run async","text":"

Begins execution of a flow run; blocks until completion of the flow run

Note that the flow_run contains a parameters attribute, which holds the serialized parameters sent to the backend, while the parameters argument here should be the deserialized and validated dictionary of Python objects.

Returns:

Type Description State

The final state of the run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
async def begin_flow_run(\n    flow: Flow,\n    flow_run: FlowRun,\n    parameters: Dict[str, Any],\n    client: PrefectClient,\n    user_thread: threading.Thread,\n) -> State:\n\"\"\"\n    Begins execution of a flow run; blocks until completion of the flow run\n\n    - Starts a task runner\n    - Determines the result storage block to use\n    - Orchestrates the flow run (runs the user-function and generates tasks)\n    - Waits for tasks to complete / shutsdown the task runner\n    - Sets a terminal state for the flow run\n\n    Note that the `flow_run` contains a `parameters` attribute which is the serialized\n    parameters sent to the backend while the `parameters` argument here should be the\n    deserialized and validated dictionary of python objects.\n\n    Returns:\n        The final state of the run\n    \"\"\"\n    logger = flow_run_logger(flow_run, flow)\n\n    log_prints = should_log_prints(flow)\n    flow_run_context = PartialModel(FlowRunContext, log_prints=log_prints)\n\n    async with AsyncExitStack() as stack:\n        await stack.enter_async_context(\n            report_flow_run_crashes(flow_run=flow_run, client=client, flow=flow)\n        )\n\n        # Create a task group for background tasks\n        flow_run_context.background_tasks = await stack.enter_async_context(\n            anyio.create_task_group()\n        )\n\n        # If the flow is async, we need to provide a portal so sync tasks can run\n        flow_run_context.sync_portal = (\n            stack.enter_context(start_blocking_portal()) if flow.isasync else None\n        )\n\n        task_runner = flow.task_runner.duplicate()\n        if task_runner is NotImplemented:\n            # Backwards compatibility; will not support concurrent flow runs\n            task_runner = flow.task_runner\n            logger.warning(\n                f\"Task runner {type(task_runner).__name__!r} does not implement the\"\n                \" `duplicate` method and will fail if used for concurrent execution of\"\n                \" the same flow.\"\n            )\n\n        logger.debug(\n            f\"Starting {type(flow.task_runner).__name__!r}; submitted tasks \"\n            f\"will be run {CONCURRENCY_MESSAGES[flow.task_runner.concurrency_type]}...\"\n        )\n\n        flow_run_context.task_runner = await stack.enter_async_context(\n            task_runner.start()\n        )\n\n        flow_run_context.result_factory = await ResultFactory.from_flow(\n            flow, client=client\n        )\n\n        if log_prints:\n            stack.enter_context(patch_print())\n\n        terminal_or_paused_state = await orchestrate_flow_run(\n            flow,\n            flow_run=flow_run,\n            parameters=parameters,\n            wait_for=None,\n            client=client,\n            partial_flow_run_context=flow_run_context,\n            # Orchestration needs to be interruptible if it has a timeout\n            interruptible=flow.timeout_seconds is not None,\n            user_thread=user_thread,\n        )\n\n    if terminal_or_paused_state.is_paused():\n        timeout = terminal_or_paused_state.state_details.pause_timeout\n        msg = \"Currently paused and suspending execution.\"\n        if timeout:\n            msg += f\" Resume before {timeout.to_rfc3339_string()} to finish execution.\"\n        logger.log(level=logging.INFO, msg=msg)\n        await APILogHandler.aflush()\n\n        return terminal_or_paused_state\n    else:\n        terminal_state = terminal_or_paused_state\n\n    # If debugging, use the more 
complete `repr` than the usual `str` description\n    display_state = repr(terminal_state) if PREFECT_DEBUG_MODE else str(terminal_state)\n\n    logger.log(\n        level=logging.INFO if terminal_state.is_completed() else logging.ERROR,\n        msg=f\"Finished in state {display_state}\",\n    )\n\n    # When a \"root\" flow run finishes, flush logs so we do not have to rely on handling\n    # during interpreter shutdown\n    await APILogHandler.aflush()\n\n    return terminal_state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.begin_task_map","title":"begin_task_map async","text":"

Async entrypoint for task mapping

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
async def begin_task_map(\n    task: Task,\n    flow_run_context: FlowRunContext,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n    task_runner: Optional[BaseTaskRunner],\n) -> List[Union[PrefectFuture, Awaitable[PrefectFuture]]]:\n\"\"\"Async entrypoint for task mapping\"\"\"\n    # We need to resolve some futures to map over their data, collect the upstream\n    # links beforehand to retain relationship tracking.\n    task_inputs = {\n        k: await collect_task_run_inputs(v, max_depth=0) for k, v in parameters.items()\n    }\n\n    # Resolve the top-level parameters in order to get mappable data of a known length.\n    # Nested parameters will be resolved in each mapped child where their relationships\n    # will also be tracked.\n    parameters = await resolve_inputs(parameters, max_depth=1)\n\n    # Ensure that any parameters in kwargs are expanded before this check\n    parameters = explode_variadic_parameter(task.fn, parameters)\n\n    iterable_parameters = {}\n    static_parameters = {}\n    annotated_parameters = {}\n    for key, val in parameters.items():\n        if isinstance(val, (allow_failure, quote)):\n            # Unwrap annotated parameters to determine if they are iterable\n            annotated_parameters[key] = val\n            val = val.unwrap()\n\n        if isinstance(val, unmapped):\n            static_parameters[key] = val.value\n        elif isiterable(val):\n            iterable_parameters[key] = list(val)\n        else:\n            static_parameters[key] = val\n\n    if not len(iterable_parameters):\n        raise MappingMissingIterable(\n            \"No iterable parameters were received. Parameters for map must \"\n            f\"include at least one iterable. Parameters: {parameters}\"\n        )\n\n    iterable_parameter_lengths = {\n        key: len(val) for key, val in iterable_parameters.items()\n    }\n    lengths = set(iterable_parameter_lengths.values())\n    if len(lengths) > 1:\n        raise MappingLengthMismatch(\n            \"Received iterable parameters with different lengths. Parameters for map\"\n            f\" must all be the same length. 
Got lengths: {iterable_parameter_lengths}\"\n        )\n\n    map_length = list(lengths)[0]\n\n    task_runs = []\n    for i in range(map_length):\n        call_parameters = {key: value[i] for key, value in iterable_parameters.items()}\n        call_parameters.update({key: value for key, value in static_parameters.items()})\n\n        # Add default values for parameters; these are skipped earlier since they should\n        # not be mapped over\n        for key, value in get_parameter_defaults(task.fn).items():\n            call_parameters.setdefault(key, value)\n\n        # Re-apply annotations to each key again\n        for key, annotation in annotated_parameters.items():\n            call_parameters[key] = annotation.rewrap(call_parameters[key])\n\n        # Collapse any previously exploded kwargs\n        call_parameters = collapse_variadic_parameters(task.fn, call_parameters)\n\n        task_runs.append(\n            partial(\n                get_task_call_return_value,\n                task=task,\n                flow_run_context=flow_run_context,\n                parameters=call_parameters,\n                wait_for=wait_for,\n                return_type=return_type,\n                task_runner=task_runner,\n                extra_task_inputs=task_inputs,\n            )\n        )\n\n    # Maintain the order of the task runs when using the sequential task runner\n    runner = task_runner if task_runner else flow_run_context.task_runner\n    if runner.concurrency_type == TaskConcurrencyType.SEQUENTIAL:\n        return [await task_run() for task_run in task_runs]\n\n    return await gather(*task_runs)\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.begin_task_run","title":"begin_task_run async","text":"

Entrypoint for task run execution.

This function is intended for submission to the task runner.

This method may be called from a worker so we ensure the settings context has been entered. For example, with a runner that is executing tasks in the same event loop, we will likely not enter the context again because the current context already matches:

main thread:
--> Flow called with settings A
--> begin_task_run executes same event loop
--> Profile A matches and is not entered again

However, with execution on a remote environment, we are going to need to ensure the settings for the task run are respected by entering the context:

main thread:
--> Flow called with settings A
--> begin_task_run is scheduled on a remote worker, settings A is serialized
remote worker:
--> Remote worker imports Prefect (may not occur)
--> Global settings is loaded with default settings
--> begin_task_run executes on a different event loop than the flow
--> Current settings is not set or does not match, settings A is entered

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
async def begin_task_run(\n    task: Task,\n    task_run: TaskRun,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    result_factory: ResultFactory,\n    log_prints: bool,\n    settings: prefect.context.SettingsContext,\n):\n\"\"\"\n    Entrypoint for task run execution.\n\n    This function is intended for submission to the task runner.\n\n    This method may be called from a worker so we ensure the settings context has been\n    entered. For example, with a runner that is executing tasks in the same event loop,\n    we will likely not enter the context again because the current context already\n    matches:\n\n    main thread:\n    --> Flow called with settings A\n    --> `begin_task_run` executes same event loop\n    --> Profile A matches and is not entered again\n\n    However, with execution on a remote environment, we are going to need to ensure the\n    settings for the task run are respected by entering the context:\n\n    main thread:\n    --> Flow called with settings A\n    --> `begin_task_run` is scheduled on a remote worker, settings A is serialized\n    remote worker:\n    --> Remote worker imports Prefect (may not occur)\n    --> Global settings is loaded with default settings\n    --> `begin_task_run` executes on a different event loop than the flow\n    --> Current settings is not set or does not match, settings A is entered\n    \"\"\"\n    maybe_flow_run_context = prefect.context.FlowRunContext.get()\n\n    async with AsyncExitStack() as stack:\n        # The settings context may be null on a remote worker so we use the safe `.get`\n        # method and compare it to the settings required for this task run\n        if prefect.context.SettingsContext.get() != settings:\n            stack.enter_context(settings)\n            setup_logging()\n\n        if maybe_flow_run_context:\n            # Accessible if on a worker that is running in the same thread as the flow\n            client = maybe_flow_run_context.client\n            # Only run the task in an interruptible thread if it in the same thread as\n            # the flow _and_ the flow run has a timeout attached. 
If the task is on a\n            # worker, the flow run timeout will not be raised in the worker process.\n            interruptible = maybe_flow_run_context.timeout_scope is not None\n        else:\n            # Otherwise, retrieve a new client\n            client = await stack.enter_async_context(get_client())\n            interruptible = False\n            await stack.enter_async_context(anyio.create_task_group())\n\n        await stack.enter_async_context(report_task_run_crashes(task_run, client))\n\n        # TODO: Use the background tasks group to manage logging for this task\n\n        if log_prints:\n            stack.enter_context(patch_print())\n\n        await check_api_reachable(\n            client, f\"Cannot orchestrate task run '{task_run.id}'\"\n        )\n        try:\n            state = await orchestrate_task_run(\n                task=task,\n                task_run=task_run,\n                parameters=parameters,\n                wait_for=wait_for,\n                result_factory=result_factory,\n                log_prints=log_prints,\n                interruptible=interruptible,\n                client=client,\n            )\n\n            if not maybe_flow_run_context:\n                # When a a task run finishes on a remote worker flush logs to prevent\n                # loss if the process exits\n                await APILogHandler.aflush()\n\n        except Abort as abort:\n            # Task run probably already completed, fetch its state\n            task_run = await client.read_task_run(task_run.id)\n\n            if task_run.state.is_final():\n                task_run_logger(task_run).info(\n                    f\"Task run '{task_run.id}' already finished.\"\n                )\n            else:\n                # TODO: This is a concerning case; we should determine when this occurs\n                #       1. This can occur when the flow run is not in a running state\n                task_run_logger(task_run).warning(\n                    f\"Task run '{task_run.id}' received abort during orchestration: \"\n                    f\"{abort} Task run is in {task_run.state.type.value} state.\"\n                )\n            state = task_run.state\n\n        except Pause:\n            task_run_logger(task_run).info(\n                \"Task run encountered a pause signal during orchestration.\"\n            )\n            state = Paused()\n\n        return state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.collect_task_run_inputs","title":"collect_task_run_inputs async","text":"

This function recurses through an expression to generate a set of any discernable task run inputs it finds in the data structure. It produces a set of all inputs found.

Examples:

>>> task_inputs = {\n>>>    k: await collect_task_run_inputs(v) for k, v in parameters.items()\n>>> }\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
async def collect_task_run_inputs(expr: Any, max_depth: int = -1) -> Set[TaskRunInput]:\n\"\"\"\n    This function recurses through an expression to generate a set of any discernable\n    task run inputs it finds in the data structure. It produces a set of all inputs\n    found.\n\n    Examples:\n        >>> task_inputs = {\n        >>>    k: await collect_task_run_inputs(v) for k, v in parameters.items()\n        >>> }\n    \"\"\"\n    # TODO: This function needs to be updated to detect parameters and constants\n\n    inputs = set()\n    futures = set()\n\n    def add_futures_and_states_to_inputs(obj):\n        if isinstance(obj, PrefectFuture):\n            # We need to wait for futures to be submitted before we can get the task\n            # run id but we want to do so asynchronously\n            futures.add(obj)\n        elif is_state(obj):\n            if obj.state_details.task_run_id:\n                inputs.add(TaskRunResult(id=obj.state_details.task_run_id))\n        else:\n            state = get_state_for_result(obj)\n            if state and state.state_details.task_run_id:\n                inputs.add(TaskRunResult(id=state.state_details.task_run_id))\n\n    visit_collection(\n        expr,\n        visit_fn=add_futures_and_states_to_inputs,\n        return_data=False,\n        max_depth=max_depth,\n    )\n\n    await asyncio.gather(*[future._wait_for_submission() for future in futures])\n    for future in futures:\n        inputs.add(TaskRunResult(id=future.task_run.id))\n\n    return inputs\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.create_and_begin_subflow_run","title":"create_and_begin_subflow_run async","text":"

Async entrypoint for flows calls within a flow run

Subflows differ from parent flows in that they:

- Resolve futures in passed parameters into values
- Create a dummy task for representation in the parent flow
- Retrieve default result storage from the parent flow rather than the server

Returns:

Type Description Any

The final state of the run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
@inject_client\nasync def create_and_begin_subflow_run(\n    flow: Flow,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n    client: PrefectClient,\n    user_thread: threading.Thread,\n) -> Any:\n\"\"\"\n    Async entrypoint for flows calls within a flow run\n\n    Subflows differ from parent flows in that they\n    - Resolve futures in passed parameters into values\n    - Create a dummy task for representation in the parent flow\n    - Retrieve default result storage from the parent flow rather than the server\n\n    Returns:\n        The final state of the run\n    \"\"\"\n    parent_flow_run_context = FlowRunContext.get()\n    parent_logger = get_run_logger(parent_flow_run_context)\n    log_prints = should_log_prints(flow)\n    terminal_state = None\n\n    parent_logger.debug(f\"Resolving inputs to {flow.name!r}\")\n    task_inputs = {k: await collect_task_run_inputs(v) for k, v in parameters.items()}\n\n    if wait_for:\n        task_inputs[\"wait_for\"] = await collect_task_run_inputs(wait_for)\n\n    rerunning = parent_flow_run_context.flow_run.run_count > 1\n\n    # Generate a task in the parent flow run to represent the result of the subflow run\n    dummy_task = Task(name=flow.name, fn=flow.fn, version=flow.version)\n    parent_task_run = await client.create_task_run(\n        task=dummy_task,\n        flow_run_id=parent_flow_run_context.flow_run.id,\n        dynamic_key=_dynamic_key_for_task_run(parent_flow_run_context, dummy_task),\n        task_inputs=task_inputs,\n        state=Pending(),\n    )\n\n    # Resolve any task futures in the input\n    parameters = await resolve_inputs(parameters)\n\n    if parent_task_run.state.is_final() and not (\n        rerunning and not parent_task_run.state.is_completed()\n    ):\n        # Retrieve the most recent flow run from the database\n        flow_runs = await client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                parent_task_run_id={\"any_\": [parent_task_run.id]}\n            ),\n            sort=FlowRunSort.EXPECTED_START_TIME_ASC,\n        )\n        flow_run = flow_runs[-1]\n\n        # Set up variables required downstream\n        terminal_state = flow_run.state\n        logger = flow_run_logger(flow_run, flow)\n\n    else:\n        flow_run = await client.create_flow_run(\n            flow,\n            parameters=flow.serialize_parameters(parameters),\n            parent_task_run_id=parent_task_run.id,\n            state=parent_task_run.state if not rerunning else Pending(),\n            tags=TagsContext.get().current_tags,\n        )\n\n        parent_logger.info(\n            f\"Created subflow run {flow_run.name!r} for flow {flow.name!r}\"\n        )\n\n        logger = flow_run_logger(flow_run, flow)\n        ui_url = PREFECT_UI_URL.value()\n        if ui_url:\n            logger.info(\n                f\"View at {ui_url}/flow-runs/flow-run/{flow_run.id}\",\n                extra={\"send_to_api\": False},\n            )\n\n        result_factory = await ResultFactory.from_flow(\n            flow, client=parent_flow_run_context.client\n        )\n\n        if flow.should_validate_parameters:\n            try:\n                parameters = flow.validate_parameters(parameters)\n            except Exception:\n                message = \"Validation of flow parameters failed with error:\"\n                logger.exception(message)\n                terminal_state = await propose_state(\n                    client,\n                
    state=await exception_to_failed_state(\n                        message=message, result_factory=result_factory\n                    ),\n                    flow_run_id=flow_run.id,\n                )\n\n        if terminal_state is None or not terminal_state.is_final():\n            async with AsyncExitStack() as stack:\n                await stack.enter_async_context(\n                    report_flow_run_crashes(flow_run=flow_run, client=client, flow=flow)\n                )\n\n                task_runner = flow.task_runner.duplicate()\n                if task_runner is NotImplemented:\n                    # Backwards compatibility; will not support concurrent flow runs\n                    task_runner = flow.task_runner\n                    logger.warning(\n                        f\"Task runner {type(task_runner).__name__!r} does not implement\"\n                        \" the `duplicate` method and will fail if used for concurrent\"\n                        \" execution of the same flow.\"\n                    )\n\n                await stack.enter_async_context(task_runner.start())\n\n                if log_prints:\n                    stack.enter_context(patch_print())\n\n                terminal_state = await orchestrate_flow_run(\n                    flow,\n                    flow_run=flow_run,\n                    parameters=parameters,\n                    wait_for=wait_for,\n                    # If the parent flow run has a timeout, then this one needs to be\n                    # interruptible as well\n                    interruptible=parent_flow_run_context.timeout_scope is not None,\n                    client=client,\n                    partial_flow_run_context=PartialModel(\n                        FlowRunContext,\n                        sync_portal=parent_flow_run_context.sync_portal,\n                        task_runner=task_runner,\n                        background_tasks=parent_flow_run_context.background_tasks,\n                        result_factory=result_factory,\n                        log_prints=log_prints,\n                    ),\n                    user_thread=user_thread,\n                )\n\n    # Display the full state (including the result) if debugging\n    display_state = repr(terminal_state) if PREFECT_DEBUG_MODE else str(terminal_state)\n    logger.log(\n        level=logging.INFO if terminal_state.is_completed() else logging.ERROR,\n        msg=f\"Finished in state {display_state}\",\n    )\n\n    # Track the subflow state so the parent flow can use it to determine its final state\n    parent_flow_run_context.flow_run_states.append(terminal_state)\n\n    if return_type == \"state\":\n        return terminal_state\n    elif return_type == \"result\":\n        return await terminal_state.result(fetch=True)\n    else:\n        raise ValueError(f\"Invalid return type for flow engine {return_type!r}.\")\n
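For orientation, this engine path is exercised when one flow calls another; a minimal sketch of such a subflow call, with placeholder flow names:

from prefect import flow

@flow
def child(x: int) -> int:
    return x * 2

@flow
def parent() -> int:
    # Calling a flow inside another flow creates a subflow run,
    # which is orchestrated by create_and_begin_subflow_run
    return child(21)

if __name__ == "__main__":
    parent()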
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.create_then_begin_flow_run","title":"create_then_begin_flow_run async","text":"

Async entrypoint for flow calls

Creates the flow run in the backend, then enters the main flow run engine.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
@inject_client\nasync def create_then_begin_flow_run(\n    flow: Flow,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n    client: PrefectClient,\n    user_thread: threading.Thread,\n) -> Any:\n\"\"\"\n    Async entrypoint for flow calls\n\n    Creates the flow run in the backend, then enters the main flow run engine.\n    \"\"\"\n    # TODO: Returns a `State` depending on `return_type` and we can add an overload to\n    #       the function signature to clarify this eventually.\n\n    await check_api_reachable(client, \"Cannot create flow run\")\n\n    state = Pending()\n    if flow.should_validate_parameters:\n        try:\n            parameters = flow.validate_parameters(parameters)\n        except Exception:\n            state = await exception_to_failed_state(\n                message=\"Validation of flow parameters failed with error:\"\n            )\n\n    flow_run = await client.create_flow_run(\n        flow,\n        # Send serialized parameters to the backend\n        parameters=flow.serialize_parameters(parameters),\n        state=state,\n        tags=TagsContext.get().current_tags,\n    )\n\n    engine_logger.info(f\"Created flow run {flow_run.name!r} for flow {flow.name!r}\")\n\n    logger = flow_run_logger(flow_run, flow)\n\n    ui_url = PREFECT_UI_URL.value()\n    if ui_url:\n        logger.info(\n            f\"View at {ui_url}/flow-runs/flow-run/{flow_run.id}\",\n            extra={\"send_to_api\": False},\n        )\n\n    if state.is_failed():\n        logger.error(state.message)\n        engine_logger.info(\n            f\"Flow run {flow_run.name!r} received invalid parameters and is marked as\"\n            \" failed.\"\n        )\n    else:\n        state = await begin_flow_run(\n            flow=flow,\n            flow_run=flow_run,\n            parameters=parameters,\n            client=client,\n            user_thread=user_thread,\n        )\n\n    if return_type == \"state\":\n        return state\n    elif return_type == \"result\":\n        return await state.result(fetch=True)\n    else:\n        raise ValueError(f\"Invalid return type for flow engine {return_type!r}.\")\n
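
For illustration only: a hedged sketch of how the engine's return_type surfaces in user code, where calling a flow normally returns its result while passing return_state=True returns the final State instead. The flow name and parameters below are hypothetical.

from prefect import flow\n\n@flow\ndef add(x: int, y: int) -> int:\n    return x + y\n\nif __name__ == \"__main__\":\n    result = add(1, 2)  # behaves like return_type \"result\": the flow's return value\n    state = add(1, 2, return_state=True)  # behaves like return_type \"state\": the final State\n    print(result, state.type)\n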
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.enter_flow_run_engine_from_flow_call","title":"enter_flow_run_engine_from_flow_call","text":"

Sync entrypoint for flow calls.

This function does the heavy lifting of ensuring we can get into an async context for flow run execution with minimal overhead.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
def enter_flow_run_engine_from_flow_call(\n    flow: Flow,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n) -> Union[State, Awaitable[State]]:\n\"\"\"\n    Sync entrypoint for flow calls.\n\n    This function does the heavy lifting of ensuring we can get into an async context\n    for flow run execution with minimal overhead.\n    \"\"\"\n    setup_logging()\n\n    registry = PrefectObjectRegistry.get()\n    if registry and registry.block_code_execution:\n        engine_logger.warning(\n            f\"Script loading is in progress, flow {flow.name!r} will not be executed.\"\n            \" Consider updating the script to only call the flow if executed\"\n            f' directly:\\n\\n\\tif __name__ == \"__main__\":\\n\\t\\t{flow.fn.__name__}()'\n        )\n        return None\n\n    if TaskRunContext.get():\n        raise RuntimeError(\n            \"Flows cannot be run from within tasks. Did you mean to call this \"\n            \"flow in a flow?\"\n        )\n\n    parent_flow_run_context = FlowRunContext.get()\n    is_subflow_run = parent_flow_run_context is not None\n\n    if wait_for is not None and not is_subflow_run:\n        raise ValueError(\"Only flows run as subflows can wait for dependencies.\")\n\n    begin_run = create_call(\n        create_and_begin_subflow_run if is_subflow_run else create_then_begin_flow_run,\n        flow=flow,\n        parameters=parameters,\n        wait_for=wait_for,\n        return_type=return_type,\n        client=parent_flow_run_context.client if is_subflow_run else None,\n        user_thread=threading.current_thread(),\n    )\n\n    # On completion of root flows, wait for the global thread to ensure that\n    # any work there is complete\n    done_callbacks = (\n        [create_call(wait_for_global_loop_exit)] if not is_subflow_run else None\n    )\n\n    # WARNING: You must define any context managers here to pass to our concurrency\n    # api instead of entering them in here in the engine entrypoint. Otherwise, async\n    # flows will not use the context as this function _exits_ to return an awaitable to\n    # the user. Generally, you should enter contexts _within_ the async `begin_run`\n    # instead but if you need to enter a context from the main thread you'll need to do\n    # it here.\n    contexts = [capture_sigterm()]\n\n    if flow.isasync and (\n        not is_subflow_run or (is_subflow_run and parent_flow_run_context.flow.isasync)\n    ):\n        # return a coro for the user to await if the flow is async\n        # unless it is an async subflow called in a sync flow\n        retval = from_async.wait_for_call_in_loop_thread(\n            begin_run,\n            done_callbacks=done_callbacks,\n            contexts=contexts,\n        )\n\n    else:\n        retval = from_sync.wait_for_call_in_loop_thread(\n            begin_run,\n            done_callbacks=done_callbacks,\n            contexts=contexts,\n        )\n\n    return retval\n
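
As the ValueError above indicates, wait_for is only accepted for subflow calls. A minimal sketch, with illustrative flow and task names, of a parent flow gating a subflow on a task future it does not otherwise consume:

from prefect import flow, task\n\n@task\ndef prepare():\n    return \"ready\"\n\n@flow\ndef child():\n    return \"child done\"\n\n@flow\ndef parent():\n    future = prepare.submit()\n    # The subflow will not start until `prepare` finishes,\n    # even though it does not consume the task's result\n    return child(wait_for=[future])\n\nif __name__ == \"__main__\":\n    parent()\n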
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.enter_flow_run_engine_from_subprocess","title":"enter_flow_run_engine_from_subprocess","text":"

Sync entrypoint for flow runs that have been submitted for execution by an agent

Differs from enter_flow_run_engine_from_flow_call in that we have a flow run id but not a flow object. The flow must be retrieved before execution can begin. Additionally, this assumes that the caller is always in a context without an event loop as this should be called from a fresh process.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
def enter_flow_run_engine_from_subprocess(flow_run_id: UUID) -> State:\n\"\"\"\n    Sync entrypoint for flow runs that have been submitted for execution by an agent\n\n    Differs from `enter_flow_run_engine_from_flow_call` in that we have a flow run id\n    but not a flow object. The flow must be retrieved before execution can begin.\n    Additionally, this assumes that the caller is always in a context without an event\n    loop as this should be called from a fresh process.\n    \"\"\"\n\n    # Ensure collections are imported and have the opportunity to register types before\n    # loading the user code from the deployment\n    prefect.plugins.load_prefect_collections()\n\n    setup_logging()\n\n    state = from_sync.wait_for_call_in_loop_thread(\n        create_call(\n            retrieve_flow_then_begin_flow_run,\n            flow_run_id,\n            user_thread=threading.current_thread(),\n        ),\n        contexts=[capture_sigterm()],\n    )\n\n    APILogHandler.flush()\n    return state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.enter_task_run_engine","title":"enter_task_run_engine","text":"

Sync entrypoint for task calls

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
def enter_task_run_engine(\n    task: Task,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n    task_runner: Optional[BaseTaskRunner],\n    mapped: bool,\n) -> Union[PrefectFuture, Awaitable[PrefectFuture]]:\n\"\"\"\n    Sync entrypoint for task calls\n    \"\"\"\n\n    flow_run_context = FlowRunContext.get()\n    if not flow_run_context:\n        raise RuntimeError(\n            \"Tasks cannot be run outside of a flow. To call the underlying task\"\n            \" function outside of a flow use `task.fn()`.\"\n        )\n\n    if TaskRunContext.get():\n        raise RuntimeError(\n            \"Tasks cannot be run from within tasks. Did you mean to call this \"\n            \"task in a flow?\"\n        )\n\n    if flow_run_context.timeout_scope and flow_run_context.timeout_scope.cancel_called:\n        raise TimeoutError(\"Flow run timed out\")\n\n    begin_run = create_call(\n        begin_task_map if mapped else get_task_call_return_value,\n        task=task,\n        flow_run_context=flow_run_context,\n        parameters=parameters,\n        wait_for=wait_for,\n        return_type=return_type,\n        task_runner=task_runner,\n    )\n\n    if task.isasync and flow_run_context.flow.isasync:\n        # return a coro for the user to await if an async task in an async flow\n        return from_async.wait_for_call_in_loop_thread(begin_run)\n    else:\n        return from_sync.wait_for_call_in_loop_thread(begin_run)\n
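
The RuntimeErrors above mean a task can only be orchestrated inside a flow; outside a flow the underlying function can still be called with task.fn(). A small illustrative sketch:

from prefect import flow, task\n\n@task\ndef double(x: int) -> int:\n    return x * 2\n\n@flow\ndef my_flow():\n    return double(21)  # orchestrated as a task run inside the flow\n\nif __name__ == \"__main__\":\n    print(double.fn(21))  # plain function call, no task run is created\n    my_flow()  # calling double(21) here directly, outside a flow, would raise the RuntimeError above\n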
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.get_state_for_result","title":"get_state_for_result","text":"

Get the state related to a result object.

link_state_to_result must have been called first.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
def get_state_for_result(obj: Any) -> Optional[State]:\n\"\"\"\n    Get the state related to a result object.\n\n    `link_state_to_result` must have been called first.\n    \"\"\"\n    flow_run_context = FlowRunContext.get()\n    if flow_run_context:\n        return flow_run_context.task_run_results.get(id(obj))\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.link_state_to_result","title":"link_state_to_result","text":"

Caches a link between a state and a result and its components using the id of the components to map to the state. The cache is persisted to the current flow run context since task relationships are limited to within a flow run.

This allows dependency tracking to occur when results are passed around. Note: Because id is used, we cannot cache links between singleton objects.

We only cache the relationship between components 1-layer deep.

Example

Given the result [1, [\"a\",\"b\"], (\"c\",)], the following elements will be mapped to the state:
- [1, [\"a\",\"b\"], (\"c\",)]
- [\"a\",\"b\"]
- (\"c\",)

Note: the int 1 will not be mapped to the state because it is a singleton.

Other Notes: We do not hash the result because:
- If changes are made to the object in the flow between task calls, we can still track that they are related.
- Hashing can be expensive.
- Not all objects are hashable.

We do not set an attribute, e.g. __prefect_state__, on the result because:
- Mutating user's objects is dangerous.
- Unrelated equality comparisons can break unexpectedly.
- The field can be preserved on copy.
- We cannot set this attribute on Python built-ins.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
def link_state_to_result(state: State, result: Any) -> None:\n\"\"\"\n    Caches a link between a state and a result and its components using\n    the `id` of the components to map to the state. The cache is persisted to the\n    current flow run context since task relationships are limited to within a flow run.\n\n    This allows dependency tracking to occur when results are passed around.\n    Note: Because `id` is used, we cannot cache links between singleton objects.\n\n    We only cache the relationship between components 1-layer deep.\n    Example:\n        Given the result [1, [\"a\",\"b\"], (\"c\",)], the following elements will be\n        mapped to the state:\n        - [1, [\"a\",\"b\"], (\"c\",)]\n        - [\"a\",\"b\"]\n        - (\"c\",)\n\n        Note: the int `1` will not be mapped to the state because it is a singleton.\n\n    Other Notes:\n    We do not hash the result because:\n    - If changes are made to the object in the flow between task calls, we can still\n      track that they are related.\n    - Hashing can be expensive.\n    - Not all objects are hashable.\n\n    We do not set an attribute, e.g. `__prefect_state__`, on the result because:\n\n    - Mutating user's objects is dangerous.\n    - Unrelated equality comparisons can break unexpectedly.\n    - The field can be preserved on copy.\n    - We cannot set this attribute on Python built-ins.\n    \"\"\"\n\n    flow_run_context = FlowRunContext.get()\n\n    def link_if_trackable(obj: Any) -> None:\n\"\"\"Track connection between a task run result and its associated state if it has a unique ID.\n\n        We cannot track booleans, Ellipsis, None, NotImplemented, or the integers from -5 to 256\n        because they are singletons.\n\n        This function will mutate the State if the object is an untrackable type by setting the value\n        for `State.state_details.untrackable_result` to `True`.\n\n        \"\"\"\n        if (type(obj) in UNTRACKABLE_TYPES) or (\n            isinstance(obj, int) and (-5 <= obj <= 256)\n        ):\n            state.state_details.untrackable_result = True\n            return\n        flow_run_context.task_run_results[id(obj)] = state\n\n    if flow_run_context:\n        visit_collection(expr=result, visit_fn=link_if_trackable, max_depth=1)\n
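
A hedged sketch of the tracking described above: passing a container result between tasks lets the engine link the downstream task run to the upstream state, while a small integer (a CPython singleton between -5 and 256) is marked untrackable. Task names are illustrative.

from prefect import flow, task\n\n@task\ndef make_items() -> list:\n    return [\"a\", \"b\", \"c\"]  # a list has a unique id(), so its state can be linked\n\n@task\ndef count_items(items: list) -> int:\n    return len(items)  # a small int result is a singleton and cannot be tracked\n\n@flow\ndef pipeline():\n    items = make_items()\n    return count_items(items)  # dependency on make_items is inferred from the passed result\n\nif __name__ == \"__main__\":\n    pipeline()\n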
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.orchestrate_flow_run","title":"orchestrate_flow_run async","text":"

Executes a flow run.

Note on flow timeouts

Since async flows are run directly in the main event loop, timeout behavior will match that described by anyio. If the flow is awaiting something, it will return immediately; otherwise, it will exit the next time it awaits. Sync flows are run in a worker thread, which cannot be interrupted. The worker thread will exit at the next task call. The worker thread also has access to the status of the cancellation scope at FlowRunContext.timeout_scope.cancel_called which allows it to raise a TimeoutError to respect the timeout.

Returns:

State: The final state of the run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
async def orchestrate_flow_run(\n    flow: Flow,\n    flow_run: FlowRun,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    interruptible: bool,\n    client: PrefectClient,\n    partial_flow_run_context: PartialModel[FlowRunContext],\n    user_thread: threading.Thread,\n) -> State:\n\"\"\"\n    Executes a flow run.\n\n    Note on flow timeouts:\n        Since async flows are run directly in the main event loop, timeout behavior will\n        match that described by anyio. If the flow is awaiting something, it will\n        immediately return; otherwise, the next time it awaits it will exit. Sync flows\n        are being task runner in a worker thread, which cannot be interrupted. The worker\n        thread will exit at the next task call. The worker thread also has access to the\n        status of the cancellation scope at `FlowRunContext.timeout_scope.cancel_called`\n        which allows it to raise a `TimeoutError` to respect the timeout.\n\n    Returns:\n        The final state of the run\n    \"\"\"\n\n    logger = flow_run_logger(flow_run, flow)\n\n    flow_run_context = None\n    parent_flow_run_context = FlowRunContext.get()\n\n    try:\n        # Resolve futures in any non-data dependencies to ensure they are ready\n        if wait_for is not None:\n            await resolve_inputs({\"wait_for\": wait_for}, return_data=False)\n    except UpstreamTaskError as upstream_exc:\n        return await propose_state(\n            client,\n            Pending(name=\"NotReady\", message=str(upstream_exc)),\n            flow_run_id=flow_run.id,\n            # if orchestrating a run already in a pending state, force orchestration to\n            # update the state name\n            force=flow_run.state.is_pending(),\n        )\n\n    state = await propose_state(client, Running(), flow_run_id=flow_run.id)\n\n    # flag to ensure we only update the flow run name once\n    run_name_set = False\n\n    while state.is_running():\n        waited_for_task_runs = False\n\n        # Update the flow run to the latest data\n        flow_run = await client.read_flow_run(flow_run.id)\n        try:\n            with partial_flow_run_context.finalize(\n                flow=flow,\n                flow_run=flow_run,\n                client=client,\n                parameters=parameters,\n            ) as flow_run_context:\n                # update flow run name\n                if not run_name_set and flow.flow_run_name:\n                    flow_run_name = _resolve_custom_flow_run_name(\n                        flow=flow, parameters=parameters\n                    )\n\n                    await client.update_flow_run(\n                        flow_run_id=flow_run.id, name=flow_run_name\n                    )\n                    logger.extra[\"flow_run_name\"] = flow_run_name\n                    logger.debug(\n                        f\"Renamed flow run {flow_run.name!r} to {flow_run_name!r}\"\n                    )\n                    flow_run.name = flow_run_name\n                    run_name_set = True\n\n                args, kwargs = parameters_to_args_kwargs(flow.fn, parameters)\n                logger.debug(\n                    f\"Executing flow {flow.name!r} for flow run {flow_run.name!r}...\"\n                )\n\n                if PREFECT_DEBUG_MODE:\n                    logger.debug(f\"Executing {call_repr(flow.fn, *args, **kwargs)}\")\n                else:\n                    logger.debug(\n                        \"Beginning execution...\", 
extra={\"state_message\": True}\n                    )\n\n                flow_call = create_call(flow.fn, *args, **kwargs)\n\n                # This check for a parent call is needed for cases where the engine\n                # was entered directly during testing\n                parent_call = get_current_call()\n\n                if parent_call and (\n                    not parent_flow_run_context\n                    or (\n                        parent_flow_run_context\n                        and parent_flow_run_context.flow.isasync == flow.isasync\n                    )\n                ):\n                    from_async.call_soon_in_waiting_thread(\n                        flow_call, thread=user_thread, timeout=flow.timeout_seconds\n                    )\n                else:\n                    from_async.call_soon_in_new_thread(\n                        flow_call, timeout=flow.timeout_seconds\n                    )\n\n                result = await flow_call.aresult()\n\n                waited_for_task_runs = await wait_for_task_runs_and_report_crashes(\n                    flow_run_context.task_run_futures, client=client\n                )\n        except PausedRun as exc:\n            # could get raised either via utility or by returning Paused from a task run\n            # if a task run pauses, we set its state as the flow's state\n            # to preserve reschedule and timeout behavior\n            paused_flow_run = await client.read_flow_run(flow_run.id)\n            if paused_flow_run.state.is_running():\n                state = await propose_state(\n                    client,\n                    state=exc.state,\n                    flow_run_id=flow_run.id,\n                )\n\n                return state\n            paused_flow_run_state = paused_flow_run.state\n            return paused_flow_run_state\n        except CancelledError as exc:\n            if not flow_call.timedout():\n                # If the flow call was not cancelled by us; this is a crash\n                raise\n            # Construct a new exception as `TimeoutError`\n            original = exc\n            exc = TimeoutError()\n            exc.__cause__ = original\n            logger.exception(\"Encountered exception during execution:\")\n            terminal_state = await exception_to_failed_state(\n                exc,\n                message=f\"Flow run exceeded timeout of {flow.timeout_seconds} seconds\",\n                result_factory=flow_run_context.result_factory,\n                name=\"TimedOut\",\n            )\n        except Exception:\n            # Generic exception in user code\n            logger.exception(\"Encountered exception during execution:\")\n            terminal_state = await exception_to_failed_state(\n                message=\"Flow run encountered an exception.\",\n                result_factory=flow_run_context.result_factory,\n            )\n        else:\n            if result is None:\n                # All tasks and subflows are reference tasks if there is no return value\n                # If there are no tasks, use `None` instead of an empty iterable\n                result = (\n                    flow_run_context.task_run_futures\n                    + flow_run_context.task_run_states\n                    + flow_run_context.flow_run_states\n                ) or None\n\n            terminal_state = await return_value_to_state(\n                await resolve_futures_to_states(result),\n                result_factory=flow_run_context.result_factory,\n   
         )\n\n        if not waited_for_task_runs:\n            # An exception occured that prevented us from waiting for task runs to\n            # complete. Ensure that we wait for them before proposing a final state\n            # for the flow run.\n            await wait_for_task_runs_and_report_crashes(\n                flow_run_context.task_run_futures, client=client\n            )\n\n        # Before setting the flow run state, store state.data using\n        # block storage and send the resulting data document to the Prefect API instead.\n        # This prevents the pickled return value of flow runs\n        # from being sent to the Prefect API and stored in the Prefect database.\n        # state.data is left as is, otherwise we would have to load\n        # the data from block storage again after storing.\n        state = await propose_state(\n            client,\n            state=terminal_state,\n            flow_run_id=flow_run.id,\n        )\n\n        await _run_flow_hooks(flow=flow, flow_run=flow_run, state=state)\n\n        if state.type != terminal_state.type and PREFECT_DEBUG_MODE:\n            logger.debug(\n                (\n                    f\"Received new state {state} when proposing final state\"\n                    f\" {terminal_state}\"\n                ),\n                extra={\"send_to_api\": False},\n            )\n\n        if not state.is_final() and not state.is_paused():\n            logger.info(\n                (\n                    f\"Received non-final state {state.name!r} when proposing final\"\n                    f\" state {terminal_state.name!r} and will attempt to run again...\"\n                ),\n                extra={\"send_to_api\": False},\n            )\n            # Attempt to enter a running state again\n            state = await propose_state(client, Running(), flow_run_id=flow_run.id)\n\n    return state\n
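
To make the timeout note concrete, a brief sketch of a flow-level timeout, which is what populates flow.timeout_seconds and the cancellation scope discussed above; the flow name and durations are illustrative.

import time\nfrom prefect import flow\n\n@flow(timeout_seconds=5)\ndef slow_flow():\n    # A sync flow runs in a worker thread; once the timeout elapses the run\n    # is reported as TimedOut even though the thread exits at its own pace\n    time.sleep(10)\n    return \"never reached\"\n\nif __name__ == \"__main__\":\n    state = slow_flow(return_state=True)\n    print(state.name)  # expected to be \"TimedOut\" under these assumptions\n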
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.orchestrate_task_run","title":"orchestrate_task_run async","text":"

Execute a task run

This function should be submitted to a task runner. We must construct the context here instead of receiving it already populated since we may be in a new environment.

Proposes a RUNNING state, then:
- if accepted, the task user function will be run
- if rejected, the received state will be returned

When the user function is run, the result will be used to determine a final state:
- if an exception is encountered, it is trapped and stored in a FAILED state
- otherwise, return_value_to_state is used to determine the state

If the final state is COMPLETED, we generate a cache key as specified by the task.

The final state is then proposed:
- if accepted, this is the final state and will be returned
- if rejected and a new final state is provided, it will be returned
- if rejected and a non-final state is provided, we will attempt to enter a RUNNING state again

Returns:

State: The final state of the run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
async def orchestrate_task_run(\n    task: Task,\n    task_run: TaskRun,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    result_factory: ResultFactory,\n    log_prints: bool,\n    interruptible: bool,\n    client: PrefectClient,\n) -> State:\n\"\"\"\n    Execute a task run\n\n    This function should be submitted to an task runner. We must construct the context\n    here instead of receiving it already populated since we may be in a new environment.\n\n    Proposes a RUNNING state, then\n    - if accepted, the task user function will be run\n    - if rejected, the received state will be returned\n\n    When the user function is run, the result will be used to determine a final state\n    - if an exception is encountered, it is trapped and stored in a FAILED state\n    - otherwise, `return_value_to_state` is used to determine the state\n\n    If the final state is COMPLETED, we generate a cache key as specified by the task\n\n    The final state is then proposed\n    - if accepted, this is the final state and will be returned\n    - if rejected and a new final state is provided, it will be returned\n    - if rejected and a non-final state is provided, we will attempt to enter a RUNNING\n        state again\n\n    Returns:\n        The final state of the run\n    \"\"\"\n    flow_run = await client.read_flow_run(task_run.flow_run_id)\n    logger = task_run_logger(task_run, task=task, flow_run=flow_run)\n\n    partial_task_run_context = PartialModel(\n        TaskRunContext,\n        task_run=task_run,\n        task=task,\n        client=client,\n        result_factory=result_factory,\n        log_prints=log_prints,\n    )\n\n    try:\n        # Resolve futures in parameters into data\n        resolved_parameters = await resolve_inputs(parameters)\n        # Resolve futures in any non-data dependencies to ensure they are ready\n        await resolve_inputs({\"wait_for\": wait_for}, return_data=False)\n    except UpstreamTaskError as upstream_exc:\n        return await propose_state(\n            client,\n            Pending(name=\"NotReady\", message=str(upstream_exc)),\n            task_run_id=task_run.id,\n            # if orchestrating a run already in a pending state, force orchestration to\n            # update the state name\n            force=task_run.state.is_pending(),\n        )\n\n    # Generate the cache key to attach to proposed states\n    # The cache key uses a TaskRunContext that does not include a `timeout_context``\n    cache_key = (\n        task.cache_key_fn(\n            partial_task_run_context.finalize(parameters=resolved_parameters),\n            resolved_parameters,\n        )\n        if task.cache_key_fn\n        else None\n    )\n\n    task_run_context = partial_task_run_context.finalize(parameters=resolved_parameters)\n\n    # Ignore the cached results for a cache key, default = false\n    # Setting on task level overrules the Prefect setting (env var)\n    refresh_cache = (\n        task.refresh_cache\n        if task.refresh_cache is not None\n        else PREFECT_TASKS_REFRESH_CACHE.value()\n    )\n\n    # Emit an event to capture that the task run was in the `PENDING` state.\n    last_event = _emit_task_run_state_change_event(\n        task_run=task_run, initial_state=None, validated_state=task_run.state\n    )\n    last_state = task_run.state\n\n    # Transition from `PENDING` -> `RUNNING`\n    state = await propose_state(\n        client,\n        Running(\n            state_details=StateDetails(cache_key=cache_key, 
refresh_cache=refresh_cache)\n        ),\n        task_run_id=task_run.id,\n    )\n\n    # Emit an event to capture the result of proposing a `RUNNING` state.\n    last_event = _emit_task_run_state_change_event(\n        task_run=task_run,\n        initial_state=last_state,\n        validated_state=state,\n        follows=last_event,\n    )\n    last_state = state\n\n    # flag to ensure we only update the task run name once\n    run_name_set = False\n\n    # Only run the task if we enter a `RUNNING` state\n    while state.is_running():\n        # Retrieve the latest metadata for the task run context\n        task_run = await client.read_task_run(task_run.id)\n\n        with task_run_context.copy(\n            update={\"task_run\": task_run, \"start_time\": pendulum.now(\"UTC\")}\n        ):\n            try:\n                args, kwargs = parameters_to_args_kwargs(task.fn, resolved_parameters)\n                # update task run name\n                if not run_name_set and task.task_run_name:\n                    task_run_name = _resolve_custom_task_run_name(\n                        task=task, parameters=resolved_parameters\n                    )\n                    await client.set_task_run_name(\n                        task_run_id=task_run.id, name=task_run_name\n                    )\n                    logger.extra[\"task_run_name\"] = task_run_name\n                    logger.debug(\n                        f\"Renamed task run {task_run.name!r} to {task_run_name!r}\"\n                    )\n                    task_run.name = task_run_name\n                    run_name_set = True\n\n                if PREFECT_DEBUG_MODE.value():\n                    logger.debug(f\"Executing {call_repr(task.fn, *args, **kwargs)}\")\n                else:\n                    logger.debug(\n                        \"Beginning execution...\", extra={\"state_message\": True}\n                    )\n\n                call = from_async.call_soon_in_new_thread(\n                    create_call(task.fn, *args, **kwargs), timeout=task.timeout_seconds\n                )\n                result = await call.aresult()\n\n            except (CancelledError, asyncio.CancelledError) as exc:\n                if not call.timedout():\n                    # If the task call was not cancelled by us; this is a crash\n                    raise\n                # Construct a new exception as `TimeoutError`\n                original = exc\n                exc = TimeoutError()\n                exc.__cause__ = original\n                logger.exception(\"Encountered exception during execution:\")\n                terminal_state = await exception_to_failed_state(\n                    exc,\n                    message=(\n                        f\"Task run exceeded timeout of {task.timeout_seconds} seconds\"\n                    ),\n                    result_factory=task_run_context.result_factory,\n                    name=\"TimedOut\",\n                )\n            except Exception as exc:\n                logger.exception(\"Encountered exception during execution:\")\n                terminal_state = await exception_to_failed_state(\n                    exc,\n                    message=\"Task run encountered an exception\",\n                    result_factory=task_run_context.result_factory,\n                )\n            else:\n                terminal_state = await return_value_to_state(\n                    result,\n                    result_factory=task_run_context.result_factory,\n                )\n\n        
        # for COMPLETED tasks, add the cache key and expiration\n                if terminal_state.is_completed():\n                    terminal_state.state_details.cache_expiration = (\n                        (pendulum.now(\"utc\") + task.cache_expiration)\n                        if task.cache_expiration\n                        else None\n                    )\n                    terminal_state.state_details.cache_key = cache_key\n\n            state = await propose_state(client, terminal_state, task_run_id=task_run.id)\n            last_event = _emit_task_run_state_change_event(\n                task_run=task_run,\n                initial_state=last_state,\n                validated_state=state,\n                follows=last_event,\n            )\n            last_state = state\n\n            await _run_task_hooks(\n                task=task,\n                task_run=task_run,\n                state=state,\n            )\n\n            if state.type != terminal_state.type and PREFECT_DEBUG_MODE:\n                logger.debug(\n                    (\n                        f\"Received new state {state} when proposing final state\"\n                        f\" {terminal_state}\"\n                    ),\n                    extra={\"send_to_api\": False},\n                )\n\n            if not state.is_final() and not state.is_paused():\n                logger.info(\n                    (\n                        f\"Received non-final state {state.name!r} when proposing final\"\n                        f\" state {terminal_state.name!r} and will attempt to run\"\n                        \" again...\"\n                    ),\n                    extra={\"send_to_api\": False},\n                )\n                # Attempt to enter a running state again\n                state = await propose_state(client, Running(), task_run_id=task_run.id)\n                last_event = _emit_task_run_state_change_event(\n                    task_run=task_run,\n                    initial_state=last_state,\n                    validated_state=state,\n                    follows=last_event,\n                )\n                last_state = state\n\n    # If debugging, use the more complete `repr` than the usual `str` description\n    display_state = repr(state) if PREFECT_DEBUG_MODE else str(state)\n\n    logger.log(\n        level=logging.INFO if state.is_completed() else logging.ERROR,\n        msg=f\"Finished in state {display_state}\",\n    )\n\n    return state\n
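
The cache key handling above corresponds to the task-level caching options. A hedged example, assuming task_input_hash is importable from prefect.tasks as in Prefect 2.x:

from datetime import timedelta\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\n\n@task(\n    cache_key_fn=task_input_hash,  # used by the engine to compute cache_key\n    cache_expiration=timedelta(hours=1),  # stored on the COMPLETED state's details\n    refresh_cache=False,  # overrides the PREFECT_TASKS_REFRESH_CACHE setting when set\n)\ndef expensive(x: int) -> int:\n    return x ** 2\n\n@flow\ndef my_flow():\n    expensive(4)  # first call computes and caches\n    expensive(4)  # a later call with the same inputs can reuse the cached COMPLETED state\n\nif __name__ == \"__main__\":\n    my_flow()\n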
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.pause_flow_run","title":"pause_flow_run async","text":"

Pauses the current flow run by stopping execution until resumed.

When called within a flow run, execution will block and no downstream tasks will run until the flow is resumed. Task runs that have already started will continue running. A timeout parameter can be passed that will fail the flow run if it has not been resumed within the specified time.

Parameters:

flow_run_id (UUID, default None): a flow run id. If supplied, this function will attempt to pause the specified flow run outside of the flow run process. When paused, the flow run will continue execution until the NEXT task is orchestrated, at which point the flow will exit. Any tasks that have already started will run until completion. When resumed, the flow run will be rescheduled to finish execution. In order to pause a flow run in this way, the flow needs to have an associated deployment and results need to be configured with the persist_results option.

timeout (int, default 300): the number of seconds to wait for the flow to be resumed before failing. Defaults to 5 minutes (300 seconds). If the pause timeout exceeds any configured flow-level timeout, the flow might fail even after resuming.

poll_interval (int, default 10): the number of seconds between checking whether the flow has been resumed. Defaults to 10 seconds.

reschedule (bool, default False): flag that will reschedule the flow run if resumed. Instead of blocking execution, the flow will gracefully exit (with no result returned). To use this flag, a flow needs to have an associated deployment and results need to be configured with the persist_results option.

key (str, default None): an optional key to prevent calling pauses more than once. This defaults to the number of pauses observed by the flow so far, and prevents pauses that use the \"reschedule\" option from running the same pause twice. A custom key can be supplied for custom pausing behavior.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
@sync_compatible\nasync def pause_flow_run(\n    flow_run_id: UUID = None,\n    timeout: int = 300,\n    poll_interval: int = 10,\n    reschedule: bool = False,\n    key: str = None,\n):\n\"\"\"\n    Pauses the current flow run by stopping execution until resumed.\n\n    When called within a flow run, execution will block and no downstream tasks will\n    run until the flow is resumed. Task runs that have already started will continue\n    running. A timeout parameter can be passed that will fail the flow run if it has not\n    been resumed within the specified time.\n\n    Args:\n        flow_run_id: a flow run id. If supplied, this function will attempt to pause\n            the specified flow run outside of the flow run process. When paused, the\n            flow run will continue execution until the NEXT task is orchestrated, at\n            which point the flow will exit. Any tasks that have already started will\n            run until completion. When resumed, the flow run will be rescheduled to\n            finish execution. In order pause a flow run in this way, the flow needs to\n            have an associated deployment and results need to be configured with the\n            `persist_results` option.\n        timeout: the number of seconds to wait for the flow to be resumed before\n            failing. Defaults to 5 minutes (300 seconds). If the pause timeout exceeds\n            any configured flow-level timeout, the flow might fail even after resuming.\n        poll_interval: The number of seconds between checking whether the flow has been\n            resumed. Defaults to 10 seconds.\n        reschedule: Flag that will reschedule the flow run if resumed. Instead of\n            blocking execution, the flow will gracefully exit (with no result returned)\n            instead. To use this flag, a flow needs to have an associated deployment and\n            results need to be configured with the `persist_results` option.\n        key: An optional key to prevent calling pauses more than once. This defaults to\n            the number of pauses observed by the flow so far, and prevents pauses that\n            use the \"reschedule\" option from running the same pause twice. A custom key\n            can be supplied for custom pausing behavior.\n    \"\"\"\n    if flow_run_id:\n        return await _out_of_process_pause(\n            flow_run_id=flow_run_id,\n            timeout=timeout,\n            reschedule=reschedule,\n            key=key,\n        )\n    else:\n        return await _in_process_pause(\n            timeout=timeout, poll_interval=poll_interval, reschedule=reschedule, key=key\n        )\n
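
A minimal sketch of an in-process pause, assuming the flow below is run locally: the run blocks at pause_flow_run until it is resumed (for example from the UI or with resume_flow_run) or the timeout elapses. Flow and task names are illustrative.

from prefect import flow, task\nfrom prefect.engine import pause_flow_run\n\n@task\ndef load():\n    return [1, 2, 3]\n\n@task\ndef publish(data):\n    print(f\"publishing {data}\")\n\n@flow\ndef reviewed_pipeline():\n    data = load()\n    # Block here for up to 10 minutes waiting for the run to be resumed\n    pause_flow_run(timeout=600, poll_interval=15)\n    publish(data)\n\nif __name__ == \"__main__\":\n    reviewed_pipeline()\n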
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.propose_state","title":"propose_state async","text":"

Propose a new state for a flow run or task run, invoking Prefect orchestration logic.

If the proposed state is accepted, the provided state will be augmented with details and returned.

If the proposed state is rejected, a new state returned by the Prefect API will be returned.

If the proposed state results in a WAIT instruction from the Prefect API, the function will sleep and attempt to propose the state again.

If the proposed state results in an ABORT instruction from the Prefect API, an error will be raised.

Parameters:

state (State, required): a new state for the task or flow run

task_run_id (UUID, default None): an optional task run id, used when proposing task run states

flow_run_id (UUID, default None): an optional flow run id, used when proposing flow run states

Returns:

State: a State model representation of the flow or task run state

Raises:

ValueError: if neither task_run_id nor flow_run_id is provided

prefect.exceptions.Abort: if an ABORT instruction is received from the Prefect API

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
async def propose_state(\n    client: PrefectClient,\n    state: State,\n    force: bool = False,\n    task_run_id: UUID = None,\n    flow_run_id: UUID = None,\n) -> State:\n\"\"\"\n    Propose a new state for a flow run or task run, invoking Prefect orchestration logic.\n\n    If the proposed state is accepted, the provided `state` will be augmented with\n     details and returned.\n\n    If the proposed state is rejected, a new state returned by the Prefect API will be\n    returned.\n\n    If the proposed state results in a WAIT instruction from the Prefect API, the\n    function will sleep and attempt to propose the state again.\n\n    If the proposed state results in an ABORT instruction from the Prefect API, an\n    error will be raised.\n\n    Args:\n        state: a new state for the task or flow run\n        task_run_id: an optional task run id, used when proposing task run states\n        flow_run_id: an optional flow run id, used when proposing flow run states\n\n    Returns:\n        a [State model][prefect.client.schemas.objects.State] representation of the\n            flow or task run state\n\n    Raises:\n        ValueError: if neither task_run_id or flow_run_id is provided\n        prefect.exceptions.Abort: if an ABORT instruction is received from\n            the Prefect API\n    \"\"\"\n\n    # Determine if working with a task run or flow run\n    if not task_run_id and not flow_run_id:\n        raise ValueError(\"You must provide either a `task_run_id` or `flow_run_id`\")\n\n    # Handle task and sub-flow tracing\n    if state.is_final():\n        if isinstance(state.data, BaseResult) and state.data.has_cached_object():\n            # Avoid fetching the result unless it is cached, otherwise we defeat\n            # the purpose of disabling `cache_result_in_memory`\n            result = await state.result(raise_on_failure=False, fetch=True)\n        else:\n            result = state.data\n\n        link_state_to_result(state, result)\n\n    # Handle repeated WAITs in a loop instead of recursively, to avoid\n    # reaching max recursion depth in extreme cases.\n    async def set_state_and_handle_waits(set_state_func) -> OrchestrationResult:\n        response = await set_state_func()\n        while response.status == SetStateStatus.WAIT:\n            engine_logger.debug(\n                f\"Received wait instruction for {response.details.delay_seconds}s: \"\n                f\"{response.details.reason}\"\n            )\n            await anyio.sleep(response.details.delay_seconds)\n            response = await set_state_func()\n        return response\n\n    # Attempt to set the state\n    if task_run_id:\n        set_state = partial(client.set_task_run_state, task_run_id, state, force=force)\n        response = await set_state_and_handle_waits(set_state)\n    elif flow_run_id:\n        set_state = partial(client.set_flow_run_state, flow_run_id, state, force=force)\n        response = await set_state_and_handle_waits(set_state)\n    else:\n        raise ValueError(\n            \"Neither flow run id or task run id were provided. 
At least one must \"\n            \"be given.\"\n        )\n\n    # Parse the response to return the new state\n    if response.status == SetStateStatus.ACCEPT:\n        # Update the state with the details if provided\n        state.id = response.state.id\n        state.timestamp = response.state.timestamp\n        if response.state.state_details:\n            state.state_details = response.state.state_details\n        return state\n\n    elif response.status == SetStateStatus.ABORT:\n        raise prefect.exceptions.Abort(response.details.reason)\n\n    elif response.status == SetStateStatus.REJECT:\n        if response.state.is_paused():\n            raise Pause(response.details.reason)\n        return response.state\n\n    else:\n        raise ValueError(\n            f\"Received unexpected `SetStateStatus` from server: {response.status!r}\"\n        )\n
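
propose_state is an engine-internal helper, but for illustration here is a hedged sketch of proposing a state for an existing flow run by id, following the signature documented above; the flow run id is a placeholder and orchestration may accept, reject, or abort the proposal.

import asyncio\nfrom prefect.client.orchestration import get_client\nfrom prefect.engine import propose_state\nfrom prefect.states import Completed\n\nasync def mark_completed(flow_run_id):\n    async with get_client() as client:\n        # The returned state reflects the orchestrator's decision\n        state = await propose_state(client, Completed(), flow_run_id=flow_run_id)\n        print(state.type)\n\n# asyncio.run(mark_completed(\"<flow-run-id>\"))  # placeholder id\n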
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.report_flow_run_crashes","title":"report_flow_run_crashes async","text":"

Detect flow run crashes during this context and update the run to a proper final state.

This context must reraise the exception to properly exit the run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
@asynccontextmanager\nasync def report_flow_run_crashes(flow_run: FlowRun, client: PrefectClient, flow: Flow):\n\"\"\"\n    Detect flow run crashes during this context and update the run to a proper final\n    state.\n\n    This context _must_ reraise the exception to properly exit the run.\n    \"\"\"\n\n    try:\n        yield\n    except (Abort, Pause):\n        # Do not capture internal signals as crashes\n        raise\n    except BaseException as exc:\n        state = await exception_to_crashed_state(exc)\n        logger = flow_run_logger(flow_run)\n        with anyio.CancelScope(shield=True):\n            logger.error(f\"Crash detected! {state.message}\")\n            logger.debug(\"Crash details:\", exc_info=exc)\n            flow_run_state = await propose_state(client, state, flow_run_id=flow_run.id)\n            engine_logger.debug(\n                f\"Reported crashed flow run {flow_run.name!r} successfully!\"\n            )\n\n            # Only `on_crashed` and `on_cancellation` flow run state change hooks can be called here.\n            # We call the hooks after the state change proposal to `CRASHED` is validated\n            # or rejected (if it is in a `CANCELLING` state).\n            await _run_flow_hooks(\n                flow=flow,\n                flow_run=flow_run,\n                state=flow_run_state,\n            )\n\n        # Reraise the exception\n        raise\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.report_task_run_crashes","title":"report_task_run_crashes async","text":"

Detect task run crashes during this context and update the run to a proper final state.

This context must reraise the exception to properly exit the run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
@asynccontextmanager\nasync def report_task_run_crashes(task_run: TaskRun, client: PrefectClient):\n\"\"\"\n    Detect task run crashes during this context and update the run to a proper final\n    state.\n\n    This context _must_ reraise the exception to properly exit the run.\n    \"\"\"\n    try:\n        yield\n    except (Abort, Pause):\n        # Do not capture internal signals as crashes\n        raise\n    except BaseException as exc:\n        state = await exception_to_crashed_state(exc)\n        logger = task_run_logger(task_run)\n        with anyio.CancelScope(shield=True):\n            logger.error(f\"Crash detected! {state.message}\")\n            logger.debug(\"Crash details:\", exc_info=exc)\n            await client.set_task_run_state(\n                state=state,\n                task_run_id=task_run.id,\n                force=True,\n            )\n            engine_logger.debug(\n                f\"Reported crashed task run {task_run.name!r} successfully!\"\n            )\n\n        # Reraise the exception\n        raise\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.resolve_inputs","title":"resolve_inputs async","text":"

Resolve any Quote, PrefectFuture, or State types nested in parameters into data.

Returns:

Dict[str, Any]: A copy of the parameters with resolved data

Raises:

UpstreamTaskError: If any of the upstream states are not COMPLETED

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
async def resolve_inputs(\n    parameters: Dict[str, Any], return_data: bool = True, max_depth: int = -1\n) -> Dict[str, Any]:\n\"\"\"\n    Resolve any `Quote`, `PrefectFuture`, or `State` types nested in parameters into\n    data.\n\n    Returns:\n        A copy of the parameters with resolved data\n\n    Raises:\n        UpstreamTaskError: If any of the upstream states are not `COMPLETED`\n    \"\"\"\n\n    futures = set()\n    states = set()\n    result_by_state = {}\n\n    if not parameters:\n        return {}\n\n    def collect_futures_and_states(expr, context):\n        # Expressions inside quotes should not be traversed\n        if isinstance(context.get(\"annotation\"), quote):\n            raise StopVisiting()\n\n        if isinstance(expr, PrefectFuture):\n            futures.add(expr)\n        if is_state(expr):\n            states.add(expr)\n\n        return expr\n\n    visit_collection(\n        parameters,\n        visit_fn=collect_futures_and_states,\n        return_data=False,\n        max_depth=max_depth,\n        context={},\n    )\n\n    # Wait for all futures so we do not block when we retrieve the state in `resolve_input`\n    states.update(await asyncio.gather(*[future._wait() for future in futures]))\n\n    # Only retrieve the result if requested as it may be expensive\n    if return_data:\n        finished_states = [state for state in states if state.is_final()]\n\n        state_results = await asyncio.gather(\n            *[\n                state.result(raise_on_failure=False, fetch=True)\n                for state in finished_states\n            ]\n        )\n\n        for state, result in zip(finished_states, state_results):\n            result_by_state[state] = result\n\n    def resolve_input(expr, context):\n        state = None\n\n        # Expressions inside quotes should not be modified\n        if isinstance(context.get(\"annotation\"), quote):\n            raise StopVisiting()\n\n        if isinstance(expr, PrefectFuture):\n            state = expr._final_state\n        elif is_state(expr):\n            state = expr\n        else:\n            return expr\n\n        # Do not allow uncompleted upstreams except failures when `allow_failure` has\n        # been used\n        if not state.is_completed() and not (\n            # TODO: Note that the contextual annotation here is only at the current level\n            #       if `allow_failure` is used then another annotation is used, this will\n            #       incorrectly evaulate to false \u2014 to resolve this, we must track all\n            #       annotations wrapping the current expression but this is not yet\n            #       implemented.\n            isinstance(context.get(\"annotation\"), allow_failure)\n            and state.is_failed()\n        ):\n            raise UpstreamTaskError(\n                f\"Upstream task run '{state.state_details.task_run_id}' did not reach a\"\n                \" 'COMPLETED' state.\"\n            )\n\n        return result_by_state.get(state)\n\n    resolved_parameters = {}\n    for parameter, value in parameters.items():\n        try:\n            resolved_parameters[parameter] = visit_collection(\n                value,\n                visit_fn=resolve_input,\n                return_data=return_data,\n                # we're manually going 1 layer deeper here\n                max_depth=max_depth - 1,\n                remove_annotations=True,\n                context={},\n            )\n        except UpstreamTaskError:\n            raise\n        except 
Exception as exc:\n            raise PrefectException(\n                f\"Failed to resolve inputs in parameter {parameter!r}. If your\"\n                \" parameter type is not supported, consider using the `quote`\"\n                \" annotation to skip resolution of inputs.\"\n            ) from exc\n\n    return resolved_parameters\n
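
The error message above points at the quote annotation; a hedged sketch, assuming quote is importable from prefect.utilities.annotations as in Prefect 2.x, of wrapping an argument so the engine skips resolving the future nested inside it:

from prefect import flow, task\nfrom prefect.utilities.annotations import quote\n\n@task\ndef upstream():\n    return 42\n\n@task\ndef inspect(payload):\n    # With quote, the nested future is not resolved into its result\n    print(payload)\n\n@flow\ndef my_flow():\n    future = upstream.submit()\n    inspect(quote({\"future\": future}))  # skip input resolution for this argument\n\nif __name__ == \"__main__\":\n    my_flow()\n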
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.resume_flow_run","title":"resume_flow_run async","text":"

Resumes a paused flow.

Parameters:

flow_run_id (required): the flow_run_id to resume

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
@sync_compatible\nasync def resume_flow_run(flow_run_id):\n\"\"\"\n    Resumes a paused flow.\n\n    Args:\n        flow_run_id: the flow_run_id to resume\n    \"\"\"\n    client = get_client()\n    flow_run = await client.read_flow_run(flow_run_id)\n\n    if not flow_run.state.is_paused():\n        raise NotPausedError(\"Cannot resume a run that isn't paused!\")\n\n    response = await client.resume_flow_run(flow_run_id)\n\n    if response.status == SetStateStatus.REJECT:\n        if response.state.type == StateType.FAILED:\n            raise FlowPauseTimeout(\"Flow run can no longer be resumed.\")\n        else:\n            raise RuntimeError(f\"Cannot resume this run: {response.details.reason}\")\n
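
For completeness, a short sketch of resuming a paused run from another process by id; the id shown is a placeholder.

from prefect.engine import resume_flow_run\n\ndef resume(flow_run_id: str):\n    # Raises NotPausedError if the run is not paused, and FlowPauseTimeout\n    # if the pause can no longer be resumed\n    resume_flow_run(flow_run_id)\n\n# resume(\"<paused-flow-run-id>\")  # placeholder id\n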
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.retrieve_flow_then_begin_flow_run","title":"retrieve_flow_then_begin_flow_run async","text":"

Async entrypoint for flow runs that have been submitted for execution by an agent

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
@inject_client\nasync def retrieve_flow_then_begin_flow_run(\n    flow_run_id: UUID,\n    client: PrefectClient,\n    user_thread: threading.Thread,\n) -> State:\n\"\"\"\n    Async entrypoint for flow runs that have been submitted for execution by an agent\n\n    - Retrieves the deployment information\n    - Loads the flow object using deployment information\n    - Updates the flow run version\n    \"\"\"\n    flow_run = await client.read_flow_run(flow_run_id)\n    try:\n        flow = await load_flow_from_flow_run(flow_run, client=client)\n    except Exception:\n        message = \"Flow could not be retrieved from deployment.\"\n        flow_run_logger(flow_run).exception(message)\n        state = await exception_to_failed_state(message=message)\n        await client.set_flow_run_state(\n            state=state, flow_run_id=flow_run_id, force=True\n        )\n        return state\n\n    # Update the flow run policy defaults to match settings on the flow\n    # Note: Mutating the flow run object prevents us from performing another read\n    #       operation if these properties are used by the client downstream\n    if flow_run.empirical_policy.retry_delay is None:\n        flow_run.empirical_policy.retry_delay = flow.retry_delay_seconds\n\n    if flow_run.empirical_policy.retries is None:\n        flow_run.empirical_policy.retries = flow.retries\n\n    await client.update_flow_run(\n        flow_run_id=flow_run_id,\n        flow_version=flow.version,\n        empirical_policy=flow_run.empirical_policy,\n    )\n\n    if flow.should_validate_parameters:\n        failed_state = None\n        try:\n            parameters = flow.validate_parameters(flow_run.parameters)\n        except Exception:\n            message = \"Validation of flow parameters failed with error: \"\n            flow_run_logger(flow_run).exception(message)\n            failed_state = await exception_to_failed_state(message=message)\n\n        if failed_state is not None:\n            await propose_state(\n                client,\n                state=failed_state,\n                flow_run_id=flow_run_id,\n            )\n            return failed_state\n    else:\n        parameters = flow_run.parameters\n\n    # Ensure default values are populated\n    parameters = {**get_parameter_defaults(flow.fn), **parameters}\n\n    return await begin_flow_run(\n        flow=flow,\n        flow_run=flow_run,\n        parameters=parameters,\n        client=client,\n        user_thread=user_thread,\n    )\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/exceptions/","title":"prefect.exceptions","text":"","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions","title":"prefect.exceptions","text":"

Prefect-specific exceptions.

","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.Abort","title":"Abort","text":"

Bases: PrefectSignal

Raised when the API sends an 'ABORT' instruction during state proposal.

Indicates that the run should exit immediately.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class Abort(PrefectSignal):\n\"\"\"\n    Raised when the API sends an 'ABORT' instruction during state proposal.\n\n    Indicates that the run should exit immediately.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.BlockMissingCapabilities","title":"BlockMissingCapabilities","text":"

Bases: PrefectException

Raised when a block does not have required capabilities for a given operation.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class BlockMissingCapabilities(PrefectException):\n\"\"\"\n    Raised when a block does not have required capabilities for a given operation.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.CancelledRun","title":"CancelledRun","text":"

Bases: PrefectException

Raised when the result from a cancelled run is retrieved and an exception is not attached.

This occurs when a string is attached to the state instead of an exception or if the state's data is null.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class CancelledRun(PrefectException):\n\"\"\"\n    Raised when the result from a cancelled run is retrieved and an exception\n    is not attached.\n\n    This occurs when a string is attached to the state instead of an exception\n    or if the state's data is null.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.CrashedRun","title":"CrashedRun","text":"

Bases: PrefectException

Raised when the result from a crashed run is retrieved.

This occurs when a string is attached to the state instead of an exception or if the state's data is null.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class CrashedRun(PrefectException):\n\"\"\"\n    Raised when the result from a crashed run is retrieved.\n\n    This occurs when a string is attached to the state instead of an exception or if\n    the state's data is null.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ExternalSignal","title":"ExternalSignal","text":"

Bases: BaseException

Base type for external signal-like exceptions that should never be caught by users.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class ExternalSignal(BaseException):\n\"\"\"\n    Base type for external signal-like exceptions that should never be caught by users.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FailedRun","title":"FailedRun","text":"

Bases: PrefectException

Raised when the result from a failed run is retrieved and an exception is not attached.

This occurs when a string is attached to the state instead of an exception or if the state's data is null.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class FailedRun(PrefectException):\n\"\"\"\n    Raised when the result from a failed run is retrieved and an exception is not\n    attached.\n\n    This occurs when a string is attached to the state instead of an exception or if\n    the state's data is null.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FlowPauseTimeout","title":"FlowPauseTimeout","text":"

Bases: PrefectException

Raised when a flow pause times out

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class FlowPauseTimeout(PrefectException):\n\"\"\"Raised when a flow pause times out\"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FlowScriptError","title":"FlowScriptError","text":"

Bases: PrefectException

Raised when a script errors during evaluation while attempting to load a flow.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class FlowScriptError(PrefectException):\n\"\"\"\n    Raised when a script errors during evaluation while attempting to load a flow.\n    \"\"\"\n\n    def __init__(\n        self,\n        user_exc: Exception,\n        script_path: str,\n    ) -> None:\n        message = f\"Flow script at {script_path!r} encountered an exception\"\n        super().__init__(message)\n\n        self.user_exc = user_exc\n\n    def rich_user_traceback(self, **kwargs):\n        trace = Traceback.extract(\n            type(self.user_exc),\n            self.user_exc,\n            self.user_exc.__traceback__.tb_next.tb_next.tb_next.tb_next,\n        )\n        return Traceback(trace, **kwargs)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InfrastructureError","title":"InfrastructureError","text":"

Bases: PrefectException

A base class for exceptions related to infrastructure blocks

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class InfrastructureError(PrefectException):\n\"\"\"\n    A base class for exceptions related to infrastructure blocks\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InfrastructureNotAvailable","title":"InfrastructureNotAvailable","text":"

Bases: PrefectException

Raised when infrastructure is not accessible from the current machine. For example, if a process was spawned on another machine it cannot be managed.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class InfrastructureNotAvailable(PrefectException):\n\"\"\"\n    Raised when infrastructure is not accessible from the current machine. For example,\n    if a process was spawned on another machine it cannot be managed.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InfrastructureNotFound","title":"InfrastructureNotFound","text":"

Bases: PrefectException

Raised when infrastructure is missing, likely because it has exited or been deleted.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class InfrastructureNotFound(PrefectException):\n\"\"\"\n    Raised when infrastructure is missing, likely because it has exited or been\n    deleted.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InvalidNameError","title":"InvalidNameError","text":"

Bases: PrefectException, ValueError

Raised when a name contains characters that are not permitted.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class InvalidNameError(PrefectException, ValueError):\n\"\"\"\n    Raised when a name contains characters that are not permitted.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InvalidRepositoryURLError","title":"InvalidRepositoryURLError","text":"

Bases: PrefectException

Raised when an incorrect URL is provided to a GitHub filesystem block.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class InvalidRepositoryURLError(PrefectException):\n\"\"\"Raised when an incorrect URL is provided to a GitHub filesystem block.\"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MappingLengthMismatch","title":"MappingLengthMismatch","text":"

Bases: PrefectException

Raised when attempting to call Task.map with arguments of different lengths.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class MappingLengthMismatch(PrefectException):\n\"\"\"\n    Raised when attempting to call Task.map with arguments of different lengths.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MappingMissingIterable","title":"MappingMissingIterable","text":"

Bases: PrefectException

Raised when attempting to call Task.map with all static arguments

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class MappingMissingIterable(PrefectException):\n\"\"\"\n    Raised when attempting to call Task.map with all static arguments\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingContextError","title":"MissingContextError","text":"

Bases: PrefectException, RuntimeError

Raised when a method is called that requires a task or flow run context to be active but one cannot be found.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class MissingContextError(PrefectException, RuntimeError):\n\"\"\"\n    Raised when a method is called that requires a task or flow run context to be\n    active but one cannot be found.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingFlowError","title":"MissingFlowError","text":"

Bases: PrefectException

Raised when a given flow name is not found in the expected script.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class MissingFlowError(PrefectException):\n\"\"\"\n    Raised when a given flow name is not found in the expected script.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingProfileError","title":"MissingProfileError","text":"

Bases: PrefectException, ValueError

Raised when a profile name does not exist.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class MissingProfileError(PrefectException, ValueError):\n\"\"\"\n    Raised when a profile name does not exist.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingResult","title":"MissingResult","text":"

Bases: PrefectException

Raised when a result is missing from a state; often when result persistence is disabled and the state is retrieved from the API.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class MissingResult(PrefectException):\n\"\"\"\n    Raised when a result is missing from a state; often when result persistence is\n    disabled and the state is retrieved from the API.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.NotPausedError","title":"NotPausedError","text":"

Bases: PrefectException

Raised when attempting to unpause a run that isn't paused.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class NotPausedError(PrefectException):\n\"\"\"Raised when attempting to unpause a run that isn't paused.\"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ObjectAlreadyExists","title":"ObjectAlreadyExists","text":"

Bases: PrefectException

Raised when the client receives a 409 (conflict) from the API.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class ObjectAlreadyExists(PrefectException):\n\"\"\"\n    Raised when the client receives a 409 (conflict) from the API.\n    \"\"\"\n\n    def __init__(self, http_exc: Exception, *args, **kwargs):\n        self.http_exc = http_exc\n        super().__init__(*args, **kwargs)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ObjectNotFound","title":"ObjectNotFound","text":"

Bases: PrefectException

Raised when the client receives a 404 (not found) from the API.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class ObjectNotFound(PrefectException):\n\"\"\"\n    Raised when the client receives a 404 (not found) from the API.\n    \"\"\"\n\n    def __init__(self, http_exc: Exception, *args, **kwargs):\n        self.http_exc = http_exc\n        super().__init__(*args, **kwargs)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ParameterBindError","title":"ParameterBindError","text":"

Bases: TypeError, PrefectException

Raised when args and kwargs cannot be converted to parameters.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class ParameterBindError(TypeError, PrefectException):\n\"\"\"\n    Raised when args and kwargs cannot be converted to parameters.\n    \"\"\"\n\n    def __init__(self, msg: str):\n        super().__init__(msg)\n\n    @classmethod\n    def from_bind_failure(\n        cls, fn: Callable, exc: TypeError, call_args: List, call_kwargs: Dict\n    ) -> Self:\n        fn_signature = str(inspect.signature(fn)).strip(\"()\")\n\n        base = f\"Error binding parameters for function '{fn.__name__}': {exc}\"\n        signature = f\"Function '{fn.__name__}' has signature '{fn_signature}'\"\n        received = f\"received args: {call_args} and kwargs: {list(call_kwargs.keys())}\"\n        msg = f\"{base}.\\n{signature} but {received}.\"\n        return cls(msg)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ParameterTypeError","title":"ParameterTypeError","text":"

Bases: PrefectException

Raised when a parameter does not pass Pydantic type validation.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class ParameterTypeError(PrefectException):\n\"\"\"\n    Raised when a parameter does not pass Pydantic type validation.\n    \"\"\"\n\n    def __init__(self, msg: str):\n        super().__init__(msg)\n\n    @classmethod\n    def from_validation_error(cls, exc: pydantic.ValidationError) -> Self:\n        bad_params = [f'{err[\"loc\"][0]}: {err[\"msg\"]}' for err in exc.errors()]\n        msg = \"Flow run received invalid parameters:\\n - \" + \"\\n - \".join(bad_params)\n        return cls(msg)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.Pause","title":"Pause","text":"

Bases: PrefectSignal

Raised when a flow run is PAUSED and needs to exit for resubmission.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class Pause(PrefectSignal):\n\"\"\"\n    Raised when a flow run is PAUSED and needs to exit for resubmission.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PausedRun","title":"PausedRun","text":"

Bases: PrefectException

Raised when the result from a paused run is retrieved.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class PausedRun(PrefectException):\n\"\"\"\n    Raised when the result from a paused run is retrieved.\n    \"\"\"\n\n    def __init__(self, *args, state=None, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.state = state\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectException","title":"PrefectException","text":"

Bases: Exception

Base exception type for Prefect errors.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class PrefectException(Exception):\n\"\"\"\n    Base exception type for Prefect errors.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectHTTPStatusError","title":"PrefectHTTPStatusError","text":"

Bases: HTTPStatusError

Raised when client receives a Response that contains an HTTPStatusError.

Used to include API error details in the error messages that the client provides to users.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class PrefectHTTPStatusError(HTTPStatusError):\n\"\"\"\n    Raised when client receives a `Response` that contains an HTTPStatusError.\n\n    Used to include API error details in the error messages that the client provides users.\n    \"\"\"\n\n    @classmethod\n    def from_httpx_error(cls: Type[Self], httpx_error: HTTPStatusError) -> Self:\n\"\"\"\n        Generate a `PrefectHTTPStatusError` from an `httpx.HTTPStatusError`.\n        \"\"\"\n        try:\n            details = httpx_error.response.json()\n        except Exception:\n            details = None\n\n        error_message, *more_info = str(httpx_error).split(\"\\n\")\n\n        if details:\n            message_components = [error_message, f\"Response: {details}\", *more_info]\n        else:\n            message_components = [error_message, *more_info]\n\n        new_message = \"\\n\".join(message_components)\n\n        return cls(\n            new_message, request=httpx_error.request, response=httpx_error.response\n        )\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectHTTPStatusError.from_httpx_error","title":"from_httpx_error classmethod","text":"

Generate a PrefectHTTPStatusError from an httpx.HTTPStatusError.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
@classmethod\ndef from_httpx_error(cls: Type[Self], httpx_error: HTTPStatusError) -> Self:\n\"\"\"\n    Generate a `PrefectHTTPStatusError` from an `httpx.HTTPStatusError`.\n    \"\"\"\n    try:\n        details = httpx_error.response.json()\n    except Exception:\n        details = None\n\n    error_message, *more_info = str(httpx_error).split(\"\\n\")\n\n    if details:\n        message_components = [error_message, f\"Response: {details}\", *more_info]\n    else:\n        message_components = [error_message, *more_info]\n\n    new_message = \"\\n\".join(message_components)\n\n    return cls(\n        new_message, request=httpx_error.request, response=httpx_error.response\n    )\n
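For illustration only (a hedged sketch, assuming an httpx request that fails raise_for_status), the classmethod can wrap the original error so the API response details appear in the message:

import httpx\nfrom prefect.exceptions import PrefectHTTPStatusError\n\ntry:\n    httpx.get(\"https://api.github.com/repos/does-not/exist\").raise_for_status()\nexcept httpx.HTTPStatusError as exc:\n    # Re-raise with the response details folded into the message\n    raise PrefectHTTPStatusError.from_httpx_error(exc)\n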
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectSignal","title":"PrefectSignal","text":"

Bases: BaseException

Base type for signal-like exceptions that should never be caught by users.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class PrefectSignal(BaseException):\n\"\"\"\n    Base type for signal-like exceptions that should never be caught by users.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ProtectedBlockError","title":"ProtectedBlockError","text":"

Bases: PrefectException

Raised when an operation is prevented due to block protection.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class ProtectedBlockError(PrefectException):\n\"\"\"\n    Raised when an operation is prevented due to block protection.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ReservedArgumentError","title":"ReservedArgumentError","text":"

Bases: PrefectException, TypeError

Raised when a function used with Prefect has an argument with a name that is reserved for a Prefect feature

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class ReservedArgumentError(PrefectException, TypeError):\n\"\"\"\n    Raised when a function used with Prefect has an argument with a name that is\n    reserved for a Prefect feature\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ScriptError","title":"ScriptError","text":"

Bases: PrefectException

Raised when a script errors during evaluation while attempting to load data

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class ScriptError(PrefectException):\n\"\"\"\n    Raised when a script errors during evaluation while attempting to load data\n    \"\"\"\n\n    def __init__(\n        self,\n        user_exc: Exception,\n        path: str,\n    ) -> None:\n        message = f\"Script at {str(path)!r} encountered an exception: {user_exc!r}\"\n        super().__init__(message)\n        self.user_exc = user_exc\n\n        # Strip script run information from the traceback\n        self.user_exc.__traceback__ = _trim_traceback(\n            self.user_exc.__traceback__,\n            remove_modules=[prefect.utilities.importtools],\n        )\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.SignatureMismatchError","title":"SignatureMismatchError","text":"

Bases: PrefectException, TypeError

Raised when parameters passed to a function do not match its signature.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class SignatureMismatchError(PrefectException, TypeError):\n\"\"\"Raised when parameters passed to a function do not match its signature.\"\"\"\n\n    def __init__(self, msg: str):\n        super().__init__(msg)\n\n    @classmethod\n    def from_bad_params(cls, expected_params: List[str], provided_params: List[str]):\n        msg = (\n            f\"Function expects parameters {expected_params} but was provided with\"\n            f\" parameters {provided_params}\"\n        )\n        return cls(msg)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.TerminationSignal","title":"TerminationSignal","text":"

Bases: ExternalSignal

Raised when a flow run receives a termination signal.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class TerminationSignal(ExternalSignal):\n\"\"\"\n    Raised when a flow run receives a termination signal.\n    \"\"\"\n\n    def __init__(self, signal: int):\n        self.signal = signal\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.UnfinishedRun","title":"UnfinishedRun","text":"

Bases: PrefectException

Raised when the result from a run that is not finished is retrieved.

For example, if a run is in a SCHEDULED, PENDING, CANCELLING, or RUNNING state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class UnfinishedRun(PrefectException):\n\"\"\"\n    Raised when the result from a run that is not finished is retrieved.\n\n    For example, if a run is in a SCHEDULED, PENDING, CANCELLING, or RUNNING state.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.UnspecifiedFlowError","title":"UnspecifiedFlowError","text":"

Bases: PrefectException

Raised when multiple flows are found in the expected script and no name is given.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class UnspecifiedFlowError(PrefectException):\n\"\"\"\n    Raised when multiple flows are found in the expected script and no name is given.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.UpstreamTaskError","title":"UpstreamTaskError","text":"

Bases: PrefectException

Raised when a task relies on the result of another task but that task is not 'COMPLETE'

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class UpstreamTaskError(PrefectException):\n\"\"\"\n    Raised when a task relies on the result of another task but that task is not\n    'COMPLETE'\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.exception_traceback","title":"exception_traceback","text":"

Convert an exception to a printable string with a traceback

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
def exception_traceback(exc: Exception) -> str:\n\"\"\"\n    Convert an exception to a printable string with a traceback\n    \"\"\"\n    tb = traceback.TracebackException.from_exception(exc)\n    return \"\".join(list(tb.format()))\n
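A quick illustrative sketch (not part of the source listing) of formatting a caught exception:

from prefect.exceptions import exception_traceback\n\ntry:\n    1 / 0\nexcept ZeroDivisionError as exc:\n    print(exception_traceback(exc))  # full traceback rendered as a string\n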
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/filesystems/","title":"prefect.filesystems","text":"","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems","title":"prefect.filesystems","text":"","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.Azure","title":"Azure","text":"

Bases: WritableFileSystem, WritableDeploymentStorage

Store data as a file on Azure Datalake and Azure Blob Storage.

Example

Load stored Azure config:

from prefect.filesystems import Azure\n\naz_block = Azure.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
class Azure(WritableFileSystem, WritableDeploymentStorage):\n\"\"\"\n    Store data as a file on Azure Datalake and Azure Blob Storage.\n\n    Example:\n        Load stored Azure config:\n        ```python\n        from prefect.filesystems import Azure\n\n        az_block = Azure.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Azure\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/6AiQ6HRIft8TspZH7AfyZg/39fd82bdbb186db85560f688746c8cdd/azure.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#azure\"\n\n    bucket_path: str = Field(\n        default=...,\n        description=\"An Azure storage bucket path.\",\n        example=\"my-bucket/a-directory-within\",\n    )\n    azure_storage_connection_string: Optional[SecretStr] = Field(\n        default=None,\n        title=\"Azure storage connection string\",\n        description=(\n            \"Equivalent to the AZURE_STORAGE_CONNECTION_STRING environment variable.\"\n        ),\n    )\n    azure_storage_account_name: Optional[SecretStr] = Field(\n        default=None,\n        title=\"Azure storage account name\",\n        description=(\n            \"Equivalent to the AZURE_STORAGE_ACCOUNT_NAME environment variable.\"\n        ),\n    )\n    azure_storage_account_key: Optional[SecretStr] = Field(\n        default=None,\n        title=\"Azure storage account key\",\n        description=\"Equivalent to the AZURE_STORAGE_ACCOUNT_KEY environment variable.\",\n    )\n    azure_storage_tenant_id: Optional[SecretStr] = Field(\n        None,\n        title=\"Azure storage tenant ID\",\n        description=\"Equivalent to the AZURE_TENANT_ID environment variable.\",\n    )\n    azure_storage_client_id: Optional[SecretStr] = Field(\n        None,\n        title=\"Azure storage client ID\",\n        description=\"Equivalent to the AZURE_CLIENT_ID environment variable.\",\n    )\n    azure_storage_client_secret: Optional[SecretStr] = Field(\n        None,\n        title=\"Azure storage client secret\",\n        description=\"Equivalent to the AZURE_CLIENT_SECRET environment variable.\",\n    )\n    azure_storage_anon: bool = Field(\n        default=True,\n        title=\"Azure storage anonymous connection\",\n        description=(\n            \"Set the 'anon' flag for ADLFS. 
This should be False for systems that\"\n            \" require ADLFS to use DefaultAzureCredentials.\"\n        ),\n    )\n\n    _remote_file_system: RemoteFileSystem = None\n\n    @property\n    def basepath(self) -> str:\n        return f\"az://{self.bucket_path}\"\n\n    @property\n    def filesystem(self) -> RemoteFileSystem:\n        settings = {}\n        if self.azure_storage_connection_string:\n            settings[\"connection_string\"] = (\n                self.azure_storage_connection_string.get_secret_value()\n            )\n        if self.azure_storage_account_name:\n            settings[\"account_name\"] = (\n                self.azure_storage_account_name.get_secret_value()\n            )\n        if self.azure_storage_account_key:\n            settings[\"account_key\"] = self.azure_storage_account_key.get_secret_value()\n        if self.azure_storage_tenant_id:\n            settings[\"tenant_id\"] = self.azure_storage_tenant_id.get_secret_value()\n        if self.azure_storage_client_id:\n            settings[\"client_id\"] = self.azure_storage_client_id.get_secret_value()\n        if self.azure_storage_client_secret:\n            settings[\"client_secret\"] = (\n                self.azure_storage_client_secret.get_secret_value()\n            )\n        settings[\"anon\"] = self.azure_storage_anon\n        self._remote_file_system = RemoteFileSystem(\n            basepath=f\"az://{self.bucket_path}\", settings=settings\n        )\n        return self._remote_file_system\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> bytes:\n\"\"\"\n        Downloads a directory from a given remote path to a local direcotry.\n\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        return await self.filesystem.get_directory(\n            from_path=from_path, local_path=local_path\n        )\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n\"\"\"\n        Uploads a directory from a given local path to a remote directory.\n\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        return await self.filesystem.put_directory(\n            local_path=local_path, to_path=to_path, ignore_file=ignore_file\n        )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        return await self.filesystem.read_path(path)\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        return await self.filesystem.write_path(path=path, content=content)\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.Azure.get_directory","title":"get_directory async","text":"

Downloads a directory from a given remote path to a local directory.

Defaults to downloading the entire contents of the block's basepath to the current working directory.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n\"\"\"\n    Downloads a directory from a given remote path to a local directory.\n\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    return await self.filesystem.get_directory(\n        from_path=from_path, local_path=local_path\n    )\n
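A hedged usage sketch; the block name and paths below are placeholders for values saved in your workspace:

from prefect.filesystems import Azure\n\naz_block = Azure.load(\"BLOCK_NAME\")  # placeholder block name\n# Sync-compatible call: downloads the \"data\" prefix into ./downloaded\naz_block.get_directory(from_path=\"data\", local_path=\"./downloaded\")\n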
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.Azure.put_directory","title":"put_directory async","text":"

Uploads a directory from a given local path to a remote directory.

Defaults to uploading the entire contents of the current working directory to the block's basepath.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n\"\"\"\n    Uploads a directory from a given local path to a remote directory.\n\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    return await self.filesystem.put_directory(\n        local_path=local_path, to_path=to_path, ignore_file=ignore_file\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GCS","title":"GCS","text":"

Bases: WritableFileSystem, WritableDeploymentStorage

Store data as a file on Google Cloud Storage.

Example

Load stored GCS config:

from prefect.filesystems import GCS\n\ngcs_block = GCS.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
class GCS(WritableFileSystem, WritableDeploymentStorage):\n\"\"\"\n    Store data as a file on Google Cloud Storage.\n\n    Example:\n        Load stored GCS config:\n        ```python\n        from prefect.filesystems import GCS\n\n        gcs_block = GCS.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/4CD4wwbiIKPkZDt4U3TEuW/c112fe85653da054b6d5334ef662bec4/gcp.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#gcs\"\n\n    bucket_path: str = Field(\n        default=...,\n        description=\"A GCS bucket path.\",\n        example=\"my-bucket/a-directory-within\",\n    )\n    service_account_info: Optional[SecretStr] = Field(\n        default=None,\n        description=\"The contents of a service account keyfile as a JSON string.\",\n    )\n    project: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The project the GCS bucket resides in. If not provided, the project will\"\n            \" be inferred from the credentials or environment.\"\n        ),\n    )\n\n    @property\n    def basepath(self) -> str:\n        return f\"gcs://{self.bucket_path}\"\n\n    @property\n    def filesystem(self) -> RemoteFileSystem:\n        settings = {}\n        if self.service_account_info:\n            try:\n                settings[\"token\"] = json.loads(\n                    self.service_account_info.get_secret_value()\n                )\n            except json.JSONDecodeError:\n                raise ValueError(\n                    \"Unable to load provided service_account_info. Please make sure\"\n                    \" that the provided value is a valid JSON string.\"\n                )\n        remote_file_system = RemoteFileSystem(\n            basepath=f\"gcs://{self.bucket_path}\", settings=settings\n        )\n        return remote_file_system\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> bytes:\n\"\"\"\n        Downloads a directory from a given remote path to a local directory.\n\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        return await self.filesystem.get_directory(\n            from_path=from_path, local_path=local_path\n        )\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n\"\"\"\n        Uploads a directory from a given local path to a remote directory.\n\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        return await self.filesystem.put_directory(\n            local_path=local_path, to_path=to_path, ignore_file=ignore_file\n        )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        return await self.filesystem.read_path(path)\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        return await self.filesystem.write_path(path=path, content=content)\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GCS.get_directory","title":"get_directory async","text":"

Downloads a directory from a given remote path to a local directory.

Defaults to downloading the entire contents of the block's basepath to the current working directory.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n\"\"\"\n    Downloads a directory from a given remote path to a local directory.\n\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    return await self.filesystem.get_directory(\n        from_path=from_path, local_path=local_path\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GCS.put_directory","title":"put_directory async","text":"

Uploads a directory from a given local path to a remote directory.

Defaults to uploading the entire contents of the current working directory to the block's basepath.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n\"\"\"\n    Uploads a directory from a given local path to a remote directory.\n\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    return await self.filesystem.put_directory(\n        local_path=local_path, to_path=to_path, ignore_file=ignore_file\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GitHub","title":"GitHub","text":"

Bases: ReadableDeploymentStorage

Interact with files stored on GitHub repositories.

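Example (following the pattern of the other filesystem blocks; the block name is a placeholder). Load a stored GitHub block:

from prefect.filesystems import GitHub\n\ngithub_block = GitHub.load(\"BLOCK_NAME\")\n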
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
class GitHub(ReadableDeploymentStorage):\n\"\"\"\n    Interact with files stored on GitHub repositories.\n    \"\"\"\n\n    _block_type_name = \"GitHub\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/187oCWsD18m5yooahq1vU0/ace41e99ab6dc40c53e5584365a33821/github.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#github\"\n\n    repository: str = Field(\n        default=...,\n        description=(\n            \"The URL of a GitHub repository to read from, in either HTTPS or SSH\"\n            \" format.\"\n        ),\n    )\n    reference: Optional[str] = Field(\n        default=None,\n        description=\"An optional reference to pin to; can be a branch name or tag.\",\n    )\n    access_token: Optional[SecretStr] = Field(\n        name=\"Personal Access Token\",\n        default=None,\n        description=(\n            \"A GitHub Personal Access Token (PAT) with repo scope.\"\n            \" To use a fine-grained PAT, provide '{username}:{PAT}' as the value.\"\n        ),\n    )\n    include_git_objects: bool = Field(\n        default=True,\n        description=(\n            \"Whether to include git objects when copying the repo contents to a\"\n            \" directory.\"\n        ),\n    )\n\n    @validator(\"access_token\")\n    def _ensure_credentials_go_with_https(cls, v: str, values: dict) -> str:\n\"\"\"Ensure that credentials are not provided with 'SSH' formatted GitHub URLs.\n\n        Note: validates `access_token` specifically so that it only fires when\n        private repositories are used.\n        \"\"\"\n        if v is not None:\n            if urllib.parse.urlparse(values[\"repository\"]).scheme != \"https\":\n                raise InvalidRepositoryURLError(\n                    \"Crendentials can only be used with GitHub repositories \"\n                    \"using the 'HTTPS' format. You must either remove the \"\n                    \"credential if you wish to use the 'SSH' format and are not \"\n                    \"using a private repository, or you must change the repository \"\n                    \"URL to the 'HTTPS' format. 
\"\n                )\n\n        return v\n\n    def _create_repo_url(self) -> str:\n\"\"\"Format the URL provided to the `git clone` command.\n\n        For private repos: https://<oauth-key>@github.com/<username>/<repo>.git\n        All other repos should be the same as `self.repository`.\n        \"\"\"\n        url_components = urllib.parse.urlparse(self.repository)\n        if url_components.scheme == \"https\" and self.access_token is not None:\n            updated_components = url_components._replace(\n                netloc=f\"{self.access_token.get_secret_value()}@{url_components.netloc}\"\n            )\n            full_url = urllib.parse.urlunparse(updated_components)\n        else:\n            full_url = self.repository\n\n        return full_url\n\n    @staticmethod\n    def _get_paths(\n        dst_dir: Union[str, None], src_dir: str, sub_directory: str\n    ) -> Tuple[str, str]:\n\"\"\"Returns the fully formed paths for GitHubRepository contents in the form\n        (content_source, content_destination).\n        \"\"\"\n        if dst_dir is None:\n            content_destination = Path(\".\").absolute()\n        else:\n            content_destination = Path(dst_dir)\n\n        content_source = Path(src_dir)\n\n        if sub_directory:\n            content_destination = content_destination.joinpath(sub_directory)\n            content_source = content_source.joinpath(sub_directory)\n\n        return str(content_source), str(content_destination)\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> None:\n\"\"\"\n        Clones a GitHub project specified in `from_path` to the provided `local_path`;\n        defaults to cloning the repository reference configured on the Block to the\n        present working directory.\n\n        Args:\n            from_path: If provided, interpreted as a subdirectory of the underlying\n                repository that will be copied to the provided local path.\n            local_path: A local path to clone to; defaults to present working directory.\n        \"\"\"\n        # CONSTRUCT COMMAND\n        cmd = [\"git\", \"clone\", self._create_repo_url()]\n        if self.reference:\n            cmd += [\"-b\", self.reference]\n\n        # Limit git history\n        cmd += [\"--depth\", \"1\"]\n\n        # Clone to a temporary directory and move the subdirectory over\n        with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n            cmd.append(tmp_dir)\n\n            err_stream = io.StringIO()\n            out_stream = io.StringIO()\n            process = await run_process(cmd, stream_output=(out_stream, err_stream))\n            if process.returncode != 0:\n                err_stream.seek(0)\n                raise OSError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n            content_source, content_destination = self._get_paths(\n                dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n            )\n\n            ignore_func = None\n            if not self.include_git_objects:\n                ignore_func = ignore_patterns(\".git\")\n\n            copytree(\n                src=content_source,\n                dst=content_destination,\n                dirs_exist_ok=True,\n                ignore=ignore_func,\n            )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GitHub.get_directory","title":"get_directory async","text":"

Clones a GitHub project specified in from_path to the provided local_path; defaults to cloning the repository reference configured on the Block to the present working directory.

Parameters:

Name Type Description Default from_path Optional[str]

If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path.

None local_path Optional[str]

A local path to clone to; defaults to present working directory.

None Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n\"\"\"\n    Clones a GitHub project specified in `from_path` to the provided `local_path`;\n    defaults to cloning the repository reference configured on the Block to the\n    present working directory.\n\n    Args:\n        from_path: If provided, interpreted as a subdirectory of the underlying\n            repository that will be copied to the provided local path.\n        local_path: A local path to clone to; defaults to present working directory.\n    \"\"\"\n    # CONSTRUCT COMMAND\n    cmd = [\"git\", \"clone\", self._create_repo_url()]\n    if self.reference:\n        cmd += [\"-b\", self.reference]\n\n    # Limit git history\n    cmd += [\"--depth\", \"1\"]\n\n    # Clone to a temporary directory and move the subdirectory over\n    with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n        cmd.append(tmp_dir)\n\n        err_stream = io.StringIO()\n        out_stream = io.StringIO()\n        process = await run_process(cmd, stream_output=(out_stream, err_stream))\n        if process.returncode != 0:\n            err_stream.seek(0)\n            raise OSError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n        content_source, content_destination = self._get_paths(\n            dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n        )\n\n        ignore_func = None\n        if not self.include_git_objects:\n            ignore_func = ignore_patterns(\".git\")\n\n        copytree(\n            src=content_source,\n            dst=content_destination,\n            dirs_exist_ok=True,\n            ignore=ignore_func,\n        )\n
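A hedged usage sketch, assuming a public repository so no access token is required; the destination path is a placeholder:

from prefect.filesystems import GitHub\n\nrepo = GitHub(repository=\"https://github.com/PrefectHQ/prefect\")\n# Clones the repo shallowly and copies only the \"docs\" subdirectory locally\nrepo.get_directory(from_path=\"docs\", local_path=\"./prefect-docs\")\n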
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.LocalFileSystem","title":"LocalFileSystem","text":"

Bases: WritableFileSystem, WritableDeploymentStorage

Store data as a file on a local file system.

Example

Load stored local file system config:

from prefect.filesystems import LocalFileSystem\n\nlocal_file_system_block = LocalFileSystem.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
class LocalFileSystem(WritableFileSystem, WritableDeploymentStorage):\n\"\"\"\n    Store data as a file on a local file system.\n\n    Example:\n        Load stored local file system config:\n        ```python\n        from prefect.filesystems import LocalFileSystem\n\n        local_file_system_block = LocalFileSystem.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Local File System\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/EVKjxM7fNyi4NGUSkeTEE/95c958c5dd5a56c59ea5033e919c1a63/image1.png?h=250\"\n    _documentation_url = (\n        \"https://docs.prefect.io/concepts/filesystems/#local-filesystem\"\n    )\n\n    basepath: Optional[str] = Field(\n        default=None, description=\"Default local path for this block to write to.\"\n    )\n\n    @validator(\"basepath\", pre=True)\n    def cast_pathlib(cls, value):\n        if isinstance(value, Path):\n            return str(value)\n        return value\n\n    def _resolve_path(self, path: str) -> Path:\n        # Only resolve the base path at runtime, default to the current directory\n        basepath = (\n            Path(self.basepath).expanduser().resolve()\n            if self.basepath\n            else Path(\".\").resolve()\n        )\n\n        # Determine the path to access relative to the base path, ensuring that paths\n        # outside of the base path are off limits\n        if path is None:\n            return basepath\n\n        path: Path = Path(path).expanduser()\n\n        if not path.is_absolute():\n            path = basepath / path\n        else:\n            path = path.resolve()\n            if basepath not in path.parents and (basepath != path):\n                raise ValueError(\n                    f\"Provided path {path} is outside of the base path {basepath}.\"\n                )\n\n        return path\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: str = None, local_path: str = None\n    ) -> None:\n\"\"\"\n        Copies a directory from one place to another on the local filesystem.\n\n        Defaults to copying the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        if not from_path:\n            from_path = Path(self.basepath).expanduser().resolve()\n        else:\n            from_path = self._resolve_path(from_path)\n\n        if not local_path:\n            local_path = Path(\".\").resolve()\n        else:\n            local_path = Path(local_path).resolve()\n\n        if from_path == local_path:\n            # If the paths are the same there is no need to copy\n            # and we avoid shutil.copytree raising an error\n            return\n\n        # .prefectignore exists in the original location, not the current location which\n        # is most likely temporary\n        if (from_path / Path(\".prefectignore\")).exists():\n            ignore_func = await self._get_ignore_func(\n                local_path=from_path, ignore_file=from_path / Path(\".prefectignore\")\n            )\n        else:\n            ignore_func = None\n\n        copytree(from_path, local_path, dirs_exist_ok=True, ignore=ignore_func)\n\n    async def _get_ignore_func(self, local_path: str, ignore_file: str):\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n        included_files = filter_files(root=local_path, ignore_patterns=ignore_patterns)\n\n        def ignore_func(directory, files):\n            relative_path = Path(directory).relative_to(local_path)\n\n           
 files_to_ignore = [\n                f for f in files if str(relative_path / f) not in included_files\n            ]\n            return files_to_ignore\n\n        return ignore_func\n\n    @sync_compatible\n    async def put_directory(\n        self, local_path: str = None, to_path: str = None, ignore_file: str = None\n    ) -> None:\n\"\"\"\n        Copies a directory from one place to another on the local filesystem.\n\n        Defaults to copying the entire contents of the current working directory to the block's basepath.\n        An `ignore_file` path may be provided that can include gitignore style expressions for filepaths to ignore.\n        \"\"\"\n        destination_path = self._resolve_path(to_path)\n\n        if not local_path:\n            local_path = Path(\".\").absolute()\n\n        if ignore_file:\n            ignore_func = await self._get_ignore_func(\n                local_path=local_path, ignore_file=ignore_file\n            )\n        else:\n            ignore_func = None\n\n        if local_path == destination_path:\n            pass\n        else:\n            copytree(\n                src=local_path,\n                dst=destination_path,\n                ignore=ignore_func,\n                dirs_exist_ok=True,\n            )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        path: Path = self._resolve_path(path)\n\n        # Check if the path exists\n        if not path.exists():\n            raise ValueError(f\"Path {path} does not exist.\")\n\n        # Validate that its a file\n        if not path.is_file():\n            raise ValueError(f\"Path {path} is not a file.\")\n\n        async with await anyio.open_file(str(path), mode=\"rb\") as f:\n            content = await f.read()\n\n        return content\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        path: Path = self._resolve_path(path)\n\n        # Construct the path if it does not exist\n        path.parent.mkdir(exist_ok=True, parents=True)\n\n        # Check if the file already exists\n        if path.exists() and not path.is_file():\n            raise ValueError(f\"Path {path} already exists and is not a file.\")\n\n        async with await anyio.open_file(path, mode=\"wb\") as f:\n            await f.write(content)\n        # Leave path stringify to the OS\n        return str(path)\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.LocalFileSystem.get_directory","title":"get_directory async","text":"

Copies a directory from one place to another on the local filesystem.

Defaults to copying the entire contents of the block's basepath to the current working directory.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: str = None, local_path: str = None\n) -> None:\n\"\"\"\n    Copies a directory from one place to another on the local filesystem.\n\n    Defaults to copying the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    if not from_path:\n        from_path = Path(self.basepath).expanduser().resolve()\n    else:\n        from_path = self._resolve_path(from_path)\n\n    if not local_path:\n        local_path = Path(\".\").resolve()\n    else:\n        local_path = Path(local_path).resolve()\n\n    if from_path == local_path:\n        # If the paths are the same there is no need to copy\n        # and we avoid shutil.copytree raising an error\n        return\n\n    # .prefectignore exists in the original location, not the current location which\n    # is most likely temporary\n    if (from_path / Path(\".prefectignore\")).exists():\n        ignore_func = await self._get_ignore_func(\n            local_path=from_path, ignore_file=from_path / Path(\".prefectignore\")\n        )\n    else:\n        ignore_func = None\n\n    copytree(from_path, local_path, dirs_exist_ok=True, ignore=ignore_func)\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.LocalFileSystem.put_directory","title":"put_directory async","text":"

Copies a directory from one place to another on the local filesystem.

Defaults to copying the entire contents of the current working directory to the block's basepath. An ignore_file path may be provided that can include gitignore style expressions for filepaths to ignore.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n    self, local_path: str = None, to_path: str = None, ignore_file: str = None\n) -> None:\n\"\"\"\n    Copies a directory from one place to another on the local filesystem.\n\n    Defaults to copying the entire contents of the current working directory to the block's basepath.\n    An `ignore_file` path may be provided that can include gitignore style expressions for filepaths to ignore.\n    \"\"\"\n    destination_path = self._resolve_path(to_path)\n\n    if not local_path:\n        local_path = Path(\".\").absolute()\n\n    if ignore_file:\n        ignore_func = await self._get_ignore_func(\n            local_path=local_path, ignore_file=ignore_file\n        )\n    else:\n        ignore_func = None\n\n    if local_path == destination_path:\n        pass\n    else:\n        copytree(\n            src=local_path,\n            dst=destination_path,\n            ignore=ignore_func,\n            dirs_exist_ok=True,\n        )\n
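A minimal usage sketch; the base path is a placeholder and the ignore file is assumed to exist in the current working directory:

from prefect.filesystems import LocalFileSystem\n\nfs = LocalFileSystem(basepath=\"/tmp/prefect-storage\")  # placeholder base path\n# Copies the current working directory into the basepath, honoring the ignore file\nfs.put_directory(local_path=\".\", ignore_file=\".prefectignore\")\n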
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.RemoteFileSystem","title":"RemoteFileSystem","text":"

Bases: WritableFileSystem, WritableDeploymentStorage

Store data as a file on a remote file system.

Supports any remote file system supported by fsspec. The file system is specified using a protocol. For example, \"s3://my-bucket/my-folder/\" will use S3.

Example

Load stored remote file system config:

from prefect.filesystems import RemoteFileSystem\n\nremote_file_system_block = RemoteFileSystem.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
class RemoteFileSystem(WritableFileSystem, WritableDeploymentStorage):\n\"\"\"\n    Store data as a file on a remote file system.\n\n    Supports any remote file system supported by `fsspec`. The file system is specified\n    using a protocol. For example, \"s3://my-bucket/my-folder/\" will use S3.\n\n    Example:\n        Load stored remote file system config:\n        ```python\n        from prefect.filesystems import RemoteFileSystem\n\n        remote_file_system_block = RemoteFileSystem.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Remote File System\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/4CxjycqILlT9S9YchI7o1q/ee62e2089dfceb19072245c62f0c69d2/image12.png?h=250\"\n    _documentation_url = (\n        \"https://docs.prefect.io/concepts/filesystems/#remote-file-system\"\n    )\n\n    basepath: str = Field(\n        default=...,\n        description=\"Default path for this block to write to.\",\n        example=\"s3://my-bucket/my-folder/\",\n    )\n    settings: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Additional settings to pass through to fsspec.\",\n    )\n\n    # Cache for the configured fsspec file system used for access\n    _filesystem: fsspec.AbstractFileSystem = None\n\n    @validator(\"basepath\")\n    def check_basepath(cls, value):\n        scheme, netloc, _, _, _ = urllib.parse.urlsplit(value)\n\n        if not scheme:\n            raise ValueError(f\"Base path must start with a scheme. Got {value!r}.\")\n\n        if not netloc:\n            raise ValueError(\n                f\"Base path must include a location after the scheme. Got {value!r}.\"\n            )\n\n        if scheme == \"file\":\n            raise ValueError(\n                \"Base path scheme cannot be 'file'. 
Use `LocalFileSystem` instead for\"\n                \" local file access.\"\n            )\n\n        return value\n\n    def _resolve_path(self, path: str) -> str:\n        base_scheme, base_netloc, base_urlpath, _, _ = urllib.parse.urlsplit(\n            self.basepath\n        )\n        scheme, netloc, urlpath, _, _ = urllib.parse.urlsplit(path)\n\n        # Confirm that absolute paths are valid\n        if scheme:\n            if scheme != base_scheme:\n                raise ValueError(\n                    f\"Path {path!r} with scheme {scheme!r} must use the same scheme as\"\n                    f\" the base path {base_scheme!r}.\"\n                )\n\n        if netloc:\n            if (netloc != base_netloc) or not urlpath.startswith(base_urlpath):\n                raise ValueError(\n                    f\"Path {path!r} is outside of the base path {self.basepath!r}.\"\n                )\n\n        return f\"{self.basepath.rstrip('/')}/{urlpath.lstrip('/')}\"\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> None:\n\"\"\"\n        Downloads a directory from a given remote path to a local direcotry.\n\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        if from_path is None:\n            from_path = str(self.basepath)\n        else:\n            from_path = self._resolve_path(from_path)\n\n        if local_path is None:\n            local_path = Path(\".\").absolute()\n\n        # validate that from_path has a trailing slash for proper fsspec behavior across versions\n        if not from_path.endswith(\"/\"):\n            from_path += \"/\"\n\n        return self.filesystem.get(from_path, local_path, recursive=True)\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n        overwrite: bool = True,\n    ) -> int:\n\"\"\"\n        Uploads a directory from a given local path to a remote direcotry.\n\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        if to_path is None:\n            to_path = str(self.basepath)\n        else:\n            to_path = self._resolve_path(to_path)\n\n        if local_path is None:\n            local_path = \".\"\n\n        included_files = None\n        if ignore_file:\n            with open(ignore_file, \"r\") as f:\n                ignore_patterns = f.readlines()\n\n            included_files = filter_files(\n                local_path, ignore_patterns, include_dirs=True\n            )\n\n        counter = 0\n        for f in Path(local_path).rglob(\"*\"):\n            relative_path = f.relative_to(local_path)\n            if included_files and str(relative_path) not in included_files:\n                continue\n\n            if to_path.endswith(\"/\"):\n                fpath = to_path + relative_path.as_posix()\n            else:\n                fpath = to_path + \"/\" + relative_path.as_posix()\n\n            if f.is_dir():\n                pass\n            else:\n                f = f.as_posix()\n                if overwrite:\n                    self.filesystem.put_file(f, fpath, overwrite=True)\n                else:\n                    self.filesystem.put_file(f, fpath)\n\n                counter += 1\n\n        return counter\n\n    
@sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        path = self._resolve_path(path)\n\n        with self.filesystem.open(path, \"rb\") as file:\n            content = await run_sync_in_worker_thread(file.read)\n\n        return content\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        path = self._resolve_path(path)\n        dirpath = path[: path.rindex(\"/\")]\n\n        self.filesystem.makedirs(dirpath, exist_ok=True)\n\n        with self.filesystem.open(path, \"wb\") as file:\n            await run_sync_in_worker_thread(file.write, content)\n        return path\n\n    @property\n    def filesystem(self) -> fsspec.AbstractFileSystem:\n        if not self._filesystem:\n            scheme, _, _, _, _ = urllib.parse.urlsplit(self.basepath)\n\n            try:\n                self._filesystem = fsspec.filesystem(scheme, **self.settings)\n            except ImportError as exc:\n                # The path is a remote file system that uses a lib that is not installed\n                raise RuntimeError(\n                    f\"File system created with scheme {scheme!r} from base path \"\n                    f\"{self.basepath!r} could not be created. \"\n                    \"You are likely missing a Python module required to use the given \"\n                    \"storage protocol.\"\n                ) from exc\n\n        return self._filesystem\n
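An illustrative configuration sketch (the bucket path and credential placeholders are assumptions; the settings dict is passed straight through to the underlying fsspec filesystem, so the accepted keys depend on the protocol):

from prefect.filesystems import RemoteFileSystem\n\nblock = RemoteFileSystem(\n    basepath=\"s3://my-bucket/my-folder/\",\n    settings={\"key\": \"<AWS_ACCESS_KEY_ID>\", \"secret\": \"<AWS_SECRET_ACCESS_KEY>\"},\n)\n\n# Persist the block so it can be retrieved later with RemoteFileSystem.load(\"my-remote-fs\")\nblock.save(\"my-remote-fs\", overwrite=True)\n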
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.RemoteFileSystem.get_directory","title":"get_directory async","text":"

Downloads a directory from a given remote path to a local directory.

Defaults to downloading the entire contents of the block's basepath to the current working directory.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n\"\"\"\n    Downloads a directory from a given remote path to a local direcotry.\n\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    if from_path is None:\n        from_path = str(self.basepath)\n    else:\n        from_path = self._resolve_path(from_path)\n\n    if local_path is None:\n        local_path = Path(\".\").absolute()\n\n    # validate that from_path has a trailing slash for proper fsspec behavior across versions\n    if not from_path.endswith(\"/\"):\n        from_path += \"/\"\n\n    return self.filesystem.get(from_path, local_path, recursive=True)\n
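A hedged example of pulling a remote folder into the working directory (the block name and paths are assumptions):

from prefect.filesystems import RemoteFileSystem\n\nblock = RemoteFileSystem.load(\"my-remote-fs\")  # assumes the block was saved previously\n\n# Download everything under <basepath>/configs into the current directory\nblock.get_directory(from_path=\"configs\", local_path=\".\")\n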
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.RemoteFileSystem.put_directory","title":"put_directory async","text":"

Uploads a directory from a given local path to a remote directory.

Defaults to uploading the entire contents of the current working directory to the block's basepath.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n    overwrite: bool = True,\n) -> int:\n\"\"\"\n    Uploads a directory from a given local path to a remote direcotry.\n\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    if to_path is None:\n        to_path = str(self.basepath)\n    else:\n        to_path = self._resolve_path(to_path)\n\n    if local_path is None:\n        local_path = \".\"\n\n    included_files = None\n    if ignore_file:\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n\n        included_files = filter_files(\n            local_path, ignore_patterns, include_dirs=True\n        )\n\n    counter = 0\n    for f in Path(local_path).rglob(\"*\"):\n        relative_path = f.relative_to(local_path)\n        if included_files and str(relative_path) not in included_files:\n            continue\n\n        if to_path.endswith(\"/\"):\n            fpath = to_path + relative_path.as_posix()\n        else:\n            fpath = to_path + \"/\" + relative_path.as_posix()\n\n        if f.is_dir():\n            pass\n        else:\n            f = f.as_posix()\n            if overwrite:\n                self.filesystem.put_file(f, fpath, overwrite=True)\n            else:\n                self.filesystem.put_file(f, fpath)\n\n            counter += 1\n\n    return counter\n
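An illustrative upload that returns the number of files written (block name, paths, and the presence of a .prefectignore file are assumptions):

from prefect.filesystems import RemoteFileSystem\n\nblock = RemoteFileSystem.load(\"my-remote-fs\")\n\n# Upload ./build to <basepath>/releases/v1, skipping paths matched by .prefectignore\nuploaded = block.put_directory(\n    local_path=\"./build\", to_path=\"releases/v1\", ignore_file=\".prefectignore\"\n)\nprint(f\"uploaded {uploaded} files\")\n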
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.S3","title":"S3","text":"

Bases: WritableFileSystem, WritableDeploymentStorage

Store data as a file on AWS S3.

Example

Load stored S3 config:

from prefect.filesystems import S3\n\ns3_block = S3.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
class S3(WritableFileSystem, WritableDeploymentStorage):\n\"\"\"\n    Store data as a file on AWS S3.\n\n    Example:\n        Load stored S3 config:\n        ```python\n        from prefect.filesystems import S3\n\n        s3_block = S3.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"S3\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/1jbV4lceHOjGgunX15lUwT/db88e184d727f721575aeb054a37e277/aws.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#s3\"\n\n    bucket_path: str = Field(\n        default=...,\n        description=\"An S3 bucket path.\",\n        example=\"my-bucket/a-directory-within\",\n    )\n    aws_access_key_id: Optional[SecretStr] = Field(\n        default=None,\n        title=\"AWS Access Key ID\",\n        description=\"Equivalent to the AWS_ACCESS_KEY_ID environment variable.\",\n        example=\"AKIAIOSFODNN7EXAMPLE\",\n    )\n    aws_secret_access_key: Optional[SecretStr] = Field(\n        default=None,\n        title=\"AWS Secret Access Key\",\n        description=\"Equivalent to the AWS_SECRET_ACCESS_KEY environment variable.\",\n        example=\"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\",\n    )\n\n    _remote_file_system: RemoteFileSystem = None\n\n    @property\n    def basepath(self) -> str:\n        return f\"s3://{self.bucket_path}\"\n\n    @property\n    def filesystem(self) -> RemoteFileSystem:\n        settings = {}\n        if self.aws_access_key_id:\n            settings[\"key\"] = self.aws_access_key_id.get_secret_value()\n        if self.aws_secret_access_key:\n            settings[\"secret\"] = self.aws_secret_access_key.get_secret_value()\n        self._remote_file_system = RemoteFileSystem(\n            basepath=f\"s3://{self.bucket_path}\", settings=settings\n        )\n        return self._remote_file_system\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> bytes:\n\"\"\"\n        Downloads a directory from a given remote path to a local directory.\n\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        return await self.filesystem.get_directory(\n            from_path=from_path, local_path=local_path\n        )\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n\"\"\"\n        Uploads a directory from a given local path to a remote directory.\n\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        return await self.filesystem.put_directory(\n            local_path=local_path, to_path=to_path, ignore_file=ignore_file\n        )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        return await self.filesystem.read_path(path)\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        return await self.filesystem.write_path(path=path, content=content)\n
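A short sketch of configuring an S3 block and using it as storage (the bucket path, block name, and credentials are placeholders taken from the field examples; s3fs must be installed for the underlying filesystem to work):

from prefect.filesystems import S3\n\ns3_block = S3(\n    bucket_path=\"my-bucket/a-directory-within\",\n    aws_access_key_id=\"AKIAIOSFODNN7EXAMPLE\",\n    aws_secret_access_key=\"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\",\n)\ns3_block.save(\"my-s3-storage\", overwrite=True)\n\n# Later, upload the current project into the bucket under flows/\nS3.load(\"my-s3-storage\").put_directory(local_path=\".\", to_path=\"flows\")\n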
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.S3.get_directory","title":"get_directory async","text":"

Downloads a directory from a given remote path to a local directory.

Defaults to downloading the entire contents of the block's basepath to the current working directory.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n\"\"\"\n    Downloads a directory from a given remote path to a local directory.\n\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    return await self.filesystem.get_directory(\n        from_path=from_path, local_path=local_path\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.S3.put_directory","title":"put_directory async","text":"

Uploads a directory from a given local path to a remote directory.

Defaults to uploading the entire contents of the current working directory to the block's basepath.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n\"\"\"\n    Uploads a directory from a given local path to a remote directory.\n\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    return await self.filesystem.put_directory(\n        local_path=local_path, to_path=to_path, ignore_file=ignore_file\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.SMB","title":"SMB","text":"

Bases: WritableFileSystem, WritableDeploymentStorage

Store data as a file on an SMB share.

Example

Load stored SMB config:

from prefect.filesystems import SMB\nsmb_block = SMB.load(\"BLOCK_NAME\")\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
class SMB(WritableFileSystem, WritableDeploymentStorage):\n\"\"\"\n    Store data as a file on a SMB share.\n\n    Example:\n\n        Load stored SMB config:\n\n        ```python\n        from prefect.filesystems import SMB\n        smb_block = SMB.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"SMB\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/6J444m3vW6ukgBOCinSxLk/025f5562d3c165feb7a5df599578a6a8/samba_2010_logo_transparent_151x27.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#smb\"\n\n    share_path: str = Field(\n        default=...,\n        description=\"SMB target (requires <SHARE>, followed by <PATH>).\",\n        example=\"/SHARE/dir/subdir\",\n    )\n    smb_username: Optional[SecretStr] = Field(\n        default=None,\n        title=\"SMB Username\",\n        description=\"Username with access to the target SMB SHARE.\",\n    )\n    smb_password: Optional[SecretStr] = Field(\n        default=None, title=\"SMB Password\", description=\"Password for SMB access.\"\n    )\n    smb_host: str = Field(\n        default=..., tile=\"SMB server/hostname\", description=\"SMB server/hostname.\"\n    )\n    smb_port: Optional[int] = Field(\n        default=None, title=\"SMB port\", description=\"SMB port (default: 445).\"\n    )\n\n    _remote_file_system: RemoteFileSystem = None\n\n    @property\n    def basepath(self) -> str:\n        return f\"smb://{self.smb_host.rstrip('/')}/{self.share_path.lstrip('/')}\"\n\n    @property\n    def filesystem(self) -> RemoteFileSystem:\n        settings = {}\n        if self.smb_username:\n            settings[\"username\"] = self.smb_username.get_secret_value()\n        if self.smb_password:\n            settings[\"password\"] = self.smb_password.get_secret_value()\n        if self.smb_host:\n            settings[\"host\"] = self.smb_host\n        if self.smb_port:\n            settings[\"port\"] = self.smb_port\n        self._remote_file_system = RemoteFileSystem(\n            basepath=f\"smb://{self.smb_host.rstrip('/')}/{self.share_path.lstrip('/')}\",\n            settings=settings,\n        )\n        return self._remote_file_system\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> bytes:\n\"\"\"\n        Downloads a directory from a given remote path to a local directory.\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        return await self.filesystem.get_directory(\n            from_path=from_path, local_path=local_path\n        )\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n\"\"\"\n        Uploads a directory from a given local path to a remote directory.\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        return await self.filesystem.put_directory(\n            local_path=local_path,\n            to_path=to_path,\n            ignore_file=ignore_file,\n            overwrite=False,\n        )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        return await self.filesystem.read_path(path)\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        return await 
self.filesystem.write_path(path=path, content=content)\n
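A hedged configuration sketch (host, share path, and credentials are placeholders; smb_port falls back to 445 when omitted):

from prefect.filesystems import SMB\n\nsmb_block = SMB(\n    share_path=\"/SHARE/dir/subdir\",\n    smb_host=\"fileserver.example.com\",\n    smb_username=\"svc-prefect\",\n    smb_password=\"********\",\n)\nsmb_block.save(\"my-smb-share\", overwrite=True)\n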
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.SMB.get_directory","title":"get_directory async","text":"

Downloads a directory from a given remote path to a local directory. Defaults to downloading the entire contents of the block's basepath to the current working directory.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n\"\"\"\n    Downloads a directory from a given remote path to a local directory.\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    return await self.filesystem.get_directory(\n        from_path=from_path, local_path=local_path\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.SMB.put_directory","title":"put_directory async","text":"

Uploads a directory from a given local path to a remote directory. Defaults to uploading the entire contents of the current working directory to the block's basepath.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n\"\"\"\n    Uploads a directory from a given local path to a remote directory.\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    return await self.filesystem.put_directory(\n        local_path=local_path,\n        to_path=to_path,\n        ignore_file=ignore_file,\n        overwrite=False,\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/flows/","title":"prefect.flows","text":"","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows","title":"prefect.flows","text":"

Module containing the base workflow class and decorator - for most use cases, using the @flow decorator is preferred.

","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow","title":"Flow","text":"

Bases: Generic[P, R]

A Prefect workflow definition.

Note

We recommend using the @flow decorator for most use-cases.

Wraps a function with an entrypoint to the Prefect engine. To preserve the input and output types, we use the generic type variables P and R for \"Parameters\" and \"Returns\" respectively.

Parameters:

fn (Callable[P, R], required): The function defining the workflow.

name (Optional[str], default None): An optional name for the flow; if not provided, the name will be inferred from the given function.

version (Optional[str], default None): An optional version string for the flow; if not provided, we will attempt to create a version string as a hash of the file containing the wrapped function; if the file cannot be located, the version will be null.

flow_run_name (Optional[Union[Callable[[], str], str]], default None): An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.

task_runner (Union[Type[BaseTaskRunner], BaseTaskRunner], default ConcurrentTaskRunner): An optional task runner to use for task execution within the flow; if not provided, a ConcurrentTaskRunner will be used.

description (str, default None): An optional string description for the flow; if not provided, the description will be pulled from the docstring for the decorated function.

timeout_seconds (Union[int, float], default None): An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called.

validate_parameters (bool, default True): By default, parameters passed to flows are validated by Pydantic. This will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type; for example, if a parameter is defined as x: int and "5" is passed, it will be resolved to 5. If set to False, no validation will be performed on flow parameters.

retries (Optional[int], default None): An optional number of times to retry on flow run failure.

retry_delay_seconds (Optional[Union[int, float]], default None): An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero.

persist_result (Optional[bool], default None): An optional toggle indicating whether the result of this flow should be persisted to result storage. Defaults to None, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.

result_storage (Optional[ResultStorage], default None): An optional block to use to persist the result of this flow. This value will be used as the default for any tasks in this flow. If not provided, the local file system will be used unless called as a subflow, at which point the default will be loaded from the parent flow.

result_serializer (Optional[ResultSerializer], default None): An optional serializer to use to serialize the result of this flow for persistence. This value will be used as the default for any tasks in this flow. If not provided, the value of PREFECT_RESULTS_DEFAULT_SERIALIZER will be used unless called as a subflow, at which point the default will be loaded from the parent flow.

on_failure (Optional[List[Callable[[FlowSchema, FlowRun, State], None]]], default None): An optional list of callables to run when the flow enters a failed state.

on_completion (Optional[List[Callable[[FlowSchema, FlowRun, State], None]]], default None): An optional list of callables to run when the flow enters a completed state.

on_cancellation (Optional[List[Callable[[FlowSchema, FlowRun, State], None]]], default None): An optional list of callables to run when the flow enters a cancelling state.

on_crashed (Optional[List[Callable[[FlowSchema, FlowRun, State], None]]], default None): An optional list of callables to run when the flow enters a crashed state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
@PrefectObjectRegistry.register_instances\nclass Flow(Generic[P, R]):\n\"\"\"\n    A Prefect workflow definition.\n\n    !!! note\n        We recommend using the [`@flow` decorator][prefect.flows.flow] for most use-cases.\n\n    Wraps a function with an entrypoint to the Prefect engine. To preserve the input\n    and output types, we use the generic type variables `P` and `R` for \"Parameters\" and\n    \"Returns\" respectively.\n\n    Args:\n        fn: The function defining the workflow.\n        name: An optional name for the flow; if not provided, the name will be inferred\n            from the given function.\n        version: An optional version string for the flow; if not provided, we will\n            attempt to create a version string as a hash of the file containing the\n            wrapped function; if the file cannot be located, the version will be null.\n        flow_run_name: An optional name to distinguish runs of this flow; this name can\n            be provided as a string template with the flow's parameters as variables,\n            or a function that returns a string.\n        task_runner: An optional task runner to use for task execution within the flow;\n            if not provided, a `ConcurrentTaskRunner` will be used.\n        description: An optional string description for the flow; if not provided, the\n            description will be pulled from the docstring for the decorated function.\n        timeout_seconds: An optional number of seconds indicating a maximum runtime for\n            the flow. If the flow exceeds this runtime, it will be marked as failed.\n            Flow execution may continue until the next task is called.\n        validate_parameters: By default, parameters passed to flows are validated by\n            Pydantic. This will check that input values conform to the annotated types\n            on the function. Where possible, values will be coerced into the correct\n            type; for example, if a parameter is defined as `x: int` and \"5\" is passed,\n            it will be resolved to `5`. If set to `False`, no validation will be\n            performed on flow parameters.\n        retries: An optional number of times to retry on flow run failure.\n        retry_delay_seconds: An optional number of seconds to wait before retrying the\n            flow after failure. This is only applicable if `retries` is nonzero.\n        persist_result: An optional toggle indicating whether the result of this flow\n            should be persisted to result storage. Defaults to `None`, which indicates\n            that Prefect should choose whether the result should be persisted depending on\n            the features being used.\n        result_storage: An optional block to use to perist the result of this flow.\n            This value will be used as the default for any tasks in this flow.\n            If not provided, the local file system will be used unless called as\n            a subflow, at which point the default will be loaded from the parent flow.\n        result_serializer: An optional serializer to use to serialize the result of this\n            flow for persistence. This value will be used as the default for any tasks\n            in this flow. 
If not provided, the value of `PREFECT_RESULTS_DEFAULT_SERIALIZER`\n            will be used unless called as a subflow, at which point the default will be\n            loaded from the parent flow.\n        on_failure: An optional list of callables to run when the flow enters a failed state.\n        on_completion: An optional list of callables to run when the flow enters a completed state.\n        on_cancellation: An optional list of callables to run when the flow enters a cancelling state.\n        on_crashed: An optional list of callables to run when the flow enters a crashed state.\n    \"\"\"\n\n    # NOTE: These parameters (types, defaults, and docstrings) should be duplicated\n    #       exactly in the @flow decorator\n    def __init__(\n        self,\n        fn: Callable[P, R],\n        name: Optional[str] = None,\n        version: Optional[str] = None,\n        flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n        retries: Optional[int] = None,\n        retry_delay_seconds: Optional[Union[int, float]] = None,\n        task_runner: Union[Type[BaseTaskRunner], BaseTaskRunner] = ConcurrentTaskRunner,\n        description: str = None,\n        timeout_seconds: Union[int, float] = None,\n        validate_parameters: bool = True,\n        persist_result: Optional[bool] = None,\n        result_storage: Optional[ResultStorage] = None,\n        result_serializer: Optional[ResultSerializer] = None,\n        cache_result_in_memory: bool = True,\n        log_prints: Optional[bool] = None,\n        on_completion: Optional[\n            List[Callable[[FlowSchema, FlowRun, State], None]]\n        ] = None,\n        on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n        on_cancellation: Optional[\n            List[Callable[[FlowSchema, FlowRun, State], None]]\n        ] = None,\n        on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    ):\n        # Validate if hook passed is list and contains callables\n        hook_categories = [on_completion, on_failure, on_cancellation, on_crashed]\n        hook_names = [\"on_completion\", \"on_failure\", \"on_cancellation\", \"on_crashed\"]\n        for hooks, hook_name in zip(hook_categories, hook_names):\n            if hooks is not None:\n                if not hooks:\n                    raise ValueError(f\"Empty list passed for '{hook_name}'\")\n                try:\n                    hooks = list(hooks)\n                except TypeError:\n                    raise TypeError(\n                        f\"Expected iterable for '{hook_name}'; got\"\n                        f\" {type(hooks).__name__} instead. Please provide a list of\"\n                        f\" hooks to '{hook_name}':\\n\\n\"\n                        f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n                        \" my_flow():\\n\\tpass\"\n                    )\n\n                for hook in hooks:\n                    if not callable(hook):\n                        raise TypeError(\n                            f\"Expected callables in '{hook_name}'; got\"\n                            f\" {type(hook).__name__} instead. 
Please provide a list of\"\n                            f\" hooks to '{hook_name}':\\n\\n\"\n                            f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n                            \" my_flow():\\n\\tpass\"\n                        )\n\n        if not callable(fn):\n            raise TypeError(\"'fn' must be callable\")\n\n        # Validate name if given\n        if name:\n            raise_on_name_with_banned_characters(name)\n\n        self.name = name or fn.__name__.replace(\"_\", \"-\")\n\n        if flow_run_name is not None:\n            if not isinstance(flow_run_name, str) and not callable(flow_run_name):\n                raise TypeError(\n                    \"Expected string or callable for 'flow_run_name'; got\"\n                    f\" {type(flow_run_name).__name__} instead.\"\n                )\n        self.flow_run_name = flow_run_name\n\n        task_runner = task_runner or ConcurrentTaskRunner()\n        self.task_runner = (\n            task_runner() if isinstance(task_runner, type) else task_runner\n        )\n\n        self.log_prints = log_prints\n\n        self.description = description or inspect.getdoc(fn)\n        update_wrapper(self, fn)\n        self.fn = fn\n        self.isasync = is_async_fn(self.fn)\n\n        raise_for_reserved_arguments(self.fn, [\"return_state\", \"wait_for\"])\n\n        # Version defaults to a hash of the function's file\n        flow_file = inspect.getsourcefile(self.fn)\n        if not version:\n            try:\n                version = file_hash(flow_file)\n            except (FileNotFoundError, TypeError, OSError):\n                pass  # `getsourcefile` can return null values and \"<stdin>\" for objects in repls\n        self.version = version\n\n        self.timeout_seconds = float(timeout_seconds) if timeout_seconds else None\n\n        # FlowRunPolicy settings\n        # TODO: We can instantiate a `FlowRunPolicy` and add Pydantic bound checks to\n        #       validate that the user passes positive numbers here\n        self.retries = (\n            retries if retries is not None else PREFECT_FLOW_DEFAULT_RETRIES.value()\n        )\n\n        self.retry_delay_seconds = (\n            retry_delay_seconds\n            if retry_delay_seconds is not None\n            else PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS.value()\n        )\n\n        self.parameters = parameter_schema(self.fn)\n        self.should_validate_parameters = validate_parameters\n\n        if self.should_validate_parameters:\n            # Try to create the validated function now so that incompatibility can be\n            # raised at declaration time rather than at runtime\n            # We cannot, however, store the validated function on the flow because it\n            # is not picklable in some environments\n            try:\n                ValidatedFunction(self.fn, config=None)\n            except pydantic.ConfigError as exc:\n                raise ValueError(\n                    \"Flow function is not compatible with `validate_parameters`. 
\"\n                    \"Disable validation or change the argument names.\"\n                ) from exc\n\n        self.persist_result = persist_result\n        self.result_storage = result_storage\n        self.result_serializer = result_serializer\n        self.cache_result_in_memory = cache_result_in_memory\n\n        # Check for collision in the registry\n        registry = PrefectObjectRegistry.get()\n\n        if registry and any(\n            other\n            for other in registry.get_instances(Flow)\n            if other.name == self.name and id(other.fn) != id(self.fn)\n        ):\n            file = inspect.getsourcefile(self.fn)\n            line_number = inspect.getsourcelines(self.fn)[1]\n            warnings.warn(\n                f\"A flow named {self.name!r} and defined at '{file}:{line_number}' \"\n                \"conflicts with another flow. Consider specifying a unique `name` \"\n                \"parameter in the flow definition:\\n\\n \"\n                \"`@flow(name='my_unique_name', ...)`\"\n            )\n        self.on_completion = on_completion\n        self.on_failure = on_failure\n        self.on_cancellation = on_cancellation\n        self.on_crashed = on_crashed\n\n    def with_options(\n        self,\n        *,\n        name: str = None,\n        version: str = None,\n        retries: int = 0,\n        retry_delay_seconds: Union[int, float] = 0,\n        description: str = None,\n        flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n        task_runner: Union[Type[BaseTaskRunner], BaseTaskRunner] = None,\n        timeout_seconds: Union[int, float] = None,\n        validate_parameters: bool = None,\n        persist_result: Optional[bool] = NotSet,\n        result_storage: Optional[ResultStorage] = NotSet,\n        result_serializer: Optional[ResultSerializer] = NotSet,\n        cache_result_in_memory: bool = None,\n        log_prints: Optional[bool] = NotSet,\n        on_completion: Optional[\n            List[Callable[[FlowSchema, FlowRun, State], None]]\n        ] = None,\n        on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n        on_cancellation: Optional[\n            List[Callable[[FlowSchema, FlowRun, State], None]]\n        ] = None,\n        on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    ):\n\"\"\"\n        Create a new flow from the current object, updating provided options.\n\n        Args:\n            name: A new name for the flow.\n            version: A new version for the flow.\n            description: A new description for the flow.\n            flow_run_name: An optional name to distinguish runs of this flow; this name\n                can be provided as a string template with the flow's parameters as variables,\n                or a function that returns a string.\n            task_runner: A new task runner for the flow.\n            timeout_seconds: A new number of seconds to fail the flow after if still\n                running.\n            validate_parameters: A new value indicating if flow calls should validate\n                given parameters.\n            retries: A new number of times to retry on flow run failure.\n            retry_delay_seconds: A new number of seconds to wait before retrying the\n                flow after failure. 
This is only applicable if `retries` is nonzero.\n            persist_result: A new option for enabling or disabling result persistence.\n            result_storage: A new storage type to use for results.\n            result_serializer: A new serializer to use for results.\n            cache_result_in_memory: A new value indicating if the flow's result should\n                be cached in memory.\n            on_failure: A new list of callables to run when the flow enters a failed state.\n            on_completion: A new list of callables to run when the flow enters a completed state.\n            on_cancellation: A new list of callables to run when the flow enters a cancelling state.\n            on_crashed: A new list of callables to run when the flow enters a crashed state.\n\n        Returns:\n            A new `Flow` instance.\n\n        Examples:\n\n            Create a new flow from an existing flow and update the name:\n\n            >>> @flow(name=\"My flow\")\n            >>> def my_flow():\n            >>>     return 1\n            >>>\n            >>> new_flow = my_flow.with_options(name=\"My new flow\")\n\n            Create a new flow from an existing flow, update the task runner, and call\n            it without an intermediate variable:\n\n            >>> from prefect.task_runners import SequentialTaskRunner\n            >>>\n            >>> @flow\n            >>> def my_flow(x, y):\n            >>>     return x + y\n            >>>\n            >>> state = my_flow.with_options(task_runner=SequentialTaskRunner)(1, 3)\n            >>> assert state.result() == 4\n\n        \"\"\"\n        return Flow(\n            fn=self.fn,\n            name=name or self.name,\n            description=description or self.description,\n            flow_run_name=flow_run_name,\n            version=version or self.version,\n            task_runner=task_runner or self.task_runner,\n            retries=retries or self.retries,\n            retry_delay_seconds=retry_delay_seconds or self.retry_delay_seconds,\n            timeout_seconds=(\n                timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n            ),\n            validate_parameters=(\n                validate_parameters\n                if validate_parameters is not None\n                else self.should_validate_parameters\n            ),\n            persist_result=(\n                persist_result if persist_result is not NotSet else self.persist_result\n            ),\n            result_storage=(\n                result_storage if result_storage is not NotSet else self.result_storage\n            ),\n            result_serializer=(\n                result_serializer\n                if result_serializer is not NotSet\n                else self.result_serializer\n            ),\n            cache_result_in_memory=(\n                cache_result_in_memory\n                if cache_result_in_memory is not None\n                else self.cache_result_in_memory\n            ),\n            log_prints=log_prints if log_prints is not NotSet else self.log_prints,\n            on_completion=on_completion or self.on_completion,\n            on_failure=on_failure or self.on_failure,\n            on_cancellation=on_cancellation or self.on_cancellation,\n            on_crashed=on_crashed or self.on_crashed,\n        )\n\n    def validate_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n\"\"\"\n        Validate parameters for compatibility with the flow by attempting to cast the inputs to the\n   
     associated types specified by the function's type annotations.\n\n        Returns:\n            A new dict of parameters that have been cast to the appropriate types\n\n        Raises:\n            ParameterTypeError: if the provided parameters are not valid\n        \"\"\"\n        validated_fn = ValidatedFunction(self.fn, config=None)\n        args, kwargs = parameters_to_args_kwargs(self.fn, parameters)\n\n        try:\n            model = validated_fn.init_model_instance(*args, **kwargs)\n        except pydantic.ValidationError as exc:\n            # We capture the pydantic exception and raise our own because the pydantic\n            # exception is not picklable when using a cythonized pydantic installation\n            raise ParameterTypeError.from_validation_error(exc) from None\n\n        # Get the updated parameter dict with cast values from the model\n        cast_parameters = {\n            k: v\n            for k, v in model._iter()\n            if k in model.__fields_set__ or model.__fields__[k].default_factory\n        }\n        return cast_parameters\n\n    def serialize_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n\"\"\"\n        Convert parameters to a serializable form.\n\n        Uses FastAPI's `jsonable_encoder` to convert to JSON compatible objects without\n        converting everything directly to a string. This maintains basic types like\n        integers during API roundtrips.\n        \"\"\"\n        serialized_parameters = {}\n        for key, value in parameters.items():\n            try:\n                serialized_parameters[key] = jsonable_encoder(value)\n            except (TypeError, ValueError):\n                logger.debug(\n                    f\"Parameter {key!r} for flow {self.name!r} is of unserializable \"\n                    f\"type {type(value).__name__!r} and will not be stored \"\n                    \"in the backend.\"\n                )\n                serialized_parameters[key] = f\"<{type(value).__name__}>\"\n        return serialized_parameters\n\n    @overload\n    def __call__(self: \"Flow[P, NoReturn]\", *args: P.args, **kwargs: P.kwargs) -> None:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def __call__(\n        self: \"Flow[P, Coroutine[Any, Any, T]]\", *args: P.args, **kwargs: P.kwargs\n    ) -> Awaitable[T]:\n        ...\n\n    @overload\n    def __call__(\n        self: \"Flow[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> T:\n        ...\n\n    @overload\n    def __call__(\n        self: \"Flow[P, T]\",\n        *args: P.args,\n        return_state: Literal[True],\n        **kwargs: P.kwargs,\n    ) -> State[T]:\n        ...\n\n    def __call__(\n        self,\n        *args: \"P.args\",\n        return_state: bool = False,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: \"P.kwargs\",\n    ):\n\"\"\"\n        Run the flow and return its result.\n\n\n        Flow parameter values must be serializable by Pydantic.\n\n        If writing an async flow, this call must be awaited.\n\n        This will create a new flow run in the API.\n\n        Args:\n            *args: Arguments to run the flow with.\n            return_state: Return a Prefect State containing the result of the\n                flow run.\n            wait_for: Upstream task futures to wait for before starting the flow if called as a subflow\n    
        **kwargs: Keyword arguments to run the flow with.\n\n        Returns:\n            If `return_state` is False, returns the result of the flow run.\n            If `return_state` is True, returns the result of the flow run\n                wrapped in a Prefect State which provides error handling.\n\n        Examples:\n\n            Define a flow\n\n            >>> @flow\n            >>> def my_flow(name):\n            >>>     print(f\"hello {name}\")\n            >>>     return f\"goodbye {name}\"\n\n            Run a flow\n\n            >>> my_flow(\"marvin\")\n            hello marvin\n            \"goodbye marvin\"\n\n            Run a flow with additional tags\n\n            >>> from prefect import tags\n            >>> with tags(\"db\", \"blue\"):\n            >>>     my_flow(\"foo\")\n        \"\"\"\n        from prefect.engine import enter_flow_run_engine_from_flow_call\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n\n        return_type = \"state\" if return_state else \"result\"\n\n        return enter_flow_run_engine_from_flow_call(\n            self,\n            parameters,\n            wait_for=wait_for,\n            return_type=return_type,\n        )\n\n    @overload\n    def _run(self: \"Flow[P, NoReturn]\", *args: P.args, **kwargs: P.kwargs) -> State[T]:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def _run(\n        self: \"Flow[P, Coroutine[Any, Any, T]]\", *args: P.args, **kwargs: P.kwargs\n    ) -> Awaitable[T]:\n        ...\n\n    @overload\n    def _run(self: \"Flow[P, T]\", *args: P.args, **kwargs: P.kwargs) -> State[T]:\n        ...\n\n    def _run(\n        self,\n        *args: \"P.args\",\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: \"P.kwargs\",\n    ):\n\"\"\"\n        Run the flow and return its final state.\n\n        Examples:\n\n            Run a flow and get the returned result\n\n            >>> state = my_flow._run(\"marvin\")\n            >>> state.result()\n           \"goodbye marvin\"\n        \"\"\"\n        from prefect.engine import enter_flow_run_engine_from_flow_call\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n\n        return enter_flow_run_engine_from_flow_call(\n            self,\n            parameters,\n            wait_for=wait_for,\n            return_type=\"state\",\n        )\n
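As a brief, hedged illustration of combining a few of these options (the flow, hook, and run-name template below are invented for the example):

from prefect import flow\n\ndef notify(flow, flow_run, state):\n    # Hooks receive the flow, the flow run, and the terminal state\n    print(f\"{flow_run.name} finished in state {state.name}\")\n\n@flow(\n    name=\"nightly-sync\",\n    flow_run_name=\"nightly-sync-for-{source}\",  # templated with the flow's parameters\n    retries=2,\n    retry_delay_seconds=10,\n    on_completion=[notify],\n    on_failure=[notify],\n)\ndef nightly_sync(source: str = \"warehouse\"):\n    return f\"synced {source}\"\n\nif __name__ == \"__main__\":\n    nightly_sync()\n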
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.serialize_parameters","title":"serialize_parameters","text":"

Convert parameters to a serializable form.

Uses FastAPI's jsonable_encoder to convert to JSON-compatible objects without converting everything directly to a string. This maintains basic types like integers during API roundtrips.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
def serialize_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n\"\"\"\n    Convert parameters to a serializable form.\n\n    Uses FastAPI's `jsonable_encoder` to convert to JSON compatible objects without\n    converting everything directly to a string. This maintains basic types like\n    integers during API roundtrips.\n    \"\"\"\n    serialized_parameters = {}\n    for key, value in parameters.items():\n        try:\n            serialized_parameters[key] = jsonable_encoder(value)\n        except (TypeError, ValueError):\n            logger.debug(\n                f\"Parameter {key!r} for flow {self.name!r} is of unserializable \"\n                f\"type {type(value).__name__!r} and will not be stored \"\n                \"in the backend.\"\n            )\n            serialized_parameters[key] = f\"<{type(value).__name__}>\"\n    return serialized_parameters\n
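A tiny, hedged illustration of the behavior described above (the flow and parameter values are arbitrary):

from datetime import date\nfrom prefect import flow\n\n@flow\ndef report(day: date, limit: int = 10):\n    ...\n\n# Basic types survive the round trip; richer values are JSON-encoded where possible\nprint(report.serialize_parameters({\"day\": date(2023, 1, 1), \"limit\": 10}))\n# e.g. {'day': '2023-01-01', 'limit': 10}\n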
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.validate_parameters","title":"validate_parameters","text":"

Validate parameters for compatibility with the flow by attempting to cast the inputs to the associated types specified by the function's type annotations.

Returns:

Dict[str, Any]: A new dict of parameters that have been cast to the appropriate types.

Raises:

ParameterTypeError: if the provided parameters are not valid.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
def validate_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n\"\"\"\n    Validate parameters for compatibility with the flow by attempting to cast the inputs to the\n    associated types specified by the function's type annotations.\n\n    Returns:\n        A new dict of parameters that have been cast to the appropriate types\n\n    Raises:\n        ParameterTypeError: if the provided parameters are not valid\n    \"\"\"\n    validated_fn = ValidatedFunction(self.fn, config=None)\n    args, kwargs = parameters_to_args_kwargs(self.fn, parameters)\n\n    try:\n        model = validated_fn.init_model_instance(*args, **kwargs)\n    except pydantic.ValidationError as exc:\n        # We capture the pydantic exception and raise our own because the pydantic\n        # exception is not picklable when using a cythonized pydantic installation\n        raise ParameterTypeError.from_validation_error(exc) from None\n\n    # Get the updated parameter dict with cast values from the model\n    cast_parameters = {\n        k: v\n        for k, v in model._iter()\n        if k in model.__fields_set__ or model.__fields__[k].default_factory\n    }\n    return cast_parameters\n
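A hedged sketch of the coercion behavior (the flow and parameter values are arbitrary):

from prefect import flow\n\n@flow\ndef add(x: int, y: int = 1):\n    return x + y\n\nvalidated = add.validate_parameters({\"x\": \"5\"})\nassert validated[\"x\"] == 5  # the string \"5\" was cast to the annotated int type\n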
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.with_options","title":"with_options","text":"

Create a new flow from the current object, updating provided options.

Parameters:

name (str, default None): A new name for the flow.

version (str, default None): A new version for the flow.

description (str, default None): A new description for the flow.

flow_run_name (Optional[Union[Callable[[], str], str]], default None): An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.

task_runner (Union[Type[BaseTaskRunner], BaseTaskRunner], default None): A new task runner for the flow.

timeout_seconds (Union[int, float], default None): A new number of seconds after which to fail the flow if it is still running.

validate_parameters (bool, default None): A new value indicating if flow calls should validate given parameters.

retries (int, default 0): A new number of times to retry on flow run failure.

retry_delay_seconds (Union[int, float], default 0): A new number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero.

persist_result (Optional[bool], default NotSet): A new option for enabling or disabling result persistence.

result_storage (Optional[ResultStorage], default NotSet): A new storage type to use for results.

result_serializer (Optional[ResultSerializer], default NotSet): A new serializer to use for results.

cache_result_in_memory (bool, default None): A new value indicating if the flow's result should be cached in memory.

on_failure (Optional[List[Callable[[FlowSchema, FlowRun, State], None]]], default None): A new list of callables to run when the flow enters a failed state.

on_completion (Optional[List[Callable[[FlowSchema, FlowRun, State], None]]], default None): A new list of callables to run when the flow enters a completed state.

on_cancellation (Optional[List[Callable[[FlowSchema, FlowRun, State], None]]], default None): A new list of callables to run when the flow enters a cancelling state.

on_crashed (Optional[List[Callable[[FlowSchema, FlowRun, State], None]]], default None): A new list of callables to run when the flow enters a crashed state.

Returns:

A new Flow instance.

Examples:

Create a new flow from an existing flow and update the name:

>>> @flow(name=\"My flow\")\n>>> def my_flow():\n>>>     return 1\n>>>\n>>> new_flow = my_flow.with_options(name=\"My new flow\")\n

Create a new flow from an existing flow, update the task runner, and call it without an intermediate variable:

>>> from prefect.task_runners import SequentialTaskRunner\n>>>\n>>> @flow\n>>> def my_flow(x, y):\n>>>     return x + y\n>>>\n>>> state = my_flow.with_options(task_runner=SequentialTaskRunner)(1, 3)\n>>> assert state.result() == 4\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
def with_options(\n    self,\n    *,\n    name: str = None,\n    version: str = None,\n    retries: int = 0,\n    retry_delay_seconds: Union[int, float] = 0,\n    description: str = None,\n    flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n    task_runner: Union[Type[BaseTaskRunner], BaseTaskRunner] = None,\n    timeout_seconds: Union[int, float] = None,\n    validate_parameters: bool = None,\n    persist_result: Optional[bool] = NotSet,\n    result_storage: Optional[ResultStorage] = NotSet,\n    result_serializer: Optional[ResultSerializer] = NotSet,\n    cache_result_in_memory: bool = None,\n    log_prints: Optional[bool] = NotSet,\n    on_completion: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], None]]\n    ] = None,\n    on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    on_cancellation: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], None]]\n    ] = None,\n    on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n):\n\"\"\"\n    Create a new flow from the current object, updating provided options.\n\n    Args:\n        name: A new name for the flow.\n        version: A new version for the flow.\n        description: A new description for the flow.\n        flow_run_name: An optional name to distinguish runs of this flow; this name\n            can be provided as a string template with the flow's parameters as variables,\n            or a function that returns a string.\n        task_runner: A new task runner for the flow.\n        timeout_seconds: A new number of seconds to fail the flow after if still\n            running.\n        validate_parameters: A new value indicating if flow calls should validate\n            given parameters.\n        retries: A new number of times to retry on flow run failure.\n        retry_delay_seconds: A new number of seconds to wait before retrying the\n            flow after failure. 
This is only applicable if `retries` is nonzero.\n        persist_result: A new option for enabling or disabling result persistence.\n        result_storage: A new storage type to use for results.\n        result_serializer: A new serializer to use for results.\n        cache_result_in_memory: A new value indicating if the flow's result should\n            be cached in memory.\n        on_failure: A new list of callables to run when the flow enters a failed state.\n        on_completion: A new list of callables to run when the flow enters a completed state.\n        on_cancellation: A new list of callables to run when the flow enters a cancelling state.\n        on_crashed: A new list of callables to run when the flow enters a crashed state.\n\n    Returns:\n        A new `Flow` instance.\n\n    Examples:\n\n        Create a new flow from an existing flow and update the name:\n\n        >>> @flow(name=\"My flow\")\n        >>> def my_flow():\n        >>>     return 1\n        >>>\n        >>> new_flow = my_flow.with_options(name=\"My new flow\")\n\n        Create a new flow from an existing flow, update the task runner, and call\n        it without an intermediate variable:\n\n        >>> from prefect.task_runners import SequentialTaskRunner\n        >>>\n        >>> @flow\n        >>> def my_flow(x, y):\n        >>>     return x + y\n        >>>\n        >>> state = my_flow.with_options(task_runner=SequentialTaskRunner)(1, 3)\n        >>> assert state.result() == 4\n\n    \"\"\"\n    return Flow(\n        fn=self.fn,\n        name=name or self.name,\n        description=description or self.description,\n        flow_run_name=flow_run_name,\n        version=version or self.version,\n        task_runner=task_runner or self.task_runner,\n        retries=retries or self.retries,\n        retry_delay_seconds=retry_delay_seconds or self.retry_delay_seconds,\n        timeout_seconds=(\n            timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n        ),\n        validate_parameters=(\n            validate_parameters\n            if validate_parameters is not None\n            else self.should_validate_parameters\n        ),\n        persist_result=(\n            persist_result if persist_result is not NotSet else self.persist_result\n        ),\n        result_storage=(\n            result_storage if result_storage is not NotSet else self.result_storage\n        ),\n        result_serializer=(\n            result_serializer\n            if result_serializer is not NotSet\n            else self.result_serializer\n        ),\n        cache_result_in_memory=(\n            cache_result_in_memory\n            if cache_result_in_memory is not None\n            else self.cache_result_in_memory\n        ),\n        log_prints=log_prints if log_prints is not NotSet else self.log_prints,\n        on_completion=on_completion or self.on_completion,\n        on_failure=on_failure or self.on_failure,\n        on_cancellation=on_cancellation or self.on_cancellation,\n        on_crashed=on_crashed or self.on_crashed,\n    )\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.flow","title":"flow","text":"

Decorator to designate a function as a Prefect workflow.

This decorator may be used for asynchronous or synchronous functions.

Flow parameters must be serializable by Pydantic.

Parameters:

Name Type Description Default name Optional[str]

An optional name for the flow; if not provided, the name will be inferred from the given function.

None version Optional[str]

An optional version string for the flow; if not provided, we will attempt to create a version string as a hash of the file containing the wrapped function; if the file cannot be located, the version will be null.

None flow_run_name Optional[Union[Callable[[], str], str]]

An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.

None task_runner BaseTaskRunner

An optional task runner to use for task execution within the flow; if not provided, a ConcurrentTaskRunner will be instantiated.

ConcurrentTaskRunner description str

An optional string description for the flow; if not provided, the description will be pulled from the docstring for the decorated function.

None timeout_seconds Union[int, float]

An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called.

None validate_parameters bool

By default, parameters passed to flows are validated by Pydantic. This will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type; for example, if a parameter is defined as x: int and \"5\" is passed, it will be resolved to 5. If set to False, no validation will be performed on flow parameters.

True retries int

An optional number of times to retry on flow run failure.

None retry_delay_seconds Union[int, float]

An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero.

None persist_result Optional[bool]

An optional toggle indicating whether the result of this flow should be persisted to result storage. Defaults to None, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.

None result_storage Optional[ResultStorage]

An optional block to use to persist the result of this flow. This value will be used as the default for any tasks in this flow. If not provided, the local file system will be used unless called as a subflow, at which point the default will be loaded from the parent flow.

None result_serializer Optional[ResultSerializer]

An optional serializer to use to serialize the result of this flow for persistence. This value will be used as the default for any tasks in this flow. If not provided, the value of PREFECT_RESULTS_DEFAULT_SERIALIZER will be used unless called as a subflow, at which point the default will be loaded from the parent flow.

None log_prints Optional[bool]

If set, print statements in the flow will be redirected to the Prefect logger for the flow run. Defaults to None, which indicates that the value from the parent flow should be used. If this is a parent flow, the default is pulled from the PREFECT_LOGGING_LOG_PRINTS setting.

None on_completion Optional[List[Callable[[FlowSchema, FlowRun, State], None]]]

An optional list of functions to call when the flow run is completed. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run.

None on_failure Optional[List[Callable[[FlowSchema, FlowRun, State], None]]]

An optional list of functions to call when the flow run fails. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run.

None on_cancellation Optional[List[Callable[[FlowSchema, FlowRun, State], None]]]

An optional list of functions to call when the flow run is cancelled. These functions will be passed the flow, flow run, and final state.

None on_crashed Optional[List[Callable[[FlowSchema, FlowRun, State], None]]]

An optional list of functions to call when the flow run crashes. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run.

None

Returns:

Type Description

A callable Flow object which, when called, will run the flow and return its final state.

Examples:

Define a simple flow

>>> from prefect import flow\n>>> @flow\n>>> def add(x, y):\n>>>     return x + y\n

Define an async flow

>>> @flow\n>>> async def add(x, y):\n>>>     return x + y\n

Define a flow with a version and description

>>> @flow(version=\"first-flow\", description=\"This flow is empty!\")\n>>> def my_flow():\n>>>     pass\n

Define a flow with a custom name

>>> @flow(name=\"The Ultimate Flow\")\n>>> def my_flow():\n>>>     pass\n

Define a flow that submits its tasks to dask

>>> from prefect_dask.task_runners import DaskTaskRunner\n>>>\n>>> @flow(task_runner=DaskTaskRunner)\n>>> def my_flow():\n>>>     pass\n
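
Define a flow with a completion hook (a minimal sketch; the hook name notify_done is illustrative, and each hook receives the flow, the flow run, and the final state)

>>> def notify_done(flw, flow_run, state):\n>>>     # called by Prefect with the flow, the flow run, and the final state\n>>>     print(f\"{flw.name} finished in state {state.name}\")\n>>>\n>>> @flow(on_completion=[notify_done])\n>>> def my_flow():\n>>>     pass\n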
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
def flow(\n    __fn=None,\n    *,\n    name: Optional[str] = None,\n    version: Optional[str] = None,\n    flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n    retries: int = None,\n    retry_delay_seconds: Union[int, float] = None,\n    task_runner: BaseTaskRunner = ConcurrentTaskRunner,\n    description: str = None,\n    timeout_seconds: Union[int, float] = None,\n    validate_parameters: bool = True,\n    persist_result: Optional[bool] = None,\n    result_storage: Optional[ResultStorage] = None,\n    result_serializer: Optional[ResultSerializer] = None,\n    cache_result_in_memory: bool = True,\n    log_prints: Optional[bool] = None,\n    on_completion: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    on_cancellation: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], None]]\n    ] = None,\n    on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n):\n\"\"\"\n    Decorator to designate a function as a Prefect workflow.\n\n    This decorator may be used for asynchronous or synchronous functions.\n\n    Flow parameters must be serializable by Pydantic.\n\n    Args:\n        name: An optional name for the flow; if not provided, the name will be inferred\n            from the given function.\n        version: An optional version string for the flow; if not provided, we will\n            attempt to create a version string as a hash of the file containing the\n            wrapped function; if the file cannot be located, the version will be null.\n        flow_run_name: An optional name to distinguish runs of this flow; this name can\n            be provided as a string template with the flow's parameters as variables,\n            or a function that returns a string.\n        task_runner: An optional task runner to use for task execution within the flow; if\n            not provided, a `ConcurrentTaskRunner` will be instantiated.\n        description: An optional string description for the flow; if not provided, the\n            description will be pulled from the docstring for the decorated function.\n        timeout_seconds: An optional number of seconds indicating a maximum runtime for\n            the flow. If the flow exceeds this runtime, it will be marked as failed.\n            Flow execution may continue until the next task is called.\n        validate_parameters: By default, parameters passed to flows are validated by\n            Pydantic. This will check that input values conform to the annotated types\n            on the function. Where possible, values will be coerced into the correct\n            type; for example, if a parameter is defined as `x: int` and \"5\" is passed,\n            it will be resolved to `5`. If set to `False`, no validation will be\n            performed on flow parameters.\n        retries: An optional number of times to retry on flow run failure.\n        retry_delay_seconds: An optional number of seconds to wait before retrying the\n            flow after failure. This is only applicable if `retries` is nonzero.\n        persist_result: An optional toggle indicating whether the result of this flow\n            should be persisted to result storage. 
Defaults to `None`, which indicates\n            that Prefect should choose whether the result should be persisted depending on\n            the features being used.\n        result_storage: An optional block to use to perist the result of this flow.\n            This value will be used as the default for any tasks in this flow.\n            If not provided, the local file system will be used unless called as\n            a subflow, at which point the default will be loaded from the parent flow.\n        result_serializer: An optional serializer to use to serialize the result of this\n            flow for persistence. This value will be used as the default for any tasks\n            in this flow. If not provided, the value of `PREFECT_RESULTS_DEFAULT_SERIALIZER`\n            will be used unless called as a subflow, at which point the default will be\n            loaded from the parent flow.\n        log_prints: If set, `print` statements in the flow will be redirected to the\n            Prefect logger for the flow run. Defaults to `None`, which indicates that\n            the value from the parent flow should be used. If this is a parent flow,\n            the default is pulled from the `PREFECT_LOGGING_LOG_PRINTS` setting.\n        on_completion: An optional list of functions to call when the flow run is\n            completed. Each function should accept three arguments: the flow, the flow\n            run, and the final state of the flow run.\n        on_failure: An optional list of functions to call when the flow run fails. Each\n            function should accept three arguments: the flow, the flow run, and the\n            final state of the flow run.\n        on_cancellation: An optional list of functions to call when the flow run is\n            cancelled. These functions will be passed the flow, flow run, and final state.\n        on_crashed: An optional list of functions to call when the flow run crashes. 
Each\n            function should accept three arguments: the flow, the flow run, and the\n            final state of the flow run.\n\n    Returns:\n        A callable `Flow` object which, when called, will run the flow and return its\n        final state.\n\n    Examples:\n        Define a simple flow\n\n        >>> from prefect import flow\n        >>> @flow\n        >>> def add(x, y):\n        >>>     return x + y\n\n        Define an async flow\n\n        >>> @flow\n        >>> async def add(x, y):\n        >>>     return x + y\n\n        Define a flow with a version and description\n\n        >>> @flow(version=\"first-flow\", description=\"This flow is empty!\")\n        >>> def my_flow():\n        >>>     pass\n\n        Define a flow with a custom name\n\n        >>> @flow(name=\"The Ultimate Flow\")\n        >>> def my_flow():\n        >>>     pass\n\n        Define a flow that submits its tasks to dask\n\n        >>> from prefect_dask.task_runners import DaskTaskRunner\n        >>>\n        >>> @flow(task_runner=DaskTaskRunner)\n        >>> def my_flow():\n        >>>     pass\n    \"\"\"\n    if __fn:\n        return cast(\n            Flow[P, R],\n            Flow(\n                fn=__fn,\n                name=name,\n                version=version,\n                flow_run_name=flow_run_name,\n                task_runner=task_runner,\n                description=description,\n                timeout_seconds=timeout_seconds,\n                validate_parameters=validate_parameters,\n                retries=retries,\n                retry_delay_seconds=retry_delay_seconds,\n                persist_result=persist_result,\n                result_storage=result_storage,\n                result_serializer=result_serializer,\n                cache_result_in_memory=cache_result_in_memory,\n                log_prints=log_prints,\n                on_completion=on_completion,\n                on_failure=on_failure,\n                on_cancellation=on_cancellation,\n                on_crashed=on_crashed,\n            ),\n        )\n    else:\n        return cast(\n            Callable[[Callable[P, R]], Flow[P, R]],\n            partial(\n                flow,\n                name=name,\n                version=version,\n                flow_run_name=flow_run_name,\n                task_runner=task_runner,\n                description=description,\n                timeout_seconds=timeout_seconds,\n                validate_parameters=validate_parameters,\n                retries=retries,\n                retry_delay_seconds=retry_delay_seconds,\n                persist_result=persist_result,\n                result_storage=result_storage,\n                result_serializer=result_serializer,\n                cache_result_in_memory=cache_result_in_memory,\n                log_prints=log_prints,\n                on_completion=on_completion,\n                on_failure=on_failure,\n                on_cancellation=on_cancellation,\n                on_crashed=on_crashed,\n            ),\n        )\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flow_from_entrypoint","title":"load_flow_from_entrypoint","text":"

Extract a flow object from a script at an entrypoint by running all of the code in the file.

Parameters:

Name Type Description Default entrypoint str

a string in the format <path_to_script>:<flow_func_name>

required

Returns:

Type Description Flow

The flow object from the script

Raises:

Type Description FlowScriptError

If an exception is encountered while running the script

MissingFlowError

If the flow function specified in the entrypoint does not exist
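
For illustration, a minimal usage sketch; the script path flows/etl.py and the flow function name etl are hypothetical:

>>> from prefect.flows import load_flow_from_entrypoint\n>>>\n>>> # load the flow function named \"etl\" defined in flows/etl.py\n>>> my_flow = load_flow_from_entrypoint(\"flows/etl.py:etl\")\n>>> my_flow()  # run it like any other flow\n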

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
def load_flow_from_entrypoint(entrypoint: str) -> Flow:\n\"\"\"\n    Extract a flow object from a script at an entrypoint by running all of the code in the file.\n\n    Args:\n        entrypoint: a string in the format `<path_to_script>:<flow_func_name>`\n\n    Returns:\n        The flow object from the script\n\n    Raises:\n        FlowScriptError: If an exception is encountered while running the script\n        MissingFlowError: If the flow function specified in the entrypoint does not exist\n    \"\"\"\n    with PrefectObjectRegistry(\n        block_code_execution=True,\n        capture_failures=True,\n    ):\n        path, func_name = entrypoint.split(\":\")\n        try:\n            flow = import_object(entrypoint)\n        except AttributeError as exc:\n            raise MissingFlowError(\n                f\"Flow function with name {func_name!r} not found in {path!r}. \"\n            ) from exc\n\n        if not isinstance(flow, Flow):\n            raise MissingFlowError(\n                f\"Function with name {func_name!r} is not a flow. Make sure that it is \"\n                \"decorated with '@flow'.\"\n            )\n\n        return flow\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flow_from_script","title":"load_flow_from_script","text":"

Extract a flow object from a script by running all of the code in the file.

If the script has multiple flows in it, a flow name must be provided to specify the flow to return.

Parameters:

Name Type Description Default path str

A path to a Python script containing flows

required flow_name str

An optional flow name to look for in the script

None

Returns:

Type Description Flow

The flow object from the script

Raises:

Type Description FlowScriptError

If an exception is encountered while running the script

MissingFlowError

If no flows exist in the iterable

MissingFlowError

If a flow name is provided and that flow does not exist

UnspecifiedFlowError

If multiple flows exist but no flow name was provided
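
For illustration, a minimal usage sketch; the script path and flow name are hypothetical:

>>> from prefect.flows import load_flow_from_script\n>>>\n>>> # select the flow named \"etl\" from a script that may define several flows\n>>> my_flow = load_flow_from_script(\"flows/etl.py\", flow_name=\"etl\")\n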

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
def load_flow_from_script(path: str, flow_name: str = None) -> Flow:\n\"\"\"\n    Extract a flow object from a script by running all of the code in the file.\n\n    If the script has multiple flows in it, a flow name must be provided to specify\n    the flow to return.\n\n    Args:\n        path: A path to a Python script containing flows\n        flow_name: An optional flow name to look for in the script\n\n    Returns:\n        The flow object from the script\n\n    Raises:\n        FlowScriptError: If an exception is encountered while running the script\n        MissingFlowError: If no flows exist in the iterable\n        MissingFlowError: If a flow name is provided and that flow does not exist\n        UnspecifiedFlowError: If multiple flows exist but no flow name was provided\n    \"\"\"\n    return select_flow(\n        load_flows_from_script(path),\n        flow_name=flow_name,\n        from_message=f\"in script '{path}'\",\n    )\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flow_from_text","title":"load_flow_from_text","text":"

Load a flow from a text script.

The script will be written to a temporary local file path so errors can refer to line numbers and contextual tracebacks can be provided.
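
For illustration, a minimal usage sketch; the inline script is hypothetical, and flow_name=\"hello\" assumes the flow name is inferred from the function name:

>>> from prefect.flows import load_flow_from_text\n>>>\n>>> script = \"from prefect import flow\\n\\n@flow\\ndef hello():\\n    pass\\n\"\n>>> my_flow = load_flow_from_text(script, flow_name=\"hello\")\n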

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
def load_flow_from_text(script_contents: AnyStr, flow_name: str):\n\"\"\"\n    Load a flow from a text script.\n\n    The script will be written to a temporary local file path so errors can refer\n    to line numbers and contextual tracebacks can be provided.\n    \"\"\"\n    with NamedTemporaryFile(\n        mode=\"wt\" if isinstance(script_contents, str) else \"wb\",\n        prefix=f\"flow-script-{flow_name}\",\n        suffix=\".py\",\n        delete=False,\n    ) as tmpfile:\n        tmpfile.write(script_contents)\n        tmpfile.flush()\n    try:\n        flow = load_flow_from_script(tmpfile.name, flow_name=flow_name)\n    finally:\n        # windows compat\n        tmpfile.close()\n        os.remove(tmpfile.name)\n    return flow\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flows_from_script","title":"load_flows_from_script","text":"

Load all flow objects from the given python script. All of the code in the file will be executed.

Returns:

Type Description List[Flow]

A list of flows

Raises:

Type Description FlowScriptError

If an exception is encountered while running the script
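
For illustration, a minimal usage sketch; the script path is hypothetical:

>>> from prefect.flows import load_flows_from_script\n>>>\n>>> # execute the script and collect every flow it defines\n>>> flows = load_flows_from_script(\"flows/etl.py\")\n>>> [f.name for f in flows]\n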

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
def load_flows_from_script(path: str) -> List[Flow]:\n\"\"\"\n    Load all flow objects from the given python script. All of the code in the file\n    will be executed.\n\n    Returns:\n        A list of flows\n\n    Raises:\n        FlowScriptError: If an exception is encountered while running the script\n    \"\"\"\n    return registry_from_script(path).get_instances(Flow)\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.select_flow","title":"select_flow","text":"

Select the only flow in an iterable or a flow specified by name.

Returns:

A single flow object

Raises:

Type Description MissingFlowError

If no flows exist in the iterable

MissingFlowError

If a flow name is provided and that flow does not exist

UnspecifiedFlowError

If multiple flows exist but no flow name was provided
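
For illustration, a minimal sketch combining select_flow with load_flows_from_script; the script path and flow name are hypothetical:

>>> from prefect.flows import load_flows_from_script, select_flow\n>>>\n>>> flows = load_flows_from_script(\"flows/etl.py\")\n>>> my_flow = select_flow(flows, flow_name=\"etl\")\n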

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
def select_flow(\n    flows: Iterable[Flow], flow_name: str = None, from_message: str = None\n) -> Flow:\n\"\"\"\n    Select the only flow in an iterable or a flow specified by name.\n\n    Returns\n        A single flow object\n\n    Raises:\n        MissingFlowError: If no flows exist in the iterable\n        MissingFlowError: If a flow name is provided and that flow does not exist\n        UnspecifiedFlowError: If multiple flows exist but no flow name was provided\n    \"\"\"\n    # Convert to flows by name\n    flows = {f.name: f for f in flows}\n\n    # Add a leading space if given, otherwise use an empty string\n    from_message = (\" \" + from_message) if from_message else \"\"\n    if not flows:\n        raise MissingFlowError(f\"No flows found{from_message}.\")\n\n    elif flow_name and flow_name not in flows:\n        raise MissingFlowError(\n            f\"Flow {flow_name!r} not found{from_message}. \"\n            f\"Found the following flows: {listrepr(flows.keys())}. \"\n            \"Check to make sure that your flow function is decorated with `@flow`.\"\n        )\n\n    elif not flow_name and len(flows) > 1:\n        raise UnspecifiedFlowError(\n            (\n                f\"Found {len(flows)} flows{from_message}:\"\n                f\" {listrepr(sorted(flows.keys()))}. Specify a flow name to select a\"\n                \" flow.\"\n            ),\n        )\n\n    if flow_name:\n        return flows[flow_name]\n    else:\n        return list(flows.values())[0]\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/futures/","title":"prefect.futures","text":"","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures","title":"prefect.futures","text":"

Futures represent the execution of a task and allow retrieval of the task run's state.

This module contains the definition for futures as well as utilities for resolving futures in nested data structures.

","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture","title":"PrefectFuture","text":"

Bases: Generic[R, A]

Represents the result of a computation happening in a task runner.

When tasks are called, they are submitted to a task runner which creates a future for access to the state and result of the task.

Examples:

Define a task that returns a string

>>> from prefect import flow, task\n>>> @task\n>>> def my_task() -> str:\n>>>     return \"hello\"\n

Calls of this task in a flow will return a future

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()  # PrefectFuture[str, Sync] includes result type\n>>>     future.task_run.id  # UUID for the task run\n

Wait for the task to complete

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     final_state = future.wait()\n

Wait N seconds for the task to complete

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     final_state = future.wait(0.1)\n>>>     if final_state:\n>>>         ... # Task done\n>>>     else:\n>>>         ... # Task not done yet\n

Wait for a task to complete and retrieve its result

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     result = future.result()\n>>>     assert result == \"hello\"\n

Wait N seconds for a task to complete and retrieve its result

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     result = future.result(timeout=5)\n>>>     assert result == \"hello\"\n

Retrieve the state of a task without waiting for completion

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     state = future.get_state()\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/futures.py
class PrefectFuture(Generic[R, A]):\n\"\"\"\n    Represents the result of a computation happening in a task runner.\n\n    When tasks are called, they are submitted to a task runner which creates a future\n    for access to the state and result of the task.\n\n    Examples:\n        Define a task that returns a string\n\n        >>> from prefect import flow, task\n        >>> @task\n        >>> def my_task() -> str:\n        >>>     return \"hello\"\n\n        Calls of this task in a flow will return a future\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()  # PrefectFuture[str, Sync] includes result type\n        >>>     future.task_run.id  # UUID for the task run\n\n        Wait for the task to complete\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     final_state = future.wait()\n\n        Wait N sconds for the task to complete\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     final_state = future.wait(0.1)\n        >>>     if final_state:\n        >>>         ... # Task done\n        >>>     else:\n        >>>         ... # Task not done yet\n\n        Wait for a task to complete and retrieve its result\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     result = future.result()\n        >>>     assert result == \"hello\"\n\n        Wait N seconds for a task to complete and retrieve its result\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     result = future.result(timeout=5)\n        >>>     assert result == \"hello\"\n\n        Retrieve the state of a task without waiting for completion\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     state = future.get_state()\n    \"\"\"\n\n    def __init__(\n        self,\n        name: str,\n        key: UUID,\n        task_runner: \"BaseTaskRunner\",\n        asynchronous: A = True,\n        _final_state: State[R] = None,  # Exposed for testing\n    ) -> None:\n        self.key = key\n        self.name = name\n        self.asynchronous = asynchronous\n        self.task_run = None\n        self._final_state = _final_state\n        self._exception: Optional[Exception] = None\n        self._task_runner = task_runner\n        self._submitted = anyio.Event()\n\n        self._loop = asyncio.get_running_loop()\n\n    @overload\n    def wait(\n        self: \"PrefectFuture[R, Async]\", timeout: None = None\n    ) -> Awaitable[State[R]]:\n        ...\n\n    @overload\n    def wait(self: \"PrefectFuture[R, Sync]\", timeout: None = None) -> State[R]:\n        ...\n\n    @overload\n    def wait(\n        self: \"PrefectFuture[R, Async]\", timeout: float\n    ) -> Awaitable[Optional[State[R]]]:\n        ...\n\n    @overload\n    def wait(self: \"PrefectFuture[R, Sync]\", timeout: float) -> Optional[State[R]]:\n        ...\n\n    def wait(self, timeout=None):\n\"\"\"\n        Wait for the run to finish and return the final state\n\n        If the timeout is reached before the run reaches a final state,\n        `None` is returned.\n        \"\"\"\n        wait = create_call(self._wait, timeout=timeout)\n        if self.asynchronous:\n            return from_async.call_soon_in_loop_thread(wait).aresult()\n        else:\n            # type checking cannot handle the overloaded timeout passing\n            return 
from_sync.call_soon_in_loop_thread(wait).result()  # type: ignore\n\n    @overload\n    async def _wait(self, timeout: None = None) -> State[R]:\n        ...\n\n    @overload\n    async def _wait(self, timeout: float) -> Optional[State[R]]:\n        ...\n\n    async def _wait(self, timeout=None):\n\"\"\"\n        Async implementation for `wait`\n        \"\"\"\n        await self._wait_for_submission()\n\n        if self._final_state:\n            return self._final_state\n\n        self._final_state = await self._task_runner.wait(self.key, timeout)\n        return self._final_state\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Sync]\",\n        timeout: float = None,\n        raise_on_failure: bool = True,\n    ) -> R:\n        ...\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Sync]\",\n        timeout: float = None,\n        raise_on_failure: bool = False,\n    ) -> Union[R, Exception]:\n        ...\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Async]\",\n        timeout: float = None,\n        raise_on_failure: bool = True,\n    ) -> Awaitable[R]:\n        ...\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Async]\",\n        timeout: float = None,\n        raise_on_failure: bool = False,\n    ) -> Awaitable[Union[R, Exception]]:\n        ...\n\n    def result(self, timeout: float = None, raise_on_failure: bool = True):\n\"\"\"\n        Wait for the run to finish and return the final state.\n\n        If the timeout is reached before the run reaches a final state, a `TimeoutError`\n        will be raised.\n\n        If `raise_on_failure` is `True` and the task run failed, the task run's\n        exception will be raised.\n        \"\"\"\n        result = create_call(\n            self._result, timeout=timeout, raise_on_failure=raise_on_failure\n        )\n        if self.asynchronous:\n            return from_async.call_soon_in_loop_thread(result).aresult()\n        else:\n            return from_sync.call_soon_in_loop_thread(result).result()\n\n    async def _result(self, timeout: float = None, raise_on_failure: bool = True):\n\"\"\"\n        Async implementation of `result`\n        \"\"\"\n        final_state = await self._wait(timeout=timeout)\n        if not final_state:\n            raise TimeoutError(\"Call timed out before task finished.\")\n        return await final_state.result(raise_on_failure=raise_on_failure, fetch=True)\n\n    @overload\n    def get_state(\n        self: \"PrefectFuture[R, Async]\", client: PrefectClient = None\n    ) -> Awaitable[State[R]]:\n        ...\n\n    @overload\n    def get_state(\n        self: \"PrefectFuture[R, Sync]\", client: PrefectClient = None\n    ) -> State[R]:\n        ...\n\n    def get_state(self, client: PrefectClient = None):\n\"\"\"\n        Get the current state of the task run.\n        \"\"\"\n        if self.asynchronous:\n            return cast(Awaitable[State[R]], self._get_state(client=client))\n        else:\n            return cast(State[R], sync(self._get_state, client=client))\n\n    @inject_client\n    async def _get_state(self, client: PrefectClient = None) -> State[R]:\n        assert client is not None  # always injected\n\n        # We must wait for the task run id to be populated\n        await self._wait_for_submission()\n\n        task_run = await client.read_task_run(self.task_run.id)\n\n        if not task_run:\n            raise RuntimeError(\"Future has no associated task run in the server.\")\n\n        # Update the task 
run reference\n        self.task_run = task_run\n        return task_run.state\n\n    async def _wait_for_submission(self):\n        await run_coroutine_in_loop_from_async(self._loop, self._submitted.wait())\n\n    def __hash__(self) -> int:\n        return hash(self.key)\n\n    def __repr__(self) -> str:\n        return f\"PrefectFuture({self.name!r})\"\n\n    def __bool__(self) -> bool:\n        warnings.warn(\n            (\n                \"A 'PrefectFuture' from a task call was cast to a boolean; \"\n                \"did you mean to check the result of the task instead? \"\n                \"e.g. `if my_task().result(): ...`\"\n            ),\n            stacklevel=2,\n        )\n        return True\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture.get_state","title":"get_state","text":"

Get the current state of the task run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/futures.py
def get_state(self, client: PrefectClient = None):\n\"\"\"\n    Get the current state of the task run.\n    \"\"\"\n    if self.asynchronous:\n        return cast(Awaitable[State[R]], self._get_state(client=client))\n    else:\n        return cast(State[R], sync(self._get_state, client=client))\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture.result","title":"result","text":"

Wait for the run to finish and return the final state.

If the timeout is reached before the run reaches a final state, a TimeoutError will be raised.

If raise_on_failure is True and the task run failed, the task run's exception will be raised.
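
For illustration, retrieve the outcome without raising when the task fails; this sketch reuses the my_task example shown above:

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     outcome = future.result(raise_on_failure=False)\n>>>     if isinstance(outcome, Exception):\n>>>         ...  # handle the task's exception instead of raising it\n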

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/futures.py
def result(self, timeout: float = None, raise_on_failure: bool = True):\n\"\"\"\n    Wait for the run to finish and return the final state.\n\n    If the timeout is reached before the run reaches a final state, a `TimeoutError`\n    will be raised.\n\n    If `raise_on_failure` is `True` and the task run failed, the task run's\n    exception will be raised.\n    \"\"\"\n    result = create_call(\n        self._result, timeout=timeout, raise_on_failure=raise_on_failure\n    )\n    if self.asynchronous:\n        return from_async.call_soon_in_loop_thread(result).aresult()\n    else:\n        return from_sync.call_soon_in_loop_thread(result).result()\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture.wait","title":"wait","text":"

Wait for the run to finish and return the final state

If the timeout is reached before the run reaches a final state, None is returned.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/futures.py
def wait(self, timeout=None):\n\"\"\"\n    Wait for the run to finish and return the final state\n\n    If the timeout is reached before the run reaches a final state,\n    `None` is returned.\n    \"\"\"\n    wait = create_call(self._wait, timeout=timeout)\n    if self.asynchronous:\n        return from_async.call_soon_in_loop_thread(wait).aresult()\n    else:\n        # type checking cannot handle the overloaded timeout passing\n        return from_sync.call_soon_in_loop_thread(wait).result()  # type: ignore\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.call_repr","title":"call_repr","text":"

Generate a repr for a function call as \"fn_name(arg_value, kwarg_name=kwarg_value)\"
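
For illustration, a minimal sketch; the function add is hypothetical:

>>> from prefect.futures import call_repr\n>>>\n>>> def add(x, y):\n>>>     return x + y\n>>>\n>>> call_repr(add, 1, y=2)  # returns \"add(1, y=2)\"\n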

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/futures.py
def call_repr(__fn: Callable, *args: Any, **kwargs: Any) -> str:\n\"\"\"\n    Generate a repr for a function call as \"fn_name(arg_value, kwarg_name=kwarg_value)\"\n    \"\"\"\n\n    name = __fn.__name__\n\n    # TODO: If this computation is concerningly expensive, we can iterate checking the\n    #       length at each arg or avoid calling `repr` on args with large amounts of\n    #       data\n    call_args = \", \".join(\n        [repr(arg) for arg in args]\n        + [f\"{key}={repr(val)}\" for key, val in kwargs.items()]\n    )\n\n    # Enforce a maximum length\n    if len(call_args) > 100:\n        call_args = call_args[:100] + \"...\"\n\n    return f\"{name}({call_args})\"\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.resolve_futures_to_data","title":"resolve_futures_to_data async","text":"

Given a Python built-in collection, recursively find PrefectFutures and build a new collection with the same structure with futures resolved to their results. Resolving futures to their results may wait for execution to complete and require communication with the API.

Unsupported object types will be returned without modification.
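
For illustration, a minimal sketch inside an async flow; the task fetch is hypothetical, and the awaited submit pattern assumes an async task:

>>> from prefect import flow, task\n>>> from prefect.futures import resolve_futures_to_data\n>>>\n>>> @task\n>>> async def fetch(x):\n>>>     return x * 2\n>>>\n>>> @flow\n>>> async def my_flow():\n>>>     futures = {\"a\": await fetch.submit(1), \"b\": await fetch.submit(2)}\n>>>     return await resolve_futures_to_data(futures)  # {\"a\": 2, \"b\": 4}\n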

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/futures.py
async def resolve_futures_to_data(\n    expr: Union[PrefectFuture[R, Any], Any],\n    raise_on_failure: bool = True,\n) -> Union[R, Any]:\n\"\"\"\n    Given a Python built-in collection, recursively find `PrefectFutures` and build a\n    new collection with the same structure with futures resolved to their results.\n    Resolving futures to their results may wait for execution to complete and require\n    communication with the API.\n\n    Unsupported object types will be returned without modification.\n    \"\"\"\n    futures: Set[PrefectFuture] = set()\n\n    maybe_expr = visit_collection(\n        expr,\n        visit_fn=partial(_collect_futures, futures),\n        return_data=False,\n        context={},\n    )\n    if maybe_expr is not None:\n        expr = maybe_expr\n\n    # Get results\n    results = await asyncio.gather(\n        *[\n            # We must wait for the future in the thread it was created in\n            from_async.call_soon_in_loop_thread(\n                create_call(future._result, raise_on_failure=raise_on_failure)\n            ).aresult()\n            for future in futures\n        ]\n    )\n\n    results_by_future = dict(zip(futures, results))\n\n    def replace_futures_with_results(expr, context):\n        # Expressions inside quotes should not be modified\n        if isinstance(context.get(\"annotation\"), quote):\n            raise StopVisiting()\n\n        if isinstance(expr, PrefectFuture):\n            return results_by_future[expr]\n        else:\n            return expr\n\n    return visit_collection(\n        expr,\n        visit_fn=replace_futures_with_results,\n        return_data=True,\n        context={},\n    )\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.resolve_futures_to_states","title":"resolve_futures_to_states async","text":"

Given a Python built-in collection, recursively find PrefectFutures and build a new collection with the same structure with futures resolved to their final states. Resolving futures to their final states may wait for execution to complete.

Unsupported object types will be returned without modification.
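
Similarly, a minimal sketch resolving to final states rather than results; the async task fetch is hypothetical:

>>> from prefect import flow, task\n>>> from prefect.futures import resolve_futures_to_states\n>>>\n>>> @task\n>>> async def fetch(x):\n>>>     return x * 2\n>>>\n>>> @flow\n>>> async def my_flow():\n>>>     futures = [await fetch.submit(3), await fetch.submit(4)]\n>>>     states = await resolve_futures_to_states(futures)\n>>>     return [state.is_completed() for state in states]\n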

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/futures.py
async def resolve_futures_to_states(\n    expr: Union[PrefectFuture[R, Any], Any]\n) -> Union[State[R], Any]:\n\"\"\"\n    Given a Python built-in collection, recursively find `PrefectFutures` and build a\n    new collection with the same structure with futures resolved to their final states.\n    Resolving futures to their final states may wait for execution to complete.\n\n    Unsupported object types will be returned without modification.\n    \"\"\"\n    futures: Set[PrefectFuture] = set()\n\n    visit_collection(\n        expr,\n        visit_fn=partial(_collect_futures, futures),\n        return_data=False,\n        context={},\n    )\n\n    # Get final states for each future\n    states = await asyncio.gather(\n        *[\n            # We must wait for the future in the thread it was created in\n            from_async.call_soon_in_loop_thread(create_call(future._wait)).aresult()\n            for future in futures\n        ]\n    )\n\n    states_by_future = dict(zip(futures, states))\n\n    def replace_futures_with_states(expr, context):\n        # Expressions inside quotes should not be modified\n        if isinstance(context.get(\"annotation\"), quote):\n            raise StopVisiting()\n\n        if isinstance(expr, PrefectFuture):\n            return states_by_future[expr]\n        else:\n            return expr\n\n    return visit_collection(\n        expr,\n        visit_fn=replace_futures_with_states,\n        return_data=True,\n        context={},\n    )\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/infrastructure/","title":"prefect.infrastructure","text":"","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure","title":"prefect.infrastructure","text":"","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainer","title":"DockerContainer","text":"

Bases: Infrastructure

Runs a command in a container.

Requires a Docker Engine to be connectable. Docker settings will be retrieved from the environment.

Click here to see a tutorial.

Attributes:

Name Type Description auto_remove bool

If set, the container will be removed on completion. Otherwise, the container will remain after exit for inspection.

command bool

A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.

env bool

Environment variables to set for the container.

image str

An optional string specifying the tag of a Docker image to use. Defaults to the Prefect image.

image_pull_policy Optional[ImagePullPolicy]

Specifies if the image should be pulled. One of 'ALWAYS', 'NEVER', 'IF_NOT_PRESENT'.

image_registry Optional[DockerRegistry]

A DockerRegistry block containing credentials to use if image is stored in a private image registry.

labels Optional[DockerRegistry]

An optional dictionary of labels, mapping name to value.

name Optional[DockerRegistry]

An optional name for the container.

network_mode Optional[str]

Set the network mode for the created container. Defaults to 'host' if a local API URL is detected, otherwise the Docker default of 'bridge' is used. If 'networks' is set, this cannot be set.

networks List[str]

An optional list of strings specifying Docker networks to connect the container to.

stream_output bool

If set, stream output from the container to local standard output.

volumes List[str]

An optional list of volume mount strings in the format of \"local_path:container_path\".

memswap_limit Union[int, str]

Total memory (memory + swap), -1 to disable swap. Should only be set if mem_limit is also set. If mem_limit is set, this defaults to allowing the container to use as much swap as memory. For example, if mem_limit is 300m and memswap_limit is not set, the container can use 600m in total of memory and swap.

mem_limit Union[float, str]

Memory limit of the created container. Accepts float values to enforce a limit in bytes or a string with a unit e.g. 100000b, 1000k, 128m, 1g. If a string is given without a unit, bytes are assumed.

privileged bool

Give extended privileges to this container.
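
For illustration, a minimal sketch that runs a one-off command in a container; the image tag and command are assumptions and can be replaced with any values valid for your environment:

>>> from prefect.infrastructure import DockerContainer\n>>>\n>>> container = DockerContainer(\n>>>     image=\"prefecthq/prefect:2-latest\",\n>>>     command=[\"echo\", \"hello\"],\n>>>     stream_output=True,\n>>> )\n>>> container.run()  # blocks until the container exits and returns a DockerContainerResult\n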

","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainer--connecting-to-a-locally-hosted-prefect-api","title":"Connecting to a locally hosted Prefect API","text":"

If using a local API URL on Linux, we will update the network mode default to 'host' to enable connectivity. If another OS is used or an alternative network mode is set, we will replace 'localhost' in the API URL with 'host.docker.internal'. Generally, this will enable connectivity, but the API URL can be provided as an environment variable to override inference in more complex use-cases.

Note, if using 'host.docker.internal' in the API URL on Linux, the API must be bound to 0.0.0.0 or the Docker IP address to allow connectivity. On macOS, this is not necessary and the API is connectable while bound to localhost.
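
For illustration, the API URL can be passed explicitly through the container environment to override inference; the URL below is an assumption for a locally hosted API:

>>> DockerContainer(\n>>>     image=\"prefecthq/prefect:2-latest\",\n>>>     env={\"PREFECT_API_URL\": \"http://host.docker.internal:4200/api\"},\n>>> )\n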

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/container.py
class DockerContainer(Infrastructure):\n\"\"\"\n    Runs a command in a container.\n\n    Requires a Docker Engine to be connectable. Docker settings will be retrieved from\n    the environment.\n\n    Click [here](https://docs.prefect.io/guides/deployment/docker) to see a tutorial.\n\n    Attributes:\n        auto_remove: If set, the container will be removed on completion. Otherwise,\n            the container will remain after exit for inspection.\n        command: A list of strings specifying the command to run in the container to\n            start the flow run. In most cases you should not override this.\n        env: Environment variables to set for the container.\n        image: An optional string specifying the tag of a Docker image to use.\n            Defaults to the Prefect image.\n        image_pull_policy: Specifies if the image should be pulled. One of 'ALWAYS',\n            'NEVER', 'IF_NOT_PRESENT'.\n        image_registry: A `DockerRegistry` block containing credentials to use if `image` is stored in a private\n            image registry.\n        labels: An optional dictionary of labels, mapping name to value.\n        name: An optional name for the container.\n        network_mode: Set the network mode for the created container. Defaults to 'host'\n            if a local API url is detected, otherwise the Docker default of 'bridge' is\n            used. If 'networks' is set, this cannot be set.\n        networks: An optional list of strings specifying Docker networks to connect the\n            container to.\n        stream_output: If set, stream output from the container to local standard output.\n        volumes: An optional list of volume mount strings in the format of\n            \"local_path:container_path\".\n        memswap_limit: Total memory (memory + swap), -1 to disable swap. Should only be\n            set if `mem_limit` is also set. If `mem_limit` is set, this defaults to\n            allowing the container to use as much swap as memory. For example, if\n            `mem_limit` is 300m and `memswap_limit` is not set, the container can use\n            600m in total of memory and swap.\n        mem_limit: Memory limit of the created container. Accepts float values to enforce\n            a limit in bytes or a string with a unit e.g. 100000b, 1000k, 128m, 1g.\n            If a string is given without a unit, bytes are assumed.\n        privileged: Give extended privileges to this container.\n\n    ## Connecting to a locally hosted Prefect API\n\n    If using a local API URL on Linux, we will update the network mode default to 'host'\n    to enable connectivity. If using another OS or an alternative network mode is used,\n    we will replace 'localhost' in the API URL with 'host.docker.internal'. Generally,\n    this will enable connectivity, but the API URL can be provided as an environment\n    variable to override inference in more complex use-cases.\n\n    Note, if using 'host.docker.internal' in the API URL on Linux, the API must be bound\n    to 0.0.0.0 or the Docker IP address to allow connectivity. On macOS, this is not\n    necessary and the API is connectable while bound to localhost.\n    \"\"\"\n\n    type: Literal[\"docker-container\"] = Field(\n        default=\"docker-container\", description=\"The type of infrastructure.\"\n    )\n    image: str = Field(\n        default_factory=get_prefect_image_name,\n        description=\"Tag of a Docker image to use. 
Defaults to the Prefect image.\",\n    )\n    image_pull_policy: Optional[ImagePullPolicy] = Field(\n        default=None, description=\"Specifies if the image should be pulled.\"\n    )\n    image_registry: Optional[DockerRegistry] = None\n    networks: List[str] = Field(\n        default_factory=list,\n        description=(\n            \"A list of strings specifying Docker networks to connect the container to.\"\n        ),\n    )\n    network_mode: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The network mode for the created container (e.g. host, bridge). If\"\n            \" 'networks' is set, this cannot be set.\"\n        ),\n    )\n    auto_remove: bool = Field(\n        default=False,\n        description=\"If set, the container will be removed on completion.\",\n    )\n    volumes: List[str] = Field(\n        default_factory=list,\n        description=(\n            \"A list of volume mount strings in the format of\"\n            ' \"local_path:container_path\".'\n        ),\n    )\n    stream_output: bool = Field(\n        default=True,\n        description=(\n            \"If set, the output will be streamed from the container to local standard\"\n            \" output.\"\n        ),\n    )\n    memswap_limit: Union[int, str] = Field(\n        default=None,\n        description=(\n            \"Total memory (memory + swap), -1 to disable swap. Should only be \"\n            \"set if `mem_limit` is also set. If `mem_limit` is set, this defaults to\"\n            \"allowing the container to use as much swap as memory. For example, if \"\n            \"`mem_limit` is 300m and `memswap_limit` is not set, the container can use \"\n            \"600m in total of memory and swap.\"\n        ),\n    )\n    mem_limit: Union[float, str] = Field(\n        default=None,\n        description=(\n            \"Memory limit of the created container. Accepts float values to enforce \"\n            \"a limit in bytes or a string with a unit e.g. 100000b, 1000k, 128m, 1g. \"\n            \"If a string is given without a unit, bytes are assumed.\"\n        ),\n    )\n    privileged: bool = Field(\n        default=False,\n        description=\"Give extended privileges to this container.\",\n    )\n\n    _block_type_name = \"Docker Container\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/2IfXXfMq66mrzJBDFFCHTp/6d8f320d9e4fc4393f045673d61ab612/Moby-logo.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainer\"\n\n    @validator(\"labels\")\n    def convert_labels_to_docker_format(cls, labels: Dict[str, str]):\n        labels = labels or {}\n        new_labels = {}\n        for name, value in labels.items():\n            if \"/\" in name:\n                namespace, key = name.split(\"/\", maxsplit=1)\n                new_namespace = \".\".join(reversed(namespace.split(\".\")))\n                new_labels[f\"{new_namespace}.{key}\"] = value\n            else:\n                new_labels[name] = value\n        return new_labels\n\n    @validator(\"volumes\")\n    def check_volume_format(cls, volumes):\n        for volume in volumes:\n            if \":\" not in volume:\n                raise ValueError(\n                    \"Invalid volume specification. 
\"\n                    f\"Expected format 'path:container_path', but got {volume!r}\"\n                )\n\n        return volumes\n\n    @sync_compatible\n    async def run(\n        self,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> Optional[bool]:\n        if not self.command:\n            raise ValueError(\"Docker container cannot be run with empty command.\")\n\n        # The `docker` library uses requests instead of an async http library so it must\n        # be run in a thread to avoid blocking the event loop.\n        container = await run_sync_in_worker_thread(self._create_and_start_container)\n        container_pid = self._get_infrastructure_pid(container_id=container.id)\n\n        # Mark as started and return the infrastructure id\n        if task_status:\n            task_status.started(container_pid)\n\n        # Monitor the container\n        container = await run_sync_in_worker_thread(\n            self._watch_container_safe, container\n        )\n\n        exit_code = container.attrs[\"State\"].get(\"ExitCode\")\n        return DockerContainerResult(\n            status_code=exit_code if exit_code is not None else -1,\n            identifier=container_pid,\n        )\n\n    async def kill(self, infrastructure_pid: str, grace_seconds: int = 30):\n        docker_client = self._get_client()\n        base_url, container_id = self._parse_infrastructure_pid(infrastructure_pid)\n\n        if docker_client.api.base_url != base_url:\n            raise InfrastructureNotAvailable(\n                \"\".join(\n                    [\n                        (\n                            f\"Unable to stop container {container_id!r}: the current\"\n                            \" Docker API \"\n                        ),\n                        (\n                            f\"URL {docker_client.api.base_url!r} does not match the\"\n                            \" expected \"\n                        ),\n                        f\"API base URL {base_url}.\",\n                    ]\n                )\n            )\n        try:\n            container = docker_client.containers.get(container_id=container_id)\n        except docker.errors.NotFound:\n            raise InfrastructureNotFound(\n                f\"Unable to stop container {container_id!r}: The container was not\"\n                \" found.\"\n            )\n\n        try:\n            container.stop(timeout=grace_seconds)\n        except Exception:\n            raise\n\n    def preview(self):\n        # TODO: build and document a more sophisticated preview\n        docker_client = self._get_client()\n        try:\n            return json.dumps(self._build_container_settings(docker_client))\n        finally:\n            docker_client.close()\n\n    def _get_infrastructure_pid(self, container_id: str) -> str:\n\"\"\"Generates a Docker infrastructure_pid string in the form of\n        `<docker_host_base_url>:<container_id>`.\n        \"\"\"\n        docker_client = self._get_client()\n        base_url = docker_client.api.base_url\n        docker_client.close()\n        return f\"{base_url}:{container_id}\"\n\n    def _parse_infrastructure_pid(self, infrastructure_pid: str) -> Tuple[str, str]:\n\"\"\"Splits a Docker infrastructure_pid into its component parts\"\"\"\n\n        # base_url can contain `:` so we only want the last item of the split\n        base_url, container_id = infrastructure_pid.rsplit(\":\", 1)\n        return base_url, str(container_id)\n\n    def _build_container_settings(\n        
self,\n        docker_client: \"DockerClient\",\n    ) -> Dict:\n        network_mode = self._get_network_mode()\n        return dict(\n            image=self.image,\n            network=self.networks[0] if self.networks else None,\n            network_mode=network_mode,\n            command=self.command,\n            environment=self._get_environment_variables(network_mode),\n            auto_remove=self.auto_remove,\n            labels={**CONTAINER_LABELS, **self.labels},\n            extra_hosts=self._get_extra_hosts(docker_client),\n            name=self._get_container_name(),\n            volumes=self.volumes,\n            mem_limit=self.mem_limit,\n            memswap_limit=self.memswap_limit,\n            privileged=self.privileged,\n        )\n\n    def _create_and_start_container(self) -> \"Container\":\n        if self.image_registry:\n            # If an image registry block was supplied, load an authenticated Docker\n            # client from the block. Otherwise, use an unauthenticated client to\n            # pull images from public registries.\n            docker_client = self.image_registry.get_docker_client()\n        else:\n            docker_client = self._get_client()\n        container_settings = self._build_container_settings(docker_client)\n\n        if self._should_pull_image(docker_client):\n            self.logger.info(f\"Pulling image {self.image!r}...\")\n            self._pull_image(docker_client)\n\n        container = self._create_container(docker_client, **container_settings)\n\n        # Add additional networks after the container is created; only one network can\n        # be attached at creation time\n        if len(self.networks) > 1:\n            for network_name in self.networks[1:]:\n                network = docker_client.networks.get(network_name)\n                network.connect(container)\n\n        # Start the container\n        container.start()\n\n        docker_client.close()\n\n        return container\n\n    def _get_image_and_tag(self) -> Tuple[str, Optional[str]]:\n        return parse_image_tag(self.image)\n\n    def _determine_image_pull_policy(self) -> ImagePullPolicy:\n\"\"\"\n        Determine the appropriate image pull policy.\n\n        1. If they specified an image pull policy, use that.\n\n        2. If they did not specify an image pull policy and gave us\n           the \"latest\" tag, use ImagePullPolicy.always.\n\n        3. If they did not specify an image pull policy and did not\n           specify a tag, use ImagePullPolicy.always.\n\n        4. 
If they did not specify an image pull policy and gave us\n           a tag other than \"latest\", use ImagePullPolicy.if_not_present.\n\n        This logic matches the behavior of Kubernetes.\n        See:https://kubernetes.io/docs/concepts/containers/images/#imagepullpolicy-defaulting\n        \"\"\"\n        if not self.image_pull_policy:\n            _, tag = self._get_image_and_tag()\n            if tag == \"latest\" or not tag:\n                return ImagePullPolicy.ALWAYS\n            return ImagePullPolicy.IF_NOT_PRESENT\n        return self.image_pull_policy\n\n    def _get_network_mode(self) -> Optional[str]:\n        # User's value takes precedence; this may collide with the incompatible options\n        # mentioned below.\n        if self.network_mode:\n            if sys.platform != \"linux\" and self.network_mode == \"host\":\n                warnings.warn(\n                    f\"{self.network_mode!r} network mode is not supported on platform \"\n                    f\"{sys.platform!r} and may not work as intended.\"\n                )\n            return self.network_mode\n\n        # Network mode is not compatible with networks or ports (we do not support ports\n        # yet though)\n        if self.networks:\n            return None\n\n        # Check for a local API connection\n        api_url = self.env.get(\"PREFECT_API_URL\", PREFECT_API_URL.value())\n\n        if api_url:\n            try:\n                _, netloc, _, _, _, _ = urllib.parse.urlparse(api_url)\n            except Exception as exc:\n                warnings.warn(\n                    f\"Failed to parse host from API URL {api_url!r} with exception: \"\n                    f\"{exc}\\nThe network mode will not be inferred.\"\n                )\n                return None\n\n            host = netloc.split(\":\")[0]\n\n            # If using a locally hosted API, use a host network on linux\n            if sys.platform == \"linux\" and (host == \"127.0.0.1\" or host == \"localhost\"):\n                return \"host\"\n\n        # Default to unset\n        return None\n\n    def _should_pull_image(self, docker_client: \"DockerClient\") -> bool:\n\"\"\"\n        Decide whether we need to pull the Docker image.\n        \"\"\"\n        image_pull_policy = self._determine_image_pull_policy()\n\n        if image_pull_policy is ImagePullPolicy.ALWAYS:\n            return True\n        elif image_pull_policy is ImagePullPolicy.NEVER:\n            return False\n        elif image_pull_policy is ImagePullPolicy.IF_NOT_PRESENT:\n            try:\n                # NOTE: images.get() wants the tag included with the image\n                # name, while images.pull() wants them split.\n                docker_client.images.get(self.image)\n            except docker.errors.ImageNotFound:\n                self.logger.debug(f\"Could not find Docker image locally: {self.image}\")\n                return True\n        return False\n\n    def _pull_image(self, docker_client: \"DockerClient\"):\n\"\"\"\n        Pull the image we're going to use to create the container.\n        \"\"\"\n        image, tag = self._get_image_and_tag()\n\n        return docker_client.images.pull(image, tag)\n\n    def _create_container(self, docker_client: \"DockerClient\", **kwargs) -> \"Container\":\n\"\"\"\n        Create a docker container with retries on name conflicts.\n\n        If the container already exists with the given name, an incremented index is\n        added.\n        \"\"\"\n        # Create the container with retries on name 
conflicts (with an incremented idx)\n        index = 0\n        container = None\n        name = original_name = kwargs.pop(\"name\")\n\n        while not container:\n            from docker.errors import APIError\n\n            try:\n                display_name = repr(name) if name else \"with auto-generated name\"\n                self.logger.info(f\"Creating Docker container {display_name}...\")\n                container = docker_client.containers.create(name=name, **kwargs)\n            except APIError as exc:\n                if \"Conflict\" in str(exc) and \"container name\" in str(exc):\n                    self.logger.info(\n                        f\"Docker container name {display_name} already exists; \"\n                        \"retrying...\"\n                    )\n                    index += 1\n                    name = f\"{original_name}-{index}\"\n                else:\n                    raise\n\n        self.logger.info(\n            f\"Docker container {container.name!r} has status {container.status!r}\"\n        )\n        return container\n\n    def _watch_container_safe(self, container: \"Container\") -> \"Container\":\n        # Monitor the container capturing the latest snapshot while capturing\n        # not found errors\n        docker_client = self._get_client()\n\n        try:\n            for latest_container in self._watch_container(docker_client, container.id):\n                container = latest_container\n        except docker.errors.NotFound:\n            # The container was removed during watching\n            self.logger.warning(\n                f\"Docker container {container.name} was removed before we could wait \"\n                \"for its completion.\"\n            )\n        finally:\n            docker_client.close()\n\n        return container\n\n    def _watch_container(\n        self, docker_client: \"DockerClient\", container_id: str\n    ) -> Generator[None, None, \"Container\"]:\n        container: \"Container\" = docker_client.containers.get(container_id)\n\n        status = container.status\n        self.logger.info(\n            f\"Docker container {container.name!r} has status {container.status!r}\"\n        )\n        yield container\n\n        if self.stream_output:\n            try:\n                for log in container.logs(stream=True):\n                    log: bytes\n                    print(log.decode().rstrip())\n            except docker.errors.APIError as exc:\n                if \"marked for removal\" in str(exc):\n                    self.logger.warning(\n                        f\"Docker container {container.name} was marked for removal\"\n                        \" before logs could be retrieved. Output will not be\"\n                        \" streamed. 
\"\n                    )\n                else:\n                    self.logger.exception(\n                        \"An unexpected Docker API error occured while streaming output \"\n                        f\"from container {container.name}.\"\n                    )\n\n            container.reload()\n            if container.status != status:\n                self.logger.info(\n                    f\"Docker container {container.name!r} has status\"\n                    f\" {container.status!r}\"\n                )\n            yield container\n\n        container.wait()\n        self.logger.info(\n            f\"Docker container {container.name!r} has status {container.status!r}\"\n        )\n        yield container\n\n    def _get_client(self):\n        try:\n            with warnings.catch_warnings():\n                # Silence warnings due to use of deprecated methods within dockerpy\n                # See https://github.com/docker/docker-py/pull/2931\n                warnings.filterwarnings(\n                    \"ignore\",\n                    message=\"distutils Version classes are deprecated.*\",\n                    category=DeprecationWarning,\n                )\n\n                docker_client = docker.from_env()\n\n        except docker.errors.DockerException as exc:\n            raise RuntimeError(\"Could not connect to Docker.\") from exc\n\n        return docker_client\n\n    def _get_container_name(self) -> Optional[str]:\n\"\"\"\n        Generates a container name to match the configured name, ensuring it is Docker\n        compatible.\n        \"\"\"\n        # Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+` in the end\n        if not self.name:\n            return None\n\n        return (\n            slugify(\n                self.name,\n                lowercase=False,\n                # Docker does not limit length but URL limits apply eventually so\n                # limit the length for safety\n                max_length=250,\n                # Docker allows these characters for container names\n                regex_pattern=r\"[^a-zA-Z0-9_.-]+\",\n            ).lstrip(\n                # Docker does not allow leading underscore, dash, or period\n                \"_-.\"\n            )\n            # Docker does not allow 0 character names so cast to null if the name is\n            # empty after slufification\n            or None\n        )\n\n    def _get_extra_hosts(self, docker_client) -> Dict[str, str]:\n\"\"\"\n        A host.docker.internal -> host-gateway mapping is necessary for communicating\n        with the API on Linux machines. Docker Desktop on macOS will automatically\n        already have this mapping.\n        \"\"\"\n        if sys.platform == \"linux\" and (\n            # Do not warn if the user has specified a host manually that does not use\n            # a local address\n            \"PREFECT_API_URL\" not in self.env\n            or re.search(\n                \".*(localhost)|(127.0.0.1)|(host.docker.internal).*\",\n                self.env[\"PREFECT_API_URL\"],\n            )\n        ):\n            user_version = packaging.version.parse(\n                format_outlier_version_name(docker_client.version()[\"Version\"])\n            )\n            required_version = packaging.version.parse(\"20.10.0\")\n\n            if user_version < required_version:\n                warnings.warn(\n                    \"`host.docker.internal` could not be automatically resolved to\"\n                    \" your local ip address. 
This feature is not supported on Docker\"\n                    f\" Engine v{user_version}, upgrade to v{required_version}+ if you\"\n                    \" encounter issues.\"\n                )\n                return {}\n            else:\n                # Compatibility for linux -- https://github.com/docker/cli/issues/2290\n                # Only supported by Docker v20.10.0+ which is our minimum recommend version\n                return {\"host.docker.internal\": \"host-gateway\"}\n\n    def _get_environment_variables(self, network_mode):\n        # If the API URL has been set by the base environment rather than the by the\n        # user, update the value to ensure connectivity when using a bridge network by\n        # updating local connections to use the docker internal host unless the\n        # network mode is \"host\" where localhost is available already.\n        env = {**self._base_environment(), **self.env}\n\n        if (\n            \"PREFECT_API_URL\" in env\n            and \"PREFECT_API_URL\" not in self.env\n            and network_mode != \"host\"\n        ):\n            env[\"PREFECT_API_URL\"] = (\n                env[\"PREFECT_API_URL\"]\n                .replace(\"localhost\", \"host.docker.internal\")\n                .replace(\"127.0.0.1\", \"host.docker.internal\")\n            )\n\n        # Drop null values allowing users to \"unset\" variables\n        return {key: value for key, value in env.items() if value is not None}\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainerResult","title":"DockerContainerResult","text":"

Bases: InfrastructureResult

Contains information about a completed Docker container
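
For example, a minimal sketch (assuming the docker Python package is installed and a Docker daemon is reachable) of running a container and inspecting the result it produces:

from prefect.infrastructure import DockerContainer\n\n# Run a short-lived container and inspect the result; `run` is sync-compatible\n# (requires a reachable Docker daemon)\nresult = DockerContainer(command=[\"echo\", \"hello\"], stream_output=True).run()\nprint(result.identifier)   # \"<docker_host_base_url>:<container_id>\"\nprint(result.status_code)  # 0 indicates a clean exit\n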

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/container.py
class DockerContainerResult(InfrastructureResult):\n\"\"\"Contains information about a completed Docker container\"\"\"\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure","title":"Infrastructure","text":"

Bases: Block, abc.ABC

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/base.py
class Infrastructure(Block, abc.ABC):\n    _block_schema_capabilities = [\"run-infrastructure\"]\n\n    type: str\n\n    env: Dict[str, Optional[str]] = pydantic.Field(\n        default_factory=dict,\n        title=\"Environment\",\n        description=\"Environment variables to set in the configured infrastructure.\",\n    )\n    labels: Dict[str, str] = pydantic.Field(\n        default_factory=dict,\n        description=\"Labels applied to the infrastructure for metadata purposes.\",\n    )\n    name: Optional[str] = pydantic.Field(\n        default=None,\n        description=\"Name applied to the infrastructure for identification.\",\n    )\n    command: Optional[List[str]] = pydantic.Field(\n        default=None,\n        description=\"The command to run in the infrastructure.\",\n    )\n\n    @abc.abstractmethod\n    async def run(\n        self,\n        task_status: anyio.abc.TaskStatus = None,\n    ) -> InfrastructureResult:\n\"\"\"\n        Run the infrastructure.\n\n        If provided a `task_status`, the status will be reported as started when the\n        infrastructure is successfully created. The status return value will be an\n        identifier for the infrastructure.\n\n        The call will then monitor the created infrastructure, returning a result at\n        the end containing a status code indicating if the infrastructure exited cleanly\n        or encountered an error.\n        \"\"\"\n        # Note: implementations should include `sync_compatible`\n\n    @abc.abstractmethod\n    def preview(self) -> str:\n\"\"\"\n        View a preview of the infrastructure that would be run.\n        \"\"\"\n\n    @property\n    def logger(self):\n        return get_logger(f\"prefect.infrastructure.{self.type}\")\n\n    @classmethod\n    def _base_environment(cls) -> Dict[str, str]:\n\"\"\"\n        Environment variables that should be passed to all created infrastructure.\n\n        These values should be overridable with the `env` field.\n        \"\"\"\n        return get_current_settings().to_environment_variables(exclude_unset=True)\n\n    def prepare_for_flow_run(\n        self: Self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"Deployment\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ) -> Self:\n\"\"\"\n        Return an infrastructure block that is prepared to execute a flow run.\n        \"\"\"\n        if deployment is not None:\n            deployment_labels = self._base_deployment_labels(deployment)\n        else:\n            deployment_labels = {}\n\n        if flow is not None:\n            flow_labels = self._base_flow_labels(flow)\n        else:\n            flow_labels = {}\n\n        return self.copy(\n            update={\n                \"env\": {**self._base_flow_run_environment(flow_run), **self.env},\n                \"labels\": {\n                    **self._base_flow_run_labels(flow_run),\n                    **deployment_labels,\n                    **flow_labels,\n                    **self.labels,\n                },\n                \"name\": self.name or flow_run.name,\n                \"command\": self.command or self._base_flow_run_command(),\n            }\n        )\n\n    @staticmethod\n    def _base_flow_run_command() -> List[str]:\n\"\"\"\n        Generate a command for a flow run job.\n        \"\"\"\n        return [\"python\", \"-m\", \"prefect.engine\"]\n\n    @staticmethod\n    def _base_flow_run_labels(flow_run: \"FlowRun\") -> Dict[str, str]:\n\"\"\"\n        Generate a dictionary of labels for a flow 
run job.\n        \"\"\"\n        return {\n            \"prefect.io/flow-run-id\": str(flow_run.id),\n            \"prefect.io/flow-run-name\": flow_run.name,\n            \"prefect.io/version\": prefect.__version__,\n        }\n\n    @staticmethod\n    def _base_flow_run_environment(flow_run: \"FlowRun\") -> Dict[str, str]:\n\"\"\"\n        Generate a dictionary of environment variables for a flow run job.\n        \"\"\"\n        environment = {}\n        environment[\"PREFECT__FLOW_RUN_ID\"] = flow_run.id.hex\n        return environment\n\n    @staticmethod\n    def _base_deployment_labels(deployment: \"Deployment\") -> Dict[str, str]:\n        labels = {\n            \"prefect.io/deployment-name\": deployment.name,\n        }\n        if deployment.updated is not None:\n            labels[\"prefect.io/deployment-updated\"] = deployment.updated.in_timezone(\n                \"utc\"\n            ).to_iso8601_string()\n        return labels\n\n    @staticmethod\n    def _base_flow_labels(flow: \"Flow\") -> Dict[str, str]:\n        return {\n            \"prefect.io/flow-name\": flow.name,\n        }\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.base.Infrastructure.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

Return an infrastructure block that is prepared to execute a flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/base.py
def prepare_for_flow_run(\n    self: Self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"Deployment\"] = None,\n    flow: Optional[\"Flow\"] = None,\n) -> Self:\n\"\"\"\n    Return an infrastructure block that is prepared to execute a flow run.\n    \"\"\"\n    if deployment is not None:\n        deployment_labels = self._base_deployment_labels(deployment)\n    else:\n        deployment_labels = {}\n\n    if flow is not None:\n        flow_labels = self._base_flow_labels(flow)\n    else:\n        flow_labels = {}\n\n    return self.copy(\n        update={\n            \"env\": {**self._base_flow_run_environment(flow_run), **self.env},\n            \"labels\": {\n                **self._base_flow_run_labels(flow_run),\n                **deployment_labels,\n                **flow_labels,\n                **self.labels,\n            },\n            \"name\": self.name or flow_run.name,\n            \"command\": self.command or self._base_flow_run_command(),\n        }\n    )\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.base.Infrastructure.preview","title":"preview abstractmethod","text":"

View a preview of the infrastructure that would be run.
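
For example, a minimal sketch using the KubernetesJob implementation, which renders the manifest without contacting a cluster:

from prefect.infrastructure import KubernetesJob\n\n# Render the Job manifest that would be submitted, without creating anything\nprint(KubernetesJob(namespace=\"default\").preview())\n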

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/base.py
@abc.abstractmethod\ndef preview(self) -> str:\n\"\"\"\n    View a preview of the infrastructure that would be run.\n    \"\"\"\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.base.Infrastructure.run","title":"run abstractmethod async","text":"

Run the infrastructure.

If provided a task_status, the status will be reported as started when the infrastructure is successfully created. The status return value will be an identifier for the infrastructure.

The call will then monitor the created infrastructure, returning a result at the end containing a status code indicating if the infrastructure exited cleanly or encountered an error.
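
For example, a minimal sketch using the Process implementation; because run is sync-compatible, it can be called directly from synchronous code:

from prefect.infrastructure import Process\n\n# Run a one-off local command and inspect the result\nresult = Process(command=[\"echo\", \"hello\"], stream_output=True).run()\nprint(result.identifier)   # infrastructure identifier reported once started\nprint(result.status_code)  # 0 indicates a clean exit\n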

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/base.py
@abc.abstractmethod\nasync def run(\n    self,\n    task_status: anyio.abc.TaskStatus = None,\n) -> InfrastructureResult:\n\"\"\"\n    Run the infrastructure.\n\n    If provided a `task_status`, the status will be reported as started when the\n    infrastructure is successfully created. The status return value will be an\n    identifier for the infrastructure.\n\n    The call will then monitor the created infrastructure, returning a result at\n    the end containing a status code indicating if the infrastructure exited cleanly\n    or encountered an error.\n    \"\"\"\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesClusterConfig","title":"KubernetesClusterConfig","text":"

Bases: Block

Stores configuration for interaction with Kubernetes clusters.

See from_file for creation.

Attributes:

config (Dict): The entire loaded YAML contents of a kubectl config file

context_name (str): The name of the kubectl context to use

Example

Load a saved Kubernetes cluster config:

from prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/kubernetes.py
class KubernetesClusterConfig(Block):\n\"\"\"\n    Stores configuration for interaction with Kubernetes clusters.\n\n    See `from_file` for creation.\n\n    Attributes:\n        config: The entire loaded YAML contents of a kubectl config file\n        context_name: The name of the kubectl context to use\n\n    Example:\n        Load a saved Kubernetes cluster config:\n        ```python\n        from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n        cluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Kubernetes Cluster Config\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/1zrSeY8DZ1MJZs2BAyyyGk/20445025358491b8b72600b8f996125b/Kubernetes_logo_without_workmark.svg.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig\"\n\n    config: Dict = Field(\n        default=..., description=\"The entire contents of a kubectl config file.\"\n    )\n    context_name: str = Field(\n        default=..., description=\"The name of the kubectl context to use.\"\n    )\n\n    @validator(\"config\", pre=True)\n    def parse_yaml_config(cls, value):\n        if isinstance(value, str):\n            return yaml.safe_load(value)\n        return value\n\n    @classmethod\n    def from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n\"\"\"\n        Create a cluster config from the a Kubernetes config file.\n\n        By default, the current context in the default Kubernetes config file will be\n        used.\n\n        An alternative file or context may be specified.\n\n        The entire config file will be loaded and stored.\n        \"\"\"\n        kube_config = kubernetes.config.kube_config\n\n        path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n        path = path.expanduser().resolve()\n\n        # Determine the context\n        existing_contexts, current_context = kube_config.list_kube_config_contexts(\n            config_file=str(path)\n        )\n        context_names = {ctx[\"name\"] for ctx in existing_contexts}\n        if context_name:\n            if context_name not in context_names:\n                raise ValueError(\n                    f\"Context {context_name!r} not found. \"\n                    f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n                )\n        else:\n            context_name = current_context[\"name\"]\n\n        # Load the entire config file\n        config_file_contents = path.read_text()\n        config_dict = yaml.safe_load(config_file_contents)\n\n        return cls(config=config_dict, context_name=context_name)\n\n    def get_api_client(self) -> \"ApiClient\":\n\"\"\"\n        Returns a Kubernetes API client for this cluster config.\n        \"\"\"\n        return kubernetes.config.kube_config.new_client_from_config_dict(\n            config_dict=self.config, context=self.context_name\n        )\n\n    def configure_client(self) -> None:\n\"\"\"\n        Activates this cluster configuration by loading the configuration into the\n        Kubernetes Python client. After calling this, Kubernetes API clients can use\n        this config's context.\n        \"\"\"\n        kubernetes.config.kube_config.load_kube_config_from_dict(\n            config_dict=self.config, context=self.context_name\n        )\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.blocks.kubernetes.KubernetesClusterConfig.configure_client","title":"configure_client","text":"

Activates this cluster configuration by loading the configuration into the Kubernetes Python client. After calling this, Kubernetes API clients can use this config's context.
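
For example, a hedged sketch (assuming a saved block named BLOCK_NAME and a reachable cluster):

import kubernetes\n\nfrom prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\ncluster_config_block.configure_client()\n\n# Clients created afterwards use this config's context by default\nv1 = kubernetes.client.CoreV1Api()\nprint([ns.metadata.name for ns in v1.list_namespace().items])\n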

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/kubernetes.py
def configure_client(self) -> None:\n\"\"\"\n    Activates this cluster configuration by loading the configuration into the\n    Kubernetes Python client. After calling this, Kubernetes API clients can use\n    this config's context.\n    \"\"\"\n    kubernetes.config.kube_config.load_kube_config_from_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.blocks.kubernetes.KubernetesClusterConfig.from_file","title":"from_file classmethod","text":"

Create a cluster config from a Kubernetes config file.

By default, the current context in the default Kubernetes config file will be used.

An alternative file or context may be specified.

The entire config file will be loaded and stored.
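
For example, a minimal sketch (block and context names are illustrative) of loading a config and saving it as a block:

from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n# Use the current context from the default kubeconfig location\ncluster_config = KubernetesClusterConfig.from_file()\ncluster_config.save(\"my-cluster\", overwrite=True)\n\n# Or pick a specific file and context\nstaging_config = KubernetesClusterConfig.from_file(\n    path=\"~/.kube/config\", context_name=\"staging\"\n)\n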

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/kubernetes.py
@classmethod\ndef from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n\"\"\"\n    Create a cluster config from the a Kubernetes config file.\n\n    By default, the current context in the default Kubernetes config file will be\n    used.\n\n    An alternative file or context may be specified.\n\n    The entire config file will be loaded and stored.\n    \"\"\"\n    kube_config = kubernetes.config.kube_config\n\n    path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n    path = path.expanduser().resolve()\n\n    # Determine the context\n    existing_contexts, current_context = kube_config.list_kube_config_contexts(\n        config_file=str(path)\n    )\n    context_names = {ctx[\"name\"] for ctx in existing_contexts}\n    if context_name:\n        if context_name not in context_names:\n            raise ValueError(\n                f\"Context {context_name!r} not found. \"\n                f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n            )\n    else:\n        context_name = current_context[\"name\"]\n\n    # Load the entire config file\n    config_file_contents = path.read_text()\n    config_dict = yaml.safe_load(config_file_contents)\n\n    return cls(config=config_dict, context_name=context_name)\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.blocks.kubernetes.KubernetesClusterConfig.get_api_client","title":"get_api_client","text":"

Returns a Kubernetes API client for this cluster config.
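
For example, a hedged sketch (assuming a saved block named BLOCK_NAME and a reachable cluster) of passing the returned client to a Kubernetes API object without touching global configuration:

import kubernetes\n\nfrom prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\napi_client = cluster_config_block.get_api_client()\n\n# Pass the client explicitly to any Kubernetes API object\nbatch = kubernetes.client.BatchV1Api(api_client=api_client)\nprint([job.metadata.name for job in batch.list_namespaced_job(\"default\").items])\n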

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/kubernetes.py
def get_api_client(self) -> \"ApiClient\":\n\"\"\"\n    Returns a Kubernetes API client for this cluster config.\n    \"\"\"\n    return kubernetes.config.kube_config.new_client_from_config_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob","title":"KubernetesJob","text":"

Bases: Infrastructure

Runs a command as a Kubernetes Job.

For a guided tutorial, see How to use Kubernetes with Prefect. For more information, including examples for customizing the resulting manifest, see KubernetesJob infrastructure concepts.

Attributes:

cluster_config (Optional[KubernetesClusterConfig]): An optional Kubernetes cluster config to use for this job.

command (Optional[List[str]]): A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.

customizations (JsonPatch): A list of JSON 6902 patches to apply to the base Job manifest.

env (Dict[str, Optional[str]]): Environment variables to set for the container.

finished_job_ttl (Optional[int]): The number of seconds to retain jobs after completion. If set, finished jobs will be cleaned up by Kubernetes after the given delay. If None (default), jobs will need to be manually removed.

image (Optional[str]): An optional string specifying the image reference of a container image to use for the job, for example, docker.io/prefecthq/prefect:2-latest. The behavior is as described in https://kubernetes.io/docs/concepts/containers/images/#image-names. Defaults to the Prefect image.

image_pull_policy (Optional[KubernetesImagePullPolicy]): The Kubernetes image pull policy to use for job containers.

job (KubernetesManifest): The base manifest for the Kubernetes Job.

job_watch_timeout_seconds (Optional[int]): Number of seconds to wait for the job to complete before marking it as crashed. Defaults to None, which means no timeout will be enforced.

labels (Dict[str, str]): An optional dictionary of labels to add to the job.

name (Optional[str]): An optional name for the job.

namespace (Optional[str]): An optional string signifying the Kubernetes namespace to use.

pod_watch_timeout_seconds (int): Number of seconds to watch for pod creation before timing out (default 60).

service_account_name (Optional[str]): An optional string specifying which Kubernetes service account to use.

stream_output (bool): If set, stream output from the job to local standard output.
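
For example, a hedged sketch (image, namespace, and block name are illustrative) of configuring, previewing, and saving a KubernetesJob block:

from prefect.infrastructure import KubernetesJob\n\nk8s_job = KubernetesJob(\n    image=\"docker.io/prefecthq/prefect:2-latest\",\n    namespace=\"prefect\",\n    env={\"EXTRA_PIP_PACKAGES\": \"pandas\"},  # assumes the Prefect image's pip-install hook\n    finished_job_ttl=300,\n    customizations=[\n        {\n            \"op\": \"add\",\n            \"path\": \"/spec/template/spec/containers/0/resources\",\n            \"value\": {\"requests\": {\"memory\": \"512Mi\", \"cpu\": \"250m\"}},\n        }\n    ],\n)\n\nprint(k8s_job.preview())  # render the manifest without submitting it\nk8s_job.save(\"my-k8s-job\", overwrite=True)  # optionally persist as a block\n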

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/kubernetes.py
class KubernetesJob(Infrastructure):\n\"\"\"\n    Runs a command as a Kubernetes Job.\n\n    For a guided tutorial, see [How to use Kubernetes with Prefect](https://medium.com/the-prefect-blog/how-to-use-kubernetes-with-prefect-419b2e8b8cb2/).\n    For more information, including examples for customizing the resulting manifest, see [`KubernetesJob` infrastructure concepts](https://docs.prefect.io/concepts/infrastructure/#kubernetesjob).\n\n    Attributes:\n        cluster_config: An optional Kubernetes cluster config to use for this job.\n        command: A list of strings specifying the command to run in the container to\n            start the flow run. In most cases you should not override this.\n        customizations: A list of JSON 6902 patches to apply to the base Job manifest.\n        env: Environment variables to set for the container.\n        finished_job_ttl: The number of seconds to retain jobs after completion. If set, finished jobs will\n            be cleaned up by Kubernetes after the given delay. If None (default), jobs will need to be\n            manually removed.\n        image: An optional string specifying the image reference of a container image\n            to use for the job, for example, docker.io/prefecthq/prefect:2-latest. The\n            behavior is as described in https://kubernetes.io/docs/concepts/containers/images/#image-names.\n            Defaults to the Prefect image.\n        image_pull_policy: The Kubernetes image pull policy to use for job containers.\n        job: The base manifest for the Kubernetes Job.\n        job_watch_timeout_seconds: Number of seconds to wait for the job to complete\n            before marking it as crashed. Defaults to `None`, which means no timeout will be enforced.\n        labels: An optional dictionary of labels to add to the job.\n        name: An optional name for the job.\n        namespace: An optional string signifying the Kubernetes namespace to use.\n        pod_watch_timeout_seconds: Number of seconds to watch for pod creation before timing out (default 60).\n        service_account_name: An optional string specifying which Kubernetes service account to use.\n        stream_output: If set, stream output from the job to local standard output.\n    \"\"\"\n\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/1zrSeY8DZ1MJZs2BAyyyGk/20445025358491b8b72600b8f996125b/Kubernetes_logo_without_workmark.svg.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob\"\n\n    type: Literal[\"kubernetes-job\"] = Field(\n        default=\"kubernetes-job\", description=\"The type of infrastructure.\"\n    )\n    # shortcuts for the most common user-serviceable settings\n    image: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The image reference of a container image to use for the job, for example,\"\n            \" `docker.io/prefecthq/prefect:2-latest`.The behavior is as described in\"\n            \" the Kubernetes documentation and uses the latest version of Prefect by\"\n            \" default, unless an image is already present in a provided job manifest.\"\n        ),\n    )\n    namespace: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The Kubernetes namespace to use for this job. 
Defaults to 'default' \"\n            \"unless a namespace is already present in a provided job manifest.\"\n        ),\n    )\n    service_account_name: Optional[str] = Field(\n        default=None, description=\"The Kubernetes service account to use for this job.\"\n    )\n    image_pull_policy: Optional[KubernetesImagePullPolicy] = Field(\n        default=None,\n        description=\"The Kubernetes image pull policy to use for job containers.\",\n    )\n\n    # connection to a cluster\n    cluster_config: Optional[KubernetesClusterConfig] = Field(\n        default=None, description=\"The Kubernetes cluster config to use for this job.\"\n    )\n\n    # settings allowing full customization of the Job\n    job: KubernetesManifest = Field(\n        default_factory=lambda: KubernetesJob.base_job_manifest(),\n        description=\"The base manifest for the Kubernetes Job.\",\n        title=\"Base Job Manifest\",\n    )\n    customizations: JsonPatch = Field(\n        default_factory=lambda: JsonPatch([]),\n        description=\"A list of JSON 6902 patches to apply to the base Job manifest.\",\n    )\n\n    # controls the behavior of execution\n    job_watch_timeout_seconds: Optional[int] = Field(\n        default=None,\n        description=(\n            \"Number of seconds to wait for the job to complete before marking it as\"\n            \" crashed. Defaults to `None`, which means no timeout will be enforced.\"\n        ),\n    )\n    pod_watch_timeout_seconds: int = Field(\n        default=60,\n        description=\"Number of seconds to watch for pod creation before timing out.\",\n    )\n    stream_output: bool = Field(\n        default=True,\n        description=(\n            \"If set, output will be streamed from the job to local standard output.\"\n        ),\n    )\n    finished_job_ttl: Optional[int] = Field(\n        default=None,\n        description=(\n            \"The number of seconds to retain jobs after completion. If set, finished\"\n            \" jobs will be cleaned up by Kubernetes after the given delay. 
If None\"\n            \" (default), jobs will need to be manually removed.\"\n        ),\n    )\n\n    # internal-use only right now\n    _api_dns_name: Optional[str] = None  # Replaces 'localhost' in API URL\n\n    _block_type_name = \"Kubernetes Job\"\n\n    @validator(\"job\")\n    def ensure_job_includes_all_required_components(cls, value: KubernetesManifest):\n        patch = JsonPatch.from_diff(value, cls.base_job_manifest())\n        missing_paths = sorted([op[\"path\"] for op in patch if op[\"op\"] == \"add\"])\n        if missing_paths:\n            raise ValueError(\n                \"Job is missing required attributes at the following paths: \"\n                f\"{', '.join(missing_paths)}\"\n            )\n        return value\n\n    @validator(\"job\")\n    def ensure_job_has_compatible_values(cls, value: KubernetesManifest):\n        patch = JsonPatch.from_diff(value, cls.base_job_manifest())\n        incompatible = sorted(\n            [\n                f\"{op['path']} must have value {op['value']!r}\"\n                for op in patch\n                if op[\"op\"] == \"replace\"\n            ]\n        )\n        if incompatible:\n            raise ValueError(\n                \"Job has incompatble values for the following attributes: \"\n                f\"{', '.join(incompatible)}\"\n            )\n        return value\n\n    @validator(\"customizations\", pre=True)\n    def cast_customizations_to_a_json_patch(\n        cls, value: Union[List[Dict], JsonPatch, str]\n    ) -> JsonPatch:\n        if isinstance(value, list):\n            return JsonPatch(value)\n        elif isinstance(value, str):\n            try:\n                return JsonPatch(json.loads(value))\n            except json.JSONDecodeError as exc:\n                raise ValueError(\n                    f\"Unable to parse customizations as JSON: {value}. 
Please make sure\"\n                    \" that the provided value is a valid JSON string.\"\n                ) from exc\n        return value\n\n    @root_validator\n    def default_namespace(cls, values):\n        job = values.get(\"job\")\n\n        namespace = values.get(\"namespace\")\n        job_namespace = job[\"metadata\"].get(\"namespace\") if job else None\n\n        if not namespace and not job_namespace:\n            values[\"namespace\"] = \"default\"\n\n        return values\n\n    @root_validator\n    def default_image(cls, values):\n        job = values.get(\"job\")\n        image = values.get(\"image\")\n        job_image = (\n            job[\"spec\"][\"template\"][\"spec\"][\"containers\"][0].get(\"image\")\n            if job\n            else None\n        )\n\n        if not image and not job_image:\n            values[\"image\"] = get_prefect_image_name()\n\n        return values\n\n    # Support serialization of the 'JsonPatch' type\n    class Config:\n        arbitrary_types_allowed = True\n        json_encoders = {JsonPatch: lambda p: p.patch}\n\n    def dict(self, *args, **kwargs) -> Dict:\n        d = super().dict(*args, **kwargs)\n        d[\"customizations\"] = self.customizations.patch\n        return d\n\n    @classmethod\n    def base_job_manifest(cls) -> KubernetesManifest:\n\"\"\"Produces the bare minimum allowed Job manifest\"\"\"\n        return {\n            \"apiVersion\": \"batch/v1\",\n            \"kind\": \"Job\",\n            \"metadata\": {\"labels\": {}},\n            \"spec\": {\n                \"template\": {\n                    \"spec\": {\n                        \"parallelism\": 1,\n                        \"completions\": 1,\n                        \"restartPolicy\": \"Never\",\n                        \"containers\": [\n                            {\n                                \"name\": \"prefect-job\",\n                                \"env\": [],\n                            }\n                        ],\n                    }\n                }\n            },\n        }\n\n    # Note that we're using the yaml package to load both YAML and JSON files below.\n    # This works because YAML is a strict superset of JSON:\n    #\n    #   > The YAML 1.23 specification was published in 2009. Its primary focus was\n    #   > making YAML a strict superset of JSON. 
It also removed many of the problematic\n    #   > implicit typing recommendations.\n    #\n    #   https://yaml.org/spec/1.2.2/#12-yaml-history\n\n    @classmethod\n    def job_from_file(cls, filename: str) -> KubernetesManifest:\n\"\"\"Load a Kubernetes Job manifest from a YAML or JSON file.\"\"\"\n        with open(filename, \"r\", encoding=\"utf-8\") as f:\n            return yaml.load(f, yaml.SafeLoader)\n\n    @classmethod\n    def customize_from_file(cls, filename: str) -> JsonPatch:\n\"\"\"Load an RFC 6902 JSON patch from a YAML or JSON file.\"\"\"\n        with open(filename, \"r\", encoding=\"utf-8\") as f:\n            return JsonPatch(yaml.load(f, yaml.SafeLoader))\n\n    @sync_compatible\n    async def run(\n        self,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> KubernetesJobResult:\n        if not self.command:\n            raise ValueError(\"Kubernetes job cannot be run with empty command.\")\n\n        self._configure_kubernetes_library_client()\n        manifest = self.build_job()\n        job = await run_sync_in_worker_thread(self._create_job, manifest)\n\n        pid = await run_sync_in_worker_thread(self._get_infrastructure_pid, job)\n        # Indicate that the job has started\n        if task_status is not None:\n            task_status.started(pid)\n\n        # Monitor the job until completion\n        status_code = await run_sync_in_worker_thread(\n            self._watch_job, job.metadata.name\n        )\n        return KubernetesJobResult(identifier=pid, status_code=status_code)\n\n    async def kill(self, infrastructure_pid: str, grace_seconds: int = 30):\n        self._configure_kubernetes_library_client()\n        job_cluster_uid, job_namespace, job_name = self._parse_infrastructure_pid(\n            infrastructure_pid\n        )\n\n        if not job_namespace == self.namespace:\n            raise InfrastructureNotAvailable(\n                f\"Unable to kill job {job_name!r}: The job is running in namespace \"\n                f\"{job_namespace!r} but this block is configured to use \"\n                f\"{self.namespace!r}.\"\n            )\n\n        current_cluster_uid = self._get_cluster_uid()\n        if job_cluster_uid != current_cluster_uid:\n            raise InfrastructureNotAvailable(\n                f\"Unable to kill job {job_name!r}: The job is running on another \"\n                \"cluster.\"\n            )\n\n        with self.get_batch_client() as batch_client:\n            try:\n                batch_client.delete_namespaced_job(\n                    name=job_name,\n                    namespace=job_namespace,\n                    grace_period_seconds=grace_seconds,\n                    # Foreground propagation deletes dependent objects before deleting owner objects.\n                    # This ensures that the pods are cleaned up before the job is marked as deleted.\n                    # See: https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion\n                    propagation_policy=\"Foreground\",\n                )\n            except kubernetes.client.exceptions.ApiException as exc:\n                if exc.status == 404:\n                    raise InfrastructureNotFound(\n                        f\"Unable to kill job {job_name!r}: The job was not found.\"\n                    ) from exc\n                else:\n                    raise\n\n    def preview(self):\n        return yaml.dump(self.build_job())\n\n    def build_job(self) -> KubernetesManifest:\n\"\"\"Builds 
the Kubernetes Job Manifest\"\"\"\n        job_manifest = copy.copy(self.job)\n        job_manifest = self._shortcut_customizations().apply(job_manifest)\n        job_manifest = self.customizations.apply(job_manifest)\n        return job_manifest\n\n    @contextmanager\n    def get_batch_client(self) -> Generator[\"BatchV1Api\", None, None]:\n        with kubernetes.client.ApiClient() as client:\n            try:\n                yield kubernetes.client.BatchV1Api(api_client=client)\n            finally:\n                client.rest_client.pool_manager.clear()\n\n    @contextmanager\n    def get_client(self) -> Generator[\"CoreV1Api\", None, None]:\n        with kubernetes.client.ApiClient() as client:\n            try:\n                yield kubernetes.client.CoreV1Api(api_client=client)\n            finally:\n                client.rest_client.pool_manager.clear()\n\n    def _get_infrastructure_pid(self, job: \"V1Job\") -> str:\n\"\"\"\n        Generates a Kubernetes infrastructure PID.\n\n        The PID is in the format: \"<cluster uid>:<namespace>:<job name>\".\n        \"\"\"\n        cluster_uid = self._get_cluster_uid()\n        pid = f\"{cluster_uid}:{self.namespace}:{job.metadata.name}\"\n        return pid\n\n    def _parse_infrastructure_pid(\n        self, infrastructure_pid: str\n    ) -> Tuple[str, str, str]:\n\"\"\"\n        Parse a Kubernetes infrastructure PID into its component parts.\n\n        Returns a cluster UID, namespace, and job name.\n        \"\"\"\n        cluster_uid, namespace, job_name = infrastructure_pid.split(\":\", 2)\n        return cluster_uid, namespace, job_name\n\n    def _get_cluster_uid(self) -> str:\n\"\"\"\n        Gets a unique id for the current cluster being used.\n\n        There is no real unique identifier for a cluster. However, the `kube-system`\n        namespace is immutable and has a persistence UID that we use instead.\n\n        PREFECT_KUBERNETES_CLUSTER_UID can be set in cases where the `kube-system`\n        namespace cannot be read e.g. when a cluster role cannot be created. If set,\n        this variable will be used and we will not attempt to read the `kube-system`\n        namespace.\n\n        See https://github.com/kubernetes/kubernetes/issues/44954\n        \"\"\"\n        # Default to an environment variable\n        env_cluster_uid = os.environ.get(\"PREFECT_KUBERNETES_CLUSTER_UID\")\n        if env_cluster_uid:\n            return env_cluster_uid\n\n        # Read the UID from the cluster namespace\n        with self.get_client() as client:\n            namespace = client.read_namespace(\"kube-system\")\n        cluster_uid = namespace.metadata.uid\n\n        return cluster_uid\n\n    def _configure_kubernetes_library_client(self) -> None:\n\"\"\"\n        Set the correct kubernetes client configuration.\n\n        WARNING: This action is not threadsafe and may override the configuration\n                  specified by another `KubernetesJob` instance.\n        \"\"\"\n        # TODO: Investigate returning a configured client so calls on other threads\n        #       will not invalidate the config needed here\n\n        # if a k8s cluster block is provided to the flow runner, use that\n        if self.cluster_config:\n            self.cluster_config.configure_client()\n        else:\n            # If no block specified, try to load Kubernetes configuration within a cluster. 
If that doesn't\n            # work, try to load the configuration from the local environment, allowing\n            # any further ConfigExceptions to bubble up.\n            try:\n                kubernetes.config.load_incluster_config()\n            except kubernetes.config.ConfigException:\n                kubernetes.config.load_kube_config()\n\n    def _shortcut_customizations(self) -> JsonPatch:\n\"\"\"Produces the JSON 6902 patch for the most commonly used customizations, like\n        image and namespace, which we offer as top-level parameters (with sensible\n        default values)\"\"\"\n        shortcuts = []\n\n        if self.namespace:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/metadata/namespace\",\n                    \"value\": self.namespace,\n                }\n            )\n\n        if self.image:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/template/spec/containers/0/image\",\n                    \"value\": self.image,\n                }\n            )\n\n        shortcuts += [\n            {\n                \"op\": \"add\",\n                \"path\": (\n                    f\"/metadata/labels/{self._slugify_label_key(key).replace('/', '~1', 1)}\"\n                ),\n                \"value\": self._slugify_label_value(value),\n            }\n            for key, value in self.labels.items()\n        ]\n\n        shortcuts += [\n            {\n                \"op\": \"add\",\n                \"path\": \"/spec/template/spec/containers/0/env/-\",\n                \"value\": {\"name\": key, \"value\": value},\n            }\n            for key, value in self._get_environment_variables().items()\n        ]\n\n        if self.image_pull_policy:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/template/spec/containers/0/imagePullPolicy\",\n                    \"value\": self.image_pull_policy.value,\n                }\n            )\n\n        if self.service_account_name:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/template/spec/serviceAccountName\",\n                    \"value\": self.service_account_name,\n                }\n            )\n\n        if self.finished_job_ttl is not None:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/ttlSecondsAfterFinished\",\n                    \"value\": self.finished_job_ttl,\n                }\n            )\n\n        if self.command:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/template/spec/containers/0/args\",\n                    \"value\": self.command,\n                }\n            )\n\n        if self.name:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/metadata/generateName\",\n                    \"value\": self._slugify_name(self.name) + \"-\",\n                }\n            )\n        else:\n            # Generate name is required\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/metadata/generateName\",\n                    \"value\": (\n                        \"prefect-job-\"\n               
         # We generate a name using a hash of the primary job settings\n                        + stable_hash(\n                            *self.command,\n                            *self.env.keys(),\n                            *[v for v in self.env.values() if v is not None],\n                        )\n                        + \"-\"\n                    ),\n                }\n            )\n\n        return JsonPatch(shortcuts)\n\n    def _get_job(self, job_id: str) -> Optional[\"V1Job\"]:\n        with self.get_batch_client() as batch_client:\n            try:\n                job = batch_client.read_namespaced_job(job_id, self.namespace)\n            except kubernetes.client.exceptions.ApiException:\n                self.logger.error(f\"Job {job_id!r} was removed.\", exc_info=True)\n                return None\n            return job\n\n    def _get_job_pod(self, job_name: str) -> \"V1Pod\":\n\"\"\"Get the first running pod for a job.\"\"\"\n\n        # Wait until we find a running pod for the job\n        # if `pod_watch_timeout_seconds` is None, no timeout will be enforced\n        watch = kubernetes.watch.Watch()\n        self.logger.debug(f\"Job {job_name!r}: Starting watch for pod start...\")\n        last_phase = None\n        with self.get_client() as client:\n            for event in watch.stream(\n                func=client.list_namespaced_pod,\n                namespace=self.namespace,\n                label_selector=f\"job-name={job_name}\",\n                timeout_seconds=self.pod_watch_timeout_seconds,\n            ):\n                phase = event[\"object\"].status.phase\n                if phase != last_phase:\n                    self.logger.info(f\"Job {job_name!r}: Pod has status {phase!r}.\")\n\n                if phase != \"Pending\":\n                    watch.stop()\n                    return event[\"object\"]\n\n                last_phase = phase\n\n        self.logger.error(f\"Job {job_name!r}: Pod never started.\")\n\n    def _watch_job(self, job_name: str) -> int:\n\"\"\"\n        Watch a job.\n\n        Return the final status code of the first container.\n        \"\"\"\n        self.logger.debug(f\"Job {job_name!r}: Monitoring job...\")\n\n        job = self._get_job(job_name)\n        if not job:\n            return -1\n\n        pod = self._get_job_pod(job_name)\n        if not pod:\n            return -1\n\n        # Calculate the deadline before streaming output\n        deadline = (\n            (time.monotonic() + self.job_watch_timeout_seconds)\n            if self.job_watch_timeout_seconds is not None\n            else None\n        )\n\n        if self.stream_output:\n            with self.get_client() as client:\n                logs = client.read_namespaced_pod_log(\n                    pod.metadata.name,\n                    self.namespace,\n                    follow=True,\n                    _preload_content=False,\n                    container=\"prefect-job\",\n                )\n                try:\n                    for log in logs.stream():\n                        print(log.decode().rstrip())\n\n                        # Check if we have passed the deadline and should stop streaming\n                        # logs\n                        remaining_time = (\n                            deadline - time.monotonic() if deadline else None\n                        )\n                        if deadline and remaining_time <= 0:\n                            break\n\n                except Exception:\n                    
self.logger.warning(\n                        (\n                            \"Error occurred while streaming logs - \"\n                            \"Job will continue to run but logs will \"\n                            \"no longer be streamed to stdout.\"\n                        ),\n                        exc_info=True,\n                    )\n\n        with self.get_batch_client() as batch_client:\n            # Check if the job is completed before beginning a watch\n            job = batch_client.read_namespaced_job(\n                name=job_name, namespace=self.namespace\n            )\n            completed = job.status.completion_time is not None\n\n            while not completed:\n                remaining_time = (\n                    math.ceil(deadline - time.monotonic()) if deadline else None\n                )\n                if deadline and remaining_time <= 0:\n                    self.logger.error(\n                        f\"Job {job_name!r}: Job did not complete within \"\n                        f\"timeout of {self.job_watch_timeout_seconds}s.\"\n                    )\n                    return -1\n\n                watch = kubernetes.watch.Watch()\n                # The kubernetes library will disable retries if the timeout kwarg is\n                # present regardless of the value so we do not pass it unless given\n                # https://github.com/kubernetes-client/python/blob/84f5fea2a3e4b161917aa597bf5e5a1d95e24f5a/kubernetes/base/watch/watch.py#LL160\n                timeout_seconds = (\n                    {\"timeout_seconds\": remaining_time} if deadline else {}\n                )\n\n                for event in watch.stream(\n                    func=batch_client.list_namespaced_job,\n                    field_selector=f\"metadata.name={job_name}\",\n                    namespace=self.namespace,\n                    **timeout_seconds,\n                ):\n                    if event[\"type\"] == \"DELETED\":\n                        self.logger.error(f\"Job {job_name!r}: Job has been deleted.\")\n                        completed = True\n                    elif event[\"object\"].status.completion_time:\n                        if not event[\"object\"].status.succeeded:\n                            # Job failed, exit while loop and return pod exit code\n                            self.logger.error(f\"Job {job_name!r}: Job failed.\")\n                        completed = True\n                    # Check if the job has reached its backoff limit\n                    # and stop watching if it has\n                    elif (\n                        event[\"object\"].spec.backoff_limit is not None\n                        and event[\"object\"].status.failed is not None\n                        and event[\"object\"].status.failed\n                        > event[\"object\"].spec.backoff_limit\n                    ):\n                        self.logger.error(\n                            f\"Job {job_name!r}: Job reached backoff limit.\"\n                        )\n                        completed = True\n                    # If the job has no backoff limit, check if it has failed\n                    # and stop watching if it has\n                    elif (\n                        not event[\"object\"].spec.backoff_limit\n                        and event[\"object\"].status.failed\n                    ):\n                        completed = True\n\n                    if completed:\n                        watch.stop()\n                        break\n\n 
       with self.get_client() as client:\n            # Get all pods for the job\n            pods = client.list_namespaced_pod(\n                namespace=self.namespace, label_selector=f\"job-name={job_name}\"\n            )\n            # Get the status for only the most recently used pod\n            pods.items.sort(\n                key=lambda pod: pod.metadata.creation_timestamp, reverse=True\n            )\n            most_recent_pod = pods.items[0] if pods.items else None\n            first_container_status = (\n                most_recent_pod.status.container_statuses[0]\n                if most_recent_pod\n                else None\n            )\n            if not first_container_status:\n                self.logger.error(f\"Job {job_name!r}: No pods found for job.\")\n                return -1\n\n        return first_container_status.state.terminated.exit_code\n\n    def _create_job(self, job_manifest: KubernetesManifest) -> \"V1Job\":\n\"\"\"\n        Given a Kubernetes Job Manifest, create the Job on the configured Kubernetes\n        cluster and return its name.\n        \"\"\"\n        with self.get_batch_client() as batch_client:\n            job = batch_client.create_namespaced_job(self.namespace, job_manifest)\n        return job\n\n    def _slugify_name(self, name: str) -> str:\n\"\"\"\n        Slugify text for use as a name.\n\n        Keeps only alphanumeric characters and dashes, and caps the length\n        of the slug at 45 chars.\n\n        The 45 character length allows room for the k8s utility\n        \"generateName\" to generate a unique name from the slug while\n        keeping the total length of a name below 63 characters, which is\n        the limit for e.g. label names that follow RFC 1123 (hostnames) and\n        RFC 1035 (domain names).\n\n        Args:\n            name: The name of the job\n\n        Returns:\n            the slugified job name\n        \"\"\"\n        slug = slugify(\n            name,\n            max_length=45,  # Leave enough space for generateName\n            regex_pattern=r\"[^a-zA-Z0-9-]+\",\n        )\n\n        # TODO: Handle the case that the name is an empty string after being\n        # slugified.\n\n        return slug\n\n    def _slugify_label_key(self, key: str) -> str:\n\"\"\"\n        Slugify text for use as a label key.\n\n        Keys are composed of an optional prefix and name, separated by a slash (/).\n\n        Keeps only alphanumeric characters, dashes, underscores, and periods.\n        Limits the length of the label prefix to 253 characters.\n        Limits the length of the label name to 63 characters.\n\n        See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set\n\n        Args:\n            key: The label key\n\n        Returns:\n            The slugified label key\n        \"\"\"\n        if \"/\" in key:\n            prefix, name = key.split(\"/\", maxsplit=1)\n        else:\n            prefix = None\n            name = key\n\n        name_slug = (\n            slugify(name, max_length=63, regex_pattern=r\"[^a-zA-Z0-9-_.]+\").strip(\n                \"_-.\"  # Must start or end with alphanumeric characters\n            )\n            or name\n        )\n        # Fallback to the original if we end up with an empty slug, this will allow\n        # Kubernetes to throw the validation error\n\n        if prefix:\n            prefix_slug = (\n                slugify(\n                    prefix,\n                    max_length=253,\n                    
regex_pattern=r\"[^a-zA-Z0-9-\\.]+\",\n                ).strip(\n                    \"_-.\"\n                )  # Must start or end with alphanumeric characters\n                or prefix\n            )\n\n            return f\"{prefix_slug}/{name_slug}\"\n\n        return name_slug\n\n    def _slugify_label_value(self, value: str) -> str:\n\"\"\"\n        Slugify text for use as a label value.\n\n        Keeps only alphanumeric characters, dashes, underscores, and periods.\n        Limits the total length of label text to below 63 characters.\n\n        See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set\n\n        Args:\n            value: The text for the label\n\n        Returns:\n            The slugified value\n        \"\"\"\n        slug = (\n            slugify(value, max_length=63, regex_pattern=r\"[^a-zA-Z0-9-_\\.]+\").strip(\n                \"_-.\"  # Must start or end with alphanumeric characters\n            )\n            or value\n        )\n        # Fallback to the original if we end up with an empty slug, this will allow\n        # Kubernetes to throw the validation error\n\n        return slug\n\n    def _get_environment_variables(self):\n        # If the API URL has been set by the base environment rather than the by the\n        # user, update the value to ensure connectivity when using a bridge network by\n        # updating local connections to use the internal host\n        env = {**self._base_environment(), **self.env}\n\n        if (\n            \"PREFECT_API_URL\" in env\n            and \"PREFECT_API_URL\" not in self.env\n            and self._api_dns_name\n        ):\n            env[\"PREFECT_API_URL\"] = (\n                env[\"PREFECT_API_URL\"]\n                .replace(\"localhost\", self._api_dns_name)\n                .replace(\"127.0.0.1\", self._api_dns_name)\n            )\n\n        # Drop null values allowing users to \"unset\" variables\n        return {key: value for key, value in env.items() if value is not None}\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.kubernetes.KubernetesJob.base_job_manifest","title":"base_job_manifest classmethod","text":"

Produces the bare minimum allowed Job manifest

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/kubernetes.py
@classmethod\ndef base_job_manifest(cls) -> KubernetesManifest:\n\"\"\"Produces the bare minimum allowed Job manifest\"\"\"\n    return {\n        \"apiVersion\": \"batch/v1\",\n        \"kind\": \"Job\",\n        \"metadata\": {\"labels\": {}},\n        \"spec\": {\n            \"template\": {\n                \"spec\": {\n                    \"parallelism\": 1,\n                    \"completions\": 1,\n                    \"restartPolicy\": \"Never\",\n                    \"containers\": [\n                        {\n                            \"name\": \"prefect-job\",\n                            \"env\": [],\n                        }\n                    ],\n                }\n            }\n        },\n    }\n
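As a quick illustration (a minimal sketch, assuming only that prefect is installed; no cluster connection is needed just to build the dictionary), the base manifest can be inspected directly:

from prefect.infrastructure import KubernetesJob

# Returns a plain dictionary that further customizations are layered onto
manifest = KubernetesJob.base_job_manifest()
print(manifest["kind"])  # "Job"
print(manifest["spec"]["template"]["spec"]["restartPolicy"])  # "Never"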
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.kubernetes.KubernetesJob.build_job","title":"build_job","text":"

Builds the Kubernetes Job Manifest

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/kubernetes.py
def build_job(self) -> KubernetesManifest:\n\"\"\"Builds the Kubernetes Job Manifest\"\"\"\n    job_manifest = copy.copy(self.job)\n    job_manifest = self._shortcut_customizations().apply(job_manifest)\n    job_manifest = self.customizations.apply(job_manifest)\n    return job_manifest\n
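For instance, a configured block can render its final manifest without submitting anything to a cluster; the field values below are illustrative:

from prefect.infrastructure import KubernetesJob

infra = KubernetesJob(namespace="prefect", image="prefecthq/prefect:2-latest")
manifest = infra.build_job()  # base manifest + shortcut customizations + user customizations
print(manifest["kind"])  # "Job"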
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.kubernetes.KubernetesJob.customize_from_file","title":"customize_from_file classmethod","text":"

Load an RFC 6902 JSON patch from a YAML or JSON file.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/kubernetes.py
@classmethod\ndef customize_from_file(cls, filename: str) -> JsonPatch:\n\"\"\"Load an RFC 6902 JSON patch from a YAML or JSON file.\"\"\"\n    with open(filename, \"r\", encoding=\"utf-8\") as f:\n        return JsonPatch(yaml.load(f, yaml.SafeLoader))\n
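A hedged usage sketch follows; the patch file name and its contents are hypothetical, but any RFC 6902 operations in YAML or JSON form would be loaded the same way:

from prefect.infrastructure import KubernetesJob

# patch.yaml (hypothetical) might contain, for example:
# - op: add
#   path: /spec/template/spec/containers/0/resources
#   value:
#     limits:
#       memory: 1Gi
patch = KubernetesJob.customize_from_file("patch.yaml")
infra = KubernetesJob(customizations=patch)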
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.kubernetes.KubernetesJob.job_from_file","title":"job_from_file classmethod","text":"

Load a Kubernetes Job manifest from a YAML or JSON file.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/kubernetes.py
@classmethod\ndef job_from_file(cls, filename: str) -> KubernetesManifest:\n\"\"\"Load a Kubernetes Job manifest from a YAML or JSON file.\"\"\"\n    with open(filename, \"r\", encoding=\"utf-8\") as f:\n        return yaml.load(f, yaml.SafeLoader)\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJobResult","title":"KubernetesJobResult","text":"

Bases: InfrastructureResult

Contains information about the final state of a completed Kubernetes Job

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/kubernetes.py
class KubernetesJobResult(InfrastructureResult):\n\"\"\"Contains information about the final state of a completed Kubernetes Job\"\"\"\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Process","title":"Process","text":"

Bases: Infrastructure

Run a command in a new process.

Current environment variables and Prefect settings will be included in the created process. Configured environment variables will override any current environment variables.

Attributes:

command: A list of strings specifying the command to run to start the flow run. In most cases you should not override this.

env: Environment variables to set for the new process.

labels: Labels for the process. Labels are for metadata purposes only and cannot be attached to the process itself.

name: A name for the process. For display purposes only.

stream_output (bool): Whether to stream output to local stdout.

working_dir (Union[str, Path, None]): Working directory where the process should be opened. If not set, a temporary directory will be used.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/process.py
class Process(Infrastructure):\n\"\"\"\n    Run a command in a new process.\n\n    Current environment variables and Prefect settings will be included in the created\n    process. Configured environment variables will override any current environment\n    variables.\n\n    Attributes:\n        command: A list of strings specifying the command to run in the container to\n            start the flow run. In most cases you should not override this.\n        env: Environment variables to set for the new process.\n        labels: Labels for the process. Labels are for metadata purposes only and\n            cannot be attached to the process itself.\n        name: A name for the process. For display purposes only.\n        stream_output: Whether to stream output to local stdout.\n        working_dir: Working directory where the process should be opened. If not set,\n            a tmp directory will be used.\n    \"\"\"\n\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/39WQhVu4JK40rZWltGqhuC/d15be6189a0cb95949a6b43df00dcb9b/image5.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/concepts/infrastructure/#process\"\n\n    type: Literal[\"process\"] = Field(\n        default=\"process\", description=\"The type of infrastructure.\"\n    )\n    stream_output: bool = Field(\n        default=True,\n        description=(\n            \"If set, output will be streamed from the process to local standard output.\"\n        ),\n    )\n    working_dir: Union[str, Path, None] = Field(\n        default=None,\n        description=(\n            \"If set, the process will open within the specified path as the working\"\n            \" directory. Otherwise, a temporary directory will be created.\"\n        ),\n    )  # Underlying accepted types are str, bytes, PathLike[str], None\n\n    @sync_compatible\n    async def run(\n        self,\n        task_status: anyio.abc.TaskStatus = None,\n    ) -> \"ProcessResult\":\n        if not self.command:\n            raise ValueError(\"Process cannot be run with empty command.\")\n\n        _use_threaded_child_watcher()\n        display_name = f\" {self.name!r}\" if self.name else \"\"\n\n        # Open a subprocess to execute the flow run\n        self.logger.info(f\"Opening process{display_name}...\")\n        working_dir_ctx = (\n            tempfile.TemporaryDirectory(suffix=\"prefect\")\n            if not self.working_dir\n            else contextlib.nullcontext(self.working_dir)\n        )\n        with working_dir_ctx as working_dir:\n            self.logger.debug(\n                f\"Process{display_name} running command: {' '.join(self.command)} in\"\n                f\" {working_dir}\"\n            )\n\n            # We must add creationflags to a dict so it is only passed as a function\n            # parameter on Windows, because the presence of creationflags causes\n            # errors on Unix even if set to None\n            kwargs: Dict[str, object] = {}\n            if sys.platform == \"win32\":\n                kwargs[\"creationflags\"] = subprocess.CREATE_NEW_PROCESS_GROUP\n\n            process = await run_process(\n                self.command,\n                stream_output=self.stream_output,\n                task_status=task_status,\n                task_status_handler=_infrastructure_pid_from_process,\n                env=self._get_environment_variables(),\n                cwd=working_dir,\n                **kwargs,\n            )\n\n        # Use the pid for display if no name was given\n        display_name = 
display_name or f\" {process.pid}\"\n\n        if process.returncode:\n            help_message = None\n            if process.returncode == -9:\n                help_message = (\n                    \"This indicates that the process exited due to a SIGKILL signal. \"\n                    \"Typically, this is either caused by manual cancellation or \"\n                    \"high memory usage causing the operating system to \"\n                    \"terminate the process.\"\n                )\n            if process.returncode == -15:\n                help_message = (\n                    \"This indicates that the process exited due to a SIGTERM signal. \"\n                    \"Typically, this is caused by manual cancellation.\"\n                )\n            elif process.returncode == 247:\n                help_message = (\n                    \"This indicates that the process was terminated due to high \"\n                    \"memory usage.\"\n                )\n            elif (\n                sys.platform == \"win32\" and process.returncode == STATUS_CONTROL_C_EXIT\n            ):\n                help_message = (\n                    \"Process was terminated due to a Ctrl+C or Ctrl+Break signal. \"\n                    \"Typically, this is caused by manual cancellation.\"\n                )\n\n            self.logger.error(\n                f\"Process{display_name} exited with status code: {process.returncode}\"\n                + (f\"; {help_message}\" if help_message else \"\")\n            )\n        else:\n            self.logger.info(f\"Process{display_name} exited cleanly.\")\n\n        return ProcessResult(\n            status_code=process.returncode, identifier=str(process.pid)\n        )\n\n    async def kill(self, infrastructure_pid: str, grace_seconds: int = 30):\n        hostname, pid = _parse_infrastructure_pid(infrastructure_pid)\n\n        if hostname != socket.gethostname():\n            raise InfrastructureNotAvailable(\n                f\"Unable to kill process {pid!r}: The process is running on a different\"\n                f\" host {hostname!r}.\"\n            )\n\n        # In a non-windows environment first send a SIGTERM, then, after\n        # `grace_seconds` seconds have passed subsequent send SIGKILL. In\n        # Windows we use CTRL_BREAK_EVENT as SIGTERM is useless:\n        # https://bugs.python.org/issue26350\n        if sys.platform == \"win32\":\n            try:\n                os.kill(pid, signal.CTRL_BREAK_EVENT)\n            except (ProcessLookupError, WindowsError):\n                raise InfrastructureNotFound(\n                    f\"Unable to kill process {pid!r}: The process was not found.\"\n                )\n        else:\n            try:\n                os.kill(pid, signal.SIGTERM)\n            except ProcessLookupError:\n                raise InfrastructureNotFound(\n                    f\"Unable to kill process {pid!r}: The process was not found.\"\n                )\n\n            # Throttle how often we check if the process is still alive to keep\n            # from making too many system calls in a short period of time.\n            check_interval = max(grace_seconds / 10, 1)\n\n            with anyio.move_on_after(grace_seconds):\n                while True:\n                    await anyio.sleep(check_interval)\n\n                    # Detect if the process is still alive. 
If not do an early\n                    # return as the process respected the SIGTERM from above.\n                    try:\n                        os.kill(pid, 0)\n                    except ProcessLookupError:\n                        return\n\n            try:\n                os.kill(pid, signal.SIGKILL)\n            except OSError:\n                # We shouldn't ever end up here, but it's possible that the\n                # process ended right after the check above.\n                return\n\n    def preview(self):\n        environment = self._get_environment_variables(include_os_environ=False)\n        return \" \\\\\\n\".join(\n            [f\"{key}={value}\" for key, value in environment.items()]\n            + [\" \".join(self.command)]\n        )\n\n    def _get_environment_variables(self, include_os_environ: bool = True):\n        os_environ = os.environ if include_os_environ else {}\n        # The base environment must override the current environment or\n        # the Prefect settings context may not be respected\n        env = {**os_environ, **self._base_environment(), **self.env}\n\n        # Drop null values allowing users to \"unset\" variables\n        return {key: value for key, value in env.items() if value is not None}\n\n    def _base_flow_run_command(self):\n        return [sys.executable, \"-m\", \"prefect.engine\"]\n
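A minimal sketch of running a one-off command with this block (the command itself is a placeholder):

from prefect.infrastructure import Process

process = Process(
    command=["python", "-c", "print('hello from a Prefect-managed process')"],
    stream_output=True,
)
result = process.run()  # sync-compatible; may also be awaited from async code
print(result.status_code, result.identifier)  # exit code and the process pid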
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.ProcessResult","title":"ProcessResult","text":"

Bases: InfrastructureResult

Contains information about the final state of a completed process

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/process.py
class ProcessResult(InfrastructureResult):\n\"\"\"Contains information about the final state of a completed process\"\"\"\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/logging/","title":"prefect.logging","text":"","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/#prefect.logging","title":"prefect.logging","text":"","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/#prefect.logging.get_logger","title":"get_logger cached","text":"

Get a prefect logger. These loggers are intended for internal use within the prefect package.

See get_run_logger for retrieving loggers for use within task or flow runs. By default, only run-related loggers are connected to the APILogHandler.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/logging/loggers.py
@lru_cache()\ndef get_logger(name: str = None) -> logging.Logger:\n\"\"\"\n    Get a `prefect` logger. These loggers are intended for internal use within the\n    `prefect` package.\n\n    See `get_run_logger` for retrieving loggers for use within task or flow runs.\n    By default, only run-related loggers are connected to the `APILogHandler`.\n    \"\"\"\n\n    parent_logger = logging.getLogger(\"prefect\")\n\n    if name:\n        # Append the name if given but allow explicit full names e.g. \"prefect.test\"\n        # should not become \"prefect.prefect.test\"\n        if not name.startswith(parent_logger.name + \".\"):\n            logger = parent_logger.getChild(name)\n        else:\n            logger = logging.getLogger(name)\n    else:\n        logger = parent_logger\n\n    return logger\n
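For example, an internal logger can be retrieved as a child of the prefect logger (the child name here is illustrative):

from prefect.logging import get_logger

logger = get_logger("my_subsystem")
print(logger.name)  # "prefect.my_subsystem"
logger.debug("Internal diagnostic message")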
","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/#prefect.logging.get_run_logger","title":"get_run_logger","text":"

Get a Prefect logger for the current task run or flow run.

The logger will be named either prefect.task_runs or prefect.flow_runs. Contextual data about the run will be attached to the log records.

These loggers are connected to the APILogHandler by default to send log records to the API.

Parameters:

context (RunContext, default None): A specific context may be provided as an override. By default, the context is inferred from global state and this should not be needed.

**kwargs (str): Additional keyword arguments will be attached to the log records in addition to the run metadata.

Raises:

RuntimeError: If no context can be found.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/logging/loggers.py
def get_run_logger(\n    context: \"RunContext\" = None, **kwargs: str\n) -> Union[logging.Logger, logging.LoggerAdapter]:\n\"\"\"\n    Get a Prefect logger for the current task run or flow run.\n\n    The logger will be named either `prefect.task_runs` or `prefect.flow_runs`.\n    Contextual data about the run will be attached to the log records.\n\n    These loggers are connected to the `APILogHandler` by default to send log records to\n    the API.\n\n    Arguments:\n        context: A specific context may be provided as an override. By default, the\n            context is inferred from global state and this should not be needed.\n        **kwargs: Additional keyword arguments will be attached to the log records in\n            addition to the run metadata\n\n    Raises:\n        RuntimeError: If no context can be found\n    \"\"\"\n    # Check for existing contexts\n    task_run_context = prefect.context.TaskRunContext.get()\n    flow_run_context = prefect.context.FlowRunContext.get()\n\n    # Apply the context override\n    if context:\n        if isinstance(context, prefect.context.FlowRunContext):\n            flow_run_context = context\n        elif isinstance(context, prefect.context.TaskRunContext):\n            task_run_context = context\n        else:\n            raise TypeError(\n                f\"Received unexpected type {type(context).__name__!r} for context. \"\n                \"Expected one of 'None', 'FlowRunContext', or 'TaskRunContext'.\"\n            )\n\n    # Determine if this is a task or flow run logger\n    if task_run_context:\n        logger = task_run_logger(\n            task_run=task_run_context.task_run,\n            task=task_run_context.task,\n            flow_run=flow_run_context.flow_run if flow_run_context else None,\n            flow=flow_run_context.flow if flow_run_context else None,\n            **kwargs,\n        )\n    elif flow_run_context:\n        logger = flow_run_logger(\n            flow_run=flow_run_context.flow_run, flow=flow_run_context.flow, **kwargs\n        )\n    elif (\n        get_logger(\"prefect.flow_run\").disabled\n        and get_logger(\"prefect.task_run\").disabled\n    ):\n        logger = logging.getLogger(\"null\")\n    else:\n        raise MissingContextError(\"There is no active flow or task run context.\")\n\n    return logger\n
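The run logger is typically retrieved inside a flow or task body so that records carry the run's context; a small self-contained sketch:

from prefect import flow, task
from prefect.logging import get_run_logger

@task
def greet(name: str):
    logger = get_run_logger()  # named "prefect.task_runs" here
    logger.info("Greeting %s from a task run", name)

@flow
def hello_flow(name: str = "world"):
    logger = get_run_logger()  # named "prefect.flow_runs" here
    logger.info("Starting flow run for %s", name)
    greet(name)

if __name__ == "__main__":
    hello_flow()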
","tags":["Python API","logging"]},{"location":"api-ref/prefect/packaging/","title":"Packaging","text":"","tags":["Python API","packaging","deployments"]},{"location":"api-ref/prefect/packaging/#prefect.packaging","title":"prefect.packaging","text":"","tags":["Python API","packaging","deployments"]},{"location":"api-ref/prefect/serializers/","title":"prefect.serializers","text":"","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers","title":"prefect.serializers","text":"

Serializer implementations for converting objects to bytes and bytes to objects.

All serializers are based on the Serializer class and include a type string that allows them to be referenced without referencing the actual class. For example, you can often specify the JSONSerializer with the string \"json\". Some serializers support additional settings for configuring serialization. These are stored on the instance so the same settings can be used to load saved objects.

All serializers must implement dumps and loads which convert objects to bytes and bytes to an object respectively.

","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedJSONSerializer","title":"CompressedJSONSerializer","text":"

Bases: CompressedSerializer

A compressed serializer preconfigured to use the json serializer.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
class CompressedJSONSerializer(CompressedSerializer):\n\"\"\"\n    A compressed serializer preconfigured to use the json serializer.\n    \"\"\"\n\n    type: Literal[\"compressed/json\"] = \"compressed/json\"\n    serializer: Serializer = pydantic.Field(default_factory=JSONSerializer)\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedPickleSerializer","title":"CompressedPickleSerializer","text":"

Bases: CompressedSerializer

A compressed serializer preconfigured to use the pickle serializer.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
class CompressedPickleSerializer(CompressedSerializer):\n\"\"\"\n    A compressed serializer preconfigured to use the pickle serializer.\n    \"\"\"\n\n    type: Literal[\"compressed/pickle\"] = \"compressed/pickle\"\n    serializer: Serializer = pydantic.Field(default_factory=PickleSerializer)\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedSerializer","title":"CompressedSerializer","text":"

Bases: Serializer

Wraps another serializer, compressing its output. Uses lzma by default. See compressionlib for using alternative libraries.

Attributes:

serializer (Serializer): The serializer to use before compression.

compressionlib (str): The import path of a compression module to use. Must have methods compress(bytes) -> bytes and decompress(bytes) -> bytes.

level (str): If not null, the level of compression to pass to compress.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
class CompressedSerializer(Serializer):\n\"\"\"\n    Wraps another serializer, compressing its output.\n    Uses `lzma` by default. See `compressionlib` for using alternative libraries.\n\n    Attributes:\n        serializer: The serializer to use before compression.\n        compressionlib: The import path of a compression module to use.\n            Must have methods `compress(bytes) -> bytes` and `decompress(bytes) -> bytes`.\n        level: If not null, the level of compression to pass to `compress`.\n    \"\"\"\n\n    type: Literal[\"compressed\"] = \"compressed\"\n\n    serializer: Serializer\n    compressionlib: str = \"lzma\"\n\n    @pydantic.validator(\"serializer\", pre=True)\n    def cast_type_names_to_serializers(cls, value):\n        if isinstance(value, str):\n            return Serializer(type=value)\n        return value\n\n    @pydantic.validator(\"compressionlib\")\n    def check_compressionlib(cls, value):\n\"\"\"\n        Check that the given pickle library is importable and has compress/decompress\n        methods.\n        \"\"\"\n        try:\n            compresser = from_qualified_name(value)\n        except (ImportError, AttributeError) as exc:\n            raise ValueError(\n                f\"Failed to import requested compression library: {value!r}.\"\n            ) from exc\n\n        if not callable(getattr(compresser, \"compress\", None)):\n            raise ValueError(\n                f\"Compression library at {value!r} does not have a 'compress' method.\"\n            )\n\n        if not callable(getattr(compresser, \"decompress\", None)):\n            raise ValueError(\n                f\"Compression library at {value!r} does not have a 'decompress' method.\"\n            )\n\n        return value\n\n    def dumps(self, obj: Any) -> bytes:\n        blob = self.serializer.dumps(obj)\n        compresser = from_qualified_name(self.compressionlib)\n        return base64.encodebytes(compresser.compress(blob))\n\n    def loads(self, blob: bytes) -> Any:\n        compresser = from_qualified_name(self.compressionlib)\n        uncompressed = compresser.decompress(base64.decodebytes(blob))\n        return self.serializer.loads(uncompressed)\n
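A short round-trip sketch; lzma is the default compressionlib, so no extra configuration is required:

from prefect.serializers import CompressedSerializer, JSONSerializer

serializer = CompressedSerializer(serializer=JSONSerializer())
blob = serializer.dumps({"numbers": list(range(100))})
assert serializer.loads(blob) == {"numbers": list(range(100))}

# A type name may also be passed; the `serializer` validator shown above
# casts the string to a serializer instance
serializer = CompressedSerializer(serializer="json")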
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedSerializer.check_compressionlib","title":"check_compressionlib","text":"

Check that the given pickle library is importable and has compress/decompress methods.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
@pydantic.validator(\"compressionlib\")\ndef check_compressionlib(cls, value):\n\"\"\"\n    Check that the given pickle library is importable and has compress/decompress\n    methods.\n    \"\"\"\n    try:\n        compresser = from_qualified_name(value)\n    except (ImportError, AttributeError) as exc:\n        raise ValueError(\n            f\"Failed to import requested compression library: {value!r}.\"\n        ) from exc\n\n    if not callable(getattr(compresser, \"compress\", None)):\n        raise ValueError(\n            f\"Compression library at {value!r} does not have a 'compress' method.\"\n        )\n\n    if not callable(getattr(compresser, \"decompress\", None)):\n        raise ValueError(\n            f\"Compression library at {value!r} does not have a 'decompress' method.\"\n        )\n\n    return value\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.JSONSerializer","title":"JSONSerializer","text":"

Bases: Serializer

Serializes data to JSON.

Input types must be compatible with the stdlib json library.

Wraps the json library to serialize to UTF-8 bytes instead of string types.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
class JSONSerializer(Serializer):\n\"\"\"\n    Serializes data to JSON.\n\n    Input types must be compatible with the stdlib json library.\n\n    Wraps the `json` library to serialize to UTF-8 bytes instead of string types.\n    \"\"\"\n\n    type: Literal[\"json\"] = \"json\"\n    jsonlib: str = \"json\"\n    object_encoder: Optional[str] = pydantic.Field(\n        default=\"prefect.serializers.prefect_json_object_encoder\",\n        description=(\n            \"An optional callable to use when serializing objects that are not \"\n            \"supported by the JSON encoder. By default, this is set to a callable that \"\n            \"adds support for all types supported by Pydantic.\"\n        ),\n    )\n    object_decoder: Optional[str] = pydantic.Field(\n        default=\"prefect.serializers.prefect_json_object_decoder\",\n        description=(\n            \"An optional callable to use when deserializing objects. This callable \"\n            \"is passed each dictionary encountered during JSON deserialization. \"\n            \"By default, this is set to a callable that deserializes content created \"\n            \"by our default `object_encoder`.\"\n        ),\n    )\n    dumps_kwargs: dict = pydantic.Field(default_factory=dict)\n    loads_kwargs: dict = pydantic.Field(default_factory=dict)\n\n    @pydantic.validator(\"dumps_kwargs\")\n    def dumps_kwargs_cannot_contain_default(cls, value):\n        # `default` is set by `object_encoder`. A user provided callable would make this\n        # class unserializable anyway.\n        if \"default\" in value:\n            raise ValueError(\n                \"`default` cannot be provided. Use `object_encoder` instead.\"\n            )\n        return value\n\n    @pydantic.validator(\"loads_kwargs\")\n    def loads_kwargs_cannot_contain_object_hook(cls, value):\n        # `object_hook` is set by `object_decoder`. A user provided callable would make\n        # this class unserializable anyway.\n        if \"object_hook\" in value:\n            raise ValueError(\n                \"`object_hook` cannot be provided. Use `object_decoder` instead.\"\n            )\n        return value\n\n    def dumps(self, data: Any) -> bytes:\n        json = from_qualified_name(self.jsonlib)\n        kwargs = self.dumps_kwargs.copy()\n        if self.object_encoder:\n            kwargs[\"default\"] = from_qualified_name(self.object_encoder)\n        result = json.dumps(data, **kwargs)\n        if isinstance(result, str):\n            # The standard library returns str but others may return bytes directly\n            result = result.encode()\n        return result\n\n    def loads(self, blob: bytes) -> Any:\n        json = from_qualified_name(self.jsonlib)\n        kwargs = self.loads_kwargs.copy()\n        if self.object_decoder:\n            kwargs[\"object_hook\"] = from_qualified_name(self.object_decoder)\n        return json.loads(blob.decode(), **kwargs)\n
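A quick round-trip sketch:

from prefect.serializers import JSONSerializer

serializer = JSONSerializer()
blob = serializer.dumps({"answer": 42})  # UTF-8 encoded JSON as bytes
assert serializer.loads(blob) == {"answer": 42}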
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.PickleSerializer","title":"PickleSerializer","text":"

Bases: Serializer

Serializes objects using the pickle protocol.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
class PickleSerializer(Serializer):\n\"\"\"\n    Serializes objects using the pickle protocol.\n\n    - Uses `cloudpickle` by default. See `picklelib` for using alternative libraries.\n    - Stores the version of the pickle library to check for compatibility during\n        deserialization.\n    - Wraps pickles in base64 for safe transmission.\n    \"\"\"\n\n    type: Literal[\"pickle\"] = \"pickle\"\n\n    picklelib: str = \"cloudpickle\"\n    picklelib_version: str = None\n\n    @pydantic.validator(\"picklelib\")\n    def check_picklelib(cls, value):\n\"\"\"\n        Check that the given pickle library is importable and has dumps/loads methods.\n        \"\"\"\n        try:\n            pickler = from_qualified_name(value)\n        except (ImportError, AttributeError) as exc:\n            raise ValueError(\n                f\"Failed to import requested pickle library: {value!r}.\"\n            ) from exc\n\n        if not callable(getattr(pickler, \"dumps\", None)):\n            raise ValueError(\n                f\"Pickle library at {value!r} does not have a 'dumps' method.\"\n            )\n\n        if not callable(getattr(pickler, \"loads\", None)):\n            raise ValueError(\n                f\"Pickle library at {value!r} does not have a 'loads' method.\"\n            )\n\n        return value\n\n    @pydantic.root_validator\n    def check_picklelib_version(cls, values):\n\"\"\"\n        Infers a default value for `picklelib_version` if null or ensures it matches\n        the version retrieved from the `pickelib`.\n        \"\"\"\n        picklelib = values.get(\"picklelib\")\n        picklelib_version = values.get(\"picklelib_version\")\n\n        if not picklelib:\n            raise ValueError(\"Unable to check version of unrecognized picklelib module\")\n\n        pickler = from_qualified_name(picklelib)\n        pickler_version = getattr(pickler, \"__version__\", None)\n\n        if not picklelib_version:\n            values[\"picklelib_version\"] = pickler_version\n        elif picklelib_version != pickler_version:\n            warnings.warn(\n                (\n                    f\"Mismatched {picklelib!r} versions. Found {pickler_version} in the\"\n                    f\" environment but {picklelib_version} was requested. This may\"\n                    \" cause the serializer to fail.\"\n                ),\n                RuntimeWarning,\n                stacklevel=3,\n            )\n\n        return values\n\n    def dumps(self, obj: Any) -> bytes:\n        pickler = from_qualified_name(self.picklelib)\n        blob = pickler.dumps(obj)\n        return base64.encodebytes(blob)\n\n    def loads(self, blob: bytes) -> Any:\n        pickler = from_qualified_name(self.picklelib)\n        return pickler.loads(base64.decodebytes(blob))\n
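A brief round-trip sketch; cloudpickle, the default picklelib, can serialize objects the stdlib pickle cannot, such as lambdas:

from prefect.serializers import PickleSerializer

serializer = PickleSerializer()
blob = serializer.dumps(lambda x: x + 1)  # base64-wrapped pickle bytes
restored = serializer.loads(blob)
assert restored(1) == 2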
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.PickleSerializer.check_picklelib","title":"check_picklelib","text":"

Check that the given pickle library is importable and has dumps/loads methods.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
@pydantic.validator(\"picklelib\")\ndef check_picklelib(cls, value):\n\"\"\"\n    Check that the given pickle library is importable and has dumps/loads methods.\n    \"\"\"\n    try:\n        pickler = from_qualified_name(value)\n    except (ImportError, AttributeError) as exc:\n        raise ValueError(\n            f\"Failed to import requested pickle library: {value!r}.\"\n        ) from exc\n\n    if not callable(getattr(pickler, \"dumps\", None)):\n        raise ValueError(\n            f\"Pickle library at {value!r} does not have a 'dumps' method.\"\n        )\n\n    if not callable(getattr(pickler, \"loads\", None)):\n        raise ValueError(\n            f\"Pickle library at {value!r} does not have a 'loads' method.\"\n        )\n\n    return value\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.PickleSerializer.check_picklelib_version","title":"check_picklelib_version","text":"

Infers a default value for picklelib_version if null or ensures it matches the version retrieved from the picklelib.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
@pydantic.root_validator\ndef check_picklelib_version(cls, values):\n\"\"\"\n    Infers a default value for `picklelib_version` if null or ensures it matches\n    the version retrieved from the `pickelib`.\n    \"\"\"\n    picklelib = values.get(\"picklelib\")\n    picklelib_version = values.get(\"picklelib_version\")\n\n    if not picklelib:\n        raise ValueError(\"Unable to check version of unrecognized picklelib module\")\n\n    pickler = from_qualified_name(picklelib)\n    pickler_version = getattr(pickler, \"__version__\", None)\n\n    if not picklelib_version:\n        values[\"picklelib_version\"] = pickler_version\n    elif picklelib_version != pickler_version:\n        warnings.warn(\n            (\n                f\"Mismatched {picklelib!r} versions. Found {pickler_version} in the\"\n                f\" environment but {picklelib_version} was requested. This may\"\n                \" cause the serializer to fail.\"\n            ),\n            RuntimeWarning,\n            stacklevel=3,\n        )\n\n    return values\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.Serializer","title":"Serializer","text":"

Bases: BaseModel, Generic[D], abc.ABC

A serializer that can encode objects of type 'D' into bytes.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
@add_type_dispatch\nclass Serializer(BaseModel, Generic[D], abc.ABC):\n\"\"\"\n    A serializer that can encode objects of type 'D' into bytes.\n    \"\"\"\n\n    type: str\n\n    @abc.abstractmethod\n    def dumps(self, obj: D) -> bytes:\n\"\"\"Encode the object into a blob of bytes.\"\"\"\n\n    @abc.abstractmethod\n    def loads(self, blob: bytes) -> D:\n\"\"\"Decode the blob of bytes into an object.\"\"\"\n\n    class Config:\n        extra = \"forbid\"\n
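To sketch what a custom serializer might look like (the type name and behavior here are purely illustrative, modeled on the built-in subclasses above):

from typing import Any, Literal

from prefect.serializers import Serializer

class ReprSerializer(Serializer):
    """Hypothetical serializer that stores an object's repr as UTF-8 bytes."""

    type: Literal["repr"] = "repr"

    def dumps(self, obj: Any) -> bytes:
        return repr(obj).encode()

    def loads(self, blob: bytes) -> Any:
        # Illustrative only: repr() is not generally reversible
        return blob.decode()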
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.Serializer.dumps","title":"dumps abstractmethod","text":"

Encode the object into a blob of bytes.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
@abc.abstractmethod\ndef dumps(self, obj: D) -> bytes:\n\"\"\"Encode the object into a blob of bytes.\"\"\"\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.Serializer.loads","title":"loads abstractmethod","text":"

Decode the blob of bytes into an object.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
@abc.abstractmethod\ndef loads(self, blob: bytes) -> D:\n\"\"\"Decode the blob of bytes into an object.\"\"\"\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.prefect_json_object_decoder","title":"prefect_json_object_decoder","text":"

JSONDecoder.object_hook for decoding objects from JSON when previously encoded with prefect_json_object_encoder

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
def prefect_json_object_decoder(result: dict):\n\"\"\"\n    `JSONDecoder.object_hook` for decoding objects from JSON when previously encoded\n    with `prefect_json_object_encoder`\n    \"\"\"\n    if \"__class__\" in result:\n        return pydantic.parse_obj_as(\n            from_qualified_name(result[\"__class__\"]), result[\"data\"]\n        )\n    elif \"__exc_type__\" in result:\n        return from_qualified_name(result[\"__exc_type__\"])(result[\"message\"])\n    else:\n        return result\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.prefect_json_object_encoder","title":"prefect_json_object_encoder","text":"

JSONEncoder.default for encoding objects into JSON with extended type support.

Raises a TypeError to fallback on other encoders on failure.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
def prefect_json_object_encoder(obj: Any) -> Any:\n\"\"\"\n    `JSONEncoder.default` for encoding objects into JSON with extended type support.\n\n    Raises a `TypeError` to fallback on other encoders on failure.\n    \"\"\"\n    if isinstance(obj, BaseException):\n        return {\"__exc_type__\": to_qualified_name(obj.__class__), \"message\": str(obj)}\n    else:\n        return {\n            \"__class__\": to_qualified_name(obj.__class__),\n            \"data\": pydantic_encoder(obj),\n        }\n
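These two helpers are designed to be used together; a small round-trip sketch with the stdlib json module:

import json

from prefect.serializers import (
    prefect_json_object_decoder,
    prefect_json_object_encoder,
)

blob = json.dumps(ValueError("boom"), default=prefect_json_object_encoder)
restored = json.loads(blob, object_hook=prefect_json_object_decoder)
assert isinstance(restored, ValueError) and str(restored) == "boom"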
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/settings/","title":"prefect.settings","text":"","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings","title":"prefect.settings","text":"

Prefect settings management.

Each setting is defined as a Setting type. The name of each setting is stylized in all caps, matching the environment variable that can be used to change the setting.

All settings defined in this file are used to generate a dynamic Pydantic settings class called Settings. When instantiated, this class will load settings from environment variables and pull default values from the setting definitions.

The current instance of Settings being used by the application is stored in a SettingsContext model which allows each instance of the Settings class to be accessed in an async-safe manner.

Aside from environment variables, we allow settings to be changed during the runtime of the process using profiles. Profiles contain setting overrides that the user may persist without setting environment variables. Profiles are also used internally for managing settings during task run execution where differing settings may be used concurrently in the same process and during testing where we need to override settings to ensure their value is respected as intended.

The SettingsContext is set when the prefect module is imported. This context is referred to as the \"root\" settings context for clarity. Generally, this is the only settings context that will be used. When this context is entered, we will instantiate a Settings object, loading settings from environment variables and defaults, then we will load the active profile and use it to override settings. See enter_root_settings_context for details on determining the active profile.

Another SettingsContext may be entered at any time to change the settings being used by the code within the context. Generally, users should not use this. Settings management should be left to Prefect application internals.

Generally, settings should be accessed with SETTING_VARIABLE.value() which will pull the current Settings instance from the current SettingsContext and retrieve the value of the relevant setting.

Accessing a setting's value will also call the Setting.value_callback which allows settings to be dynamically modified on retrieval. This allows us to make settings dependent on the value of other settings or perform other dynamic effects.
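For example, reading a setting's resolved value typically looks like this; the particular settings chosen are just for illustration:

from prefect.settings import PREFECT_API_URL, PREFECT_LOGGING_LEVEL

# Resolves the value from the active profile, environment variables, or defaults
print(PREFECT_LOGGING_LEVEL.value())  # e.g. "INFO"
print(PREFECT_API_URL.value())        # None unless an API URL has been configured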

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_HOME","title":"PREFECT_HOME = Setting(Path, default=Path('~') / '.prefect', value_callback=expanduser_in_path) module-attribute","text":"

Prefect's home directory. Defaults to ~/.prefect. This directory may be created automatically when required.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXTRA_ENTRYPOINTS","title":"PREFECT_EXTRA_ENTRYPOINTS = Setting(str, default='') module-attribute","text":"

Modules for Prefect to import when Prefect is imported.

Values should be separated by commas, e.g. my_module,my_other_module. Objects within modules may be specified by a ':' partition, e.g. my_module:my_object. If a callable object is provided, it will be called with no arguments on import.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_DEBUG_MODE","title":"PREFECT_DEBUG_MODE = Setting(bool, default=False) module-attribute","text":"

If True, places the API in debug mode. This may modify behavior to facilitate debugging, including extra logs and other verbose assistance. Defaults to False.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLI_COLORS","title":"PREFECT_CLI_COLORS = Setting(bool, default=True) module-attribute","text":"

If True, use colors in CLI output. If False, output will not include color codes. Defaults to True.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLI_PROMPT","title":"PREFECT_CLI_PROMPT = Setting(Optional[bool], default=None) module-attribute","text":"

If True, use interactive prompts in CLI commands. If False, no interactive prompts will be used. If None, the value will be dynamically determined based on the presence of an interactive-enabled terminal.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLI_WRAP_LINES","title":"PREFECT_CLI_WRAP_LINES = Setting(bool, default=True) module-attribute","text":"

If True, wrap text by inserting new lines in long lines in CLI output. If False, output will not be wrapped. Defaults to True.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TEST_MODE","title":"PREFECT_TEST_MODE = Setting(bool, default=False) module-attribute","text":"

If True, places the API in test mode. This may modify behavior to facilitate testing. Defaults to False.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UNIT_TEST_MODE","title":"PREFECT_UNIT_TEST_MODE = Setting(bool, default=False) module-attribute","text":"

This variable only exists to facilitate unit testing. If True, code is executing in a unit test context. Defaults to False.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TEST_SETTING","title":"PREFECT_TEST_SETTING = Setting(Any, default=None, value_callback=only_return_value_in_test_mode) module-attribute","text":"

This variable only exists to facilitate testing of settings. If accessed when PREFECT_TEST_MODE is not set, None is returned.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_TLS_INSECURE_SKIP_VERIFY","title":"PREFECT_API_TLS_INSECURE_SKIP_VERIFY = Setting(bool, default=False) module-attribute","text":"

If True, disables SSL checking to allow insecure requests. This is recommended only during development, e.g. when using self-signed certificates.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_URL","title":"PREFECT_API_URL = Setting(str, default=None) module-attribute","text":"

If provided, the URL of a hosted Prefect API. Defaults to None.

When using Prefect Cloud, this will include an account and workspace.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_KEY","title":"PREFECT_API_KEY = Setting(str, default=None, is_secret=True) module-attribute","text":"

API key used to authenticate with the Prefect API. Defaults to None.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_ENABLE_HTTP2","title":"PREFECT_API_ENABLE_HTTP2 = Setting(bool, default=True) module-attribute","text":"

If true, enable support for HTTP/2 for communicating with an API.

If the API does not support HTTP/2, this will have no effect and connections will be made via HTTP/1.1.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLIENT_MAX_RETRIES","title":"PREFECT_CLIENT_MAX_RETRIES = Setting(int, default=5) module-attribute","text":"

The maximum number of retries to perform on failed HTTP requests.

Defaults to 5. Set to 0 to disable retries.

See PREFECT_CLIENT_RETRY_EXTRA_CODES for details on which HTTP status codes are retried.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLIENT_RETRY_JITTER_FACTOR","title":"PREFECT_CLIENT_RETRY_JITTER_FACTOR = Setting(float, default=0.2) module-attribute","text":"

A value greater than or equal to zero to control the amount of jitter added to retried client requests. Higher values introduce larger amounts of jitter.

Set to 0 to disable jitter. See clamped_poisson_interval for details on how jitter can affect retry lengths.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLIENT_RETRY_EXTRA_CODES","title":"PREFECT_CLIENT_RETRY_EXTRA_CODES = Setting(str, default='', value_callback=status_codes_as_integers_in_range) module-attribute","text":"

A comma-separated list of extra HTTP status codes to retry on. Defaults to an empty string. 429 and 503 are always retried. Please note that not all routes are idempotent and retrying may result in unexpected behavior.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLOUD_API_URL","title":"PREFECT_CLOUD_API_URL = Setting(str, default='https://api.prefect.cloud/api', value_callback=check_for_deprecated_cloud_url) module-attribute","text":"

API URL for Prefect Cloud. Used for authentication.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLOUD_URL","title":"PREFECT_CLOUD_URL = Setting(str, default=None, deprecated=True, deprecated_start_date='Dec 2022', deprecated_help='Use `PREFECT_CLOUD_API_URL` instead.') module-attribute","text":"","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_URL","title":"PREFECT_UI_URL = Setting(Optional[str], default=None, value_callback=default_ui_url) module-attribute","text":"

The URL for the UI. By default, this is inferred from the PREFECT_API_URL.

When using Prefect Cloud, this will include the account and workspace. When using an ephemeral server, this will be None.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLOUD_UI_URL","title":"PREFECT_CLOUD_UI_URL = Setting(str, default=None, value_callback=default_cloud_ui_url) module-attribute","text":"

The URL for the Cloud UI. By default, this is inferred from the PREFECT_CLOUD_API_URL.

PREFECT_UI_URL will be workspace-specific and is usable with the open source server as well.

In contrast, this value is only valid for Cloud and will not include the workspace.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_REQUEST_TIMEOUT","title":"PREFECT_API_REQUEST_TIMEOUT = Setting(float, default=30.0) module-attribute","text":"

The default timeout for requests to the API

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN","title":"PREFECT_EXPERIMENTAL_WARN = Setting(bool, default=True) module-attribute","text":"

If enabled, warn on usage of experimental features.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_PROFILES_PATH","title":"PREFECT_PROFILES_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'profiles.toml', value_callback=template_with_settings(PREFECT_HOME)) module-attribute","text":"

The path to the profiles configuration file.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RESULTS_DEFAULT_SERIALIZER","title":"PREFECT_RESULTS_DEFAULT_SERIALIZER = Setting(str, default='pickle') module-attribute","text":"

The default serializer to use when not otherwise specified.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RESULTS_PERSIST_BY_DEFAULT","title":"PREFECT_RESULTS_PERSIST_BY_DEFAULT = Setting(bool, default=False) module-attribute","text":"

The default setting for persisting results when not otherwise specified. If enabled, flow and task results will be persisted unless they opt out.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASKS_REFRESH_CACHE","title":"PREFECT_TASKS_REFRESH_CACHE = Setting(bool, default=False) module-attribute","text":"

If True, enables a refresh of cached results: re-executing the task will refresh the cached results. Defaults to False.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_DEFAULT_RETRIES","title":"PREFECT_TASK_DEFAULT_RETRIES = Setting(int, default=0) module-attribute","text":"

This value sets the default number of retries for all tasks. This value does not overwrite individually set retry values on tasks.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_FLOW_DEFAULT_RETRIES","title":"PREFECT_FLOW_DEFAULT_RETRIES = Setting(int, default=0) module-attribute","text":"

This value sets the default number of retries for all flows. This value does not overwrite individually set retry values on a flow.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS","title":"PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS = Setting(Union[int, float], default=0) module-attribute","text":"

This value sets the default retry delay, in seconds, for all flows. It does not override retry delays set on individual flows.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS","title":"PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS = Setting(Union[float, int, List[float]], default=0) module-attribute","text":"

This value sets the default retry delay, in seconds, for all tasks. It does not override retry delays set on individual tasks.
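
Likewise, delays set on an individual task take precedence over this default; a minimal sketch using a list of per-retry delays (flaky_task is a placeholder name):

from prefect import task\n\n@task(retries=2, retry_delay_seconds=[1, 10])  # per-task delays win over the default\ndef flaky_task():\n    ...\n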

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOCAL_STORAGE_PATH","title":"PREFECT_LOCAL_STORAGE_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'storage', value_callback=template_with_settings(PREFECT_HOME)) module-attribute","text":"

The path to a directory where Prefect stores locally persisted data, such as results.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_MEMO_STORE_PATH","title":"PREFECT_MEMO_STORE_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'memo_store.toml', value_callback=template_with_settings(PREFECT_HOME)) module-attribute","text":"

The path to the memo store file.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_MEMOIZE_BLOCK_AUTO_REGISTRATION","title":"PREFECT_MEMOIZE_BLOCK_AUTO_REGISTRATION = Setting(bool, default=True) module-attribute","text":"

Controls whether or not block auto-registration on startup should be memoized. Setting to False may result in slower server startup times.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_LEVEL","title":"PREFECT_LOGGING_LEVEL = Setting(str, default='INFO', value_callback=debug_mode_log_level) module-attribute","text":"

The default logging level for Prefect loggers. Defaults to \"INFO\" during normal operation. Is forced to \"DEBUG\" during debug mode.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_INTERNAL_LEVEL","title":"PREFECT_LOGGING_INTERNAL_LEVEL = Setting(str, default='ERROR', value_callback=debug_mode_log_level) module-attribute","text":"

The default logging level for Prefect's internal machinery loggers. Defaults to \"ERROR\" during normal operation. Is forced to \"DEBUG\" during debug mode.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_SERVER_LEVEL","title":"PREFECT_LOGGING_SERVER_LEVEL = Setting(str, default='WARNING') module-attribute","text":"

The default logging level for the Prefect API server.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_SETTINGS_PATH","title":"PREFECT_LOGGING_SETTINGS_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'logging.yml', value_callback=template_with_settings(PREFECT_HOME)) module-attribute","text":"

The path to a custom YAML logging configuration file. If no file is found, the default logging.yml is used. Defaults to a logging.yml in the Prefect home directory.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_EXTRA_LOGGERS","title":"PREFECT_LOGGING_EXTRA_LOGGERS = Setting(str, default='', value_callback=get_extra_loggers) module-attribute","text":"

Additional loggers to attach to Prefect logging at runtime. Values should be comma separated. The handlers attached to the 'prefect' logger will be added to these loggers. Additionally, if the level is not set, it will be set to the same level as the 'prefect' logger.
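
For example (logger names are illustrative):

PREFECT_LOGGING_EXTRA_LOGGERS='my_library,another_library'\n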

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_LOG_PRINTS","title":"PREFECT_LOGGING_LOG_PRINTS = Setting(bool, default=False) module-attribute","text":"

If set, print statements in flows and tasks will be redirected to the Prefect logger for the given run. This setting can be overridden by individual tasks and flows.
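
For example, a single flow can opt in regardless of the global value (my_flow is a placeholder name):

from prefect import flow\n\n@flow(log_prints=True)  # overrides PREFECT_LOGGING_LOG_PRINTS for this flow\ndef my_flow():\n    print('sent to the Prefect logger')\n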

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_ENABLED","title":"PREFECT_LOGGING_TO_API_ENABLED = Setting(bool, default=True) module-attribute","text":"

Toggles sending logs to the API. If False, logs sent to the API log handler will not be sent to the API.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_BATCH_INTERVAL","title":"PREFECT_LOGGING_TO_API_BATCH_INTERVAL = Setting(float, default=2.0) module-attribute","text":"

The number of seconds between batched writes of logs to the API.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_BATCH_SIZE","title":"PREFECT_LOGGING_TO_API_BATCH_SIZE = Setting(int, default=4000000) module-attribute","text":"

The maximum size in bytes for a batch of logs.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_MAX_LOG_SIZE","title":"PREFECT_LOGGING_TO_API_MAX_LOG_SIZE = Setting(int, default=1000000) module-attribute","text":"

The maximum size in bytes for a single log.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW","title":"PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW = Setting(Literal['warn', 'error', 'ignore'], default='warn') module-attribute","text":"

Controls the behavior when loggers attempt to send logs to the API handler from outside of a flow.

All logs sent to the API must be associated with a flow run. The API log handler can only be used outside of a flow by manually providing a flow run identifier. Logs that are not associated with a flow run will not be sent to the API. This setting can be used to determine if a warning or error is displayed when the identifier is missing.

The following options are available: \"warn\" (display a warning), \"error\" (raise an error), and \"ignore\" (silently drop the log).

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_COLORS","title":"PREFECT_LOGGING_COLORS = Setting(bool, default=True) module-attribute","text":"

Whether to style console logs with color.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_MARKUP","title":"PREFECT_LOGGING_MARKUP = Setting(bool, default=False) module-attribute","text":"

Whether to interpret strings wrapped in square brackets as a style. This allows styles to be conveniently added to log messages, e.g. [red]This is a red message.[/red]. However, if enabled, strings that contain square brackets may be inaccurately interpreted and lead to incomplete output, e.g. DROP TABLE [dbo].[SomeTable]; outputs DROP TABLE .[SomeTable];.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_AGENT_QUERY_INTERVAL","title":"PREFECT_AGENT_QUERY_INTERVAL = Setting(float, default=15) module-attribute","text":"

The agent loop interval, in seconds. Agents will check for new runs this often. Defaults to 15.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_AGENT_PREFETCH_SECONDS","title":"PREFECT_AGENT_PREFETCH_SECONDS = Setting(int, default=15) module-attribute","text":"

Agents will look for scheduled runs this many seconds in the future and attempt to run them. This accounts for any additional infrastructure spin-up time or latency in preparing a flow run. Note flow runs will not start before their scheduled time, even if they are prefetched. Defaults to 15.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ASYNC_FETCH_STATE_RESULT","title":"PREFECT_ASYNC_FETCH_STATE_RESULT = Setting(bool, default=False) module-attribute","text":"

Determines whether State.result() fetches results automatically or not. In Prefect 2.6.0, the State.result() method was updated to be async to facilitate automatic retrieval of results from storage, which means that when writing async code you must await the call. For backwards compatibility, the result is not retrieved by default for async users. You may opt in per call by passing fetch=True or toggle this setting to change the behavior globally. This setting does not affect users writing synchronous tasks and flows, nor does it affect retrieval of results when using Future.result().
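
A minimal sketch of the per-call opt-in, assuming an async flow called with return_state=True (names are placeholders):

import asyncio\nfrom prefect import flow\n\n@flow\nasync def my_flow():\n    return 1\n\nasync def main():\n    state = await my_flow(return_state=True)\n    result = await state.result(fetch=True)  # opt in to fetching the result for this call\n    print(result)\n\nasyncio.run(main())\n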

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_BLOCKS_REGISTER_ON_START","title":"PREFECT_API_BLOCKS_REGISTER_ON_START = Setting(bool, default=True) module-attribute","text":"

If set, any block types that have been imported will be registered with the backend on application startup. If not set, block types must be manually registered.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_PASSWORD","title":"PREFECT_API_DATABASE_PASSWORD = Setting(str, default=None, is_secret=True) module-attribute","text":"

Password to template into the PREFECT_API_DATABASE_CONNECTION_URL. This is useful if the password must be provided separately from the connection URL. To use this setting, you must include it in your connection URL.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_CONNECTION_URL","title":"PREFECT_API_DATABASE_CONNECTION_URL = Setting(str, default=None, value_callback=default_database_connection_url, is_secret=True) module-attribute","text":"

A database connection URL in a SQLAlchemy-compatible format. Prefect currently supports SQLite and Postgres. Note that all Prefect database engines must use an async driver - for SQLite, use sqlite+aiosqlite and for Postgres use postgresql+asyncpg.

SQLite in-memory databases can be used by providing the url sqlite+aiosqlite:///file::memory:?cache=shared&uri=true&check_same_thread=false, which will allow the database to be accessed by multiple threads. Note that in-memory databases can not be accessed from multiple processes and should only be used for simple tests.

Defaults to a sqlite database stored in the Prefect home directory.

If you need to provide the password via a different environment variable, use the PREFECT_API_DATABASE_PASSWORD setting. For example:

PREFECT_API_DATABASE_PASSWORD='mypassword'\nPREFECT_API_DATABASE_CONNECTION_URL='postgresql+asyncpg://postgres:${PREFECT_API_DATABASE_PASSWORD}@localhost/prefect'\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_ECHO","title":"PREFECT_API_DATABASE_ECHO = Setting(bool, default=False) module-attribute","text":"

If True, SQLAlchemy will log all SQL issued to the database. Defaults to False.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_MIGRATE_ON_START","title":"PREFECT_API_DATABASE_MIGRATE_ON_START = Setting(bool, default=True) module-attribute","text":"

If True, the database will be upgraded on application creation. If False, the database will need to be upgraded manually.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_TIMEOUT","title":"PREFECT_API_DATABASE_TIMEOUT = Setting(Optional[float], default=10.0) module-attribute","text":"

A statement timeout, in seconds, applied to all database interactions made by the API. Defaults to 10 seconds.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_CONNECTION_TIMEOUT","title":"PREFECT_API_DATABASE_CONNECTION_TIMEOUT = Setting(Optional[float], default=5) module-attribute","text":"

A connection timeout, in seconds, applied to database connections. Defaults to 5.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS","title":"PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS = Setting(float, default=60) module-attribute","text":"

The scheduler loop interval, in seconds. This determines how often the scheduler will attempt to schedule new flow runs, but has no impact on how quickly either flow runs or task runs are actually executed. Defaults to 60.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE","title":"PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE = Setting(int, default=100) module-attribute","text":"

The number of deployments the scheduler will attempt to schedule in a single batch. If there are more deployments than the batch size, the scheduler immediately attempts to schedule the next batch; it does not sleep for scheduler_loop_seconds until it has visited every deployment once. Defaults to 100.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS","title":"PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS = Setting(int, default=100) module-attribute","text":"

The scheduler will attempt to schedule up to this many auto-scheduled runs in the future. Note that deployments may have fewer than this many scheduled runs, depending on the value of scheduler_max_scheduled_time. Defaults to 100.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS","title":"PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS = Setting(int, default=3) module-attribute","text":"

The scheduler will attempt to schedule at least this many auto-scheduled runs in the future. Note that deployments may have more than this many scheduled runs, depending on the value of scheduler_min_scheduled_time. Defaults to 3.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME","title":"PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME = Setting(timedelta, default=timedelta(days=100)) module-attribute","text":"

The scheduler will create new runs up to this far in the future. Note that this setting will take precedence over scheduler_max_runs: if a flow runs once a month and scheduler_max_scheduled_time is three months, then only three runs will be scheduled. Defaults to 100 days (8640000 seconds).

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME","title":"PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME = Setting(timedelta, default=timedelta(hours=1)) module-attribute","text":"

The scheduler will create new runs at least this far in the future. Note that this setting will take precedence over scheduler_min_runs: if a flow runs every hour and scheduler_min_scheduled_time is three hours, then three runs will be scheduled even if scheduler_min_runs is 1. Defaults to 1 hour (3600 seconds).

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE","title":"PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE = Setting(int, default=500) module-attribute","text":"

The number of flow runs the scheduler will attempt to insert in one batch across all deployments. If the number of flow runs to schedule exceeds this amount, the runs will be inserted in batches of this size. Defaults to 500.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS","title":"PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS = Setting(float, default=5) module-attribute","text":"

The late runs service will look for runs to mark as late this often. Defaults to 5.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS","title":"PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS = Setting(timedelta, default=timedelta(seconds=5)) module-attribute","text":"

The late runs service will mark runs as late after they have exceeded their scheduled start time by this many seconds. Defaults to 5 seconds.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS","title":"PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS = Setting(float, default=5) module-attribute","text":"

The pause expiration service will look for runs to mark as failed this often. Defaults to 5.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS","title":"PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS = Setting(float, default=20) module-attribute","text":"

The cancellation cleanup service will look for non-terminal tasks and subflows this often. Defaults to 20.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DEFAULT_LIMIT","title":"PREFECT_API_DEFAULT_LIMIT = Setting(int, default=200) module-attribute","text":"

The default limit applied to queries that can return multiple objects, such as POST /flow_runs/filter.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_API_HOST","title":"PREFECT_SERVER_API_HOST = Setting(str, default='127.0.0.1') module-attribute","text":"

The API's host address (defaults to 127.0.0.1).

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_API_PORT","title":"PREFECT_SERVER_API_PORT = Setting(int, default=4200) module-attribute","text":"

The API's port (defaults to 4200).
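
For example, to serve the API on all interfaces (values are illustrative):

PREFECT_SERVER_API_HOST='0.0.0.0'\nPREFECT_SERVER_API_PORT='4200'\n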

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_API_KEEPALIVE_TIMEOUT","title":"PREFECT_SERVER_API_KEEPALIVE_TIMEOUT = Setting(int, default=5) module-attribute","text":"

The API's keep-alive timeout (defaults to 5). Refer to https://www.uvicorn.org/settings/#timeouts for details.

When the API is hosted behind a load balancer, you may want to set this to a value greater than the load balancer's idle timeout.

Note this setting only applies when calling prefect server start; if hosting the API with another tool you will need to configure this there instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_ENABLED","title":"PREFECT_UI_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to serve the Prefect UI.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_API_URL","title":"PREFECT_UI_API_URL = Setting(str, default=None, value_callback=default_ui_api_url) module-attribute","text":"

The connection url for communication from the UI to the API. Defaults to PREFECT_API_URL if set. Otherwise, the default URL is generated from PREFECT_SERVER_API_HOST and PREFECT_SERVER_API_PORT. If providing a custom value, the aforementioned settings may be templated into the given string.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_ANALYTICS_ENABLED","title":"PREFECT_SERVER_ANALYTICS_ENABLED = Setting(bool, default=True) module-attribute","text":"

When enabled, Prefect sends anonymous data (e.g. count of flow runs, package version) on server startup to help us improve our product.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_ENABLED","title":"PREFECT_API_SERVICES_SCHEDULER_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the scheduling service in the server application. If disabled, you will need to run this service separately to schedule runs for deployments.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_ENABLED","title":"PREFECT_API_SERVICES_LATE_RUNS_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the late runs service in the server application. If disabled, you will need to run this service separately to have runs past their scheduled start time marked as late.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED","title":"PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the flow run notifications service in the server application. If disabled, you will need to run this service separately to send flow run notifications.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED","title":"PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the paused flow run expiration service in the server application. If disabled, paused flows that have timed out will remain in a Paused state until a resume attempt.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH","title":"PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH = Setting(int, default=2000) module-attribute","text":"

The maximum number of characters allowed for a task run cache key. This setting cannot be changed client-side; it must be set on the server.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED","title":"PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the cancellation cleanup service in the server application. If disabled, task runs and subflow runs belonging to cancelled flows may remain in non-terminal states.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_EVENTS_CLIENT","title":"PREFECT_EXPERIMENTAL_ENABLE_EVENTS_CLIENT = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable the experimental Prefect events client.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_EVENTS_CLIENT","title":"PREFECT_EXPERIMENTAL_WARN_EVENTS_CLIENT = Setting(bool, default=False) module-attribute","text":"

Whether or not to warn when the experimental Prefect events client is used.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORK_POOLS","title":"PREFECT_EXPERIMENTAL_ENABLE_WORK_POOLS = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable experimental Prefect work pools.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_WORK_POOLS","title":"PREFECT_EXPERIMENTAL_WARN_WORK_POOLS = Setting(bool, default=False) module-attribute","text":"

Whether or not to warn when experimental Prefect work pools are used.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORKERS","title":"PREFECT_EXPERIMENTAL_ENABLE_WORKERS = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable experimental Prefect workers.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_WORKERS","title":"PREFECT_EXPERIMENTAL_WARN_WORKERS = Setting(bool, default=False) module-attribute","text":"

Whether or not to warn when experimental Prefect workers are used.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_HEARTBEAT_SECONDS","title":"PREFECT_WORKER_HEARTBEAT_SECONDS = Setting(float, default=30) module-attribute","text":"

Number of seconds a worker should wait between sending heartbeats.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_QUERY_SECONDS","title":"PREFECT_WORKER_QUERY_SECONDS = Setting(float, default=10) module-attribute","text":"

Number of seconds a worker should wait between queries for scheduled flow runs.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_PREFETCH_SECONDS","title":"PREFECT_WORKER_PREFETCH_SECONDS = Setting(float, default=10) module-attribute","text":"

The number of seconds into the future a worker should query for scheduled flow runs. Can be used to compensate for infrastructure start up time for a worker.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS","title":"PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable experimental Prefect artifacts.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_ARTIFACTS","title":"PREFECT_EXPERIMENTAL_WARN_ARTIFACTS = Setting(bool, default=False) module-attribute","text":"

Whether or not to warn when experimental Prefect artifacts are used.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORKSPACE_DASHBOARD","title":"PREFECT_EXPERIMENTAL_ENABLE_WORKSPACE_DASHBOARD = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable the experimental workspace dashboard.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_WORKSPACE_DASHBOARD","title":"PREFECT_EXPERIMENTAL_WARN_WORKSPACE_DASHBOARD = Setting(bool, default=False) module-attribute","text":"

Whether or not to warn when the experimental workspace dashboard is enabled.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_ORION_ENABLED","title":"PREFECT_LOGGING_ORION_ENABLED = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_LOGGING_TO_API_ENABLED` instead.', deprecated_renamed_to=PREFECT_LOGGING_TO_API_ENABLED) module-attribute","text":"

Deprecated. Use PREFECT_LOGGING_TO_API_ENABLED instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_ORION_BATCH_INTERVAL","title":"PREFECT_LOGGING_ORION_BATCH_INTERVAL = Setting(Optional[float], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_LOGGING_TO_API_BATCH_INTERVAL` instead.', deprecated_renamed_to=PREFECT_LOGGING_TO_API_BATCH_INTERVAL) module-attribute","text":"

Deprecated. Use PREFECT_LOGGING_TO_API_BATCH_INTERVAL instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_ORION_BATCH_SIZE","title":"PREFECT_LOGGING_ORION_BATCH_SIZE = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_LOGGING_TO_API_BATCH_SIZE` instead.', deprecated_renamed_to=PREFECT_LOGGING_TO_API_BATCH_SIZE) module-attribute","text":"

Deprecated. Use PREFECT_LOGGING_TO_API_BATCH_SIZE instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_ORION_MAX_LOG_SIZE","title":"PREFECT_LOGGING_ORION_MAX_LOG_SIZE = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_LOGGING_TO_API_MAX_LOG_SIZE` instead.', deprecated_renamed_to=PREFECT_LOGGING_TO_API_MAX_LOG_SIZE) module-attribute","text":"

Deprecated. Use PREFECT_LOGGING_TO_API_MAX_LOG_SIZE instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_ORION_WHEN_MISSING_FLOW","title":"PREFECT_LOGGING_ORION_WHEN_MISSING_FLOW = Setting(Optional[Literal['warn', 'error', 'ignore']], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW` instead.', deprecated_renamed_to=PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW) module-attribute","text":"

Deprecated. Use PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_BLOCKS_REGISTER_ON_START","title":"PREFECT_ORION_BLOCKS_REGISTER_ON_START = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_BLOCKS_REGISTER_ON_START` instead.', deprecated_renamed_to=PREFECT_API_BLOCKS_REGISTER_ON_START) module-attribute","text":"

Deprecated. Use PREFECT_API_BLOCKS_REGISTER_ON_START instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_DATABASE_PASSWORD","title":"PREFECT_ORION_DATABASE_PASSWORD = Setting(Optional[str], default=None, is_secret=True, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_DATABASE_PASSWORD` instead.', deprecated_renamed_to=PREFECT_API_DATABASE_PASSWORD) module-attribute","text":"

Deprecated. Use PREFECT_API_DATABASE_PASSWORD instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_DATABASE_CONNECTION_URL","title":"PREFECT_ORION_DATABASE_CONNECTION_URL = Setting(Optional[str], default=None, value_callback=template_with_settings(PREFECT_HOME, PREFECT_API_DATABASE_PASSWORD), is_secret=True, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_DATABASE_CONNECTION_URL` instead.', deprecated_renamed_to=PREFECT_API_DATABASE_CONNECTION_URL) module-attribute","text":"

Deprecated. Use PREFECT_API_DATABASE_CONNECTION_URL instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_DATABASE_ECHO","title":"PREFECT_ORION_DATABASE_ECHO = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_DATABASE_ECHO` instead.', deprecated_renamed_to=PREFECT_API_DATABASE_ECHO) module-attribute","text":"

Deprecated. Use PREFECT_API_DATABASE_ECHO instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_DATABASE_MIGRATE_ON_START","title":"PREFECT_ORION_DATABASE_MIGRATE_ON_START = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_DATABASE_MIGRATE_ON_START` instead.', deprecated_renamed_to=PREFECT_API_DATABASE_MIGRATE_ON_START) module-attribute","text":"

Deprecated. Use PREFECT_API_DATABASE_MIGRATE_ON_START instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_DATABASE_TIMEOUT","title":"PREFECT_ORION_DATABASE_TIMEOUT = Setting(Optional[float], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_DATABASE_TIMEOUT` instead.', deprecated_renamed_to=PREFECT_API_DATABASE_TIMEOUT) module-attribute","text":"

Deprecated. Use PREFECT_API_DATABASE_TIMEOUT instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_DATABASE_CONNECTION_TIMEOUT","title":"PREFECT_ORION_DATABASE_CONNECTION_TIMEOUT = Setting(Optional[float], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_DATABASE_CONNECTION_TIMEOUT` instead.', deprecated_renamed_to=PREFECT_API_DATABASE_CONNECTION_TIMEOUT) module-attribute","text":"

Deprecated. Use PREFECT_API_DATABASE_CONNECTION_TIMEOUT instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_SCHEDULER_LOOP_SECONDS","title":"PREFECT_ORION_SERVICES_SCHEDULER_LOOP_SECONDS = Setting(Optional[float], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE","title":"PREFECT_ORION_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_SCHEDULER_MAX_RUNS","title":"PREFECT_ORION_SERVICES_SCHEDULER_MAX_RUNS = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_SCHEDULER_MIN_RUNS","title":"PREFECT_ORION_SERVICES_SCHEDULER_MIN_RUNS = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME","title":"PREFECT_ORION_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME = Setting(Optional[timedelta], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME","title":"PREFECT_ORION_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME = Setting(Optional[timedelta], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_SCHEDULER_INSERT_BATCH_SIZE","title":"PREFECT_ORION_SERVICES_SCHEDULER_INSERT_BATCH_SIZE = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_LATE_RUNS_LOOP_SECONDS","title":"PREFECT_ORION_SERVICES_LATE_RUNS_LOOP_SECONDS = Setting(Optional[float], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_LATE_RUNS_AFTER_SECONDS","title":"PREFECT_ORION_SERVICES_LATE_RUNS_AFTER_SECONDS = Setting(Optional[timedelta], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS","title":"PREFECT_ORION_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS = Setting(Optional[float], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS","title":"PREFECT_ORION_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS = Setting(Optional[float], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_API_DEFAULT_LIMIT","title":"PREFECT_ORION_API_DEFAULT_LIMIT = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_DEFAULT_LIMIT` instead.', deprecated_renamed_to=PREFECT_API_DEFAULT_LIMIT) module-attribute","text":"

Deprecated. Use PREFECT_API_DEFAULT_LIMIT instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_API_HOST","title":"PREFECT_ORION_API_HOST = Setting(Optional[str], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_SERVER_API_HOST` instead.', deprecated_renamed_to=PREFECT_SERVER_API_HOST) module-attribute","text":"

Deprecated. Use PREFECT_SERVER_API_HOST instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_API_PORT","title":"PREFECT_ORION_API_PORT = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_SERVER_API_PORT` instead.', deprecated_renamed_to=PREFECT_SERVER_API_PORT) module-attribute","text":"

Deprecated. Use PREFECT_SERVER_API_PORT instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_API_KEEPALIVE_TIMEOUT","title":"PREFECT_ORION_API_KEEPALIVE_TIMEOUT = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_SERVER_API_KEEPALIVE_TIMEOUT` instead.', deprecated_renamed_to=PREFECT_SERVER_API_KEEPALIVE_TIMEOUT) module-attribute","text":"

Deprecated. Use PREFECT_SERVER_API_KEEPALIVE_TIMEOUT instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_UI_ENABLED","title":"PREFECT_ORION_UI_ENABLED = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_UI_ENABLED` instead.', deprecated_renamed_to=PREFECT_UI_ENABLED) module-attribute","text":"

Deprecated. Use PREFECT_UI_ENABLED instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_UI_API_URL","title":"PREFECT_ORION_UI_API_URL = Setting(Optional[str], default=None, value_callback=default_ui_api_url, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_UI_API_URL` instead.', deprecated_renamed_to=PREFECT_UI_API_URL) module-attribute","text":"

Deprecated. Use PREFECT_UI_API_URL instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_ANALYTICS_ENABLED","title":"PREFECT_ORION_ANALYTICS_ENABLED = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_SERVER_ANALYTICS_ENABLED` instead.', deprecated_renamed_to=PREFECT_SERVER_ANALYTICS_ENABLED) module-attribute","text":"

Deprecated. Use PREFECT_SERVER_ANALYTICS_ENABLED instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_SCHEDULER_ENABLED","title":"PREFECT_ORION_SERVICES_SCHEDULER_ENABLED = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_SCHEDULER_ENABLED` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_SCHEDULER_ENABLED) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_SCHEDULER_ENABLED instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_LATE_RUNS_ENABLED","title":"PREFECT_ORION_SERVICES_LATE_RUNS_ENABLED = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_LATE_RUNS_ENABLED` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_LATE_RUNS_ENABLED) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_LATE_RUNS_ENABLED instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED","title":"PREFECT_ORION_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_PAUSE_EXPIRATIONS_ENABLED","title":"PREFECT_ORION_SERVICES_PAUSE_EXPIRATIONS_ENABLED = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_TASK_CACHE_KEY_MAX_LENGTH","title":"PREFECT_ORION_TASK_CACHE_KEY_MAX_LENGTH = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH` instead.', deprecated_renamed_to=PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH) module-attribute","text":"

Deprecated. Use PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_CANCELLATION_CLEANUP_ENABLED","title":"PREFECT_ORION_SERVICES_CANCELLATION_CLEANUP_ENABLED = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Setting","title":"Setting","text":"

Bases: Generic[T]

Setting definition type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
class Setting(Generic[T]):\n\"\"\"\n    Setting definition type.\n    \"\"\"\n\n    def __init__(\n        self,\n        type: Type[T],\n        *,\n        deprecated: bool = False,\n        deprecated_start_date: Optional[str] = None,\n        deprecated_end_date: Optional[str] = None,\n        deprecated_help: str = \"\",\n        deprecated_when_message: str = \"\",\n        deprecated_when: Optional[Callable[[Any], bool]] = None,\n        deprecated_renamed_to: Optional[\"Setting\"] = None,\n        value_callback: Callable[[\"Settings\", T], T] = None,\n        is_secret: bool = False,\n        **kwargs,\n    ) -> None:\n        self.field: pydantic.fields.FieldInfo = Field(**kwargs)\n        self.type = type\n        self.value_callback = value_callback\n        self._name = None\n        self.is_secret = is_secret\n        self.deprecated = deprecated\n        self.deprecated_start_date = deprecated_start_date\n        self.deprecated_end_date = deprecated_end_date\n        self.deprecated_help = deprecated_help\n        self.deprecated_when = deprecated_when or (lambda _: True)\n        self.deprecated_when_message = deprecated_when_message\n        self.deprecated_renamed_to = deprecated_renamed_to\n        self.deprecated_renamed_from = None\n        self.__doc__ = self.field.description\n\n        # Validate the deprecation settings, will throw an error at setting definition\n        # time if the developer has not configured it correctly\n        if deprecated:\n            generate_deprecation_message(\n                name=\"...\",  # setting names not populated until after init\n                start_date=self.deprecated_start_date,\n                end_date=self.deprecated_end_date,\n                help=self.deprecated_help,\n                when=self.deprecated_when_message,\n            )\n\n        if deprecated_renamed_to is not None:\n            # Track the deprecation both ways\n            deprecated_renamed_to.deprecated_renamed_from = self\n\n    def value(self, bypass_callback: bool = False) -> T:\n\"\"\"\n        Get the current value of a setting.\n\n        Example:\n        ```python\n        from prefect.settings import PREFECT_API_URL\n        PREFECT_API_URL.value()\n        ```\n        \"\"\"\n        return self.value_from(get_current_settings(), bypass_callback=bypass_callback)\n\n    def value_from(self, settings: \"Settings\", bypass_callback: bool = False) -> T:\n\"\"\"\n        Get the value of a setting from a settings object\n\n        Example:\n        ```python\n        from prefect.settings import get_default_settings\n        PREFECT_API_URL.value_from(get_default_settings())\n        ```\n        \"\"\"\n        value = settings.value_of(self, bypass_callback=bypass_callback)\n\n        if not bypass_callback and self.deprecated and self.deprecated_when(value):\n            # Check if this setting is deprecated and someone is accessing the value\n            # via the old name\n            warnings.warn(self.deprecated_message, DeprecationWarning, stacklevel=3)\n\n            # If the the value is empty, return the new setting's value for compat\n            if value is None and self.deprecated_renamed_to is not None:\n                return self.deprecated_renamed_to.value_from(settings)\n\n        if not bypass_callback and self.deprecated_renamed_from is not None:\n            # Check if this setting is a rename of a deprecated setting and the\n            # deprecated setting is set and should be used for compatibility\n            
deprecated_value = self.deprecated_renamed_from.value_from(\n                settings, bypass_callback=True\n            )\n            if deprecated_value is not None:\n                warnings.warn(\n                    (\n                        f\"{self.deprecated_renamed_from.deprecated_message} Because\"\n                        f\" {self.deprecated_renamed_from.name!r} is set it will be used\"\n                        f\" instead of {self.name!r} for backwards compatibility.\"\n                    ),\n                    DeprecationWarning,\n                    stacklevel=3,\n                )\n            return deprecated_value or value\n\n        return value\n\n    @property\n    def name(self):\n        if self._name:\n            return self._name\n\n        # Lookup the name on first access\n        for name, val in tuple(globals().items()):\n            if val == self:\n                self._name = name\n                return name\n\n        raise ValueError(\"Setting not found in `prefect.settings` module.\")\n\n    @name.setter\n    def name(self, value: str):\n        self._name = value\n\n    @property\n    def deprecated_message(self):\n        return generate_deprecation_message(\n            name=f\"Setting {self.name!r}\",\n            start_date=self.deprecated_start_date,\n            end_date=self.deprecated_end_date,\n            help=self.deprecated_help,\n            when=self.deprecated_when_message,\n        )\n\n    def __repr__(self) -> str:\n        return f\"<{self.name}: {self.type.__name__}>\"\n\n    def __bool__(self) -> bool:\n\"\"\"\n        Returns a truthy check of the current value.\n        \"\"\"\n        return bool(self.value())\n\n    def __eq__(self, __o: object) -> bool:\n        return __o.__eq__(self.value())\n\n    def __hash__(self) -> int:\n        return hash((type(self), self.name))\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Setting.value","title":"value","text":"

Get the current value of a setting.

Example:

from prefect.settings import PREFECT_API_URL\nPREFECT_API_URL.value()\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def value(self, bypass_callback: bool = False) -> T:\n\"\"\"\n    Get the current value of a setting.\n\n    Example:\n    ```python\n    from prefect.settings import PREFECT_API_URL\n    PREFECT_API_URL.value()\n    ```\n    \"\"\"\n    return self.value_from(get_current_settings(), bypass_callback=bypass_callback)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Setting.value_from","title":"value_from","text":"

Get the value of a setting from a settings object

Example:

from prefect.settings import get_default_settings\nPREFECT_API_URL.value_from(get_default_settings())\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def value_from(self, settings: \"Settings\", bypass_callback: bool = False) -> T:\n\"\"\"\n    Get the value of a setting from a settings object\n\n    Example:\n    ```python\n    from prefect.settings import get_default_settings\n    PREFECT_API_URL.value_from(get_default_settings())\n    ```\n    \"\"\"\n    value = settings.value_of(self, bypass_callback=bypass_callback)\n\n    if not bypass_callback and self.deprecated and self.deprecated_when(value):\n        # Check if this setting is deprecated and someone is accessing the value\n        # via the old name\n        warnings.warn(self.deprecated_message, DeprecationWarning, stacklevel=3)\n\n        # If the the value is empty, return the new setting's value for compat\n        if value is None and self.deprecated_renamed_to is not None:\n            return self.deprecated_renamed_to.value_from(settings)\n\n    if not bypass_callback and self.deprecated_renamed_from is not None:\n        # Check if this setting is a rename of a deprecated setting and the\n        # deprecated setting is set and should be used for compatibility\n        deprecated_value = self.deprecated_renamed_from.value_from(\n            settings, bypass_callback=True\n        )\n        if deprecated_value is not None:\n            warnings.warn(\n                (\n                    f\"{self.deprecated_renamed_from.deprecated_message} Because\"\n                    f\" {self.deprecated_renamed_from.name!r} is set it will be used\"\n                    f\" instead of {self.name!r} for backwards compatibility.\"\n                ),\n                DeprecationWarning,\n                stacklevel=3,\n            )\n        return deprecated_value or value\n\n    return value\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings","title":"Settings","text":"

Bases: SettingsFieldsMixin

Contains validated Prefect settings.

Settings should be accessed using the relevant Setting object. For example:

from prefect.settings import PREFECT_HOME\nPREFECT_HOME.value()\n

Accessing a setting attribute directly will ignore any value_callback mutations. This is not recommended:

from prefect.settings import Settings\nSettings().PREFECT_PROFILES_PATH  # PosixPath('${PREFECT_HOME}/profiles.toml')\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
@add_cloudpickle_reduction\nclass Settings(SettingsFieldsMixin):\n\"\"\"\n    Contains validated Prefect settings.\n\n    Settings should be accessed using the relevant `Setting` object. For example:\n    ```python\n    from prefect.settings import PREFECT_HOME\n    PREFECT_HOME.value()\n    ```\n\n    Accessing a setting attribute directly will ignore any `value_callback` mutations.\n    This is not recommended:\n    ```python\n    from prefect.settings import Settings\n    Settings().PREFECT_PROFILES_PATH  # PosixPath('${PREFECT_HOME}/profiles.toml')\n    ```\n    \"\"\"\n\n    def value_of(self, setting: Setting[T], bypass_callback: bool = False) -> T:\n\"\"\"\n        Retrieve a setting's value.\n        \"\"\"\n        value = getattr(self, setting.name)\n        if setting.value_callback and not bypass_callback:\n            value = setting.value_callback(self, value)\n        return value\n\n    @validator(PREFECT_LOGGING_LEVEL.name, PREFECT_LOGGING_SERVER_LEVEL.name)\n    def check_valid_log_level(cls, value):\n        if isinstance(value, str):\n            value = value.upper()\n        logging._checkLevel(value)\n        return value\n\n    @root_validator\n    def post_root_validators(cls, values):\n\"\"\"\n        Add root validation functions for settings here.\n        \"\"\"\n        # TODO: We could probably register these dynamically but this is the simpler\n        #       approach for now. We can explore more interesting validation features\n        #       in the future.\n        values = max_log_size_smaller_than_batch_size(values)\n        values = warn_on_database_password_value_without_usage(values)\n        return values\n\n    def copy_with_update(\n        self,\n        updates: Mapping[Setting, Any] = None,\n        set_defaults: Mapping[Setting, Any] = None,\n        restore_defaults: Iterable[Setting] = None,\n    ) -> \"Settings\":\n\"\"\"\n        Create a new `Settings` object with validation.\n\n        Arguments:\n            updates: A mapping of settings to new values. Existing values for the\n                given settings will be overridden.\n            set_defaults: A mapping of settings to new default values. 
Existing values for\n                the given settings will only be overridden if they were not set.\n            restore_defaults: An iterable of settings to restore to their default values.\n\n        Returns:\n            A new `Settings` object.\n        \"\"\"\n        updates = updates or {}\n        set_defaults = set_defaults or {}\n        restore_defaults = restore_defaults or set()\n        restore_defaults_names = {setting.name for setting in restore_defaults}\n\n        return self.__class__(\n            **{\n                **{setting.name: value for setting, value in set_defaults.items()},\n                **self.dict(exclude_unset=True, exclude=restore_defaults_names),\n                **{setting.name: value for setting, value in updates.items()},\n            }\n        )\n\n    def with_obfuscated_secrets(self):\n\"\"\"\n        Returns a copy of this settings object with secret setting values obfuscated.\n        \"\"\"\n        settings = self.copy(\n            update={\n                setting.name: obfuscate(self.value_of(setting))\n                for setting in SETTING_VARIABLES.values()\n                if setting.is_secret\n                # Exclude deprecated settings with null values to avoid warnings\n                and not (setting.deprecated and self.value_of(setting) is None)\n            }\n        )\n        # Ensure that settings that have not been marked as \"set\" before are still so\n        # after we have updated their value above\n        settings.__fields_set__.intersection_update(self.__fields_set__)\n        return settings\n\n    def to_environment_variables(\n        self, include: Iterable[Setting] = None, exclude_unset: bool = False\n    ) -> Dict[str, str]:\n\"\"\"\n        Convert the settings object to environment variables.\n\n        Note that setting values will not be run through their `value_callback` allowing\n        dynamic resolution to occur when loaded from the returned environment.\n\n        Args:\n            include_keys: An iterable of settings to include in the return value.\n                If not set, all settings are used.\n            exclude_unset: Only include settings that have been set (i.e. the value is\n                not from the default). If set, unset keys will be dropped even if they\n                are set in `include_keys`.\n\n        Returns:\n            A dictionary of settings with values cast to strings\n        \"\"\"\n        include = set(include or SETTING_VARIABLES.values())\n\n        if exclude_unset:\n            set_keys = {\n                # Collect all of the \"set\" keys and cast to `Setting` objects\n                SETTING_VARIABLES[key]\n                for key in self.dict(exclude_unset=True)\n            }\n            include.intersection_update(set_keys)\n\n        # Validate the types of items in `include` to prevent exclusion bugs\n        for key in include:\n            if not isinstance(key, Setting):\n                raise TypeError(\n                    \"Invalid type {type(key).__name__!r} for key in `include`.\"\n                )\n\n        env = {\n            # Use `getattr` instead of `value_of` to avoid value callback resolution\n            key: getattr(self, key)\n            for key, setting in SETTING_VARIABLES.items()\n            if setting in include\n        }\n\n        # Cast to strings and drop null values\n        return {key: str(value) for key, value in env.items() if value is not None}\n\n    class Config:\n        frozen = True\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.value_of","title":"value_of","text":"

Retrieve a setting's value.
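
A minimal usage sketch (not from the source); resolving a setting through a Settings object is roughly equivalent to calling .value() on the setting itself:

from prefect.settings import PREFECT_API_URL, get_current_settings\n\nsettings = get_current_settings()\n# Roughly equivalent to PREFECT_API_URL.value(), which resolves against the current settings\nsettings.value_of(PREFECT_API_URL)\n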

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def value_of(self, setting: Setting[T], bypass_callback: bool = False) -> T:\n\"\"\"\n    Retrieve a setting's value.\n    \"\"\"\n    value = getattr(self, setting.name)\n    if setting.value_callback and not bypass_callback:\n        value = setting.value_callback(self, value)\n    return value\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.post_root_validators","title":"post_root_validators","text":"

Add root validation functions for settings here.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
@root_validator\ndef post_root_validators(cls, values):\n\"\"\"\n    Add root validation functions for settings here.\n    \"\"\"\n    # TODO: We could probably register these dynamically but this is the simpler\n    #       approach for now. We can explore more interesting validation features\n    #       in the future.\n    values = max_log_size_smaller_than_batch_size(values)\n    values = warn_on_database_password_value_without_usage(values)\n    return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.with_obfuscated_secrets","title":"with_obfuscated_secrets","text":"

Returns a copy of this settings object with secret setting values obfuscated.
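
For example (a sketch, not from the source; assumes PREFECT_API_KEY is configured as a secret setting in this module):

from prefect.settings import PREFECT_API_KEY, get_current_settings\n\nsafe = get_current_settings().with_obfuscated_secrets()\n# The real key is replaced with an obfuscated placeholder, so the copy is safer to log or display\nsafe.value_of(PREFECT_API_KEY)\n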

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def with_obfuscated_secrets(self):\n\"\"\"\n    Returns a copy of this settings object with secret setting values obfuscated.\n    \"\"\"\n    settings = self.copy(\n        update={\n            setting.name: obfuscate(self.value_of(setting))\n            for setting in SETTING_VARIABLES.values()\n            if setting.is_secret\n            # Exclude deprecated settings with null values to avoid warnings\n            and not (setting.deprecated and self.value_of(setting) is None)\n        }\n    )\n    # Ensure that settings that have not been marked as \"set\" before are still so\n    # after we have updated their value above\n    settings.__fields_set__.intersection_update(self.__fields_set__)\n    return settings\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.to_environment_variables","title":"to_environment_variables","text":"

Convert the settings object to environment variables.

Note that setting values will not be run through their value_callback, allowing dynamic resolution to occur when they are loaded from the returned environment.

Parameters:

Name Type Description Default include Iterable[Setting]

An iterable of settings to include in the return value. If not set, all settings are used.

None exclude_unset bool

Only include settings that have been set (i.e. the value is not from the default). If set, unset keys will be dropped even if they are set in include.

False

Returns:

Type Description Dict[str, str]

A dictionary of settings with values cast to strings
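
A short sketch of typical usage (not from the source):

from prefect.settings import PREFECT_API_URL, get_current_settings\n\nenv = get_current_settings().to_environment_variables(\n    include={PREFECT_API_URL}, exclude_unset=True\n)\n# {'PREFECT_API_URL': '...'} if the setting has been set, otherwise an empty dict\n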

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def to_environment_variables(\n    self, include: Iterable[Setting] = None, exclude_unset: bool = False\n) -> Dict[str, str]:\n\"\"\"\n    Convert the settings object to environment variables.\n\n    Note that setting values will not be run through their `value_callback` allowing\n    dynamic resolution to occur when loaded from the returned environment.\n\n    Args:\n        include_keys: An iterable of settings to include in the return value.\n            If not set, all settings are used.\n        exclude_unset: Only include settings that have been set (i.e. the value is\n            not from the default). If set, unset keys will be dropped even if they\n            are set in `include_keys`.\n\n    Returns:\n        A dictionary of settings with values cast to strings\n    \"\"\"\n    include = set(include or SETTING_VARIABLES.values())\n\n    if exclude_unset:\n        set_keys = {\n            # Collect all of the \"set\" keys and cast to `Setting` objects\n            SETTING_VARIABLES[key]\n            for key in self.dict(exclude_unset=True)\n        }\n        include.intersection_update(set_keys)\n\n    # Validate the types of items in `include` to prevent exclusion bugs\n    for key in include:\n        if not isinstance(key, Setting):\n            raise TypeError(\n                \"Invalid type {type(key).__name__!r} for key in `include`.\"\n            )\n\n    env = {\n        # Use `getattr` instead of `value_of` to avoid value callback resolution\n        key: getattr(self, key)\n        for key, setting in SETTING_VARIABLES.items()\n        if setting in include\n    }\n\n    # Cast to strings and drop null values\n    return {key: str(value) for key, value in env.items() if value is not None}\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Profile","title":"Profile","text":"

Bases: pydantic.BaseModel

A user profile containing settings.
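
For example, a profile can be constructed with either Setting objects or setting names as keys (a sketch, not from the source):

from prefect.settings import PREFECT_API_URL, Profile\n\nprofile = Profile(name=\"local\", settings={PREFECT_API_URL: \"http://127.0.0.1:4200/api\"})\n# String keys are cast to Setting objects by the validator\nprofile = Profile(name=\"local\", settings={\"PREFECT_API_URL\": \"http://127.0.0.1:4200/api\"})\n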

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
class Profile(pydantic.BaseModel):\n\"\"\"\n    A user profile containing settings.\n    \"\"\"\n\n    name: str\n    settings: Dict[Setting, Any] = Field(default_factory=dict)\n    source: Optional[Path]\n\n    @pydantic.validator(\"settings\", pre=True)\n    def map_names_to_settings(cls, value):\n        if value is None:\n            return value\n\n        # Cast string setting names to variables\n        validated = {}\n        for setting, val in value.items():\n            if isinstance(setting, str) and setting in SETTING_VARIABLES:\n                validated[SETTING_VARIABLES[setting]] = val\n            elif isinstance(setting, Setting):\n                validated[setting] = val\n            else:\n                raise ValueError(f\"Unknown setting {setting!r}.\")\n\n        return validated\n\n    def validate_settings(self) -> None:\n\"\"\"\n        Validate the settings contained in this profile.\n\n        Raises:\n            pydantic.ValidationError: When settings do not have valid values.\n        \"\"\"\n        # Create a new `Settings` instance with the settings from this profile relying\n        # on Pydantic validation to raise an error.\n        # We do not return the `Settings` object because this is not the recommended\n        # path for constructing settings with a profile. See `use_profile` instead.\n        Settings(**{setting.name: value for setting, value in self.settings.items()})\n\n    def convert_deprecated_renamed_settings(self) -> List[Tuple[Setting, Setting]]:\n\"\"\"\n        Update settings in place to replace deprecated settings with new settings when\n        renamed.\n\n        Returns a list of tuples with the old and new setting.\n        \"\"\"\n        changed = []\n        for setting in tuple(self.settings):\n            if (\n                setting.deprecated\n                and setting.deprecated_renamed_to\n                and setting.deprecated_renamed_to not in self.settings\n            ):\n                self.settings[setting.deprecated_renamed_to] = self.settings.pop(\n                    setting\n                )\n                changed.append((setting, setting.deprecated_renamed_to))\n        return changed\n\n    class Config:\n        arbitrary_types_allowed = True\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Profile.validate_settings","title":"validate_settings","text":"

Validate the settings contained in this profile.

Raises:

Type Description pydantic.ValidationError

When settings do not have valid values.
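
A sketch of catching the validation error (assumes an invalid logging level, which fails the log level validator):

import pydantic\nfrom prefect.settings import Profile\n\nprofile = Profile(name=\"bad\", settings={\"PREFECT_LOGGING_LEVEL\": \"NOT_A_LEVEL\"})\ntry:\n    profile.validate_settings()\nexcept pydantic.ValidationError as exc:\n    print(exc)\n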

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def validate_settings(self) -> None:\n\"\"\"\n    Validate the settings contained in this profile.\n\n    Raises:\n        pydantic.ValidationError: When settings do not have valid values.\n    \"\"\"\n    # Create a new `Settings` instance with the settings from this profile relying\n    # on Pydantic validation to raise an error.\n    # We do not return the `Settings` object because this is not the recommended\n    # path for constructing settings with a profile. See `use_profile` instead.\n    Settings(**{setting.name: value for setting, value in self.settings.items()})\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Profile.convert_deprecated_renamed_settings","title":"convert_deprecated_renamed_settings","text":"

Update settings in place to replace deprecated settings with new settings when renamed.

Returns a list of tuples with the old and new setting.
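
A sketch of inspecting the renames applied to a loaded profile (not from the source; assumes a profile is active):

from prefect.settings import load_current_profile\n\nprofile = load_current_profile()\nfor old_setting, new_setting in profile.convert_deprecated_renamed_settings():\n    print(f\"{old_setting.name} -> {new_setting.name}\")\n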

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def convert_deprecated_renamed_settings(self) -> List[Tuple[Setting, Setting]]:\n\"\"\"\n    Update settings in place to replace deprecated settings with new settings when\n    renamed.\n\n    Returns a list of tuples with the old and new setting.\n    \"\"\"\n    changed = []\n    for setting in tuple(self.settings):\n        if (\n            setting.deprecated\n            and setting.deprecated_renamed_to\n            and setting.deprecated_renamed_to not in self.settings\n        ):\n            self.settings[setting.deprecated_renamed_to] = self.settings.pop(\n                setting\n            )\n            changed.append((setting, setting.deprecated_renamed_to))\n    return changed\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection","title":"ProfilesCollection","text":"

\" A utility class for working with a collection of profiles.

Profiles in the collection must have unique names.

The collection may store the name of the active profile.
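
A minimal construction sketch (not from the source):

from prefect.settings import Profile, ProfilesCollection\n\nprofiles = ProfilesCollection(\n    profiles=[Profile(name=\"dev\", settings={}), Profile(name=\"prod\", settings={})],\n    active=\"dev\",\n)\nprofiles.names           # {'dev', 'prod'}\nprofiles.active_profile  # the 'dev' Profile\n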

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
class ProfilesCollection:\n\"\"\" \"\n    A utility class for working with a collection of profiles.\n\n    Profiles in the collection must have unique names.\n\n    The collection may store the name of the active profile.\n    \"\"\"\n\n    def __init__(\n        self, profiles: Iterable[Profile], active: Optional[str] = None\n    ) -> None:\n        self.profiles_by_name = {profile.name: profile for profile in profiles}\n        self.active_name = active\n\n    @property\n    def names(self) -> Set[str]:\n\"\"\"\n        Return a set of profile names in this collection.\n        \"\"\"\n        return set(self.profiles_by_name.keys())\n\n    @property\n    def active_profile(self) -> Optional[Profile]:\n\"\"\"\n        Retrieve the active profile in this collection.\n        \"\"\"\n        if self.active_name is None:\n            return None\n        return self[self.active_name]\n\n    def set_active(self, name: Optional[str], check: bool = True):\n\"\"\"\n        Set the active profile name in the collection.\n\n        A null value may be passed to indicate that this collection does not determine\n        the active profile.\n        \"\"\"\n        if check and name is not None and name not in self.names:\n            raise ValueError(f\"Unknown profile name {name!r}.\")\n        self.active_name = name\n\n    def update_profile(\n        self, name: str, settings: Mapping[Union[Dict, str], Any], source: Path = None\n    ) -> Profile:\n\"\"\"\n        Add a profile to the collection or update the existing on if the name is already\n        present in this collection.\n\n        If updating an existing profile, the settings will be merged. Settings can\n        be dropped from the existing profile by setting them to `None` in the new\n        profile.\n\n        Returns the new profile object.\n        \"\"\"\n        existing = self.profiles_by_name.get(name)\n\n        # Convert the input to a `Profile` to cast settings to the correct type\n        profile = Profile(name=name, settings=settings, source=source)\n\n        if existing:\n            new_settings = {**existing.settings, **profile.settings}\n\n            # Drop null keys to restore to default\n            for key, value in tuple(new_settings.items()):\n                if value is None:\n                    new_settings.pop(key)\n\n            new_profile = Profile(\n                name=profile.name,\n                settings=new_settings,\n                source=source or profile.source,\n            )\n        else:\n            new_profile = profile\n\n        self.profiles_by_name[new_profile.name] = new_profile\n\n        return new_profile\n\n    def add_profile(self, profile: Profile) -> None:\n\"\"\"\n        Add a profile to the collection.\n\n        If the profile name already exists, an exception will be raised.\n        \"\"\"\n        if profile.name in self.profiles_by_name:\n            raise ValueError(\n                f\"Profile name {profile.name!r} already exists in collection.\"\n            )\n\n        self.profiles_by_name[profile.name] = profile\n\n    def remove_profile(self, name: str) -> None:\n\"\"\"\n        Remove a profile from the collection.\n        \"\"\"\n        self.profiles_by_name.pop(name)\n\n    def without_profile_source(self, path: Optional[Path]) -> \"ProfilesCollection\":\n\"\"\"\n        Remove profiles that were loaded from a given path.\n\n        Returns a new collection.\n        \"\"\"\n        return ProfilesCollection(\n            [\n                profile\n    
            for profile in self.profiles_by_name.values()\n                if profile.source != path\n            ],\n            active=self.active_name,\n        )\n\n    def to_dict(self):\n\"\"\"\n        Convert to a dictionary suitable for writing to disk.\n        \"\"\"\n        return {\n            \"active\": self.active_name,\n            \"profiles\": {\n                profile.name: {\n                    setting.name: value for setting, value in profile.settings.items()\n                }\n                for profile in self.profiles_by_name.values()\n            },\n        }\n\n    def __getitem__(self, name: str) -> Profile:\n        return self.profiles_by_name[name]\n\n    def __iter__(self):\n        return self.profiles_by_name.__iter__()\n\n    def items(self):\n        return self.profiles_by_name.items()\n\n    def __eq__(self, __o: object) -> bool:\n        if not isinstance(__o, ProfilesCollection):\n            return False\n\n        return (\n            self.profiles_by_name == __o.profiles_by_name\n            and self.active_name == __o.active_name\n        )\n\n    def __repr__(self) -> str:\n        return (\n            f\"ProfilesCollection(profiles={list(self.profiles_by_name.values())!r},\"\n            f\" active={self.active_name!r})>\"\n        )\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.names","title":"names: Set[str] property","text":"

Return a set of profile names in this collection.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.active_profile","title":"active_profile: Optional[Profile] property","text":"

Retrieve the active profile in this collection.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.set_active","title":"set_active","text":"

Set the active profile name in the collection.

A null value may be passed to indicate that this collection does not determine the active profile.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def set_active(self, name: Optional[str], check: bool = True):\n\"\"\"\n    Set the active profile name in the collection.\n\n    A null value may be passed to indicate that this collection does not determine\n    the active profile.\n    \"\"\"\n    if check and name is not None and name not in self.names:\n        raise ValueError(f\"Unknown profile name {name!r}.\")\n    self.active_name = name\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.update_profile","title":"update_profile","text":"

Add a profile to the collection or update the existing one if the name is already present in this collection.

If updating an existing profile, the settings will be merged. Settings can be dropped from the existing profile by setting them to None in the new profile.

Returns the new profile object.
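
Continuing the collection sketch above, an update merges settings and a None value drops a key (illustrative only):

from prefect.settings import PREFECT_API_URL\n\nprofiles.update_profile(\"dev\", {PREFECT_API_URL: \"http://127.0.0.1:4200/api\"})\n# Setting a value to None removes it from the merged profile\nprofiles.update_profile(\"dev\", {PREFECT_API_URL: None})\n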

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def update_profile(\n    self, name: str, settings: Mapping[Union[Dict, str], Any], source: Path = None\n) -> Profile:\n\"\"\"\n    Add a profile to the collection or update the existing on if the name is already\n    present in this collection.\n\n    If updating an existing profile, the settings will be merged. Settings can\n    be dropped from the existing profile by setting them to `None` in the new\n    profile.\n\n    Returns the new profile object.\n    \"\"\"\n    existing = self.profiles_by_name.get(name)\n\n    # Convert the input to a `Profile` to cast settings to the correct type\n    profile = Profile(name=name, settings=settings, source=source)\n\n    if existing:\n        new_settings = {**existing.settings, **profile.settings}\n\n        # Drop null keys to restore to default\n        for key, value in tuple(new_settings.items()):\n            if value is None:\n                new_settings.pop(key)\n\n        new_profile = Profile(\n            name=profile.name,\n            settings=new_settings,\n            source=source or profile.source,\n        )\n    else:\n        new_profile = profile\n\n    self.profiles_by_name[new_profile.name] = new_profile\n\n    return new_profile\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.add_profile","title":"add_profile","text":"

Add a profile to the collection.

If the profile name already exists, an exception will be raised.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def add_profile(self, profile: Profile) -> None:\n\"\"\"\n    Add a profile to the collection.\n\n    If the profile name already exists, an exception will be raised.\n    \"\"\"\n    if profile.name in self.profiles_by_name:\n        raise ValueError(\n            f\"Profile name {profile.name!r} already exists in collection.\"\n        )\n\n    self.profiles_by_name[profile.name] = profile\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.remove_profile","title":"remove_profile","text":"

Remove a profile from the collection.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def remove_profile(self, name: str) -> None:\n\"\"\"\n    Remove a profile from the collection.\n    \"\"\"\n    self.profiles_by_name.pop(name)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.without_profile_source","title":"without_profile_source","text":"

Remove profiles that were loaded from a given path.

Returns a new collection.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def without_profile_source(self, path: Optional[Path]) -> \"ProfilesCollection\":\n\"\"\"\n    Remove profiles that were loaded from a given path.\n\n    Returns a new collection.\n    \"\"\"\n    return ProfilesCollection(\n        [\n            profile\n            for profile in self.profiles_by_name.values()\n            if profile.source != path\n        ],\n        active=self.active_name,\n    )\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_extra_loggers","title":"get_extra_loggers","text":"

value_callback for PREFECT_LOGGING_EXTRA_LOGGERS that parses the CSV string into a list and trims whitespace from logger names.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def get_extra_loggers(_: \"Settings\", value: str) -> List[str]:\n\"\"\"\n    `value_callback` for `PREFECT_LOGGING_EXTRA_LOGGERS`that parses the CSV string into a\n    list and trims whitespace from logger names.\n    \"\"\"\n    return [name.strip() for name in value.split(\",\")] if value else []\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.debug_mode_log_level","title":"debug_mode_log_level","text":"

value_callback for PREFECT_LOGGING_LEVEL that overrides the log level to DEBUG when debug mode is enabled.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def debug_mode_log_level(settings, value):\n\"\"\"\n    `value_callback` for `PREFECT_LOGGING_LEVEL` that overrides the log level to DEBUG\n    when debug mode is enabled.\n    \"\"\"\n    if PREFECT_DEBUG_MODE.value_from(settings):\n        return \"DEBUG\"\n    else:\n        return value\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.only_return_value_in_test_mode","title":"only_return_value_in_test_mode","text":"

value_callback for PREFECT_TEST_SETTING that only allows access during test mode

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def only_return_value_in_test_mode(settings, value):\n\"\"\"\n    `value_callback` for `PREFECT_TEST_SETTING` that only allows access during test mode\n    \"\"\"\n    if PREFECT_TEST_MODE.value_from(settings):\n        return value\n    else:\n        return None\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.default_ui_api_url","title":"default_ui_api_url","text":"

value_callback for PREFECT_UI_API_URL that sets the default value to PREFECT_API_URL if it is set; otherwise, it constructs an API URL from the API settings.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def default_ui_api_url(settings, value):\n\"\"\"\n    `value_callback` for `PREFECT_UI_API_URL` that sets the default value to\n    `PREFECT_API_URL` if set otherwise it constructs an API URL from the API settings.\n    \"\"\"\n    if value is None:\n        # Set a default value\n        if PREFECT_API_URL.value_from(settings):\n            value = \"${PREFECT_API_URL}\"\n        else:\n            value = \"http://${PREFECT_SERVER_API_HOST}:${PREFECT_SERVER_API_PORT}/api\"\n\n    return template_with_settings(\n        PREFECT_SERVER_API_HOST, PREFECT_SERVER_API_PORT, PREFECT_API_URL\n    )(settings, value)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.status_codes_as_integers_in_range","title":"status_codes_as_integers_in_range","text":"

value_callback for PREFECT_CLIENT_RETRY_EXTRA_CODES that ensures status codes are integers in the range 100-599.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def status_codes_as_integers_in_range(_, value):\n\"\"\"\n    `value_callback` for `PREFECT_CLIENT_RETRY_EXTRA_CODES` that ensures status codes\n    are integers in the range 100-599.\n    \"\"\"\n    if value == \"\":\n        return set()\n\n    values = {v.strip() for v in value.split(\",\")}\n\n    if any(not v.isdigit() or int(v) < 100 or int(v) > 599 for v in values):\n        raise ValueError(\n            \"PREFECT_CLIENT_RETRY_EXTRA_CODES must be a comma separated list of \"\n            \"integers between 100 and 599.\"\n        )\n\n    values = {int(v) for v in values}\n    return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.template_with_settings","title":"template_with_settings","text":"

Returns a value_callback that will template the given settings into the runtime value for the setting.
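
A sketch of how such a callback behaves (not from the source; assumes PREFECT_API_URL resolves against the current settings):

from prefect.settings import PREFECT_API_URL, get_current_settings, template_with_settings\n\ncallback = template_with_settings(PREFECT_API_URL)\n# Substitutes the resolved value of PREFECT_API_URL into the template string\ncallback(get_current_settings(), \"API lives at ${PREFECT_API_URL}\")\n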

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def template_with_settings(*upstream_settings: Setting) -> Callable[[\"Settings\", T], T]:\n\"\"\"\n    Returns a `value_callback` that will template the given settings into the runtime\n    value for the setting.\n    \"\"\"\n\n    def templater(settings, value):\n        if value is None:\n            return value  # Do not attempt to template a null string\n\n        original_type = type(value)\n        template_values = {\n            setting.name: setting.value_from(settings) for setting in upstream_settings\n        }\n        template = string.Template(str(value))\n        return original_type(template.substitute(template_values))\n\n    return templater\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.max_log_size_smaller_than_batch_size","title":"max_log_size_smaller_than_batch_size","text":"

Validator for settings asserting the batch size and max log size are compatible.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def max_log_size_smaller_than_batch_size(values):\n\"\"\"\n    Validator for settings asserting the batch size and match log size are compatible\n    \"\"\"\n    if (\n        values[\"PREFECT_LOGGING_TO_API_BATCH_SIZE\"]\n        < values[\"PREFECT_LOGGING_TO_API_MAX_LOG_SIZE\"]\n    ):\n        raise ValueError(\n            \"`PREFECT_LOGGING_TO_API_MAX_LOG_SIZE` cannot be larger than\"\n            \" `PREFECT_LOGGING_TO_API_BATCH_SIZE`\"\n        )\n    return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.warn_on_database_password_value_without_usage","title":"warn_on_database_password_value_without_usage","text":"

Validator for settings warning if the database password is set but not used.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def warn_on_database_password_value_without_usage(values):\n\"\"\"\n    Validator for settings warning if the database password is set but not used.\n    \"\"\"\n    value = values[\"PREFECT_API_DATABASE_PASSWORD\"]\n    if (\n        value\n        and not value.startswith(OBFUSCATED_PREFIX)\n        and (\n            \"PREFECT_API_DATABASE_PASSWORD\"\n            not in values[\"PREFECT_API_DATABASE_CONNECTION_URL\"]\n        )\n    ):\n        warnings.warn(\n            \"PREFECT_API_DATABASE_PASSWORD is set but not included in the \"\n            \"PREFECT_API_DATABASE_CONNECTION_URL. \"\n            \"The provided password will be ignored.\"\n        )\n    return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_current_settings","title":"get_current_settings","text":"

Returns a settings object populated with values from the current settings context or, if no settings context is active, the environment.
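
For example, values reflect any active settings context (a sketch, not from the source):

from prefect.settings import PREFECT_API_URL, get_current_settings, temporary_settings\n\nwith temporary_settings(updates={PREFECT_API_URL: \"http://127.0.0.1:4200/api\"}):\n    # Inside the context, the current settings include the override\n    assert get_current_settings().value_of(PREFECT_API_URL) == \"http://127.0.0.1:4200/api\"\n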

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def get_current_settings() -> Settings:\n\"\"\"\n    Returns a settings object populated with values from the current settings context\n    or, if no settings context is active, the environment.\n    \"\"\"\n    from prefect.context import SettingsContext\n\n    settings_context = SettingsContext.get()\n    if settings_context is not None:\n        return settings_context.settings\n\n    return get_settings_from_env()\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_settings_from_env","title":"get_settings_from_env","text":"

Returns a settings object populated with default values and overrides from environment variables, ignoring any values in profiles.

Calls with the same environment return a cached object instead of reconstructing to avoid validation overhead.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def get_settings_from_env() -> Settings:\n\"\"\"\n    Returns a settings object populated with default values and overrides from\n    environment variables, ignoring any values in profiles.\n\n    Calls with the same environment return a cached object instead of reconstructing\n    to avoid validation overhead.\n    \"\"\"\n    # Since os.environ is a Dict[str, str] we can safely hash it by contents, but we\n    # must be careful to avoid hashing a generator instead of a tuple\n    cache_key = hash(tuple((key, value) for key, value in os.environ.items()))\n\n    if cache_key not in _FROM_ENV_CACHE:\n        _FROM_ENV_CACHE[cache_key] = Settings()\n\n    return _FROM_ENV_CACHE[cache_key]\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_default_settings","title":"get_default_settings","text":"

Returns a settings object populated with default values, ignoring any overrides from environment variables or profiles.

This is cached since the defaults should not change during the lifetime of the module.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def get_default_settings() -> Settings:\n\"\"\"\n    Returns a settings object populated with default values, ignoring any overrides\n    from environment variables or profiles.\n\n    This is cached since the defaults should not change during the lifetime of the\n    module.\n    \"\"\"\n    global _DEFAULTS_CACHE\n\n    if not _DEFAULTS_CACHE:\n        old = os.environ\n        try:\n            os.environ = {}\n            settings = get_settings_from_env()\n        finally:\n            os.environ = old\n\n        _DEFAULTS_CACHE = settings\n\n    return _DEFAULTS_CACHE\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.temporary_settings","title":"temporary_settings","text":"

Temporarily override the current settings by entering a new profile.

See Settings.copy_with_update for details on different argument behavior.

Examples:

>>> from prefect.settings import PREFECT_API_URL\n>>>\n>>> with temporary_settings(updates={PREFECT_API_URL: \"foo\"}):\n>>>    assert PREFECT_API_URL.value() == \"foo\"\n>>>\n>>>    with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"}):\n>>>         assert PREFECT_API_URL.value() == \"foo\"\n>>>\n>>>    with temporary_settings(restore_defaults={PREFECT_API_URL}):\n>>>         assert PREFECT_API_URL.value() is None\n>>>\n>>>         with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"}):\n>>>             assert PREFECT_API_URL.value() == \"bar\"\n>>> assert PREFECT_API_URL.value() is None\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
@contextmanager\ndef temporary_settings(\n    updates: Mapping[Setting, Any] = None,\n    set_defaults: Mapping[Setting, Any] = None,\n    restore_defaults: Iterable[Setting] = None,\n) -> Settings:\n\"\"\"\n    Temporarily override the current settings by entering a new profile.\n\n    See `Settings.copy_with_update` for details on different argument behavior.\n\n    Examples:\n        >>> from prefect.settings import PREFECT_API_URL\n        >>>\n        >>> with temporary_settings(updates={PREFECT_API_URL: \"foo\"}):\n        >>>    assert PREFECT_API_URL.value() == \"foo\"\n        >>>\n        >>>    with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"}):\n        >>>         assert PREFECT_API_URL.value() == \"foo\"\n        >>>\n        >>>    with temporary_settings(restore_defaults={PREFECT_API_URL}):\n        >>>         assert PREFECT_API_URL.value() is None\n        >>>\n        >>>         with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"})\n        >>>             assert PREFECT_API_URL.value() == \"bar\"\n        >>> assert PREFECT_API_URL.value() is None\n    \"\"\"\n    import prefect.context\n\n    context = prefect.context.get_settings_context()\n\n    new_settings = context.settings.copy_with_update(\n        updates=updates, set_defaults=set_defaults, restore_defaults=restore_defaults\n    )\n\n    with prefect.context.SettingsContext(\n        profile=context.profile, settings=new_settings\n    ):\n        yield new_settings\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.load_profiles","title":"load_profiles","text":"

Load all profiles from the default and current profile paths.
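
A small usage sketch (not from the source):

from prefect.settings import load_profiles\n\nprofiles = load_profiles()\nprint(profiles.names)        # all profile names, e.g. {'default', ...}\nprint(profiles.active_name)  # the active profile name, if one is recorded\n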

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def load_profiles() -> ProfilesCollection:\n\"\"\"\n    Load all profiles from the default and current profile paths.\n    \"\"\"\n    profiles = _read_profiles_from(DEFAULT_PROFILES_PATH)\n\n    user_profiles_path = PREFECT_PROFILES_PATH.value()\n    if user_profiles_path.exists():\n        user_profiles = _read_profiles_from(user_profiles_path)\n\n        # Merge all of the user profiles with the defaults\n        for name in user_profiles:\n            profiles.update_profile(\n                name,\n                settings=user_profiles[name].settings,\n                source=user_profiles[name].source,\n            )\n\n        if user_profiles.active_name:\n            profiles.set_active(user_profiles.active_name, check=False)\n\n    return profiles\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.load_current_profile","title":"load_current_profile","text":"

Load the current profile from the default and current profile paths.

This will not include settings from the current settings context. Only settings that have been persisted to the profiles file will be included.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def load_current_profile():\n\"\"\"\n    Load the current profile from the default and current profile paths.\n\n    This will _not_ include settings from the current settings context. Only settings\n    that have been persisted to the profiles file will be saved.\n    \"\"\"\n    from prefect.context import SettingsContext\n\n    profiles = load_profiles()\n    context = SettingsContext.get()\n\n    if context:\n        profiles.set_active(context.profile.name)\n\n    return profiles.active_profile\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.save_profiles","title":"save_profiles","text":"

Writes all non-default profiles to the current profiles path.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def save_profiles(profiles: ProfilesCollection) -> None:\n\"\"\"\n    Writes all non-default profiles to the current profiles path.\n    \"\"\"\n    profiles_path = PREFECT_PROFILES_PATH.value()\n    profiles = profiles.without_profile_source(DEFAULT_PROFILES_PATH)\n    return _write_profiles_to(profiles_path, profiles)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.load_profile","title":"load_profile","text":"

Load a single profile by name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def load_profile(name: str) -> Profile:\n\"\"\"\n    Load a single profile by name.\n    \"\"\"\n    profiles = load_profiles()\n    try:\n        return profiles[name]\n    except KeyError:\n        raise ValueError(f\"Profile {name!r} not found.\")\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.update_current_profile","title":"update_current_profile","text":"

Update the persisted data for the profile currently in-use.

If the profile does not exist in the profiles file, it will be created.

Given settings will be merged with the existing settings as described in ProfilesCollection.update_profile.

Returns:

Type Description Profile

The new profile.
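
A sketch of updating and persisting a setting on the active profile (assumes a profile is currently in use; either Setting objects or names may be used as keys):

from prefect.settings import PREFECT_API_URL, update_current_profile\n\n# Persists the change to the profiles file and returns the updated Profile\nupdate_current_profile({PREFECT_API_URL: \"http://127.0.0.1:4200/api\"})\nupdate_current_profile({\"PREFECT_LOGGING_LEVEL\": \"DEBUG\"})\n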

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def update_current_profile(settings: Dict[Union[str, Setting], Any]) -> Profile:\n\"\"\"\n    Update the persisted data for the profile currently in-use.\n\n    If the profile does not exist in the profiles file, it will be created.\n\n    Given settings will be merged with the existing settings as described in\n    `ProfilesCollection.update_profile`.\n\n    Returns:\n        The new profile.\n    \"\"\"\n    import prefect.context\n\n    current_profile = prefect.context.get_settings_context().profile\n\n    if not current_profile:\n        raise MissingProfileError(\"No profile is currently in use.\")\n\n    profiles = load_profiles()\n\n    # Ensure the current profile's settings are present\n    profiles.update_profile(current_profile.name, current_profile.settings)\n    # Then merge the new settings in\n    new_profile = profiles.update_profile(current_profile.name, settings)\n\n    # Validate before saving\n    new_profile.validate_settings()\n\n    save_profiles(profiles)\n\n    return profiles[current_profile.name]\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/states/","title":"prefect.states","text":"","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states","title":"prefect.states","text":"","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.AwaitingRetry","title":"AwaitingRetry","text":"

Convenience function for creating AwaitingRetry states.

Returns:

Name Type Description State State

an AwaitingRetry state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def AwaitingRetry(\n    cls: Type[State] = State, scheduled_time: datetime.datetime = None, **kwargs\n) -> State:\n\"\"\"Convenience function for creating `AwaitingRetry` states.\n\n    Returns:\n        State: a AwaitingRetry state\n    \"\"\"\n    return Scheduled(\n        cls=cls, scheduled_time=scheduled_time, name=\"AwaitingRetry\", **kwargs\n    )\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Cancelled","title":"Cancelled","text":"

Convenience function for creating Cancelled states.

Returns:

Name Type Description State State

a Cancelled state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Cancelled(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Cancelled` states.\n\n    Returns:\n        State: a Cancelled state\n    \"\"\"\n    return cls(type=StateType.CANCELLED, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Cancelling","title":"Cancelling","text":"

Convenience function for creating Cancelling states.

Returns:

Name Type Description State State

a Cancelling state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Cancelling(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Cancelling` states.\n\n    Returns:\n        State: a Cancelling state\n    \"\"\"\n    return cls(type=StateType.CANCELLING, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Completed","title":"Completed","text":"

Convenience function for creating Completed states.

Returns:

Name Type Description State State

a Completed state
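
These convenience constructors all work the same way; for example (a sketch, not from the source):

from prefect.states import Completed, Failed\n\nstate = Completed(message=\"All done\")\nstate.is_completed()  # True\nFailed(message=\"Something went wrong\")  # a Failed state with the given message\n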

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Completed(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Completed` states.\n\n    Returns:\n        State: a Completed state\n    \"\"\"\n    return cls(type=StateType.COMPLETED, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Crashed","title":"Crashed","text":"

Convenience function for creating Crashed states.

Returns:

Name Type Description State State

a Crashed state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Crashed(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Crashed` states.\n\n    Returns:\n        State: a Crashed state\n    \"\"\"\n    return cls(type=StateType.CRASHED, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Failed","title":"Failed","text":"

Convenience function for creating Failed states.

Returns:

Name Type Description State State

a Failed state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Failed(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Failed` states.\n\n    Returns:\n        State: a Failed state\n    \"\"\"\n    return cls(type=StateType.FAILED, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Late","title":"Late","text":"

Convenience function for creating Late states.

Returns:

Name Type Description State State

a Late state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Late(\n    cls: Type[State] = State, scheduled_time: datetime.datetime = None, **kwargs\n) -> State:\n\"\"\"Convenience function for creating `Late` states.\n\n    Returns:\n        State: a Late state\n    \"\"\"\n    return Scheduled(cls=cls, scheduled_time=scheduled_time, name=\"Late\", **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Paused","title":"Paused","text":"

Convenience function for creating Paused states.

Returns:

Name Type Description State State

a Paused state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Paused(\n    cls: Type[State] = State,\n    timeout_seconds: int = None,\n    pause_expiration_time: datetime.datetime = None,\n    reschedule: bool = False,\n    pause_key: str = None,\n    **kwargs,\n) -> State:\n\"\"\"Convenience function for creating `Paused` states.\n\n    Returns:\n        State: a Paused state\n    \"\"\"\n    state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n\n    if state_details.pause_timeout:\n        raise ValueError(\"An extra pause timeout was provided in state_details\")\n\n    if pause_expiration_time is not None and timeout_seconds is not None:\n        raise ValueError(\n            \"Cannot supply both a pause_expiration_time and timeout_seconds\"\n        )\n\n    if pause_expiration_time is None and timeout_seconds is None:\n        pass\n    else:\n        state_details.pause_timeout = pause_expiration_time or (\n            pendulum.now(\"UTC\") + pendulum.Duration(seconds=timeout_seconds)\n        )\n\n    state_details.pause_reschedule = reschedule\n    state_details.pause_key = pause_key\n\n    return cls(type=StateType.PAUSED, state_details=state_details, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Pending","title":"Pending","text":"

Convenience function for creating Pending states.

Returns:

Name Type Description State State

a Pending state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Pending(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Pending` states.\n\n    Returns:\n        State: a Pending state\n    \"\"\"\n    return cls(type=StateType.PENDING, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Retrying","title":"Retrying","text":"

Convenience function for creating Retrying states.

Returns:

Name Type Description State State

a Retrying state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Retrying(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Retrying` states.\n\n    Returns:\n        State: a Retrying state\n    \"\"\"\n    return cls(type=StateType.RUNNING, name=\"Retrying\", **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Running","title":"Running","text":"

Convenience function for creating Running states.

Returns:

Name Type Description State State

a Running state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Running(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Running` states.\n\n    Returns:\n        State: a Running state\n    \"\"\"\n    return cls(type=StateType.RUNNING, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Scheduled","title":"Scheduled","text":"

Convenience function for creating Scheduled states.

Returns:

Name Type Description State State

a Scheduled state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Scheduled(\n    cls: Type[State] = State, scheduled_time: datetime.datetime = None, **kwargs\n) -> State:\n\"\"\"Convenience function for creating `Scheduled` states.\n\n    Returns:\n        State: a Scheduled state\n    \"\"\"\n    state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n    if scheduled_time is None:\n        scheduled_time = pendulum.now(\"UTC\")\n    elif state_details.scheduled_time:\n        raise ValueError(\"An extra scheduled_time was provided in state_details\")\n    state_details.scheduled_time = scheduled_time\n\n    return cls(type=StateType.SCHEDULED, state_details=state_details, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.exception_to_crashed_state","title":"exception_to_crashed_state async","text":"

Takes an exception that occurs outside of user code and converts it to a 'Crash' exception with a 'Crashed' state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
async def exception_to_crashed_state(\n    exc: BaseException,\n    result_factory: Optional[ResultFactory] = None,\n) -> State:\n\"\"\"\n    Takes an exception that occurs _outside_ of user code and converts it to a\n    'Crash' exception with a 'Crashed' state.\n    \"\"\"\n    state_message = None\n\n    if isinstance(exc, anyio.get_cancelled_exc_class()):\n        state_message = \"Execution was cancelled by the runtime environment.\"\n\n    elif isinstance(exc, KeyboardInterrupt):\n        state_message = \"Execution was aborted by an interrupt signal.\"\n\n    elif isinstance(exc, TerminationSignal):\n        state_message = \"Execution was aborted by a termination signal.\"\n\n    elif isinstance(exc, SystemExit):\n        state_message = \"Execution was aborted by Python system exit call.\"\n\n    elif isinstance(exc, (httpx.TimeoutException, httpx.ConnectError)):\n        try:\n            request: httpx.Request = exc.request\n        except RuntimeError:\n            # The request property is not set\n            state_message = (\n                \"Request failed while attempting to contact the server:\"\n                f\" {format_exception(exc)}\"\n            )\n        else:\n            # TODO: We can check if this is actually our API url\n            state_message = f\"Request to {request.url} failed: {format_exception(exc)}.\"\n\n    else:\n        state_message = (\n            \"Execution was interrupted by an unexpected exception:\"\n            f\" {format_exception(exc)}\"\n        )\n\n    if result_factory:\n        data = await result_factory.create_result(exc)\n    else:\n        # Attach the exception for local usage, will not be available when retrieved\n        # from the API\n        data = exc\n\n    return Crashed(message=state_message, data=data)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.exception_to_failed_state","title":"exception_to_failed_state async","text":"

Convenience function for creating Failed states from exceptions

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
async def exception_to_failed_state(\n    exc: Optional[BaseException] = None,\n    result_factory: Optional[ResultFactory] = None,\n    **kwargs,\n) -> State:\n\"\"\"\n    Convenience function for creating `Failed` states from exceptions\n    \"\"\"\n    if not exc:\n        _, exc, _ = sys.exc_info()\n        if exc is None:\n            raise ValueError(\n                \"Exception was not passed and no active exception could be found.\"\n            )\n    else:\n        pass\n\n    if result_factory:\n        data = await result_factory.create_result(exc)\n    else:\n        # Attach the exception for local usage, will not be available when retrieved\n        # from the API\n        data = exc\n\n    existing_message = kwargs.pop(\"message\", \"\")\n    if existing_message and not existing_message.endswith(\" \"):\n        existing_message += \" \"\n\n    # TODO: Consider if we want to include traceback information, it is intentionally\n    #       excluded from messages for now\n    message = existing_message + format_exception(exc)\n\n    return Failed(data=data, message=message, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.get_state_exception","title":"get_state_exception async","text":"

If not given a FAILED or CRASHED state, this raises a ValueError.

If the state result is a state, its exception will be returned.

If the state result is an iterable of states, the exception of the first failure will be returned.

If the state result is a string, a wrapper exception will be returned with the string as the message.

If the state result is null, a wrapper exception will be returned with the state message attached.

If the state result is not of a known type, a TypeError will be returned.

When a wrapper exception is returned, the type will be:

- FailedRun if the state type is FAILED.
- CrashedRun if the state type is CRASHED.
- CancelledRun if the state type is CANCELLED.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
@sync_compatible\nasync def get_state_exception(state: State) -> BaseException:\n\"\"\"\n    If not given a FAILED or CRASHED state, this raise a value error.\n\n    If the state result is a state, its exception will be returned.\n\n    If the state result is an iterable of states, the exception of the first failure\n    will be returned.\n\n    If the state result is a string, a wrapper exception will be returned with the\n    string as the message.\n\n    If the state result is null, a wrapper exception will be returned with the state\n    message attached.\n\n    If the state result is not of a known type, a `TypeError` will be returned.\n\n    When a wrapper exception is returned, the type will be:\n        - `FailedRun` if the state type is FAILED.\n        - `CrashedRun` if the state type is CRASHED.\n        - `CancelledRun` if the state type is CANCELLED.\n    \"\"\"\n\n    if state.is_failed():\n        wrapper = FailedRun\n        default_message = \"Run failed.\"\n    elif state.is_crashed():\n        wrapper = CrashedRun\n        default_message = \"Run crashed.\"\n    elif state.is_cancelled():\n        wrapper = CancelledRun\n        default_message = \"Run cancelled.\"\n    else:\n        raise ValueError(f\"Expected failed or crashed state got {state!r}.\")\n\n    if isinstance(state.data, BaseResult):\n        result = await state.data.get()\n    elif state.data is None:\n        result = None\n    else:\n        result = state.data\n\n    if result is None:\n        return wrapper(state.message or default_message)\n\n    if isinstance(result, Exception):\n        return result\n\n    elif isinstance(result, BaseException):\n        return result\n\n    elif isinstance(result, str):\n        return wrapper(result)\n\n    elif is_state(result):\n        # Return the exception from the inner state\n        return await get_state_exception(result)\n\n    elif is_state_iterable(result):\n        # Return the first failure\n        for state in result:\n            if state.is_failed() or state.is_crashed() or state.is_cancelled():\n                return await get_state_exception(state)\n\n        raise ValueError(\n            \"Failed state result was an iterable of states but none were failed.\"\n        )\n\n    else:\n        raise TypeError(\n            f\"Unexpected result for failed state: {result!r} \u2014\u2014 \"\n            f\"{type(result).__name__} cannot be resolved into an exception\"\n        )\n
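Because the function is decorated with sync_compatible, it can be called without await from synchronous code. A small sketch, where the ValueError is just an illustrative payload attached directly as the state's data:

from prefect.states import Failed, get_state_exception

# A Failed state carrying a raw exception as its data, as exception_to_failed_state
# does when no result factory is available.
state = Failed(data=ValueError("boom"))

exc = get_state_exception(state)  # no await needed in a synchronous context
print(type(exc).__name__, exc)    # expected to print: ValueError boom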
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.get_state_result","title":"get_state_result","text":"

Get the result from a state.

See State.result()

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def get_state_result(\n    state: State[R], raise_on_failure: bool = True, fetch: Optional[bool] = None\n) -> R:\n\"\"\"\n    Get the result from a state.\n\n    See `State.result()`\n    \"\"\"\n\n    if fetch is None and (\n        PREFECT_ASYNC_FETCH_STATE_RESULT or not in_async_main_thread()\n    ):\n        # Fetch defaults to `True` for sync users or async users who have opted in\n        fetch = True\n\n    if not fetch:\n        if fetch is None and in_async_main_thread():\n            warnings.warn(\n                (\n                    \"State.result() was called from an async context but not awaited. \"\n                    \"This method will be updated to return a coroutine by default in \"\n                    \"the future. Pass `fetch=True` and `await` the call to get rid of \"\n                    \"this warning.\"\n                ),\n                DeprecationWarning,\n                stacklevel=2,\n            )\n        # Backwards compatibility\n        if isinstance(state.data, DataDocument):\n            return result_from_state_with_data_document(\n                state, raise_on_failure=raise_on_failure\n            )\n        else:\n            return state.data\n    else:\n        return _get_state_result(state, raise_on_failure=raise_on_failure)\n
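A short sketch of the typical round trip, assuming a flow that asks for the task's final State via return_state=True and then extracts the value:

from prefect import flow, task
from prefect.states import get_state_result


@task
def add_one(x: int) -> int:
    return x + 1


@flow
def my_flow():
    # Ask for the final State instead of the raw return value...
    state = add_one(41, return_state=True)
    # ...then resolve it; this mirrors `state.result()`.
    return get_state_result(state)


# my_flow() would be expected to return 42 under these assumptions.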
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.is_state","title":"is_state","text":"

Check if the given object is a state instance

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def is_state(obj: Any) -> TypeGuard[State]:\n\"\"\"\n    Check if the given object is a state instance\n    \"\"\"\n    # We may want to narrow this to client-side state types but for now this provides\n    # backwards compatibility\n    from prefect.server.schemas.states import State as State_\n\n    return isinstance(obj, (State, State_))\n
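For example, a minimal sketch using the state constructors from this module:

from prefect.states import Completed, is_state

print(is_state(Completed()))   # True: a client-side State instance
print(is_state("COMPLETED"))   # False: plain strings are not states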
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.is_state_iterable","title":"is_state_iterable","text":"

Check if the given object is an iterable of state types

Supported iterables are:

- set
- list
- tuple

Other iterables will return False even if they contain states.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def is_state_iterable(obj: Any) -> TypeGuard[Iterable[State]]:\n\"\"\"\n    Check if a the given object is an iterable of states types\n\n    Supported iterables are:\n    - set\n    - list\n    - tuple\n\n    Other iterables will return `False` even if they contain states.\n    \"\"\"\n    # We do not check for arbitary iterables because this is not intended to be used\n    # for things like dictionaries, dataframes, or pydantic models\n    if (\n        not isinstance(obj, BaseAnnotation)\n        and isinstance(obj, (list, set, tuple))\n        and obj\n    ):\n        return all([is_state(o) for o in obj])\n    else:\n        return False\n
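A few illustrative calls, following the rules above:

from prefect.states import Completed, Failed, is_state_iterable

print(is_state_iterable([Completed(), Failed()]))   # True: non-empty list of states
print(is_state_iterable([]))                        # False: empty iterables return False
print(is_state_iterable({"a": Completed()}))        # False: dicts are intentionally unsupported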
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.raise_state_exception","title":"raise_state_exception async","text":"

Given a FAILED or CRASHED state, raise the contained exception.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
@sync_compatible\nasync def raise_state_exception(state: State) -> None:\n\"\"\"\n    Given a FAILED or CRASHED state, raise the contained exception.\n    \"\"\"\n    if not (state.is_failed() or state.is_crashed() or state.is_cancelled()):\n        return None\n\n    raise await get_state_exception(state)\n
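A minimal sketch; the ValueError is an illustrative payload attached directly as the state's data:

from prefect.states import Failed, raise_state_exception

state = Failed(data=ValueError("boom"))

try:
    raise_state_exception(state)   # sync-compatible, so no await is needed here
except ValueError as exc:
    print(f"re-raised from state: {exc}")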
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.return_value_to_state","title":"return_value_to_state async","text":"

Given a return value from a user's function, create a State the run should be placed in.

The aggregate rule says that given multiple states we will determine the final state such that:

- If any states are not COMPLETED, the final state is FAILED.
- If all of the states are COMPLETED, the final state is COMPLETED.
- The states will be placed in the final state's data attribute.

Callers should resolve all futures into states before passing return values to this function.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
async def return_value_to_state(retval: R, result_factory: ResultFactory) -> State[R]:\n\"\"\"\n    Given a return value from a user's function, create a `State` the run should\n    be placed in.\n\n    - If data is returned, we create a 'COMPLETED' state with the data\n    - If a single, manually created state is returned, we use that state as given\n        (manual creation is determined by the lack of ids)\n    - If an upstream state or iterable of upstream states is returned, we apply the\n        aggregate rule\n\n    The aggregate rule says that given multiple states we will determine the final state\n    such that:\n\n    - If any states are not COMPLETED the final state is FAILED\n    - If all of the states are COMPLETED the final state is COMPLETED\n    - The states will be placed in the final state `data` attribute\n\n    Callers should resolve all futures into states before passing return values to this\n    function.\n    \"\"\"\n\n    if (\n        is_state(retval)\n        # Check for manual creation\n        and not retval.state_details.flow_run_id\n        and not retval.state_details.task_run_id\n    ):\n        state = retval\n\n        # Do not modify states with data documents attached; backwards compatibility\n        if isinstance(state.data, DataDocument):\n            return state\n\n        # Unless the user has already constructed a result explicitly, use the factory\n        # to update the data to the correct type\n        if not isinstance(state.data, BaseResult):\n            state.data = await result_factory.create_result(state.data)\n\n        return state\n\n    # Determine a new state from the aggregate of contained states\n    if is_state(retval) or is_state_iterable(retval):\n        states = StateGroup(ensure_iterable(retval))\n\n        # Determine the new state type\n        if states.all_completed():\n            new_state_type = StateType.COMPLETED\n        elif states.any_cancelled():\n            new_state_type = StateType.CANCELLED\n        elif states.any_paused():\n            new_state_type = StateType.PAUSED\n        else:\n            new_state_type = StateType.FAILED\n\n        # Generate a nice message for the aggregate\n        if states.all_completed():\n            message = \"All states completed.\"\n        elif states.any_cancelled():\n            message = f\"{states.cancelled_count}/{states.total_count} states cancelled.\"\n        elif states.any_paused():\n            message = f\"{states.paused_count}/{states.total_count} states paused.\"\n        elif states.any_failed():\n            message = f\"{states.fail_count}/{states.total_count} states failed.\"\n        elif not states.all_final():\n            message = (\n                f\"{states.not_final_count}/{states.total_count} states are not final.\"\n            )\n        else:\n            message = \"Given states: \" + states.counts_message()\n\n        # TODO: We may actually want to set the data to a `StateGroup` object and just\n        #       allow it to be unpacked into a tuple and such so users can interact with\n        #       it\n        return State(\n            type=new_state_type,\n            message=message,\n            data=await result_factory.create_result(retval),\n        )\n\n    # Generators aren't portable, implicitly convert them to a list.\n    if isinstance(retval, GeneratorType):\n        data = list(retval)\n    else:\n        data = retval\n\n    # Otherwise, they just gave data and this is a completed retval\n    return 
Completed(data=await result_factory.create_result(data))\n
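Most users encounter the aggregate rule indirectly, by returning states from a flow. A hedged sketch of that behavior; the exact error type and message are determined by Prefect at run time:

from prefect import flow
from prefect.states import Completed, Failed


@flow
def aggregate_demo():
    # An iterable of states triggers the aggregate rule described above:
    # any non-COMPLETED member makes the final state FAILED.
    return [Completed(), Failed()]


# Calling aggregate_demo() would be expected to raise, because the flow run's final
# state is FAILED; call aggregate_demo(return_state=True) to inspect the state instead.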
","tags":["Python API","states"]},{"location":"api-ref/prefect/task-runners/","title":"prefect.task_runners","text":"","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners","title":"prefect.task_runners","text":"

Interface and implementations of various task runners.

Task Runners in Prefect are responsible for managing the execution of Prefect task runs. Generally speaking, users are not expected to interact with task runners outside of configuring and initializing them for a flow.

Example
>>> from prefect import flow, task\n>>> from prefect.task_runners import SequentialTaskRunner\n>>> from typing import List\n>>>\n>>> @task\n>>> def say_hello(name):\n...     print(f\"hello {name}\")\n>>>\n>>> @task\n>>> def say_goodbye(name):\n...     print(f\"goodbye {name}\")\n>>>\n>>> @flow(task_runner=SequentialTaskRunner())\n>>> def greetings(names: List[str]):\n...     for name in names:\n...         say_hello(name)\n...         say_goodbye(name)\n>>>\n>>> greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\nhello arthur\ngoodbye arthur\nhello trillian\ngoodbye trillian\nhello ford\ngoodbye ford\nhello marvin\ngoodbye marvin\n

Switching to a DaskTaskRunner:

>>> from prefect_dask.task_runners import DaskTaskRunner\n>>> flow.task_runner = DaskTaskRunner()\n>>> greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\nhello arthur\ngoodbye arthur\nhello trillian\nhello ford\ngoodbye marvin\nhello marvin\ngoodbye ford\ngoodbye trillian\n

For usage details, see the Task Runners documentation.

","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner","title":"BaseTaskRunner","text":"Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/task_runners.py
class BaseTaskRunner(metaclass=abc.ABCMeta):\n    def __init__(self) -> None:\n        self.logger = get_logger(f\"task_runner.{self.name}\")\n        self._started: bool = False\n\n    @property\n    @abc.abstractmethod\n    def concurrency_type(self) -> TaskConcurrencyType:\n        pass  # noqa\n\n    @property\n    def name(self):\n        return type(self).__name__.lower().replace(\"taskrunner\", \"\")\n\n    def duplicate(self):\n\"\"\"\n        Return a new task runner instance with the same options.\n        \"\"\"\n        # The base class returns `NotImplemented` to indicate that this is not yet\n        # implemented by a given task runner.\n        return NotImplemented\n\n    def __eq__(self, other: object) -> bool:\n\"\"\"\n        Returns true if the task runners use the same options.\n        \"\"\"\n        if type(other) == type(self) and (\n            # Compare public attributes for naive equality check\n            # Subclasses should implement this method with a check init option equality\n            {k: v for k, v in self.__dict__.items() if not k.startswith(\"_\")}\n            == {k: v for k, v in other.__dict__.items() if not k.startswith(\"_\")}\n        ):\n            return True\n        else:\n            return NotImplemented\n\n    @abc.abstractmethod\n    async def submit(\n        self,\n        key: UUID,\n        call: Callable[..., Awaitable[State[R]]],\n    ) -> None:\n\"\"\"\n        Submit a call for execution and return a `PrefectFuture` that can be used to\n        get the call result.\n\n        Args:\n            task_run: The task run being submitted.\n            task_key: A unique key for this orchestration run of the task. Can be used\n                for caching.\n            call: The function to be executed\n            run_kwargs: A dict of keyword arguments to pass to `call`\n\n        Returns:\n            A future representing the result of `call` execution\n        \"\"\"\n        raise NotImplementedError()\n\n    @abc.abstractmethod\n    async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n\"\"\"\n        Given a `PrefectFuture`, wait for its return state up to `timeout` seconds.\n        If it is not finished after the timeout expires, `None` should be returned.\n\n        Implementers should be careful to ensure that this function never returns or\n        raises an exception.\n        \"\"\"\n        raise NotImplementedError()\n\n    @asynccontextmanager\n    async def start(\n        self: T,\n    ) -> AsyncIterator[T]:\n\"\"\"\n        Start the task runner, preparing any resources necessary for task submission.\n\n        Children should implement `_start` to prepare and clean up resources.\n\n        Yields:\n            The prepared task runner\n        \"\"\"\n        if self._started:\n            raise RuntimeError(\"The task runner is already started!\")\n\n        async with AsyncExitStack() as exit_stack:\n            self.logger.debug(\"Starting task runner...\")\n            try:\n                await self._start(exit_stack)\n                self._started = True\n                yield self\n            finally:\n                self.logger.debug(\"Shutting down task runner...\")\n                self._started = False\n\n    async def _start(self, exit_stack: AsyncExitStack) -> None:\n\"\"\"\n        Create any resources required for this task runner to submit work.\n\n        Cleanup of resources should be submitted to the `exit_stack`.\n        \"\"\"\n        pass  # noqa\n\n    def 
__str__(self) -> str:\n        return type(self).__name__\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.duplicate","title":"duplicate","text":"

Return a new task runner instance with the same options.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/task_runners.py
def duplicate(self):\n\"\"\"\n    Return a new task runner instance with the same options.\n    \"\"\"\n    # The base class returns `NotImplemented` to indicate that this is not yet\n    # implemented by a given task runner.\n    return NotImplemented\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.start","title":"start async","text":"

Start the task runner, preparing any resources necessary for task submission.

Children should implement _start to prepare and clean up resources.

Yields:

- AsyncIterator[T]: The prepared task runner

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/task_runners.py
@asynccontextmanager\nasync def start(\n    self: T,\n) -> AsyncIterator[T]:\n\"\"\"\n    Start the task runner, preparing any resources necessary for task submission.\n\n    Children should implement `_start` to prepare and clean up resources.\n\n    Yields:\n        The prepared task runner\n    \"\"\"\n    if self._started:\n        raise RuntimeError(\"The task runner is already started!\")\n\n    async with AsyncExitStack() as exit_stack:\n        self.logger.debug(\"Starting task runner...\")\n        try:\n            await self._start(exit_stack)\n            self._started = True\n            yield self\n        finally:\n            self.logger.debug(\"Shutting down task runner...\")\n            self._started = False\n
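A small sketch of driving start() by hand with the sequential runner; inside a flow this is handled for you, so treat this purely as an illustration of the lifecycle:

import anyio

from prefect.task_runners import SequentialTaskRunner


async def main():
    runner = SequentialTaskRunner()
    # `start` is an async context manager: resources are prepared on entry and
    # cleaned up on exit; entering it twice raises RuntimeError.
    async with runner.start() as started:
        print(started.name, started.concurrency_type)


anyio.run(main)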
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.submit","title":"submit abstractmethod async","text":"

Submit a call for execution and return a PrefectFuture that can be used to get the call result.

Parameters:

- task_run: The task run being submitted. (required)
- task_key: A unique key for this orchestration run of the task. Can be used for caching. (required)
- call (Callable[..., Awaitable[State[R]]]): The function to be executed. (required)
- run_kwargs: A dict of keyword arguments to pass to call. (required)

Returns:

- None: A future representing the result of call execution

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/task_runners.py
@abc.abstractmethod\nasync def submit(\n    self,\n    key: UUID,\n    call: Callable[..., Awaitable[State[R]]],\n) -> None:\n\"\"\"\n    Submit a call for execution and return a `PrefectFuture` that can be used to\n    get the call result.\n\n    Args:\n        task_run: The task run being submitted.\n        task_key: A unique key for this orchestration run of the task. Can be used\n            for caching.\n        call: The function to be executed\n        run_kwargs: A dict of keyword arguments to pass to `call`\n\n    Returns:\n        A future representing the result of `call` execution\n    \"\"\"\n    raise NotImplementedError()\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.wait","title":"wait abstractmethod async","text":"

Given a PrefectFuture, wait for its return state up to timeout seconds. If it is not finished after the timeout expires, None should be returned.

Implementers should be careful to ensure that this function never returns or raises an exception.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/task_runners.py
@abc.abstractmethod\nasync def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n\"\"\"\n    Given a `PrefectFuture`, wait for its return state up to `timeout` seconds.\n    If it is not finished after the timeout expires, `None` should be returned.\n\n    Implementers should be careful to ensure that this function never returns or\n    raises an exception.\n    \"\"\"\n    raise NotImplementedError()\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.ConcurrentTaskRunner","title":"ConcurrentTaskRunner","text":"

Bases: BaseTaskRunner

A concurrent task runner that allows tasks to switch when blocking on IO. Synchronous tasks will be submitted to a thread pool maintained by anyio.

Example
Using a thread for concurrency:\n>>> from prefect import flow\n>>> from prefect.task_runners import ConcurrentTaskRunner\n>>> @flow(task_runner=ConcurrentTaskRunner)\n>>> def my_flow():\n>>>     ...\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/task_runners.py
class ConcurrentTaskRunner(BaseTaskRunner):\n\"\"\"\n    A concurrent task runner that allows tasks to switch when blocking on IO.\n    Synchronous tasks will be submitted to a thread pool maintained by `anyio`.\n\n    Example:\n        ```\n        Using a thread for concurrency:\n        >>> from prefect import flow\n        >>> from prefect.task_runners import ConcurrentTaskRunner\n        >>> @flow(task_runner=ConcurrentTaskRunner)\n        >>> def my_flow():\n        >>>     ...\n        ```\n    \"\"\"\n\n    def __init__(self):\n        # TODO: Consider adding `max_workers` support using anyio capacity limiters\n\n        # Runtime attributes\n        self._task_group: anyio.abc.TaskGroup = None\n        self._result_events: Dict[UUID, Event] = {}\n        self._results: Dict[UUID, Any] = {}\n        self._keys: Set[UUID] = set()\n\n        super().__init__()\n\n    @property\n    def concurrency_type(self) -> TaskConcurrencyType:\n        return TaskConcurrencyType.CONCURRENT\n\n    def duplicate(self):\n        return type(self)()\n\n    async def submit(\n        self,\n        key: UUID,\n        call: Callable[[], Awaitable[State[R]]],\n    ) -> None:\n        if not self._started:\n            raise RuntimeError(\n                \"The task runner must be started before submitting work.\"\n            )\n\n        if not self._task_group:\n            raise RuntimeError(\n                \"The concurrent task runner cannot be used to submit work after \"\n                \"serialization.\"\n            )\n\n        # Create an event to set on completion\n        self._result_events[key] = Event()\n\n        # Rely on the event loop for concurrency\n        self._task_group.start_soon(self._run_and_store_result, key, call)\n\n    async def wait(\n        self,\n        key: UUID,\n        timeout: float = None,\n    ) -> Optional[State]:\n        if not self._task_group:\n            raise RuntimeError(\n                \"The concurrent task runner cannot be used to wait for work after \"\n                \"serialization.\"\n            )\n\n        return await self._get_run_result(key, timeout)\n\n    async def _run_and_store_result(\n        self, key: UUID, call: Callable[[], Awaitable[State[R]]]\n    ):\n\"\"\"\n        Simple utility to store the orchestration result in memory on completion\n\n        Since this run is occuring on the main thread, we capture exceptions to prevent\n        task crashes from crashing the flow run.\n        \"\"\"\n        try:\n            result = await call()\n        except BaseException as exc:\n            result = await exception_to_crashed_state(exc)\n\n        self._results[key] = result\n        self._result_events[key].set()\n\n    async def _get_run_result(\n        self, key: UUID, timeout: float = None\n    ) -> Optional[State]:\n\"\"\"\n        Block until the run result has been populated.\n        \"\"\"\n        result = None  # retval on tiemout\n\n        # Note we do not use `asyncio.wrap_future` and instead use an `Event` to avoid\n        # stdlib behavior where the wrapped future is cancelled if the parent future is\n        # cancelled (as it would be during a timeout here)\n        with anyio.move_on_after(timeout):\n            await self._result_events[key].wait()\n            result = self._results[key]\n\n        return result  # timeout reached\n\n    async def _start(self, exit_stack: AsyncExitStack):\n\"\"\"\n        Start the process pool\n        \"\"\"\n        self._task_group = await 
exit_stack.enter_async_context(\n            anyio.create_task_group()\n        )\n\n    def __getstate__(self):\n\"\"\"\n        Allow the `ConcurrentTaskRunner` to be serialized by dropping the task group.\n        \"\"\"\n        data = self.__dict__.copy()\n        data.update({k: None for k in {\"_task_group\"}})\n        return data\n\n    def __setstate__(self, data: dict):\n\"\"\"\n        When deserialized, we will no longer have a reference to the task group.\n        \"\"\"\n        self.__dict__.update(data)\n        self._task_group = None\n
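A sketch of the kind of workload this runner helps with: synchronous tasks that block on IO (simulated here with time.sleep) are interleaved on worker threads once submitted.

import time

from prefect import flow, task
from prefect.task_runners import ConcurrentTaskRunner


@task
def sleepy(n: int) -> int:
    time.sleep(1)   # stand-in for blocking IO
    return n


@flow(task_runner=ConcurrentTaskRunner())
def concurrent_flow():
    # submit() returns futures immediately; the runner overlaps the blocking calls.
    futures = [sleepy.submit(i) for i in range(3)]
    return [f.result() for f in futures]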
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.SequentialTaskRunner","title":"SequentialTaskRunner","text":"

Bases: BaseTaskRunner

A simple task runner that executes calls as they are submitted.

If writing synchronous tasks, this runner will always execute tasks sequentially. If writing async tasks, this runner will execute tasks sequentially unless grouped using anyio.create_task_group or asyncio.gather.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/task_runners.py
class SequentialTaskRunner(BaseTaskRunner):\n\"\"\"\n    A simple task runner that executes calls as they are submitted.\n\n    If writing synchronous tasks, this runner will always execute tasks sequentially.\n    If writing async tasks, this runner will execute tasks sequentially unless grouped\n    using `anyio.create_task_group` or `asyncio.gather`.\n    \"\"\"\n\n    def __init__(self) -> None:\n        super().__init__()\n        self._results: Dict[str, State] = {}\n\n    @property\n    def concurrency_type(self) -> TaskConcurrencyType:\n        return TaskConcurrencyType.SEQUENTIAL\n\n    def duplicate(self):\n        return type(self)()\n\n    async def submit(\n        self,\n        key: UUID,\n        call: Callable[..., Awaitable[State[R]]],\n    ) -> None:\n        # Run the function immediately and store the result in memory\n        try:\n            result = await call()\n        except BaseException as exc:\n            result = await exception_to_crashed_state(exc)\n\n        self._results[key] = result\n\n    async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n        return self._results[key]\n
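For comparison, a minimal sketch of opting into this runner explicitly; submitted tasks are resolved one at a time, in submission order:

from prefect import flow, task
from prefect.task_runners import SequentialTaskRunner


@task
def say(message: str):
    print(message)


@flow(task_runner=SequentialTaskRunner())
def ordered_flow():
    for message in ["first", "second", "third"]:
        say.submit(message)   # fully resolved on submission with this runner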
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/tasks/","title":"prefect.tasks","text":"","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks","title":"prefect.tasks","text":"

Module containing the base workflow task class and decorator - for most use cases, using the @task decorator is preferred.

","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task","title":"Task","text":"

Bases: Generic[P, R]

A Prefect task definition.

Note

We recommend using the @task decorator for most use-cases.

Wraps a function with an entrypoint to the Prefect engine. Calling this class within a flow function creates a new task run.

To preserve the input and output types, we use the generic type variables P and R for \"Parameters\" and \"Returns\" respectively.
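Before the full parameter reference below, a brief sketch of a task that exercises several of these options; task_input_hash is Prefect's built-in input-based cache key function, and the option values chosen here are illustrative only.

from datetime import timedelta

from prefect import task
from prefect.tasks import task_input_hash


@task(
    name="fetch-data",
    retries=2,
    retry_delay_seconds=[1, 10],        # per-retry delays, capped at 50 entries
    cache_key_fn=task_input_hash,       # reuse results when inputs repeat
    cache_expiration=timedelta(hours=1),
    log_prints=True,
)
def fetch_data(url: str) -> str:
    print(f"fetching {url}")
    return url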

Parameters:

- fn (Callable[P, R], required): The function defining the task.
- name (str, default None): An optional name for the task; if not provided, the name will be inferred from the given function.
- description (str, default None): An optional string description for the task.
- tags (Iterable[str], default None): An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a prefect.tags context at task runtime.
- version (str, default None): An optional string specifying the version of this task definition.
- cache_key_fn (Callable[[TaskRunContext, Dict[str, Any]], Optional[str]], default None): An optional callable that, given the task run context and call parameters, generates a string key; if the key matches a previous completed state, that state result will be restored instead of running the task again.
- cache_expiration (datetime.timedelta, default None): An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire.
- task_run_name (Optional[Union[Callable[[], str], str]], default None): An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string.
- retries (Optional[int], default None): An optional number of times to retry on task run failure.
- retry_delay_seconds (Optional[Union[float, int, List[float], Callable[[int], List[float]]]], default None): Optionally configures how long to wait before retrying the task after failure. This is only applicable if retries is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50.
- retry_jitter_factor (Optional[float], default None): An optional factor that defines the factor to which a retry can be jittered in order to avoid a "thundering herd".
- persist_result (Optional[bool], default None): An optional toggle indicating whether the result of this task should be persisted to result storage. Defaults to None, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.
- result_storage (Optional[ResultStorage], default None): An optional block to use to persist the result of this task. Defaults to the value set in the flow the task is called in.
- result_storage_key (Optional[str], default None): An optional key to store the result in storage at when persisted. Defaults to a unique identifier.
- result_serializer (Optional[ResultSerializer], default None): An optional serializer to use to serialize the result of this task for persistence. Defaults to the value set in the flow the task is called in.
- timeout_seconds (Union[int, float], default None): An optional number of seconds indicating a maximum runtime for the task. If the task exceeds this runtime, it will be marked as failed.
- log_prints (Optional[bool], default False): If set, print statements in the task will be redirected to the Prefect logger for the task run. Defaults to None, which indicates that the value from the flow should be used.
- refresh_cache (Optional[bool], default None): If set, cached results for the cache key are not used. Defaults to None, which indicates that a cached result from a previous execution with matching cache key is used.
- on_failure (Optional[List[Callable[[Task, TaskRun, State], None]]], default None): An optional list of callables to run when the task enters a failed state.
- on_completion (Optional[List[Callable[[Task, TaskRun, State], None]]], default None): An optional list of callables to run when the task enters a completed state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/tasks.py
@PrefectObjectRegistry.register_instances\nclass Task(Generic[P, R]):\n\"\"\"\n    A Prefect task definition.\n\n    !!! note\n        We recommend using [the `@task` decorator][prefect.tasks.task] for most use-cases.\n\n    Wraps a function with an entrypoint to the Prefect engine. Calling this class within a flow function\n    creates a new task run.\n\n    To preserve the input and output types, we use the generic type variables P and R for \"Parameters\" and\n    \"Returns\" respectively.\n\n    Args:\n        fn: The function defining the task.\n        name: An optional name for the task; if not provided, the name will be inferred\n            from the given function.\n        description: An optional string description for the task.\n        tags: An optional set of tags to be associated with runs of this task. These\n            tags are combined with any tags defined by a `prefect.tags` context at\n            task runtime.\n        version: An optional string specifying the version of this task definition\n        cache_key_fn: An optional callable that, given the task run context and call\n            parameters, generates a string key; if the key matches a previous completed\n            state, that state result will be restored instead of running the task again.\n        cache_expiration: An optional amount of time indicating how long cached states\n            for this task should be restorable; if not provided, cached states will\n            never expire.\n        task_run_name: An optional name to distinguish runs of this task; this name can be provided\n            as a string template with the task's keyword arguments as variables,\n            or a function that returns a string.\n        retries: An optional number of times to retry on task run failure.\n        retry_delay_seconds: Optionally configures how long to wait before retrying the\n            task after failure. This is only applicable if `retries` is nonzero. This\n            setting can either be a number of seconds, a list of retry delays, or a\n            callable that, given the total number of retries, generates a list of retry\n            delays. If a number of seconds, that delay will be applied to all retries.\n            If a list, each retry will wait for the corresponding delay before retrying.\n            When passing a callable or a list, the number of configured retry delays\n            cannot exceed 50.\n        retry_jitter_factor: An optional factor that defines the factor to which a retry\n            can be jittered in order to avoid a \"thundering herd\".\n        persist_result: An optional toggle indicating whether the result of this task\n            should be persisted to result storage. Defaults to `None`, which indicates\n            that Prefect should choose whether the result should be persisted depending on\n            the features being used.\n        result_storage: An optional block to use to persist the result of this task.\n            Defaults to the value set in the flow the task is called in.\n        result_storage_key: An optional key to store the result in storage at when persisted.\n            Defaults to a unique identifier.\n        result_serializer: An optional serializer to use to serialize the result of this\n            task for persistence. Defaults to the value set in the flow the task is\n            called in.\n        timeout_seconds: An optional number of seconds indicating a maximum runtime for\n            the task. 
If the task exceeds this runtime, it will be marked as failed.\n        log_prints: If set, `print` statements in the task will be redirected to the\n            Prefect logger for the task run. Defaults to `None`, which indicates\n            that the value from the flow should be used.\n        refresh_cache: If set, cached results for the cache key are not used.\n            Defaults to `None`, which indicates that a cached result from a previous\n            execution with matching cache key is used.\n        on_failure: An optional list of callables to run when the task enters a failed state.\n        on_completion: An optional list of callables to run when the task enters a completed state.\n    \"\"\"\n\n    # NOTE: These parameters (types, defaults, and docstrings) should be duplicated\n    #       exactly in the @task decorator\n    def __init__(\n        self,\n        fn: Callable[P, R],\n        name: str = None,\n        description: str = None,\n        tags: Iterable[str] = None,\n        version: str = None,\n        cache_key_fn: Callable[\n            [\"TaskRunContext\", Dict[str, Any]], Optional[str]\n        ] = None,\n        cache_expiration: datetime.timedelta = None,\n        task_run_name: Optional[Union[Callable[[], str], str]] = None,\n        retries: Optional[int] = None,\n        retry_delay_seconds: Optional[\n            Union[\n                float,\n                int,\n                List[float],\n                Callable[[int], List[float]],\n            ]\n        ] = None,\n        retry_jitter_factor: Optional[float] = None,\n        persist_result: Optional[bool] = None,\n        result_storage: Optional[ResultStorage] = None,\n        result_serializer: Optional[ResultSerializer] = None,\n        result_storage_key: Optional[str] = None,\n        cache_result_in_memory: bool = True,\n        timeout_seconds: Union[int, float] = None,\n        log_prints: Optional[bool] = False,\n        refresh_cache: Optional[bool] = None,\n        on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n        on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n    ):\n        # Validate if hook passed is list and contains callables\n        hook_categories = [on_completion, on_failure]\n        hook_names = [\"on_completion\", \"on_failure\"]\n        for hooks, hook_name in zip(hook_categories, hook_names):\n            if hooks is not None:\n                if not hooks:\n                    raise ValueError(f\"Empty list passed for '{hook_name}'\")\n                try:\n                    hooks = list(hooks)\n                except TypeError:\n                    raise TypeError(\n                        f\"Expected iterable for '{hook_name}'; got\"\n                        f\" {type(hooks).__name__} instead. Please provide a list of\"\n                        f\" hooks to '{hook_name}':\\n\\n\"\n                        f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n                        \" my_flow():\\n\\tpass\"\n                    )\n\n                for hook in hooks:\n                    if not callable(hook):\n                        raise TypeError(\n                            f\"Expected callables in '{hook_name}'; got\"\n                            f\" {type(hook).__name__} instead. 
Please provide a list of\"\n                            f\" hooks to '{hook_name}':\\n\\n\"\n                            f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n                            \" my_flow():\\n\\tpass\"\n                        )\n\n        if not callable(fn):\n            raise TypeError(\"'fn' must be callable\")\n\n        self.description = description or inspect.getdoc(fn)\n        update_wrapper(self, fn)\n        self.fn = fn\n        self.isasync = inspect.iscoroutinefunction(self.fn)\n\n        if not name:\n            if not hasattr(self.fn, \"__name__\"):\n                self.name = type(self.fn).__name__\n            else:\n                self.name = self.fn.__name__\n        else:\n            self.name = name\n\n        if task_run_name is not None:\n            if not isinstance(task_run_name, str) and not callable(task_run_name):\n                raise TypeError(\n                    \"Expected string or callable for 'task_run_name'; got\"\n                    f\" {type(task_run_name).__name__} instead.\"\n                )\n        self.task_run_name = task_run_name\n\n        self.version = version\n        self.log_prints = log_prints\n\n        raise_for_reserved_arguments(self.fn, [\"return_state\", \"wait_for\"])\n\n        self.tags = set(tags if tags else [])\n\n        if not hasattr(self.fn, \"__qualname__\"):\n            self.task_key = to_qualified_name(type(self.fn))\n        else:\n            self.task_key = to_qualified_name(self.fn)\n\n        self.cache_key_fn = cache_key_fn\n        self.cache_expiration = cache_expiration\n        self.refresh_cache = refresh_cache\n\n        # TaskRunPolicy settings\n        # TODO: We can instantiate a `TaskRunPolicy` and add Pydantic bound checks to\n        #       validate that the user passes positive numbers here\n\n        self.retries = (\n            retries if retries is not None else PREFECT_TASK_DEFAULT_RETRIES.value()\n        )\n        if retry_delay_seconds is None:\n            retry_delay_seconds = PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS.value()\n\n        if callable(retry_delay_seconds):\n            self.retry_delay_seconds = retry_delay_seconds(retries)\n        else:\n            self.retry_delay_seconds = retry_delay_seconds\n\n        if isinstance(self.retry_delay_seconds, list) and (\n            len(self.retry_delay_seconds) > 50\n        ):\n            raise ValueError(\"Can not configure more than 50 retry delays per task.\")\n\n        if retry_jitter_factor is not None and retry_jitter_factor < 0:\n            raise ValueError(\"`retry_jitter_factor` must be >= 0.\")\n\n        self.retry_jitter_factor = retry_jitter_factor\n\n        self.persist_result = persist_result\n        self.result_storage = result_storage\n        self.result_serializer = result_serializer\n        self.result_storage_key = result_storage_key\n        self.cache_result_in_memory = cache_result_in_memory\n        self.timeout_seconds = float(timeout_seconds) if timeout_seconds else None\n        # Warn if this task's `name` conflicts with another task while having a\n        # different function. 
This is to detect the case where two or more tasks\n        # share a name or are lambdas, which should result in a warning, and to\n        # differentiate it from the case where the task was 'copied' via\n        # `with_options`, which should not result in a warning.\n        registry = PrefectObjectRegistry.get()\n\n        if registry and any(\n            other\n            for other in registry.get_instances(Task)\n            if other.name == self.name and id(other.fn) != id(self.fn)\n        ):\n            try:\n                file = inspect.getsourcefile(self.fn)\n                line_number = inspect.getsourcelines(self.fn)[1]\n            except TypeError:\n                file = \"unknown\"\n                line_number = \"unknown\"\n\n            warnings.warn(\n                f\"A task named {self.name!r} and defined at '{file}:{line_number}' \"\n                \"conflicts with another task. Consider specifying a unique `name` \"\n                \"parameter in the task definition:\\n\\n \"\n                \"`@task(name='my_unique_name', ...)`\"\n            )\n        self.on_completion = on_completion\n        self.on_failure = on_failure\n\n    def with_options(\n        self,\n        *,\n        name: str = None,\n        description: str = None,\n        tags: Iterable[str] = None,\n        cache_key_fn: Callable[\n            [\"TaskRunContext\", Dict[str, Any]], Optional[str]\n        ] = None,\n        task_run_name: Optional[Union[Callable[[], str], str]] = None,\n        cache_expiration: datetime.timedelta = None,\n        retries: Optional[int] = NotSet,\n        retry_delay_seconds: Union[\n            float,\n            int,\n            List[float],\n            Callable[[int], List[float]],\n        ] = NotSet,\n        retry_jitter_factor: Optional[float] = NotSet,\n        persist_result: Optional[bool] = NotSet,\n        result_storage: Optional[ResultStorage] = NotSet,\n        result_serializer: Optional[ResultSerializer] = NotSet,\n        result_storage_key: Optional[str] = NotSet,\n        cache_result_in_memory: Optional[bool] = None,\n        timeout_seconds: Union[int, float] = None,\n        log_prints: Optional[bool] = NotSet,\n        refresh_cache: Optional[bool] = NotSet,\n        on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n        on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n    ):\n\"\"\"\n        Create a new task from the current object, updating provided options.\n\n        Args:\n            name: A new name for the task.\n            description: A new description for the task.\n            tags: A new set of tags for the task. If given, existing tags are ignored,\n                not merged.\n            cache_key_fn: A new cache key function for the task.\n            cache_expiration: A new cache expiration time for the task.\n            task_run_name: An optional name to distinguish runs of this task; this name can be provided\n                as a string template with the task's keyword arguments as variables,\n                or a function that returns a string.\n            retries: A new number of times to retry on task run failure.\n            retry_delay_seconds: Optionally configures how long to wait before retrying\n                the task after failure. 
This is only applicable if `retries` is nonzero.\n                This setting can either be a number of seconds, a list of retry delays,\n                or a callable that, given the total number of retries, generates a list\n                of retry delays. If a number of seconds, that delay will be applied to\n                all retries. If a list, each retry will wait for the corresponding delay\n                before retrying. When passing a callable or a list, the number of\n                configured retry delays cannot exceed 50.\n            retry_jitter_factor: An optional factor that defines the factor to which a\n                retry can be jittered in order to avoid a \"thundering herd\".\n            persist_result: A new option for enabling or disabling result persistence.\n            result_storage: A new storage type to use for results.\n            result_serializer: A new serializer to use for results.\n            result_storage_key: A new key for the persisted result to be stored at.\n            timeout_seconds: A new maximum time for the task to complete in seconds.\n            log_prints: A new option for enabling or disabling redirection of `print` statements.\n            refresh_cache: A new option for enabling or disabling cache refresh.\n            on_completion: A new list of callables to run when the task enters a completed state.\n            on_failure: A new list of callables to run when the task enters a failed state.\n\n        Returns:\n            A new `Task` instance.\n\n        Examples:\n\n            Create a new task from an existing task and update the name\n\n            >>> @task(name=\"My task\")\n            >>> def my_task():\n            >>>     return 1\n            >>>\n            >>> new_task = my_task.with_options(name=\"My new task\")\n\n            Create a new task from an existing task and update the retry settings\n\n            >>> from random import randint\n            >>>\n            >>> @task(retries=1, retry_delay_seconds=5)\n            >>> def my_task():\n            >>>     x = randint(0, 5)\n            >>>     if x >= 3:  # Make a task that fails sometimes\n            >>>         raise ValueError(\"Retry me please!\")\n            >>>     return x\n            >>>\n            >>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)\n\n            Use a task with updated options within a flow\n\n            >>> @task(name=\"My task\")\n            >>> def my_task():\n            >>>     return 1\n            >>>\n            >>> @flow\n            >>> my_flow():\n            >>>     new_task = my_task.with_options(name=\"My new task\")\n            >>>     new_task()\n        \"\"\"\n        return Task(\n            fn=self.fn,\n            name=name or self.name,\n            description=description or self.description,\n            tags=tags or copy(self.tags),\n            cache_key_fn=cache_key_fn or self.cache_key_fn,\n            cache_expiration=cache_expiration or self.cache_expiration,\n            task_run_name=task_run_name,\n            retries=retries if retries is not NotSet else self.retries,\n            retry_delay_seconds=(\n                retry_delay_seconds\n                if retry_delay_seconds is not NotSet\n                else self.retry_delay_seconds\n            ),\n            retry_jitter_factor=(\n                retry_jitter_factor\n                if retry_jitter_factor is not NotSet\n                else self.retry_jitter_factor\n            ),\n            
persist_result=(\n                persist_result if persist_result is not NotSet else self.persist_result\n            ),\n            result_storage=(\n                result_storage if result_storage is not NotSet else self.result_storage\n            ),\n            result_storage_key=(\n                result_storage_key\n                if result_storage_key is not NotSet\n                else self.result_storage_key\n            ),\n            result_serializer=(\n                result_serializer\n                if result_serializer is not NotSet\n                else self.result_serializer\n            ),\n            cache_result_in_memory=(\n                cache_result_in_memory\n                if cache_result_in_memory is not None\n                else self.cache_result_in_memory\n            ),\n            timeout_seconds=(\n                timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n            ),\n            log_prints=(log_prints if log_prints is not NotSet else self.log_prints),\n            refresh_cache=(\n                refresh_cache if refresh_cache is not NotSet else self.refresh_cache\n            ),\n            on_completion=on_completion or self.on_completion,\n            on_failure=on_failure or self.on_failure,\n        )\n\n    @overload\n    def __call__(\n        self: \"Task[P, NoReturn]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> None:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def __call__(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> T:\n        ...\n\n    @overload\n    def __call__(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        return_state: Literal[True],\n        **kwargs: P.kwargs,\n    ) -> State[T]:\n        ...\n\n    def __call__(\n        self,\n        *args: P.args,\n        return_state: bool = False,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: P.kwargs,\n    ):\n\"\"\"\n        Run the task and return the result. 
If `return_state` is True returns\n        the result is wrapped in a Prefect State which provides error handling.\n        \"\"\"\n        from prefect.engine import enter_task_run_engine\n        from prefect.task_runners import SequentialTaskRunner\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n\n        return_type = \"state\" if return_state else \"result\"\n\n        return enter_task_run_engine(\n            self,\n            parameters=parameters,\n            wait_for=wait_for,\n            task_runner=SequentialTaskRunner(),\n            return_type=return_type,\n            mapped=False,\n        )\n\n    @overload\n    def _run(\n        self: \"Task[P, NoReturn]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> PrefectFuture[None, Sync]:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def _run(\n        self: \"Task[P, Coroutine[Any, Any, T]]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> Awaitable[State[T]]:\n        ...\n\n    @overload\n    def _run(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> State[T]:\n        ...\n\n    def _run(\n        self,\n        *args: P.args,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: P.kwargs,\n    ) -> Union[State, Awaitable[State]]:\n\"\"\"\n        Run the task and return the final state.\n        \"\"\"\n        from prefect.engine import enter_task_run_engine\n        from prefect.task_runners import SequentialTaskRunner\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n\n        return enter_task_run_engine(\n            self,\n            parameters=parameters,\n            wait_for=wait_for,\n            return_type=\"state\",\n            task_runner=SequentialTaskRunner(),\n            mapped=False,\n        )\n\n    @overload\n    def submit(\n        self: \"Task[P, NoReturn]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> PrefectFuture[None, Sync]:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def submit(\n        self: \"Task[P, Coroutine[Any, Any, T]]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> Awaitable[PrefectFuture[T, Async]]:\n        ...\n\n    @overload\n    def submit(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> PrefectFuture[T, Sync]:\n        ...\n\n    @overload\n    def submit(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        return_state: Literal[True],\n        **kwargs: P.kwargs,\n    ) -> State[T]:\n        ...\n\n    def submit(\n        self,\n        *args: Any,\n        return_state: bool = False,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: Any,\n    ) -> Union[PrefectFuture, Awaitable[PrefectFuture]]:\n\"\"\"\n        Submit a run of the task to a worker.\n\n        Must be called within a flow function. If writing an async task, this call must\n        be awaited.\n\n        Will create a new task run in the backing API and submit the task to the flow's\n        task runner. 
This call only blocks execution while the task is being submitted,\n        once it is submitted, the flow function will continue executing. However, note\n        that the `SequentialTaskRunner` does not implement parallel execution for sync tasks\n        and they are fully resolved on submission.\n\n        Args:\n            *args: Arguments to run the task with\n            return_state: Return the result of the flow run wrapped in a\n                Prefect State.\n            wait_for: Upstream task futures to wait for before starting the task\n            **kwargs: Keyword arguments to run the task with\n\n        Returns:\n            If `return_state` is False a future allowing asynchronous access to\n                the state of the task\n            If `return_state` is True a future wrapped in a Prefect State allowing asynchronous access to\n                the state of the task\n\n        Examples:\n\n            Define a task\n\n            >>> from prefect import task\n            >>> @task\n            >>> def my_task():\n            >>>     return \"hello\"\n\n            Run a task in a flow\n\n            >>> from prefect import flow\n            >>> @flow\n            >>> def my_flow():\n            >>>     my_task.submit()\n\n            Wait for a task to finish\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     my_task.submit().wait()\n\n            Use the result from a task in a flow\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     print(my_task.submit().result())\n            >>>\n            >>> my_flow()\n            hello\n\n            Run an async task in an async flow\n\n            >>> @task\n            >>> async def my_async_task():\n            >>>     pass\n            >>>\n            >>> @flow\n            >>> async def my_flow():\n            >>>     await my_async_task.submit()\n\n            Run a sync task in an async flow\n\n            >>> @flow\n            >>> async def my_flow():\n            >>>     my_task.submit()\n\n            Enforce ordering between tasks that do not exchange data\n            >>> @task\n            >>> def task_1():\n            >>>     pass\n            >>>\n            >>> @task\n            >>> def task_2():\n            >>>     pass\n            >>>\n            >>> @flow\n            >>> def my_flow():\n            >>>     x = task_1.submit()\n            >>>\n            >>>     # task 2 will wait for task_1 to complete\n            >>>     y = task_2.submit(wait_for=[x])\n\n        \"\"\"\n\n        from prefect.engine import enter_task_run_engine\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n        return_type = \"state\" if return_state else \"future\"\n\n        return enter_task_run_engine(\n            self,\n            parameters=parameters,\n            wait_for=wait_for,\n            return_type=return_type,\n            task_runner=None,  # Use the flow's task runner\n            mapped=False,\n        )\n\n    @overload\n    def map(\n        self: \"Task[P, NoReturn]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> List[PrefectFuture[None, Sync]]:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def map(\n        self: \"Task[P, Coroutine[Any, Any, T]]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n 
   ) -> Awaitable[List[PrefectFuture[T, Async]]]:\n        ...\n\n    @overload\n    def map(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> List[PrefectFuture[T, Sync]]:\n        ...\n\n    @overload\n    def map(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        return_state: Literal[True],\n        **kwargs: P.kwargs,\n    ) -> List[State[T]]:\n        ...\n\n    def map(\n        self,\n        *args: Any,\n        return_state: bool = False,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: Any,\n    ) -> Any:\n\"\"\"\n        Submit a mapped run of the task to a worker.\n\n        Must be called within a flow function. If writing an async task, this\n        call must be awaited.\n\n        Must be called with at least one iterable and all iterables must be\n        the same length. Any arguments that are not iterable will be treated as\n        a static value and each task run will recieve the same value.\n\n        Will create as many task runs as the length of the iterable(s) in the\n        backing API and submit the task runs to the flow's task runner. This\n        call blocks if given a future as input while the future is resolved. It\n        also blocks while the tasks are being submitted, once they are\n        submitted, the flow function will continue executing. However, note\n        that the `SequentialTaskRunner` does not implement parallel execution\n        for sync tasks and they are fully resolved on submission.\n\n        Args:\n            *args: Iterable and static arguments to run the tasks with\n            return_state: Return a list of Prefect States that wrap the results\n                of each task run.\n            wait_for: Upstream task futures to wait for before starting the\n                task\n            **kwargs: Keyword iterable arguments to run the task with\n\n        Returns:\n            A list of futures allowing asynchronous access to the state of the\n            tasks\n\n        Examples:\n\n            Define a task\n\n            >>> from prefect import task\n            >>> @task\n            >>> def my_task(x):\n            >>>     return x + 1\n\n            Create mapped tasks\n\n            >>> from prefect import flow\n            >>> @flow\n            >>> def my_flow():\n            >>>     my_task.map([1, 2, 3])\n\n            Wait for all mapped tasks to finish\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     futures = my_task.map([1, 2, 3])\n            >>>     for future in futures:\n            >>>         future.wait()\n            >>>     # Now all of the mapped tasks have finished\n            >>>     my_task(10)\n\n            Use the result from mapped tasks in a flow\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     futures = my_task.map([1, 2, 3])\n            >>>     for future in futures:\n            >>>         print(future.result())\n            >>> my_flow()\n            2\n            3\n            4\n\n            Enforce ordering between tasks that do not exchange data\n            >>> @task\n            >>> def task_1(x):\n            >>>     pass\n            >>>\n            >>> @task\n            >>> def task_2(y):\n            >>>     pass\n            >>>\n            >>> @flow\n            >>> def my_flow():\n            >>>     x = task_1.submit()\n            >>>\n            >>>     # task 2 will wait for task_1 to complete\n            >>>     y = 
task_2.map([1, 2, 3], wait_for=[x])\n\n            Use a non-iterable input as a constant across mapped tasks\n            >>> @task\n            >>> def display(prefix, item):\n            >>>    print(prefix, item)\n            >>>\n            >>> @flow\n            >>> def my_flow():\n            >>>     display.map(\"Check it out: \", [1, 2, 3])\n            >>>\n            >>> my_flow()\n            Check it out: 1\n            Check it out: 2\n            Check it out: 3\n\n            Use `unmapped` to treat an iterable argument as a constant\n            >>> from prefect import unmapped\n            >>>\n            >>> @task\n            >>> def add_n_to_items(items, n):\n            >>>     return [item + n for item in items]\n            >>>\n            >>> @flow\n            >>> def my_flow():\n            >>>     return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])\n            >>>\n            >>> my_flow()\n            [[11, 21], [12, 22], [13, 23]]\n        \"\"\"\n\n        from prefect.engine import enter_task_run_engine\n\n        # Convert the call args/kwargs to a parameter dict; do not apply defaults\n        # since they should not be mapped over\n        parameters = get_call_parameters(self.fn, args, kwargs, apply_defaults=False)\n        return_type = \"state\" if return_state else \"future\"\n\n        return enter_task_run_engine(\n            self,\n            parameters=parameters,\n            wait_for=wait_for,\n            return_type=return_type,\n            task_runner=None,\n            mapped=True,\n        )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.map","title":"map","text":"

Submit a mapped run of the task to a worker.

Must be called within a flow function. If writing an async task, this call must be awaited.

Must be called with at least one iterable and all iterables must be the same length. Any arguments that are not iterable will be treated as a static value and each task run will receive the same value.

Will create as many task runs as the length of the iterable(s) in the backing API and submit the task runs to the flow's task runner. This call blocks if given a future as input while the future is resolved. It also blocks while the tasks are being submitted; once they are submitted, the flow function will continue executing. However, note that the SequentialTaskRunner does not implement parallel execution for sync tasks and they are fully resolved on submission.

Parameters:

*args (Any, default ()): Iterable and static arguments to run the tasks with
return_state (bool, default False): Return a list of Prefect States that wrap the results of each task run.
wait_for (Optional[Iterable[PrefectFuture]], default None): Upstream task futures to wait for before starting the task
**kwargs (Any, default {}): Keyword iterable arguments to run the task with

Returns:

Any: A list of futures allowing asynchronous access to the state of the tasks

Examples:

Define a task

>>> from prefect import task\n>>> @task\n>>> def my_task(x):\n>>>     return x + 1\n

Create mapped tasks

>>> from prefect import flow\n>>> @flow\n>>> def my_flow():\n>>>     my_task.map([1, 2, 3])\n

Wait for all mapped tasks to finish

>>> @flow\n>>> def my_flow():\n>>>     futures = my_task.map([1, 2, 3])\n>>>     for future in futures:\n>>>         future.wait()\n>>>     # Now all of the mapped tasks have finished\n>>>     my_task(10)\n

Use the result from mapped tasks in a flow

>>> @flow\n>>> def my_flow():\n>>>     futures = my_task.map([1, 2, 3])\n>>>     for future in futures:\n>>>         print(future.result())\n>>> my_flow()\n2\n3\n4\n

Enforce ordering between tasks that do not exchange data

>>> @task\n>>> def task_1(x):\n>>>     pass\n>>>\n>>> @task\n>>> def task_2(y):\n>>>     pass\n>>>\n>>> @flow\n>>> def my_flow():\n>>>     x = task_1.submit()\n>>>\n>>>     # task 2 will wait for task_1 to complete\n>>>     y = task_2.map([1, 2, 3], wait_for=[x])\n

Use a non-iterable input as a constant across mapped tasks

>>> @task\n>>> def display(prefix, item):\n>>>    print(prefix, item)\n>>>\n>>> @flow\n>>> def my_flow():\n>>>     display.map(\"Check it out: \", [1, 2, 3])\n>>>\n>>> my_flow()\nCheck it out: 1\nCheck it out: 2\nCheck it out: 3\n

Use unmapped to treat an iterable argument as a constant

>>> from prefect import unmapped\n>>>\n>>> @task\n>>> def add_n_to_items(items, n):\n>>>     return [item + n for item in items]\n>>>\n>>> @flow\n>>> def my_flow():\n>>>     return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])\n>>>\n>>> my_flow()\n[[11, 21], [12, 22], [13, 23]]\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/tasks.py
def map(\n    self,\n    *args: Any,\n    return_state: bool = False,\n    wait_for: Optional[Iterable[PrefectFuture]] = None,\n    **kwargs: Any,\n) -> Any:\n\"\"\"\n    Submit a mapped run of the task to a worker.\n\n    Must be called within a flow function. If writing an async task, this\n    call must be awaited.\n\n    Must be called with at least one iterable and all iterables must be\n    the same length. Any arguments that are not iterable will be treated as\n    a static value and each task run will recieve the same value.\n\n    Will create as many task runs as the length of the iterable(s) in the\n    backing API and submit the task runs to the flow's task runner. This\n    call blocks if given a future as input while the future is resolved. It\n    also blocks while the tasks are being submitted, once they are\n    submitted, the flow function will continue executing. However, note\n    that the `SequentialTaskRunner` does not implement parallel execution\n    for sync tasks and they are fully resolved on submission.\n\n    Args:\n        *args: Iterable and static arguments to run the tasks with\n        return_state: Return a list of Prefect States that wrap the results\n            of each task run.\n        wait_for: Upstream task futures to wait for before starting the\n            task\n        **kwargs: Keyword iterable arguments to run the task with\n\n    Returns:\n        A list of futures allowing asynchronous access to the state of the\n        tasks\n\n    Examples:\n\n        Define a task\n\n        >>> from prefect import task\n        >>> @task\n        >>> def my_task(x):\n        >>>     return x + 1\n\n        Create mapped tasks\n\n        >>> from prefect import flow\n        >>> @flow\n        >>> def my_flow():\n        >>>     my_task.map([1, 2, 3])\n\n        Wait for all mapped tasks to finish\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     futures = my_task.map([1, 2, 3])\n        >>>     for future in futures:\n        >>>         future.wait()\n        >>>     # Now all of the mapped tasks have finished\n        >>>     my_task(10)\n\n        Use the result from mapped tasks in a flow\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     futures = my_task.map([1, 2, 3])\n        >>>     for future in futures:\n        >>>         print(future.result())\n        >>> my_flow()\n        2\n        3\n        4\n\n        Enforce ordering between tasks that do not exchange data\n        >>> @task\n        >>> def task_1(x):\n        >>>     pass\n        >>>\n        >>> @task\n        >>> def task_2(y):\n        >>>     pass\n        >>>\n        >>> @flow\n        >>> def my_flow():\n        >>>     x = task_1.submit()\n        >>>\n        >>>     # task 2 will wait for task_1 to complete\n        >>>     y = task_2.map([1, 2, 3], wait_for=[x])\n\n        Use a non-iterable input as a constant across mapped tasks\n        >>> @task\n        >>> def display(prefix, item):\n        >>>    print(prefix, item)\n        >>>\n        >>> @flow\n        >>> def my_flow():\n        >>>     display.map(\"Check it out: \", [1, 2, 3])\n        >>>\n        >>> my_flow()\n        Check it out: 1\n        Check it out: 2\n        Check it out: 3\n\n        Use `unmapped` to treat an iterable argument as a constant\n        >>> from prefect import unmapped\n        >>>\n        >>> @task\n        >>> def add_n_to_items(items, n):\n        >>>     return [item + n for item in items]\n        >>>\n        >>> @flow\n        >>> def 
my_flow():\n        >>>     return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])\n        >>>\n        >>> my_flow()\n        [[11, 21], [12, 22], [13, 23]]\n    \"\"\"\n\n    from prefect.engine import enter_task_run_engine\n\n    # Convert the call args/kwargs to a parameter dict; do not apply defaults\n    # since they should not be mapped over\n    parameters = get_call_parameters(self.fn, args, kwargs, apply_defaults=False)\n    return_type = \"state\" if return_state else \"future\"\n\n    return enter_task_run_engine(\n        self,\n        parameters=parameters,\n        wait_for=wait_for,\n        return_type=return_type,\n        task_runner=None,\n        mapped=True,\n    )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.submit","title":"submit","text":"

Submit a run of the task to a worker.

Must be called within a flow function. If writing an async task, this call must be awaited.

Will create a new task run in the backing API and submit the task to the flow's task runner. This call only blocks execution while the task is being submitted; once it is submitted, the flow function will continue executing. However, note that the SequentialTaskRunner does not implement parallel execution for sync tasks and they are fully resolved on submission.

Parameters:

*args (Any, default ()): Arguments to run the task with
return_state (bool, default False): Return the result of the task run wrapped in a Prefect State.
wait_for (Optional[Iterable[PrefectFuture]], default None): Upstream task futures to wait for before starting the task
**kwargs (Any, default {}): Keyword arguments to run the task with

Returns:

Union[PrefectFuture, Awaitable[PrefectFuture]]: If return_state is False, a future allowing asynchronous access to the state of the task; if return_state is True, a future wrapped in a Prefect State allowing asynchronous access to the state of the task

Examples:

Define a task

>>> from prefect import task\n>>> @task\n>>> def my_task():\n>>>     return \"hello\"\n

Run a task in a flow

>>> from prefect import flow\n>>> @flow\n>>> def my_flow():\n>>>     my_task.submit()\n

Wait for a task to finish

>>> @flow\n>>> def my_flow():\n>>>     my_task.submit().wait()\n

Use the result from a task in a flow

>>> @flow\n>>> def my_flow():\n>>>     print(my_task.submit().result())\n>>>\n>>> my_flow()\nhello\n

Run an async task in an async flow

>>> @task\n>>> async def my_async_task():\n>>>     pass\n>>>\n>>> @flow\n>>> async def my_flow():\n>>>     await my_async_task.submit()\n

Run a sync task in an async flow

>>> @flow\n>>> async def my_flow():\n>>>     my_task.submit()\n

Enforce ordering between tasks that do not exchange data

>>> @task\n>>> def task_1():\n>>>     pass\n>>>\n>>> @task\n>>> def task_2():\n>>>     pass\n>>>\n>>> @flow\n>>> def my_flow():\n>>>     x = task_1.submit()\n>>>\n>>>     # task 2 will wait for task_1 to complete\n>>>     y = task_2.submit(wait_for=[x])\n
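Get the task's final state instead of a future by passing return_state=True (a hedged sketch that reuses the my_task defined above; not taken verbatim from the source)

>>> @flow
>>> def my_flow():
>>>     state = my_task.submit(return_state=True)
>>>     print(state.result())
>>>
>>> my_flow()
hello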
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/tasks.py
def submit(\n    self,\n    *args: Any,\n    return_state: bool = False,\n    wait_for: Optional[Iterable[PrefectFuture]] = None,\n    **kwargs: Any,\n) -> Union[PrefectFuture, Awaitable[PrefectFuture]]:\n\"\"\"\n    Submit a run of the task to a worker.\n\n    Must be called within a flow function. If writing an async task, this call must\n    be awaited.\n\n    Will create a new task run in the backing API and submit the task to the flow's\n    task runner. This call only blocks execution while the task is being submitted,\n    once it is submitted, the flow function will continue executing. However, note\n    that the `SequentialTaskRunner` does not implement parallel execution for sync tasks\n    and they are fully resolved on submission.\n\n    Args:\n        *args: Arguments to run the task with\n        return_state: Return the result of the flow run wrapped in a\n            Prefect State.\n        wait_for: Upstream task futures to wait for before starting the task\n        **kwargs: Keyword arguments to run the task with\n\n    Returns:\n        If `return_state` is False a future allowing asynchronous access to\n            the state of the task\n        If `return_state` is True a future wrapped in a Prefect State allowing asynchronous access to\n            the state of the task\n\n    Examples:\n\n        Define a task\n\n        >>> from prefect import task\n        >>> @task\n        >>> def my_task():\n        >>>     return \"hello\"\n\n        Run a task in a flow\n\n        >>> from prefect import flow\n        >>> @flow\n        >>> def my_flow():\n        >>>     my_task.submit()\n\n        Wait for a task to finish\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     my_task.submit().wait()\n\n        Use the result from a task in a flow\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     print(my_task.submit().result())\n        >>>\n        >>> my_flow()\n        hello\n\n        Run an async task in an async flow\n\n        >>> @task\n        >>> async def my_async_task():\n        >>>     pass\n        >>>\n        >>> @flow\n        >>> async def my_flow():\n        >>>     await my_async_task.submit()\n\n        Run a sync task in an async flow\n\n        >>> @flow\n        >>> async def my_flow():\n        >>>     my_task.submit()\n\n        Enforce ordering between tasks that do not exchange data\n        >>> @task\n        >>> def task_1():\n        >>>     pass\n        >>>\n        >>> @task\n        >>> def task_2():\n        >>>     pass\n        >>>\n        >>> @flow\n        >>> def my_flow():\n        >>>     x = task_1.submit()\n        >>>\n        >>>     # task 2 will wait for task_1 to complete\n        >>>     y = task_2.submit(wait_for=[x])\n\n    \"\"\"\n\n    from prefect.engine import enter_task_run_engine\n\n    # Convert the call args/kwargs to a parameter dict\n    parameters = get_call_parameters(self.fn, args, kwargs)\n    return_type = \"state\" if return_state else \"future\"\n\n    return enter_task_run_engine(\n        self,\n        parameters=parameters,\n        wait_for=wait_for,\n        return_type=return_type,\n        task_runner=None,  # Use the flow's task runner\n        mapped=False,\n    )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.with_options","title":"with_options","text":"

Create a new task from the current object, updating provided options.

Parameters:

name (str, default None): A new name for the task.
description (str, default None): A new description for the task.
tags (Iterable[str], default None): A new set of tags for the task. If given, existing tags are ignored, not merged.
cache_key_fn (Callable[[TaskRunContext, Dict[str, Any]], Optional[str]], default None): A new cache key function for the task.
cache_expiration (datetime.timedelta, default None): A new cache expiration time for the task.
task_run_name (Optional[Union[Callable[[], str], str]], default None): An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string.
retries (Optional[int], default NotSet): A new number of times to retry on task run failure.
retry_delay_seconds (Union[float, int, List[float], Callable[[int], List[float]]], default NotSet): Optionally configures how long to wait before retrying the task after failure. This is only applicable if retries is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50.
retry_jitter_factor (Optional[float], default NotSet): An optional factor that defines how much a retry can be jittered in order to avoid a \"thundering herd\".
persist_result (Optional[bool], default NotSet): A new option for enabling or disabling result persistence.
result_storage (Optional[ResultStorage], default NotSet): A new storage type to use for results.
result_serializer (Optional[ResultSerializer], default NotSet): A new serializer to use for results.
result_storage_key (Optional[str], default NotSet): A new key for the persisted result to be stored at.
timeout_seconds (Union[int, float], default None): A new maximum time for the task to complete in seconds.
log_prints (Optional[bool], default NotSet): A new option for enabling or disabling redirection of print statements.
refresh_cache (Optional[bool], default NotSet): A new option for enabling or disabling cache refresh.
on_completion (Optional[List[Callable[[Task, TaskRun, State], None]]], default None): A new list of callables to run when the task enters a completed state.
on_failure (Optional[List[Callable[[Task, TaskRun, State], None]]], default None): A new list of callables to run when the task enters a failed state.

Returns:

A new Task instance.

Examples:

Create a new task from an existing task and update the name

>>> @task(name=\"My task\")\n>>> def my_task():\n>>>     return 1\n>>>\n>>> new_task = my_task.with_options(name=\"My new task\")\n

Create a new task from an existing task and update the retry settings

>>> from random import randint\n>>>\n>>> @task(retries=1, retry_delay_seconds=5)\n>>> def my_task():\n>>>     x = randint(0, 5)\n>>>     if x >= 3:  # Make a task that fails sometimes\n>>>         raise ValueError(\"Retry me please!\")\n>>>     return x\n>>>\n>>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)\n

Use a task with updated options within a flow

>>> @task(name=\"My task\")\n>>> def my_task():\n>>>     return 1\n>>>\n>>> @flow\n>>> my_flow():\n>>>     new_task = my_task.with_options(name=\"My new task\")\n>>>     new_task()\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/tasks.py
def with_options(\n    self,\n    *,\n    name: str = None,\n    description: str = None,\n    tags: Iterable[str] = None,\n    cache_key_fn: Callable[\n        [\"TaskRunContext\", Dict[str, Any]], Optional[str]\n    ] = None,\n    task_run_name: Optional[Union[Callable[[], str], str]] = None,\n    cache_expiration: datetime.timedelta = None,\n    retries: Optional[int] = NotSet,\n    retry_delay_seconds: Union[\n        float,\n        int,\n        List[float],\n        Callable[[int], List[float]],\n    ] = NotSet,\n    retry_jitter_factor: Optional[float] = NotSet,\n    persist_result: Optional[bool] = NotSet,\n    result_storage: Optional[ResultStorage] = NotSet,\n    result_serializer: Optional[ResultSerializer] = NotSet,\n    result_storage_key: Optional[str] = NotSet,\n    cache_result_in_memory: Optional[bool] = None,\n    timeout_seconds: Union[int, float] = None,\n    log_prints: Optional[bool] = NotSet,\n    refresh_cache: Optional[bool] = NotSet,\n    on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n    on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n):\n\"\"\"\n    Create a new task from the current object, updating provided options.\n\n    Args:\n        name: A new name for the task.\n        description: A new description for the task.\n        tags: A new set of tags for the task. If given, existing tags are ignored,\n            not merged.\n        cache_key_fn: A new cache key function for the task.\n        cache_expiration: A new cache expiration time for the task.\n        task_run_name: An optional name to distinguish runs of this task; this name can be provided\n            as a string template with the task's keyword arguments as variables,\n            or a function that returns a string.\n        retries: A new number of times to retry on task run failure.\n        retry_delay_seconds: Optionally configures how long to wait before retrying\n            the task after failure. This is only applicable if `retries` is nonzero.\n            This setting can either be a number of seconds, a list of retry delays,\n            or a callable that, given the total number of retries, generates a list\n            of retry delays. If a number of seconds, that delay will be applied to\n            all retries. If a list, each retry will wait for the corresponding delay\n            before retrying. 
When passing a callable or a list, the number of\n            configured retry delays cannot exceed 50.\n        retry_jitter_factor: An optional factor that defines the factor to which a\n            retry can be jittered in order to avoid a \"thundering herd\".\n        persist_result: A new option for enabling or disabling result persistence.\n        result_storage: A new storage type to use for results.\n        result_serializer: A new serializer to use for results.\n        result_storage_key: A new key for the persisted result to be stored at.\n        timeout_seconds: A new maximum time for the task to complete in seconds.\n        log_prints: A new option for enabling or disabling redirection of `print` statements.\n        refresh_cache: A new option for enabling or disabling cache refresh.\n        on_completion: A new list of callables to run when the task enters a completed state.\n        on_failure: A new list of callables to run when the task enters a failed state.\n\n    Returns:\n        A new `Task` instance.\n\n    Examples:\n\n        Create a new task from an existing task and update the name\n\n        >>> @task(name=\"My task\")\n        >>> def my_task():\n        >>>     return 1\n        >>>\n        >>> new_task = my_task.with_options(name=\"My new task\")\n\n        Create a new task from an existing task and update the retry settings\n\n        >>> from random import randint\n        >>>\n        >>> @task(retries=1, retry_delay_seconds=5)\n        >>> def my_task():\n        >>>     x = randint(0, 5)\n        >>>     if x >= 3:  # Make a task that fails sometimes\n        >>>         raise ValueError(\"Retry me please!\")\n        >>>     return x\n        >>>\n        >>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)\n\n        Use a task with updated options within a flow\n\n        >>> @task(name=\"My task\")\n        >>> def my_task():\n        >>>     return 1\n        >>>\n        >>> @flow\n        >>> my_flow():\n        >>>     new_task = my_task.with_options(name=\"My new task\")\n        >>>     new_task()\n    \"\"\"\n    return Task(\n        fn=self.fn,\n        name=name or self.name,\n        description=description or self.description,\n        tags=tags or copy(self.tags),\n        cache_key_fn=cache_key_fn or self.cache_key_fn,\n        cache_expiration=cache_expiration or self.cache_expiration,\n        task_run_name=task_run_name,\n        retries=retries if retries is not NotSet else self.retries,\n        retry_delay_seconds=(\n            retry_delay_seconds\n            if retry_delay_seconds is not NotSet\n            else self.retry_delay_seconds\n        ),\n        retry_jitter_factor=(\n            retry_jitter_factor\n            if retry_jitter_factor is not NotSet\n            else self.retry_jitter_factor\n        ),\n        persist_result=(\n            persist_result if persist_result is not NotSet else self.persist_result\n        ),\n        result_storage=(\n            result_storage if result_storage is not NotSet else self.result_storage\n        ),\n        result_storage_key=(\n            result_storage_key\n            if result_storage_key is not NotSet\n            else self.result_storage_key\n        ),\n        result_serializer=(\n            result_serializer\n            if result_serializer is not NotSet\n            else self.result_serializer\n        ),\n        cache_result_in_memory=(\n            cache_result_in_memory\n            if cache_result_in_memory is not None\n        
    else self.cache_result_in_memory\n        ),\n        timeout_seconds=(\n            timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n        ),\n        log_prints=(log_prints if log_prints is not NotSet else self.log_prints),\n        refresh_cache=(\n            refresh_cache if refresh_cache is not NotSet else self.refresh_cache\n        ),\n        on_completion=on_completion or self.on_completion,\n        on_failure=on_failure or self.on_failure,\n    )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.exponential_backoff","title":"exponential_backoff","text":"

A task retry backoff utility that configures exponential backoff for task retries. The exponential backoff design matches the urllib3 implementation.

Parameters:

backoff_factor (float, required): the base delay for the first retry; subsequent retries will increase the delay time by powers of 2.

Returns:

Callable[[int], List[float]]: a callable that can be passed to the task constructor
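As a hedged sketch of typical usage (the retry count and backoff_factor below are illustrative, not from the source), the returned callable is passed as a task's retry_delay_seconds:

>>> from prefect import task
>>> from prefect.tasks import exponential_backoff
>>>
>>> # With backoff_factor=10 and retries=3, the computed delays are 10, 20, and 40 seconds
>>> @task(retries=3, retry_delay_seconds=exponential_backoff(backoff_factor=10))
>>> def flaky_task():
>>>     ...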

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/tasks.py
def exponential_backoff(backoff_factor: float) -> Callable[[int], List[float]]:\n\"\"\"\n    A task retry backoff utility that configures exponential backoff for task retries.\n    The exponential backoff design matches the urllib3 implementation.\n\n    Arguments:\n        backoff_factor: the base delay for the first retry, subsequent retries will\n            increase the delay time by powers of 2.\n\n    Returns:\n        a callable that can be passed to the task constructor\n    \"\"\"\n\n    def retry_backoff_callable(retries: int) -> List[float]:\n        # no more than 50 retry delays can be configured on a task\n        retries = min(retries, 50)\n\n        return [backoff_factor * max(0, 2**r) for r in range(retries)]\n\n    return retry_backoff_callable\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.task","title":"task","text":"

Decorator to designate a function as a task in a Prefect workflow.

This decorator may be used for asynchronous or synchronous functions.

Parameters:

name (str, default None): An optional name for the task; if not provided, the name will be inferred from the given function.
description (str, default None): An optional string description for the task.
tags (Iterable[str], default None): An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a prefect.tags context at task runtime.
version (str, default None): An optional string specifying the version of this task definition.
cache_key_fn (Callable[[TaskRunContext, Dict[str, Any]], Optional[str]], default None): An optional callable that, given the task run context and call parameters, generates a string key; if the key matches a previous completed state, that state result will be restored instead of running the task again.
cache_expiration (datetime.timedelta, default None): An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire.
task_run_name (Optional[Union[Callable[[], str], str]], default None): An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string.
retries (int, default None): An optional number of times to retry on task run failure.
retry_delay_seconds (Union[float, int, List[float], Callable[[int], List[float]]], default None): Optionally configures how long to wait before retrying the task after failure. This is only applicable if retries is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50.
retry_jitter_factor (Optional[float], default None): An optional factor that defines how much a retry can be jittered in order to avoid a \"thundering herd\".
persist_result (Optional[bool], default None): An optional toggle indicating whether the result of this task should be persisted to result storage. Defaults to None, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.
result_storage (Optional[ResultStorage], default None): An optional block to use to persist the result of this task. Defaults to the value set in the flow the task is called in.
result_storage_key (Optional[str], default None): An optional key to store the result in storage at when persisted. Defaults to a unique identifier.
result_serializer (Optional[ResultSerializer], default None): An optional serializer to use to serialize the result of this task for persistence. Defaults to the value set in the flow the task is called in.
timeout_seconds (Union[int, float], default None): An optional number of seconds indicating a maximum runtime for the task. If the task exceeds this runtime, it will be marked as failed.
log_prints (Optional[bool], default None): If set, print statements in the task will be redirected to the Prefect logger for the task run. Defaults to None, which indicates that the value from the flow should be used.
refresh_cache (Optional[bool], default None): If set, cached results for the cache key are not used. Defaults to None, which indicates that a cached result from a previous execution with matching cache key is used.
on_failure (Optional[List[Callable[[Task, TaskRun, State], None]]], default None): An optional list of callables to run when the task enters a failed state.
on_completion (Optional[List[Callable[[Task, TaskRun, State], None]]], default None): An optional list of callables to run when the task enters a completed state.

Returns:

A callable Task object which, when called, will submit the task for execution.

Examples:

Define a simple task

>>> @task\n>>> def add(x, y):\n>>>     return x + y\n

Define an async task

>>> @task\n>>> async def add(x, y):\n>>>     return x + y\n

Define a task with tags and a description

>>> @task(tags={\"a\", \"b\"}, description=\"This task is empty but its my first!\")\n>>> def my_task():\n>>>     pass\n

Define a task with a custom name

>>> @task(name=\"The Ultimate Task\")\n>>> def my_task():\n>>>     pass\n
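Define a task whose run name is templated from its keyword arguments (a hedged sketch; the template string and argument name below are illustrative, not from the source)

>>> @task(task_run_name="process-{item}")
>>> def my_task(item):
>>>     pass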

Define a task that retries 3 times with a 5 second delay between attempts

>>> from random import randint\n>>>\n>>> @task(retries=3, retry_delay_seconds=5)\n>>> def my_task():\n>>>     x = randint(0, 5)\n>>>     if x >= 3:  # Make a task that fails sometimes\n>>>         raise ValueError(\"Retry me please!\")\n>>>     return x\n
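Define a task that waits a different amount of time before each retry by passing a list to retry_delay_seconds (a hedged sketch; the delay values are illustrative, not from the source)

>>> # Wait 1 second before the first retry, 10 before the second, 100 before the third
>>> @task(retries=3, retry_delay_seconds=[1, 10, 100])
>>> def my_task():
>>>     pass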

Define a task that is cached for a day based on its inputs

>>> from prefect.tasks import task_input_hash\n>>> from datetime import timedelta\n>>>\n>>> @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))\n>>> def my_task():\n>>>     return \"hello\"\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/tasks.py
def task(\n    __fn=None,\n    *,\n    name: str = None,\n    description: str = None,\n    tags: Iterable[str] = None,\n    version: str = None,\n    cache_key_fn: Callable[[\"TaskRunContext\", Dict[str, Any]], Optional[str]] = None,\n    cache_expiration: datetime.timedelta = None,\n    task_run_name: Optional[Union[Callable[[], str], str]] = None,\n    retries: int = None,\n    retry_delay_seconds: Union[\n        float,\n        int,\n        List[float],\n        Callable[[int], List[float]],\n    ] = None,\n    retry_jitter_factor: Optional[float] = None,\n    persist_result: Optional[bool] = None,\n    result_storage: Optional[ResultStorage] = None,\n    result_storage_key: Optional[str] = None,\n    result_serializer: Optional[ResultSerializer] = None,\n    cache_result_in_memory: bool = True,\n    timeout_seconds: Union[int, float] = None,\n    log_prints: Optional[bool] = None,\n    refresh_cache: Optional[bool] = None,\n    on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n    on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n):\n\"\"\"\n    Decorator to designate a function as a task in a Prefect workflow.\n\n    This decorator may be used for asynchronous or synchronous functions.\n\n    Args:\n        name: An optional name for the task; if not provided, the name will be inferred\n            from the given function.\n        description: An optional string description for the task.\n        tags: An optional set of tags to be associated with runs of this task. These\n            tags are combined with any tags defined by a `prefect.tags` context at\n            task runtime.\n        version: An optional string specifying the version of this task definition\n        cache_key_fn: An optional callable that, given the task run context and call\n            parameters, generates a string key; if the key matches a previous completed\n            state, that state result will be restored instead of running the task again.\n        cache_expiration: An optional amount of time indicating how long cached states\n            for this task should be restorable; if not provided, cached states will\n            never expire.\n        task_run_name: An optional name to distinguish runs of this task; this name can be provided\n            as a string template with the task's keyword arguments as variables,\n            or a function that returns a string.\n        retries: An optional number of times to retry on task run failure\n        retry_delay_seconds: Optionally configures how long to wait before retrying the\n            task after failure. This is only applicable if `retries` is nonzero. This\n            setting can either be a number of seconds, a list of retry delays, or a\n            callable that, given the total number of retries, generates a list of retry\n            delays. If a number of seconds, that delay will be applied to all retries.\n            If a list, each retry will wait for the corresponding delay before retrying.\n            When passing a callable or a list, the number of configured retry delays\n            cannot exceed 50.\n        retry_jitter_factor: An optional factor that defines the factor to which a retry\n            can be jittered in order to avoid a \"thundering herd\".\n        persist_result: An optional toggle indicating whether the result of this task\n            should be persisted to result storage. 
Defaults to `None`, which indicates\n            that Prefect should choose whether the result should be persisted depending on\n            the features being used.\n        result_storage: An optional block to use to persist the result of this task.\n            Defaults to the value set in the flow the task is called in.\n        result_storage_key: An optional key to store the result in storage at when persisted.\n            Defaults to a unique identifier.\n        result_serializer: An optional serializer to use to serialize the result of this\n            task for persistence. Defaults to the value set in the flow the task is\n            called in.\n        timeout_seconds: An optional number of seconds indicating a maximum runtime for\n            the task. If the task exceeds this runtime, it will be marked as failed.\n        log_prints: If set, `print` statements in the task will be redirected to the\n            Prefect logger for the task run. Defaults to `None`, which indicates\n            that the value from the flow should be used.\n        refresh_cache: If set, cached results for the cache key are not used.\n            Defaults to `None`, which indicates that a cached result from a previous\n            execution with matching cache key is used.\n        on_failure: An optional list of callables to run when the task enters a failed state.\n        on_completion: An optional list of callables to run when the task enters a completed state.\n\n    Returns:\n        A callable `Task` object which, when called, will submit the task for execution.\n\n    Examples:\n        Define a simple task\n\n        >>> @task\n        >>> def add(x, y):\n        >>>     return x + y\n\n        Define an async task\n\n        >>> @task\n        >>> async def add(x, y):\n        >>>     return x + y\n\n        Define a task with tags and a description\n\n        >>> @task(tags={\"a\", \"b\"}, description=\"This task is empty but its my first!\")\n        >>> def my_task():\n        >>>     pass\n\n        Define a task with a custom name\n\n        >>> @task(name=\"The Ultimate Task\")\n        >>> def my_task():\n        >>>     pass\n\n        Define a task that retries 3 times with a 5 second delay between attempts\n\n        >>> from random import randint\n        >>>\n        >>> @task(retries=3, retry_delay_seconds=5)\n        >>> def my_task():\n        >>>     x = randint(0, 5)\n        >>>     if x >= 3:  # Make a task that fails sometimes\n        >>>         raise ValueError(\"Retry me please!\")\n        >>>     return x\n\n        Define a task that is cached for a day based on its inputs\n\n        >>> from prefect.tasks import task_input_hash\n        >>> from datetime import timedelta\n        >>>\n        >>> @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))\n        >>> def my_task():\n        >>>     return \"hello\"\n    \"\"\"\n    if __fn:\n        return cast(\n            Task[P, R],\n            Task(\n                fn=__fn,\n                name=name,\n                description=description,\n                tags=tags,\n                version=version,\n                cache_key_fn=cache_key_fn,\n                cache_expiration=cache_expiration,\n                task_run_name=task_run_name,\n                retries=retries,\n                retry_delay_seconds=retry_delay_seconds,\n                retry_jitter_factor=retry_jitter_factor,\n                persist_result=persist_result,\n                result_storage=result_storage,\n    
            result_storage_key=result_storage_key,\n                result_serializer=result_serializer,\n                cache_result_in_memory=cache_result_in_memory,\n                timeout_seconds=timeout_seconds,\n                log_prints=log_prints,\n                refresh_cache=refresh_cache,\n                on_completion=on_completion,\n                on_failure=on_failure,\n            ),\n        )\n    else:\n        return cast(\n            Callable[[Callable[P, R]], Task[P, R]],\n            partial(\n                task,\n                name=name,\n                description=description,\n                tags=tags,\n                version=version,\n                cache_key_fn=cache_key_fn,\n                cache_expiration=cache_expiration,\n                task_run_name=task_run_name,\n                retries=retries,\n                retry_delay_seconds=retry_delay_seconds,\n                retry_jitter_factor=retry_jitter_factor,\n                persist_result=persist_result,\n                result_storage=result_storage,\n                result_storage_key=result_storage_key,\n                result_serializer=result_serializer,\n                cache_result_in_memory=cache_result_in_memory,\n                timeout_seconds=timeout_seconds,\n                log_prints=log_prints,\n                refresh_cache=refresh_cache,\n                on_completion=on_completion,\n                on_failure=on_failure,\n            ),\n        )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.task_input_hash","title":"task_input_hash","text":"

A task cache key implementation which hashes all inputs to the task using a JSON or cloudpickle serializer. If any arguments are not JSON serializable, the pickle serializer is used as a fallback. If cloudpickle fails, this will return a null key indicating that a cache key could not be generated for the given inputs.

Parameters:

context (TaskRunContext, required): the active TaskRunContext
arguments (Dict[str, Any], required): a dictionary of arguments to be passed to the underlying task

Returns:

Optional[str]: a string hash if hashing succeeded, else None
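As a hedged sketch of typical usage (the expiration window and task below are illustrative, not from the source), this function is passed as a task's cache_key_fn so runs with identical inputs reuse a cached state:

>>> from datetime import timedelta
>>> from prefect import task
>>> from prefect.tasks import task_input_hash
>>>
>>> # Runs with the same x within an hour restore the cached result instead of re-running
>>> @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))
>>> def my_cached_task(x):
>>>     return x + 1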

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/tasks.py
def task_input_hash(\n    context: \"TaskRunContext\", arguments: Dict[str, Any]\n) -> Optional[str]:\n\"\"\"\n    A task cache key implementation which hashes all inputs to the task using a JSON or\n    cloudpickle serializer. If any arguments are not JSON serializable, the pickle\n    serializer is used as a fallback. If cloudpickle fails, this will return a null key\n    indicating that a cache key could not be generated for the given inputs.\n\n    Arguments:\n        context: the active `TaskRunContext`\n        arguments: a dictionary of arguments to be passed to the underlying task\n\n    Returns:\n        a string hash if hashing succeeded, else `None`\n    \"\"\"\n    return hash_objects(\n        # We use the task key to get the qualified name for the task and include the\n        # task functions `co_code` bytes to avoid caching when the underlying function\n        # changes\n        context.task.task_key,\n        context.task.fn.__code__.co_code.hex(),\n        arguments,\n    )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/blocks/core/","title":"prefect.blocks.core","text":"","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core","title":"prefect.blocks.core","text":"","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block","title":"Block","text":"

Bases: BaseModel, ABC

A base class for implementing a block that wraps an external service.

This class can be defined with an arbitrary set of fields and methods, and couples business logic with data contained in a block document. _block_document_name, _block_document_id, _block_schema_id, and _block_type_id are reserved by Prefect as Block metadata fields, but otherwise a Block can implement arbitrary logic. Blocks can be instantiated without populating these metadata fields, but can only be used interactively, not with the Prefect API.

Instead of the init method, a block implementation allows the definition of a block_initialization method that is called after initialization.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@register_base_type\n@instrument_method_calls_on_class_instances\nclass Block(BaseModel, ABC):\n\"\"\"\n    A base class for implementing a block that wraps an external service.\n\n    This class can be defined with an arbitrary set of fields and methods, and\n    couples business logic with data contained in an block document.\n    `_block_document_name`, `_block_document_id`, `_block_schema_id`, and\n    `_block_type_id` are reserved by Prefect as Block metadata fields, but\n    otherwise a Block can implement arbitrary logic. Blocks can be instantiated\n    without populating these metadata fields, but can only be used interactively,\n    not with the Prefect API.\n\n    Instead of the __init__ method, a block implementation allows the\n    definition of a `block_initialization` method that is called after\n    initialization.\n    \"\"\"\n\n    class Config:\n        extra = \"allow\"\n\n        json_encoders = {SecretDict: lambda v: v.dict()}\n\n        @staticmethod\n        def schema_extra(schema: Dict[str, Any], model: Type[\"Block\"]):\n\"\"\"\n            Customizes Pydantic's schema generation feature to add blocks related information.\n            \"\"\"\n            schema[\"block_type_slug\"] = model.get_block_type_slug()\n            # Ensures args and code examples aren't included in the schema\n            description = model.get_description()\n            if description:\n                schema[\"description\"] = description\n            else:\n                # Prevent the description of the base class from being included in the schema\n                schema.pop(\"description\", None)\n\n            # create a list of secret field names\n            # secret fields include both top-level keys and dot-delimited nested secret keys\n            # A wildcard (*) means that all fields under a given key are secret.\n            # for example: [\"x\", \"y\", \"z.*\", \"child.a\"]\n            # means the top-level keys \"x\" and \"y\", all keys under \"z\", and the key \"a\" of a block\n            # nested under the \"child\" key are all secret. 
There is no limit to nesting.\n            secrets = schema[\"secret_fields\"] = []\n            for field in model.__fields__.values():\n                _collect_secret_fields(field.name, field.type_, secrets)\n\n            # create block schema references\n            refs = schema[\"block_schema_references\"] = {}\n            for field in model.__fields__.values():\n                if Block.is_block_class(field.type_):\n                    refs[field.name] = field.type_._to_block_schema_reference_dict()\n                if get_origin(field.type_) is Union:\n                    for type_ in get_args(field.type_):\n                        if Block.is_block_class(type_):\n                            if isinstance(refs.get(field.name), list):\n                                refs[field.name].append(\n                                    type_._to_block_schema_reference_dict()\n                                )\n                            elif isinstance(refs.get(field.name), dict):\n                                refs[field.name] = [\n                                    refs[field.name],\n                                    type_._to_block_schema_reference_dict(),\n                                ]\n                            else:\n                                refs[field.name] = (\n                                    type_._to_block_schema_reference_dict()\n                                )\n\n    def __init__(self, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.block_initialization()\n\n    def __str__(self) -> str:\n        return self.__repr__()\n\n    def __repr_args__(self):\n        repr_args = super().__repr_args__()\n        data_keys = self.schema()[\"properties\"].keys()\n        return [\n            (key, value) for key, value in repr_args if key is None or key in data_keys\n        ]\n\n    def block_initialization(self) -> None:\n        pass\n\n    # -- private class variables\n    # set by the class itself\n\n    # Attribute to customize the name of the block type created\n    # when the block is registered with the API. 
If not set, block\n    # type name will default to the class name.\n    _block_type_name: Optional[str] = None\n    _block_type_slug: Optional[str] = None\n\n    # Attributes used to set properties on a block type when registered\n    # with the API.\n    _logo_url: Optional[HttpUrl] = None\n    _documentation_url: Optional[HttpUrl] = None\n    _description: Optional[str] = None\n    _code_example: Optional[str] = None\n\n    # -- private instance variables\n    # these are set when blocks are loaded from the API\n    _block_type_id: Optional[UUID] = None\n    _block_schema_id: Optional[UUID] = None\n    _block_schema_capabilities: Optional[List[str]] = None\n    _block_schema_version: Optional[str] = None\n    _block_document_id: Optional[UUID] = None\n    _block_document_name: Optional[str] = None\n    _is_anonymous: Optional[bool] = None\n\n    # Exclude `save` as it uses the `sync_compatible` decorator and needs to be\n    # decorated directly.\n    _events_excluded_methods = [\"block_initialization\", \"save\", \"dict\"]\n\n    @classmethod\n    def __dispatch_key__(cls):\n        if cls.__name__ == \"Block\":\n            return None  # The base class is abstract\n        return block_schema_to_key(cls._to_block_schema())\n\n    @classmethod\n    def get_block_type_name(cls):\n        return cls._block_type_name or cls.__name__\n\n    @classmethod\n    def get_block_type_slug(cls):\n        return slugify(cls._block_type_slug or cls.get_block_type_name())\n\n    @classmethod\n    def get_block_capabilities(cls) -> FrozenSet[str]:\n\"\"\"\n        Returns the block capabilities for this Block. Recursively collects all block\n        capabilities of all parent classes into a single frozenset.\n        \"\"\"\n        return frozenset(\n            {\n                c\n                for base in (cls,) + cls.__mro__\n                for c in getattr(base, \"_block_schema_capabilities\", []) or []\n            }\n        )\n\n    @classmethod\n    def _get_current_package_version(cls):\n        current_module = inspect.getmodule(cls)\n        if current_module:\n            top_level_module = sys.modules[\n                current_module.__name__.split(\".\")[0] or \"__main__\"\n            ]\n            try:\n                version = Version(top_level_module.__version__)\n                # Strips off any local version information\n                return version.base_version\n            except (AttributeError, InvalidVersion):\n                # Module does not have a __version__ attribute or is not a parsable format\n                pass\n        return DEFAULT_BLOCK_SCHEMA_VERSION\n\n    @classmethod\n    def get_block_schema_version(cls) -> str:\n        return cls._block_schema_version or cls._get_current_package_version()\n\n    @classmethod\n    def _to_block_schema_reference_dict(cls):\n        return dict(\n            block_type_slug=cls.get_block_type_slug(),\n            block_schema_checksum=cls._calculate_schema_checksum(),\n        )\n\n    @classmethod\n    def _calculate_schema_checksum(\n        cls, block_schema_fields: Optional[Dict[str, Any]] = None\n    ):\n\"\"\"\n        Generates a unique hash for the underlying schema of block.\n\n        Args:\n            block_schema_fields: Dictionary detailing block schema fields to generate a\n                checksum for. 
The fields of the current class is used if this parameter\n                is not provided.\n\n        Returns:\n            str: The calculated checksum prefixed with the hashing algorithm used.\n        \"\"\"\n        block_schema_fields = (\n            cls.schema() if block_schema_fields is None else block_schema_fields\n        )\n        fields_for_checksum = remove_nested_keys([\"secret_fields\"], block_schema_fields)\n        if fields_for_checksum.get(\"definitions\"):\n            non_block_definitions = _get_non_block_reference_definitions(\n                fields_for_checksum, fields_for_checksum[\"definitions\"]\n            )\n            if non_block_definitions:\n                fields_for_checksum[\"definitions\"] = non_block_definitions\n            else:\n                # Pop off definitions entirely instead of empty dict for consistency\n                # with the OpenAPI specification\n                fields_for_checksum.pop(\"definitions\")\n        checksum = hash_objects(fields_for_checksum, hash_algo=hashlib.sha256)\n        if checksum is None:\n            raise ValueError(\"Unable to compute checksum for block schema\")\n        else:\n            return f\"sha256:{checksum}\"\n\n    def _to_block_document(\n        self,\n        name: Optional[str] = None,\n        block_schema_id: Optional[UUID] = None,\n        block_type_id: Optional[UUID] = None,\n        is_anonymous: Optional[bool] = None,\n    ) -> BlockDocument:\n\"\"\"\n        Creates the corresponding block document based on the data stored in a block.\n        The corresponding block document name, block type ID, and block schema ID must\n        either be passed into the method or configured on the block.\n\n        Args:\n            name: The name of the created block document. Not required if anonymous.\n            block_schema_id: UUID of the corresponding block schema.\n            block_type_id: UUID of the corresponding block type.\n            is_anonymous: if True, an anonymous block is created. Anonymous\n                blocks are not displayed in the UI and used primarily for system\n                operations and features that need to automatically generate blocks.\n\n        Returns:\n            BlockDocument: Corresponding block document\n                populated with the block's configured data.\n        \"\"\"\n        if is_anonymous is None:\n            is_anonymous = self._is_anonymous or False\n\n        # name must be present if not anonymous\n        if not is_anonymous and not name and not self._block_document_name:\n            raise ValueError(\"No name provided, either as an argument or on the block.\")\n\n        if not block_schema_id and not self._block_schema_id:\n            raise ValueError(\n                \"No block schema ID provided, either as an argument or on the block.\"\n            )\n        if not block_type_id and not self._block_type_id:\n            raise ValueError(\n                \"No block type ID provided, either as an argument or on the block.\"\n            )\n\n        # The keys passed to `include` must NOT be aliases, else some items will be missed\n        # i.e. 
must do `self.schema_` vs `self.schema` to get a `schema_ = Field(alias=\"schema\")`\n        # reported from https://github.com/PrefectHQ/prefect-dbt/issues/54\n        data_keys = self.schema(by_alias=False)[\"properties\"].keys()\n\n        # `block_document_data`` must return the aliased version for it to show in the UI\n        block_document_data = self.dict(by_alias=True, include=data_keys)\n\n        # Iterate through and find blocks that already have saved block documents to\n        # create references to those saved block documents.\n        for key in data_keys:\n            field_value = getattr(self, key)\n            if (\n                isinstance(field_value, Block)\n                and field_value._block_document_id is not None\n            ):\n                block_document_data[key] = {\n                    \"$ref\": {\"block_document_id\": field_value._block_document_id}\n                }\n\n        return BlockDocument(\n            id=self._block_document_id or uuid4(),\n            name=(name or self._block_document_name) if not is_anonymous else None,\n            block_schema_id=block_schema_id or self._block_schema_id,\n            block_type_id=block_type_id or self._block_type_id,\n            data=block_document_data,\n            block_schema=self._to_block_schema(\n                block_type_id=block_type_id or self._block_type_id,\n            ),\n            block_type=self._to_block_type(),\n            is_anonymous=is_anonymous,\n        )\n\n    @classmethod\n    def _to_block_schema(cls, block_type_id: Optional[UUID] = None) -> BlockSchema:\n\"\"\"\n        Creates the corresponding block schema of the block.\n        The corresponding block_type_id must either be passed into\n        the method or configured on the block.\n\n        Args:\n            block_type_id: UUID of the corresponding block type.\n\n        Returns:\n            BlockSchema: The corresponding block schema.\n        \"\"\"\n        fields = cls.schema()\n        return BlockSchema(\n            id=cls._block_schema_id if cls._block_schema_id is not None else uuid4(),\n            checksum=cls._calculate_schema_checksum(),\n            fields=fields,\n            block_type_id=block_type_id or cls._block_type_id,\n            block_type=cls._to_block_type(),\n            capabilities=list(cls.get_block_capabilities()),\n            version=cls.get_block_schema_version(),\n        )\n\n    @classmethod\n    def _parse_docstring(cls) -> List[DocstringSection]:\n\"\"\"\n        Parses the docstring into list of DocstringSection objects.\n        Helper method used primarily to suppress irrelevant logs, e.g.\n        `<module>:11: No type or annotation for parameter 'write_json'`\n        because griffe is unable to parse the types from pydantic.BaseModel.\n        \"\"\"\n        with disable_logger(\"griffe.docstrings.google\"):\n            with disable_logger(\"griffe.agents.nodes\"):\n                docstring = Docstring(cls.__doc__)\n                parsed = parse(docstring, Parser.google)\n        return parsed\n\n    @classmethod\n    def get_description(cls) -> Optional[str]:\n\"\"\"\n        Returns the description for the current block. 
Attempts to parse\n        description from class docstring if an override is not defined.\n        \"\"\"\n        description = cls._description\n        # If no description override has been provided, find the first text section\n        # and use that as the description\n        if description is None and cls.__doc__ is not None:\n            parsed = cls._parse_docstring()\n            parsed_description = next(\n                (\n                    section.as_dict().get(\"value\")\n                    for section in parsed\n                    if section.kind == DocstringSectionKind.text\n                ),\n                None,\n            )\n            if isinstance(parsed_description, str):\n                description = parsed_description.strip()\n        return description\n\n    @classmethod\n    def get_code_example(cls) -> Optional[str]:\n\"\"\"\n        Returns the code example for the given block. Attempts to parse\n        code example from the class docstring if an override is not provided.\n        \"\"\"\n        code_example = (\n            dedent(cls._code_example) if cls._code_example is not None else None\n        )\n        # If no code example override has been provided, attempt to find a examples\n        # section or an admonition with the annotation \"example\" and use that as the\n        # code example\n        if code_example is None and cls.__doc__ is not None:\n            parsed = cls._parse_docstring()\n            for section in parsed:\n                # Section kind will be \"examples\" if Examples section heading is used.\n                if section.kind == DocstringSectionKind.examples:\n                    # Examples sections are made up of smaller sections that need to be\n                    # joined with newlines. 
Smaller sections are represented as tuples\n                    # with shape (DocstringSectionKind, str)\n                    code_example = \"\\n\".join(\n                        (part[1] for part in section.as_dict().get(\"value\", []))\n                    )\n                    break\n                # Section kind will be \"admonition\" if Example section heading is used.\n                if section.kind == DocstringSectionKind.admonition:\n                    value = section.as_dict().get(\"value\", {})\n                    if value.get(\"annotation\") == \"example\":\n                        code_example = value.get(\"description\")\n                        break\n\n        if code_example is None:\n            # If no code example has been specified or extracted from the class\n            # docstring, generate a sensible default\n            code_example = cls._generate_code_example()\n\n        return code_example\n\n    @classmethod\n    def _generate_code_example(cls) -> str:\n\"\"\"Generates a default code example for the current class\"\"\"\n        qualified_name = to_qualified_name(cls)\n        module_str = \".\".join(qualified_name.split(\".\")[:-1])\n        class_name = cls.__name__\n        block_variable_name = f'{cls.get_block_type_slug().replace(\"-\", \"_\")}_block'\n\n        return dedent(\n            f\"\"\"\\\n        ```python\n        from {module_str} import {class_name}\n\n{block_variable_name} = {class_name}.load(\"BLOCK_NAME\")\n        ```\"\"\"\n        )\n\n    @classmethod\n    def _to_block_type(cls) -> BlockType:\n\"\"\"\n        Creates the corresponding block type of the block.\n\n        Returns:\n            BlockType: The corresponding block type.\n        \"\"\"\n        return BlockType(\n            id=cls._block_type_id or uuid4(),\n            slug=cls.get_block_type_slug(),\n            name=cls.get_block_type_name(),\n            logo_url=cls._logo_url,\n            documentation_url=cls._documentation_url,\n            description=cls.get_description(),\n            code_example=cls.get_code_example(),\n        )\n\n    @classmethod\n    def _from_block_document(cls, block_document: BlockDocument):\n\"\"\"\n        Instantiates a block from a given block document. 
The corresponding block class\n        will be looked up in the block registry based on the corresponding block schema\n        of the provided block document.\n\n        Args:\n            block_document: The block document used to instantiate a block.\n\n        Raises:\n            ValueError: If the provided block document doesn't have a corresponding block\n                schema.\n\n        Returns:\n            Block: Hydrated block with data from block document.\n        \"\"\"\n        if block_document.block_schema is None:\n            raise ValueError(\n                \"Unable to determine block schema for provided block document\"\n            )\n\n        block_cls = (\n            cls\n            if cls.__name__ != \"Block\"\n            # Look up the block class by dispatch\n            else cls.get_block_class_from_schema(block_document.block_schema)\n        )\n\n        block_cls = instrument_method_calls_on_class_instances(block_cls)\n\n        block = block_cls.parse_obj(block_document.data)\n        block._block_document_id = block_document.id\n        block.__class__._block_schema_id = block_document.block_schema_id\n        block.__class__._block_type_id = block_document.block_type_id\n        block._block_document_name = block_document.name\n        block._is_anonymous = block_document.is_anonymous\n        block._define_metadata_on_nested_blocks(\n            block_document.block_document_references\n        )\n\n        # Due to the way blocks are loaded we can't directly instrument the\n        # `load` method and have the data be about the block document. Instead\n        # this will emit a proxy event for the load method so that block\n        # document data can be included instead of the event being about an\n        # 'anonymous' block.\n\n        emit_instance_method_called_event(block, \"load\", successful=True)\n\n        return block\n\n    def _event_kind(self) -> str:\n        return f\"prefect.block.{self.get_block_type_slug()}\"\n\n    def _event_method_called_resources(self) -> Optional[ResourceTuple]:\n        if not (self._block_document_id and self._block_document_name):\n            return None\n\n        return (\n            {\n                \"prefect.resource.id\": (\n                    f\"prefect.block-document.{self._block_document_id}\"\n                ),\n                \"prefect.resource.name\": self._block_document_name,\n            },\n            [\n                {\n                    \"prefect.resource.id\": (\n                        f\"prefect.block-type.{self.get_block_type_slug()}\"\n                    ),\n                    \"prefect.resource.role\": \"block-type\",\n                }\n            ],\n        )\n\n    @classmethod\n    def get_block_class_from_schema(cls: Type[Self], schema: BlockSchema) -> Type[Self]:\n\"\"\"\n        Retieve the block class implementation given a schema.\n        \"\"\"\n        return cls.get_block_class_from_key(block_schema_to_key(schema))\n\n    @classmethod\n    def get_block_class_from_key(cls: Type[Self], key: str) -> Type[Self]:\n\"\"\"\n        Retieve the block class implementation given a key.\n        \"\"\"\n        # Ensure collections are imported and have the opportunity to register types\n        # before looking up the block class\n        prefect.plugins.load_prefect_collections()\n\n        return lookup_type(cls, key)\n\n    def _define_metadata_on_nested_blocks(\n        self, block_document_references: Dict[str, Dict[str, Any]]\n    ):\n\"\"\"\n        
Recursively populates metadata fields on nested blocks based on the\n        provided block document references.\n        \"\"\"\n        for item in block_document_references.items():\n            field_name, block_document_reference = item\n            nested_block = getattr(self, field_name)\n            if isinstance(nested_block, Block):\n                nested_block_document_info = block_document_reference.get(\n                    \"block_document\", {}\n                )\n                nested_block._define_metadata_on_nested_blocks(\n                    nested_block_document_info.get(\"block_document_references\", {})\n                )\n                nested_block_document_id = nested_block_document_info.get(\"id\")\n                nested_block._block_document_id = (\n                    UUID(nested_block_document_id) if nested_block_document_id else None\n                )\n                nested_block._block_document_name = nested_block_document_info.get(\n                    \"name\"\n                )\n                nested_block._is_anonymous = nested_block_document_info.get(\n                    \"is_anonymous\"\n                )\n\n    @classmethod\n    @inject_client\n    async def _get_block_document(\n        cls,\n        name: str,\n        client: \"PrefectClient\" = None,\n    ):\n        if cls.__name__ == \"Block\":\n            block_type_slug, block_document_name = name.split(\"/\", 1)\n        else:\n            block_type_slug = cls.get_block_type_slug()\n            block_document_name = name\n\n        try:\n            block_document = await client.read_block_document_by_name(\n                name=block_document_name, block_type_slug=block_type_slug\n            )\n        except prefect.exceptions.ObjectNotFound as e:\n            raise ValueError(\n                f\"Unable to find block document named {block_document_name} for block\"\n                f\" type {block_type_slug}\"\n            ) from e\n\n        return block_document, block_document_name\n\n    @classmethod\n    @sync_compatible\n    @inject_client\n    async def load(\n        cls,\n        name: str,\n        validate: bool = True,\n        client: \"PrefectClient\" = None,\n    ):\n\"\"\"\n        Retrieves data from the block document with the given name for the block type\n        that corresponds with the current class and returns an instantiated version of\n        the current class with the data stored in the block document.\n\n        If a block document for a given block type is saved with a different schema\n        than the current class calling `load`, a warning will be raised.\n\n        If the current class schema is a subset of the block document schema, the block\n        can be loaded as normal using the default `validate = True`.\n\n        If the current class schema is a superset of the block document schema, `load`\n        must be called with `validate` set to False to prevent a validation error. In\n        this case, the block attributes will default to `None` and must be set manually\n        and saved to a new block document before the block can be used as expected.\n\n        Args:\n            name: The name or slug of the block document. A block document slug is a\n                string with the format <block_type_slug>/<block_document_name>\n            validate: If False, the block document will be loaded without Pydantic\n                validating the block schema. 
This is useful if the block schema has\n                changed client-side since the block document referred to by `name` was saved.\n            client: The client to use to load the block document. If not provided, the\n                default client will be injected.\n\n        Raises:\n            ValueError: If the requested block document is not found.\n\n        Returns:\n            An instance of the current class hydrated with the data stored in the\n            block document with the specified name.\n\n        Examples:\n            Load from a Block subclass with a block document name:\n            ```python\n            class Custom(Block):\n                message: str\n\n            Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n            loaded_block = Custom.load(\"my-custom-message\")\n            ```\n\n            Load from Block with a block document slug:\n            ```python\n            class Custom(Block):\n                message: str\n\n            Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n            loaded_block = Block.load(\"custom/my-custom-message\")\n            ```\n\n            Migrate a block document to a new schema:\n            ```python\n            # original class\n            class Custom(Block):\n                message: str\n\n            Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n            # Updated class with new required field\n            class Custom(Block):\n                message: str\n                number_of_ducks: int\n\n            loaded_block = Custom.load(\"my-custom-message\", validate=False)\n\n            # Prints UserWarning about schema mismatch\n\n            loaded_block.number_of_ducks = 42\n\n            loaded_block.save(\"my-custom-message\", overwrite=True)\n            ```\n        \"\"\"\n        block_document, block_document_name = await cls._get_block_document(name)\n\n        try:\n            return cls._from_block_document(block_document)\n        except ValidationError as e:\n            if not validate:\n                missing_fields = tuple(err[\"loc\"][0] for err in e.errors())\n                missing_block_data = {field: None for field in missing_fields}\n                warnings.warn(\n                    f\"Could not fully load {block_document_name!r} of block type\"\n                    f\" {cls._block_type_slug!r} - this is likely because one or more\"\n                    \" required fields were added to the schema for\"\n                    f\" {cls.__name__!r} that did not exist on the class when this block\"\n                    \" was last saved. Please specify values for new field(s):\"\n                    f\" {listrepr(missing_fields)}, then run\"\n                    f' `{cls.__name__}.save(\"{block_document_name}\", overwrite=True)`,'\n                    \" and load this block again before attempting to use it.\"\n                )\n                return cls.construct(**block_document.data, **missing_block_data)\n            raise RuntimeError(\n                f\"Unable to load {block_document_name!r} of block type\"\n                f\" {cls._block_type_slug!r} due to failed validation. 
To load without\"\n                \" validation, try loading again with `validate=False`.\"\n            ) from e\n\n    @staticmethod\n    def is_block_class(block) -> bool:\n        return _is_subclass(block, Block)\n\n    @classmethod\n    @sync_compatible\n    @inject_client\n    async def register_type_and_schema(cls, client: \"PrefectClient\" = None):\n\"\"\"\n        Makes block available for configuration with current Prefect API.\n        Recursively registers all nested blocks. Registration is idempotent.\n\n        Args:\n            client: Optional client to use for registering type and schema with the\n                Prefect API. A new client will be created and used if one is not\n                provided.\n        \"\"\"\n        if cls.__name__ == \"Block\":\n            raise InvalidBlockRegistration(\n                \"`register_type_and_schema` should be called on a Block \"\n                \"subclass and not on the Block class directly.\"\n            )\n        if ABC in getattr(cls, \"__bases__\", []):\n            raise InvalidBlockRegistration(\n                \"`register_type_and_schema` should be called on a Block \"\n                \"subclass and not on a Block interface class directly.\"\n            )\n\n        for field in cls.__fields__.values():\n            if Block.is_block_class(field.type_):\n                await field.type_.register_type_and_schema(client=client)\n            if get_origin(field.type_) is Union:\n                for type_ in get_args(field.type_):\n                    if Block.is_block_class(type_):\n                        await type_.register_type_and_schema(client=client)\n\n        try:\n            block_type = await client.read_block_type_by_slug(\n                slug=cls.get_block_type_slug()\n            )\n            cls._block_type_id = block_type.id\n            local_block_type = cls._to_block_type()\n            if _should_update_block_type(\n                local_block_type=local_block_type, server_block_type=block_type\n            ):\n                await client.update_block_type(\n                    block_type_id=block_type.id, block_type=local_block_type\n                )\n        except prefect.exceptions.ObjectNotFound:\n            block_type = await client.create_block_type(block_type=cls._to_block_type())\n            cls._block_type_id = block_type.id\n\n        try:\n            block_schema = await client.read_block_schema_by_checksum(\n                checksum=cls._calculate_schema_checksum(),\n                version=cls.get_block_schema_version(),\n            )\n        except prefect.exceptions.ObjectNotFound:\n            block_schema = await client.create_block_schema(\n                block_schema=cls._to_block_schema(block_type_id=block_type.id)\n            )\n\n        cls._block_schema_id = block_schema.id\n\n    @inject_client\n    async def _save(\n        self,\n        name: Optional[str] = None,\n        is_anonymous: bool = False,\n        overwrite: bool = False,\n        client: \"PrefectClient\" = None,\n    ):\n\"\"\"\n        Saves the values of a block as a block document with an option to save as an\n        anonymous block document.\n\n        Args:\n            name: User specified name to give saved block document which can later be used to load the\n                block document.\n            is_anonymous: Boolean value specifying whether the block document is anonymous. Anonymous\n                blocks are intended for system use and are not shown in the UI. 
Anonymous blocks do not\n                require a user-supplied name.\n            overwrite: Boolean value specifying if values should be overwritten if a block document with\n                the specified name already exists.\n\n        Raises:\n            ValueError: If a name is not given and `is_anonymous` is `False` or a name is given and\n                `is_anonymous` is `True`.\n        \"\"\"\n        if name is None and not is_anonymous:\n            raise ValueError(\n                \"You're attempting to save a block document without a name. \"\n                \"Please either save a block document with a name or set \"\n                \"is_anonymous to True.\"\n            )\n\n        self._is_anonymous = is_anonymous\n\n        # Ensure block type and schema are registered before saving block document.\n        await self.register_type_and_schema(client=client)\n\n        try:\n            block_document = await client.create_block_document(\n                block_document=self._to_block_document(name=name)\n            )\n        except prefect.exceptions.ObjectAlreadyExists as err:\n            if overwrite:\n                block_document_id = self._block_document_id\n                if block_document_id is None:\n                    existing_block_document = await client.read_block_document_by_name(\n                        name=name, block_type_slug=self.get_block_type_slug()\n                    )\n                    block_document_id = existing_block_document.id\n                await client.update_block_document(\n                    block_document_id=block_document_id,\n                    block_document=self._to_block_document(name=name),\n                )\n                block_document = await client.read_block_document(\n                    block_document_id=block_document_id\n                )\n            else:\n                raise ValueError(\n                    \"You are attempting to save values with a name that is already in\"\n                    \" use for this block type. 
If you would like to overwrite the\"\n                    \" values that are saved, then save with `overwrite=True`.\"\n                ) from err\n\n        # Update metadata on block instance for later use.\n        self._block_document_name = block_document.name\n        self._block_document_id = block_document.id\n        return self._block_document_id\n\n    @sync_compatible\n    @instrument_instance_method_call()\n    async def save(\n        self, name: str, overwrite: bool = False, client: \"PrefectClient\" = None\n    ):\n\"\"\"\n        Saves the values of a block as a block document.\n\n        Args:\n            name: User specified name to give saved block document which can later be used to load the\n                block document.\n            overwrite: Boolean value specifying if values should be overwritten if a block document with\n                the specified name already exists.\n\n        \"\"\"\n        document_id = await self._save(name=name, overwrite=overwrite, client=client)\n\n        return document_id\n\n    @classmethod\n    @sync_compatible\n    @inject_client\n    async def delete(\n        cls,\n        name: str,\n        client: \"PrefectClient\" = None,\n    ):\n        block_document, block_document_name = await cls._get_block_document(name)\n\n        await client.delete_block_document(block_document.id)\n\n    def _iter(self, *, include=None, exclude=None, **kwargs):\n        # Injects the `block_type_slug` into serialized payloads for dispatch\n        for key_value in super()._iter(include=include, exclude=exclude, **kwargs):\n            yield key_value\n\n        # Respect inclusion and exclusion still\n        if include and \"block_type_slug\" not in include:\n            return\n        if exclude and \"block_type_slug\" in exclude:\n            return\n\n        yield \"block_type_slug\", self.get_block_type_slug()\n\n    def __new__(cls: Type[Self], **kwargs) -> Self:\n\"\"\"\n        Create an instance of the Block subclass type if a `block_type_slug` is\n        present in the data payload.\n        \"\"\"\n        block_type_slug = kwargs.pop(\"block_type_slug\", None)\n        if block_type_slug:\n            subcls = lookup_type(cls, dispatch_key=block_type_slug)\n            m = super().__new__(subcls)\n            # NOTE: This is a workaround for an obscure issue where copied models were\n            #       missing attributes. This pattern is from Pydantic's\n            #       `BaseModel._copy_and_set_values`.\n            #       The issue this fixes could not be reproduced in unit tests that\n            #       directly targeted dispatch handling and was only observed when\n            #       copying then saving infrastructure blocks on deployment models.\n            object.__setattr__(m, \"__dict__\", kwargs)\n            object.__setattr__(m, \"__fields_set__\", set(kwargs.keys()))\n            return m\n        else:\n            m = super().__new__(cls)\n            object.__setattr__(m, \"__dict__\", kwargs)\n            object.__setattr__(m, \"__fields_set__\", set(kwargs.keys()))\n            return m\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.Config","title":"Config","text":"Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
class Config:\n    extra = \"allow\"\n\n    json_encoders = {SecretDict: lambda v: v.dict()}\n\n    @staticmethod\n    def schema_extra(schema: Dict[str, Any], model: Type[\"Block\"]):\n\"\"\"\n        Customizes Pydantic's schema generation feature to add blocks related information.\n        \"\"\"\n        schema[\"block_type_slug\"] = model.get_block_type_slug()\n        # Ensures args and code examples aren't included in the schema\n        description = model.get_description()\n        if description:\n            schema[\"description\"] = description\n        else:\n            # Prevent the description of the base class from being included in the schema\n            schema.pop(\"description\", None)\n\n        # create a list of secret field names\n        # secret fields include both top-level keys and dot-delimited nested secret keys\n        # A wildcard (*) means that all fields under a given key are secret.\n        # for example: [\"x\", \"y\", \"z.*\", \"child.a\"]\n        # means the top-level keys \"x\" and \"y\", all keys under \"z\", and the key \"a\" of a block\n        # nested under the \"child\" key are all secret. There is no limit to nesting.\n        secrets = schema[\"secret_fields\"] = []\n        for field in model.__fields__.values():\n            _collect_secret_fields(field.name, field.type_, secrets)\n\n        # create block schema references\n        refs = schema[\"block_schema_references\"] = {}\n        for field in model.__fields__.values():\n            if Block.is_block_class(field.type_):\n                refs[field.name] = field.type_._to_block_schema_reference_dict()\n            if get_origin(field.type_) is Union:\n                for type_ in get_args(field.type_):\n                    if Block.is_block_class(type_):\n                        if isinstance(refs.get(field.name), list):\n                            refs[field.name].append(\n                                type_._to_block_schema_reference_dict()\n                            )\n                        elif isinstance(refs.get(field.name), dict):\n                            refs[field.name] = [\n                                refs[field.name],\n                                type_._to_block_schema_reference_dict(),\n                            ]\n                        else:\n                            refs[field.name] = (\n                                type_._to_block_schema_reference_dict()\n                            )\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.Config.schema_extra","title":"schema_extra staticmethod","text":"

Customizes Pydantic's schema generation feature to add blocks related information.

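A minimal sketch of the generated schema, using a hypothetical block class:

from prefect.blocks.core import Block\n\nclass Greeting(Block):  # hypothetical block used for illustration\n    \"\"\"Stores a short greeting message.\"\"\"\n\n    message: str = \"hello\"\n\nschema = Greeting.schema()\n# schema_extra adds block metadata alongside the regular Pydantic fields\nprint(schema[\"block_type_slug\"])          # \"greeting\"\nprint(schema[\"secret_fields\"])            # [] - no secret fields on this block\nprint(schema[\"block_schema_references\"])  # {} - no nested blocks\n
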
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@staticmethod\ndef schema_extra(schema: Dict[str, Any], model: Type[\"Block\"]):\n\"\"\"\n    Customizes Pydantic's schema generation feature to add blocks related information.\n    \"\"\"\n    schema[\"block_type_slug\"] = model.get_block_type_slug()\n    # Ensures args and code examples aren't included in the schema\n    description = model.get_description()\n    if description:\n        schema[\"description\"] = description\n    else:\n        # Prevent the description of the base class from being included in the schema\n        schema.pop(\"description\", None)\n\n    # create a list of secret field names\n    # secret fields include both top-level keys and dot-delimited nested secret keys\n    # A wildcard (*) means that all fields under a given key are secret.\n    # for example: [\"x\", \"y\", \"z.*\", \"child.a\"]\n    # means the top-level keys \"x\" and \"y\", all keys under \"z\", and the key \"a\" of a block\n    # nested under the \"child\" key are all secret. There is no limit to nesting.\n    secrets = schema[\"secret_fields\"] = []\n    for field in model.__fields__.values():\n        _collect_secret_fields(field.name, field.type_, secrets)\n\n    # create block schema references\n    refs = schema[\"block_schema_references\"] = {}\n    for field in model.__fields__.values():\n        if Block.is_block_class(field.type_):\n            refs[field.name] = field.type_._to_block_schema_reference_dict()\n        if get_origin(field.type_) is Union:\n            for type_ in get_args(field.type_):\n                if Block.is_block_class(type_):\n                    if isinstance(refs.get(field.name), list):\n                        refs[field.name].append(\n                            type_._to_block_schema_reference_dict()\n                        )\n                    elif isinstance(refs.get(field.name), dict):\n                        refs[field.name] = [\n                            refs[field.name],\n                            type_._to_block_schema_reference_dict(),\n                        ]\n                    else:\n                        refs[field.name] = (\n                            type_._to_block_schema_reference_dict()\n                        )\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_capabilities","title":"get_block_capabilities classmethod","text":"

Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset.

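A minimal sketch, using hypothetical subclasses, of how capabilities declared on parent classes accumulate:

from prefect.blocks.core import Block\n\nclass ObjectStorage(Block):  # hypothetical base block\n    _block_schema_capabilities = [\"read-path\", \"write-path\"]\n\nclass CloudStorage(ObjectStorage):  # hypothetical subclass adding a capability\n    _block_schema_capabilities = [\"list-objects\"]\n\n# Collects capabilities from the class and every parent class\nprint(CloudStorage.get_block_capabilities())\n# frozenset({\"read-path\", \"write-path\", \"list-objects\"})\n
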
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@classmethod\ndef get_block_capabilities(cls) -> FrozenSet[str]:\n\"\"\"\n    Returns the block capabilities for this Block. Recursively collects all block\n    capabilities of all parent classes into a single frozenset.\n    \"\"\"\n    return frozenset(\n        {\n            c\n            for base in (cls,) + cls.__mro__\n            for c in getattr(base, \"_block_schema_capabilities\", []) or []\n        }\n    )\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_class_from_key","title":"get_block_class_from_key classmethod","text":"

Retrieve the block class implementation given a key.

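A short sketch, assuming the key passed in corresponds to a registered block type slug (the slug below is illustrative):

from prefect.blocks.core import Block\n\n# Look up the registered class for a block type slug; the slug must belong to a\n# block class that has been imported or registered, here an illustrative one\ncluster_config_cls = Block.get_block_class_from_key(\"kubernetes-cluster-config\")\n
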
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@classmethod\ndef get_block_class_from_key(cls: Type[Self], key: str) -> Type[Self]:\n\"\"\"\n    Retrieve the block class implementation given a key.\n    \"\"\"\n    # Ensure collections are imported and have the opportunity to register types\n    # before looking up the block class\n    prefect.plugins.load_prefect_collections()\n\n    return lookup_type(cls, key)\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_class_from_schema","title":"get_block_class_from_schema classmethod","text":"

Retrieve the block class implementation given a schema.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@classmethod\ndef get_block_class_from_schema(cls: Type[Self], schema: BlockSchema) -> Type[Self]:\n\"\"\"\n    Retrieve the block class implementation given a schema.\n    \"\"\"\n    return cls.get_block_class_from_key(block_schema_to_key(schema))\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_code_example","title":"get_code_example classmethod","text":"

Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided.

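A minimal sketch, using a hypothetical block class with no docstring example:

from prefect.blocks.core import Block\n\nclass Greeting(Block):  # hypothetical block with no Example section in its docstring\n    message: str = \"hello\"\n\n# With no _code_example override and no docstring example,\n# a default load snippet is generated for the block type\nprint(Greeting.get_code_example())\n
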
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@classmethod\ndef get_code_example(cls) -> Optional[str]:\n\"\"\"\n    Returns the code example for the given block. Attempts to parse\n    code example from the class docstring if an override is not provided.\n    \"\"\"\n    code_example = (\n        dedent(cls._code_example) if cls._code_example is not None else None\n    )\n    # If no code example override has been provided, attempt to find a examples\n    # section or an admonition with the annotation \"example\" and use that as the\n    # code example\n    if code_example is None and cls.__doc__ is not None:\n        parsed = cls._parse_docstring()\n        for section in parsed:\n            # Section kind will be \"examples\" if Examples section heading is used.\n            if section.kind == DocstringSectionKind.examples:\n                # Examples sections are made up of smaller sections that need to be\n                # joined with newlines. Smaller sections are represented as tuples\n                # with shape (DocstringSectionKind, str)\n                code_example = \"\\n\".join(\n                    (part[1] for part in section.as_dict().get(\"value\", []))\n                )\n                break\n            # Section kind will be \"admonition\" if Example section heading is used.\n            if section.kind == DocstringSectionKind.admonition:\n                value = section.as_dict().get(\"value\", {})\n                if value.get(\"annotation\") == \"example\":\n                    code_example = value.get(\"description\")\n                    break\n\n    if code_example is None:\n        # If no code example has been specified or extracted from the class\n        # docstring, generate a sensible default\n        code_example = cls._generate_code_example()\n\n    return code_example\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_description","title":"get_description classmethod","text":"

Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined.

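A minimal sketch, using a hypothetical block class:

from prefect.blocks.core import Block\n\nclass Greeting(Block):  # hypothetical block\n    \"\"\"Stores a short greeting message.\"\"\"\n\n    message: str = \"hello\"\n\n# With no _description override, the first text section of the docstring is used\nprint(Greeting.get_description())  # Stores a short greeting message.\n
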
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@classmethod\ndef get_description(cls) -> Optional[str]:\n\"\"\"\n    Returns the description for the current block. Attempts to parse\n    description from class docstring if an override is not defined.\n    \"\"\"\n    description = cls._description\n    # If no description override has been provided, find the first text section\n    # and use that as the description\n    if description is None and cls.__doc__ is not None:\n        parsed = cls._parse_docstring()\n        parsed_description = next(\n            (\n                section.as_dict().get(\"value\")\n                for section in parsed\n                if section.kind == DocstringSectionKind.text\n            ),\n            None,\n        )\n        if isinstance(parsed_description, str):\n            description = parsed_description.strip()\n    return description\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.load","title":"load async classmethod","text":"

Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.

If a block document for a given block type is saved with a different schema than the current class calling load, a warning will be raised.

If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default validate = True.

If the current class schema is a superset of the block document schema, load must be called with validate set to False to prevent a validation error. In this case, the block attributes will default to None and must be set manually and saved to a new block document before the block can be used as expected.

Parameters:

Name Type Description Default name str

The name or slug of the block document. A block document slug is a string with the format <block_type_slug>/<block_document_name> required validate bool

If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by name was saved.

True client PrefectClient

The client to use to load the block document. If not provided, the default client will be injected.

None

Raises:

Type Description ValueError

If the requested block document is not found.

Returns:

Type Description

An instance of the current class hydrated with the data stored in the

block document with the specified name.

Examples:

Load from a Block subclass with a block document name:

class Custom(Block):\n    message: str\n\nCustom(message=\"Hello!\").save(\"my-custom-message\")\n\nloaded_block = Custom.load(\"my-custom-message\")\n

Load from Block with a block document slug:

class Custom(Block):\n    message: str\n\nCustom(message=\"Hello!\").save(\"my-custom-message\")\n\nloaded_block = Block.load(\"custom/my-custom-message\")\n

Migrate a block document to a new schema:

# original class\nclass Custom(Block):\n    message: str\n\nCustom(message=\"Hello!\").save(\"my-custom-message\")\n\n# Updated class with new required field\nclass Custom(Block):\n    message: str\n    number_of_ducks: int\n\nloaded_block = Custom.load(\"my-custom-message\", validate=False)\n\n# Prints UserWarning about schema mismatch\n\nloaded_block.number_of_ducks = 42\n\nloaded_block.save(\"my-custom-message\", overwrite=True)\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@classmethod\n@sync_compatible\n@inject_client\nasync def load(\n    cls,\n    name: str,\n    validate: bool = True,\n    client: \"PrefectClient\" = None,\n):\n\"\"\"\n    Retrieves data from the block document with the given name for the block type\n    that corresponds with the current class and returns an instantiated version of\n    the current class with the data stored in the block document.\n\n    If a block document for a given block type is saved with a different schema\n    than the current class calling `load`, a warning will be raised.\n\n    If the current class schema is a subset of the block document schema, the block\n    can be loaded as normal using the default `validate = True`.\n\n    If the current class schema is a superset of the block document schema, `load`\n    must be called with `validate` set to False to prevent a validation error. In\n    this case, the block attributes will default to `None` and must be set manually\n    and saved to a new block document before the block can be used as expected.\n\n    Args:\n        name: The name or slug of the block document. A block document slug is a\n            string with the format <block_type_slug>/<block_document_name>\n        validate: If False, the block document will be loaded without Pydantic\n            validating the block schema. This is useful if the block schema has\n            changed client-side since the block document referred to by `name` was saved.\n        client: The client to use to load the block document. If not provided, the\n            default client will be injected.\n\n    Raises:\n        ValueError: If the requested block document is not found.\n\n    Returns:\n        An instance of the current class hydrated with the data stored in the\n        block document with the specified name.\n\n    Examples:\n        Load from a Block subclass with a block document name:\n        ```python\n        class Custom(Block):\n            message: str\n\n        Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n        loaded_block = Custom.load(\"my-custom-message\")\n        ```\n\n        Load from Block with a block document slug:\n        ```python\n        class Custom(Block):\n            message: str\n\n        Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n        loaded_block = Block.load(\"custom/my-custom-message\")\n        ```\n\n        Migrate a block document to a new schema:\n        ```python\n        # original class\n        class Custom(Block):\n            message: str\n\n        Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n        # Updated class with new required field\n        class Custom(Block):\n            message: str\n            number_of_ducks: int\n\n        loaded_block = Custom.load(\"my-custom-message\", validate=False)\n\n        # Prints UserWarning about schema mismatch\n\n        loaded_block.number_of_ducks = 42\n\n        loaded_block.save(\"my-custom-message\", overwrite=True)\n        ```\n    \"\"\"\n    block_document, block_document_name = await cls._get_block_document(name)\n\n    try:\n        return cls._from_block_document(block_document)\n    except ValidationError as e:\n        if not validate:\n            missing_fields = tuple(err[\"loc\"][0] for err in e.errors())\n            missing_block_data = {field: None for field in missing_fields}\n            warnings.warn(\n                f\"Could not fully load {block_document_name!r} of block type\"\n                f\" {cls._block_type_slug!r} - this is 
likely because one or more\"\n                \" required fields were added to the schema for\"\n                f\" {cls.__name__!r} that did not exist on the class when this block\"\n                \" was last saved. Please specify values for new field(s):\"\n                f\" {listrepr(missing_fields)}, then run\"\n                f' `{cls.__name__}.save(\"{block_document_name}\", overwrite=True)`,'\n                \" and load this block again before attempting to use it.\"\n            )\n            return cls.construct(**block_document.data, **missing_block_data)\n        raise RuntimeError(\n            f\"Unable to load {block_document_name!r} of block type\"\n            f\" {cls._block_type_slug!r} due to failed validation. To load without\"\n            \" validation, try loading again with `validate=False`.\"\n        ) from e\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.register_type_and_schema","title":"register_type_and_schema async classmethod","text":"

Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent.

Parameters:

Name Type Description Default client PrefectClient

Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided.

None Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@classmethod\n@sync_compatible\n@inject_client\nasync def register_type_and_schema(cls, client: \"PrefectClient\" = None):\n\"\"\"\n    Makes block available for configuration with current Prefect API.\n    Recursively registers all nested blocks. Registration is idempotent.\n\n    Args:\n        client: Optional client to use for registering type and schema with the\n            Prefect API. A new client will be created and used if one is not\n            provided.\n    \"\"\"\n    if cls.__name__ == \"Block\":\n        raise InvalidBlockRegistration(\n            \"`register_type_and_schema` should be called on a Block \"\n            \"subclass and not on the Block class directly.\"\n        )\n    if ABC in getattr(cls, \"__bases__\", []):\n        raise InvalidBlockRegistration(\n            \"`register_type_and_schema` should be called on a Block \"\n            \"subclass and not on a Block interface class directly.\"\n        )\n\n    for field in cls.__fields__.values():\n        if Block.is_block_class(field.type_):\n            await field.type_.register_type_and_schema(client=client)\n        if get_origin(field.type_) is Union:\n            for type_ in get_args(field.type_):\n                if Block.is_block_class(type_):\n                    await type_.register_type_and_schema(client=client)\n\n    try:\n        block_type = await client.read_block_type_by_slug(\n            slug=cls.get_block_type_slug()\n        )\n        cls._block_type_id = block_type.id\n        local_block_type = cls._to_block_type()\n        if _should_update_block_type(\n            local_block_type=local_block_type, server_block_type=block_type\n        ):\n            await client.update_block_type(\n                block_type_id=block_type.id, block_type=local_block_type\n            )\n    except prefect.exceptions.ObjectNotFound:\n        block_type = await client.create_block_type(block_type=cls._to_block_type())\n        cls._block_type_id = block_type.id\n\n    try:\n        block_schema = await client.read_block_schema_by_checksum(\n            checksum=cls._calculate_schema_checksum(),\n            version=cls.get_block_schema_version(),\n        )\n    except prefect.exceptions.ObjectNotFound:\n        block_schema = await client.create_block_schema(\n            block_schema=cls._to_block_schema(block_type_id=block_type.id)\n        )\n\n    cls._block_schema_id = block_schema.id\n
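A minimal sketch, using a hypothetical block class (the API connection comes from the current Prefect settings):

from prefect.blocks.core import Block\n\nclass Greeting(Block):  # hypothetical block type\n    message: str = \"hello\"\n\n# Register the block type and schema with the configured Prefect API.\n# Safe to call repeatedly because registration is idempotent.\nGreeting.register_type_and_schema()\n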
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.save","title":"save async","text":"

Saves the values of a block as a block document.

Parameters:

Name Type Description Default name str

User specified name to give saved block document which can later be used to load the block document.

required overwrite bool

Boolean value specifying if values should be overwritten if a block document with the specified name already exists.

False Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@sync_compatible\n@instrument_instance_method_call()\nasync def save(\n    self, name: str, overwrite: bool = False, client: \"PrefectClient\" = None\n):\n\"\"\"\n    Saves the values of a block as a block document.\n\n    Args:\n        name: User specified name to give saved block document which can later be used to load the\n            block document.\n        overwrite: Boolean value specifying if values should be overwritten if a block document with\n            the specified name already exists.\n\n    \"\"\"\n    document_id = await self._save(name=name, overwrite=overwrite, client=client)\n\n    return document_id\n
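A minimal sketch, using a hypothetical block class and block document name:

from prefect.blocks.core import Block\n\nclass Greeting(Block):  # hypothetical block type\n    message: str = \"hello\"\n\n# Persist the block's values as a named block document;\n# overwrite=True updates an existing document with the same name\nGreeting(message=\"hi\").save(\"my-greeting\", overwrite=True)\n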
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.InvalidBlockRegistration","title":"InvalidBlockRegistration","text":"

Bases: Exception

Raised on attempted registration of the base Block class or a Block interface class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
class InvalidBlockRegistration(Exception):\n\"\"\"\n    Raised on attempted registration of the base Block\n    class or a Block interface class\n    \"\"\"\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.block_schema_to_key","title":"block_schema_to_key","text":"

Defines the unique key used to lookup the Block class for a given schema.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
def block_schema_to_key(schema: BlockSchema) -> str:\n\"\"\"\n    Defines the unique key used to lookup the Block class for a given schema.\n    \"\"\"\n    return f\"{schema.block_type.slug}\"\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/kubernetes/","title":"prefect.blocks.kubernetes","text":"","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes","title":"prefect.blocks.kubernetes","text":"","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig","title":"KubernetesClusterConfig","text":"

Bases: Block

Stores configuration for interaction with Kubernetes clusters.

See from_file for creation.

Attributes:

Name Type Description config Dict

The entire loaded YAML contents of a kubectl config file

context_name str

The name of the kubectl context to use

Example

Load a saved Kubernetes cluster config:

from prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/kubernetes.py
class KubernetesClusterConfig(Block):\n\"\"\"\n    Stores configuration for interaction with Kubernetes clusters.\n\n    See `from_file` for creation.\n\n    Attributes:\n        config: The entire loaded YAML contents of a kubectl config file\n        context_name: The name of the kubectl context to use\n\n    Example:\n        Load a saved Kubernetes cluster config:\n        ```python\n        from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n        cluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Kubernetes Cluster Config\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/1zrSeY8DZ1MJZs2BAyyyGk/20445025358491b8b72600b8f996125b/Kubernetes_logo_without_workmark.svg.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig\"\n\n    config: Dict = Field(\n        default=..., description=\"The entire contents of a kubectl config file.\"\n    )\n    context_name: str = Field(\n        default=..., description=\"The name of the kubectl context to use.\"\n    )\n\n    @validator(\"config\", pre=True)\n    def parse_yaml_config(cls, value):\n        if isinstance(value, str):\n            return yaml.safe_load(value)\n        return value\n\n    @classmethod\n    def from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n\"\"\"\n        Create a cluster config from the a Kubernetes config file.\n\n        By default, the current context in the default Kubernetes config file will be\n        used.\n\n        An alternative file or context may be specified.\n\n        The entire config file will be loaded and stored.\n        \"\"\"\n        kube_config = kubernetes.config.kube_config\n\n        path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n        path = path.expanduser().resolve()\n\n        # Determine the context\n        existing_contexts, current_context = kube_config.list_kube_config_contexts(\n            config_file=str(path)\n        )\n        context_names = {ctx[\"name\"] for ctx in existing_contexts}\n        if context_name:\n            if context_name not in context_names:\n                raise ValueError(\n                    f\"Context {context_name!r} not found. \"\n                    f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n                )\n        else:\n            context_name = current_context[\"name\"]\n\n        # Load the entire config file\n        config_file_contents = path.read_text()\n        config_dict = yaml.safe_load(config_file_contents)\n\n        return cls(config=config_dict, context_name=context_name)\n\n    def get_api_client(self) -> \"ApiClient\":\n\"\"\"\n        Returns a Kubernetes API client for this cluster config.\n        \"\"\"\n        return kubernetes.config.kube_config.new_client_from_config_dict(\n            config_dict=self.config, context=self.context_name\n        )\n\n    def configure_client(self) -> None:\n\"\"\"\n        Activates this cluster configuration by loading the configuration into the\n        Kubernetes Python client. After calling this, Kubernetes API clients can use\n        this config's context.\n        \"\"\"\n        kubernetes.config.kube_config.load_kube_config_from_dict(\n            config_dict=self.config, context=self.context_name\n        )\n
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig.configure_client","title":"configure_client","text":"

Activates this cluster configuration by loading the configuration into the Kubernetes Python client. After calling this, Kubernetes API clients can use this config's context.

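A short sketch, assuming a cluster config was previously saved under the (hypothetical) name my-cluster:

from prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config = KubernetesClusterConfig.load(\"my-cluster\")  # hypothetical block name\n\n# Load the stored config into the Kubernetes Python client's global configuration;\n# clients created afterwards use this config's context\ncluster_config.configure_client()\n
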
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/kubernetes.py
def configure_client(self) -> None:\n\"\"\"\n    Activates this cluster configuration by loading the configuration into the\n    Kubernetes Python client. After calling this, Kubernetes API clients can use\n    this config's context.\n    \"\"\"\n    kubernetes.config.kube_config.load_kube_config_from_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig.from_file","title":"from_file classmethod","text":"

Create a cluster config from a Kubernetes config file.

By default, the current context in the default Kubernetes config file will be used.

An alternative file or context may be specified.

The entire config file will be loaded and stored.

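A short sketch of creating and saving a cluster config from the default kubeconfig (the block name and context below are hypothetical):

from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n# Use the default kubeconfig location and its current context\ncluster_config = KubernetesClusterConfig.from_file()\ncluster_config.save(\"my-cluster\")\n\n# Or point at a specific file and context\nother_config = KubernetesClusterConfig.from_file(\n    path=\"~/.kube/config\", context_name=\"my-context\"\n)\n
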
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/kubernetes.py
@classmethod\ndef from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n\"\"\"\n    Create a cluster config from the a Kubernetes config file.\n\n    By default, the current context in the default Kubernetes config file will be\n    used.\n\n    An alternative file or context may be specified.\n\n    The entire config file will be loaded and stored.\n    \"\"\"\n    kube_config = kubernetes.config.kube_config\n\n    path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n    path = path.expanduser().resolve()\n\n    # Determine the context\n    existing_contexts, current_context = kube_config.list_kube_config_contexts(\n        config_file=str(path)\n    )\n    context_names = {ctx[\"name\"] for ctx in existing_contexts}\n    if context_name:\n        if context_name not in context_names:\n            raise ValueError(\n                f\"Context {context_name!r} not found. \"\n                f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n            )\n    else:\n        context_name = current_context[\"name\"]\n\n    # Load the entire config file\n    config_file_contents = path.read_text()\n    config_dict = yaml.safe_load(config_file_contents)\n\n    return cls(config=config_dict, context_name=context_name)\n
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig.get_api_client","title":"get_api_client","text":"

Returns a Kubernetes API client for this cluster config.

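A short sketch, reusing a saved cluster config (hypothetical block name):

from kubernetes import client\n\nfrom prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config = KubernetesClusterConfig.load(\"my-cluster\")  # hypothetical block name\n\n# Build an ApiClient from the stored config and use it directly\napi_client = cluster_config.get_api_client()\nv1 = client.CoreV1Api(api_client=api_client)\n
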
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/kubernetes.py
def get_api_client(self) -> \"ApiClient\":\n\"\"\"\n    Returns a Kubernetes API client for this cluster config.\n    \"\"\"\n    return kubernetes.config.kube_config.new_client_from_config_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/notifications/","title":"prefect.blocks.notifications","text":"","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications","title":"prefect.blocks.notifications","text":"","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.AbstractAppriseNotificationBlock","title":"AbstractAppriseNotificationBlock","text":"

Bases: NotificationBlock, ABC

An abstract class for sending notifications using Apprise.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class AbstractAppriseNotificationBlock(NotificationBlock, ABC):\n\"\"\"\n    An abstract class for sending notifications using Apprise.\n    \"\"\"\n\n    notify_type: Literal[\"prefect_default\", \"info\", \"success\", \"warning\", \"failure\"] = (\n        Field(\n            default=PREFECT_NOTIFY_TYPE_DEFAULT,\n            description=(\n                \"The type of notification being performed; the prefect_default \"\n                \"is a plain notification that does not attach an image.\"\n            ),\n        )\n    )\n\n    def __init__(self, *args, **kwargs):\n        import apprise\n\n        if PREFECT_NOTIFY_TYPE_DEFAULT not in apprise.NOTIFY_TYPES:\n            apprise.NOTIFY_TYPES += (PREFECT_NOTIFY_TYPE_DEFAULT,)\n\n        super().__init__(*args, **kwargs)\n\n    def _start_apprise_client(self, url: SecretStr):\n        from apprise import Apprise, AppriseAsset\n\n        # A custom `AppriseAsset` that ensures Prefect Notifications\n        # appear correctly across multiple messaging platforms\n        prefect_app_data = AppriseAsset(\n            app_id=\"Prefect Notifications\",\n            app_desc=\"Prefect Notifications\",\n            app_url=\"https://prefect.io\",\n        )\n\n        self._apprise_client = Apprise(asset=prefect_app_data)\n        self._apprise_client.add(url.get_secret_value())\n\n    def block_initialization(self) -> None:\n        self._start_apprise_client(self.url)\n\n    @sync_compatible\n    @instrument_instance_method_call()\n    async def notify(self, body: str, subject: Optional[str] = None):\n        await self._apprise_client.async_notify(\n            body=body, title=subject, notify_type=self.notify_type\n        )\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.AppriseNotificationBlock","title":"AppriseNotificationBlock","text":"

Bases: AbstractAppriseNotificationBlock, ABC

A base class for sending notifications using Apprise, through webhook URLs.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class AppriseNotificationBlock(AbstractAppriseNotificationBlock, ABC):\n\"\"\"\n    A base class for sending notifications using Apprise, through webhook URLs.\n    \"\"\"\n\n    _documentation_url = \"https://docs.prefect.io/ui/notifications/\"\n    url: SecretStr = Field(\n        default=...,\n        title=\"Webhook URL\",\n        description=\"Incoming webhook URL used to send notifications.\",\n        example=\"https://hooks.example.com/XXX\",\n    )\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.CustomWebhookNotificationBlock","title":"CustomWebhookNotificationBlock","text":"

Bases: NotificationBlock

Enables sending notifications via any custom webhook.

Any nested string parameter containing {{key}} will have that placeholder substituted with the corresponding value from the context or secrets.

Context values include: subject, body and name.

Examples:

Load a saved custom webhook and send a message:

from prefect.blocks.notifications import CustomWebhookNotificationBlock\n\ncustom_webhook_block = CustomWebhookNotificationBlock.load(\"BLOCK_NAME\")\n\ncustom_webhook_block.notify(\"Hello from Prefect!\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class CustomWebhookNotificationBlock(NotificationBlock):\n\"\"\"\n    Enables sending notifications via any custom webhook.\n\n    All nested string param contains `{{key}}` will be substituted with value from context/secrets.\n\n    Context values include: `subject`, `body` and `name`.\n\n    Examples:\n        Load a saved custom webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import CustomWebhookNotificationBlock\n\n        custom_webhook_block = CustomWebhookNotificationBlock.load(\"BLOCK_NAME\")\n\n        custom_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Custom Webhook\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/6ciCsTFsvUAiiIvTllMfOU/627e9513376ca457785118fbba6a858d/webhook_icon_138018.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.CustomWebhookNotificationBlock\"\n\n    name: str = Field(title=\"Name\", description=\"Name of the webhook.\")\n\n    url: str = Field(\n        title=\"Webhook URL\",\n        description=\"The webhook URL.\",\n        example=\"https://hooks.slack.com/XXX\",\n    )\n\n    method: Literal[\"GET\", \"POST\", \"PUT\", \"PATCH\", \"DELETE\"] = Field(\n        default=\"POST\", description=\"The webhook request method. Defaults to `POST`.\"\n    )\n\n    params: Optional[Dict[str, str]] = Field(\n        default=None, title=\"Query Params\", description=\"Custom query params.\"\n    )\n    json_data: Optional[dict] = Field(\n        default=None,\n        title=\"JSON Data\",\n        description=\"Send json data as payload.\",\n        example=(\n            '{\"text\": \"{{subject}}\\\\n{{body}}\", \"title\": \"{{name}}\", \"token\":'\n            ' \"{{tokenFromSecrets}}\"}'\n        ),\n    )\n    form_data: Optional[Dict[str, str]] = Field(\n        default=None,\n        title=\"Form Data\",\n        description=(\n            \"Send form data as payload. Should not be used together with _JSON Data_.\"\n        ),\n        example=(\n            '{\"text\": \"{{subject}}\\\\n{{body}}\", \"title\": \"{{name}}\", \"token\":'\n            ' \"{{tokenFromSecrets}}\"}'\n        ),\n    )\n\n    headers: Optional[Dict[str, str]] = Field(None, description=\"Custom headers.\")\n    cookies: Optional[Dict[str, str]] = Field(None, description=\"Custom cookies.\")\n\n    timeout: float = Field(\n        default=10, description=\"Request timeout in seconds. 
Defaults to 10.\"\n    )\n\n    secrets: SecretDict = Field(\n        default_factory=lambda: SecretDict(dict()),\n        title=\"Custom Secret Values\",\n        description=\"A dictionary of secret values to be substituted in other configs.\",\n        example='{\"tokenFromSecrets\":\"SomeSecretToken\"}',\n    )\n\n    def _build_request_args(self, body: str, subject: Optional[str]):\n\"\"\"Build kwargs for httpx.AsyncClient.request\"\"\"\n        # prepare values\n        values = self.secrets.get_secret_value()\n        # use 'null' when subject is None\n        values.update(\n            {\n                \"subject\": \"null\" if subject is None else subject,\n                \"body\": body,\n                \"name\": self.name,\n            }\n        )\n        # do substution\n        return apply_values(\n            {\n                \"method\": self.method,\n                \"url\": self.url,\n                \"params\": self.params,\n                \"data\": self.form_data,\n                \"json\": self.json_data,\n                \"headers\": self.headers,\n                \"cookies\": self.cookies,\n                \"timeout\": self.timeout,\n            },\n            values,\n        )\n\n    def block_initialization(self) -> None:\n        # check form_data and json_data\n        if self.form_data is not None and self.json_data is not None:\n            raise ValueError(\"both `Form Data` and `JSON Data` provided\")\n        allowed_keys = {\"subject\", \"body\", \"name\"}.union(\n            self.secrets.get_secret_value().keys()\n        )\n        # test template to raise a error early\n        for name in [\"url\", \"params\", \"form_data\", \"json_data\", \"headers\", \"cookies\"]:\n            template = getattr(self, name)\n            if template is None:\n                continue\n            # check for placeholders not in predefined keys and secrets\n            placeholders = find_placeholders(template)\n            for placeholder in placeholders:\n                if placeholder.name not in allowed_keys:\n                    raise KeyError(f\"{name}/{placeholder}\")\n\n    @sync_compatible\n    @instrument_instance_method_call()\n    async def notify(self, body: str, subject: Optional[str] = None):\n        import httpx\n\n        # make request with httpx\n        client = httpx.AsyncClient(headers={\"user-agent\": \"Prefect Notifications\"})\n        resp = await client.request(**self._build_request_args(body, subject))\n        resp.raise_for_status()\n
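
To illustrate the template substitution described above, here is a minimal sketch of defining and saving a custom webhook block; the endpoint URL, block name, and payload shape are hypothetical:

from prefect.blocks.notifications import CustomWebhookNotificationBlock\n\n# {{name}}, {{subject}} and {{body}} are replaced when notify() is called\ncustom_webhook_block = CustomWebhookNotificationBlock(\n    name=\"my-webhook\",\n    url=\"https://hooks.example.com/XXX\",\n    json_data={\"title\": \"{{name}}\", \"text\": \"{{subject}} - {{body}}\"},\n)\ncustom_webhook_block.save(\"my-custom-webhook\")\ncustom_webhook_block.notify(\"Hello from Prefect!\", subject=\"Greeting\")\n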
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MattermostWebhook","title":"MattermostWebhook","text":"

Bases: AbstractAppriseNotificationBlock

Enables sending notifications via a provided Mattermost webhook. See the Apprise notify_Mattermost docs.

Examples:

Load a saved Mattermost webhook and send a message:

from prefect.blocks.notifications import MattermostWebhook\n\nmattermost_webhook_block = MattermostWebhook.load(\"BLOCK_NAME\")\n\nmattermost_webhook_block.notify(\"Hello from Prefect!\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class MattermostWebhook(AbstractAppriseNotificationBlock):\n\"\"\"\n    Enables sending notifications via a provided Mattermost webhook.\n    See [Apprise notify_Mattermost docs](https://github.com/caronc/apprise/wiki/Notify_Mattermost) # noqa\n\n\n    Examples:\n        Load a saved Mattermost webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import MattermostWebhook\n\n        mattermost_webhook_block = MattermostWebhook.load(\"BLOCK_NAME\")\n\n        mattermost_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _description = \"Enables sending notifications via a provided Mattermost webhook.\"\n    _block_type_name = \"Mattermost Webhook\"\n    _block_type_slug = \"mattermost-webhook\"\n    _logo_url = \"https://images.ctfassets.net/zscdif0zqppk/3mlbsJDAmK402ER1sf0zUF/a48ac43fa38f395dd5f56c6ed29f22bb/mattermost-logo-png-transparent.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MattermostWebhook\"\n\n    hostname: str = Field(\n        default=...,\n        description=\"The hostname of your Mattermost server.\",\n        example=\"Mattermost.example.com\",\n    )\n\n    token: SecretStr = Field(\n        default=...,\n        description=\"The token associated with your Mattermost webhook.\",\n    )\n\n    botname: Optional[str] = Field(\n        title=\"Bot name\",\n        default=None,\n        description=\"The name of the bot that will send the message.\",\n    )\n\n    channels: Optional[List[str]] = Field(\n        default=None,\n        description=\"The channel(s) you wish to notify.\",\n    )\n\n    include_image: bool = Field(\n        default=False,\n        description=\"Whether to include the Apprise status image in the message.\",\n    )\n\n    path: Optional[str] = Field(\n        default=None,\n        description=\"An optional sub-path specification to append to the hostname.\",\n    )\n\n    port: int = Field(\n        default=8065,\n        description=\"The port of your Mattermost server.\",\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifyMattermost import NotifyMattermost\n\n        url = SecretStr(\n            NotifyMattermost(\n                token=self.token.get_secret_value(),\n                fullpath=self.path,\n                host=self.hostname,\n                botname=self.botname,\n                channels=self.channels,\n                include_image=self.include_image,\n                port=self.port,\n            ).url()\n        )\n        self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MicrosoftTeamsWebhook","title":"MicrosoftTeamsWebhook","text":"

Bases: AppriseNotificationBlock

Enables sending notifications via a provided Microsoft Teams webhook.

Examples:

Load a saved Teams webhook and send a message:

from prefect.blocks.notifications import MicrosoftTeamsWebhook\nteams_webhook_block = MicrosoftTeamsWebhook.load(\"BLOCK_NAME\")\nteams_webhook_block.notify(\"Hello from Prefect!\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class MicrosoftTeamsWebhook(AppriseNotificationBlock):\n\"\"\"\n    Enables sending notifications via a provided Microsoft Teams webhook.\n\n    Examples:\n        Load a saved Teams webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import MicrosoftTeamsWebhook\n        teams_webhook_block = MicrosoftTeamsWebhook.load(\"BLOCK_NAME\")\n        teams_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Microsoft Teams Webhook\"\n    _block_type_slug = \"ms-teams-webhook\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/6n0dSTBzwoVPhX8Vgg37i7/9040e07a62def4f48242be3eae6d3719/teams_logo.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MicrosoftTeamsWebhook\"\n\n    url: SecretStr = Field(\n        ...,\n        title=\"Webhook URL\",\n        description=\"The Teams incoming webhook URL used to send notifications.\",\n        example=(\n            \"https://your-org.webhook.office.com/webhookb2/XXX/IncomingWebhook/YYY/ZZZ\"\n        ),\n    )\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.OpsgenieWebhook","title":"OpsgenieWebhook","text":"

Bases: AbstractAppriseNotificationBlock

Enables sending notifications via a provided Opsgenie webhook. See Apprise notify_opsgenie docs for more info on formatting the URL.

Examples:

Load a saved Opsgenie webhook and send a message:

from prefect.blocks.notifications import OpsgenieWebhook\nopsgenie_webhook_block = OpsgenieWebhook.load(\"BLOCK_NAME\")\nopsgenie_webhook_block.notify(\"Hello from Prefect!\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class OpsgenieWebhook(AbstractAppriseNotificationBlock):\n\"\"\"\n    Enables sending notifications via a provided Opsgenie webhook.\n    See [Apprise notify_opsgenie docs](https://github.com/caronc/apprise/wiki/Notify_opsgenie)\n    for more info on formatting the URL.\n\n    Examples:\n        Load a saved Opsgenie webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import OpsgenieWebhook\n        opsgenie_webhook_block = OpsgenieWebhook.load(\"BLOCK_NAME\")\n        opsgenie_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _description = \"Enables sending notifications via a provided Opsgenie webhook.\"\n\n    _block_type_name = \"Opsgenie Webhook\"\n    _block_type_slug = \"opsgenie-webhook\"\n    _logo_url = \"https://images.ctfassets.net/sahxz1jinscj/3habq8fTzmplh7Ctkppk4/590cecb73f766361fcea9223cd47bad8/opsgenie.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.OpsgenieWebhook\"\n\n    apikey: SecretStr = Field(\n        default=...,\n        title=\"API Key\",\n        description=\"The API Key associated with your Opsgenie account.\",\n    )\n\n    target_user: Optional[List] = Field(\n        default=None, description=\"The user(s) you wish to notify.\"\n    )\n\n    target_team: Optional[List] = Field(\n        default=None, description=\"The team(s) you wish to notify.\"\n    )\n\n    target_schedule: Optional[List] = Field(\n        default=None, description=\"The schedule(s) you wish to notify.\"\n    )\n\n    target_escalation: Optional[List] = Field(\n        default=None, description=\"The escalation(s) you wish to notify.\"\n    )\n\n    region_name: Literal[\"us\", \"eu\"] = Field(\n        default=\"us\", description=\"The 2-character region code.\"\n    )\n\n    batch: bool = Field(\n        default=False,\n        description=\"Notify all targets in batches (instead of individually).\",\n    )\n\n    tags: Optional[List] = Field(\n        default=None,\n        description=(\n            \"A comma-separated list of tags you can associate with your Opsgenie\"\n            \" message.\"\n        ),\n        example='[\"tag1\", \"tag2\"]',\n    )\n\n    priority: Optional[str] = Field(\n        default=3,\n        description=(\n            \"The priority to associate with the message. 
It is on a scale between 1\"\n            \" (LOW) and 5 (EMERGENCY).\"\n        ),\n    )\n\n    alias: Optional[str] = Field(\n        default=None, description=\"The alias to associate with the message.\"\n    )\n\n    entity: Optional[str] = Field(\n        default=None, description=\"The entity to associate with the message.\"\n    )\n\n    details: Optional[Dict[str, str]] = Field(\n        default=None,\n        description=\"Additional details composed of key/values pairs.\",\n        example='{\"key1\": \"value1\", \"key2\": \"value2\"}',\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifyOpsgenie import NotifyOpsgenie\n\n        targets = []\n        if self.target_user:\n            [targets.append(f\"@{x}\") for x in self.target_user]\n        if self.target_team:\n            [targets.append(f\"#{x}\") for x in self.target_team]\n        if self.target_schedule:\n            [targets.append(f\"*{x}\") for x in self.target_schedule]\n        if self.target_escalation:\n            [targets.append(f\"^{x}\") for x in self.target_escalation]\n        url = SecretStr(\n            NotifyOpsgenie(\n                apikey=self.apikey.get_secret_value(),\n                targets=targets,\n                region_name=self.region_name,\n                details=self.details,\n                priority=self.priority,\n                alias=self.alias,\n                entity=self.entity,\n                batch=self.batch,\n                tags=self.tags,\n            ).url()\n        )\n        self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.PagerDutyWebHook","title":"PagerDutyWebHook","text":"

Bases: AbstractAppriseNotificationBlock

Enables sending notifications via a provided PagerDuty webhook. See Apprise notify_pagerduty docs for more info on formatting the URL.

Examples:

Load a saved PagerDuty webhook and send a message:

from prefect.blocks.notifications import PagerDutyWebHook\npagerduty_webhook_block = PagerDutyWebHook.load(\"BLOCK_NAME\")\npagerduty_webhook_block.notify(\"Hello from Prefect!\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class PagerDutyWebHook(AbstractAppriseNotificationBlock):\n\"\"\"\n    Enables sending notifications via a provided PagerDuty webhook.\n    See [Apprise notify_pagerduty docs](https://github.com/caronc/apprise/wiki/Notify_pagerduty)\n    for more info on formatting the URL.\n\n    Examples:\n        Load a saved PagerDuty webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import PagerDutyWebHook\n        pagerduty_webhook_block = PagerDutyWebHook.load(\"BLOCK_NAME\")\n        pagerduty_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _description = \"Enables sending notifications via a provided PagerDuty webhook.\"\n\n    _block_type_name = \"Pager Duty Webhook\"\n    _block_type_slug = \"pager-duty-webhook\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/6FHJ4Lcozjfl1yDPxCvQDT/c2f6bdf47327271c068284897527f3da/PagerDuty-Logo.wine.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.PagerDutyWebHook\"\n\n    # The default cannot be prefect_default because NotifyPagerDuty's\n    # PAGERDUTY_SEVERITY_MAP only has these notify types defined as keys\n    notify_type: Literal[\"info\", \"success\", \"warning\", \"failure\"] = Field(\n        default=\"info\", description=\"The severity of the notification.\"\n    )\n\n    integration_key: SecretStr = Field(\n        default=...,\n        description=(\n            \"This can be found on the Events API V2 \"\n            \"integration's detail page, and is also referred to as a Routing Key. \"\n            \"This must be provided alongside `api_key`, but will error if provided \"\n            \"alongside `url`.\"\n        ),\n    )\n\n    api_key: SecretStr = Field(\n        default=...,\n        title=\"API Key\",\n        description=(\n            \"This can be found under Integrations. 
\"\n            \"This must be provided alongside `integration_key`, but will error if \"\n            \"provided alongside `url`.\"\n        ),\n    )\n\n    source: Optional[str] = Field(\n        default=\"Prefect\", description=\"The source string as part of the payload.\"\n    )\n\n    component: str = Field(\n        default=\"Notification\",\n        description=\"The component string as part of the payload.\",\n    )\n\n    group: Optional[str] = Field(\n        default=None, description=\"The group string as part of the payload.\"\n    )\n\n    class_id: Optional[str] = Field(\n        default=None,\n        title=\"Class ID\",\n        description=\"The class string as part of the payload.\",\n    )\n\n    region_name: Literal[\"us\", \"eu\"] = Field(\n        default=\"us\", description=\"The region name.\"\n    )\n\n    clickable_url: Optional[AnyHttpUrl] = Field(\n        default=None,\n        title=\"Clickable URL\",\n        description=\"A clickable URL to associate with the notice.\",\n    )\n\n    include_image: bool = Field(\n        default=True,\n        description=\"Associate the notification status via a represented icon.\",\n    )\n\n    custom_details: Optional[Dict[str, str]] = Field(\n        default=None,\n        description=\"Additional details to include as part of the payload.\",\n        example='{\"disk_space_left\": \"145GB\"}',\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifyPagerDuty import NotifyPagerDuty\n\n        url = SecretStr(\n            NotifyPagerDuty(\n                apikey=self.api_key.get_secret_value(),\n                integrationkey=self.integration_key.get_secret_value(),\n                source=self.source,\n                component=self.component,\n                group=self.group,\n                class_id=self.class_id,\n                region_name=self.region_name,\n                click=self.clickable_url,\n                include_image=self.include_image,\n                details=self.custom_details,\n            ).url()\n        )\n        self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SendgridEmail","title":"SendgridEmail","text":"

Bases: AbstractAppriseNotificationBlock

Enables sending notifications via a SendGrid account. See the Apprise Notify_sendgrid docs.

Examples:

Load a saved SendgridEmail block and send an email message:

from prefect.blocks.notifications import SendgridEmail\n\nsendgrid_block = SendgridEmail.load(\"BLOCK_NAME\")\n\nsendgrid_block.notify(\"Hello from Prefect!\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class SendgridEmail(AbstractAppriseNotificationBlock):\n\"\"\"\n    Enables sending notifications via any sendgrid account.\n    See [Apprise Notify_sendgrid docs](https://github.com/caronc/apprise/wiki/Notify_Sendgrid)\n\n    Examples:\n        Load a saved Sendgrid and send a email message:\n        ```python\n        from prefect.blocks.notifications import SendgridEmail\n\n        sendgrid_block = SendgridEmail.load(\"BLOCK_NAME\")\n\n        sendgrid_block.notify(\"Hello from Prefect!\")\n    \"\"\"\n\n    _description = \"Enables sending notifications via Sendgrid email service.\"\n    _block_type_name = \"Sendgrid Email\"\n    _block_type_slug = \"sendgrid-email\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/3PcxFuO9XUqs7wU9MiUBMg/af6affa646899cc1712d14b7fc4c0f1f/email__1_.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SendgridEmail\"\n\n    api_key: SecretStr = Field(\n        default=...,\n        title=\"API Key\",\n        description=\"The API Key associated with your sendgrid account.\",\n    )\n\n    sender_email: str = Field(\n        title=\"Sender email id\",\n        description=\"The sender email id.\",\n        example=\"test-support@gmail.com\",\n    )\n\n    to_emails: List[str] = Field(\n        default=...,\n        title=\"Recipient emails\",\n        description=\"Email ids of all recipients.\",\n        example=\"recipient1@gmail.com\",\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifySendGrid import NotifySendGrid\n\n        url = SecretStr(\n            NotifySendGrid(\n                apikey=self.api_key.get_secret_value(),\n                from_email=self.sender_email,\n                targets=self.to_emails,\n            ).url()\n        )\n\n        self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SlackWebhook","title":"SlackWebhook","text":"

Bases: AppriseNotificationBlock

Enables sending notifications via a provided Slack webhook.

Examples:

Load a saved Slack webhook and send a message:

from prefect.blocks.notifications import SlackWebhook\n\nslack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\nslack_webhook_block.notify(\"Hello from Prefect!\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class SlackWebhook(AppriseNotificationBlock):\n\"\"\"\n    Enables sending notifications via a provided Slack webhook.\n\n    Examples:\n        Load a saved Slack webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import SlackWebhook\n\n        slack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\n        slack_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Slack Webhook\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/7dkzINU9r6j44giEFuHuUC/85d4cd321ad60c1b1e898bc3fbd28580/5cb480cd5f1b6d3fbadece79.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SlackWebhook\"\n\n    url: SecretStr = Field(\n        default=...,\n        title=\"Webhook URL\",\n        description=\"Slack incoming webhook URL used to send notifications.\",\n        example=\"https://hooks.slack.com/XXX\",\n    )\n
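
Creating and saving the block ahead of time might look like this sketch (the webhook URL and block name are placeholders):

from prefect.blocks.notifications import SlackWebhook\n\nslack_webhook_block = SlackWebhook(url=\"https://hooks.slack.com/XXX\")\nslack_webhook_block.save(\"my-slack-webhook\")\nslack_webhook_block.notify(\"Hello from Prefect!\")\n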
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.TwilioSMS","title":"TwilioSMS","text":"

Bases: AbstractAppriseNotificationBlock

Enables sending notifications via Twilio SMS. Find more on sending Twilio SMS messages in the docs.

Examples:

Load a saved TwilioSMS block and send a message:

from prefect.blocks.notifications import TwilioSMS\ntwilio_webhook_block = TwilioSMS.load(\"BLOCK_NAME\")\ntwilio_webhook_block.notify(\"Hello from Prefect!\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class TwilioSMS(AbstractAppriseNotificationBlock):\n\"\"\"Enables sending notifications via Twilio SMS.\n    Find more on sending Twilio SMS messages in the [docs](https://www.twilio.com/docs/sms).\n\n    Examples:\n        Load a saved `TwilioSMS` block and send a message:\n        ```python\n        from prefect.blocks.notifications import TwilioSMS\n        twilio_webhook_block = TwilioSMS.load(\"BLOCK_NAME\")\n        twilio_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _description = \"Enables sending notifications via Twilio SMS.\"\n    _block_type_name = \"Twilio SMS\"\n    _block_type_slug = \"twilio-sms\"\n    _logo_url = \"https://images.ctfassets.net/zscdif0zqppk/YTCgPL6bnK3BczP2gV9md/609283105a7006c57dbfe44ee1a8f313/58482bb9cef1014c0b5e4a31.png?h=250\"  # noqa\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.TwilioSMS\"\n\n    account_sid: str = Field(\n        default=...,\n        description=(\n            \"The Twilio Account SID - it can be found on the homepage \"\n            \"of the Twilio console.\"\n        ),\n    )\n\n    auth_token: SecretStr = Field(\n        default=...,\n        description=(\n            \"The Twilio Authentication Token - \"\n            \"it can be found on the homepage of the Twilio console.\"\n        ),\n    )\n\n    from_phone_number: str = Field(\n        default=...,\n        description=\"The valid Twilio phone number to send the message from.\",\n        example=\"18001234567\",\n    )\n\n    to_phone_numbers: List[str] = Field(\n        default=...,\n        description=\"A list of valid Twilio phone number(s) to send the message to.\",\n        # not wrapped in brackets because of the way UI displays examples; in code should be [\"18004242424\"]\n        example=\"18004242424\",\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifyTwilio import NotifyTwilio\n\n        url = SecretStr(\n            NotifyTwilio(\n                account_sid=self.account_sid,\n                auth_token=self.auth_token.get_secret_value(),\n                source=self.from_phone_number,\n                targets=self.to_phone_numbers,\n            ).url()\n        )\n        self._start_apprise_client(url)\n
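
A minimal sketch of configuring and saving the block; the credentials, phone numbers, and block name are placeholders:

from prefect.blocks.notifications import TwilioSMS\n\ntwilio_sms_block = TwilioSMS(\n    account_sid=\"ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\",\n    auth_token=\"my-auth-token\",\n    from_phone_number=\"18001234567\",\n    to_phone_numbers=[\"18004242424\"],\n)\ntwilio_sms_block.save(\"my-twilio-sms\")\ntwilio_sms_block.notify(\"Hello from Prefect!\")\n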
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/system/","title":"prefect.blocks.system","text":"","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system","title":"prefect.blocks.system","text":"","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.DateTime","title":"DateTime","text":"

Bases: Block

A block that represents a datetime

Attributes:

value (pendulum.DateTime): An ISO 8601-compatible datetime value.

Example

Load a stored datetime value:

from prefect.blocks.system import DateTime\n\ndata_time_block = DateTime.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/system.py
class DateTime(Block):\n\"\"\"\n    A block that represents a datetime\n\n    Attributes:\n        value: An ISO 8601-compatible datetime value.\n\n    Example:\n        Load a stored JSON value:\n        ```python\n        from prefect.blocks.system import DateTime\n\n        data_time_block = DateTime.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Date Time\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/1gmljt5UBcAwEXHPnIofcE/0f3cf1da45b8b2df846e142ab52b1778/image21.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.DateTime\"\n\n    value: pendulum.DateTime = Field(\n        default=...,\n        description=\"An ISO 8601-compatible datetime value.\",\n    )\n
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.JSON","title":"JSON","text":"

Bases: Block

A block that represents JSON

Attributes:

value (Any): A JSON-compatible value.

Example

Load a stored JSON value:

from prefect.blocks.system import JSON\n\njson_block = JSON.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/system.py
class JSON(Block):\n\"\"\"\n    A block that represents JSON\n\n    Attributes:\n        value: A JSON-compatible value.\n\n    Example:\n        Load a stored JSON value:\n        ```python\n        from prefect.blocks.system import JSON\n\n        json_block = JSON.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/19W3Di10hhb4oma2Qer0x6/764d1e7b4b9974cd268c775a488b9d26/image16.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.JSON\"\n\n    value: Any = Field(default=..., description=\"A JSON-compatible value.\")\n
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.Secret","title":"Secret","text":"

Bases: Block

A block that represents a secret value. The value stored in this block will be obfuscated when this block is logged or shown in the UI.

Attributes:

value (SecretStr): A string value that should be kept secret.

Example
from prefect.blocks.system import Secret\n\nsecret_block = Secret.load(\"BLOCK_NAME\")\n\n# Access the stored secret\nsecret_block.get()\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/system.py
class Secret(Block):\n\"\"\"\n    A block that represents a secret value. The value stored in this block will be obfuscated when\n    this block is logged or shown in the UI.\n\n    Attributes:\n        value: A string value that should be kept secret.\n\n    Example:\n        ```python\n        from prefect.blocks.system import Secret\n\n        secret_block = Secret.load(\"BLOCK_NAME\")\n\n        # Access the stored secret\n        secret_block.get()\n        ```\n    \"\"\"\n\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5uUmyGBjRejYuGTWbTxz6E/3003e1829293718b3a5d2e909643a331/image8.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.Secret\"\n\n    value: SecretStr = Field(\n        default=..., description=\"A string value that should be kept secret.\"\n    )\n\n    def get(self):\n        return self.value.get_secret_value()\n
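
A sketch of creating and storing a secret programmatically (the block name and value are placeholders):

from prefect.blocks.system import Secret\n\n# the stored value is obfuscated when logged or shown in the UI\nSecret(value=\"my-api-token\").save(\"my-secret\")\n\n# retrieve the plaintext value later\nplain_value = Secret.load(\"my-secret\").get()\n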
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.String","title":"String","text":"

Bases: Block

A block that represents a string

Attributes:

value (str): A string value.

Example

Load a stored string value:

from prefect.blocks.system import String\n\nstring_block = String.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/system.py
class String(Block):\n\"\"\"\n    A block that represents a string\n\n    Attributes:\n        value: A string value.\n\n    Example:\n        Load a stored string value:\n        ```python\n        from prefect.blocks.system import String\n\n        string_block = String.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/4zjrZmh9tBrFiikeB44G4O/2ce1dbbac1c8e356f7c429e0f8bbb58d/image10.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.String\"\n\n    value: str = Field(default=..., description=\"A string value.\")\n
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/webhook/","title":"prefect.blocks.webhook","text":"","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/blocks/webhook/#prefect.blocks.webhook","title":"prefect.blocks.webhook","text":"","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/blocks/webhook/#prefect.blocks.webhook.Webhook","title":"Webhook","text":"

Bases: Block

Block that enables calling webhooks.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/webhook.py
class Webhook(Block):\n\"\"\"\n    Block that enables calling webhooks.\n    \"\"\"\n\n    _block_type_name = \"Webhook\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/6ciCsTFsvUAiiIvTllMfOU/627e9513376ca457785118fbba6a858d/webhook_icon_138018.png?h=250\"  # type: ignore\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/webhook/#prefect.blocks.webhook.Webhook\"\n\n    method: Literal[\"GET\", \"POST\", \"PUT\", \"PATCH\", \"DELETE\"] = Field(\n        default=\"POST\", description=\"The webhook request method. Defaults to `POST`.\"\n    )\n\n    url: SecretStr = Field(\n        default=...,\n        title=\"Webhook URL\",\n        description=\"The webhook URL.\",\n        example=\"https://hooks.slack.com/XXX\",\n    )\n\n    headers: SecretDict = Field(\n        default_factory=lambda: SecretDict(dict()),\n        title=\"Webhook Headers\",\n        description=\"A dictionary of headers to send with the webhook request.\",\n    )\n\n    def block_initialization(self):\n        self._client = AsyncClient(transport=_http_transport)\n\n    async def call(self, payload: Optional[dict] = None) -> Response:\n\"\"\"\n        Call the webhook.\n\n        Args:\n            payload: an optional payload to send when calling the webhook.\n        \"\"\"\n        async with self._client:\n            return await self._client.request(\n                method=self.method,\n                url=self.url.get_secret_value(),\n                headers=self.headers.get_secret_value(),\n                json=payload,\n            )\n
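
A minimal usage sketch (the URL, block name, and payload are hypothetical); call is async, so it is awaited from an async context:

import asyncio\n\nfrom prefect.blocks.webhook import Webhook\n\nwebhook_block = Webhook(url=\"https://hooks.example.com/XXX\")\nwebhook_block.save(\"my-webhook\")\n\nasync def send_ping():\n    # call() returns an httpx.Response\n    response = await webhook_block.call(payload={\"message\": \"Hello from Prefect!\"})\n    print(response.status_code)\n\nasyncio.run(send_ping())\n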
","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/blocks/webhook/#prefect.blocks.webhook.Webhook.call","title":"call async","text":"

Call the webhook.

Parameters:

payload (Optional[dict], default None): An optional payload to send when calling the webhook.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/webhook.py
async def call(self, payload: Optional[dict] = None) -> Response:\n\"\"\"\n    Call the webhook.\n\n    Args:\n        payload: an optional payload to send when calling the webhook.\n    \"\"\"\n    async with self._client:\n        return await self._client.request(\n            method=self.method,\n            url=self.url.get_secret_value(),\n            headers=self.headers.get_secret_value(),\n            json=payload,\n        )\n
","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/cli/agent/","title":"prefect.cli.agent","text":"","tags":["Python API","agents","CLI"]},{"location":"api-ref/prefect/cli/agent/#prefect.cli.agent","title":"prefect.cli.agent","text":"

Command line interface for working with agent services

","tags":["Python API","agents","CLI"]},{"location":"api-ref/prefect/cli/agent/#prefect.cli.agent.start","title":"start async","text":"

Start an agent process to poll one or more work queues for flow runs.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/agent.py
@agent_app.command()\nasync def start(\n    # deprecated main argument\n    work_queue: str = typer.Argument(\n        None,\n        show_default=False,\n        help=\"DEPRECATED: A work queue name or ID\",\n    ),\n    work_queues: List[str] = typer.Option(\n        None,\n        \"-q\",\n        \"--work-queue\",\n        help=\"One or more work queue names for the agent to pull from.\",\n    ),\n    work_queue_prefix: List[str] = typer.Option(\n        None,\n        \"-m\",\n        \"--match\",\n        help=(\n            \"Dynamically matches work queue names with the specified prefix for the\"\n            \" agent to pull from,for example `dev-` will match all work queues with a\"\n            \" name that starts with `dev-`\"\n        ),\n    ),\n    work_pool_name: str = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"A work pool name for the agent to pull from.\",\n    ),\n    hide_welcome: bool = typer.Option(False, \"--hide-welcome\"),\n    api: str = SettingsOption(PREFECT_API_URL),\n    run_once: bool = typer.Option(\n        False, help=\"Run the agent loop once, instead of forever.\"\n    ),\n    prefetch_seconds: int = SettingsOption(PREFECT_AGENT_PREFETCH_SECONDS),\n    # deprecated tags\n    tags: List[str] = typer.Option(\n        None,\n        \"-t\",\n        \"--tag\",\n        help=(\n            \"DEPRECATED: One or more optional tags that will be used to create a work\"\n            \" queue. This option will be removed on 2023-02-23.\"\n        ),\n    ),\n    limit: int = typer.Option(\n        None,\n        \"-l\",\n        \"--limit\",\n        help=\"Maximum number of flow runs to start simultaneously.\",\n    ),\n):\n\"\"\"\n    Start an agent process to poll one or more work queues for flow runs.\n    \"\"\"\n    work_queues = work_queues or []\n\n    if work_queue is not None:\n        # try to treat the work_queue as a UUID\n        try:\n            async with get_client() as client:\n                q = await client.read_work_queue(UUID(work_queue))\n                work_queue = q.name\n        # otherwise treat it as a string name\n        except (TypeError, ValueError):\n            pass\n        work_queues.append(work_queue)\n        app.console.print(\n            (\n                \"Agents now support multiple work queues. Instead of passing a single\"\n                \" argument, provide work queue names with the `-q` or `--work-queue`\"\n                f\" flag: `prefect agent start -q {work_queue}`\\n\"\n            ),\n            style=\"blue\",\n        )\n\n    if not work_queues and not tags and not work_queue_prefix and not work_pool_name:\n        exit_with_error(\"No work queues provided!\", style=\"red\")\n    elif bool(work_queues) + bool(tags) + bool(work_queue_prefix) > 1:\n        exit_with_error(\n            \"Only one of `work_queues`, `match`, or `tags` can be provided.\",\n            style=\"red\",\n        )\n    if work_pool_name and tags:\n        exit_with_error(\n            \"`tag` and `pool` options cannot be used together.\", style=\"red\"\n        )\n\n    if tags:\n        work_queue_name = f\"Agent queue {'-'.join(sorted(tags))}\"\n        app.console.print(\n            (\n                \"`tags` are deprecated. For backwards-compatibility with old versions\"\n                \" of Prefect, this agent will create a work queue named\"\n                f\" `{work_queue_name}` that uses legacy tag-based matching. 
This option\"\n                \" will be removed on 2023-02-23.\"\n            ),\n            style=\"red\",\n        )\n\n        async with get_client() as client:\n            try:\n                work_queue = await client.read_work_queue_by_name(work_queue_name)\n                if work_queue.filter is None:\n                    # ensure the work queue has legacy (deprecated) tag-based behavior\n                    await client.update_work_queue(filter=dict(tags=tags))\n            except ObjectNotFound:\n                # if the work queue doesn't already exist, we create it with tags\n                # to enable legacy (deprecated) tag-matching behavior\n                await client.create_work_queue(name=work_queue_name, tags=tags)\n\n        work_queues = [work_queue_name]\n\n    if not hide_welcome:\n        if api:\n            app.console.print(\n                f\"Starting v{prefect.__version__} agent connected to {api}...\"\n            )\n        else:\n            app.console.print(\n                f\"Starting v{prefect.__version__} agent with ephemeral API...\"\n            )\n\n    agent_process_id = os.getpid()\n    setup_signal_handlers_agent(\n        agent_process_id, \"the Prefect agent\", app.console.print\n    )\n\n    async with PrefectAgent(\n        work_queues=work_queues,\n        work_queue_prefix=work_queue_prefix,\n        work_pool_name=work_pool_name,\n        prefetch_seconds=prefetch_seconds,\n        limit=limit,\n    ) as agent:\n        if not hide_welcome:\n            app.console.print(ascii_name)\n            if work_pool_name:\n                app.console.print(\n                    \"Agent started! Looking for work from \"\n                    f\"work pool '{work_pool_name}'...\"\n                )\n            elif work_queue_prefix:\n                app.console.print(\n                    \"Agent started! Looking for work from \"\n                    f\"queue(s) that start with the prefix: {work_queue_prefix}...\"\n                )\n            else:\n                app.console.print(\n                    \"Agent started! Looking for work from \"\n                    f\"queue(s): {', '.join(work_queues)}...\"\n                )\n\n        async with anyio.create_task_group() as tg:\n            tg.start_soon(\n                partial(\n                    critical_service_loop,\n                    agent.get_and_submit_flow_runs,\n                    PREFECT_AGENT_QUERY_INTERVAL.value(),\n                    printer=app.console.print,\n                    run_once=run_once,\n                    jitter_range=0.3,\n                    backoff=4,  # Up to ~1 minute interval during backoff\n                )\n            )\n\n            tg.start_soon(\n                partial(\n                    critical_service_loop,\n                    agent.check_for_cancelled_flow_runs,\n                    PREFECT_AGENT_QUERY_INTERVAL.value() * 2,\n                    printer=app.console.print,\n                    run_once=run_once,\n                    jitter_range=0.3,\n                    backoff=4,\n                )\n            )\n\n    app.console.print(\"Agent stopped!\")\n
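
For example (the queue and pool names are placeholders), typical invocations look like:

prefect agent start -q my-queue\n\nprefect agent start --pool my-pool --limit 5\n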
","tags":["Python API","agents","CLI"]},{"location":"api-ref/prefect/cli/cloud-webhook/","title":"prefect.cli.cloud_webhook","text":"","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook","title":"prefect.cli.cloud.webhook","text":"

Command line interface for working with webhooks

","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.create","title":"create async","text":"

Create a new Cloud webhook

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def create(\n    webhook_name: str,\n    description: str = typer.Option(\n        \"\", \"--description\", \"-d\", help=\"Description of the webhook\"\n    ),\n    template: str = typer.Option(\n        None, \"--template\", \"-t\", help=\"Jinja2 template expression\"\n    ),\n):\n\"\"\"\n    Create a new Cloud webhook\n    \"\"\"\n    if not template:\n        exit_with_error(\n            \"Please provide a Jinja2 template expression in the --template flag \\nwhich\"\n            ' should define (at minimum) the following attributes: \\n{ \"event\":'\n            ' \"your.event.name\", \"resource\": { \"prefect.resource.id\":'\n            ' \"your.resource.id\" } }'\n            \" \\nhttps://docs.prefect.io/latest/cloud/webhooks/#webhook-templates\"\n        )\n\n    confirm_logged_in()\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        response = await client.request(\n            \"POST\",\n            \"/webhooks/\",\n            json={\n                \"name\": webhook_name,\n                \"description\": description,\n                \"template\": template,\n            },\n        )\n        app.console.print(f'Successfully created webhook {response[\"name\"]}')\n
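
For example (the webhook name, description, and event/resource values are placeholders), a template that satisfies the minimum requirements can be passed inline:

prefect cloud webhook create my-webhook -d \"Events from my service\" -t '{\"event\": \"your.event.name\", \"resource\": {\"prefect.resource.id\": \"your.resource.id\"}}'\n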
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.delete","title":"delete async","text":"

Delete an existing Cloud webhook

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def delete(webhook_id: UUID):\n\"\"\"\n    Delete an existing Cloud webhook\n    \"\"\"\n    confirm_logged_in()\n\n    confirm_delete = typer.confirm(\n        \"Are you sure you want to delete it? This cannot be undone.\"\n    )\n\n    if not confirm_delete:\n        return\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        await client.request(\"DELETE\", f\"/webhooks/{webhook_id}\")\n        app.console.print(f\"Successfully deleted webhook {webhook_id}\")\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.get","title":"get async","text":"

Retrieve a webhook by ID.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def get(webhook_id: UUID):\n\"\"\"\n    Retrieve a webhook by ID.\n    \"\"\"\n    confirm_logged_in()\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        webhook = await client.request(\"GET\", f\"/webhooks/{webhook_id}\")\n        display_table = _render_webhooks_into_table([webhook])\n        app.console.print(display_table)\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.ls","title":"ls async","text":"

Fetch and list all webhooks in your workspace

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def ls():\n\"\"\"\n    Fetch and list all webhooks in your workspace\n    \"\"\"\n    confirm_logged_in()\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        retrieved_webhooks = await client.request(\"POST\", \"/webhooks/filter\")\n        display_table = _render_webhooks_into_table(retrieved_webhooks)\n        app.console.print(display_table)\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.rotate","title":"rotate async","text":"

Rotate the URL for an existing Cloud webhook, in case it has been compromised.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def rotate(webhook_id: UUID):\n\"\"\"\n    Rotate url for an existing Cloud webhook, in case it has been compromised\n    \"\"\"\n    confirm_logged_in()\n\n    confirm_rotate = typer.confirm(\n        \"Are you sure you want to rotate? This will invalidate the old URL.\"\n    )\n\n    if not confirm_rotate:\n        return\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        response = await client.request(\"POST\", f\"/webhooks/{webhook_id}/rotate\")\n        app.console.print(f'Successfully rotated webhook URL to {response[\"slug\"]}')\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.toggle","title":"toggle async","text":"

Toggle the enabled status of an existing Cloud webhook

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def toggle(\n    webhook_id: UUID,\n):\n\"\"\"\n    Toggle the enabled status of an existing Cloud webhook\n    \"\"\"\n    confirm_logged_in()\n\n    status_lookup = {True: \"enabled\", False: \"disabled\"}\n\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        response = await client.request(\"GET\", f\"/webhooks/{webhook_id}\")\n        current_status = response[\"enabled\"]\n        new_status = not current_status\n\n        await client.request(\n            \"PATCH\", f\"/webhooks/{webhook_id}\", json={\"enabled\": new_status}\n        )\n        app.console.print(f\"Webhook is now {status_lookup[new_status]}\")\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.update","title":"update async","text":"

Partially update an existing Cloud webhook

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def update(\n    webhook_id: UUID,\n    webhook_name: str = typer.Option(None, \"--name\", \"-n\", help=\"Webhook name\"),\n    description: str = typer.Option(\n        None, \"--description\", \"-d\", help=\"Description of the webhook\"\n    ),\n    template: str = typer.Option(\n        None, \"--template\", \"-t\", help=\"Jinja2 template expression\"\n    ),\n):\n\"\"\"\n    Partially update an existing Cloud webhook\n    \"\"\"\n    confirm_logged_in()\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        response = await client.request(\"GET\", f\"/webhooks/{webhook_id}\")\n        update_payload = {\n            \"name\": webhook_name or response[\"name\"],\n            \"description\": description or response[\"description\"],\n            \"template\": template or response[\"template\"],\n        }\n\n        await client.request(\"PUT\", f\"/webhooks/{webhook_id}\", json=update_payload)\n        app.console.print(f\"Successfully updated webhook {webhook_id}\")\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud/","title":"prefect.cli.cloud","text":"","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud","title":"prefect.cli.cloud","text":"

Command line interface for interacting with Prefect Cloud

","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.login_api","title":"login_api = FastAPI(lifespan=lifespan) module-attribute","text":"

This small API server is used for data transmission during browser-based login.

","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.check_key_is_valid_for_login","title":"check_key_is_valid_for_login async","text":"

Attempt to use a key to see if it is valid

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/__init__.py
async def check_key_is_valid_for_login(key: str):\n\"\"\"\n    Attempt to use a key to see if it is valid\n    \"\"\"\n    async with get_cloud_client(api_key=key) as client:\n        try:\n            await client.read_workspaces()\n            return True\n        except CloudUnauthorizedError:\n            return False\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.login","title":"login async","text":"

Log in to Prefect Cloud. Creates a new profile configured to use the specified PREFECT_API_KEY. Uses a previously configured profile if it exists.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/__init__.py
@cloud_app.command()\nasync def login(\n    key: Optional[str] = typer.Option(\n        None, \"--key\", \"-k\", help=\"API Key to authenticate with Prefect\"\n    ),\n    workspace_handle: Optional[str] = typer.Option(\n        None,\n        \"--workspace\",\n        \"-w\",\n        help=(\n            \"Full handle of workspace, in format '<account_handle>/<workspace_handle>'\"\n        ),\n    ),\n):\n\"\"\"\n    Log in to Prefect Cloud.\n    Creates a new profile configured to use the specified PREFECT_API_KEY.\n    Uses a previously configured profile if it exists.\n    \"\"\"\n    if not is_interactive() and (not key or not workspace_handle):\n        exit_with_error(\n            \"When not using an interactive terminal, you must supply a `--key` and\"\n            \" `--workspace`.\"\n        )\n\n    profiles = load_profiles()\n    current_profile = get_settings_context().profile\n\n    if key and PREFECT_API_KEY.value() == key:\n        exit_with_success(\"This profile is already authenticated with that key.\")\n\n    already_logged_in_profiles = []\n    for name, profile in profiles.items():\n        profile_key = profile.settings.get(PREFECT_API_KEY)\n        if (\n            # If a key is provided, only show profiles with the same key\n            (key and profile_key == key)\n            # Otherwise, show all profiles with a key set\n            or (not key and profile_key is not None)\n            # Check that the key is usable to avoid suggesting unauthenticated profiles\n            and await check_key_is_valid_for_login(profile_key)\n        ):\n            already_logged_in_profiles.append(name)\n\n    current_profile_is_logged_in = current_profile.name in already_logged_in_profiles\n\n    if current_profile_is_logged_in:\n        app.console.print(\"It looks like you're already authenticated on this profile.\")\n        should_reauth = typer.confirm(\n            \"? Would you like to reauthenticate?\", default=False\n        )\n        if not should_reauth:\n            app.console.print(\"Using the existing authentication on this profile.\")\n            key = PREFECT_API_KEY.value()\n\n    elif already_logged_in_profiles:\n        app.console.print(\n            \"It looks like you're already authenticated with another profile.\"\n        )\n        if not typer.confirm(\n            \"? Would you like to reauthenticate with this profile?\", default=False\n        ):\n            if typer.confirm(\n                \"? 
Would you like to switch to an authenticated profile?\", default=True\n            ):\n                profile_name = prompt_select_from_list(\n                    app.console,\n                    \"Which authenticated profile would you like to switch to?\",\n                    already_logged_in_profiles,\n                )\n\n                profiles.set_active(profile_name)\n                save_profiles(profiles)\n                exit_with_success(\n                    f\"Switched to authenticated profile {profile_name!r}.\"\n                )\n            else:\n                return\n\n    if not key:\n        choice = prompt_select_from_list(\n            app.console,\n            \"How would you like to authenticate?\",\n            [\n                (\"browser\", \"Log in with a web browser\"),\n                (\"key\", \"Paste an API key\"),\n            ],\n        )\n\n        if choice == \"key\":\n            key = typer.prompt(\"Paste your API key\", hide_input=True)\n        elif choice == \"browser\":\n            key = await login_with_browser()\n\n    async with get_cloud_client(api_key=key) as client:\n        try:\n            workspaces = await client.read_workspaces()\n        except CloudUnauthorizedError:\n            if key.startswith(\"pcu\"):\n                help_message = (\n                    \"It looks like you're using API key from Cloud 1\"\n                    \" (https://cloud.prefect.io). Make sure that you generate API key\"\n                    \" using Cloud 2 (https://app.prefect.cloud)\"\n                )\n            elif not key.startswith(\"pnu\"):\n                help_message = \"Your key is not in our expected format.\"\n            else:\n                help_message = \"Please ensure your credentials are correct.\"\n            exit_with_error(\n                f\"Unable to authenticate with Prefect Cloud. {help_message}\"\n            )\n        except httpx.HTTPStatusError as exc:\n            exit_with_error(f\"Error connecting to Prefect Cloud: {exc!r}\")\n\n    if workspace_handle:\n        # Search for the given workspace\n        for workspace in workspaces:\n            if workspace.handle == workspace_handle:\n                break\n        else:\n            if workspaces:\n                hint = (\n                    \" Available workspaces:\"\n                    f\" {listrepr((w.handle for w in workspaces), ', ')}\"\n                )\n            else:\n                hint = \"\"\n\n            exit_with_error(f\"Workspace {workspace_handle!r} not found.\" + hint)\n    else:\n        # Prompt a switch if the number of workspaces is greater than one\n        prompt_switch_workspace = len(workspaces) > 1\n\n        current_workspace = get_current_workspace(workspaces)\n\n        # Confirm that we want to switch if the current profile is already logged in\n        if (\n            current_profile_is_logged_in and current_workspace is not None\n        ) and prompt_switch_workspace:\n            app.console.print(\n                f\"You are currently using workspace {current_workspace.handle!r}.\"\n            )\n            prompt_switch_workspace = typer.confirm(\n                \"? 
Would you like to switch workspaces?\", default=False\n            )\n\n        if prompt_switch_workspace:\n            workspace = prompt_select_from_list(\n                app.console,\n                \"Which workspace would you like to use?\",\n                [(workspace, workspace.handle) for workspace in workspaces],\n            )\n        else:\n            if current_workspace:\n                workspace = current_workspace\n            elif len(workspaces) > 0:\n                workspace = workspaces[0]\n            else:\n                exit_with_error(\n                    \"No workspaces found! Create a workspace at\"\n                    f\" {PREFECT_CLOUD_UI_URL.value()} and try again.\"\n                )\n\n    update_current_profile(\n        {\n            PREFECT_API_KEY: key,\n            PREFECT_API_URL: workspace.api_url(),\n        }\n    )\n\n    exit_with_success(\n        f\"Authenticated with Prefect Cloud! Using workspace {workspace.handle!r}.\"\n    )\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.login_with_browser","title":"login_with_browser async","text":"

Perform login using the browser.

On failure, this function will exit the process. On success, it will return an API key.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/__init__.py
async def login_with_browser() -> str:\n\"\"\"\n    Perform login using the browser.\n\n    On failure, this function will exit the process.\n    On success, it will return an API key.\n    \"\"\"\n\n    # Set up an event that the login API will toggle on startup\n    ready_event = login_api.extra[\"ready-event\"] = anyio.Event()\n\n    # Set up an event that the login API will set when a response comes from the UI\n    result_event = login_api.extra[\"result-event\"] = anyio.Event()\n\n    timeout_scope = None\n    async with anyio.create_task_group() as tg:\n        # Run a server in the background to get payload from the browser\n        server = await tg.start(serve_login_api, tg.cancel_scope)\n\n        # Wait for the login server to be ready\n        with anyio.fail_after(10):\n            await ready_event.wait()\n\n            # The server may not actually be serving as the lifespan is started first\n            while not server.started:\n                await anyio.sleep(0)\n\n        # Get the port the server is using\n        server_port = server.servers[0].sockets[0].getsockname()[1]\n        callback = urllib.parse.quote(f\"http://localhost:{server_port}\")\n        ui_login_url = (\n            PREFECT_CLOUD_UI_URL.value() + f\"/auth/client?callback={callback}\"\n        )\n\n        # Then open the authorization page in a new browser tab\n        app.console.print(\"Opening browser...\")\n        await run_sync_in_worker_thread(webbrowser.open_new_tab, ui_login_url)\n\n        # Wait for the response from the browser,\n        with anyio.move_on_after(120) as timeout_scope:\n            app.console.print(\"Waiting for response...\")\n            await result_event.wait()\n\n        # Uvicorn installs signal handlers, this is the cleanest way to shutdown the\n        # login API\n        raise_signal(signal.SIGINT)\n\n    result = login_api.extra.get(\"result\")\n    if not result:\n        if timeout_scope and timeout_scope.cancel_called:\n            exit_with_error(\"Timed out while waiting for authorization.\")\n        else:\n            exit_with_error(\"Aborted.\")\n\n    if result.type == \"success\":\n        return result.content.api_key\n    elif result.type == \"failure\":\n        exit_with_error(f\"Failed to log in. {result.content.reason}\")\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.logout","title":"logout async","text":"

Log out of the current workspace. Resets PREFECT_API_KEY and PREFECT_API_URL to their defaults.
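For example, to clear the Cloud credentials from the active profile:

$ prefect cloud logout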

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/__init__.py
@cloud_app.command()\nasync def logout():\n\"\"\"\n    Logout the current workspace.\n    Reset PREFECT_API_KEY and PREFECT_API_URL to default.\n    \"\"\"\n    current_profile = prefect.context.get_settings_context().profile\n    if current_profile is None:\n        exit_with_error(\"There is no current profile set.\")\n\n    if current_profile.settings.get(PREFECT_API_KEY) is None:\n        exit_with_error(\"Current profile is not logged into Prefect Cloud.\")\n\n    update_current_profile(\n        {\n            PREFECT_API_URL: None,\n            PREFECT_API_KEY: None,\n        },\n    )\n\n    exit_with_success(\"Logged out from Prefect Cloud.\")\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.ls","title":"ls async","text":"

List available workspaces.
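For example, assuming the workspace commands are mounted under the cloud command group:

$ prefect cloud workspace ls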

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/__init__.py
@workspace_app.command()\nasync def ls():\n\"\"\"List available workspaces.\"\"\"\n\n    confirm_logged_in()\n\n    async with get_cloud_client() as client:\n        try:\n            workspaces = await client.read_workspaces()\n        except CloudUnauthorizedError:\n            exit_with_error(\n                \"Unable to authenticate. Please ensure your credentials are correct.\"\n            )\n\n    current_workspace = get_current_workspace(workspaces)\n\n    table = Table(caption=\"* active workspace\")\n    table.add_column(\n        \"[#024dfd]Workspaces:\", justify=\"left\", style=\"#8ea0ae\", no_wrap=True\n    )\n\n    for workspace_handle in sorted(workspace.handle for workspace in workspaces):\n        if workspace_handle == current_workspace.handle:\n            table.add_row(f\"[green]* {workspace_handle}[/green]\")\n        else:\n            table.add_row(f\"  {workspace_handle}\")\n\n    app.console.print(table)\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.prompt_select_from_list","title":"prompt_select_from_list","text":"

Given a list of options, display the values to the user in a table and prompt them to select one.

Parameters:

options (Union[List[str], List[Tuple[Hashable, str]]], required): A list of options to present to the user. A list of tuples can be passed as key-value pairs; if a value is chosen, the key will be returned.

Returns:

str: the selected option

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/__init__.py
def prompt_select_from_list(\n    console, prompt: str, options: Union[List[str], List[Tuple[Hashable, str]]]\n) -> str:\n\"\"\"\n    Given a list of options, display the values to user in a table and prompt them\n    to select one.\n\n    Args:\n        options: A list of options to present to the user.\n            A list of tuples can be passed as key value pairs. If a value is chosen, the\n            key will be returned.\n\n    Returns:\n        str: the selected option\n    \"\"\"\n\n    current_idx = 0\n    selected_option = None\n\n    def build_table() -> Table:\n\"\"\"\n        Generate a table of options. The `current_idx` will be highlighted.\n        \"\"\"\n\n        table = Table(box=False, header_style=None, padding=(0, 0))\n        table.add_column(\n            f\"? [bold]{prompt}[/] [bright_blue][Use arrows to move; enter to select]\",\n            justify=\"left\",\n            no_wrap=True,\n        )\n\n        for i, option in enumerate(options):\n            if isinstance(option, tuple):\n                option = option[1]\n\n            if i == current_idx:\n                # Use blue for selected options\n                table.add_row(\"[bold][blue]> \" + option)\n            else:\n                table.add_row(\"  \" + option)\n        return table\n\n    with Live(build_table(), auto_refresh=False, console=console) as live:\n        while selected_option is None:\n            key = readchar.readkey()\n\n            if key == readchar.key.UP:\n                current_idx = current_idx - 1\n                # wrap to bottom if at the top\n                if current_idx < 0:\n                    current_idx = len(options) - 1\n            elif key == readchar.key.DOWN:\n                current_idx = current_idx + 1\n                # wrap to top if at the bottom\n                if current_idx >= len(options):\n                    current_idx = 0\n            elif key == readchar.key.CTRL_C:\n                # gracefully exit with no message\n                exit_with_error(\"\")\n            elif key == readchar.key.ENTER or key == readchar.key.CR:\n                selected_option = options[current_idx]\n                if isinstance(selected_option, tuple):\n                    selected_option = selected_option[0]\n\n            live.update(build_table(), refresh=True)\n\n        return selected_option\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.set","title":"set async","text":"

Set current workspace. Shows a workspace picker if no workspace is specified.
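For example, with a placeholder workspace handle:

$ prefect cloud workspace set --workspace "my-account/my-workspace"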

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/__init__.py
@workspace_app.command()\nasync def set(\n    workspace_handle: str = typer.Option(\n        None,\n        \"--workspace\",\n        \"-w\",\n        help=(\n            \"Full handle of workspace, in format '<account_handle>/<workspace_handle>'\"\n        ),\n    ),\n):\n\"\"\"Set current workspace. Shows a workspace picker if no workspace is specified.\"\"\"\n    confirm_logged_in()\n\n    async with get_cloud_client() as client:\n        try:\n            workspaces = await client.read_workspaces()\n        except CloudUnauthorizedError:\n            exit_with_error(\n                \"Unable to authenticate. Please ensure your credentials are correct.\"\n            )\n\n    if workspace_handle:\n        # Search for the given workspace\n        for workspace in workspaces:\n            if workspace.handle == workspace_handle:\n                break\n        else:\n            exit_with_error(f\"Workspace {workspace_handle!r} not found.\")\n    else:\n        workspace = prompt_select_from_list(\n            app.console,\n            \"Which workspace would you like to use?\",\n            [(workspace, workspace.handle) for workspace in workspaces],\n        )\n\n    profile = update_current_profile({PREFECT_API_URL: workspace.api_url()})\n\n    exit_with_success(\n        f\"Successfully set workspace to {workspace.handle!r} in profile\"\n        f\" {profile.name!r}.\"\n    )\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/concurrency_limit/","title":"prefect.cli.concurrency_limit","text":"","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit","title":"prefect.cli.concurrency_limit","text":"

Command line interface for working with concurrency limits.

","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.create","title":"create async","text":"

Create a concurrency limit against a tag.

This limit controls how many task runs with that tag may simultaneously be in a Running state.
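For example, to allow at most 10 concurrent Running task runs for a hypothetical small-instance tag:

$ prefect concurrency-limit create small-instance 10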

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def create(tag: str, concurrency_limit: int):\n\"\"\"\n    Create a concurrency limit against a tag.\n\n    This limit controls how many task runs with that tag may simultaneously be in a\n    Running state.\n    \"\"\"\n\n    async with get_client() as client:\n        await client.create_concurrency_limit(\n            tag=tag, concurrency_limit=concurrency_limit\n        )\n        await client.read_concurrency_limit_by_tag(tag)\n\n    app.console.print(\n        textwrap.dedent(\n            f\"\"\"\n            Created concurrency limit with properties:\n                tag - {tag!r}\n                concurrency_limit - {concurrency_limit}\n\n            Delete the concurrency limit:\n                prefect concurrency-limit delete {tag!r}\n\n            Inspect the concurrency limit:\n                prefect concurrency-limit inspect {tag!r}\n        \"\"\"\n        )\n    )\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.delete","title":"delete async","text":"

Delete the concurrency limit set on the specified tag.
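For example, with a hypothetical tag:

$ prefect concurrency-limit delete small-instance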

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def delete(tag: str):\n\"\"\"\n    Delete the concurrency limit set on the specified tag.\n    \"\"\"\n\n    async with get_client() as client:\n        try:\n            await client.delete_concurrency_limit_by_tag(tag=tag)\n        except ObjectNotFound:\n            exit_with_error(f\"No concurrency limit found for the tag: {tag}\")\n\n    exit_with_success(f\"Deleted concurrency limit set on the tag: {tag}\")\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.inspect","title":"inspect async","text":"

View details about a concurrency limit. active_slots shows a list of TaskRun IDs which are currently using a concurrency slot.
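For example, with a hypothetical tag:

$ prefect concurrency-limit inspect small-instance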

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def inspect(tag: str):\n\"\"\"\n    View details about a concurrency limit. `active_slots` shows a list of TaskRun IDs\n    which are currently using a concurrency slot.\n    \"\"\"\n\n    async with get_client() as client:\n        try:\n            result = await client.read_concurrency_limit_by_tag(tag=tag)\n        except ObjectNotFound:\n            exit_with_error(f\"No concurrency limit found for the tag: {tag}\")\n\n    trid_table = Table()\n    trid_table.add_column(\"Active Task Run IDs\", style=\"cyan\", no_wrap=True)\n\n    cl_table = Table(title=f\"Concurrency Limit ID: [red]{str(result.id)}\")\n    cl_table.add_column(\"Tag\", style=\"green\", no_wrap=True)\n    cl_table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n    cl_table.add_column(\"Created\", style=\"magenta\", no_wrap=True)\n    cl_table.add_column(\"Updated\", style=\"magenta\", no_wrap=True)\n\n    for trid in sorted(result.active_slots):\n        trid_table.add_row(str(trid))\n\n    cl_table.add_row(\n        str(result.tag),\n        str(result.concurrency_limit),\n        Pretty(pendulum.instance(result.created).diff_for_humans()),\n        Pretty(pendulum.instance(result.updated).diff_for_humans()),\n    )\n\n    group = Group(\n        cl_table,\n        trid_table,\n    )\n    app.console.print(Panel(group, expand=False))\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.ls","title":"ls async","text":"

View all concurrency limits.
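For example:

$ prefect concurrency-limit ls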

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def ls(limit: int = 15, offset: int = 0):\n\"\"\"\n    View all concurrency limits.\n    \"\"\"\n    table = Table(\n        title=\"Concurrency Limits\",\n        caption=\"inspect a concurrency limit to show active task run IDs\",\n    )\n    table.add_column(\"Tag\", style=\"green\", no_wrap=True)\n    table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n    table.add_column(\"Active Task Runs\", style=\"magenta\", no_wrap=True)\n\n    async with get_client() as client:\n        concurrency_limits = await client.read_concurrency_limits(\n            limit=limit, offset=offset\n        )\n\n    for cl in sorted(concurrency_limits, key=lambda c: c.updated, reverse=True):\n        table.add_row(\n            str(cl.tag),\n            str(cl.id),\n            str(cl.concurrency_limit),\n            str(len(cl.active_slots)),\n        )\n\n    app.console.print(table)\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.reset","title":"reset async","text":"

Resets the concurrency limit slots set on the specified tag.
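For example, with a hypothetical tag:

$ prefect concurrency-limit reset small-instance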

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def reset(tag: str):\n\"\"\"\n    Resets the concurrency limit slots set on the specified tag.\n    \"\"\"\n\n    async with get_client() as client:\n        try:\n            await client.reset_concurrency_limit_by_tag(tag=tag)\n        except ObjectNotFound:\n            exit_with_error(f\"No concurrency limit found for the tag: {tag}\")\n\n    exit_with_success(f\"Reset concurrency limit set on the tag: {tag}\")\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/config/","title":"prefect.cli.config","text":"","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config","title":"prefect.cli.config","text":"

Command line interface for working with profiles.

","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.set_","title":"set_","text":"

Change the value for a setting by setting the value in the current profile.
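For example, to point the current profile at a self-hosted API (example URL), using the VAR=VAL format expected by this command:

$ prefect config set PREFECT_API_URL="http://127.0.0.1:4200/api"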

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/config.py
@config_app.command(\"set\")\ndef set_(settings: List[str]):\n\"\"\"\n    Change the value for a setting by setting the value in the current profile.\n    \"\"\"\n    parsed_settings = {}\n    for item in settings:\n        try:\n            setting, value = item.split(\"=\", maxsplit=1)\n        except ValueError:\n            exit_with_error(\n                f\"Failed to parse argument {item!r}. Use the format 'VAR=VAL'.\"\n            )\n\n        if setting not in prefect.settings.SETTING_VARIABLES:\n            exit_with_error(f\"Unknown setting name {setting!r}.\")\n\n        # Guard against changing settings that tweak config locations\n        if setting in {\"PREFECT_HOME\", \"PREFECT_PROFILES_PATH\"}:\n            exit_with_error(\n                f\"Setting {setting!r} cannot be changed with this command. \"\n                \"Use an environment variable instead.\"\n            )\n\n        parsed_settings[setting] = value\n\n    try:\n        new_profile = prefect.settings.update_current_profile(parsed_settings)\n    except pydantic.ValidationError as exc:\n        for error in exc.errors():\n            setting = error[\"loc\"][0]\n            message = error[\"msg\"]\n            app.console.print(f\"Validation error for setting {setting!r}: {message}\")\n        exit_with_error(\"Invalid setting value.\")\n\n    for setting, value in parsed_settings.items():\n        app.console.print(f\"Set {setting!r} to {value!r}.\")\n        if setting in os.environ:\n            app.console.print(\n                f\"[yellow]{setting} is also set by an environment variable which will \"\n                f\"override your config value. Run `unset {setting}` to clear it.\"\n            )\n\n        if prefect.settings.SETTING_VARIABLES[setting].deprecated:\n            app.console.print(\n                f\"[yellow]{prefect.settings.SETTING_VARIABLES[setting].deprecated_message}.\"\n            )\n\n    exit_with_success(f\"Updated profile {new_profile.name!r}.\")\n
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.unset","title":"unset","text":"

Restore the default value for a setting.

Removes the setting from the current profile.
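For example:

$ prefect config unset PREFECT_API_URL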

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/config.py
@config_app.command()\ndef unset(settings: List[str]):\n\"\"\"\n    Restore the default value for a setting.\n\n    Removes the setting from the current profile.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    profile = profiles[prefect.context.get_settings_context().profile.name]\n    parsed = set()\n\n    for setting in settings:\n        if setting not in prefect.settings.SETTING_VARIABLES:\n            exit_with_error(f\"Unknown setting name {setting!r}.\")\n        # Cast to settings objects\n        parsed.add(prefect.settings.SETTING_VARIABLES[setting])\n\n    for setting in parsed:\n        if setting not in profile.settings:\n            exit_with_error(f\"{setting.name!r} is not set in profile {profile.name!r}.\")\n\n    profiles.update_profile(\n        name=profile.name, settings={setting: None for setting in parsed}\n    )\n\n    for setting in settings:\n        app.console.print(f\"Unset {setting!r}.\")\n\n        if setting in os.environ:\n            app.console.print(\n                f\"[yellow]{setting!r} is also set by an environment variable. \"\n                f\"Use `unset {setting}` to clear it.\"\n            )\n\n    prefect.settings.save_profiles(profiles)\n    exit_with_success(f\"Updated profile {profile.name!r}.\")\n
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.validate","title":"validate","text":"

Read and validate the current profile.

Deprecated settings will be automatically converted to new names unless both are set.
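For example:

$ prefect config validate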

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/config.py
@config_app.command()\ndef validate():\n\"\"\"\n    Read and validate the current profile.\n\n    Deprecated settings will be automatically converted to new names unless both are\n    set.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    profile = profiles[prefect.context.get_settings_context().profile.name]\n    changed = profile.convert_deprecated_renamed_settings()\n    for old, new in changed:\n        app.console.print(f\"Updated {old.name!r} to {new.name!r}.\")\n\n    for setting in profile.settings.keys():\n        if setting.deprecated:\n            app.console.print(f\"Found deprecated setting {setting.name!r}.\")\n\n    profile.validate_settings()\n\n    prefect.settings.save_profiles(profiles)\n    exit_with_success(\"Configuration valid!\")\n
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.view","title":"view","text":"

Display the current settings.
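For example, to include default values and unobfuscated secrets in the output (flags taken from the source below):

$ prefect config view --show-defaults --show-secrets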

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/config.py
@config_app.command()\ndef view(\n    show_defaults: Optional[bool] = typer.Option(\n        False, \"--show-defaults/--hide-defaults\", help=(show_defaults_help)\n    ),\n    show_sources: Optional[bool] = typer.Option(\n        True,\n        \"--show-sources/--hide-sources\",\n        help=(show_sources_help),\n    ),\n    show_secrets: Optional[bool] = typer.Option(\n        False,\n        \"--show-secrets/--hide-secrets\",\n        help=\"Toggle display of secrets setting values.\",\n    ),\n):\n\"\"\"\n    Display the current settings.\n    \"\"\"\n    context = prefect.context.get_settings_context()\n\n    # Get settings at each level, converted to a flat dictionary for easy comparison\n    default_settings = prefect.settings.get_default_settings()\n    env_settings = prefect.settings.get_settings_from_env()\n    current_profile_settings = context.settings\n\n    # Obfuscate secrets\n    if not show_secrets:\n        default_settings = default_settings.with_obfuscated_secrets()\n        env_settings = env_settings.with_obfuscated_secrets()\n        current_profile_settings = current_profile_settings.with_obfuscated_secrets()\n\n    # Display the profile first\n    app.console.print(f\"PREFECT_PROFILE={context.profile.name!r}\")\n\n    settings_output = []\n\n    # The combination of environment variables and profile settings that are in use\n    profile_overrides = current_profile_settings.dict(exclude_unset=True)\n\n    # Used to see which settings in current_profile_settings came from env vars\n    env_overrides = env_settings.dict(exclude_unset=True)\n\n    for key, value in profile_overrides.items():\n        source = \"env\" if env_overrides.get(key) is not None else \"profile\"\n        source_blurb = f\" (from {source})\" if show_sources else \"\"\n        settings_output.append(f\"{key}='{value}'{source_blurb}\")\n\n    if show_defaults:\n        for key, value in default_settings.dict().items():\n            if key not in profile_overrides:\n                source_blurb = \" (from defaults)\" if show_sources else \"\"\n                settings_output.append(f\"{key}='{value}'{source_blurb}\")\n\n    app.console.print(\"\\n\".join(sorted(settings_output)))\n
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/deployment/","title":"prefect.cli.deployment","text":"","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment","title":"prefect.cli.deployment","text":"

Command line interface for working with deployments.

","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.apply","title":"apply async","text":"

Create or update a deployment from a YAML file.
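For example, applying a previously built deployment file and uploading its files to the configured storage block (hypothetical filename):

$ prefect deployment apply ./repo_info-deployment.yaml --upload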

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
@deployment_app.command()\nasync def apply(\n    paths: List[str] = typer.Argument(\n        ...,\n        help=\"One or more paths to deployment YAML files.\",\n    ),\n    upload: bool = typer.Option(\n        False,\n        \"--upload\",\n        help=(\n            \"A flag that, when provided, uploads this deployment's files to remote\"\n            \" storage.\"\n        ),\n    ),\n    work_queue_concurrency: int = typer.Option(\n        None,\n        \"--limit\",\n        \"-l\",\n        help=(\n            \"Sets the concurrency limit on the work queue that handles this\"\n            \" deployment's runs\"\n        ),\n    ),\n):\n\"\"\"\n    Create or update a deployment from a YAML file.\n    \"\"\"\n    deployment = None\n    async with get_client() as client:\n        for path in paths:\n            try:\n                deployment = await Deployment.load_from_yaml(path)\n                app.console.print(\n                    f\"Successfully loaded {deployment.name!r}\", style=\"green\"\n                )\n            except Exception as exc:\n                exit_with_error(\n                    f\"'{path!s}' did not conform to deployment spec: {exc!r}\"\n                )\n\n            assert deployment\n\n            await create_work_queue_and_set_concurrency_limit(\n                deployment.work_queue_name,\n                deployment.work_pool_name,\n                work_queue_concurrency,\n            )\n\n            if upload:\n                if (\n                    deployment.storage\n                    and \"put-directory\" in deployment.storage.get_block_capabilities()\n                ):\n                    file_count = await deployment.upload_to_storage()\n                    if file_count:\n                        app.console.print(\n                            (\n                                f\"Successfully uploaded {file_count} files to\"\n                                f\" {deployment.location}\"\n                            ),\n                            style=\"green\",\n                        )\n                else:\n                    app.console.print(\n                        (\n                            f\"Deployment storage {deployment.storage} does not have\"\n                            \" upload capabilities; no files uploaded.\"\n                        ),\n                        style=\"red\",\n                    )\n            await check_work_pool_exists(\n                work_pool_name=deployment.work_pool_name, client=client\n            )\n\n            if client.server_type != ServerType.CLOUD and deployment.triggers:\n                app.console.print(\n                    (\n                        \"Deployment triggers are only supported on \"\n                        f\"Prefect Cloud. 
Triggers defined in {path!r} will be \"\n                        \"ignored.\"\n                    ),\n                    style=\"red\",\n                )\n\n            deployment_id = await deployment.apply()\n            app.console.print(\n                (\n                    f\"Deployment '{deployment.flow_name}/{deployment.name}'\"\n                    f\" successfully created with id '{deployment_id}'.\"\n                ),\n                style=\"green\",\n            )\n\n            if PREFECT_UI_URL:\n                app.console.print(\n                    \"View Deployment in UI:\"\n                    f\" {PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}\"\n                )\n\n            if deployment.work_pool_name is not None:\n                await _print_deployment_work_pool_instructions(\n                    work_pool_name=deployment.work_pool_name, client=client\n                )\n            elif deployment.work_queue_name is not None:\n                app.console.print(\n                    \"\\nTo execute flow runs from this deployment, start an agent that\"\n                    f\" pulls work from the {deployment.work_queue_name!r} work queue:\"\n                )\n                app.console.print(\n                    f\"$ prefect agent start -q {deployment.work_queue_name!r}\",\n                    style=\"blue\",\n                )\n            else:\n                app.console.print(\n                    (\n                        \"\\nThis deployment does not specify a work queue name, which\"\n                        \" means agents will not be able to pick up its runs. To add a\"\n                        \" work queue, edit the deployment spec and re-run this command,\"\n                        \" or visit the deployment in the UI.\"\n                    ),\n                    style=\"red\",\n                )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.build","title":"build async","text":"

Generate a deployment YAML file from a flow entrypoint of the form /path/to/file.py:flow_function.
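For example, building and immediately registering a deployment for a hypothetical entrypoint, with a name and work queue:

$ prefect deployment build ./my_flow.py:repo_info --name my-deployment --work-queue default --apply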

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
@deployment_app.command()\nasync def build(\n    entrypoint: str = typer.Argument(\n        ...,\n        help=(\n            \"The path to a flow entrypoint, in the form of\"\n            \" `./path/to/file.py:flow_func_name`\"\n        ),\n    ),\n    name: str = typer.Option(\n        None, \"--name\", \"-n\", help=\"The name to give the deployment.\"\n    ),\n    description: str = typer.Option(\n        None,\n        \"--description\",\n        \"-d\",\n        help=(\n            \"The description to give the deployment. If not provided, the description\"\n            \" will be populated from the flow's description.\"\n        ),\n    ),\n    version: str = typer.Option(\n        None, \"--version\", \"-v\", help=\"A version to give the deployment.\"\n    ),\n    tags: List[str] = typer.Option(\n        None,\n        \"-t\",\n        \"--tag\",\n        help=(\n            \"One or more optional tags to apply to the deployment. Note: tags are used\"\n            \" only for organizational purposes. For delegating work to agents, use the\"\n            \" --work-queue flag.\"\n        ),\n    ),\n    work_queue_name: str = typer.Option(\n        None,\n        \"-q\",\n        \"--work-queue\",\n        help=(\n            \"The work queue that will handle this deployment's runs. \"\n            \"It will be created if it doesn't already exist. Defaults to `None`. \"\n            \"Note that if a work queue is not set, work will not be scheduled.\"\n        ),\n    ),\n    work_pool_name: str = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The work pool that will handle this deployment's runs.\",\n    ),\n    work_queue_concurrency: int = typer.Option(\n        None,\n        \"--limit\",\n        \"-l\",\n        help=(\n            \"Sets the concurrency limit on the work queue that handles this\"\n            \" deployment's runs\"\n        ),\n    ),\n    infra_type: str = typer.Option(\n        None,\n        \"--infra\",\n        \"-i\",\n        help=\"The infrastructure type to use, prepopulated with defaults. For example: \"\n        + listrepr(builtin_infrastructure_types, sep=\", \"),\n    ),\n    infra_block: str = typer.Option(\n        None,\n        \"--infra-block\",\n        \"-ib\",\n        help=\"The slug of the infrastructure block to use as a template.\",\n    ),\n    overrides: List[str] = typer.Option(\n        None,\n        \"--override\",\n        help=(\n            \"One or more optional infrastructure overrides provided as a dot delimited\"\n            \" path, e.g., `env.env_key=env_value`\"\n        ),\n    ),\n    storage_block: str = typer.Option(\n        None,\n        \"--storage-block\",\n        \"-sb\",\n        help=(\n            \"The slug of a remote storage block. 
Use the syntax:\"\n            \" 'block_type/block_name', where block_type is one of 'github', 's3',\"\n            \" 'gcs', 'azure', 'smb', or a registered block from a library that\"\n            \" implements the WritableDeploymentStorage interface such as\"\n            \" 'gitlab-repository', 'bitbucket-repository', 's3-bucket',\"\n            \" 'gcs-bucket'\"\n        ),\n    ),\n    skip_upload: bool = typer.Option(\n        False,\n        \"--skip-upload\",\n        help=(\n            \"A flag that, when provided, skips uploading this deployment's files to\"\n            \" remote storage.\"\n        ),\n    ),\n    cron: str = typer.Option(\n        None,\n        \"--cron\",\n        help=\"A cron string that will be used to set a CronSchedule on the deployment.\",\n    ),\n    interval: int = typer.Option(\n        None,\n        \"--interval\",\n        help=(\n            \"An integer specifying an interval (in seconds) that will be used to set an\"\n            \" IntervalSchedule on the deployment.\"\n        ),\n    ),\n    interval_anchor: Optional[str] = typer.Option(\n        None, \"--anchor-date\", help=\"The anchor date for an interval schedule\"\n    ),\n    rrule: str = typer.Option(\n        None,\n        \"--rrule\",\n        help=\"An RRule that will be used to set an RRuleSchedule on the deployment.\",\n    ),\n    timezone: str = typer.Option(\n        None,\n        \"--timezone\",\n        help=\"Deployment schedule timezone string e.g. 'America/New_York'\",\n    ),\n    path: str = typer.Option(\n        None,\n        \"--path\",\n        help=(\n            \"An optional path to specify a subdirectory of remote storage to upload to,\"\n            \" or to point to a subdirectory of a locally stored flow.\"\n        ),\n    ),\n    output: str = typer.Option(\n        None,\n        \"--output\",\n        \"-o\",\n        help=\"An optional filename to write the deployment file to.\",\n    ),\n    _apply: bool = typer.Option(\n        False,\n        \"--apply\",\n        \"-a\",\n        help=(\n            \"An optional flag to automatically register the resulting deployment with\"\n            \" the API.\"\n        ),\n    ),\n    param: List[str] = typer.Option(\n        None,\n        \"--param\",\n        help=(\n            \"An optional parameter override, values are parsed as JSON strings e.g.\"\n            \" --param question=ultimate --param answer=42\"\n        ),\n    ),\n    params: str = typer.Option(\n        None,\n        \"--params\",\n        help=(\n            \"An optional parameter override in a JSON string format e.g.\"\n            ' --params=\\'{\"question\": \"ultimate\", \"answer\": 42}\\''\n        ),\n    ),\n):\n\"\"\"\n    Generate a deployment YAML from /path/to/file.py:flow_function\n    \"\"\"\n    # validate inputs\n    if not name:\n        exit_with_error(\n            \"A name for this deployment must be provided with the '--name' flag.\"\n        )\n\n    if len([value for value in (cron, rrule, interval) if value is not None]) > 1:\n        exit_with_error(\"Only one schedule type can be provided.\")\n\n    if infra_block and infra_type:\n        exit_with_error(\n            \"Only one of `infra` or `infra_block` can be provided, please choose one.\"\n        )\n\n    output_file = None\n    if output:\n        output_file = Path(output)\n        if output_file.suffix and output_file.suffix != \".yaml\":\n            exit_with_error(\"Output file must be a '.yaml' file.\")\n        else:\n            
output_file = output_file.with_suffix(\".yaml\")\n\n    # validate flow\n    try:\n        fpath, obj_name = entrypoint.rsplit(\":\", 1)\n    except ValueError as exc:\n        if str(exc) == \"not enough values to unpack (expected 2, got 1)\":\n            missing_flow_name_msg = (\n                \"Your flow entrypoint must include the name of the function that is\"\n                f\" the entrypoint to your flow.\\nTry {entrypoint}:<flow_name>\"\n            )\n            exit_with_error(missing_flow_name_msg)\n        else:\n            raise exc\n    try:\n        flow = await run_sync_in_worker_thread(load_flow_from_entrypoint, entrypoint)\n    except Exception as exc:\n        exit_with_error(exc)\n    app.console.print(f\"Found flow {flow.name!r}\", style=\"green\")\n    infra_overrides = {}\n    for override in overrides or []:\n        key, value = override.split(\"=\", 1)\n        infra_overrides[key] = value\n\n    if infra_block:\n        infrastructure = await Block.load(infra_block)\n    elif infra_type:\n        # Create an instance of the given type\n        infrastructure = Block.get_block_class_from_key(infra_type)()\n    else:\n        # will reset to a default of Process is no infra is present on the\n        # server-side definition of this deployment\n        infrastructure = None\n\n    if interval_anchor and not interval:\n        exit_with_error(\"An anchor date can only be provided with an interval schedule\")\n\n    schedule = None\n    if cron:\n        cron_kwargs = {\"cron\": cron, \"timezone\": timezone}\n        schedule = CronSchedule(\n            **{k: v for k, v in cron_kwargs.items() if v is not None}\n        )\n    elif interval:\n        interval_kwargs = {\n            \"interval\": timedelta(seconds=interval),\n            \"anchor_date\": interval_anchor,\n            \"timezone\": timezone,\n        }\n        schedule = IntervalSchedule(\n            **{k: v for k, v in interval_kwargs.items() if v is not None}\n        )\n    elif rrule:\n        try:\n            schedule = RRuleSchedule(**json.loads(rrule))\n            if timezone:\n                # override timezone if specified via CLI argument\n                schedule.timezone = timezone\n        except json.JSONDecodeError:\n            schedule = RRuleSchedule(rrule=rrule, timezone=timezone)\n\n    # parse storage_block\n    if storage_block:\n        block_type, block_name, *block_path = storage_block.split(\"/\")\n        if block_path and path:\n            exit_with_error(\n                \"Must provide a `path` explicitly or provide one on the storage block\"\n                \" specification, but not both.\"\n            )\n        elif not path:\n            path = \"/\".join(block_path)\n        storage_block = f\"{block_type}/{block_name}\"\n        storage = await Block.load(storage_block)\n    else:\n        storage = None\n\n    if create_default_ignore_file(path=\".\"):\n        app.console.print(\n            (\n                \"Default '.prefectignore' file written to\"\n                f\" {(Path('.') / '.prefectignore').absolute()}\"\n            ),\n            style=\"green\",\n        )\n\n    if param and (params is not None):\n        exit_with_error(\"Can only pass one of `param` or `params` options\")\n\n    parameters = dict()\n\n    if param:\n        for p in param or []:\n            k, unparsed_value = p.split(\"=\", 1)\n            try:\n                v = json.loads(unparsed_value)\n                app.console.print(\n                    f\"The 
parameter value {unparsed_value} is parsed as a JSON string\"\n                )\n            except json.JSONDecodeError:\n                v = unparsed_value\n            parameters[k] = v\n\n    if params is not None:\n        parameters = json.loads(params)\n\n    # set up deployment object\n    entrypoint = (\n        f\"{Path(fpath).absolute().relative_to(Path('.').absolute())}:{obj_name}\"\n    )\n\n    init_kwargs = dict(\n        path=path,\n        entrypoint=entrypoint,\n        version=version,\n        storage=storage,\n        infra_overrides=infra_overrides or {},\n    )\n\n    if parameters:\n        init_kwargs[\"parameters\"] = parameters\n\n    if description:\n        init_kwargs[\"description\"] = description\n\n    # if a schedule, tags, work_queue_name, or infrastructure are not provided via CLI,\n    # we let `build_from_flow` load them from the server\n    if schedule:\n        init_kwargs.update(schedule=schedule)\n    if tags:\n        init_kwargs.update(tags=tags)\n    if infrastructure:\n        init_kwargs.update(infrastructure=infrastructure)\n    if work_queue_name:\n        init_kwargs.update(work_queue_name=work_queue_name)\n    if work_pool_name:\n        init_kwargs.update(work_pool_name=work_pool_name)\n\n    deployment_loc = output_file or f\"{obj_name}-deployment.yaml\"\n    deployment = await Deployment.build_from_flow(\n        flow=flow,\n        name=name,\n        output=deployment_loc,\n        skip_upload=skip_upload,\n        apply=False,\n        **init_kwargs,\n    )\n    app.console.print(\n        f\"Deployment YAML created at '{Path(deployment_loc).absolute()!s}'.\",\n        style=\"green\",\n    )\n\n    await create_work_queue_and_set_concurrency_limit(\n        deployment.work_queue_name, deployment.work_pool_name, work_queue_concurrency\n    )\n\n    # we process these separately for informative output\n    if not skip_upload:\n        if (\n            deployment.storage\n            and \"put-directory\" in deployment.storage.get_block_capabilities()\n        ):\n            file_count = await deployment.upload_to_storage()\n            if file_count:\n                app.console.print(\n                    (\n                        f\"Successfully uploaded {file_count} files to\"\n                        f\" {deployment.location}\"\n                    ),\n                    style=\"green\",\n                )\n        else:\n            app.console.print(\n                (\n                    f\"Deployment storage {deployment.storage} does not have upload\"\n                    \" capabilities; no files uploaded.  
Pass --skip-upload to suppress\"\n                    \" this warning.\"\n                ),\n                style=\"green\",\n            )\n\n    if _apply:\n        async with get_client() as client:\n            await check_work_pool_exists(\n                work_pool_name=deployment.work_pool_name, client=client\n            )\n            deployment_id = await deployment.apply()\n            app.console.print(\n                (\n                    f\"Deployment '{deployment.flow_name}/{deployment.name}'\"\n                    f\" successfully created with id '{deployment_id}'.\"\n                ),\n                style=\"green\",\n            )\n            if deployment.work_pool_name is not None:\n                await _print_deployment_work_pool_instructions(\n                    work_pool_name=deployment.work_pool_name, client=client\n                )\n\n            elif deployment.work_queue_name is not None:\n                app.console.print(\n                    \"\\nTo execute flow runs from this deployment, start an agent that\"\n                    f\" pulls work from the {deployment.work_queue_name!r} work queue:\"\n                )\n                app.console.print(\n                    f\"$ prefect agent start -q {deployment.work_queue_name!r}\",\n                    style=\"blue\",\n                )\n            else:\n                app.console.print(\n                    (\n                        \"\\nThis deployment does not specify a work queue name, which\"\n                        \" means agents will not be able to pick up its runs. To add a\"\n                        \" work queue, edit the deployment spec and re-run this command,\"\n                        \" or visit the deployment in the UI.\"\n                    ),\n                    style=\"red\",\n                )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.delete","title":"delete async","text":"

Delete a deployment.

Examples:

$ prefect deployment delete test_flow/test_deployment
$ prefect deployment delete --id dfd3e220-a130-4149-9af6-8d487e02fea6

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
@deployment_app.command()\nasync def delete(\n    name: Optional[str] = typer.Argument(\n        None, help=\"A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\"\n    ),\n    deployment_id: Optional[str] = typer.Option(\n        None, \"--id\", help=\"A deployment id to search for if no name is given\"\n    ),\n):\n\"\"\"\n    Delete a deployment.\n\n    \\b\n    Examples:\n        \\b\n        $ prefect deployment delete test_flow/test_deployment\n        $ prefect deployment delete --id dfd3e220-a130-4149-9af6-8d487e02fea6\n    \"\"\"\n    async with get_client() as client:\n        if name is None and deployment_id is not None:\n            try:\n                await client.delete_deployment(deployment_id)\n                exit_with_success(f\"Deleted deployment '{deployment_id}'.\")\n            except ObjectNotFound:\n                exit_with_error(f\"Deployment {deployment_id!r} not found!\")\n        elif name is not None:\n            try:\n                deployment = await client.read_deployment_by_name(name)\n                await client.delete_deployment(deployment.id)\n                exit_with_success(f\"Deleted deployment '{name}'.\")\n            except ObjectNotFound:\n                exit_with_error(f\"Deployment {name!r} not found!\")\n        else:\n            exit_with_error(\"Must provide a deployment name or id\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.inspect","title":"inspect async","text":"

View details about a deployment.

Example:

$ prefect deployment inspect "hello-world/my-deployment"
{
    'id': '610df9c3-0fb4-4856-b330-67f588d20201',
    'created': '2022-08-01T18:36:25.192102+00:00',
    'updated': '2022-08-01T18:36:25.188166+00:00',
    'name': 'my-deployment',
    'description': None,
    'flow_id': 'b57b0aa2-ef3a-479e-be49-381fb0483b4e',
    'schedule': None,
    'is_schedule_active': True,
    'parameters': {'name': 'Marvin'},
    'tags': ['test'],
    'parameter_openapi_schema': {
        'title': 'Parameters',
        'type': 'object',
        'properties': {
            'name': {
                'title': 'name',
                'type': 'string'
            }
        },
        'required': ['name']
    },
    'storage_document_id': '63ef008f-1e5d-4e07-a0d4-4535731adb32',
    'infrastructure_document_id': '6702c598-7094-42c8-9785-338d2ec3a028',
    'infrastructure': {
        'type': 'process',
        'env': {},
        'labels': {},
        'name': None,
        'command': ['python', '-m', 'prefect.engine'],
        'stream_output': True
    }
}

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
@deployment_app.command()\nasync def inspect(name: str):\n\"\"\"\n    View details about a deployment.\n\n    \\b\n    Example:\n        \\b\n        $ prefect deployment inspect \"hello-world/my-deployment\"\n        {\n            'id': '610df9c3-0fb4-4856-b330-67f588d20201',\n            'created': '2022-08-01T18:36:25.192102+00:00',\n            'updated': '2022-08-01T18:36:25.188166+00:00',\n            'name': 'my-deployment',\n            'description': None,\n            'flow_id': 'b57b0aa2-ef3a-479e-be49-381fb0483b4e',\n            'schedule': None,\n            'is_schedule_active': True,\n            'parameters': {'name': 'Marvin'},\n            'tags': ['test'],\n            'parameter_openapi_schema': {\n                'title': 'Parameters',\n                'type': 'object',\n                'properties': {\n                    'name': {\n                        'title': 'name',\n                        'type': 'string'\n                    }\n                },\n                'required': ['name']\n            },\n            'storage_document_id': '63ef008f-1e5d-4e07-a0d4-4535731adb32',\n            'infrastructure_document_id': '6702c598-7094-42c8-9785-338d2ec3a028',\n            'infrastructure': {\n                'type': 'process',\n                'env': {},\n                'labels': {},\n                'name': None,\n                'command': ['python', '-m', 'prefect.engine'],\n                'stream_output': True\n            }\n        }\n\n    \"\"\"\n    assert_deployment_name_format(name)\n\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(name)\n        except ObjectNotFound:\n            exit_with_error(f\"Deployment {name!r} not found!\")\n\n        deployment_json = deployment.dict(json_compatible=True)\n\n        if deployment.infrastructure_document_id:\n            deployment_json[\"infrastructure\"] = Block._from_block_document(\n                await client.read_block_document(deployment.infrastructure_document_id)\n            ).dict(\n                exclude={\"_block_document_id\", \"_block_document_name\", \"_is_anonymous\"}\n            )\n\n        if client.server_type == ServerType.CLOUD:\n            deployment_json[\"automations\"] = [\n                a.dict()\n                for a in await client.read_resource_related_automations(\n                    f\"prefect.deployment.{deployment.id}\"\n                )\n            ]\n\n    app.console.print(Pretty(deployment_json))\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.ls","title":"ls async","text":"

View all deployments or deployments for specific flows.
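For example:

$ prefect deployment ls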

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
@deployment_app.command()\nasync def ls(flow_name: List[str] = None, by_created: bool = False):\n\"\"\"\n    View all deployments or deployments for specific flows.\n    \"\"\"\n    async with get_client() as client:\n        deployments = await client.read_deployments(\n            flow_filter=FlowFilter(name={\"any_\": flow_name}) if flow_name else None\n        )\n        flows = {\n            flow.id: flow\n            for flow in await client.read_flows(\n                flow_filter=FlowFilter(id={\"any_\": [d.flow_id for d in deployments]})\n            )\n        }\n\n    def sort_by_name_keys(d):\n        return flows[d.flow_id].name, d.name\n\n    def sort_by_created_key(d):\n        return pendulum.now(\"utc\") - d.created\n\n    table = Table(\n        title=\"Deployments\",\n    )\n    table.add_column(\"Name\", style=\"blue\", no_wrap=True)\n    table.add_column(\"ID\", style=\"cyan\", no_wrap=True)\n\n    for deployment in sorted(\n        deployments, key=sort_by_created_key if by_created else sort_by_name_keys\n    ):\n        table.add_row(\n            f\"{flows[deployment.flow_id].name}/[bold]{deployment.name}[/]\",\n            str(deployment.id),\n        )\n\n    app.console.print(table)\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.pause_schedule","title":"pause_schedule async","text":"

Pause schedule of a given deployment.
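For example, using the <FLOW_NAME>/<DEPLOYMENT_NAME> format (example name):

$ prefect deployment pause-schedule "hello-world/my-deployment"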

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
@deployment_app.command(\"pause-schedule\")\nasync def pause_schedule(\n    name: str,\n):\n\"\"\"\n    Pause schedule of a given deployment.\n    \"\"\"\n    assert_deployment_name_format(name)\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(name)\n        except ObjectNotFound:\n            exit_with_error(f\"Deployment {name!r} not found!\")\n\n        await client.update_deployment(deployment, is_schedule_active=False)\n        exit_with_success(f\"Paused schedule for deployment {name}\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.resume_schedule","title":"resume_schedule async","text":"

Resume schedule of a given deployment.
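For example:

$ prefect deployment resume-schedule "hello-world/my-deployment"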

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
@deployment_app.command(\"resume-schedule\")\nasync def resume_schedule(\n    name: str,\n):\n\"\"\"\n    Resume schedule of a given deployment.\n    \"\"\"\n    assert_deployment_name_format(name)\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(name)\n        except ObjectNotFound:\n            exit_with_error(f\"Deployment {name!r} not found!\")\n\n        await client.update_deployment(deployment, is_schedule_active=True)\n        exit_with_success(f\"Resumed schedule for deployment {name}\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.run","title":"run async","text":"

Create a flow run for the given flow and deployment.

The flow run will be scheduled to run immediately unless --start-in or --start-at is specified. The flow run will not execute until an agent starts.
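For example, scheduling a run of a hypothetical deployment ten minutes from now:

$ prefect deployment run "hello-world/my-deployment" --start-in "10 minutes"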

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
@deployment_app.command()\nasync def run(\n    name: Optional[str] = typer.Argument(\n        None, help=\"A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\"\n    ),\n    deployment_id: Optional[str] = typer.Option(\n        None, \"--id\", help=\"A deployment id to search for if no name is given\"\n    ),\n    params: List[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--param\",\n        help=(\n            \"A key, value pair (key=value) specifying a flow parameter. The value will\"\n            \" be interpreted as JSON. May be passed multiple times to specify multiple\"\n            \" parameter values.\"\n        ),\n    ),\n    multiparams: Optional[str] = typer.Option(\n        None,\n        \"--params\",\n        help=(\n            \"A mapping of parameters to values. To use a stdin, pass '-'. Any \"\n            \"parameters passed with `--param` will take precedence over these values.\"\n        ),\n    ),\n    start_in: Optional[str] = typer.Option(\n        None,\n        \"--start-in\",\n    ),\n    start_at: Optional[str] = typer.Option(\n        None,\n        \"--start-at\",\n    ),\n):\n\"\"\"\n    Create a flow run for the given flow and deployment.\n\n    The flow run will be scheduled to run immediately unless `--start-in` or `--start-at` is specified.\n    The flow run will not execute until an agent starts.\n    \"\"\"\n    import dateparser\n\n    now = pendulum.now(\"UTC\")\n\n    multi_params = {}\n    if multiparams:\n        if multiparams == \"-\":\n            multiparams = sys.stdin.read()\n            if not multiparams:\n                exit_with_error(\"No data passed to stdin\")\n\n        try:\n            multi_params = json.loads(multiparams)\n        except ValueError as exc:\n            exit_with_error(f\"Failed to parse JSON: {exc}\")\n\n    cli_params = _load_json_key_values(params, \"parameter\")\n    conflicting_keys = set(cli_params.keys()).intersection(multi_params.keys())\n    if conflicting_keys:\n        app.console.print(\n            \"The following parameters were specified by `--param` and `--params`, the \"\n            f\"`--param` value will be used: {conflicting_keys}\"\n        )\n    parameters = {**multi_params, **cli_params}\n\n    if start_in and start_at:\n        exit_with_error(\n            \"Only one of `--start-in` or `--start-at` can be set, not both.\"\n        )\n\n    elif start_in is None and start_at is None:\n        scheduled_start_time = now\n        human_dt_diff = \" (now)\"\n    else:\n        if start_in:\n            start_time_raw = \"in \" + start_in\n        else:\n            start_time_raw = \"at \" + start_at\n        with warnings.catch_warnings():\n            # PyTZ throws a warning based on dateparser usage of the library\n            # See https://github.com/scrapinghub/dateparser/issues/1089\n            warnings.filterwarnings(\"ignore\", module=\"dateparser\")\n\n            try:\n                start_time_parsed = dateparser.parse(\n                    start_time_raw,\n                    settings={\n                        \"TO_TIMEZONE\": \"UTC\",\n                        \"RETURN_AS_TIMEZONE_AWARE\": False,\n                        \"PREFER_DATES_FROM\": \"future\",\n                        \"RELATIVE_BASE\": datetime.fromtimestamp(now.timestamp()),\n                    },\n                )\n\n            except Exception as exc:\n                exit_with_error(f\"Failed to parse '{start_time_raw!r}': {exc!s}\")\n\n        if start_time_parsed is None:\n           
 exit_with_error(f\"Unable to parse scheduled start time {start_time_raw!r}.\")\n\n        scheduled_start_time = pendulum.instance(start_time_parsed)\n        human_dt_diff = (\n            \" (\" + pendulum.format_diff(scheduled_start_time.diff(now)) + \")\"\n        )\n\n    async with get_client() as client:\n        deployment = await get_deployment(client, name, deployment_id)\n        flow = await client.read_flow(deployment.flow_id)\n\n        deployment_parameters = deployment.parameter_openapi_schema[\"properties\"].keys()\n        unknown_keys = set(parameters.keys()).difference(deployment_parameters)\n        if unknown_keys:\n            available_parameters = (\n                (\n                    \"The following parameters are available on the deployment: \"\n                    + listrepr(deployment_parameters, sep=\", \")\n                )\n                if deployment_parameters\n                else \"This deployment does not accept parameters.\"\n            )\n\n            exit_with_error(\n                \"The following parameters were specified but not found on the \"\n                f\"deployment: {listrepr(unknown_keys, sep=', ')}\"\n                f\"\\n{available_parameters}\"\n            )\n\n        app.console.print(\n            f\"Creating flow run for deployment '{flow.name}/{deployment.name}'...\",\n        )\n\n        flow_run = await client.create_flow_run_from_deployment(\n            deployment.id,\n            parameters=parameters,\n            state=Scheduled(scheduled_time=scheduled_start_time),\n        )\n\n    if PREFECT_UI_URL:\n        run_url = f\"{PREFECT_UI_URL.value()}/flow-runs/flow-run/{flow_run.id}\"\n    else:\n        run_url = \"<no dashboard available>\"\n\n    datetime_local_tz = scheduled_start_time.in_tz(pendulum.tz.local_timezone())\n    scheduled_display = (\n        datetime_local_tz.to_datetime_string()\n        + \" \"\n        + datetime_local_tz.tzname()\n        + human_dt_diff\n    )\n\n    app.console.print(f\"Created flow run {flow_run.name!r}.\")\n    app.console.print(\n        textwrap.dedent(\n            f\"\"\"\n        \u2514\u2500\u2500 UUID: {flow_run.id}\n        \u2514\u2500\u2500 Parameters: {flow_run.parameters}\n        \u2514\u2500\u2500 Scheduled start time: {scheduled_display}\n        \u2514\u2500\u2500 URL: {run_url}\n        \"\"\"\n        ).strip()\n    )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.set_schedule","title":"set_schedule async","text":"

Set schedule for a given deployment.
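For example, setting a cron schedule on a hypothetical deployment (the cron expression and timezone are illustrative):

$ prefect deployment set-schedule "hello-world/my-deployment" --cron "0 9 * * *" --timezone "America/New_York"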

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
@deployment_app.command(\"set-schedule\")\nasync def set_schedule(\n    name: str,\n    interval: Optional[float] = typer.Option(\n        None,\n        \"--interval\",\n        help=\"An interval to schedule on, specified in seconds\",\n        min=0.0001,\n    ),\n    interval_anchor: Optional[str] = typer.Option(\n        None,\n        \"--anchor-date\",\n        help=\"The anchor date for an interval schedule\",\n    ),\n    rrule_string: Optional[str] = typer.Option(\n        None, \"--rrule\", help=\"Deployment schedule rrule string\"\n    ),\n    cron_string: Optional[str] = typer.Option(\n        None, \"--cron\", help=\"Deployment schedule cron string\"\n    ),\n    cron_day_or: Optional[str] = typer.Option(\n        None,\n        \"--day_or\",\n        help=\"Control how croniter handles `day` and `day_of_week` entries\",\n    ),\n    timezone: Optional[str] = typer.Option(\n        None,\n        \"--timezone\",\n        help=\"Deployment schedule timezone string e.g. 'America/New_York'\",\n    ),\n):\n\"\"\"\n    Set schedule for a given deployment.\n    \"\"\"\n    assert_deployment_name_format(name)\n\n    if sum(option is not None for option in [interval, rrule_string, cron_string]) != 1:\n        exit_with_error(\n            \"Exactly one of `--interval`, `--rrule`, or `--cron` must be provided.\"\n        )\n\n    if interval_anchor and not interval:\n        exit_with_error(\"An anchor date can only be provided with an interval schedule\")\n\n    if interval is not None:\n        if interval_anchor:\n            try:\n                pendulum.parse(interval_anchor)\n            except ValueError:\n                exit_with_error(\"The anchor date must be a valid date string.\")\n        interval_schedule = {\n            \"interval\": interval,\n            \"anchor_date\": interval_anchor,\n            \"timezone\": timezone,\n        }\n        updated_schedule = IntervalSchedule(\n            **{k: v for k, v in interval_schedule.items() if v is not None}\n        )\n\n    if cron_string is not None:\n        cron_schedule = {\n            \"cron\": cron_string,\n            \"day_or\": cron_day_or,\n            \"timezone\": timezone,\n        }\n        updated_schedule = CronSchedule(\n            **{k: v for k, v in cron_schedule.items() if v is not None}\n        )\n\n    if rrule_string is not None:\n        # a timezone in the `rrule_string` gets ignored by the RRuleSchedule constructor\n        if \"TZID\" in rrule_string and not timezone:\n            exit_with_error(\n                \"You can provide a timezone by providing a dict with a `timezone` key\"\n                \" to the --rrule option. E.g. 
{'rrule': 'FREQ=MINUTELY;INTERVAL=5',\"\n                \" 'timezone': 'America/New_York'}.\\nAlternatively, you can provide a\"\n                \" timezone by passing in a --timezone argument.\"\n            )\n        try:\n            updated_schedule = RRuleSchedule(**json.loads(rrule_string))\n            if timezone:\n                # override timezone if specified via CLI argument\n                updated_schedule.timezone = timezone\n        except json.JSONDecodeError:\n            updated_schedule = RRuleSchedule(rrule=rrule_string, timezone=timezone)\n\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(name)\n        except ObjectNotFound:\n            exit_with_error(f\"Deployment {name!r} not found!\")\n\n        await client.update_deployment(deployment, schedule=updated_schedule)\n        exit_with_success(\"Updated deployment schedule!\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.str_presenter","title":"str_presenter","text":"

Configures YAML for dumping multiline strings. Ref: https://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
def str_presenter(dumper, data):\n\"\"\"\n    configures yaml for dumping multiline strings\n    Ref: https://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data\n    \"\"\"\n    if len(data.splitlines()) > 1:  # check for multiline string\n        return dumper.represent_scalar(\"tag:yaml.org,2002:str\", data, style=\"|\")\n    return dumper.represent_scalar(\"tag:yaml.org,2002:str\", data)\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/dev/","title":"prefect.cli.dev","text":"","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev","title":"prefect.cli.dev","text":"

Command line interface for working with Prefect Server

","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.agent","title":"agent async","text":"

Starts a hot-reloading development agent process.
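For example, pulling from the default work queue:

$ prefect dev agent -q default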

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/dev.py
@dev_app.command()\nasync def agent(\n    api_url: str = SettingsOption(PREFECT_API_URL),\n    work_queues: List[str] = typer.Option(\n        [\"default\"],\n        \"-q\",\n        \"--work-queue\",\n        help=\"One or more work queue names for the agent to pull from.\",\n    ),\n):\n\"\"\"\n    Starts a hot-reloading development agent process.\n    \"\"\"\n    # Delayed import since this is only a 'dev' dependency\n    import watchfiles\n\n    app.console.print(\"Creating hot-reloading agent process...\")\n\n    try:\n        await watchfiles.arun_process(\n            prefect.__module_path__,\n            target=agent_process_entrypoint,\n            kwargs=dict(api=api_url, work_queues=work_queues),\n        )\n    except RuntimeError as err:\n        # a bug in watchfiles causes an 'Already borrowed' error from Rust when\n        # exiting: https://github.com/samuelcolvin/watchfiles/issues/200\n        if str(err).strip() != \"Already borrowed\":\n            raise\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.agent_process_entrypoint","title":"agent_process_entrypoint","text":"

An entrypoint for starting an agent in a subprocess. Adds a Rich console to the Typer app, processes Typer default parameters, then starts an agent. All kwargs are forwarded to prefect.cli.agent.start.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/dev.py
def agent_process_entrypoint(**kwargs):\n\"\"\"\n    An entrypoint for starting an agent in a subprocess. Adds a Rich console\n    to the Typer app, processes Typer default parameters, then starts an agent.\n    All kwargs are forwarded to  `prefect.cli.agent.start`.\n    \"\"\"\n    import inspect\n\n    import rich\n\n    # import locally so only the `dev` command breaks if Typer internals change\n    from typer.models import ParameterInfo\n\n    # Typer does not process default parameters when calling a function\n    # directly, so we must set `start_agent`'s default parameters manually.\n    # get the signature of the `start_agent` function\n    start_agent_signature = inspect.signature(start_agent)\n\n    # for any arguments not present in kwargs, use the default value.\n    for name, param in start_agent_signature.parameters.items():\n        if name not in kwargs:\n            # All `param.default` values for start_agent are Typer params that store the\n            # actual default value in their `default` attribute and we must call\n            # `param.default.default` to get the actual default value. We should also\n            # ensure we extract the right default if non-Typer defaults are added\n            # to `start_agent` in the future.\n            if isinstance(param.default, ParameterInfo):\n                default = param.default.default\n            else:\n                default = param.default\n\n            # Some defaults are Prefect `SettingsOption.value` methods\n            # that must be called to get the actual value.\n            kwargs[name] = default() if callable(default) else default\n\n    # add a console, because calling the agent start function directly\n    # instead of via CLI call means `app` has no `console` attached.\n    app.console = (\n        rich.console.Console(\n            highlight=False,\n            color_system=\"auto\" if PREFECT_CLI_COLORS else None,\n            soft_wrap=not PREFECT_CLI_WRAP_LINES.value(),\n        )\n        if not getattr(app, \"console\", None)\n        else app.console\n    )\n\n    try:\n        start_agent(**kwargs)  # type: ignore\n    except KeyboardInterrupt:\n        # expected when watchfiles kills the process\n        pass\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.api","title":"api async","text":"

Starts a hot-reloading development API.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/dev.py
@dev_app.command()\nasync def api(\n    host: str = SettingsOption(PREFECT_SERVER_API_HOST),\n    port: int = SettingsOption(PREFECT_SERVER_API_PORT),\n    log_level: str = \"DEBUG\",\n    services: bool = True,\n):\n\"\"\"\n    Starts a hot-reloading development API.\n    \"\"\"\n    import watchfiles\n\n    server_env = os.environ.copy()\n    server_env[\"PREFECT_API_SERVICES_RUN_IN_APP\"] = str(services)\n    server_env[\"PREFECT_API_SERVICES_UI\"] = \"False\"\n    server_env[\"PREFECT_UI_API_URL\"] = f\"http://{host}:{port}/api\"\n\n    command = [\n        sys.executable,\n        \"-m\",\n        \"uvicorn\",\n        \"--factory\",\n        \"prefect.server.api.server:create_app\",\n        \"--host\",\n        str(host),\n        \"--port\",\n        str(port),\n        \"--log-level\",\n        log_level.lower(),\n    ]\n\n    app.console.print(f\"Running: {' '.join(command)}\")\n    import signal\n\n    stop_event = anyio.Event()\n    start_command = partial(\n        run_process, command=command, env=server_env, stream_output=True\n    )\n\n    async with anyio.create_task_group() as tg:\n        try:\n            server_pid = await tg.start(start_command)\n            async for _ in watchfiles.awatch(\n                prefect.__module_path__, stop_event=stop_event  # type: ignore\n            ):\n                # when any watched files change, restart the server\n                app.console.print(\"Restarting Prefect Server...\")\n                os.kill(server_pid, signal.SIGTERM)  # type: ignore\n                # start a new server\n                server_pid = await tg.start(start_command)\n        except RuntimeError as err:\n            # a bug in watchfiles causes an 'Already borrowed' error from Rust when\n            # exiting: https://github.com/samuelcolvin/watchfiles/issues/200\n            if str(err).strip() != \"Already borrowed\":\n                raise\n        except KeyboardInterrupt:\n            # exit cleanly on ctrl-c by killing the server process if it's\n            # still running\n            try:\n                os.kill(server_pid, signal.SIGTERM)  # type: ignore\n            except ProcessLookupError:\n                # process already exited\n                pass\n\n            stop_event.set()\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.build_docs","title":"build_docs","text":"

Builds REST API reference documentation for static display.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/dev.py
@dev_app.command()\ndef build_docs(\n    schema_path: str = None,\n):\n\"\"\"\n    Builds REST API reference documentation for static display.\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n\n    from prefect.server.api.server import create_app\n\n    schema = create_app(ephemeral=True).openapi()\n\n    if not schema_path:\n        schema_path = (\n            prefect.__development_base_path__ / \"docs\" / \"api-ref\" / \"schema.json\"\n        ).absolute()\n    # overwrite info for display purposes\n    schema[\"info\"] = {}\n    with open(schema_path, \"w\") as f:\n        json.dump(schema, f)\n    app.console.print(f\"OpenAPI schema written to {schema_path}\")\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.build_image","title":"build_image","text":"

Build a docker image for development.
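For example, previewing the docker build command without running it (assuming the command is exposed as `build-image` and that `--dry-run` follows Typer's default flag naming):

$ prefect dev build-image --dry-run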

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/dev.py
@dev_app.command()\ndef build_image(\n    arch: str = typer.Option(\n        None,\n        help=(\n            \"The architecture to build the container for. \"\n            \"Defaults to the architecture of the host Python. \"\n            f\"[default: {platform.machine()}]\"\n        ),\n    ),\n    python_version: str = typer.Option(\n        None,\n        help=(\n            \"The Python version to build the container for. \"\n            \"Defaults to the version of the host Python. \"\n            f\"[default: {python_version_minor()}]\"\n        ),\n    ),\n    flavor: str = typer.Option(\n        None,\n        help=(\n            \"An alternative flavor to build, for example 'conda'. \"\n            \"Defaults to the standard Python base image\"\n        ),\n    ),\n    dry_run: bool = False,\n):\n\"\"\"\n    Build a docker image for development.\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n    # TODO: Once https://github.com/tiangolo/typer/issues/354 is addresesd, the\n    #       default can be set in the function signature\n    arch = arch or platform.machine()\n    python_version = python_version or python_version_minor()\n\n    tag = get_prefect_image_name(python_version=python_version, flavor=flavor)\n\n    # Here we use a subprocess instead of the docker-py client to easily stream output\n    # as it comes\n    command = [\n        \"docker\",\n        \"build\",\n        str(prefect.__development_base_path__),\n        \"--tag\",\n        tag,\n        \"--platform\",\n        f\"linux/{arch}\",\n        \"--build-arg\",\n        \"PREFECT_EXTRAS=[dev]\",\n        \"--build-arg\",\n        f\"PYTHON_VERSION={python_version}\",\n    ]\n\n    if flavor:\n        command += [\"--build-arg\", f\"BASE_IMAGE=prefect-{flavor}\"]\n\n    if dry_run:\n        print(\" \".join(command))\n        return\n\n    try:\n        subprocess.check_call(command, shell=sys.platform == \"win32\")\n    except subprocess.CalledProcessError:\n        exit_with_error(\"Failed to build image!\")\n    else:\n        exit_with_success(f\"Built image {tag!r} for linux/{arch}\")\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.container","title":"container","text":"

Run a docker container with local code mounted and installed.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/dev.py
@dev_app.command()\ndef container(bg: bool = False, name=\"prefect-dev\", api: bool = True, tag: str = None):\n\"\"\"\n    Run a docker container with local code mounted and installed.\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n    import docker\n    from docker.models.containers import Container\n\n    client = docker.from_env()\n\n    containers = client.containers.list()\n    container_names = {container.name for container in containers}\n    if name in container_names:\n        exit_with_error(\n            f\"Container {name!r} already exists. Specify a different name or stop \"\n            \"the existing container.\"\n        )\n\n    blocking_cmd = \"prefect dev api\" if api else \"sleep infinity\"\n    tag = tag or get_prefect_image_name()\n\n    container: Container = client.containers.create(\n        image=tag,\n        command=[\n            \"/bin/bash\",\n            \"-c\",\n            (  # noqa\n                \"pip install -e /opt/prefect/repo\\\\[dev\\\\] && touch /READY &&\"\n                f\" {blocking_cmd}\"\n            ),\n        ],\n        name=name,\n        auto_remove=True,\n        working_dir=\"/opt/prefect/repo\",\n        volumes=[f\"{prefect.__development_base_path__}:/opt/prefect/repo\"],\n        shm_size=\"4G\",\n    )\n\n    print(f\"Starting container for image {tag!r}...\")\n    container.start()\n\n    print(\"Waiting for installation to complete\", end=\"\", flush=True)\n    try:\n        ready = False\n        while not ready:\n            print(\".\", end=\"\", flush=True)\n            result = container.exec_run(\"test -f /READY\")\n            ready = result.exit_code == 0\n            if not ready:\n                time.sleep(3)\n    except BaseException:\n        print(\"\\nInterrupted. Stopping container...\")\n        container.stop()\n        raise\n\n    print(\n        textwrap.dedent(\n            f\"\"\"\n            Container {container.name!r} is ready! To connect to the container, run:\n\n                docker exec -it {container.name} /bin/bash\n            \"\"\"\n        )\n    )\n\n    if bg:\n        print(\n            textwrap.dedent(\n                f\"\"\"\n                The container will run forever. Stop the container with:\n\n                    docker stop {container.name}\n                \"\"\"\n            )\n        )\n        # Exit without stopping\n        return\n\n    try:\n        print(\"Send a keyboard interrupt to exit...\")\n        container.wait()\n    except KeyboardInterrupt:\n        pass  # Avoid showing \"Abort\"\n    finally:\n        print(\"\\nStopping container...\")\n        container.stop()\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.kubernetes_manifest","title":"kubernetes_manifest","text":"

Generates a Kubernetes manifest for development.

Example

$ prefect dev kubernetes-manifest | kubectl apply -f -

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/dev.py
@dev_app.command()\ndef kubernetes_manifest():\n\"\"\"\n    Generates a Kubernetes manifest for development.\n\n    Example:\n        $ prefect dev kubernetes-manifest | kubectl apply -f -\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n\n    template = Template(\n        (\n            prefect.__module_path__ / \"cli\" / \"templates\" / \"kubernetes-dev.yaml\"\n        ).read_text()\n    )\n    manifest = template.substitute(\n        {\n            \"prefect_root_directory\": prefect.__development_base_path__,\n            \"image_name\": get_prefect_image_name(),\n        }\n    )\n    print(manifest)\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.start","title":"start async","text":"

Starts a hot-reloading development server with API, UI, and agent processes.

Each service has an individual command if you wish to start them separately. Each service can be excluded here as well.
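For example, starting only the hot-reloading API:

$ prefect dev start --no-ui --no-agent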

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/dev.py
@dev_app.command()\nasync def start(\n    exclude_api: bool = typer.Option(False, \"--no-api\"),\n    exclude_ui: bool = typer.Option(False, \"--no-ui\"),\n    exclude_agent: bool = typer.Option(False, \"--no-agent\"),\n    work_queues: List[str] = typer.Option(\n        [\"default\"],\n        \"-q\",\n        \"--work-queue\",\n        help=\"One or more work queue names for the dev agent to pull from.\",\n    ),\n):\n\"\"\"\n    Starts a hot-reloading development server with API, UI, and agent processes.\n\n    Each service has an individual command if you wish to start them separately.\n    Each service can be excluded here as well.\n    \"\"\"\n    async with anyio.create_task_group() as tg:\n        if not exclude_api:\n            tg.start_soon(\n                partial(\n                    api,\n                    host=PREFECT_SERVER_API_HOST.value(),\n                    port=PREFECT_SERVER_API_PORT.value(),\n                )\n            )\n        if not exclude_ui:\n            tg.start_soon(ui)\n        if not exclude_agent:\n            # Hook the agent to the hosted API if running\n            if not exclude_api:\n                host = f\"http://{PREFECT_SERVER_API_HOST.value()}:{PREFECT_SERVER_API_PORT.value()}/api\"  # noqa\n            else:\n                host = PREFECT_API_URL.value()\n            tg.start_soon(agent, host, work_queues)\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.ui","title":"ui async","text":"

Starts a hot-reloading development UI.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/dev.py
@dev_app.command()\nasync def ui():\n\"\"\"\n    Starts a hot-reloading development UI.\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n    with tmpchdir(prefect.__development_base_path__):\n        with tmpchdir(prefect.__development_base_path__ / \"ui\"):\n            app.console.print(\"Installing npm packages...\")\n            await run_process([\"npm\", \"install\"], stream_output=True)\n\n            app.console.print(\"Starting UI development server...\")\n            await run_process(command=[\"npm\", \"run\", \"serve\"], stream_output=True)\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/flow_run/","title":"prefect.cli.flow_run","text":"","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run","title":"prefect.cli.flow_run","text":"

Command line interface for working with flow runs

","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.cancel","title":"cancel async","text":"

Cancel a flow run by ID.
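For example, with a placeholder flow run ID:

$ prefect flow-run cancel <FLOW_RUN_ID>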

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/flow_run.py
@flow_run_app.command()\nasync def cancel(id: UUID):\n\"\"\"Cancel a flow run by ID.\"\"\"\n    async with get_client() as client:\n        cancelling_state = State(type=StateType.CANCELLING)\n        try:\n            result = await client.set_flow_run_state(\n                flow_run_id=id, state=cancelling_state\n            )\n        except ObjectNotFound:\n            exit_with_error(f\"Flow run '{id}' not found!\")\n\n    if result.status == SetStateStatus.ABORT:\n        exit_with_error(\n            f\"Flow run '{id}' was unable to be cancelled. Reason:\"\n            f\" '{result.details.reason}'\"\n        )\n\n    exit_with_success(f\"Flow run '{id}' was successfully scheduled for cancellation.\")\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.delete","title":"delete async","text":"

Delete a flow run by ID.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/flow_run.py
@flow_run_app.command()\nasync def delete(id: UUID):\n\"\"\"\n    Delete a flow run by ID.\n    \"\"\"\n    async with get_client() as client:\n        try:\n            await client.delete_flow_run(id)\n        except ObjectNotFound:\n            exit_with_error(f\"Flow run '{id}' not found!\")\n\n    exit_with_success(f\"Successfully deleted flow run '{id}'.\")\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.inspect","title":"inspect async","text":"

View details about a flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/flow_run.py
@flow_run_app.command()\nasync def inspect(id: UUID):\n\"\"\"\n    View details about a flow run.\n    \"\"\"\n    async with get_client() as client:\n        try:\n            flow_run = await client.read_flow_run(id)\n        except httpx.HTTPStatusError as exc:\n            if exc.response.status_code == status.HTTP_404_NOT_FOUND:\n                exit_with_error(f\"Flow run {id!r} not found!\")\n            else:\n                raise\n\n    app.console.print(Pretty(flow_run))\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.logs","title":"logs async","text":"

View logs for a flow run.
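For example, showing the 20 most recent log lines for a placeholder flow run ID:

$ prefect flow-run logs <FLOW_RUN_ID> --tail -n 20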

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/flow_run.py
@flow_run_app.command()\nasync def logs(\n    id: UUID,\n    head: bool = typer.Option(\n        False,\n        \"--head\",\n        \"-h\",\n        help=(\n            f\"Show the first {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS} logs instead of\"\n            \" all logs.\"\n        ),\n    ),\n    num_logs: int = typer.Option(\n        None,\n        \"--num-logs\",\n        \"-n\",\n        help=(\n            \"Number of logs to show when using the --head or --tail flag. If None,\"\n            f\" defaults to {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS}.\"\n        ),\n        min=1,\n    ),\n    reverse: bool = typer.Option(\n        False,\n        \"--reverse\",\n        \"-r\",\n        help=\"Reverse the logs order to print the most recent logs first\",\n    ),\n    tail: bool = typer.Option(\n        False,\n        \"--tail\",\n        \"-t\",\n        help=(\n            f\"Show the last {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS} logs instead of\"\n            \" all logs.\"\n        ),\n    ),\n):\n\"\"\"\n    View logs for a flow run.\n    \"\"\"\n    # Pagination - API returns max 200 (LOGS_DEFAULT_PAGE_SIZE) logs at a time\n    offset = 0\n    more_logs = True\n    num_logs_returned = 0\n\n    # if head and tail flags are being used together\n    if head and tail:\n        exit_with_error(\"Please provide either a `head` or `tail` option but not both.\")\n\n    user_specified_num_logs = (\n        num_logs or LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS\n        if head or tail or num_logs\n        else None\n    )\n\n    # if using tail update offset according to LOGS_DEFAULT_PAGE_SIZE\n    if tail:\n        offset = max(0, user_specified_num_logs - LOGS_DEFAULT_PAGE_SIZE)\n\n    log_filter = LogFilter(flow_run_id={\"any_\": [id]})\n\n    async with get_client() as client:\n        # Get the flow run\n        try:\n            flow_run = await client.read_flow_run(id)\n        except ObjectNotFound:\n            exit_with_error(f\"Flow run {str(id)!r} not found!\")\n\n        while more_logs:\n            num_logs_to_return_from_page = (\n                LOGS_DEFAULT_PAGE_SIZE\n                if user_specified_num_logs is None\n                else min(\n                    LOGS_DEFAULT_PAGE_SIZE, user_specified_num_logs - num_logs_returned\n                )\n            )\n\n            # Get the next page of logs\n            page_logs = await client.read_logs(\n                log_filter=log_filter,\n                limit=num_logs_to_return_from_page,\n                offset=offset,\n                sort=(\n                    LogSort.TIMESTAMP_DESC if reverse or tail else LogSort.TIMESTAMP_ASC\n                ),\n            )\n\n            for log in reversed(page_logs) if tail and not reverse else page_logs:\n                app.console.print(\n                    # Print following the flow run format (declared in logging.yml)\n                    (\n                        f\"{pendulum.instance(log.timestamp).to_datetime_string()}.{log.timestamp.microsecond // 1000:03d} |\"\n                        f\" {logging.getLevelName(log.level):7s} | Flow run\"\n                        f\" {flow_run.name!r} - {log.message}\"\n                    ),\n                    soft_wrap=True,\n                )\n\n            # Update the number of logs retrieved\n            num_logs_returned += num_logs_to_return_from_page\n\n            if tail:\n                #  If the current offset is not 0, update the offset for the next page\n                if offset != 0:\n                    offset 
= (\n                        0\n                        # Reset the offset to 0 if there are less logs than the LOGS_DEFAULT_PAGE_SIZE to get the remaining log\n                        if offset < LOGS_DEFAULT_PAGE_SIZE\n                        else offset - LOGS_DEFAULT_PAGE_SIZE\n                    )\n                else:\n                    more_logs = False\n            else:\n                if len(page_logs) == LOGS_DEFAULT_PAGE_SIZE:\n                    offset += LOGS_DEFAULT_PAGE_SIZE\n                else:\n                    # No more logs to show, exit\n                    more_logs = False\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.ls","title":"ls async","text":"

View recent flow runs or flow runs for specific flows.
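For example (the `--flow-name` and `--state-type` spellings assume Typer's default option naming for these parameters):

$ prefect flow-run ls --limit 10 --state-type COMPLETED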

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/flow_run.py
@flow_run_app.command()\nasync def ls(\n    flow_name: List[str] = typer.Option(None, help=\"Name of the flow\"),\n    limit: int = typer.Option(15, help=\"Maximum number of flow runs to list\"),\n    state: List[str] = typer.Option(None, help=\"Name of the flow run's state\"),\n    state_type: List[StateType] = typer.Option(\n        None, help=\"Type of the flow run's state\"\n    ),\n):\n\"\"\"\n    View recent flow runs or flow runs for specific flows\n    \"\"\"\n\n    state_filter = {}\n    if state:\n        state_filter[\"name\"] = {\"any_\": state}\n    if state_type:\n        state_filter[\"type\"] = {\"any_\": state_type}\n\n    async with get_client() as client:\n        flow_runs = await client.read_flow_runs(\n            flow_filter=FlowFilter(name={\"any_\": flow_name}) if flow_name else None,\n            flow_run_filter=FlowRunFilter(state=state_filter) if state_filter else None,\n            limit=limit,\n            sort=FlowRunSort.EXPECTED_START_TIME_DESC,\n        )\n        flows_by_id = {\n            flow.id: flow\n            for flow in await client.read_flows(\n                flow_filter=FlowFilter(id={\"any_\": [run.flow_id for run in flow_runs]})\n            )\n        }\n\n    table = Table(title=\"Flow Runs\")\n    table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Flow\", style=\"blue\", no_wrap=True)\n    table.add_column(\"Name\", style=\"green\", no_wrap=True)\n    table.add_column(\"State\", no_wrap=True)\n    table.add_column(\"When\", style=\"bold\", no_wrap=True)\n\n    for flow_run in sorted(flow_runs, key=lambda d: d.created, reverse=True):\n        flow = flows_by_id[flow_run.flow_id]\n        timestamp = (\n            flow_run.state.state_details.scheduled_time\n            if flow_run.state.is_scheduled()\n            else flow_run.state.timestamp\n        )\n        table.add_row(\n            str(flow_run.id),\n            str(flow.name),\n            str(flow_run.name),\n            str(flow_run.state.type.value),\n            pendulum.instance(timestamp).diff_for_humans(),\n        )\n\n    app.console.print(table)\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/profile/","title":"prefect.cli.profile","text":"","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile","title":"prefect.cli.profile","text":"

Command line interface for working with profiles.

","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.create","title":"create","text":"

Create a new profile.
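For example, copying an existing profile into a new one (profile names are illustrative):

$ prefect profile create staging --from default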

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/profile.py
@profile_app.command()\ndef create(\n    name: str,\n    from_name: str = typer.Option(None, \"--from\", help=\"Copy an existing profile.\"),\n):\n\"\"\"\n    Create a new profile.\n    \"\"\"\n\n    profiles = prefect.settings.load_profiles()\n    if name in profiles:\n        app.console.print(\n            textwrap.dedent(\n                f\"\"\"\n                [red]Profile {name!r} already exists.[/red]\n                To create a new profile, remove the existing profile first:\n\n                    prefect profile delete {name!r}\n                \"\"\"\n            ).strip()\n        )\n        raise typer.Exit(1)\n\n    if from_name:\n        if from_name not in profiles:\n            exit_with_error(f\"Profile {from_name!r} not found.\")\n\n        # Create a copy of the profile with a new name and add to the collection\n        profiles.add_profile(profiles[from_name].copy(update={\"name\": name}))\n    else:\n        profiles.add_profile(prefect.settings.Profile(name=name, settings={}))\n\n    prefect.settings.save_profiles(profiles)\n\n    app.console.print(\n        textwrap.dedent(\n            f\"\"\"\n            Created profile with properties:\n                name - {name!r}\n                from name - {from_name or None}\n\n            Use created profile for future, subsequent commands:\n                prefect profile use {name!r}\n\n            Use created profile temporarily for a single command:\n                prefect -p {name!r} config view\n            \"\"\"\n        )\n    )\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.delete","title":"delete","text":"

Delete the given profile.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/profile.py
@profile_app.command()\ndef delete(name: str):\n\"\"\"\n    Delete the given profile.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    if name not in profiles:\n        exit_with_error(f\"Profile {name!r} not found.\")\n\n    current_profile = prefect.context.get_settings_context().profile\n    if current_profile.name == name:\n        exit_with_error(\n            f\"Profile {name!r} is the active profile. You must switch profiles before\"\n            \" it can be deleted.\"\n        )\n\n    profiles.remove_profile(name)\n\n    verb = \"Removed\"\n    if name == \"default\":\n        verb = \"Reset\"\n\n    prefect.settings.save_profiles(profiles)\n    exit_with_success(f\"{verb} profile {name!r}.\")\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.inspect","title":"inspect","text":"

Display settings from a given profile; defaults to active.
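For example, inspecting a hypothetical profile by name:

$ prefect profile inspect staging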

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/profile.py
@profile_app.command()\ndef inspect(\n    name: Optional[str] = typer.Argument(\n        None, help=\"Name of profile to inspect; defaults to active profile.\"\n    )\n):\n\"\"\"\n    Display settings from a given profile; defaults to active.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    if name is None:\n        current_profile = prefect.context.get_settings_context().profile\n        if not current_profile:\n            exit_with_error(\"No active profile set - please provide a name to inspect.\")\n        name = current_profile.name\n        print(f\"No name provided, defaulting to {name!r}\")\n    if name not in profiles:\n        exit_with_error(f\"Profile {name!r} not found.\")\n\n    if not profiles[name].settings:\n        # TODO: Consider instructing on how to add settings.\n        print(f\"Profile {name!r} is empty.\")\n\n    for setting, value in profiles[name].settings.items():\n        app.console.print(f\"{setting.name}='{value}'\")\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.ls","title":"ls","text":"

List profile names.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/profile.py
@profile_app.command()\ndef ls():\n\"\"\"\n    List profile names.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    current_profile = prefect.context.get_settings_context().profile\n    current_name = current_profile.name if current_profile is not None else None\n\n    table = Table(caption=\"* active profile\")\n    table.add_column(\n        \"[#024dfd]Available Profiles:\", justify=\"right\", style=\"#8ea0ae\", no_wrap=True\n    )\n\n    for name in profiles:\n        if name == current_name:\n            table.add_row(f\"[green]  * {name}[/green]\")\n        else:\n            table.add_row(f\"  {name}\")\n    app.console.print(table)\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.rename","title":"rename","text":"

Change the name of a profile.
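For example, with illustrative profile names:

$ prefect profile rename staging production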

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/profile.py
@profile_app.command()\ndef rename(name: str, new_name: str):\n\"\"\"\n    Change the name of a profile.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    if name not in profiles:\n        exit_with_error(f\"Profile {name!r} not found.\")\n\n    if new_name in profiles:\n        exit_with_error(f\"Profile {new_name!r} already exists.\")\n\n    profiles.add_profile(profiles[name].copy(update={\"name\": new_name}))\n    profiles.remove_profile(name)\n\n    # If the active profile was renamed switch the active profile to the new name.\n    prefect.context.get_settings_context().profile\n    if profiles.active_name == name:\n        profiles.set_active(new_name)\n    if os.environ.get(\"PREFECT_PROFILE\") == name:\n        app.console.print(\n            f\"You have set your current profile to {name!r} with the \"\n            \"PREFECT_PROFILE environment variable. You must update this variable to \"\n            f\"{new_name!r} to continue using the profile.\"\n        )\n\n    prefect.settings.save_profiles(profiles)\n    exit_with_success(f\"Renamed profile {name!r} to {new_name!r}.\")\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.use","title":"use async","text":"

Set the given profile to active.
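For example, with an illustrative profile name:

$ prefect profile use staging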

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/profile.py
@profile_app.command()\nasync def use(name: str):\n\"\"\"\n    Set the given profile to active.\n    \"\"\"\n    status_messages = {\n        ConnectionStatus.CLOUD_CONNECTED: (\n            exit_with_success,\n            f\"Connected to Prefect Cloud using profile {name!r}\",\n        ),\n        ConnectionStatus.CLOUD_ERROR: (\n            exit_with_error,\n            f\"Error connecting to Prefect Cloud using profile {name!r}\",\n        ),\n        ConnectionStatus.CLOUD_UNAUTHORIZED: (\n            exit_with_error,\n            f\"Error authenticating with Prefect Cloud using profile {name!r}\",\n        ),\n        ConnectionStatus.ORION_CONNECTED: (\n            exit_with_success,\n            f\"Connected to Prefect server using profile {name!r}\",\n        ),\n        ConnectionStatus.ORION_ERROR: (\n            exit_with_error,\n            f\"Error connecting to Prefect server using profile {name!r}\",\n        ),\n        ConnectionStatus.EPHEMERAL: (\n            exit_with_success,\n            (\n                f\"No Prefect server specified using profile {name!r} - the API will run\"\n                \" in ephemeral mode.\"\n            ),\n        ),\n        ConnectionStatus.INVALID_API: (\n            exit_with_error,\n            \"Error connecting to Prefect API URL\",\n        ),\n    }\n\n    profiles = prefect.settings.load_profiles()\n    if name not in profiles.names:\n        exit_with_error(f\"Profile {name!r} not found.\")\n\n    profiles.set_active(name)\n    prefect.settings.save_profiles(profiles)\n\n    with Progress(\n        SpinnerColumn(),\n        TextColumn(\"[progress.description]{task.description}\"),\n        transient=False,\n    ) as progress:\n        progress.add_task(\n            description=\"Checking API connectivity...\",\n            total=None,\n        )\n\n        with use_profile(name, include_current_context=False):\n            connection_status = await check_orion_connection()\n\n        exit_method, msg = status_messages[connection_status]\n\n    exit_method(msg)\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/project/","title":"prefect.cli.project","text":"","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project","title":"prefect.cli.project","text":"

Deprecated - Command line interface for working with projects.

","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project.clone","title":"clone async","text":"

Clone an existing project for a given deployment.
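For example, cloning by deployment name (the name is illustrative):

$ prefect project clone --deployment "hello-world/my-deployment"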

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/project.py
@project_app.command()\nasync def clone(\n    deployment_name: str = typer.Option(\n        None,\n        \"--deployment\",\n        \"-d\",\n        help=\"The name of the deployment to clone a project for.\",\n    ),\n    deployment_id: str = typer.Option(\n        None,\n        \"--id\",\n        \"-i\",\n        help=\"The id of the deployment to clone a project for.\",\n    ),\n):\n\"\"\"\n    Clone an existing project for a given deployment.\n    \"\"\"\n    app.console.print(\n        generate_deprecation_message(\n            \"The `prefect project clone` command\",\n            start_date=\"Jun 2023\",\n        )\n    )\n    if deployment_name and deployment_id:\n        exit_with_error(\n            \"Can only pass one of deployment name or deployment ID options.\"\n        )\n\n    if not deployment_name and not deployment_id:\n        exit_with_error(\"Must pass either a deployment name or deployment ID.\")\n\n    if deployment_name:\n        async with get_client() as client:\n            try:\n                deployment = await client.read_deployment_by_name(deployment_name)\n            except ObjectNotFound:\n                exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n    else:\n        async with get_client() as client:\n            try:\n                deployment = await client.read_deployment(deployment_id)\n            except ObjectNotFound:\n                exit_with_error(f\"Deployment {deployment_id!r} not found!\")\n\n    if deployment.pull_steps:\n        output = await run_steps(deployment.pull_steps)\n        app.console.out(output[\"directory\"])\n    else:\n        exit_with_error(\"No pull steps found, exiting early.\")\n
","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project.init","title":"init async","text":"

Initialize a new project.
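For example, initializing with a hypothetical recipe and field value (the recipe and image names are illustrative, and `--recipe` assumes Typer's default option naming):

$ prefect init --recipe docker -f image_name=my-registry/my-image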

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/project.py
@project_app.command()\n@app.command()\nasync def init(\n    name: str = None,\n    recipe: str = None,\n    fields: List[str] = typer.Option(\n        None,\n        \"-f\",\n        \"--field\",\n        help=(\n            \"One or more fields to pass to the recipe (e.g., image_name) in the format\"\n            \" of key=value.\"\n        ),\n    ),\n):\n\"\"\"\n    Initialize a new project.\n    \"\"\"\n    inputs = {}\n    fields = fields or []\n    recipe_paths = prefect.__module_path__ / \"deployments\" / \"recipes\"\n\n    for field in fields:\n        key, value = field.split(\"=\")\n        inputs[key] = value\n\n    if not recipe and is_interactive():\n        recipe_paths = prefect.__module_path__ / \"deployments\" / \"recipes\"\n        recipes = []\n\n        for r in recipe_paths.iterdir():\n            if r.is_dir() and (r / \"prefect.yaml\").exists():\n                with open(r / \"prefect.yaml\") as f:\n                    recipe_data = yaml.safe_load(f)\n                    recipe_name = r.name\n                    recipe_description = recipe_data.get(\n                        \"description\", \"(no description available)\"\n                    )\n                    recipe_dict = {\n                        \"name\": recipe_name,\n                        \"description\": recipe_description,\n                    }\n                    recipes.append(recipe_dict)\n\n        selected_recipe = prompt_select_from_table(\n            app.console,\n            \"Would you like to initialize your deployment configuration with a recipe?\",\n            columns=[\n                {\"header\": \"Name\", \"key\": \"name\"},\n                {\"header\": \"Description\", \"key\": \"description\"},\n            ],\n            data=recipes,\n            opt_out_message=\"No, I'll use the default deployment configuration.\",\n            opt_out_response={},\n        )\n        if selected_recipe != {}:\n            recipe = selected_recipe[\"name\"]\n\n    if recipe and (recipe_paths / recipe / \"prefect.yaml\").exists():\n        with open(recipe_paths / recipe / \"prefect.yaml\") as f:\n            recipe_inputs = yaml.safe_load(f).get(\"required_inputs\") or {}\n\n        if recipe_inputs:\n            if set(recipe_inputs.keys()) < set(inputs.keys()):\n                # message to user about extra fields\n                app.console.print(\n                    (\n                        f\"Warning: extra fields provided for {recipe!r} recipe:\"\n                        f\" '{', '.join(set(inputs.keys()) - set(recipe_inputs.keys()))}'\"\n                    ),\n                    style=\"red\",\n                )\n            elif set(recipe_inputs.keys()) > set(inputs.keys()):\n                table = Table(\n                    title=f\"[red]Required inputs for {recipe!r} recipe[/red]\",\n                )\n                table.add_column(\"Field Name\", style=\"green\", no_wrap=True)\n                table.add_column(\n                    \"Description\", justify=\"left\", style=\"white\", no_wrap=False\n                )\n                for field, description in recipe_inputs.items():\n                    if field not in inputs:\n                        table.add_row(field, description)\n\n                app.console.print(table)\n\n                for key, description in recipe_inputs.items():\n                    if key not in inputs:\n                        inputs[key] = typer.prompt(key)\n\n            app.console.print(\"-\" * 15)\n\n    try:\n        files = [\n    
        f\"[green]{fname}[/green]\"\n            for fname in initialize_project(name=name, recipe=recipe, inputs=inputs)\n        ]\n    except ValueError as exc:\n        if \"Unknown recipe\" in str(exc):\n            exit_with_error(\n                f\"Unknown recipe {recipe!r} provided - run [yellow]`prefect init\"\n                \"`[/yellow] to see all available recipes.\"\n            )\n        else:\n            raise\n\n    files = \"\\n\".join(files)\n    empty_msg = (\n        f\"Created project in [green]{Path('.').resolve()}[/green]; no new files\"\n        \" created.\"\n    )\n    file_msg = (\n        f\"Created project in [green]{Path('.').resolve()}[/green] with the following\"\n        f\" new files:\\n{files}\"\n    )\n    app.console.print(file_msg if files else empty_msg)\n
","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project.ls","title":"ls async","text":"

List available recipes.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/project.py
@recipe_app.command()\nasync def ls():\n\"\"\"\n    List available recipes.\n    \"\"\"\n\n    recipe_paths = prefect.__module_path__ / \"deployments\" / \"recipes\"\n    recipes = {}\n\n    for recipe in recipe_paths.iterdir():\n        if recipe.is_dir() and (recipe / \"prefect.yaml\").exists():\n            with open(recipe / \"prefect.yaml\") as f:\n                recipes[recipe.name] = yaml.safe_load(f).get(\n                    \"description\", \"(no description available)\"\n                )\n\n    table = Table(\n        title=\"Available project recipes\",\n        caption=(\n            \"Run `prefect project init --recipe <recipe>` to initialize a project with\"\n            \" a recipe.\"\n        ),\n        caption_style=\"red\",\n    )\n    table.add_column(\"Name\", style=\"green\", no_wrap=True)\n    table.add_column(\"Description\", justify=\"left\", style=\"white\", no_wrap=False)\n    for name, description in sorted(recipes.items(), key=lambda x: x[0]):\n        table.add_row(name, description)\n\n    app.console.print(table)\n
","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project.register_flow","title":"register_flow async","text":"

Register a flow with this project.
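For example, force-registering a flow at an illustrative entrypoint (assuming the command is exposed as `register-flow`):

$ prefect project register-flow ./path/to/file.py:flow_func_name --force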

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/project.py
@project_app.command()\nasync def register_flow(\n    entrypoint: str = typer.Argument(\n        ...,\n        help=(\n            \"The path to a flow entrypoint, in the form of\"\n            \" `./path/to/file.py:flow_func_name`\"\n        ),\n    ),\n    force: bool = typer.Option(\n        False,\n        \"--force\",\n        \"-f\",\n        help=(\n            \"An optional flag to force register this flow and overwrite any existing\"\n            \" entry\"\n        ),\n    ),\n):\n\"\"\"\n    Register a flow with this project.\n    \"\"\"\n    try:\n        flow = await register(entrypoint, force=force)\n    except Exception as exc:\n        exit_with_error(exc)\n\n    app.console.print(\n        (\n            f\"Registered flow {flow.name!r} in\"\n            f\" {(find_prefect_directory()/'flows.json').resolve()!s}\"\n        ),\n        style=\"green\",\n    )\n
","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/root/","title":"prefect.cli.root","text":"","tags":["Python API","CLI"]},{"location":"api-ref/prefect/cli/root/#prefect.cli.root","title":"prefect.cli.root","text":"

Base prefect command-line application

","tags":["Python API","CLI"]},{"location":"api-ref/prefect/cli/root/#prefect.cli.root.version","title":"version async","text":"

Get the current Prefect version.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/root.py
@app.command()\nasync def version():\n\"\"\"Get the current Prefect version.\"\"\"\n    import sqlite3\n\n    from prefect.server.api.server import SERVER_API_VERSION\n    from prefect.server.utilities.database import get_dialect\n    from prefect.settings import PREFECT_API_DATABASE_CONNECTION_URL\n\n    version_info = {\n        \"Version\": prefect.__version__,\n        \"API version\": SERVER_API_VERSION,\n        \"Python version\": platform.python_version(),\n        \"Git commit\": prefect.__version_info__[\"full-revisionid\"][:8],\n        \"Built\": pendulum.parse(\n            prefect.__version_info__[\"date\"]\n        ).to_day_datetime_string(),\n        \"OS/Arch\": f\"{sys.platform}/{platform.machine()}\",\n        \"Profile\": prefect.context.get_settings_context().profile.name,\n    }\n\n    server_type: str\n\n    try:\n        # We do not context manage the client because when using an ephemeral app we do not\n        # want to create the database or run migrations\n        client = prefect.get_client()\n        server_type = client.server_type.value\n    except Exception:\n        server_type = \"<client error>\"\n\n    version_info[\"Server type\"] = server_type.lower()\n\n    # TODO: Consider adding an API route to retrieve this information?\n    if server_type == ServerType.EPHEMERAL.value:\n        database = get_dialect(PREFECT_API_DATABASE_CONNECTION_URL.value()).name\n        version_info[\"Server\"] = {\"Database\": database}\n        if database == \"sqlite\":\n            version_info[\"Server\"][\"SQLite version\"] = sqlite3.sqlite_version\n\n    def display(object: dict, nesting: int = 0):\n        # Recursive display of a dictionary with nesting\n        for key, value in object.items():\n            key += \":\"\n            if isinstance(value, dict):\n                app.console.print(key)\n                return display(value, nesting + 2)\n            prefix = \" \" * nesting\n            app.console.print(f\"{prefix}{key.ljust(20 - len(prefix))} {value}\")\n\n    display(version_info)\n
","tags":["Python API","CLI"]},{"location":"api-ref/prefect/cli/server/","title":"prefect.cli.server","text":"","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server","title":"prefect.cli.server","text":"

Command line interface for working with Prefect

","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.downgrade","title":"downgrade async","text":"

Downgrade the Prefect database

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/server.py
@orion_database_app.command()\n@database_app.command()\nasync def downgrade(\n    yes: bool = typer.Option(False, \"--yes\", \"-y\"),\n    revision: str = typer.Option(\n        \"base\",\n        \"-r\",\n        help=(\n            \"The revision to pass to `alembic downgrade`. If not provided, runs all\"\n            \" migrations.\"\n        ),\n    ),\n    dry_run: bool = typer.Option(\n        False,\n        help=(\n            \"Flag to show what migrations would be made without applying them. Will\"\n            \" emit sql statements to stdout.\"\n        ),\n    ),\n):\n\"\"\"Downgrade the Prefect database\"\"\"\n    from prefect.server.database.alembic_commands import alembic_downgrade\n    from prefect.server.database.dependencies import provide_database_interface\n\n    db = provide_database_interface()\n\n    engine = await db.engine()\n\n    if not yes:\n        confirm = typer.confirm(\n            \"Are you sure you want to downgrade the Prefect \"\n            f\"database at {engine.url!r}?\"\n        )\n        if not confirm:\n            exit_with_error(\"Database downgrade aborted!\")\n\n    app.console.print(\"Running downgrade migrations ...\")\n    await run_sync_in_worker_thread(\n        alembic_downgrade, revision=revision, dry_run=dry_run\n    )\n    app.console.print(\"Migrations succeeded!\")\n    exit_with_success(f\"Prefect database at {engine.url!r} downgraded!\")\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.reset","title":"reset async","text":"

Drop and recreate all Prefect database tables

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/server.py
@orion_database_app.command()\n@database_app.command()\nasync def reset(yes: bool = typer.Option(False, \"--yes\", \"-y\")):\n\"\"\"Drop and recreate all Prefect database tables\"\"\"\n    from prefect.server.database.dependencies import provide_database_interface\n\n    db = provide_database_interface()\n    engine = await db.engine()\n    if not yes:\n        confirm = typer.confirm(\n            \"Are you sure you want to reset the Prefect database located \"\n            f'at \"{engine.url!r}\"? This will drop and recreate all tables.'\n        )\n        if not confirm:\n            exit_with_error(\"Database reset aborted\")\n    app.console.print(\"Downgrading database...\")\n    await db.drop_db()\n    app.console.print(\"Upgrading database...\")\n    await db.create_db()\n    exit_with_success(f'Prefect database \"{engine.url!r}\" reset!')\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.revision","title":"revision async","text":"

Create a new migration for the Prefect database

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/server.py
@orion_database_app.command()\n@database_app.command()\nasync def revision(\n    message: str = typer.Option(\n        None,\n        \"--message\",\n        \"-m\",\n        help=\"A message to describe the migration.\",\n    ),\n    autogenerate: bool = False,\n):\n\"\"\"Create a new migration for the Prefect database\"\"\"\n    from prefect.server.database.alembic_commands import alembic_revision\n\n    app.console.print(\"Running migration file creation ...\")\n    await run_sync_in_worker_thread(\n        alembic_revision,\n        message=message,\n        autogenerate=autogenerate,\n    )\n    exit_with_success(\"Creating new migration file succeeded!\")\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.stamp","title":"stamp async","text":"

Stamp the revision table with the given revision; don't run any migrations

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/server.py
@orion_database_app.command()\n@database_app.command()\nasync def stamp(revision: str):\n\"\"\"Stamp the revision table with the given revision; don't run any migrations\"\"\"\n    from prefect.server.database.alembic_commands import alembic_stamp\n\n    app.console.print(\"Stamping database with revision ...\")\n    await run_sync_in_worker_thread(alembic_stamp, revision=revision)\n    exit_with_success(\"Stamping database with revision succeeded!\")\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.start","title":"start async","text":"

Start a Prefect server

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/server.py
@orion_app.command()\n@server_app.command()\nasync def start(\n    host: str = SettingsOption(PREFECT_SERVER_API_HOST),\n    port: int = SettingsOption(PREFECT_SERVER_API_PORT),\n    keep_alive_timeout: int = SettingsOption(PREFECT_SERVER_API_KEEPALIVE_TIMEOUT),\n    log_level: str = SettingsOption(PREFECT_LOGGING_SERVER_LEVEL),\n    scheduler: bool = SettingsOption(PREFECT_API_SERVICES_SCHEDULER_ENABLED),\n    analytics: bool = SettingsOption(\n        PREFECT_SERVER_ANALYTICS_ENABLED, \"--analytics-on/--analytics-off\"\n    ),\n    late_runs: bool = SettingsOption(PREFECT_API_SERVICES_LATE_RUNS_ENABLED),\n    ui: bool = SettingsOption(PREFECT_UI_ENABLED),\n):\n\"\"\"Start a Prefect server\"\"\"\n\n    server_env = os.environ.copy()\n    server_env[\"PREFECT_API_SERVICES_SCHEDULER_ENABLED\"] = str(scheduler)\n    server_env[\"PREFECT_SERVER_ANALYTICS_ENABLED\"] = str(analytics)\n    server_env[\"PREFECT_API_SERVICES_LATE_RUNS_ENABLED\"] = str(late_runs)\n    server_env[\"PREFECT_API_SERVICES_UI\"] = str(ui)\n    server_env[\"PREFECT_LOGGING_SERVER_LEVEL\"] = log_level\n\n    base_url = f\"http://{host}:{port}\"\n\n    async with anyio.create_task_group() as tg:\n        app.console.print(generate_welcome_blurb(base_url, ui_enabled=ui))\n        app.console.print(\"\\n\")\n\n        server_process_id = await tg.start(\n            partial(\n                run_process,\n                command=[\n                    sys.executable,\n                    \"-m\",\n                    \"uvicorn\",\n                    \"--app-dir\",\n                    # quote wrapping needed for windows paths with spaces\n                    f'\"{prefect.__module_path__.parent}\"',\n                    \"--factory\",\n                    \"prefect.server.api.server:create_app\",\n                    \"--host\",\n                    str(host),\n                    \"--port\",\n                    str(port),\n                    \"--timeout-keep-alive\",\n                    str(keep_alive_timeout),\n                ],\n                env=server_env,\n                stream_output=True,\n            )\n        )\n\n        # Explicitly handle the interrupt signal here, as it will allow us to\n        # cleanly stop the uvicorn server. Failing to do that may cause a\n        # large amount of anyio error traces on the terminal, because the\n        # SIGINT is handled by Typer/Click in this process (the parent process)\n        # and will start shutting down subprocesses:\n        # https://github.com/PrefectHQ/server/issues/2475\n\n        setup_signal_handlers_server(\n            server_process_id, \"the Prefect server\", app.console.print\n        )\n\n    app.console.print(\"Server stopped!\")\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.upgrade","title":"upgrade async","text":"

Upgrade the Prefect database

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/server.py
@orion_database_app.command()\n@database_app.command()\nasync def upgrade(\n    yes: bool = typer.Option(False, \"--yes\", \"-y\"),\n    revision: str = typer.Option(\n        \"head\",\n        \"-r\",\n        help=(\n            \"The revision to pass to `alembic upgrade`. If not provided, runs all\"\n            \" migrations.\"\n        ),\n    ),\n    dry_run: bool = typer.Option(\n        False,\n        help=(\n            \"Flag to show what migrations would be made without applying them. Will\"\n            \" emit sql statements to stdout.\"\n        ),\n    ),\n):\n\"\"\"Upgrade the Prefect database\"\"\"\n    from prefect.server.database.alembic_commands import alembic_upgrade\n    from prefect.server.database.dependencies import provide_database_interface\n\n    db = provide_database_interface()\n    engine = await db.engine()\n\n    if not yes:\n        confirm = typer.confirm(\n            f\"Are you sure you want to upgrade the Prefect database at {engine.url!r}?\"\n        )\n        if not confirm:\n            exit_with_error(\"Database upgrade aborted!\")\n\n    app.console.print(\"Running upgrade migrations ...\")\n    await run_sync_in_worker_thread(alembic_upgrade, revision=revision, dry_run=dry_run)\n    app.console.print(\"Migrations succeeded!\")\n    exit_with_success(f\"Prefect database at {engine.url!r} upgraded!\")\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/work_queue/","title":"prefect.cli.work_queue","text":"","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue","title":"prefect.cli.work_queue","text":"

Command line interface for working with work queues.

","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.clear_concurrency_limit","title":"clear_concurrency_limit async","text":"

Clear any concurrency limits from a work queue.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def clear_concurrency_limit(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue to clear\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n\"\"\"\n    Clear any concurrency limits from a work queue.\n    \"\"\"\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n    async with get_client() as client:\n        try:\n            await client.update_work_queue(\n                id=queue_id,\n                concurrency_limit=None,\n            )\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n\n    if pool:\n        success_message = (\n            f\"Concurrency limits removed on work queue {name!r} in work pool {pool!r}\"\n        )\n    else:\n        success_message = f\"Concurrency limits removed on work queue {name!r}\"\n    exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.create","title":"create async","text":"

Create a work queue.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def create(\n    name: str = typer.Argument(..., help=\"The unique name to assign this work queue\"),\n    limit: int = typer.Option(\n        None, \"-l\", \"--limit\", help=\"The concurrency limit to set on the queue.\"\n    ),\n    tags: List[str] = typer.Option(\n        None,\n        \"-t\",\n        \"--tag\",\n        help=(\n            \"DEPRECATED: One or more optional tags. This option will be removed on\"\n            \" 2023-02-23.\"\n        ),\n    ),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool to create the work queue in.\",\n    ),\n    priority: Optional[int] = typer.Option(\n        None,\n        \"-q\",\n        \"--priority\",\n        help=\"The associated priority for the created work queue\",\n    ),\n):\n\"\"\"\n    Create a work queue.\n    \"\"\"\n    if tags:\n        app.console.print(\n            (\n                \"Supplying `tags` for work queues is deprecated. This work \"\n                \"queue will use legacy tag-matching behavior. \"\n                \"This option will be removed on 2023-02-23.\"\n            ),\n            style=\"red\",\n        )\n\n    if pool and tags:\n        exit_with_error(\n            \"Work queues created with tags cannot specify work pools or set priorities.\"\n        )\n\n    async with get_client() as client:\n        try:\n            result = await client.create_work_queue(\n                name=name, tags=tags or None, work_pool_name=pool, priority=priority\n            )\n            if limit is not None:\n                await client.update_work_queue(\n                    id=result.id,\n                    concurrency_limit=limit,\n                )\n        except ObjectAlreadyExists:\n            exit_with_error(f\"Work queue with name: {name!r} already exists.\")\n        except ObjectNotFound:\n            exit_with_error(f\"Work pool with name: {pool!r} not found.\")\n\n    if tags:\n        tags_message = f\"tags - {', '.join(sorted(tags))}\\n\" or \"\"\n        output_msg = dedent(\n            f\"\"\"\n            Created work queue with properties:\n                name - {name!r}\n                id - {result.id}\n                concurrency limit - {limit}\n{tags_message}\n            Start an agent to pick up flow runs from the work queue:\n                prefect agent start -q '{result.name}'\n\n            Inspect the work queue:\n                prefect work-queue inspect '{result.name}'\n            \"\"\"\n        )\n    else:\n        if not pool:\n            # specify the default work pool name after work queue creation to allow the server\n            # to handle a bunch of logic associated with agents without work pools\n            pool = DEFAULT_AGENT_WORK_POOL_NAME\n        output_msg = dedent(\n            f\"\"\"\n            Created work queue with properties:\n                name - {name!r}\n                work pool - {pool!r}\n                id - {result.id}\n                concurrency limit - {limit}\n            Start an agent to pick up flow runs from the work queue:\n                prefect agent start -q '{result.name} -p {pool}'\n\n            Inspect the work queue:\n                prefect work-queue inspect '{result.name}'\n            \"\"\"\n        )\n    exit_with_success(output_msg)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.delete","title":"delete async","text":"

Delete a work queue by ID.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def delete(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue to delete\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool containing the work queue to delete.\",\n    ),\n):\n\"\"\"\n    Delete a work queue by ID.\n    \"\"\"\n\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n    async with get_client() as client:\n        try:\n            await client.delete_work_queue_by_id(id=queue_id)\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n    if pool:\n        success_message = (\n            f\"Successfully deleted work queue {name!r} in work pool {pool!r}\"\n        )\n    else:\n        success_message = f\"Successfully deleted work queue {name!r}\"\n    exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.inspect","title":"inspect async","text":"

Inspect a work queue by ID.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def inspect(\n    name: str = typer.Argument(\n        None, help=\"The name or ID of the work queue to inspect\"\n    ),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n\"\"\"\n    Inspect a work queue by ID.\n    \"\"\"\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n    async with get_client() as client:\n        try:\n            result = await client.read_work_queue(id=queue_id)\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n\n    app.console.print(Pretty(result))\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.ls","title":"ls async","text":"

View all work queues.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def ls(\n    verbose: bool = typer.Option(\n        False, \"--verbose\", \"-v\", help=\"Display more information.\"\n    ),\n    work_queue_prefix: str = typer.Option(\n        None,\n        \"--match\",\n        \"-m\",\n        help=(\n            \"Will match work queues with names that start with the specified prefix\"\n            \" string\"\n        ),\n    ),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool containing the work queues to list.\",\n    ),\n):\n\"\"\"\n    View all work queues.\n    \"\"\"\n    if not pool and not experiment_enabled(\"work_pools\"):\n        table = Table(\n            title=\"Work Queues\",\n            caption=\"(**) denotes a paused queue\",\n            caption_style=\"red\",\n        )\n        table.add_column(\"Name\", style=\"green\", no_wrap=True)\n        table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n        table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n        if verbose:\n            table.add_column(\"Filter (Deprecated)\", style=\"magenta\", no_wrap=True)\n\n        async with get_client() as client:\n            if work_queue_prefix is not None:\n                queues = await client.match_work_queues([work_queue_prefix])\n            else:\n                queues = await client.read_work_queues()\n\n            def sort_by_created_key(q):\n                return pendulum.now(\"utc\") - q.created\n\n            for queue in sorted(queues, key=sort_by_created_key):\n                row = [\n                    f\"{queue.name} [red](**)\" if queue.is_paused else queue.name,\n                    str(queue.id),\n                    (\n                        f\"[red]{queue.concurrency_limit}\"\n                        if queue.concurrency_limit\n                        else \"[blue]None\"\n                    ),\n                ]\n                if verbose and queue.filter is not None:\n                    row.append(queue.filter.json())\n                table.add_row(*row)\n    elif not pool:\n        table = Table(\n            title=\"Work Queues\",\n            caption=\"(**) denotes a paused queue\",\n            caption_style=\"red\",\n        )\n        table.add_column(\"Name\", style=\"green\", no_wrap=True)\n        table.add_column(\"Pool\", style=\"magenta\", no_wrap=True)\n        table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n        table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n        if verbose:\n            table.add_column(\"Filter (Deprecated)\", style=\"magenta\", no_wrap=True)\n\n        async with get_client() as client:\n            if work_queue_prefix is not None:\n                queues = await client.match_work_queues([work_queue_prefix])\n            else:\n                queues = await client.read_work_queues()\n\n            pool_ids = [q.work_pool_id for q in queues]\n            wp_filter = WorkPoolFilter(id=WorkPoolFilterId(any_=pool_ids))\n            pools = await client.read_work_pools(work_pool_filter=wp_filter)\n            pool_id_name_map = {p.id: p.name for p in pools}\n\n            def sort_by_created_key(q):\n                return pendulum.now(\"utc\") - q.created\n\n            for queue in sorted(queues, key=sort_by_created_key):\n                row = [\n                    
f\"{queue.name} [red](**)\" if queue.is_paused else queue.name,\n                    pool_id_name_map[queue.work_pool_id],\n                    str(queue.id),\n                    (\n                        f\"[red]{queue.concurrency_limit}\"\n                        if queue.concurrency_limit\n                        else \"[blue]None\"\n                    ),\n                ]\n                if verbose and queue.filter is not None:\n                    row.append(queue.filter.json())\n                table.add_row(*row)\n\n    else:\n        table = Table(\n            title=f\"Work Queues in Work Pool {pool!r}\",\n            caption=\"(**) denotes a paused queue\",\n            caption_style=\"red\",\n        )\n        table.add_column(\"Name\", style=\"green\", no_wrap=True)\n        table.add_column(\"Priority\", style=\"magenta\", no_wrap=True)\n        table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n        if verbose:\n            table.add_column(\"Description\", style=\"cyan\", no_wrap=False)\n\n        async with get_client() as client:\n            try:\n                queues = await client.read_work_queues(work_pool_name=pool)\n            except ObjectNotFound:\n                exit_with_error(f\"No work pool found: {pool!r}\")\n\n            def sort_by_created_key(q):\n                return pendulum.now(\"utc\") - q.created\n\n            for queue in sorted(queues, key=sort_by_created_key):\n                row = [\n                    f\"{queue.name} [red](**)\" if queue.is_paused else queue.name,\n                    f\"{queue.priority}\",\n                    (\n                        f\"[red]{queue.concurrency_limit}\"\n                        if queue.concurrency_limit\n                        else \"[blue]None\"\n                    ),\n                ]\n                if verbose:\n                    row.append(queue.description)\n                table.add_row(*row)\n\n    app.console.print(table)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.pause","title":"pause async","text":"

Pause a work queue.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def pause(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue to pause\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n\"\"\"\n    Pause a work queue.\n    \"\"\"\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n\n    async with get_client() as client:\n        try:\n            await client.update_work_queue(\n                id=queue_id,\n                is_paused=True,\n            )\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n\n    if pool:\n        success_message = f\"Work queue {name!r} in work pool {pool!r} paused\"\n    else:\n        success_message = f\"Work queue {name!r} paused\"\n    exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.preview","title":"preview async","text":"

Preview a work queue.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def preview(\n    name: str = typer.Argument(\n        None, help=\"The name or ID of the work queue to preview\"\n    ),\n    hours: int = typer.Option(\n        None,\n        \"-h\",\n        \"--hours\",\n        help=\"The number of hours to look ahead; defaults to 1 hour\",\n    ),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n\"\"\"\n    Preview a work queue.\n    \"\"\"\n    if pool:\n        title = f\"Preview of Work Queue {name!r} in Work Pool {pool!r}\"\n    else:\n        title = f\"Preview of Work Queue {name!r}\"\n\n    table = Table(title=title, caption=\"(**) denotes a late run\", caption_style=\"red\")\n    table.add_column(\n        \"Scheduled Start Time\", justify=\"left\", style=\"yellow\", no_wrap=True\n    )\n    table.add_column(\"Run ID\", justify=\"left\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Name\", style=\"green\", no_wrap=True)\n    table.add_column(\"Deployment ID\", style=\"blue\", no_wrap=True)\n\n    window = pendulum.now(\"utc\").add(hours=hours or 1)\n\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name, work_pool_name=pool\n    )\n    async with get_client() as client:\n        if pool:\n            try:\n                responses = await client.get_scheduled_flow_runs_for_work_pool(\n                    work_pool_name=pool,\n                    work_queue_names=[name],\n                )\n                runs = [response.flow_run for response in responses]\n            except ObjectNotFound:\n                exit_with_error(f\"No work queue found: {name!r} in work pool {pool!r}\")\n        else:\n            try:\n                runs = await client.get_runs_in_work_queue(\n                    queue_id,\n                    limit=10,\n                    scheduled_before=window,\n                )\n            except ObjectNotFound:\n                exit_with_error(f\"No work queue found: {name!r}\")\n    now = pendulum.now(\"utc\")\n\n    def sort_by_created_key(r):\n        return now - r.created\n\n    for run in sorted(runs, key=sort_by_created_key):\n        table.add_row(\n            (\n                f\"{run.expected_start_time} [red](**)\"\n                if run.expected_start_time < now\n                else f\"{run.expected_start_time}\"\n            ),\n            str(run.id),\n            run.name,\n            str(run.deployment_id),\n        )\n\n    if runs:\n        app.console.print(table)\n    else:\n        app.console.print(\n            (\n                \"No runs found - try increasing how far into the future you preview\"\n                \" with the --hours flag\"\n            ),\n            style=\"yellow\",\n        )\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.resume","title":"resume async","text":"

Resume a paused work queue.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def resume(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue to resume\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n\"\"\"\n    Resume a paused work queue.\n    \"\"\"\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n\n    async with get_client() as client:\n        try:\n            await client.update_work_queue(\n                id=queue_id,\n                is_paused=False,\n            )\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n\n    if pool:\n        success_message = f\"Work queue {name!r} in work pool {pool!r} resumed\"\n    else:\n        success_message = f\"Work queue {name!r} resumed\"\n    exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.set_concurrency_limit","title":"set_concurrency_limit async","text":"

Set a concurrency limit on a work queue.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def set_concurrency_limit(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue\"),\n    limit: int = typer.Argument(..., help=\"The concurrency limit to set on the queue.\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n\"\"\"\n    Set a concurrency limit on a work queue.\n    \"\"\"\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n\n    async with get_client() as client:\n        try:\n            await client.update_work_queue(\n                id=queue_id,\n                concurrency_limit=limit,\n            )\n        except ObjectNotFound:\n            if pool:\n                error_message = (\n                    f\"No work queue named {name!r} found in work pool {pool!r}.\"\n                )\n            else:\n                error_message = f\"No work queue named {name!r} found.\"\n            exit_with_error(error_message)\n\n    if pool:\n        success_message = (\n            f\"Concurrency limit of {limit} set on work queue {name!r} in work pool\"\n            f\" {pool!r}\"\n        )\n    else:\n        success_message = f\"Concurrency limit of {limit} set on work queue {name!r}\"\n    exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/client/base/","title":"prefect.client.base","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base","title":"prefect.client.base","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectResponse","title":"PrefectResponse","text":"

Bases: httpx.Response

A Prefect wrapper for the httpx.Response class.

Provides more informative error messages.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/base.py
class PrefectResponse(httpx.Response):\n\"\"\"\n    A Prefect wrapper for the `httpx.Response` class.\n\n    Provides more informative error messages.\n    \"\"\"\n\n    def raise_for_status(self) -> None:\n\"\"\"\n        Raise an exception if the response contains an HTTPStatusError.\n\n        The `PrefectHTTPStatusError` contains useful additional information that\n        is not contained in the `HTTPStatusError`.\n        \"\"\"\n        try:\n            return super().raise_for_status()\n        except HTTPStatusError as exc:\n            raise PrefectHTTPStatusError.from_httpx_error(exc) from exc.__cause__\n\n    @classmethod\n    def from_httpx_response(cls: Type[Self], response: httpx.Response) -> Self:\n\"\"\"\n        Create a `PrefectReponse` from an `httpx.Response`.\n\n        By changing the `__class__` attribute of the Response, we change the method\n        resolution order to look for methods defined in PrefectResponse, while leaving\n        everything else about the original Response instance intact.\n        \"\"\"\n        new_response = copy.copy(response)\n        new_response.__class__ = cls\n        return new_response\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectResponse.raise_for_status","title":"raise_for_status","text":"

Raise an exception if the response contains an HTTPStatusError.

The PrefectHTTPStatusError contains useful additional information that is not contained in the HTTPStatusError.
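A small illustrative sketch (the URL and error payload below are made up): constructing a response with an error status and calling raise_for_status surfaces a PrefectHTTPStatusError, whose message typically carries the response detail, rather than a bare httpx.HTTPStatusError.

```python
# Hypothetical example: build an error response by hand and observe the
# PrefectHTTPStatusError raised by PrefectResponse.raise_for_status().
import httpx

from prefect.client.base import PrefectResponse
from prefect.exceptions import PrefectHTTPStatusError

response = PrefectResponse(
    404,
    json={"detail": "flow run not found"},  # made-up error payload
    request=httpx.Request("GET", "https://example.invalid/flow_runs/abc"),
)

try:
    response.raise_for_status()
except PrefectHTTPStatusError as exc:
    # Prints a message that includes the response detail, unlike a bare HTTPStatusError
    print(exc)
```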

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/base.py
def raise_for_status(self) -> None:\n\"\"\"\n    Raise an exception if the response contains an HTTPStatusError.\n\n    The `PrefectHTTPStatusError` contains useful additional information that\n    is not contained in the `HTTPStatusError`.\n    \"\"\"\n    try:\n        return super().raise_for_status()\n    except HTTPStatusError as exc:\n        raise PrefectHTTPStatusError.from_httpx_error(exc) from exc.__cause__\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectResponse.from_httpx_response","title":"from_httpx_response classmethod","text":"

Create a PrefectResponse from an httpx.Response.

By changing the __class__ attribute of the Response, we change the method resolution order to look for methods defined in PrefectResponse, while leaving everything else about the original Response instance intact.
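A tiny sketch of this __class__ swap in isolation (using a hand-built response rather than one returned by a live API call): the wrapped object gains the PrefectResponse methods while keeping all of the original response's state.

```python
# Illustrative only: wrap a plain httpx.Response and verify the class swap.
import httpx

from prefect.client.base import PrefectResponse

raw = httpx.Response(
    200,
    json={"hello": "world"},
    request=httpx.Request("GET", "https://example.invalid/hello"),
)

wrapped = PrefectResponse.from_httpx_response(raw)

assert isinstance(wrapped, PrefectResponse)  # methods now resolve on the subclass
assert wrapped.json() == raw.json()          # underlying response data is unchanged
```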

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/base.py
@classmethod\ndef from_httpx_response(cls: Type[Self], response: httpx.Response) -> Self:\n\"\"\"\n    Create a `PrefectReponse` from an `httpx.Response`.\n\n    By changing the `__class__` attribute of the Response, we change the method\n    resolution order to look for methods defined in PrefectResponse, while leaving\n    everything else about the original Response instance intact.\n    \"\"\"\n    new_response = copy.copy(response)\n    new_response.__class__ = cls\n    return new_response\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectHttpxClient","title":"PrefectHttpxClient","text":"

Bases: httpx.AsyncClient

A Prefect wrapper for the async httpx client with support for retry-after headers for 429 (CloudFlare-style rate limiting) and 503 (Service Unavailable) responses.

Additionally, this client will always call raise_for_status on responses.

For more details on rate limit headers, see: Configuring Cloudflare Rate Limiting
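A self-contained sketch of the retry behavior using httpx.MockTransport (a test-only transport; the base URL is a placeholder and nothing touches the network): the first response is a 429 with a Retry-After header, which the client honors before retrying and ultimately returning the 200.

```python
import asyncio
import itertools

import httpx

from prefect.client.base import PrefectHttpxClient

# First request is rate limited, every request after that succeeds.
responses = itertools.chain(
    [httpx.Response(429, headers={"Retry-After": "0"})],
    itertools.repeat(httpx.Response(200, json={"ok": True})),
)


def handler(request: httpx.Request) -> httpx.Response:
    return next(responses)


async def main():
    async with PrefectHttpxClient(
        transport=httpx.MockTransport(handler),
        base_url="https://example.invalid",  # placeholder; never contacted
    ) as client:
        response = await client.get("/health")  # the 429 is retried transparently
        print(response.json())  # {'ok': True}


if __name__ == "__main__":
    asyncio.run(main())
```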

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/base.py
class PrefectHttpxClient(httpx.AsyncClient):\n\"\"\"\n    A Prefect wrapper for the async httpx client with support for retry-after headers\n    for:\n\n    - 429 CloudFlare-style rate limiting\n    - 503 Service unavailable\n\n    Additionally, this client will always call `raise_for_status` on responses.\n\n    For more details on rate limit headers, see:\n    [Configuring Cloudflare Rate Limiting](https://support.cloudflare.com/hc/en-us/articles/115001635128-Configuring-Rate-Limiting-from-UI)\n    \"\"\"\n\n    async def _send_with_retry(\n        self,\n        request: Callable,\n        retry_codes: Set[int] = set(),\n        retry_exceptions: Tuple[Exception, ...] = tuple(),\n    ):\n\"\"\"\n        Send a request and retry it if it fails.\n\n        Sends the provided request and retries it up to PREFECT_CLIENT_MAX_RETRIES times\n        if the request either raises an exception listed in `retry_exceptions` or\n        receives a response with a status code listed in `retry_codes`.\n\n        Retries will be delayed based on either the retry header (preferred) or\n        exponential backoff if a retry header is not provided.\n        \"\"\"\n        try_count = 0\n        response = None\n\n        while try_count <= PREFECT_CLIENT_MAX_RETRIES.value():\n            try_count += 1\n            retry_seconds = None\n            exc_info = None\n\n            try:\n                response = await request()\n            except retry_exceptions:  # type: ignore\n                if try_count > PREFECT_CLIENT_MAX_RETRIES.value():\n                    raise\n                # Otherwise, we will ignore this error but capture the info for logging\n                exc_info = sys.exc_info()\n            else:\n                # We got a response; return immediately if it is not retryable\n                if response.status_code not in retry_codes:\n                    return response\n\n                if \"Retry-After\" in response.headers:\n                    retry_seconds = float(response.headers[\"Retry-After\"])\n\n            # Use an exponential back-off if not set in a header\n            if retry_seconds is None:\n                retry_seconds = 2**try_count\n\n            # Add jitter\n            jitter_factor = PREFECT_CLIENT_RETRY_JITTER_FACTOR.value()\n            if retry_seconds > 0 and jitter_factor > 0:\n                if response is not None and \"Retry-After\" in response.headers:\n                    # Always wait for _at least_ retry seconds if requested by the API\n                    retry_seconds = bounded_poisson_interval(\n                        retry_seconds, retry_seconds * (1 + jitter_factor)\n                    )\n                else:\n                    # Otherwise, use a symmetrical jitter\n                    retry_seconds = clamped_poisson_interval(\n                        retry_seconds, jitter_factor\n                    )\n\n            logger.debug(\n                (\n                    \"Encountered retryable exception during request. \"\n                    if exc_info\n                    else (\n                        \"Received response with retryable status code\"\n                        f\" {response.status_code}. \"\n                    )\n                )\n                + f\"Another attempt will be made in {retry_seconds}s. 
\"\n                \"This is attempt\"\n                f\" {try_count}/{PREFECT_CLIENT_MAX_RETRIES.value() + 1}.\",\n                exc_info=exc_info,\n            )\n            await anyio.sleep(retry_seconds)\n\n        assert (\n            response is not None\n        ), \"Retry handling ended without response or exception\"\n\n        # We ran out of retries, return the failed response\n        return response\n\n    async def send(self, *args, **kwargs) -> Response:\n        api_request = partial(super().send, *args, **kwargs)\n\n        response = await self._send_with_retry(\n            request=api_request,\n            retry_codes={\n                status.HTTP_429_TOO_MANY_REQUESTS,\n                status.HTTP_503_SERVICE_UNAVAILABLE,\n                status.HTTP_502_BAD_GATEWAY,\n                *PREFECT_CLIENT_RETRY_EXTRA_CODES.value(),\n            },\n            retry_exceptions=(\n                httpx.ReadTimeout,\n                httpx.PoolTimeout,\n                httpx.ConnectTimeout,\n                # `ConnectionResetError` when reading socket raises as a `ReadError`\n                httpx.ReadError,\n                # Sockets can be closed during writes resulting in a `WriteError`\n                httpx.WriteError,\n                # Uvicorn bug, see https://github.com/PrefectHQ/prefect/issues/7512\n                httpx.RemoteProtocolError,\n                # HTTP2 bug, see https://github.com/PrefectHQ/prefect/issues/7442\n                httpx.LocalProtocolError,\n            ),\n        )\n\n        # Convert to a Prefect response to add nicer errors messages\n        response = PrefectResponse.from_httpx_response(response)\n\n        # Always raise bad responses\n        # NOTE: We may want to remove this and handle responses per route in the\n        #       `PrefectClient`\n        response.raise_for_status()\n\n        return response\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.app_lifespan_context","title":"app_lifespan_context async","text":"

A context manager that calls startup/shutdown hooks for the given application.

Lifespan contexts are cached per application to avoid calling the lifespan hooks more than once if the context is entered in nested code. A no-op context will be returned if the context for the given application is already being managed.

This manager is robust to concurrent access within the event loop. For example, if you have concurrent contexts for the same application, it is guaranteed that startup hooks will be called before their context starts and shutdown hooks will only be called after their context exits.

A reference count is used to support nested use of clients without running lifespan hooks excessively. The first client context entered will create and enter a lifespan context. Each subsequent client will increment a reference count but will not create a new lifespan context. When each client context exits, the reference count is decremented. When the last client context exits, the lifespan will be closed.

In simple nested cases, the first client context will be the one to exit the lifespan. However, if client contexts are entered concurrently they may not exit in a consistent order. If the first client context were responsible for closing the lifespan, it would have to wait for all other client contexts to exit to avoid firing shutdown hooks while the application is in use. Waiting for the other clients to exit can introduce deadlocks, so instead the first client exits without closing the lifespan context, and reference counts are used to ensure the lifespan is closed once all of the clients are done.
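A minimal usage sketch (the FastAPI application and its hooks here are illustrative): nesting the context for the same application only increments the reference count, so the startup and shutdown hooks each run exactly once.

```python
import asyncio

from fastapi import FastAPI

from prefect.client.base import app_lifespan_context

app = FastAPI()


@app.on_event("startup")
async def startup():
    print("startup")  # runs when the outermost context is entered


@app.on_event("shutdown")
async def shutdown():
    print("shutdown")  # runs only after the last context has exited


async def main():
    async with app_lifespan_context(app):      # enters the lifespan, ref count = 1
        async with app_lifespan_context(app):  # no-op re-entry, ref count = 2
            pass                               # exiting here only decrements the count
    # "shutdown" is printed here, once the last context has exited


if __name__ == "__main__":
    asyncio.run(main())
```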

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/base.py
@asynccontextmanager\nasync def app_lifespan_context(app: FastAPI) -> ContextManager[None]:\n\"\"\"\n    A context manager that calls startup/shutdown hooks for the given application.\n\n    Lifespan contexts are cached per application to avoid calling the lifespan hooks\n    more than once if the context is entered in nested code. A no-op context will be\n    returned if the context for the given application is already being managed.\n\n    This manager is robust to concurrent access within the event loop. For example,\n    if you have concurrent contexts for the same application, it is guaranteed that\n    startup hooks will be called before their context starts and shutdown hooks will\n    only be called after their context exits.\n\n    A reference count is used to support nested use of clients without running\n    lifespan hooks excessively. The first client context entered will create and enter\n    a lifespan context. Each subsequent client will increment a reference count but will\n    not create a new lifespan context. When each client context exits, the reference\n    count is decremented. When the last client context exits, the lifespan will be\n    closed.\n\n    In simple nested cases, the first client context will be the one to exit the\n    lifespan. However, if client contexts are entered concurrently they may not exit\n    in a consistent order. If the first client context was responsible for closing\n    the lifespan, it would have to wait until all other client contexts to exit to\n    avoid firing shutdown hooks while the application is in use. Waiting for the other\n    clients to exit can introduce deadlocks, so, instead, the first client will exit\n    without closing the lifespan context and reference counts will be used to ensure\n    the lifespan is closed once all of the clients are done.\n    \"\"\"\n    # TODO: A deadlock has been observed during multithreaded use of clients while this\n    #       lifespan context is being used. This has only been reproduced on Python 3.7\n    #       and while we hope to discourage using multiple event loops in threads, this\n    #       bug may emerge again.\n    #       See https://github.com/PrefectHQ/orion/pull/1696\n    thread_id = threading.get_ident()\n\n    # The id of the application is used instead of the hash so each application instance\n    # is managed independently even if they share the same settings. 
We include the\n    # thread id since applications are managed separately per thread.\n    key = (thread_id, id(app))\n\n    # On exception, this will be populated with exception details\n    exc_info = (None, None, None)\n\n    # Get a lock unique to this thread since anyio locks are not threadsafe\n    lock = APP_LIFESPANS_LOCKS[thread_id]\n\n    async with lock:\n        if key in APP_LIFESPANS:\n            # The lifespan is already being managed, just increment the reference count\n            APP_LIFESPANS_REF_COUNTS[key] += 1\n        else:\n            # Create a new lifespan manager\n            APP_LIFESPANS[key] = context = LifespanManager(\n                app, startup_timeout=30, shutdown_timeout=30\n            )\n            APP_LIFESPANS_REF_COUNTS[key] = 1\n\n            # Ensure we enter the context before releasing the lock so startup hooks\n            # are complete before another client can be used\n            await context.__aenter__()\n\n    try:\n        yield\n    except BaseException:\n        exc_info = sys.exc_info()\n        raise\n    finally:\n        # If we do not shield against anyio cancellation, the lock will return\n        # immediately and the code in its context will not run, leaving the lifespan\n        # open\n        with anyio.CancelScope(shield=True):\n            async with lock:\n                # After the consumer exits the context, decrement the reference count\n                APP_LIFESPANS_REF_COUNTS[key] -= 1\n\n                # If this the last context to exit, close the lifespan\n                if APP_LIFESPANS_REF_COUNTS[key] <= 0:\n                    APP_LIFESPANS_REF_COUNTS.pop(key)\n                    context = APP_LIFESPANS.pop(key)\n                    await context.__aexit__(*exc_info)\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/","title":"prefect.client.cloud","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud","title":"prefect.client.cloud","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.CloudUnauthorizedError","title":"CloudUnauthorizedError","text":"

Bases: PrefectException

Raised when the CloudClient receives a 401 or 403 from the Cloud API.
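A hedged usage sketch (the API key below is a deliberately invalid placeholder, and a live connection to Prefect Cloud is assumed): a 401 or 403 from the Cloud API is re-raised as CloudUnauthorizedError, which makes credential problems easy to distinguish from other HTTP failures.

```python
import asyncio

from prefect.client.cloud import CloudUnauthorizedError, get_cloud_client


async def main():
    async with get_cloud_client(api_key="not-a-real-key") as client:
        try:
            await client.read_workspaces()
        except CloudUnauthorizedError:
            print("The configured API key was rejected by Prefect Cloud")


if __name__ == "__main__":
    asyncio.run(main())
```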

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/cloud.py
class CloudUnauthorizedError(PrefectException):\n\"\"\"\n    Raised when the CloudClient receives a 401 or 403 from the Cloud API.\n    \"\"\"\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.CloudClient","title":"CloudClient","text":"Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/cloud.py
class CloudClient:\n    def __init__(\n        self,\n        host: str,\n        api_key: str,\n        httpx_settings: dict = None,\n    ) -> None:\n        httpx_settings = httpx_settings or dict()\n        httpx_settings.setdefault(\"headers\", dict())\n        httpx_settings[\"headers\"].setdefault(\"Authorization\", f\"Bearer {api_key}\")\n\n        httpx_settings.setdefault(\"base_url\", host)\n        if not PREFECT_UNIT_TEST_MODE.value():\n            httpx_settings.setdefault(\"follow_redirects\", True)\n        self._client = PrefectHttpxClient(**httpx_settings)\n\n    async def api_healthcheck(self):\n\"\"\"\n        Attempts to connect to the Cloud API and raises the encountered exception if not\n        successful.\n\n        If successful, returns `None`.\n        \"\"\"\n        with anyio.fail_after(10):\n            await self.read_workspaces()\n\n    async def read_workspaces(self) -> List[Workspace]:\n        return pydantic.parse_obj_as(List[Workspace], await self.get(\"/me/workspaces\"))\n\n    async def read_worker_metadata(self) -> Dict[str, Any]:\n        return await self.get(\"collections/views/aggregate-worker-metadata\")\n\n    async def __aenter__(self):\n        await self._client.__aenter__()\n        return self\n\n    async def __aexit__(self, *exc_info):\n        return await self._client.__aexit__(*exc_info)\n\n    def __enter__(self):\n        raise RuntimeError(\n            \"The `CloudClient` must be entered with an async context. Use 'async \"\n            \"with CloudClient(...)' not 'with CloudClient(...)'\"\n        )\n\n    def __exit__(self, *_):\n        assert False, \"This should never be called but must be defined for __enter__\"\n\n    async def get(self, route, **kwargs):\n        return await self.request(\"GET\", route, **kwargs)\n\n    async def request(self, method, route, **kwargs):\n        try:\n            res = await self._client.request(method, route, **kwargs)\n            res.raise_for_status()\n        except httpx.HTTPStatusError as exc:\n            if exc.response.status_code in (\n                status.HTTP_401_UNAUTHORIZED,\n                status.HTTP_403_FORBIDDEN,\n            ):\n                raise CloudUnauthorizedError\n            else:\n                raise exc\n\n        if res.status_code == status.HTTP_204_NO_CONTENT:\n            return\n\n        return res.json()\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.CloudClient.api_healthcheck","title":"api_healthcheck async","text":"

Attempts to connect to the Cloud API and raises the encountered exception if not successful.

If successful, returns None.
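A short sketch (assumes a valid PREFECT_API_KEY in the active profile): the healthcheck simply calls read_workspaces under a ten-second timeout, so any connection or authorization error propagates to the caller.

```python
import asyncio

from prefect.client.cloud import get_cloud_client


async def main():
    async with get_cloud_client() as client:
        try:
            await client.api_healthcheck()  # returns None on success
            print("Prefect Cloud is reachable")
        except Exception as exc:
            print(f"Healthcheck failed: {exc!r}")


if __name__ == "__main__":
    asyncio.run(main())
```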

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/cloud.py
async def api_healthcheck(self):\n\"\"\"\n    Attempts to connect to the Cloud API and raises the encountered exception if not\n    successful.\n\n    If successful, returns `None`.\n    \"\"\"\n    with anyio.fail_after(10):\n        await self.read_workspaces()\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.get_cloud_client","title":"get_cloud_client","text":"

Needs a docstring.
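Pending a proper docstring, a usage sketch based on the signature below: with no arguments the client is built from PREFECT_CLOUD_API_URL and PREFECT_API_KEY, while passing infer_cloud_url=True derives the account URL from a workspace-scoped PREFECT_API_URL instead.

```python
import asyncio

from prefect.client.cloud import get_cloud_client


async def main():
    async with get_cloud_client() as client:
        workspaces = await client.read_workspaces()
        for workspace in workspaces:
            print(workspace)  # each item is a Workspace model


if __name__ == "__main__":
    asyncio.run(main())
```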

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/cloud.py
def get_cloud_client(\n    host: Optional[str] = None,\n    api_key: Optional[str] = None,\n    httpx_settings: Optional[dict] = None,\n    infer_cloud_url: bool = False,\n) -> \"CloudClient\":\n\"\"\"\n    Needs a docstring.\n    \"\"\"\n    if httpx_settings is not None:\n        httpx_settings = httpx_settings.copy()\n\n    if infer_cloud_url is False:\n        host = host or PREFECT_CLOUD_API_URL.value()\n    else:\n        configured_url = prefect.settings.PREFECT_API_URL.value()\n        host = re.sub(r\"accounts/.{36}/workspaces/.{36}\\Z\", \"\", configured_url)\n\n    return CloudClient(\n        host=host,\n        api_key=api_key or PREFECT_API_KEY.value(),\n        httpx_settings=httpx_settings,\n    )\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/orchestration/","title":"prefect.client.orchestration","text":"

Asynchronous client implementation for communicating with the Prefect REST API.

Explore the client by communicating with an in-memory webserver \u2014 no setup required:

$ # start python REPL with native await functionality\n$ python -m asyncio\n>>> from prefect import get_client\n>>> async with get_client() as client:\n...     response = await client.hello()\n...     print(response.json())\n\ud83d\udc4b\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration","title":"prefect.client.orchestration","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient","title":"PrefectClient","text":"

An asynchronous client for interacting with the Prefect REST API.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| api | Union[str, FastAPI] | the REST API URL or FastAPI application to connect to | required |
| api_key | str | An optional API key for authentication. | None |
| api_version | str | The API version this client is compatible with. | None |
| httpx_settings | dict | An optional dictionary of settings to pass to the underlying httpx.AsyncClient | None |

Examples:

Say hello to a Prefect REST API

>>> async with get_client() as client:\n>>>     response = await client.hello()\n>>>\n>>> print(response.json())\n\ud83d\udc4b\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
class PrefectClient:\n\"\"\"\n    An asynchronous client for interacting with the [Prefect REST API](/api-ref/rest-api/).\n\n    Args:\n        api: the REST API URL or FastAPI application to connect to\n        api_key: An optional API key for authentication.\n        api_version: The API version this client is compatible with.\n        httpx_settings: An optional dictionary of settings to pass to the underlying\n            `httpx.AsyncClient`\n\n    Examples:\n\n        Say hello to a Prefect REST API\n\n        <div class=\"terminal\">\n        ```\n        >>> async with get_client() as client:\n        >>>     response = await client.hello()\n        >>>\n        >>> print(response.json())\n        \ud83d\udc4b\n        ```\n        </div>\n    \"\"\"\n\n    def __init__(\n        self,\n        api: Union[str, FastAPI],\n        *,\n        api_key: str = None,\n        api_version: str = None,\n        httpx_settings: dict = None,\n    ) -> None:\n        httpx_settings = httpx_settings.copy() if httpx_settings else {}\n        httpx_settings.setdefault(\"headers\", {})\n\n        if PREFECT_API_TLS_INSECURE_SKIP_VERIFY:\n            httpx_settings.setdefault(\"verify\", False)\n\n        if api_version is None:\n            # deferred import to avoid importing the entire server unless needed\n            from prefect.server.api.server import SERVER_API_VERSION\n\n            api_version = SERVER_API_VERSION\n        httpx_settings[\"headers\"].setdefault(\"X-PREFECT-API-VERSION\", api_version)\n        if api_key:\n            httpx_settings[\"headers\"].setdefault(\"Authorization\", f\"Bearer {api_key}\")\n\n        # Context management\n        self._exit_stack = AsyncExitStack()\n        self._ephemeral_app: Optional[FastAPI] = None\n        self.manage_lifespan = True\n        self.server_type: ServerType\n\n        # Only set if this client started the lifespan of the application\n        self._ephemeral_lifespan: Optional[LifespanManager] = None\n\n        self._closed = False\n        self._started = False\n\n        # Connect to an external application\n        if isinstance(api, str):\n            if httpx_settings.get(\"app\"):\n                raise ValueError(\n                    \"Invalid httpx settings: `app` cannot be set when providing an \"\n                    \"api url. `app` is only for use with ephemeral instances. Provide \"\n                    \"it as the `api` parameter instead.\"\n                )\n            httpx_settings.setdefault(\"base_url\", api)\n\n            # See https://www.python-httpx.org/advanced/#pool-limit-configuration\n            httpx_settings.setdefault(\n                \"limits\",\n                httpx.Limits(\n                    # We see instability when allowing the client to open many connections at once.\n                    # Limiting concurrency results in more stable performance.\n                    max_connections=16,\n                    max_keepalive_connections=8,\n                    # The Prefect Cloud LB will keep connections alive for 30s.\n                    # Only allow the client to keep connections alive for 25s.\n                    keepalive_expiry=25,\n                ),\n            )\n\n            # See https://www.python-httpx.org/http2/\n            # Enabling HTTP/2 support on the client does not necessarily mean that your requests\n            # and responses will be transported over HTTP/2, since both the client and the server\n            # need to support HTTP/2. 
If you connect to a server that only supports HTTP/1.1 the\n            # client will use a standard HTTP/1.1 connection instead.\n            httpx_settings.setdefault(\"http2\", PREFECT_API_ENABLE_HTTP2.value())\n\n            self.server_type = (\n                ServerType.CLOUD\n                if api.startswith(PREFECT_CLOUD_API_URL.value())\n                else ServerType.SERVER\n            )\n\n        # Connect to an in-process application\n        elif isinstance(api, FastAPI):\n            self._ephemeral_app = api\n            self.server_type = ServerType.EPHEMERAL\n\n            # When using an ephemeral server, server-side exceptions can be raised\n            # client-side breaking all of our response error code handling. To work\n            # around this, we create an ASGI transport with application exceptions\n            # disabled instead of using the application directly.\n            # refs:\n            # - https://github.com/PrefectHQ/prefect/pull/9637\n            # - https://github.com/encode/starlette/blob/d3a11205ed35f8e5a58a711db0ff59c86fa7bb31/starlette/middleware/errors.py#L184\n            # - https://github.com/tiangolo/fastapi/blob/8cc967a7605d3883bd04ceb5d25cc94ae079612f/fastapi/applications.py#L163-L164\n            httpx_settings.setdefault(\n                \"transport\",\n                httpx.ASGITransport(\n                    app=self._ephemeral_app, raise_app_exceptions=False\n                ),\n            )\n            httpx_settings.setdefault(\"base_url\", \"http://ephemeral-prefect/api\")\n\n        else:\n            raise TypeError(\n                f\"Unexpected type {type(api).__name__!r} for argument `api`. Expected\"\n                \" 'str' or 'FastAPI'\"\n            )\n\n        # See https://www.python-httpx.org/advanced/#timeout-configuration\n        httpx_settings.setdefault(\n            \"timeout\",\n            httpx.Timeout(\n                connect=PREFECT_API_REQUEST_TIMEOUT.value(),\n                read=PREFECT_API_REQUEST_TIMEOUT.value(),\n                write=PREFECT_API_REQUEST_TIMEOUT.value(),\n                pool=PREFECT_API_REQUEST_TIMEOUT.value(),\n            ),\n        )\n\n        if not PREFECT_UNIT_TEST_MODE:\n            httpx_settings.setdefault(\"follow_redirects\", True)\n        self._client = PrefectHttpxClient(**httpx_settings)\n        self._loop = None\n\n        # See https://www.python-httpx.org/advanced/#custom-transports\n        #\n        # If we're using an HTTP/S client (not the ephemeral client), adjust the\n        # transport to add retries _after_ it is instantiated. 
If we alter the transport\n        # before instantiation, the transport will not be aware of proxies unless we\n        # reproduce all of the logic to make it so.\n        #\n        # Only alter the transport to set our default of 3 retries, don't modify any\n        # transport a user may have provided via httpx_settings.\n        #\n        # Making liberal use of getattr and isinstance checks here to avoid any\n        # surprises if the internals of httpx or httpcore change on us\n        if isinstance(api, str) and not httpx_settings.get(\"transport\"):\n            transport_for_url = getattr(self._client, \"_transport_for_url\", None)\n            if callable(transport_for_url):\n                server_transport = transport_for_url(httpx.URL(api))\n                if isinstance(server_transport, httpx.AsyncHTTPTransport):\n                    pool = getattr(server_transport, \"_pool\", None)\n                    if isinstance(pool, httpcore.AsyncConnectionPool):\n                        pool._retries = 3\n\n        self.logger = get_logger(\"client\")\n\n    @property\n    def api_url(self) -> httpx.URL:\n\"\"\"\n        Get the base URL for the API.\n        \"\"\"\n        return self._client.base_url\n\n    # API methods ----------------------------------------------------------------------\n\n    async def api_healthcheck(self) -> Optional[Exception]:\n\"\"\"\n        Attempts to connect to the API and returns the encountered exception if not\n        successful.\n\n        If successful, returns `None`.\n        \"\"\"\n        try:\n            await self._client.get(\"/health\")\n            return None\n        except Exception as exc:\n            return exc\n\n    async def hello(self) -> httpx.Response:\n\"\"\"\n        Send a GET request to /hello for testing purposes.\n        \"\"\"\n        return await self._client.get(\"/hello\")\n\n    async def create_flow(self, flow: \"FlowObject\") -> UUID:\n\"\"\"\n        Create a flow in the Prefect API.\n\n        Args:\n            flow: a [Flow][prefect.flows.Flow] object\n\n        Raises:\n            httpx.RequestError: if a flow was not created for any reason\n\n        Returns:\n            the ID of the flow in the backend\n        \"\"\"\n        return await self.create_flow_from_name(flow.name)\n\n    async def create_flow_from_name(self, flow_name: str) -> UUID:\n\"\"\"\n        Create a flow in the Prefect API.\n\n        Args:\n            flow_name: the name of the new flow\n\n        Raises:\n            httpx.RequestError: if a flow was not created for any reason\n\n        Returns:\n            the ID of the flow in the backend\n        \"\"\"\n        flow_data = FlowCreate(name=flow_name)\n        response = await self._client.post(\n            \"/flows/\", json=flow_data.dict(json_compatible=True)\n        )\n\n        flow_id = response.json().get(\"id\")\n        if not flow_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        # Return the id of the created flow\n        return UUID(flow_id)\n\n    async def read_flow(self, flow_id: UUID) -> Flow:\n\"\"\"\n        Query the Prefect API for a flow by id.\n\n        Args:\n            flow_id: the flow ID of interest\n\n        Returns:\n            a [Flow model][prefect.client.schemas.objects.Flow] representation of the flow\n        \"\"\"\n        response = await self._client.get(f\"/flows/{flow_id}\")\n        return Flow.parse_obj(response.json())\n\n    async def read_flows(\n        self,\n        *,\n    
    flow_filter: FlowFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        deployment_filter: DeploymentFilter = None,\n        work_pool_filter: WorkPoolFilter = None,\n        work_queue_filter: WorkQueueFilter = None,\n        sort: FlowSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[Flow]:\n\"\"\"\n        Query the Prefect API for flows. Only flows matching all criteria will\n        be returned.\n\n        Args:\n            flow_filter: filter criteria for flows\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            deployment_filter: filter criteria for deployments\n            work_pool_filter: filter criteria for work pools\n            work_queue_filter: filter criteria for work pool queues\n            sort: sort criteria for the flows\n            limit: limit for the flow query\n            offset: offset for the flow query\n\n        Returns:\n            a list of Flow model representations of the flows\n        \"\"\"\n        body = {\n            \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n                if flow_run_filter\n                else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"deployments\": (\n                deployment_filter.dict(json_compatible=True)\n                if deployment_filter\n                else None\n            ),\n            \"work_pools\": (\n                work_pool_filter.dict(json_compatible=True)\n                if work_pool_filter\n                else None\n            ),\n            \"work_queues\": (\n                work_queue_filter.dict(json_compatible=True)\n                if work_queue_filter\n                else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n\n        response = await self._client.post(\"/flows/filter\", json=body)\n        return pydantic.parse_obj_as(List[Flow], response.json())\n\n    async def read_flow_by_name(\n        self,\n        flow_name: str,\n    ) -> Flow:\n\"\"\"\n        Query the Prefect API for a flow by name.\n\n        Args:\n            flow_name: the name of a flow\n\n        Returns:\n            a fully hydrated Flow model\n        \"\"\"\n        response = await self._client.get(f\"/flows/name/{flow_name}\")\n        return Flow.parse_obj(response.json())\n\n    async def create_flow_run_from_deployment(\n        self,\n        deployment_id: UUID,\n        *,\n        parameters: Dict[str, Any] = None,\n        context: dict = None,\n        state: prefect.states.State = None,\n        name: str = None,\n        tags: Iterable[str] = None,\n        idempotency_key: str = None,\n        parent_task_run_id: UUID = None,\n        work_queue_name: str = None,\n    ) -> FlowRun:\n\"\"\"\n        Create a flow run for a deployment.\n\n        Args:\n            deployment_id: The deployment ID to create the flow run from\n            parameters: Parameter overrides for this flow run. Merged with the\n                deployment defaults\n            context: Optional run context data\n            state: The initial state for the run. 
If not provided, defaults to\n                `Scheduled` for now. Should always be a `Scheduled` type.\n            name: An optional name for the flow run. If not provided, the server will\n                generate a name.\n            tags: An optional iterable of tags to apply to the flow run; these tags\n                are merged with the deployment's tags.\n            idempotency_key: Optional idempotency key for creation of the flow run.\n                If the key matches the key of an existing flow run, the existing run will\n                be returned instead of creating a new one.\n            parent_task_run_id: if a subflow run is being created, the placeholder task\n                run identifier in the parent flow\n            work_queue_name: An optional work queue name to add this run to. If not provided,\n                will default to the deployment's set work queue.  If one is provided that does not\n                exist, a new work queue will be created within the deployment's work pool.\n\n        Raises:\n            httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n        Returns:\n            The flow run model\n        \"\"\"\n        parameters = parameters or {}\n        context = context or {}\n        state = state or prefect.states.Scheduled()\n        tags = tags or []\n\n        flow_run_create = DeploymentFlowRunCreate(\n            parameters=parameters,\n            context=context,\n            state=state.to_state_create(),\n            tags=tags,\n            name=name,\n            idempotency_key=idempotency_key,\n            parent_task_run_id=parent_task_run_id,\n        )\n\n        # done separately to avoid including this field in payloads sent to older API versions\n        if work_queue_name:\n            flow_run_create.work_queue_name = work_queue_name\n\n        response = await self._client.post(\n            f\"/deployments/{deployment_id}/create_flow_run\",\n            json=flow_run_create.dict(json_compatible=True, exclude_unset=True),\n        )\n        return FlowRun.parse_obj(response.json())\n\n    async def create_flow_run(\n        self,\n        flow: \"FlowObject\",\n        name: str = None,\n        parameters: Dict[str, Any] = None,\n        context: dict = None,\n        tags: Iterable[str] = None,\n        parent_task_run_id: UUID = None,\n        state: \"prefect.states.State\" = None,\n    ) -> FlowRun:\n\"\"\"\n        Create a flow run for a flow.\n\n        Args:\n            flow: The flow model to create the flow run for\n            name: An optional name for the flow run\n            parameters: Parameter overrides for this flow run.\n            context: Optional run context data\n            tags: a list of tags to apply to this flow run\n            parent_task_run_id: if a subflow run is being created, the placeholder task\n                run identifier in the parent flow\n            state: The initial state for the run. If not provided, defaults to\n                `Scheduled` for now. 
Should always be a `Scheduled` type.\n\n        Raises:\n            httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n        Returns:\n            The flow run model\n        \"\"\"\n        parameters = parameters or {}\n        context = context or {}\n\n        if state is None:\n            state = prefect.states.Pending()\n\n        # Retrieve the flow id\n        flow_id = await self.create_flow(flow)\n\n        flow_run_create = FlowRunCreate(\n            flow_id=flow_id,\n            flow_version=flow.version,\n            name=name,\n            parameters=parameters,\n            context=context,\n            tags=list(tags or []),\n            parent_task_run_id=parent_task_run_id,\n            state=state.to_state_create(),\n            empirical_policy=FlowRunPolicy(\n                retries=flow.retries,\n                retry_delay=flow.retry_delay_seconds,\n            ),\n        )\n\n        flow_run_create_json = flow_run_create.dict(json_compatible=True)\n        response = await self._client.post(\"/flow_runs/\", json=flow_run_create_json)\n        flow_run = FlowRun.parse_obj(response.json())\n\n        # Restore the parameters to the local objects to retain expectations about\n        # Python objects\n        flow_run.parameters = parameters\n\n        return flow_run\n\n    async def update_flow_run(\n        self,\n        flow_run_id: UUID,\n        flow_version: Optional[str] = None,\n        parameters: Optional[dict] = None,\n        name: Optional[str] = None,\n        tags: Optional[Iterable[str]] = None,\n        empirical_policy: Optional[FlowRunPolicy] = None,\n        infrastructure_pid: Optional[str] = None,\n    ) -> httpx.Response:\n\"\"\"\n        Update a flow run's details.\n\n        Args:\n            flow_run_id: The identifier for the flow run to update.\n            flow_version: A new version string for the flow run.\n            parameters: A dictionary of parameter values for the flow run. This will not\n                be merged with any existing parameters.\n            name: A new name for the flow run.\n            empirical_policy: A new flow run orchestration policy. This will not be\n                merged with any existing policy.\n            tags: An iterable of new tags for the flow run. 
These will not be merged with\n                any existing tags.\n            infrastructure_pid: The id of flow run as returned by an\n                infrastructure block.\n\n        Returns:\n            an `httpx.Response` object from the PATCH request\n        \"\"\"\n        params = {}\n        if flow_version is not None:\n            params[\"flow_version\"] = flow_version\n        if parameters is not None:\n            params[\"parameters\"] = parameters\n        if name is not None:\n            params[\"name\"] = name\n        if tags is not None:\n            params[\"tags\"] = tags\n        if empirical_policy is not None:\n            params[\"empirical_policy\"] = empirical_policy\n        if infrastructure_pid:\n            params[\"infrastructure_pid\"] = infrastructure_pid\n\n        flow_run_data = FlowRunUpdate(**params)\n\n        return await self._client.patch(\n            f\"/flow_runs/{flow_run_id}\",\n            json=flow_run_data.dict(json_compatible=True, exclude_unset=True),\n        )\n\n    async def delete_flow_run(\n        self,\n        flow_run_id: UUID,\n    ) -> None:\n\"\"\"\n        Delete a flow run by UUID.\n\n        Args:\n            flow_run_id: The flow run UUID of interest.\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If requests fails\n        \"\"\"\n        try:\n            await self._client.delete(f\"/flow_runs/{flow_run_id}\"),\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def create_concurrency_limit(\n        self,\n        tag: str,\n        concurrency_limit: int,\n    ) -> UUID:\n\"\"\"\n        Create a tag concurrency limit in the Prefect API. 
These limits govern concurrently\n        running tasks.\n\n        Args:\n            tag: a tag the concurrency limit is applied to\n            concurrency_limit: the maximum number of concurrent task runs for a given tag\n\n        Raises:\n            httpx.RequestError: if the concurrency limit was not created for any reason\n\n        Returns:\n            the ID of the concurrency limit in the backend\n        \"\"\"\n\n        concurrency_limit_create = ConcurrencyLimitCreate(\n            tag=tag,\n            concurrency_limit=concurrency_limit,\n        )\n        response = await self._client.post(\n            \"/concurrency_limits/\",\n            json=concurrency_limit_create.dict(json_compatible=True),\n        )\n\n        concurrency_limit_id = response.json().get(\"id\")\n\n        if not concurrency_limit_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        return UUID(concurrency_limit_id)\n\n    async def read_concurrency_limit_by_tag(\n        self,\n        tag: str,\n    ):\n\"\"\"\n        Read the concurrency limit set on a specific tag.\n\n        Args:\n            tag: a tag the concurrency limit is applied to\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: if the concurrency limit was not created for any reason\n\n        Returns:\n            the concurrency limit set on a specific tag\n        \"\"\"\n        try:\n            response = await self._client.get(\n                f\"/concurrency_limits/tag/{tag}\",\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n        concurrency_limit_id = response.json().get(\"id\")\n\n        if not concurrency_limit_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        concurrency_limit = ConcurrencyLimit.parse_obj(response.json())\n        return concurrency_limit\n\n    async def read_concurrency_limits(\n        self,\n        limit: int,\n        offset: int,\n    ):\n\"\"\"\n        Lists concurrency limits set on task run tags.\n\n        Args:\n            limit: the maximum number of concurrency limits returned\n            offset: the concurrency limit query offset\n\n        Returns:\n            a list of concurrency limits\n        \"\"\"\n\n        body = {\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n\n        response = await self._client.post(\"/concurrency_limits/filter\", json=body)\n        return pydantic.parse_obj_as(List[ConcurrencyLimit], response.json())\n\n    async def reset_concurrency_limit_by_tag(\n        self,\n        tag: str,\n        slot_override: Optional[List[Union[UUID, str]]] = None,\n    ):\n\"\"\"\n        Resets the concurrency limit slots set on a specific tag.\n\n        Args:\n            tag: a tag the concurrency limit is applied to\n            slot_override: a list of task run IDs that are currently using a\n                concurrency slot, please check that any task run IDs included in\n                `slot_override` are currently running, otherwise those concurrency\n                slots will never be released.\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        \"\"\"\n        if slot_override 
is not None:\n            slot_override = [str(slot) for slot in slot_override]\n\n        try:\n            await self._client.post(\n                f\"/concurrency_limits/tag/{tag}/reset\",\n                json=dict(slot_override=slot_override),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def delete_concurrency_limit_by_tag(\n        self,\n        tag: str,\n    ):\n\"\"\"\n        Delete the concurrency limit set on a specific tag.\n\n        Args:\n            tag: a tag the concurrency limit is applied to\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        \"\"\"\n        try:\n            await self._client.delete(\n                f\"/concurrency_limits/tag/{tag}\",\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def create_work_queue(\n        self,\n        name: str,\n        tags: Optional[List[str]] = None,\n        description: Optional[str] = None,\n        is_paused: Optional[bool] = None,\n        concurrency_limit: Optional[int] = None,\n        priority: Optional[int] = None,\n        work_pool_name: Optional[str] = None,\n    ) -> WorkQueue:\n\"\"\"\n        Create a work queue.\n\n        Args:\n            name: a unique name for the work queue\n            tags: DEPRECATED: an optional list of tags to filter on; only work scheduled with these tags\n                will be included in the queue. This option will be removed on 2023-02-23.\n            description: An optional description for the work queue.\n            is_paused: Whether or not the work queue is paused.\n            concurrency_limit: An optional concurrency limit for the work queue.\n            priority: The queue's priority. 
Lower values are higher priority (1 is the highest).\n            work_pool_name: The name of the work pool to use for this queue.\n\n        Raises:\n            prefect.exceptions.ObjectAlreadyExists: If request returns 409\n            httpx.RequestError: If request fails\n\n        Returns:\n            The created work queue\n        \"\"\"\n        if tags:\n            warnings.warn(\n                (\n                    \"The use of tags for creating work queue filters is deprecated.\"\n                    \" This option will be removed on 2023-02-23.\"\n                ),\n                DeprecationWarning,\n            )\n            filter = QueueFilter(tags=tags)\n        else:\n            filter = None\n        create_model = WorkQueueCreate(name=name, filter=filter)\n        if description is not None:\n            create_model.description = description\n        if is_paused is not None:\n            create_model.is_paused = is_paused\n        if concurrency_limit is not None:\n            create_model.concurrency_limit = concurrency_limit\n        if priority is not None:\n            create_model.priority = priority\n\n        data = create_model.dict(json_compatible=True)\n        try:\n            if work_pool_name is not None:\n                response = await self._client.post(\n                    f\"/work_pools/{work_pool_name}/queues\", json=data\n                )\n            else:\n                response = await self._client.post(\"/work_queues/\", json=data)\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            elif e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return WorkQueue.parse_obj(response.json())\n\n    async def read_work_queue_by_name(\n        self,\n        name: str,\n        work_pool_name: Optional[str] = None,\n    ) -> WorkQueue:\n\"\"\"\n        Read a work queue by name.\n\n        Args:\n            name (str): a unique name for the work queue\n            work_pool_name (str, optional): the name of the work pool\n                the queue belongs to.\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: if no work queue is found\n            httpx.HTTPStatusError: other status errors\n\n        Returns:\n            WorkQueue: a work queue API object\n        \"\"\"\n        try:\n            if work_pool_name is not None:\n                response = await self._client.get(\n                    f\"/work_pools/{work_pool_name}/queues/{name}\"\n                )\n            else:\n                response = await self._client.get(f\"/work_queues/name/{name}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n        return WorkQueue.parse_obj(response.json())\n\n    async def update_work_queue(self, id: UUID, **kwargs):\n\"\"\"\n        Update properties of a work queue.\n\n        Args:\n            id: the ID of the work queue to update\n            **kwargs: the fields to update\n\n        Raises:\n            ValueError: if no kwargs are provided\n            prefect.exceptions.ObjectNotFound: if request returns 404\n            httpx.RequestError: if the 
request fails\n\n        \"\"\"\n        if not kwargs:\n            raise ValueError(\"No fields provided to update.\")\n\n        data = WorkQueueUpdate(**kwargs).dict(json_compatible=True, exclude_unset=True)\n        try:\n            await self._client.patch(f\"/work_queues/{id}\", json=data)\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def get_runs_in_work_queue(\n        self,\n        id: UUID,\n        limit: int = 10,\n        scheduled_before: datetime.datetime = None,\n    ) -> List[FlowRun]:\n\"\"\"\n        Read flow runs off a work queue.\n\n        Args:\n            id: the id of the work queue to read from\n            limit: a limit on the number of runs to return\n            scheduled_before: a timestamp; only runs scheduled before this time will be returned.\n                Defaults to now.\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        Returns:\n            List[FlowRun]: a list of FlowRun objects read from the queue\n        \"\"\"\n        if scheduled_before is None:\n            scheduled_before = pendulum.now(\"UTC\")\n\n        try:\n            response = await self._client.post(\n                f\"/work_queues/{id}/get_runs\",\n                json={\n                    \"limit\": limit,\n                    \"scheduled_before\": scheduled_before.isoformat(),\n                },\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return pydantic.parse_obj_as(List[FlowRun], response.json())\n\n    async def read_work_queue(\n        self,\n        id: UUID,\n    ) -> WorkQueue:\n\"\"\"\n        Read a work queue.\n\n        Args:\n            id: the id of the work queue to load\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        Returns:\n            WorkQueue: an instantiated WorkQueue object\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/work_queues/{id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return WorkQueue.parse_obj(response.json())\n\n    async def match_work_queues(\n        self,\n        prefixes: List[str],\n    ) -> List[WorkQueue]:\n\"\"\"\n        Query the Prefect API for work queues with names with a specific prefix.\n\n        Args:\n            prefixes: a list of strings used to match work queue name prefixes\n\n        Returns:\n            a list of WorkQueue model representations\n                of the work queues\n        \"\"\"\n        page_length = 100\n        current_page = 0\n        work_queues = []\n\n        while True:\n            new_queues = await self.read_work_queues(\n                offset=current_page * page_length,\n                limit=page_length,\n                work_queue_filter=WorkQueueFilter(\n                    name=WorkQueueFilterName(startswith_=prefixes)\n                
),\n            )\n            if not new_queues:\n                break\n            work_queues += new_queues\n            current_page += 1\n\n        return work_queues\n\n    async def delete_work_queue_by_id(\n        self,\n        id: UUID,\n    ):\n\"\"\"\n        Delete a work queue by its ID.\n\n        Args:\n            id: the id of the work queue to delete\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If requests fails\n        \"\"\"\n        try:\n            await self._client.delete(\n                f\"/work_queues/{id}\",\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def create_block_type(self, block_type: BlockTypeCreate) -> BlockType:\n\"\"\"\n        Create a block type in the Prefect API.\n        \"\"\"\n        try:\n            response = await self._client.post(\n                \"/block_types/\",\n                json=block_type.dict(\n                    json_compatible=True, exclude_unset=True, exclude={\"id\"}\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            else:\n                raise\n        return BlockType.parse_obj(response.json())\n\n    async def create_block_schema(self, block_schema: BlockSchemaCreate) -> BlockSchema:\n\"\"\"\n        Create a block schema in the Prefect API.\n        \"\"\"\n        try:\n            response = await self._client.post(\n                \"/block_schemas/\",\n                json=block_schema.dict(\n                    json_compatible=True,\n                    exclude_unset=True,\n                    exclude={\"id\", \"block_type\", \"checksum\"},\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            else:\n                raise\n        return BlockSchema.parse_obj(response.json())\n\n    async def create_block_document(\n        self,\n        block_document: Union[BlockDocument, BlockDocumentCreate],\n        include_secrets: bool = True,\n    ) -> BlockDocument:\n\"\"\"\n        Create a block document in the Prefect API. This data is used to configure a\n        corresponding Block.\n\n        Args:\n            include_secrets (bool): whether to include secret values\n                on the stored Block, corresponding to Pydantic's `SecretStr` and\n                `SecretBytes` fields. 
Note Blocks may not work as expected if\n                this is set to `False`.\n        \"\"\"\n        if isinstance(block_document, BlockDocument):\n            block_document = BlockDocumentCreate.parse_obj(\n                block_document.dict(\n                    json_compatible=True,\n                    include_secrets=include_secrets,\n                    exclude_unset=True,\n                    exclude={\"id\", \"block_schema\", \"block_type\"},\n                ),\n            )\n\n        try:\n            response = await self._client.post(\n                \"/block_documents/\",\n                json=block_document.dict(\n                    json_compatible=True,\n                    include_secrets=include_secrets,\n                    exclude_unset=True,\n                    exclude={\"id\", \"block_schema\", \"block_type\"},\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            else:\n                raise\n        return BlockDocument.parse_obj(response.json())\n\n    async def update_block_document(\n        self,\n        block_document_id: UUID,\n        block_document: BlockDocumentUpdate,\n    ):\n\"\"\"\n        Update a block document in the Prefect API.\n        \"\"\"\n        try:\n            await self._client.patch(\n                f\"/block_documents/{block_document_id}\",\n                json=block_document.dict(\n                    json_compatible=True,\n                    exclude_unset=True,\n                    include={\"data\", \"merge_existing_data\", \"block_schema_id\"},\n                    include_secrets=True,\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def delete_block_document(self, block_document_id: UUID):\n\"\"\"\n        Delete a block document.\n        \"\"\"\n        try:\n            await self._client.delete(f\"/block_documents/{block_document_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_block_type_by_slug(self, slug: str) -> BlockType:\n\"\"\"\n        Read a block type by its slug.\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/block_types/slug/{slug}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return BlockType.parse_obj(response.json())\n\n    async def read_block_schema_by_checksum(\n        self, checksum: str, version: Optional[str] = None\n    ) -> BlockSchema:\n\"\"\"\n        Look up a block schema checksum\n        \"\"\"\n        try:\n            url = f\"/block_schemas/checksum/{checksum}\"\n            if version is not None:\n                url = f\"{url}?version={version}\"\n            response = await self._client.get(url)\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise 
prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return BlockSchema.parse_obj(response.json())\n\n    async def update_block_type(self, block_type_id: UUID, block_type: BlockTypeUpdate):\n\"\"\"\n        Update a block document in the Prefect API.\n        \"\"\"\n        try:\n            await self._client.patch(\n                f\"/block_types/{block_type_id}\",\n                json=block_type.dict(\n                    json_compatible=True,\n                    exclude_unset=True,\n                    include=BlockTypeUpdate.updatable_fields(),\n                    include_secrets=True,\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def delete_block_type(self, block_type_id: UUID):\n\"\"\"\n        Delete a block type.\n        \"\"\"\n        try:\n            await self._client.delete(f\"/block_types/{block_type_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            elif (\n                e.response.status_code == status.HTTP_403_FORBIDDEN\n                and e.response.json()[\"detail\"]\n                == \"protected block types cannot be deleted.\"\n            ):\n                raise prefect.exceptions.ProtectedBlockError(\n                    \"Protected block types cannot be deleted.\"\n                ) from e\n            else:\n                raise\n\n    async def read_block_types(self) -> List[BlockType]:\n\"\"\"\n        Read all block types\n        Raises:\n            httpx.RequestError: if the block types were not found\n\n        Returns:\n            List of BlockTypes.\n        \"\"\"\n        response = await self._client.post(\"/block_types/filter\", json={})\n        return pydantic.parse_obj_as(List[BlockType], response.json())\n\n    async def read_block_schemas(self) -> List[BlockSchema]:\n\"\"\"\n        Read all block schemas\n        Raises:\n            httpx.RequestError: if a valid block schema was not found\n\n        Returns:\n            A BlockSchema.\n        \"\"\"\n        response = await self._client.post(\"/block_schemas/filter\", json={})\n        return pydantic.parse_obj_as(List[BlockSchema], response.json())\n\n    async def read_block_document(\n        self,\n        block_document_id: UUID,\n        include_secrets: bool = True,\n    ):\n\"\"\"\n        Read the block document with the specified ID.\n\n        Args:\n            block_document_id: the block document id\n            include_secrets (bool): whether to include secret values\n                on the Block, corresponding to Pydantic's `SecretStr` and\n                `SecretBytes` fields. These fields are automatically obfuscated\n                by Pydantic, but users can additionally choose not to receive\n                their values from the API. 
Note that any business logic on the\n                Block may not work if this is `False`.\n\n        Raises:\n            httpx.RequestError: if the block document was not found for any reason\n\n        Returns:\n            A block document or None.\n        \"\"\"\n        try:\n            response = await self._client.get(\n                f\"/block_documents/{block_document_id}\",\n                params=dict(include_secrets=include_secrets),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return BlockDocument.parse_obj(response.json())\n\n    async def read_block_document_by_name(\n        self,\n        name: str,\n        block_type_slug: str,\n        include_secrets: bool = True,\n    ):\n\"\"\"\n        Read the block document with the specified name that corresponds to a\n        specific block type name.\n\n        Args:\n            name: The block document name.\n            block_type_slug: The block type slug.\n            include_secrets (bool): whether to include secret values\n                on the Block, corresponding to Pydantic's `SecretStr` and\n                `SecretBytes` fields. These fields are automatically obfuscated\n                by Pydantic, but users can additionally choose not to receive\n                their values from the API. Note that any business logic on the\n                Block may not work if this is `False`.\n\n        Raises:\n            httpx.RequestError: if the block document was not found for any reason\n\n        Returns:\n            A block document or None.\n        \"\"\"\n        try:\n            response = await self._client.get(\n                f\"/block_types/slug/{block_type_slug}/block_documents/name/{name}\",\n                params=dict(include_secrets=include_secrets),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return BlockDocument.parse_obj(response.json())\n\n    async def read_block_documents(\n        self,\n        block_schema_type: Optional[str] = None,\n        offset: Optional[int] = None,\n        limit: Optional[int] = None,\n        include_secrets: bool = True,\n    ):\n\"\"\"\n        Read block documents\n\n        Args:\n            block_schema_type: an optional block schema type\n            offset: an offset\n            limit: the number of blocks to return\n            include_secrets (bool): whether to include secret values\n                on the Block, corresponding to Pydantic's `SecretStr` and\n                `SecretBytes` fields. These fields are automatically obfuscated\n                by Pydantic, but users can additionally choose not to receive\n                their values from the API. 
Note that any business logic on the\n                Block may not work if this is `False`.\n\n        Returns:\n            A list of block documents\n        \"\"\"\n        response = await self._client.post(\n            \"/block_documents/filter\",\n            json=dict(\n                block_schema_type=block_schema_type,\n                offset=offset,\n                limit=limit,\n                include_secrets=include_secrets,\n            ),\n        )\n        return pydantic.parse_obj_as(List[BlockDocument], response.json())\n\n    async def create_deployment(\n        self,\n        flow_id: UUID,\n        name: str,\n        version: str = None,\n        schedule: SCHEDULE_TYPES = None,\n        parameters: Dict[str, Any] = None,\n        description: str = None,\n        work_queue_name: str = None,\n        work_pool_name: str = None,\n        tags: List[str] = None,\n        storage_document_id: UUID = None,\n        manifest_path: str = None,\n        path: str = None,\n        entrypoint: str = None,\n        infrastructure_document_id: UUID = None,\n        infra_overrides: Dict[str, Any] = None,\n        parameter_openapi_schema: dict = None,\n        is_schedule_active: Optional[bool] = None,\n        pull_steps: Optional[List[dict]] = None,\n    ) -> UUID:\n\"\"\"\n        Create a deployment.\n\n        Args:\n            flow_id: the flow ID to create a deployment for\n            name: the name of the deployment\n            version: an optional version string for the deployment\n            schedule: an optional schedule to apply to the deployment\n            tags: an optional list of tags to apply to the deployment\n            storage_document_id: an reference to the storage block document\n                used for the deployed flow\n            infrastructure_document_id: an reference to the infrastructure block document\n                to use for this deployment\n\n        Raises:\n            httpx.RequestError: if the deployment was not created for any reason\n\n        Returns:\n            the ID of the deployment in the backend\n        \"\"\"\n        deployment_create = DeploymentCreate(\n            flow_id=flow_id,\n            name=name,\n            version=version,\n            schedule=schedule,\n            parameters=dict(parameters or {}),\n            tags=list(tags or []),\n            work_queue_name=work_queue_name,\n            description=description,\n            storage_document_id=storage_document_id,\n            path=path,\n            entrypoint=entrypoint,\n            manifest_path=manifest_path,  # for backwards compat\n            infrastructure_document_id=infrastructure_document_id,\n            infra_overrides=infra_overrides or {},\n            parameter_openapi_schema=parameter_openapi_schema,\n            is_schedule_active=is_schedule_active,\n            pull_steps=pull_steps,\n        )\n\n        if work_pool_name is not None:\n            deployment_create.work_pool_name = work_pool_name\n\n        # Exclude newer fields that are not set to avoid compatibility issues\n        exclude = {\n            field\n            for field in [\"work_pool_name\", \"work_queue_name\"]\n            if field not in deployment_create.__fields_set__\n        }\n\n        if deployment_create.is_schedule_active is None:\n            exclude.add(\"is_schedule_active\")\n\n        if deployment_create.pull_steps is None:\n            exclude.add(\"pull_steps\")\n\n        json = deployment_create.dict(json_compatible=True, 
exclude=exclude)\n        response = await self._client.post(\n            \"/deployments/\",\n            json=json,\n        )\n        deployment_id = response.json().get(\"id\")\n        if not deployment_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        return UUID(deployment_id)\n\n    async def update_deployment(\n        self,\n        deployment: Deployment,\n        schedule: SCHEDULE_TYPES = None,\n        is_schedule_active: bool = None,\n    ):\n        deployment_update = DeploymentUpdate(\n            version=deployment.version,\n            schedule=schedule if schedule is not None else deployment.schedule,\n            is_schedule_active=(\n                is_schedule_active\n                if is_schedule_active is not None\n                else deployment.is_schedule_active\n            ),\n            description=deployment.description,\n            work_queue_name=deployment.work_queue_name,\n            tags=deployment.tags,\n            manifest_path=deployment.manifest_path,\n            path=deployment.path,\n            entrypoint=deployment.entrypoint,\n            parameters=deployment.parameters,\n            storage_document_id=deployment.storage_document_id,\n            infrastructure_document_id=deployment.infrastructure_document_id,\n            infra_overrides=deployment.infra_overrides,\n        )\n\n        if getattr(deployment, \"work_pool_name\", None) is not None:\n            deployment_update.work_pool_name = deployment.work_pool_name\n\n        await self._client.patch(\n            f\"/deployments/{deployment.id}\",\n            json=deployment_update.dict(json_compatible=True),\n        )\n\n    async def _create_deployment_from_schema(self, schema: DeploymentCreate) -> UUID:\n\"\"\"\n        Create a deployment from a prepared `DeploymentCreate` schema.\n        \"\"\"\n        # TODO: We are likely to remove this method once we have considered the\n        #       packaging interface for deployments further.\n        response = await self._client.post(\n            \"/deployments/\", json=schema.dict(json_compatible=True)\n        )\n        deployment_id = response.json().get(\"id\")\n        if not deployment_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        return UUID(deployment_id)\n\n    async def read_deployment(\n        self,\n        deployment_id: UUID,\n    ) -> DeploymentResponse:\n\"\"\"\n        Query the Prefect API for a deployment by id.\n\n        Args:\n            deployment_id: the deployment ID of interest\n\n        Returns:\n            a [Deployment model][prefect.client.schemas.objects.Deployment] representation of the deployment\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/deployments/{deployment_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return DeploymentResponse.parse_obj(response.json())\n\n    async def read_deployment_by_name(\n        self,\n        name: str,\n    ) -> DeploymentResponse:\n\"\"\"\n        Query the Prefect API for a deployment by name.\n\n        Args:\n            name: A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        
Returns:\n            a Deployment model representation of the deployment\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/deployments/name/{name}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n        return DeploymentResponse.parse_obj(response.json())\n\n    async def read_deployments(\n        self,\n        *,\n        flow_filter: FlowFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        deployment_filter: DeploymentFilter = None,\n        work_pool_filter: WorkPoolFilter = None,\n        work_queue_filter: WorkQueueFilter = None,\n        limit: int = None,\n        sort: DeploymentSort = None,\n        offset: int = 0,\n    ) -> List[DeploymentResponse]:\n\"\"\"\n        Query the Prefect API for deployments. Only deployments matching all\n        the provided criteria will be returned.\n\n        Args:\n            flow_filter: filter criteria for flows\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            deployment_filter: filter criteria for deployments\n            work_pool_filter: filter criteria for work pools\n            work_queue_filter: filter criteria for work pool queues\n            limit: a limit for the deployment query\n            offset: an offset for the deployment query\n\n        Returns:\n            a list of Deployment model representations\n                of the deployments\n        \"\"\"\n        body = {\n            \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n                if flow_run_filter\n                else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"deployments\": (\n                deployment_filter.dict(json_compatible=True)\n                if deployment_filter\n                else None\n            ),\n            \"work_pools\": (\n                work_pool_filter.dict(json_compatible=True)\n                if work_pool_filter\n                else None\n            ),\n            \"work_pool_queues\": (\n                work_queue_filter.dict(json_compatible=True)\n                if work_queue_filter\n                else None\n            ),\n            \"limit\": limit,\n            \"offset\": offset,\n            \"sort\": sort,\n        }\n\n        response = await self._client.post(\"/deployments/filter\", json=body)\n        return pydantic.parse_obj_as(List[DeploymentResponse], response.json())\n\n    async def delete_deployment(\n        self,\n        deployment_id: UUID,\n    ):\n\"\"\"\n        Delete deployment by id.\n\n        Args:\n            deployment_id: The deployment id of interest.\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If requests fails\n        \"\"\"\n        try:\n            await self._client.delete(f\"/deployments/{deployment_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from 
e\n            else:\n                raise\n\n    async def read_flow_run(self, flow_run_id: UUID) -> FlowRun:\n\"\"\"\n        Query the Prefect API for a flow run by id.\n\n        Args:\n            flow_run_id: the flow run ID of interest\n\n        Returns:\n            a Flow Run model representation of the flow run\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/flow_runs/{flow_run_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return FlowRun.parse_obj(response.json())\n\n    async def resume_flow_run(self, flow_run_id: UUID) -> OrchestrationResult:\n\"\"\"\n        Resumes a paused flow run.\n\n        Args:\n            flow_run_id: the flow run ID of interest\n\n        Returns:\n            an OrchestrationResult model representation of state orchestration output\n        \"\"\"\n        try:\n            response = await self._client.post(f\"/flow_runs/{flow_run_id}/resume\")\n        except httpx.HTTPStatusError:\n            raise\n\n        return OrchestrationResult.parse_obj(response.json())\n\n    async def read_flow_runs(\n        self,\n        *,\n        flow_filter: FlowFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        deployment_filter: DeploymentFilter = None,\n        work_pool_filter: WorkPoolFilter = None,\n        work_queue_filter: WorkQueueFilter = None,\n        sort: FlowRunSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[FlowRun]:\n\"\"\"\n        Query the Prefect API for flow runs. Only flow runs matching all criteria will\n        be returned.\n\n        Args:\n            flow_filter: filter criteria for flows\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            deployment_filter: filter criteria for deployments\n            work_pool_filter: filter criteria for work pools\n            work_queue_filter: filter criteria for work pool queues\n            sort: sort criteria for the flow runs\n            limit: limit for the flow run query\n            offset: offset for the flow run query\n\n        Returns:\n            a list of Flow Run model representations\n                of the flow runs\n        \"\"\"\n        body = {\n            \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n                if flow_run_filter\n                else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"deployments\": (\n                deployment_filter.dict(json_compatible=True)\n                if deployment_filter\n                else None\n            ),\n            \"work_pools\": (\n                work_pool_filter.dict(json_compatible=True)\n                if work_pool_filter\n                else None\n            ),\n            \"work_pool_queues\": (\n                work_queue_filter.dict(json_compatible=True)\n                if work_queue_filter\n                else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n\n        
response = await self._client.post(\"/flow_runs/filter\", json=body)\n        return pydantic.parse_obj_as(List[FlowRun], response.json())\n\n    async def set_flow_run_state(\n        self,\n        flow_run_id: UUID,\n        state: \"prefect.states.State\",\n        force: bool = False,\n    ) -> OrchestrationResult:\n\"\"\"\n        Set the state of a flow run.\n\n        Args:\n            flow_run_id: the id of the flow run\n            state: the state to set\n            force: if True, disregard orchestration logic when setting the state,\n                forcing the Prefect API to accept the state\n\n        Returns:\n            an OrchestrationResult model representation of state orchestration output\n        \"\"\"\n        state_create = state.to_state_create()\n        state_create.state_details.flow_run_id = flow_run_id\n        try:\n            response = await self._client.post(\n                f\"/flow_runs/{flow_run_id}/set_state\",\n                json=dict(state=state_create.dict(json_compatible=True), force=force),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n        return OrchestrationResult.parse_obj(response.json())\n\n    async def read_flow_run_states(\n        self, flow_run_id: UUID\n    ) -> List[prefect.states.State]:\n\"\"\"\n        Query for the states of a flow run\n\n        Args:\n            flow_run_id: the id of the flow run\n\n        Returns:\n            a list of State model representations\n                of the flow run states\n        \"\"\"\n        response = await self._client.get(\n            \"/flow_run_states/\", params=dict(flow_run_id=str(flow_run_id))\n        )\n        return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n\n    async def set_task_run_name(self, task_run_id: UUID, name: str):\n        task_run_data = TaskRunUpdate(name=name)\n        return await self._client.patch(\n            f\"/task_runs/{task_run_id}\",\n            json=task_run_data.dict(json_compatible=True, exclude_unset=True),\n        )\n\n    async def create_task_run(\n        self,\n        task: \"TaskObject\",\n        flow_run_id: UUID,\n        dynamic_key: str,\n        name: str = None,\n        extra_tags: Iterable[str] = None,\n        state: prefect.states.State = None,\n        task_inputs: Dict[\n            str,\n            List[\n                Union[\n                    TaskRunResult,\n                    Parameter,\n                    Constant,\n                ]\n            ],\n        ] = None,\n    ) -> TaskRun:\n\"\"\"\n        Create a task run\n\n        Args:\n            task: The Task to run\n            flow_run_id: The flow run id with which to associate the task run\n            dynamic_key: A key unique to this particular run of a Task within the flow\n            name: An optional name for the task run\n            extra_tags: an optional list of extra tags to apply to the task run in\n                addition to `task.tags`\n            state: The initial state for the run. If not provided, defaults to\n                `Pending` for now. 
Should always be a `Scheduled` type.\n            task_inputs: the set of inputs passed to the task\n\n        Returns:\n            The created task run.\n        \"\"\"\n        tags = set(task.tags).union(extra_tags or [])\n\n        if state is None:\n            state = prefect.states.Pending()\n\n        task_run_data = TaskRunCreate(\n            name=name,\n            flow_run_id=flow_run_id,\n            task_key=task.task_key,\n            dynamic_key=dynamic_key,\n            tags=list(tags),\n            task_version=task.version,\n            empirical_policy=TaskRunPolicy(\n                retries=task.retries,\n                retry_delay=task.retry_delay_seconds,\n                retry_jitter_factor=task.retry_jitter_factor,\n            ),\n            state=state.to_state_create(),\n            task_inputs=task_inputs or {},\n        )\n\n        response = await self._client.post(\n            \"/task_runs/\", json=task_run_data.dict(json_compatible=True)\n        )\n        return TaskRun.parse_obj(response.json())\n\n    async def read_task_run(self, task_run_id: UUID) -> TaskRun:\n\"\"\"\n        Query the Prefect API for a task run by id.\n\n        Args:\n            task_run_id: the task run ID of interest\n\n        Returns:\n            a Task Run model representation of the task run\n        \"\"\"\n        response = await self._client.get(f\"/task_runs/{task_run_id}\")\n        return TaskRun.parse_obj(response.json())\n\n    async def read_task_runs(\n        self,\n        *,\n        flow_filter: FlowFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        deployment_filter: DeploymentFilter = None,\n        sort: TaskRunSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[TaskRun]:\n\"\"\"\n        Query the Prefect API for task runs. 
Only task runs matching all criteria will\n        be returned.\n\n        Args:\n            flow_filter: filter criteria for flows\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            deployment_filter: filter criteria for deployments\n            sort: sort criteria for the task runs\n            limit: a limit for the task run query\n            offset: an offset for the task run query\n\n        Returns:\n            a list of Task Run model representations\n                of the task runs\n        \"\"\"\n        body = {\n            \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n                if flow_run_filter\n                else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"deployments\": (\n                deployment_filter.dict(json_compatible=True)\n                if deployment_filter\n                else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n        response = await self._client.post(\"/task_runs/filter\", json=body)\n        return pydantic.parse_obj_as(List[TaskRun], response.json())\n\n    async def set_task_run_state(\n        self,\n        task_run_id: UUID,\n        state: prefect.states.State,\n        force: bool = False,\n    ) -> OrchestrationResult:\n\"\"\"\n        Set the state of a task run.\n\n        Args:\n            task_run_id: the id of the task run\n            state: the state to set\n            force: if True, disregard orchestration logic when setting the state,\n                forcing the Prefect API to accept the state\n\n        Returns:\n            an OrchestrationResult model representation of state orchestration output\n        \"\"\"\n        state_create = state.to_state_create()\n        state_create.state_details.task_run_id = task_run_id\n        response = await self._client.post(\n            f\"/task_runs/{task_run_id}/set_state\",\n            json=dict(state=state_create.dict(json_compatible=True), force=force),\n        )\n        return OrchestrationResult.parse_obj(response.json())\n\n    async def read_task_run_states(\n        self, task_run_id: UUID\n    ) -> List[prefect.states.State]:\n\"\"\"\n        Query for the states of a task run\n\n        Args:\n            task_run_id: the id of the task run\n\n        Returns:\n            a list of State model representations of the task run states\n        \"\"\"\n        response = await self._client.get(\n            \"/task_run_states/\", params=dict(task_run_id=str(task_run_id))\n        )\n        return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n\n    async def create_logs(self, logs: Iterable[Union[LogCreate, dict]]) -> None:\n\"\"\"\n        Create logs for a flow or task run\n\n        Args:\n            logs: An iterable of `LogCreate` objects or already json-compatible dicts\n        \"\"\"\n        serialized_logs = [\n            log.dict(json_compatible=True) if isinstance(log, LogCreate) else log\n            for log in logs\n        ]\n        await self._client.post(\"/logs/\", json=serialized_logs)\n\n    async def create_flow_run_notification_policy(\n        self,\n        block_document_id: UUID,\n     
   is_active: bool = True,\n        tags: List[str] = None,\n        state_names: List[str] = None,\n        message_template: Optional[str] = None,\n    ) -> UUID:\n\"\"\"\n        Create a notification policy for flow runs\n\n        Args:\n            block_document_id: The block document UUID\n            is_active: Whether the notification policy is active\n            tags: List of flow tags\n            state_names: List of state names\n            message_template: Notification message template\n        \"\"\"\n        if tags is None:\n            tags = []\n        if state_names is None:\n            state_names = []\n\n        policy = FlowRunNotificationPolicyCreate(\n            block_document_id=block_document_id,\n            is_active=is_active,\n            tags=tags,\n            state_names=state_names,\n            message_template=message_template,\n        )\n        response = await self._client.post(\n            \"/flow_run_notification_policies/\",\n            json=policy.dict(json_compatible=True),\n        )\n\n        policy_id = response.json().get(\"id\")\n        if not policy_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        return UUID(policy_id)\n\n    async def read_flow_run_notification_policies(\n        self,\n        flow_run_notification_policy_filter: FlowRunNotificationPolicyFilter,\n        limit: Optional[int] = None,\n        offset: int = 0,\n    ) -> List[FlowRunNotificationPolicy]:\n\"\"\"\n        Query the Prefect API for flow run notification policies. Only policies matching all criteria will\n        be returned.\n\n        Args:\n            flow_run_notification_policy_filter: filter criteria for notification policies\n            limit: a limit for the notification policies query\n            offset: an offset for the notification policies query\n\n        Returns:\n            a list of FlowRunNotificationPolicy model representations\n                of the notification policies\n        \"\"\"\n        body = {\n            \"flow_run_notification_policy_filter\": (\n                flow_run_notification_policy_filter.dict(json_compatible=True)\n                if flow_run_notification_policy_filter\n                else None\n            ),\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n        response = await self._client.post(\n            \"/flow_run_notification_policies/filter\", json=body\n        )\n        return pydantic.parse_obj_as(List[FlowRunNotificationPolicy], response.json())\n\n    async def read_logs(\n        self,\n        log_filter: LogFilter = None,\n        limit: int = None,\n        offset: int = None,\n        sort: LogSort = LogSort.TIMESTAMP_ASC,\n    ) -> None:\n\"\"\"\n        Read flow and task run logs.\n        \"\"\"\n        body = {\n            \"logs\": log_filter.dict(json_compatible=True) if log_filter else None,\n            \"limit\": limit,\n            \"offset\": offset,\n            \"sort\": sort,\n        }\n\n        response = await self._client.post(\"/logs/filter\", json=body)\n        return pydantic.parse_obj_as(List[Log], response.json())\n\n    async def resolve_datadoc(self, datadoc: DataDocument) -> Any:\n\"\"\"\n        Recursively decode possibly nested data documents.\n\n        \"server\" encoded documents will be retrieved from the server.\n\n        Args:\n            datadoc: The data document to resolve\n\n        Returns:\n            a decoded object, the innermost data\n        \"\"\"\n       
 if not isinstance(datadoc, DataDocument):\n            raise TypeError(\n                f\"`resolve_datadoc` received invalid type {type(datadoc).__name__}\"\n            )\n\n        async def resolve_inner(data):\n            if isinstance(data, bytes):\n                try:\n                    data = DataDocument.parse_raw(data)\n                except pydantic.ValidationError:\n                    return data\n\n            if isinstance(data, DataDocument):\n                return await resolve_inner(data.decode())\n\n            return data\n\n        return await resolve_inner(datadoc)\n\n    async def send_worker_heartbeat(self, work_pool_name: str, worker_name: str):\n\"\"\"\n        Sends a worker heartbeat for a given work pool.\n\n        Args:\n            work_pool_name: The name of the work pool to heartbeat against.\n            worker_name: The name of the worker sending the heartbeat.\n        \"\"\"\n        await self._client.post(\n            f\"/work_pools/{work_pool_name}/workers/heartbeat\",\n            json={\"name\": worker_name},\n        )\n\n    async def read_workers_for_work_pool(\n        self,\n        work_pool_name: str,\n        worker_filter: Optional[WorkerFilter] = None,\n        offset: Optional[int] = None,\n        limit: Optional[int] = None,\n    ) -> List[Worker]:\n\"\"\"\n        Reads workers for a given work pool.\n\n        Args:\n            work_pool_name: The name of the work pool for which to get\n                member workers.\n            worker_filter: Criteria by which to filter workers.\n            limit: Limit for the worker query.\n            offset: Limit for the worker query.\n        \"\"\"\n        response = await self._client.post(\n            f\"/work_pools/{work_pool_name}/workers/filter\",\n            json={\n                \"worker_filter\": (\n                    worker_filter.dict(json_compatible=True, exclude_unset=True)\n                    if worker_filter\n                    else None\n                ),\n                \"offset\": offset,\n                \"limit\": limit,\n            },\n        )\n\n        return pydantic.parse_obj_as(List[Worker], response.json())\n\n    async def read_work_pool(self, work_pool_name: str) -> WorkPool:\n\"\"\"\n        Reads information for a given work pool\n\n        Args:\n            work_pool_name: The name of the work pool to for which to get\n                information.\n\n        Returns:\n            Information about the requested work pool.\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/work_pools/{work_pool_name}\")\n            return pydantic.parse_obj_as(WorkPool, response.json())\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_work_pools(\n        self,\n        limit: Optional[int] = None,\n        offset: int = 0,\n        work_pool_filter: Optional[WorkPoolFilter] = None,\n    ) -> List[WorkPool]:\n\"\"\"\n        Reads work pools.\n\n        Args:\n            limit: Limit for the work pool query.\n            offset: Offset for the work pool query.\n            work_pool_filter: Criteria by which to filter work pools.\n\n        Returns:\n            A list of work pools.\n        \"\"\"\n\n        body = {\n            \"limit\": limit,\n            \"offset\": offset,\n            \"work_pools\": (\n       
         work_pool_filter.dict(json_compatible=True)\n                if work_pool_filter\n                else None\n            ),\n        }\n        response = await self._client.post(\"/work_pools/filter\", json=body)\n        return pydantic.parse_obj_as(List[WorkPool], response.json())\n\n    async def create_work_pool(\n        self,\n        work_pool: WorkPoolCreate,\n    ) -> WorkPool:\n\"\"\"\n        Creates a work pool with the provided configuration.\n\n        Args:\n            work_pool: Desired configuration for the new work pool.\n\n        Returns:\n            Information about the newly created work pool.\n        \"\"\"\n        try:\n            response = await self._client.post(\n                \"/work_pools/\",\n                json=work_pool.dict(json_compatible=True, exclude_unset=True),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            else:\n                raise\n\n        return pydantic.parse_obj_as(WorkPool, response.json())\n\n    async def update_work_pool(\n        self,\n        work_pool_name: str,\n        work_pool: WorkPoolUpdate,\n    ):\n        await self._client.patch(\n            f\"/work_pools/{work_pool_name}\",\n            json=work_pool.dict(json_compatible=True, exclude_unset=True),\n        )\n\n    async def delete_work_pool(\n        self,\n        work_pool_name: str,\n    ):\n\"\"\"\n        Deletes a work pool.\n\n        Args:\n            work_pool_name: Name of the work pool to delete.\n        \"\"\"\n        try:\n            await self._client.delete(f\"/work_pools/{work_pool_name}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_work_queues(\n        self,\n        work_pool_name: Optional[str] = None,\n        work_queue_filter: Optional[WorkQueueFilter] = None,\n        limit: Optional[int] = None,\n        offset: Optional[int] = None,\n    ) -> List[WorkQueue]:\n\"\"\"\n        Retrieves queues for a work pool.\n\n        Args:\n            work_pool_name: Name of the work pool for which to get queues.\n            work_queue_filter: Criteria by which to filter queues.\n            limit: Limit for the queue query.\n            offset: Limit for the queue query.\n\n        Returns:\n            List of queues for the specified work pool.\n        \"\"\"\n        json = {\n            \"work_queues\": (\n                work_queue_filter.dict(json_compatible=True, exclude_unset=True)\n                if work_queue_filter\n                else None\n            ),\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n\n        if work_pool_name:\n            try:\n                response = await self._client.post(\n                    f\"/work_pools/{work_pool_name}/queues/filter\",\n                    json=json,\n                )\n            except httpx.HTTPStatusError as e:\n                if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                    raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n                else:\n                    raise\n        else:\n            response = await self._client.post(\"/work_queues/filter\", json=json)\n\n        return 
pydantic.parse_obj_as(List[WorkQueue], response.json())\n\n    async def get_scheduled_flow_runs_for_work_pool(\n        self,\n        work_pool_name: str,\n        work_queue_names: Optional[List[str]] = None,\n        scheduled_before: Optional[datetime.datetime] = None,\n    ) -> List[WorkerFlowRunResponse]:\n\"\"\"\n        Retrieves scheduled flow runs for the provided set of work pool queues.\n\n        Args:\n            work_pool_name: The name of the work pool that the work pool\n                queues are associated with.\n            work_queue_names: The names of the work pool queues from which\n                to get scheduled flow runs.\n            scheduled_before: Datetime used to filter returned flow runs. Flow runs\n                scheduled for after the given datetime string will not be returned.\n\n        Returns:\n            A list of worker flow run responses containing information about the\n            retrieved flow runs.\n        \"\"\"\n        body: Dict[str, Any] = {}\n        if work_queue_names is not None:\n            body[\"work_queue_names\"] = list(work_queue_names)\n        if scheduled_before:\n            body[\"scheduled_before\"] = str(scheduled_before)\n\n        response = await self._client.post(\n            f\"/work_pools/{work_pool_name}/get_scheduled_flow_runs\",\n            json=body,\n        )\n\n        return pydantic.parse_obj_as(List[WorkerFlowRunResponse], response.json())\n\n    async def create_artifact(\n        self,\n        artifact: ArtifactCreate,\n    ) -> Artifact:\n\"\"\"\n        Creates an artifact with the provided configuration.\n\n        Args:\n            artifact: Desired configuration for the new artifact.\n        Returns:\n            Information about the newly created artifact.\n        \"\"\"\n\n        response = await self._client.post(\n            \"/artifacts/\",\n            json=artifact.dict(json_compatible=True, exclude_unset=True),\n        )\n\n        return pydantic.parse_obj_as(Artifact, response.json())\n\n    async def read_artifacts(\n        self,\n        *,\n        artifact_filter: ArtifactFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        sort: ArtifactSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[Artifact]:\n\"\"\"\n        Query the Prefect API for artifacts. 
Only artifacts matching all criteria will\n        be returned.\n        Args:\n            artifact_filter: filter criteria for artifacts\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            sort: sort criteria for the artifacts\n            limit: limit for the artifact query\n            offset: offset for the artifact query\n        Returns:\n            a list of Artifact model representations of the artifacts\n        \"\"\"\n        body = {\n            \"artifacts\": (\n                artifact_filter.dict(json_compatible=True) if artifact_filter else None\n            ),\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n        response = await self._client.post(\"/artifacts/filter\", json=body)\n        return pydantic.parse_obj_as(List[Artifact], response.json())\n\n    async def read_latest_artifacts(\n        self,\n        *,\n        artifact_filter: ArtifactCollectionFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        sort: ArtifactCollectionSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[ArtifactCollection]:\n\"\"\"\n        Query the Prefect API for artifacts. Only artifacts matching all criteria will\n        be returned.\n        Args:\n            artifact_filter: filter criteria for artifacts\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            sort: sort criteria for the artifacts\n            limit: limit for the artifact query\n            offset: offset for the artifact query\n        Returns:\n            a list of Artifact model representations of the artifacts\n        \"\"\"\n        body = {\n            \"artifacts\": (\n                artifact_filter.dict(json_compatible=True) if artifact_filter else None\n            ),\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n        response = await self._client.post(\"/artifacts/latest/filter\", json=body)\n        return pydantic.parse_obj_as(List[ArtifactCollection], response.json())\n\n    async def delete_artifact(self, artifact_id: UUID) -> None:\n\"\"\"\n        Deletes an artifact with the provided id.\n\n        Args:\n            artifact_id: The id of the artifact to delete.\n        \"\"\"\n        try:\n            await self._client.delete(f\"/artifacts/{artifact_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_variable_by_name(self, name: str) -> Optional[Variable]:\n\"\"\"Reads a variable by name. 
Returns None if no variable is found.\"\"\"\n        try:\n            response = await self._client.get(f\"/variables/name/{name}\")\n            return pydantic.parse_obj_as(Variable, response.json())\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                return None\n            else:\n                raise\n\n    async def delete_variable_by_name(self, name: str):\n\"\"\"Deletes a variable by name.\"\"\"\n        try:\n            await self._client.delete(f\"/variables/name/{name}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_variables(self, limit: int = None) -> List[Variable]:\n\"\"\"Reads all variables.\"\"\"\n        response = await self._client.post(\"/variables/filter\", json={\"limit\": limit})\n        return pydantic.parse_obj_as(List[Variable], response.json())\n\n    async def read_worker_metadata(self) -> Dict[str, Any]:\n\"\"\"Reads worker metadata stored in Prefect collection registry.\"\"\"\n        response = await self._client.get(\"collections/views/aggregate-worker-metadata\")\n        response.raise_for_status()\n        return response.json()\n\n    async def create_automation(self, automation: Automation) -> UUID:\n\"\"\"Creates an automation in Prefect Cloud.\"\"\"\n        if self.server_type != ServerType.CLOUD:\n            raise RuntimeError(\"Automations are only supported for Prefect Cloud.\")\n\n        response = await self._client.post(\n            \"/automations/\",\n            json=automation.dict(json_compatible=True),\n        )\n\n        return UUID(response.json()[\"id\"])\n\n    async def read_resource_related_automations(\n        self, resource_id: str\n    ) -> List[ExistingAutomation]:\n        if self.server_type != ServerType.CLOUD:\n            raise RuntimeError(\"Automations are only supported for Prefect Cloud.\")\n\n        response = await self._client.get(f\"/automations/related-to/{resource_id}\")\n        response.raise_for_status()\n        return pydantic.parse_obj_as(List[ExistingAutomation], response.json())\n\n    async def delete_resource_owned_automations(self, resource_id: str):\n        if self.server_type != ServerType.CLOUD:\n            raise RuntimeError(\"Automations are only supported for Prefect Cloud.\")\n\n        await self._client.delete(f\"/automations/owned-by/{resource_id}\")\n\n    async def increment_concurrency_slots(\n        self, names: List[str], slots: int, mode: str\n    ) -> httpx.Response:\n        return await self._client.post(\n            \"/v2/concurrency_limits/increment\",\n            json={\"names\": names, \"slots\": slots, \"mode\": mode},\n        )\n\n    async def release_concurrency_slots(\n        self, names: List[str], slots: int\n    ) -> httpx.Response:\n        return await self._client.post(\n            \"/v2/concurrency_limits/decrement\",\n            json={\"names\": names, \"slots\": slots},\n        )\n\n    async def __aenter__(self):\n\"\"\"\n        Start the client.\n\n        If the client is already started, this will raise an exception.\n\n        If the client is already closed, this will raise an exception. 
Use a new client\n        instance instead.\n        \"\"\"\n        if self._closed:\n            # httpx.AsyncClient does not allow reuse so we will not either.\n            raise RuntimeError(\n                \"The client cannot be started again after closing. \"\n                \"Retrieve a new client with `get_client()` instead.\"\n            )\n\n        if self._started:\n            # httpx.AsyncClient does not allow reentrancy so we will not either.\n            raise RuntimeError(\"The client cannot be started more than once.\")\n\n        self._loop = asyncio.get_running_loop()\n        await self._exit_stack.__aenter__()\n\n        # Enter a lifespan context if using an ephemeral application.\n        # See https://github.com/encode/httpx/issues/350\n        if self._ephemeral_app and self.manage_lifespan:\n            self._ephemeral_lifespan = await self._exit_stack.enter_async_context(\n                app_lifespan_context(self._ephemeral_app)\n            )\n\n        if self._ephemeral_app:\n            self.logger.debug(\n                \"Using ephemeral application with database at \"\n                f\"{PREFECT_API_DATABASE_CONNECTION_URL.value()}\"\n            )\n        else:\n            self.logger.debug(f\"Connecting to API at {self.api_url}\")\n\n        # Enter the httpx client's context\n        await self._exit_stack.enter_async_context(self._client)\n\n        self._started = True\n\n        return self\n\n    async def __aexit__(self, *exc_info):\n\"\"\"\n        Shutdown the client.\n        \"\"\"\n        self._closed = True\n        return await self._exit_stack.__aexit__(*exc_info)\n\n    def __enter__(self):\n        raise RuntimeError(\n            \"The `PrefectClient` must be entered with an async context. Use 'async \"\n            \"with PrefectClient(...)' not 'with PrefectClient(...)'\"\n        )\n\n    def __exit__(self, *_):\n        assert False, \"This should never be called but must be defined for __enter__\"\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.api_url","title":"api_url: httpx.URL property","text":"

Get the base URL for the API.

","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.api_healthcheck","title":"api_healthcheck async","text":"

Attempts to connect to the API and returns the encountered exception if not successful.

If successful, returns None.

Source code in src/prefect/client/orchestration.py
async def api_healthcheck(self) -> Optional[Exception]:\n\"\"\"\n    Attempts to connect to the API and returns the encountered exception if not\n    successful.\n\n    If successful, returns `None`.\n    \"\"\"\n    try:\n        await self._client.get(\"/health\")\n        return None\n    except Exception as exc:\n        return exc\n
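Example, a minimal connectivity check built on this method (a sketch; it assumes PREFECT_API_URL and any credentials are already configured in the environment running it):

import asyncio

from prefect.client.orchestration import get_client

async def check_api() -> None:
    async with get_client() as client:
        # `api_healthcheck` returns the exception it hit, or None on success
        exc = await client.api_healthcheck()
        print("API healthy" if exc is None else f"API unreachable: {exc!r}")

asyncio.run(check_api())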
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.hello","title":"hello async","text":"

Send a GET request to /hello for testing purposes.

Source code in src/prefect/client/orchestration.py
async def hello(self) -> httpx.Response:\n\"\"\"\n    Send a GET request to /hello for testing purposes.\n    \"\"\"\n    return await self._client.get(\"/hello\")\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow","title":"create_flow async","text":"

Create a flow in the Prefect API.

Parameters:

    flow (FlowObject, required): a Flow object

Raises:

    httpx.RequestError: if a flow was not created for any reason

Returns:

    UUID: the ID of the flow in the backend

Source code in src/prefect/client/orchestration.py
async def create_flow(self, flow: \"FlowObject\") -> UUID:\n\"\"\"\n    Create a flow in the Prefect API.\n\n    Args:\n        flow: a [Flow][prefect.flows.Flow] object\n\n    Raises:\n        httpx.RequestError: if a flow was not created for any reason\n\n    Returns:\n        the ID of the flow in the backend\n    \"\"\"\n    return await self.create_flow_from_name(flow.name)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_from_name","title":"create_flow_from_name async","text":"

Create a flow in the Prefect API.

Parameters:

    flow_name (str, required): the name of the new flow

Raises:

    httpx.RequestError: if a flow was not created for any reason

Returns:

    UUID: the ID of the flow in the backend

Source code in src/prefect/client/orchestration.py
async def create_flow_from_name(self, flow_name: str) -> UUID:\n\"\"\"\n    Create a flow in the Prefect API.\n\n    Args:\n        flow_name: the name of the new flow\n\n    Raises:\n        httpx.RequestError: if a flow was not created for any reason\n\n    Returns:\n        the ID of the flow in the backend\n    \"\"\"\n    flow_data = FlowCreate(name=flow_name)\n    response = await self._client.post(\n        \"/flows/\", json=flow_data.dict(json_compatible=True)\n    )\n\n    flow_id = response.json().get(\"id\")\n    if not flow_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    # Return the id of the created flow\n    return UUID(flow_id)\n
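Example, a hedged usage sketch (the flow name below is a placeholder); note that `create_flow(flow)` above simply delegates to this method using `flow.name`:

import asyncio

from prefect.client.orchestration import get_client

async def register_flow_name() -> None:
    async with get_client() as client:
        flow_id = await client.create_flow_from_name("my-etl-flow")  # placeholder name
        print(f"Flow id: {flow_id}")

asyncio.run(register_flow_name())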
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow","title":"read_flow async","text":"

Query the Prefect API for a flow by id.

Parameters:

    flow_id (UUID, required): the flow ID of interest

Returns:

    Flow: a Flow model representation of the flow

Source code in src/prefect/client/orchestration.py
async def read_flow(self, flow_id: UUID) -> Flow:\n\"\"\"\n    Query the Prefect API for a flow by id.\n\n    Args:\n        flow_id: the flow ID of interest\n\n    Returns:\n        a [Flow model][prefect.client.schemas.objects.Flow] representation of the flow\n    \"\"\"\n    response = await self._client.get(f\"/flows/{flow_id}\")\n    return Flow.parse_obj(response.json())\n
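Example, a minimal sketch of reading a flow back by id (the UUID shown is a placeholder for an id returned by `create_flow_from_name` or visible in the UI):

import asyncio
from uuid import UUID

from prefect.client.orchestration import get_client

async def show_flow(flow_id: UUID) -> None:
    async with get_client() as client:
        flow = await client.read_flow(flow_id)
        print(flow.id, flow.name)

asyncio.run(show_flow(UUID("00000000-0000-0000-0000-000000000000")))  # placeholder id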
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flows","title":"read_flows async","text":"

Query the Prefect API for flows. Only flows matching all criteria will be returned.

Parameters:

    flow_filter (FlowFilter, default None): filter criteria for flows
    flow_run_filter (FlowRunFilter, default None): filter criteria for flow runs
    task_run_filter (TaskRunFilter, default None): filter criteria for task runs
    deployment_filter (DeploymentFilter, default None): filter criteria for deployments
    work_pool_filter (WorkPoolFilter, default None): filter criteria for work pools
    work_queue_filter (WorkQueueFilter, default None): filter criteria for work pool queues
    sort (FlowSort, default None): sort criteria for the flows
    limit (int, default None): limit for the flow query
    offset (int, default 0): offset for the flow query

Returns:

    List[Flow]: a list of Flow model representations of the flows

Source code in src/prefect/client/orchestration.py
async def read_flows(\n    self,\n    *,\n    flow_filter: FlowFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    deployment_filter: DeploymentFilter = None,\n    work_pool_filter: WorkPoolFilter = None,\n    work_queue_filter: WorkQueueFilter = None,\n    sort: FlowSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[Flow]:\n\"\"\"\n    Query the Prefect API for flows. Only flows matching all criteria will\n    be returned.\n\n    Args:\n        flow_filter: filter criteria for flows\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        deployment_filter: filter criteria for deployments\n        work_pool_filter: filter criteria for work pools\n        work_queue_filter: filter criteria for work pool queues\n        sort: sort criteria for the flows\n        limit: limit for the flow query\n        offset: offset for the flow query\n\n    Returns:\n        a list of Flow model representations of the flows\n    \"\"\"\n    body = {\n        \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n            if flow_run_filter\n            else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"deployments\": (\n            deployment_filter.dict(json_compatible=True)\n            if deployment_filter\n            else None\n        ),\n        \"work_pools\": (\n            work_pool_filter.dict(json_compatible=True)\n            if work_pool_filter\n            else None\n        ),\n        \"work_queues\": (\n            work_queue_filter.dict(json_compatible=True)\n            if work_queue_filter\n            else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n\n    response = await self._client.post(\"/flows/filter\", json=body)\n    return pydantic.parse_obj_as(List[Flow], response.json())\n
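Example, a sketch of a paged query; all filters are optional, so this only passes `limit` and `offset` and prints whatever comes back (assumes at least one flow has been registered):

import asyncio

from prefect.client.orchestration import get_client

async def list_flows() -> None:
    async with get_client() as client:
        flows = await client.read_flows(limit=10, offset=0)
        for f in flows:
            print(f.name)

asyncio.run(list_flows())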
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_by_name","title":"read_flow_by_name async","text":"

Query the Prefect API for a flow by name.

Parameters:

    flow_name (str, required): the name of a flow

Returns:

    Flow: a fully hydrated Flow model

Source code in src/prefect/client/orchestration.py
async def read_flow_by_name(\n    self,\n    flow_name: str,\n) -> Flow:\n\"\"\"\n    Query the Prefect API for a flow by name.\n\n    Args:\n        flow_name: the name of a flow\n\n    Returns:\n        a fully hydrated Flow model\n    \"\"\"\n    response = await self._client.get(f\"/flows/name/{flow_name}\")\n    return Flow.parse_obj(response.json())\n
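Example, a short sketch (the flow name is a placeholder; the request fails if no flow with that name exists):

import asyncio

from prefect.client.orchestration import get_client

async def show_flow_by_name() -> None:
    async with get_client() as client:
        flow = await client.read_flow_by_name("my-etl-flow")  # placeholder name
        print(flow.id)

asyncio.run(show_flow_by_name())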
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run_from_deployment","title":"create_flow_run_from_deployment async","text":"

Create a flow run for a deployment.

Parameters:

    deployment_id (UUID, required): The deployment ID to create the flow run from
    parameters (Dict[str, Any], default None): Parameter overrides for this flow run. Merged with the deployment defaults
    context (dict, default None): Optional run context data
    state (prefect.states.State, default None): The initial state for the run. If not provided, defaults to Scheduled for now. Should always be a Scheduled type.
    name (str, default None): An optional name for the flow run. If not provided, the server will generate a name.
    tags (Iterable[str], default None): An optional iterable of tags to apply to the flow run; these tags are merged with the deployment's tags.
    idempotency_key (str, default None): Optional idempotency key for creation of the flow run. If the key matches the key of an existing flow run, the existing run will be returned instead of creating a new one.
    parent_task_run_id (UUID, default None): if a subflow run is being created, the placeholder task run identifier in the parent flow
    work_queue_name (str, default None): An optional work queue name to add this run to. If not provided, will default to the deployment's set work queue. If one is provided that does not exist, a new work queue will be created within the deployment's work pool.

Raises:

    httpx.RequestError: if the Prefect API does not successfully create a run for any reason

Returns:

    FlowRun: The flow run model

Source code in src/prefect/client/orchestration.py
async def create_flow_run_from_deployment(\n    self,\n    deployment_id: UUID,\n    *,\n    parameters: Dict[str, Any] = None,\n    context: dict = None,\n    state: prefect.states.State = None,\n    name: str = None,\n    tags: Iterable[str] = None,\n    idempotency_key: str = None,\n    parent_task_run_id: UUID = None,\n    work_queue_name: str = None,\n) -> FlowRun:\n\"\"\"\n    Create a flow run for a deployment.\n\n    Args:\n        deployment_id: The deployment ID to create the flow run from\n        parameters: Parameter overrides for this flow run. Merged with the\n            deployment defaults\n        context: Optional run context data\n        state: The initial state for the run. If not provided, defaults to\n            `Scheduled` for now. Should always be a `Scheduled` type.\n        name: An optional name for the flow run. If not provided, the server will\n            generate a name.\n        tags: An optional iterable of tags to apply to the flow run; these tags\n            are merged with the deployment's tags.\n        idempotency_key: Optional idempotency key for creation of the flow run.\n            If the key matches the key of an existing flow run, the existing run will\n            be returned instead of creating a new one.\n        parent_task_run_id: if a subflow run is being created, the placeholder task\n            run identifier in the parent flow\n        work_queue_name: An optional work queue name to add this run to. If not provided,\n            will default to the deployment's set work queue.  If one is provided that does not\n            exist, a new work queue will be created within the deployment's work pool.\n\n    Raises:\n        httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n    Returns:\n        The flow run model\n    \"\"\"\n    parameters = parameters or {}\n    context = context or {}\n    state = state or prefect.states.Scheduled()\n    tags = tags or []\n\n    flow_run_create = DeploymentFlowRunCreate(\n        parameters=parameters,\n        context=context,\n        state=state.to_state_create(),\n        tags=tags,\n        name=name,\n        idempotency_key=idempotency_key,\n        parent_task_run_id=parent_task_run_id,\n    )\n\n    # done separately to avoid including this field in payloads sent to older API versions\n    if work_queue_name:\n        flow_run_create.work_queue_name = work_queue_name\n\n    response = await self._client.post(\n        f\"/deployments/{deployment_id}/create_flow_run\",\n        json=flow_run_create.dict(json_compatible=True, exclude_unset=True),\n    )\n    return FlowRun.parse_obj(response.json())\n
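Example, a hedged sketch of triggering a run from an existing deployment; the deployment id, parameter name, and tag are placeholders for values from your own workspace:

import asyncio
from uuid import UUID

from prefect.client.orchestration import get_client

async def trigger_deployment_run(deployment_id: UUID) -> None:
    async with get_client() as client:
        flow_run = await client.create_flow_run_from_deployment(
            deployment_id,
            parameters={"repo_name": "prefect"},  # placeholder parameter override
            tags=["ad-hoc"],
        )
        print(flow_run.id, flow_run.name)

asyncio.run(trigger_deployment_run(UUID("00000000-0000-0000-0000-000000000000")))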
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run","title":"create_flow_run async","text":"

Create a flow run for a flow.

Parameters:

    flow (FlowObject, required): The flow model to create the flow run for
    name (str, default None): An optional name for the flow run
    parameters (Dict[str, Any], default None): Parameter overrides for this flow run.
    context (dict, default None): Optional run context data
    tags (Iterable[str], default None): a list of tags to apply to this flow run
    parent_task_run_id (UUID, default None): if a subflow run is being created, the placeholder task run identifier in the parent flow
    state (prefect.states.State, default None): The initial state for the run. If not provided, defaults to Scheduled for now. Should always be a Scheduled type.

Raises:

    httpx.RequestError: if the Prefect API does not successfully create a run for any reason

Returns:

    FlowRun: The flow run model

Source code in src/prefect/client/orchestration.py
async def create_flow_run(\n    self,\n    flow: \"FlowObject\",\n    name: str = None,\n    parameters: Dict[str, Any] = None,\n    context: dict = None,\n    tags: Iterable[str] = None,\n    parent_task_run_id: UUID = None,\n    state: \"prefect.states.State\" = None,\n) -> FlowRun:\n\"\"\"\n    Create a flow run for a flow.\n\n    Args:\n        flow: The flow model to create the flow run for\n        name: An optional name for the flow run\n        parameters: Parameter overrides for this flow run.\n        context: Optional run context data\n        tags: a list of tags to apply to this flow run\n        parent_task_run_id: if a subflow run is being created, the placeholder task\n            run identifier in the parent flow\n        state: The initial state for the run. If not provided, defaults to\n            `Scheduled` for now. Should always be a `Scheduled` type.\n\n    Raises:\n        httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n    Returns:\n        The flow run model\n    \"\"\"\n    parameters = parameters or {}\n    context = context or {}\n\n    if state is None:\n        state = prefect.states.Pending()\n\n    # Retrieve the flow id\n    flow_id = await self.create_flow(flow)\n\n    flow_run_create = FlowRunCreate(\n        flow_id=flow_id,\n        flow_version=flow.version,\n        name=name,\n        parameters=parameters,\n        context=context,\n        tags=list(tags or []),\n        parent_task_run_id=parent_task_run_id,\n        state=state.to_state_create(),\n        empirical_policy=FlowRunPolicy(\n            retries=flow.retries,\n            retry_delay=flow.retry_delay_seconds,\n        ),\n    )\n\n    flow_run_create_json = flow_run_create.dict(json_compatible=True)\n    response = await self._client.post(\"/flow_runs/\", json=flow_run_create_json)\n    flow_run = FlowRun.parse_obj(response.json())\n\n    # Restore the parameters to the local objects to retain expectations about\n    # Python objects\n    flow_run.parameters = parameters\n\n    return flow_run\n
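Example, a sketch of creating a run record for a flow object directly, outside of a deployment; the flow below is a throwaway example definition, and the call registers the run but does not execute it:

import asyncio

from prefect import flow
from prefect.client.orchestration import get_client

@flow
def example_flow():
    return 42

async def create_run() -> None:
    async with get_client() as client:
        # Creates the flow (if needed) and a run record; it does not run the flow.
        flow_run = await client.create_flow_run(example_flow, name="manual-run")
        print(flow_run.id, flow_run.name)

asyncio.run(create_run())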
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_flow_run","title":"update_flow_run async","text":"

Update a flow run's details.

Parameters:

    flow_run_id (UUID, required): The identifier for the flow run to update.
    flow_version (Optional[str], default None): A new version string for the flow run.
    parameters (Optional[dict], default None): A dictionary of parameter values for the flow run. This will not be merged with any existing parameters.
    name (Optional[str], default None): A new name for the flow run.
    empirical_policy (Optional[FlowRunPolicy], default None): A new flow run orchestration policy. This will not be merged with any existing policy.
    tags (Optional[Iterable[str]], default None): An iterable of new tags for the flow run. These will not be merged with any existing tags.
    infrastructure_pid (Optional[str], default None): The id of flow run as returned by an infrastructure block.

Returns:

    httpx.Response: an httpx.Response object from the PATCH request

Source code in src/prefect/client/orchestration.py
async def update_flow_run(\n    self,\n    flow_run_id: UUID,\n    flow_version: Optional[str] = None,\n    parameters: Optional[dict] = None,\n    name: Optional[str] = None,\n    tags: Optional[Iterable[str]] = None,\n    empirical_policy: Optional[FlowRunPolicy] = None,\n    infrastructure_pid: Optional[str] = None,\n) -> httpx.Response:\n\"\"\"\n    Update a flow run's details.\n\n    Args:\n        flow_run_id: The identifier for the flow run to update.\n        flow_version: A new version string for the flow run.\n        parameters: A dictionary of parameter values for the flow run. This will not\n            be merged with any existing parameters.\n        name: A new name for the flow run.\n        empirical_policy: A new flow run orchestration policy. This will not be\n            merged with any existing policy.\n        tags: An iterable of new tags for the flow run. These will not be merged with\n            any existing tags.\n        infrastructure_pid: The id of flow run as returned by an\n            infrastructure block.\n\n    Returns:\n        an `httpx.Response` object from the PATCH request\n    \"\"\"\n    params = {}\n    if flow_version is not None:\n        params[\"flow_version\"] = flow_version\n    if parameters is not None:\n        params[\"parameters\"] = parameters\n    if name is not None:\n        params[\"name\"] = name\n    if tags is not None:\n        params[\"tags\"] = tags\n    if empirical_policy is not None:\n        params[\"empirical_policy\"] = empirical_policy\n    if infrastructure_pid:\n        params[\"infrastructure_pid\"] = infrastructure_pid\n\n    flow_run_data = FlowRunUpdate(**params)\n\n    return await self._client.patch(\n        f\"/flow_runs/{flow_run_id}\",\n        json=flow_run_data.dict(json_compatible=True, exclude_unset=True),\n    )\n
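Example, a small sketch of patching a run's metadata; the id is a placeholder, and note from the docstring that `tags` and `parameters` replace rather than merge:

import asyncio
from uuid import UUID

from prefect.client.orchestration import get_client

async def rename_run(flow_run_id: UUID) -> None:
    async with get_client() as client:
        # Only the fields passed here are updated; tags replace any existing tags.
        response = await client.update_flow_run(
            flow_run_id, name="backfill-run", tags=["backfill"]
        )
        print(response.status_code)

asyncio.run(rename_run(UUID("00000000-0000-0000-0000-000000000000")))  # placeholder id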
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_flow_run","title":"delete_flow_run async","text":"

Delete a flow run by UUID.

Parameters:

    flow_run_id (UUID, required): The flow run UUID of interest.

Raises:

    prefect.exceptions.ObjectNotFound: If request returns 404
    httpx.RequestError: If the request fails

Source code in src/prefect/client/orchestration.py
async def delete_flow_run(\n    self,\n    flow_run_id: UUID,\n) -> None:\n\"\"\"\n    Delete a flow run by UUID.\n\n    Args:\n        flow_run_id: The flow run UUID of interest.\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If the request fails\n    \"\"\"\n    try:\n        await self._client.delete(f\"/flow_runs/{flow_run_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
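Example, a sketch showing the documented `ObjectNotFound` handling (placeholder id):

import asyncio
from uuid import UUID

from prefect.client.orchestration import get_client
from prefect.exceptions import ObjectNotFound

async def delete_run(flow_run_id: UUID) -> None:
    async with get_client() as client:
        try:
            await client.delete_flow_run(flow_run_id)
            print("deleted")
        except ObjectNotFound:
            print("no such flow run")

asyncio.run(delete_run(UUID("00000000-0000-0000-0000-000000000000")))  # placeholder id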
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_concurrency_limit","title":"create_concurrency_limit async","text":"

Create a tag concurrency limit in the Prefect API. These limits govern concurrently running tasks.

Parameters:

    tag (str, required): a tag the concurrency limit is applied to
    concurrency_limit (int, required): the maximum number of concurrent task runs for a given tag

Raises:

    httpx.RequestError: if the concurrency limit was not created for any reason

Returns:

    UUID: the ID of the concurrency limit in the backend

Source code in src/prefect/client/orchestration.py
async def create_concurrency_limit(\n    self,\n    tag: str,\n    concurrency_limit: int,\n) -> UUID:\n\"\"\"\n    Create a tag concurrency limit in the Prefect API. These limits govern concurrently\n    running tasks.\n\n    Args:\n        tag: a tag the concurrency limit is applied to\n        concurrency_limit: the maximum number of concurrent task runs for a given tag\n\n    Raises:\n        httpx.RequestError: if the concurrency limit was not created for any reason\n\n    Returns:\n        the ID of the concurrency limit in the backend\n    \"\"\"\n\n    concurrency_limit_create = ConcurrencyLimitCreate(\n        tag=tag,\n        concurrency_limit=concurrency_limit,\n    )\n    response = await self._client.post(\n        \"/concurrency_limits/\",\n        json=concurrency_limit_create.dict(json_compatible=True),\n    )\n\n    concurrency_limit_id = response.json().get(\"id\")\n\n    if not concurrency_limit_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    return UUID(concurrency_limit_id)\n
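Example, a minimal sketch; the tag name and limit are examples, and the same tag would then be attached to tasks via `@task(tags=[...])`:

import asyncio

from prefect.client.orchestration import get_client

async def limit_database_tasks() -> None:
    async with get_client() as client:
        limit_id = await client.create_concurrency_limit(
            tag="database", concurrency_limit=5  # example tag and limit
        )
        print(f"Created concurrency limit {limit_id}")

asyncio.run(limit_database_tasks())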
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_concurrency_limit_by_tag","title":"read_concurrency_limit_by_tag async","text":"

Read the concurrency limit set on a specific tag.

Parameters:

    tag (str, required): a tag the concurrency limit is applied to

Raises:

    prefect.exceptions.ObjectNotFound: If request returns 404
    httpx.RequestError: if the concurrency limit was not created for any reason

Returns:

    the concurrency limit set on a specific tag

Source code in src/prefect/client/orchestration.py
async def read_concurrency_limit_by_tag(\n    self,\n    tag: str,\n):\n\"\"\"\n    Read the concurrency limit set on a specific tag.\n\n    Args:\n        tag: a tag the concurrency limit is applied to\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: if the concurrency limit was not created for any reason\n\n    Returns:\n        the concurrency limit set on a specific tag\n    \"\"\"\n    try:\n        response = await self._client.get(\n            f\"/concurrency_limits/tag/{tag}\",\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n\n    concurrency_limit_id = response.json().get(\"id\")\n\n    if not concurrency_limit_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    concurrency_limit = ConcurrencyLimit.parse_obj(response.json())\n    return concurrency_limit\n
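Example, a sketch of reading the limit back with the documented not-found handling (example tag):

import asyncio

from prefect.client.orchestration import get_client
from prefect.exceptions import ObjectNotFound

async def show_limit(tag: str) -> None:
    async with get_client() as client:
        try:
            cl = await client.read_concurrency_limit_by_tag(tag)
            print(cl.tag, cl.concurrency_limit)
        except ObjectNotFound:
            print(f"No concurrency limit set for tag {tag!r}")

asyncio.run(show_limit("database"))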
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_concurrency_limits","title":"read_concurrency_limits async","text":"

Lists concurrency limits set on task run tags.

Parameters:

    limit (int, required): the maximum number of concurrency limits returned
    offset (int, required): the concurrency limit query offset

Returns:

    a list of concurrency limits

Source code in src/prefect/client/orchestration.py
async def read_concurrency_limits(\n    self,\n    limit: int,\n    offset: int,\n):\n\"\"\"\n    Lists concurrency limits set on task run tags.\n\n    Args:\n        limit: the maximum number of concurrency limits returned\n        offset: the concurrency limit query offset\n\n    Returns:\n        a list of concurrency limits\n    \"\"\"\n\n    body = {\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n\n    response = await self._client.post(\"/concurrency_limits/filter\", json=body)\n    return pydantic.parse_obj_as(List[ConcurrencyLimit], response.json())\n
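Example, a paging sketch over all tag-based limits:

import asyncio

from prefect.client.orchestration import get_client

async def list_limits() -> None:
    async with get_client() as client:
        limits = await client.read_concurrency_limits(limit=50, offset=0)
        for cl in limits:
            print(cl.tag, cl.concurrency_limit)

asyncio.run(list_limits())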
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.reset_concurrency_limit_by_tag","title":"reset_concurrency_limit_by_tag async","text":"

Resets the concurrency limit slots set on a specific tag.

Parameters:

    tag (str, required): a tag the concurrency limit is applied to
    slot_override (Optional[List[Union[UUID, str]]], default None): a list of task run IDs that are currently using a concurrency slot; please check that any task run IDs included in slot_override are currently running, otherwise those concurrency slots will never be released.

Raises:

    prefect.exceptions.ObjectNotFound: If request returns 404
    httpx.RequestError: If request fails

Source code in src/prefect/client/orchestration.py
async def reset_concurrency_limit_by_tag(\n    self,\n    tag: str,\n    slot_override: Optional[List[Union[UUID, str]]] = None,\n):\n\"\"\"\n    Resets the concurrency limit slots set on a specific tag.\n\n    Args:\n        tag: a tag the concurrency limit is applied to\n        slot_override: a list of task run IDs that are currently using a\n            concurrency slot, please check that any task run IDs included in\n            `slot_override` are currently running, otherwise those concurrency\n            slots will never be released.\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    \"\"\"\n    if slot_override is not None:\n        slot_override = [str(slot) for slot in slot_override]\n\n    try:\n        await self._client.post(\n            f\"/concurrency_limits/tag/{tag}/reset\",\n            json=dict(slot_override=slot_override),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
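Example, a sketch of clearing all occupied slots for a tag (pass `slot_override` only if you are certain the listed task runs are still executing):

import asyncio

from prefect.client.orchestration import get_client

async def reset_limit() -> None:
    async with get_client() as client:
        # Clears every occupied slot for the example tag "database"
        await client.reset_concurrency_limit_by_tag("database")

asyncio.run(reset_limit())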
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_concurrency_limit_by_tag","title":"delete_concurrency_limit_by_tag async","text":"

Delete the concurrency limit set on a specific tag.

Parameters:

    tag (str, required): a tag the concurrency limit is applied to

Raises:

    prefect.exceptions.ObjectNotFound: If request returns 404
    httpx.RequestError: If request fails

Source code in src/prefect/client/orchestration.py
async def delete_concurrency_limit_by_tag(\n    self,\n    tag: str,\n):\n\"\"\"\n    Delete the concurrency limit set on a specific tag.\n\n    Args:\n        tag: a tag the concurrency limit is applied to\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    \"\"\"\n    try:\n        await self._client.delete(\n            f\"/concurrency_limits/tag/{tag}\",\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
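Example, the corresponding removal (example tag; the documented 404 handling is omitted for brevity):

import asyncio

from prefect.client.orchestration import get_client

async def drop_limit() -> None:
    async with get_client() as client:
        await client.delete_concurrency_limit_by_tag("database")  # example tag

asyncio.run(drop_limit())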
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_work_queue","title":"create_work_queue async","text":"

Create a work queue.

Parameters:

    name (str, required): a unique name for the work queue
    tags (Optional[List[str]], default None): DEPRECATED: an optional list of tags to filter on; only work scheduled with these tags will be included in the queue. This option will be removed on 2023-02-23.
    description (Optional[str], default None): An optional description for the work queue.
    is_paused (Optional[bool], default None): Whether or not the work queue is paused.
    concurrency_limit (Optional[int], default None): An optional concurrency limit for the work queue.
    priority (Optional[int], default None): The queue's priority. Lower values are higher priority (1 is the highest).
    work_pool_name (Optional[str], default None): The name of the work pool to use for this queue.

Raises:

    prefect.exceptions.ObjectAlreadyExists: If request returns 409
    httpx.RequestError: If request fails

Returns:

    WorkQueue: The created work queue

Source code in src/prefect/client/orchestration.py
async def create_work_queue(\n    self,\n    name: str,\n    tags: Optional[List[str]] = None,\n    description: Optional[str] = None,\n    is_paused: Optional[bool] = None,\n    concurrency_limit: Optional[int] = None,\n    priority: Optional[int] = None,\n    work_pool_name: Optional[str] = None,\n) -> WorkQueue:\n\"\"\"\n    Create a work queue.\n\n    Args:\n        name: a unique name for the work queue\n        tags: DEPRECATED: an optional list of tags to filter on; only work scheduled with these tags\n            will be included in the queue. This option will be removed on 2023-02-23.\n        description: An optional description for the work queue.\n        is_paused: Whether or not the work queue is paused.\n        concurrency_limit: An optional concurrency limit for the work queue.\n        priority: The queue's priority. Lower values are higher priority (1 is the highest).\n        work_pool_name: The name of the work pool to use for this queue.\n\n    Raises:\n        prefect.exceptions.ObjectAlreadyExists: If request returns 409\n        httpx.RequestError: If request fails\n\n    Returns:\n        The created work queue\n    \"\"\"\n    if tags:\n        warnings.warn(\n            (\n                \"The use of tags for creating work queue filters is deprecated.\"\n                \" This option will be removed on 2023-02-23.\"\n            ),\n            DeprecationWarning,\n        )\n        filter = QueueFilter(tags=tags)\n    else:\n        filter = None\n    create_model = WorkQueueCreate(name=name, filter=filter)\n    if description is not None:\n        create_model.description = description\n    if is_paused is not None:\n        create_model.is_paused = is_paused\n    if concurrency_limit is not None:\n        create_model.concurrency_limit = concurrency_limit\n    if priority is not None:\n        create_model.priority = priority\n\n    data = create_model.dict(json_compatible=True)\n    try:\n        if work_pool_name is not None:\n            response = await self._client.post(\n                f\"/work_pools/{work_pool_name}/queues\", json=data\n            )\n        else:\n            response = await self._client.post(\"/work_queues/\", json=data)\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        elif e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return WorkQueue.parse_obj(response.json())\n
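Example, a sketch of creating a queue inside an existing work pool (names are placeholders); the deprecated `tags` argument is intentionally not used:

import asyncio

from prefect.client.orchestration import get_client

async def make_queue() -> None:
    async with get_client() as client:
        queue = await client.create_work_queue(
            name="etl-queue",                   # placeholder queue name
            work_pool_name="my-process-pool",   # placeholder, must already exist
            concurrency_limit=10,
            priority=1,
        )
        print(queue.id, queue.name)

asyncio.run(make_queue())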
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queue_by_name","title":"read_work_queue_by_name async","text":"

Read a work queue by name.

Parameters:

Name Type Description Default name str

a unique name for the work queue

required work_pool_name str

the name of the work pool the queue belongs to.

None

Raises:

Type Description prefect.exceptions.ObjectNotFound

if no work queue is found

httpx.HTTPStatusError

other status errors

Returns:

Name Type Description WorkQueue WorkQueue

a work queue API object

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_work_queue_by_name(\n    self,\n    name: str,\n    work_pool_name: Optional[str] = None,\n) -> WorkQueue:\n\"\"\"\n    Read a work queue by name.\n\n    Args:\n        name (str): a unique name for the work queue\n        work_pool_name (str, optional): the name of the work pool\n            the queue belongs to.\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: if no work queue is found\n        httpx.HTTPStatusError: other status errors\n\n    Returns:\n        WorkQueue: a work queue API object\n    \"\"\"\n    try:\n        if work_pool_name is not None:\n            response = await self._client.get(\n                f\"/work_pools/{work_pool_name}/queues/{name}\"\n            )\n        else:\n            response = await self._client.get(f\"/work_queues/name/{name}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n\n    return WorkQueue.parse_obj(response.json())\n
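A minimal sketch of looking a queue up by name, assuming the placeholder pool and queue from the example above exist; the lookup raises ObjectNotFound when they do not.

import asyncio
from prefect import get_client
from prefect.exceptions import ObjectNotFound

async def main():
    async with get_client() as client:
        try:
            queue = await client.read_work_queue_by_name(
                "backfill-queue", work_pool_name="my-pool"  # placeholder names
            )
            print(queue.id, queue.is_paused)
        except ObjectNotFound:
            print("No such work queue")

asyncio.run(main())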
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_work_queue","title":"update_work_queue async","text":"

Update properties of a work queue.

Parameters:

Name Type Description Default id UUID

the ID of the work queue to update

required **kwargs

the fields to update

{}

Raises:

Type Description ValueError

if no kwargs are provided

prefect.exceptions.ObjectNotFound

if request returns 404

httpx.RequestError

if the request fails

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def update_work_queue(self, id: UUID, **kwargs):\n\"\"\"\n    Update properties of a work queue.\n\n    Args:\n        id: the ID of the work queue to update\n        **kwargs: the fields to update\n\n    Raises:\n        ValueError: if no kwargs are provided\n        prefect.exceptions.ObjectNotFound: if request returns 404\n        httpx.RequestError: if the request fails\n\n    \"\"\"\n    if not kwargs:\n        raise ValueError(\"No fields provided to update.\")\n\n    data = WorkQueueUpdate(**kwargs).dict(json_compatible=True, exclude_unset=True)\n    try:\n        await self._client.patch(f\"/work_queues/{id}\", json=data)\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
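As a hedged illustration: unpausing a queue and raising its concurrency limit. The keyword arguments must be valid WorkQueueUpdate fields; is_paused and concurrency_limit are assumed to be among them here, and the queue name is a placeholder.

import asyncio
from prefect import get_client

async def main():
    async with get_client() as client:
        queue = await client.read_work_queue_by_name("backfill-queue")  # placeholder
        # Only the provided kwargs are sent to the API (exclude_unset=True).
        await client.update_work_queue(queue.id, is_paused=False, concurrency_limit=10)

asyncio.run(main())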
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.get_runs_in_work_queue","title":"get_runs_in_work_queue async","text":"

Read flow runs off a work queue.

Parameters:

Name Type Description Default id UUID

the id of the work queue to read from

required limit int

a limit on the number of runs to return

10 scheduled_before datetime.datetime

a timestamp; only runs scheduled before this time will be returned. Defaults to now.

None

Raises:

Type Description prefect.exceptions.ObjectNotFound

If request returns 404

httpx.RequestError

If request fails

Returns:

Type Description List[FlowRun]

List[FlowRun]: a list of FlowRun objects read from the queue

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def get_runs_in_work_queue(\n    self,\n    id: UUID,\n    limit: int = 10,\n    scheduled_before: datetime.datetime = None,\n) -> List[FlowRun]:\n\"\"\"\n    Read flow runs off a work queue.\n\n    Args:\n        id: the id of the work queue to read from\n        limit: a limit on the number of runs to return\n        scheduled_before: a timestamp; only runs scheduled before this time will be returned.\n            Defaults to now.\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    Returns:\n        List[FlowRun]: a list of FlowRun objects read from the queue\n    \"\"\"\n    if scheduled_before is None:\n        scheduled_before = pendulum.now(\"UTC\")\n\n    try:\n        response = await self._client.post(\n            f\"/work_queues/{id}/get_runs\",\n            json={\n                \"limit\": limit,\n                \"scheduled_before\": scheduled_before.isoformat(),\n            },\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return pydantic.parse_obj_as(List[FlowRun], response.json())\n
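A sketch of polling a queue for upcoming runs, the same call workers use. The queue name is a placeholder, and the scheduled_before window is an arbitrary choice for the example.

import asyncio
import pendulum
from prefect import get_client

async def main():
    async with get_client() as client:
        queue = await client.read_work_queue_by_name("backfill-queue")  # placeholder
        # Pull up to 5 runs scheduled to start within the next hour.
        runs = await client.get_runs_in_work_queue(
            id=queue.id,
            limit=5,
            scheduled_before=pendulum.now("UTC").add(hours=1),
        )
        for run in runs:
            print(run.id, run.name)

asyncio.run(main())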
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queue","title":"read_work_queue async","text":"

Read a work queue.

Parameters:

Name Type Description Default id UUID

the id of the work queue to load

required

Raises:

Type Description prefect.exceptions.ObjectNotFound

If request returns 404

httpx.RequestError

If request fails

Returns:

Name Type Description WorkQueue WorkQueue

an instantiated WorkQueue object

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_work_queue(\n    self,\n    id: UUID,\n) -> WorkQueue:\n\"\"\"\n    Read a work queue.\n\n    Args:\n        id: the id of the work queue to load\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    Returns:\n        WorkQueue: an instantiated WorkQueue object\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/work_queues/{id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return WorkQueue.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.match_work_queues","title":"match_work_queues async","text":"

Query the Prefect API for work queues whose names start with a specific prefix.

Parameters:

Name Type Description Default prefixes List[str]

a list of strings used to match work queue name prefixes

required

Returns:

Type Description List[WorkQueue]

a list of WorkQueue model representations of the work queues

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def match_work_queues(\n    self,\n    prefixes: List[str],\n) -> List[WorkQueue]:\n\"\"\"\n    Query the Prefect API for work queues with names with a specific prefix.\n\n    Args:\n        prefixes: a list of strings used to match work queue name prefixes\n\n    Returns:\n        a list of WorkQueue model representations\n            of the work queues\n    \"\"\"\n    page_length = 100\n    current_page = 0\n    work_queues = []\n\n    while True:\n        new_queues = await self.read_work_queues(\n            offset=current_page * page_length,\n            limit=page_length,\n            work_queue_filter=WorkQueueFilter(\n                name=WorkQueueFilterName(startswith_=prefixes)\n            ),\n        )\n        if not new_queues:\n            break\n        work_queues += new_queues\n        current_page += 1\n\n    return work_queues\n
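A short sketch, assuming a couple of placeholder naming prefixes: collecting every work queue whose name starts with one of the given strings.

import asyncio
from prefect import get_client

async def main():
    async with get_client() as client:
        # Find every work queue whose name starts with either prefix.
        queues = await client.match_work_queues(["backfill-", "etl-"])  # placeholder prefixes
        print([q.name for q in queues])

asyncio.run(main())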
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_work_queue_by_id","title":"delete_work_queue_by_id async","text":"

Delete a work queue by its ID.

Parameters:

Name Type Description Default id UUID

the id of the work queue to delete

required

Raises:

Type Description prefect.exceptions.ObjectNotFound

If request returns 404

httpx.RequestError

If the request fails

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def delete_work_queue_by_id(\n    self,\n    id: UUID,\n):\n\"\"\"\n    Delete a work queue by its ID.\n\n    Args:\n        id: the id of the work queue to delete\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If requests fails\n    \"\"\"\n    try:\n        await self._client.delete(\n            f\"/work_queues/{id}\",\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
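A hedged sketch of deleting a queue by ID, reusing the placeholder queue name from the earlier examples and tolerating the case where it is already gone.

import asyncio
from prefect import get_client
from prefect.exceptions import ObjectNotFound

async def main():
    async with get_client() as client:
        queue = await client.read_work_queue_by_name("backfill-queue")  # placeholder
        try:
            await client.delete_work_queue_by_id(queue.id)
        except ObjectNotFound:
            print("Queue was already deleted")

asyncio.run(main())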
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_block_type","title":"create_block_type async","text":"

Create a block type in the Prefect API.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_block_type(self, block_type: BlockTypeCreate) -> BlockType:\n\"\"\"\n    Create a block type in the Prefect API.\n    \"\"\"\n    try:\n        response = await self._client.post(\n            \"/block_types/\",\n            json=block_type.dict(\n                json_compatible=True, exclude_unset=True, exclude={\"id\"}\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        else:\n            raise\n    return BlockType.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_block_schema","title":"create_block_schema async","text":"

Create a block schema in the Prefect API.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_block_schema(self, block_schema: BlockSchemaCreate) -> BlockSchema:\n\"\"\"\n    Create a block schema in the Prefect API.\n    \"\"\"\n    try:\n        response = await self._client.post(\n            \"/block_schemas/\",\n            json=block_schema.dict(\n                json_compatible=True,\n                exclude_unset=True,\n                exclude={\"id\", \"block_type\", \"checksum\"},\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        else:\n            raise\n    return BlockSchema.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_block_document","title":"create_block_document async","text":"

Create a block document in the Prefect API. This data is used to configure a corresponding Block.

Parameters:

Name Type Description Default include_secrets bool

whether to include secret values on the stored Block, corresponding to Pydantic's SecretStr and SecretBytes fields. Note that Blocks may not work as expected if this is set to False.

True Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_block_document(\n    self,\n    block_document: Union[BlockDocument, BlockDocumentCreate],\n    include_secrets: bool = True,\n) -> BlockDocument:\n\"\"\"\n    Create a block document in the Prefect API. This data is used to configure a\n    corresponding Block.\n\n    Args:\n        include_secrets (bool): whether to include secret values\n            on the stored Block, corresponding to Pydantic's `SecretStr` and\n            `SecretBytes` fields. Note Blocks may not work as expected if\n            this is set to `False`.\n    \"\"\"\n    if isinstance(block_document, BlockDocument):\n        block_document = BlockDocumentCreate.parse_obj(\n            block_document.dict(\n                json_compatible=True,\n                include_secrets=include_secrets,\n                exclude_unset=True,\n                exclude={\"id\", \"block_schema\", \"block_type\"},\n            ),\n        )\n\n    try:\n        response = await self._client.post(\n            \"/block_documents/\",\n            json=block_document.dict(\n                json_compatible=True,\n                include_secrets=include_secrets,\n                exclude_unset=True,\n                exclude={\"id\", \"block_schema\", \"block_type\"},\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        else:\n            raise\n    return BlockDocument.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_block_document","title":"update_block_document async","text":"

Update a block document in the Prefect API.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def update_block_document(\n    self,\n    block_document_id: UUID,\n    block_document: BlockDocumentUpdate,\n):\n\"\"\"\n    Update a block document in the Prefect API.\n    \"\"\"\n    try:\n        await self._client.patch(\n            f\"/block_documents/{block_document_id}\",\n            json=block_document.dict(\n                json_compatible=True,\n                exclude_unset=True,\n                include={\"data\", \"merge_existing_data\", \"block_schema_id\"},\n                include_secrets=True,\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_block_document","title":"delete_block_document async","text":"

Delete a block document.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def delete_block_document(self, block_document_id: UUID):\n\"\"\"\n    Delete a block document.\n    \"\"\"\n    try:\n        await self._client.delete(f\"/block_documents/{block_document_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_type_by_slug","title":"read_block_type_by_slug async","text":"

Read a block type by its slug.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_block_type_by_slug(self, slug: str) -> BlockType:\n\"\"\"\n    Read a block type by its slug.\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/block_types/slug/{slug}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return BlockType.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_schema_by_checksum","title":"read_block_schema_by_checksum async","text":"

Look up a block schema by its checksum

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_block_schema_by_checksum(\n    self, checksum: str, version: Optional[str] = None\n) -> BlockSchema:\n\"\"\"\n    Look up a block schema checksum\n    \"\"\"\n    try:\n        url = f\"/block_schemas/checksum/{checksum}\"\n        if version is not None:\n            url = f\"{url}?version={version}\"\n        response = await self._client.get(url)\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return BlockSchema.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_block_type","title":"update_block_type async","text":"

Update a block type in the Prefect API.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def update_block_type(self, block_type_id: UUID, block_type: BlockTypeUpdate):\n\"\"\"\n    Update a block document in the Prefect API.\n    \"\"\"\n    try:\n        await self._client.patch(\n            f\"/block_types/{block_type_id}\",\n            json=block_type.dict(\n                json_compatible=True,\n                exclude_unset=True,\n                include=BlockTypeUpdate.updatable_fields(),\n                include_secrets=True,\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_block_type","title":"delete_block_type async","text":"

Delete a block type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def delete_block_type(self, block_type_id: UUID):\n\"\"\"\n    Delete a block type.\n    \"\"\"\n    try:\n        await self._client.delete(f\"/block_types/{block_type_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        elif (\n            e.response.status_code == status.HTTP_403_FORBIDDEN\n            and e.response.json()[\"detail\"]\n            == \"protected block types cannot be deleted.\"\n        ):\n            raise prefect.exceptions.ProtectedBlockError(\n                \"Protected block types cannot be deleted.\"\n            ) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_types","title":"read_block_types async","text":"

Read all block types

Raises:

Type Description httpx.RequestError

if the block types were not found

Returns:

Type Description List[BlockType]

List of BlockTypes.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_block_types(self) -> List[BlockType]:\n\"\"\"\n    Read all block types\n    Raises:\n        httpx.RequestError: if the block types were not found\n\n    Returns:\n        List of BlockTypes.\n    \"\"\"\n    response = await self._client.post(\"/block_types/filter\", json={})\n    return pydantic.parse_obj_as(List[BlockType], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_schemas","title":"read_block_schemas async","text":"

Read all block schemas

Raises:

Type Description httpx.RequestError

if a valid block schema was not found

Returns:

Type Description List[BlockSchema]

A list of BlockSchemas.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_block_schemas(self) -> List[BlockSchema]:\n\"\"\"\n    Read all block schemas\n    Raises:\n        httpx.RequestError: if a valid block schema was not found\n\n    Returns:\n        A BlockSchema.\n    \"\"\"\n    response = await self._client.post(\"/block_schemas/filter\", json={})\n    return pydantic.parse_obj_as(List[BlockSchema], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_document","title":"read_block_document async","text":"

Read the block document with the specified ID.

Parameters:

Name Type Description Default block_document_id UUID

the block document id

required include_secrets bool

whether to include secret values on the Block, corresponding to Pydantic's SecretStr and SecretBytes fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is False.

True

Raises:

Type Description httpx.RequestError

if the block document was not found for any reason

Returns:

Type Description

A block document or None.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_block_document(\n    self,\n    block_document_id: UUID,\n    include_secrets: bool = True,\n):\n\"\"\"\n    Read the block document with the specified ID.\n\n    Args:\n        block_document_id: the block document id\n        include_secrets (bool): whether to include secret values\n            on the Block, corresponding to Pydantic's `SecretStr` and\n            `SecretBytes` fields. These fields are automatically obfuscated\n            by Pydantic, but users can additionally choose not to receive\n            their values from the API. Note that any business logic on the\n            Block may not work if this is `False`.\n\n    Raises:\n        httpx.RequestError: if the block document was not found for any reason\n\n    Returns:\n        A block document or None.\n    \"\"\"\n    try:\n        response = await self._client.get(\n            f\"/block_documents/{block_document_id}\",\n            params=dict(include_secrets=include_secrets),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return BlockDocument.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_document_by_name","title":"read_block_document_by_name async","text":"

Read the block document with the specified name that corresponds to a specific block type name.

Parameters:

Name Type Description Default name str

The block document name.

required block_type_slug str

The block type slug.

required include_secrets bool

whether to include secret values on the Block, corresponding to Pydantic's SecretStr and SecretBytes fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is False.

True

Raises:

Type Description httpx.RequestError

if the block document was not found for any reason

Returns:

Type Description

A block document or None.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_block_document_by_name(\n    self,\n    name: str,\n    block_type_slug: str,\n    include_secrets: bool = True,\n):\n\"\"\"\n    Read the block document with the specified name that corresponds to a\n    specific block type name.\n\n    Args:\n        name: The block document name.\n        block_type_slug: The block type slug.\n        include_secrets (bool): whether to include secret values\n            on the Block, corresponding to Pydantic's `SecretStr` and\n            `SecretBytes` fields. These fields are automatically obfuscated\n            by Pydantic, but users can additionally choose not to receive\n            their values from the API. Note that any business logic on the\n            Block may not work if this is `False`.\n\n    Raises:\n        httpx.RequestError: if the block document was not found for any reason\n\n    Returns:\n        A block document or None.\n    \"\"\"\n    try:\n        response = await self._client.get(\n            f\"/block_types/slug/{block_type_slug}/block_documents/name/{name}\",\n            params=dict(include_secrets=include_secrets),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return BlockDocument.parse_obj(response.json())\n
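A brief illustration, with placeholder names: fetching the raw block document behind a block named "prod-creds" of the built-in "secret" block type.

import asyncio
from prefect import get_client

async def main():
    async with get_client() as client:
        doc = await client.read_block_document_by_name(
            name="prod-creds", block_type_slug="secret"  # placeholder block name
        )
        # doc.data holds the (optionally secret-bearing) configuration values.
        print(doc.id, doc.name, list(doc.data))

asyncio.run(main())

For everyday use, loading the typed Block class (for example Secret.load("prod-creds")) is usually more convenient; this client call returns the raw document that backs it.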
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_documents","title":"read_block_documents async","text":"

Read block documents

Parameters:

Name Type Description Default block_schema_type Optional[str]

an optional block schema type

None offset Optional[int]

an offset

None limit Optional[int]

the number of blocks to return

None include_secrets bool

whether to include secret values on the Block, corresponding to Pydantic's SecretStr and SecretBytes fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is False.

True

Returns:

Type Description

A list of block documents

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_block_documents(\n    self,\n    block_schema_type: Optional[str] = None,\n    offset: Optional[int] = None,\n    limit: Optional[int] = None,\n    include_secrets: bool = True,\n):\n\"\"\"\n    Read block documents\n\n    Args:\n        block_schema_type: an optional block schema type\n        offset: an offset\n        limit: the number of blocks to return\n        include_secrets (bool): whether to include secret values\n            on the Block, corresponding to Pydantic's `SecretStr` and\n            `SecretBytes` fields. These fields are automatically obfuscated\n            by Pydantic, but users can additionally choose not to receive\n            their values from the API. Note that any business logic on the\n            Block may not work if this is `False`.\n\n    Returns:\n        A list of block documents\n    \"\"\"\n    response = await self._client.post(\n        \"/block_documents/filter\",\n        json=dict(\n            block_schema_type=block_schema_type,\n            offset=offset,\n            limit=limit,\n            include_secrets=include_secrets,\n        ),\n    )\n    return pydantic.parse_obj_as(List[BlockDocument], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_deployment","title":"create_deployment async","text":"

Create a deployment.

Parameters:

Name Type Description Default flow_id UUID

the flow ID to create a deployment for

required name str

the name of the deployment

required version str

an optional version string for the deployment

None schedule SCHEDULE_TYPES

an optional schedule to apply to the deployment

None tags List[str]

an optional list of tags to apply to the deployment

None storage_document_id UUID

a reference to the storage block document used for the deployed flow

None infrastructure_document_id UUID

a reference to the infrastructure block document to use for this deployment

None

Raises:

Type Description httpx.RequestError

if the deployment was not created for any reason

Returns:

Type Description UUID

the ID of the deployment in the backend

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_deployment(\n    self,\n    flow_id: UUID,\n    name: str,\n    version: str = None,\n    schedule: SCHEDULE_TYPES = None,\n    parameters: Dict[str, Any] = None,\n    description: str = None,\n    work_queue_name: str = None,\n    work_pool_name: str = None,\n    tags: List[str] = None,\n    storage_document_id: UUID = None,\n    manifest_path: str = None,\n    path: str = None,\n    entrypoint: str = None,\n    infrastructure_document_id: UUID = None,\n    infra_overrides: Dict[str, Any] = None,\n    parameter_openapi_schema: dict = None,\n    is_schedule_active: Optional[bool] = None,\n    pull_steps: Optional[List[dict]] = None,\n) -> UUID:\n\"\"\"\n    Create a deployment.\n\n    Args:\n        flow_id: the flow ID to create a deployment for\n        name: the name of the deployment\n        version: an optional version string for the deployment\n        schedule: an optional schedule to apply to the deployment\n        tags: an optional list of tags to apply to the deployment\n        storage_document_id: an reference to the storage block document\n            used for the deployed flow\n        infrastructure_document_id: an reference to the infrastructure block document\n            to use for this deployment\n\n    Raises:\n        httpx.RequestError: if the deployment was not created for any reason\n\n    Returns:\n        the ID of the deployment in the backend\n    \"\"\"\n    deployment_create = DeploymentCreate(\n        flow_id=flow_id,\n        name=name,\n        version=version,\n        schedule=schedule,\n        parameters=dict(parameters or {}),\n        tags=list(tags or []),\n        work_queue_name=work_queue_name,\n        description=description,\n        storage_document_id=storage_document_id,\n        path=path,\n        entrypoint=entrypoint,\n        manifest_path=manifest_path,  # for backwards compat\n        infrastructure_document_id=infrastructure_document_id,\n        infra_overrides=infra_overrides or {},\n        parameter_openapi_schema=parameter_openapi_schema,\n        is_schedule_active=is_schedule_active,\n        pull_steps=pull_steps,\n    )\n\n    if work_pool_name is not None:\n        deployment_create.work_pool_name = work_pool_name\n\n    # Exclude newer fields that are not set to avoid compatibility issues\n    exclude = {\n        field\n        for field in [\"work_pool_name\", \"work_queue_name\"]\n        if field not in deployment_create.__fields_set__\n    }\n\n    if deployment_create.is_schedule_active is None:\n        exclude.add(\"is_schedule_active\")\n\n    if deployment_create.pull_steps is None:\n        exclude.add(\"pull_steps\")\n\n    json = deployment_create.dict(json_compatible=True, exclude=exclude)\n    response = await self._client.post(\n        \"/deployments/\",\n        json=json,\n    )\n    deployment_id = response.json().get(\"id\")\n    if not deployment_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    return UUID(deployment_id)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployment","title":"read_deployment async","text":"

Query the Prefect API for a deployment by id.

Parameters:

Name Type Description Default deployment_id UUID

the deployment ID of interest

required

Returns:

Type Description DeploymentResponse

a Deployment model representation of the deployment

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_deployment(\n    self,\n    deployment_id: UUID,\n) -> DeploymentResponse:\n\"\"\"\n    Query the Prefect API for a deployment by id.\n\n    Args:\n        deployment_id: the deployment ID of interest\n\n    Returns:\n        a [Deployment model][prefect.client.schemas.objects.Deployment] representation of the deployment\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/deployments/{deployment_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return DeploymentResponse.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployment_by_name","title":"read_deployment_by_name async","text":"

Query the Prefect API for a deployment by name.

Parameters:

Name Type Description Default name str

A deployed flow's name in the format <FLOW_NAME>/<DEPLOYMENT_NAME>

required

Raises:

Type Description prefect.exceptions.ObjectNotFound

If request returns 404

httpx.RequestError

If request fails

Returns:

Type Description DeploymentResponse

a Deployment model representation of the deployment

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_deployment_by_name(\n    self,\n    name: str,\n) -> DeploymentResponse:\n\"\"\"\n    Query the Prefect API for a deployment by name.\n\n    Args:\n        name: A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    Returns:\n        a Deployment model representation of the deployment\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/deployments/name/{name}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n\n    return DeploymentResponse.parse_obj(response.json())\n
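A minimal sketch, using a placeholder "<FLOW_NAME>/<DEPLOYMENT_NAME>" string, of resolving a deployment by name and handling the not-found case.

import asyncio
from prefect import get_client
from prefect.exceptions import ObjectNotFound

async def main():
    async with get_client() as client:
        try:
            # Both parts of the name are placeholders here.
            deployment = await client.read_deployment_by_name("my-flow/my-deployment")
            print(deployment.id, deployment.flow_id)
        except ObjectNotFound:
            print("Deployment not found")

asyncio.run(main())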
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployments","title":"read_deployments async","text":"

Query the Prefect API for deployments. Only deployments matching all the provided criteria will be returned.

Parameters:

Name Type Description Default flow_filter FlowFilter

filter criteria for flows

None flow_run_filter FlowRunFilter

filter criteria for flow runs

None task_run_filter TaskRunFilter

filter criteria for task runs

None deployment_filter DeploymentFilter

filter criteria for deployments

None work_pool_filter WorkPoolFilter

filter criteria for work pools

None work_queue_filter WorkQueueFilter

filter criteria for work pool queues

None limit int

a limit for the deployment query

None offset int

an offset for the deployment query

0

Returns:

Type Description List[DeploymentResponse]

a list of Deployment model representations of the deployments

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_deployments(\n    self,\n    *,\n    flow_filter: FlowFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    deployment_filter: DeploymentFilter = None,\n    work_pool_filter: WorkPoolFilter = None,\n    work_queue_filter: WorkQueueFilter = None,\n    limit: int = None,\n    sort: DeploymentSort = None,\n    offset: int = 0,\n) -> List[DeploymentResponse]:\n\"\"\"\n    Query the Prefect API for deployments. Only deployments matching all\n    the provided criteria will be returned.\n\n    Args:\n        flow_filter: filter criteria for flows\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        deployment_filter: filter criteria for deployments\n        work_pool_filter: filter criteria for work pools\n        work_queue_filter: filter criteria for work pool queues\n        limit: a limit for the deployment query\n        offset: an offset for the deployment query\n\n    Returns:\n        a list of Deployment model representations\n            of the deployments\n    \"\"\"\n    body = {\n        \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n            if flow_run_filter\n            else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"deployments\": (\n            deployment_filter.dict(json_compatible=True)\n            if deployment_filter\n            else None\n        ),\n        \"work_pools\": (\n            work_pool_filter.dict(json_compatible=True)\n            if work_pool_filter\n            else None\n        ),\n        \"work_pool_queues\": (\n            work_queue_filter.dict(json_compatible=True)\n            if work_queue_filter\n            else None\n        ),\n        \"limit\": limit,\n        \"offset\": offset,\n        \"sort\": sort,\n    }\n\n    response = await self._client.post(\"/deployments/filter\", json=body)\n    return pydantic.parse_obj_as(List[DeploymentResponse], response.json())\n
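A hedged paging sketch: with no filters supplied, the query matches everything, so limit and offset can be used to walk the full set of deployments.

import asyncio
from prefect import get_client

async def main():
    async with get_client() as client:
        offset = 0
        while True:
            page = await client.read_deployments(limit=50, offset=offset)
            if not page:
                break
            for deployment in page:
                print(deployment.id, deployment.name)
            offset += 50

asyncio.run(main())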
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_deployment","title":"delete_deployment async","text":"

Delete deployment by id.

Parameters:

Name Type Description Default deployment_id UUID

The deployment id of interest.

required

Raises:

Type Description prefect.exceptions.ObjectNotFound

If request returns 404

httpx.RequestError

If the request fails

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def delete_deployment(\n    self,\n    deployment_id: UUID,\n):\n\"\"\"\n    Delete deployment by id.\n\n    Args:\n        deployment_id: The deployment id of interest.\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If requests fails\n    \"\"\"\n    try:\n        await self._client.delete(f\"/deployments/{deployment_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run","title":"read_flow_run async","text":"

Query the Prefect API for a flow run by id.

Parameters:

Name Type Description Default flow_run_id UUID

the flow run ID of interest

required

Returns:

Type Description FlowRun

a Flow Run model representation of the flow run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_flow_run(self, flow_run_id: UUID) -> FlowRun:\n\"\"\"\n    Query the Prefect API for a flow run by id.\n\n    Args:\n        flow_run_id: the flow run ID of interest\n\n    Returns:\n        a Flow Run model representation of the flow run\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/flow_runs/{flow_run_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return FlowRun.parse_obj(response.json())\n
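A small sketch that first lists a flow run (so the example is self-contained) and then re-reads it by ID; in practice you would already have the UUID of the run you care about.

import asyncio
from prefect import get_client

async def main():
    async with get_client() as client:
        recent = await client.read_flow_runs(limit=1)
        if recent:
            flow_run = await client.read_flow_run(recent[0].id)
            print(flow_run.name, flow_run.state)

asyncio.run(main())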
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.resume_flow_run","title":"resume_flow_run async","text":"

Resumes a paused flow run.

Parameters:

Name Type Description Default flow_run_id UUID

the flow run ID of interest

required

Returns:

Type Description OrchestrationResult

an OrchestrationResult model representation of state orchestration output

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def resume_flow_run(self, flow_run_id: UUID) -> OrchestrationResult:\n\"\"\"\n    Resumes a paused flow run.\n\n    Args:\n        flow_run_id: the flow run ID of interest\n\n    Returns:\n        an OrchestrationResult model representation of state orchestration output\n    \"\"\"\n    try:\n        response = await self._client.post(f\"/flow_runs/{flow_run_id}/resume\")\n    except httpx.HTTPStatusError:\n        raise\n\n    return OrchestrationResult.parse_obj(response.json())\n
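A hedged sketch of resuming a paused run; the UUID below is a placeholder for a real paused flow run's ID, and the returned OrchestrationResult reports whether the transition was accepted.

import asyncio
from uuid import UUID
from prefect import get_client

async def main():
    async with get_client() as client:
        flow_run_id = UUID("00000000-0000-0000-0000-000000000000")  # placeholder
        result = await client.resume_flow_run(flow_run_id)
        print(result.status)

asyncio.run(main())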
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_runs","title":"read_flow_runs async","text":"

Query the Prefect API for flow runs. Only flow runs matching all criteria will be returned.

Parameters:

Name Type Description Default flow_filter FlowFilter

filter criteria for flows

None flow_run_filter FlowRunFilter

filter criteria for flow runs

None task_run_filter TaskRunFilter

filter criteria for task runs

None deployment_filter DeploymentFilter

filter criteria for deployments

None work_pool_filter WorkPoolFilter

filter criteria for work pools

None work_queue_filter WorkQueueFilter

filter criteria for work pool queues

None sort FlowRunSort

sort criteria for the flow runs

None limit int

limit for the flow run query

None offset int

offset for the flow run query

0

Returns:

Type Description List[FlowRun]

a list of Flow Run model representations of the flow runs

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_flow_runs(\n    self,\n    *,\n    flow_filter: FlowFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    deployment_filter: DeploymentFilter = None,\n    work_pool_filter: WorkPoolFilter = None,\n    work_queue_filter: WorkQueueFilter = None,\n    sort: FlowRunSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[FlowRun]:\n\"\"\"\n    Query the Prefect API for flow runs. Only flow runs matching all criteria will\n    be returned.\n\n    Args:\n        flow_filter: filter criteria for flows\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        deployment_filter: filter criteria for deployments\n        work_pool_filter: filter criteria for work pools\n        work_queue_filter: filter criteria for work pool queues\n        sort: sort criteria for the flow runs\n        limit: limit for the flow run query\n        offset: offset for the flow run query\n\n    Returns:\n        a list of Flow Run model representations\n            of the flow runs\n    \"\"\"\n    body = {\n        \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n            if flow_run_filter\n            else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"deployments\": (\n            deployment_filter.dict(json_compatible=True)\n            if deployment_filter\n            else None\n        ),\n        \"work_pools\": (\n            work_pool_filter.dict(json_compatible=True)\n            if work_pool_filter\n            else None\n        ),\n        \"work_pool_queues\": (\n            work_queue_filter.dict(json_compatible=True)\n            if work_queue_filter\n            else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n\n    response = await self._client.post(\"/flow_runs/filter\", json=body)\n    return pydantic.parse_obj_as(List[FlowRun], response.json())\n
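A brief sketch: with no filters, the query matches all flow runs up to the limit; FlowFilter, FlowRunFilter, and the other filter models can be passed to narrow the result set.

import asyncio
from prefect import get_client

async def main():
    async with get_client() as client:
        flow_runs = await client.read_flow_runs(limit=10)
        for run in flow_runs:
            print(run.id, run.name, run.state_name)

asyncio.run(main())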
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.set_flow_run_state","title":"set_flow_run_state async","text":"

Set the state of a flow run.

Parameters:

Name Type Description Default flow_run_id UUID

the id of the flow run

required state prefect.states.State

the state to set

required force bool

if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state

False

Returns:

Type Description OrchestrationResult

an OrchestrationResult model representation of state orchestration output

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def set_flow_run_state(\n    self,\n    flow_run_id: UUID,\n    state: \"prefect.states.State\",\n    force: bool = False,\n) -> OrchestrationResult:\n\"\"\"\n    Set the state of a flow run.\n\n    Args:\n        flow_run_id: the id of the flow run\n        state: the state to set\n        force: if True, disregard orchestration logic when setting the state,\n            forcing the Prefect API to accept the state\n\n    Returns:\n        an OrchestrationResult model representation of state orchestration output\n    \"\"\"\n    state_create = state.to_state_create()\n    state_create.state_details.flow_run_id = flow_run_id\n    try:\n        response = await self._client.post(\n            f\"/flow_runs/{flow_run_id}/set_state\",\n            json=dict(state=state_create.dict(json_compatible=True), force=force),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n\n    return OrchestrationResult.parse_obj(response.json())\n
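A hedged sketch of proposing a state change, here cancelling a run identified by a placeholder UUID. Without force=True the transition is still subject to orchestration rules and may be rejected.

import asyncio
from uuid import UUID
from prefect import get_client
from prefect.states import Cancelled

async def main():
    async with get_client() as client:
        flow_run_id = UUID("00000000-0000-0000-0000-000000000000")  # placeholder
        result = await client.set_flow_run_state(flow_run_id, state=Cancelled())
        print(result.status)  # e.g. ACCEPT, REJECT, or ABORT

asyncio.run(main())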
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run_states","title":"read_flow_run_states async","text":"

Query for the states of a flow run

Parameters:

Name Type Description Default flow_run_id UUID

the id of the flow run

required

Returns:

Type Description List[prefect.states.State]

a list of State model representations of the flow run states

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_flow_run_states(\n    self, flow_run_id: UUID\n) -> List[prefect.states.State]:\n\"\"\"\n    Query for the states of a flow run\n\n    Args:\n        flow_run_id: the id of the flow run\n\n    Returns:\n        a list of State model representations\n            of the flow run states\n    \"\"\"\n    response = await self._client.get(\n        \"/flow_run_states/\", params=dict(flow_run_id=str(flow_run_id))\n    )\n    return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_task_run","title":"create_task_run async","text":"

Create a task run

Parameters:

Name Type Description Default task TaskObject

The Task to run

required flow_run_id UUID

The flow run id with which to associate the task run

required dynamic_key str

A key unique to this particular run of a Task within the flow

required name str

An optional name for the task run

None extra_tags Iterable[str]

an optional list of extra tags to apply to the task run in addition to task.tags

None state prefect.states.State

The initial state for the run. If not provided, defaults to Pending for now. Should always be a Scheduled type.

None task_inputs Dict[str, List[Union[TaskRunResult, Parameter, Constant]]]

the set of inputs passed to the task

None

Returns:

Type Description TaskRun

The created task run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_task_run(\n    self,\n    task: \"TaskObject\",\n    flow_run_id: UUID,\n    dynamic_key: str,\n    name: str = None,\n    extra_tags: Iterable[str] = None,\n    state: prefect.states.State = None,\n    task_inputs: Dict[\n        str,\n        List[\n            Union[\n                TaskRunResult,\n                Parameter,\n                Constant,\n            ]\n        ],\n    ] = None,\n) -> TaskRun:\n\"\"\"\n    Create a task run\n\n    Args:\n        task: The Task to run\n        flow_run_id: The flow run id with which to associate the task run\n        dynamic_key: A key unique to this particular run of a Task within the flow\n        name: An optional name for the task run\n        extra_tags: an optional list of extra tags to apply to the task run in\n            addition to `task.tags`\n        state: The initial state for the run. If not provided, defaults to\n            `Pending` for now. Should always be a `Scheduled` type.\n        task_inputs: the set of inputs passed to the task\n\n    Returns:\n        The created task run.\n    \"\"\"\n    tags = set(task.tags).union(extra_tags or [])\n\n    if state is None:\n        state = prefect.states.Pending()\n\n    task_run_data = TaskRunCreate(\n        name=name,\n        flow_run_id=flow_run_id,\n        task_key=task.task_key,\n        dynamic_key=dynamic_key,\n        tags=list(tags),\n        task_version=task.version,\n        empirical_policy=TaskRunPolicy(\n            retries=task.retries,\n            retry_delay=task.retry_delay_seconds,\n            retry_jitter_factor=task.retry_jitter_factor,\n        ),\n        state=state.to_state_create(),\n        task_inputs=task_inputs or {},\n    )\n\n    response = await self._client.post(\n        \"/task_runs/\", json=task_run_data.dict(json_compatible=True)\n    )\n    return TaskRun.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_task_run","title":"read_task_run async","text":"

Query the Prefect API for a task run by id.

Parameters:

Name Type Description Default task_run_id UUID

the task run ID of interest

required

Returns:

Type Description TaskRun

a Task Run model representation of the task run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_task_run(self, task_run_id: UUID) -> TaskRun:\n\"\"\"\n    Query the Prefect API for a task run by id.\n\n    Args:\n        task_run_id: the task run ID of interest\n\n    Returns:\n        a Task Run model representation of the task run\n    \"\"\"\n    response = await self._client.get(f\"/task_runs/{task_run_id}\")\n    return TaskRun.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_task_runs","title":"read_task_runs async","text":"

Query the Prefect API for task runs. Only task runs matching all criteria will be returned.

Parameters:

Name Type Description Default flow_filter FlowFilter

filter criteria for flows

None flow_run_filter FlowRunFilter

filter criteria for flow runs

None task_run_filter TaskRunFilter

filter criteria for task runs

None deployment_filter DeploymentFilter

filter criteria for deployments

None sort TaskRunSort

sort criteria for the task runs

None limit int

a limit for the task run query

None offset int

an offset for the task run query

0

Returns:

Type Description List[TaskRun]

a list of Task Run model representations of the task runs

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_task_runs(\n    self,\n    *,\n    flow_filter: FlowFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    deployment_filter: DeploymentFilter = None,\n    sort: TaskRunSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[TaskRun]:\n\"\"\"\n    Query the Prefect API for task runs. Only task runs matching all criteria will\n    be returned.\n\n    Args:\n        flow_filter: filter criteria for flows\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        deployment_filter: filter criteria for deployments\n        sort: sort criteria for the task runs\n        limit: a limit for the task run query\n        offset: an offset for the task run query\n\n    Returns:\n        a list of Task Run model representations\n            of the task runs\n    \"\"\"\n    body = {\n        \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n            if flow_run_filter\n            else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"deployments\": (\n            deployment_filter.dict(json_compatible=True)\n            if deployment_filter\n            else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n    response = await self._client.post(\"/task_runs/filter\", json=body)\n    return pydantic.parse_obj_as(List[TaskRun], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.set_task_run_state","title":"set_task_run_state async","text":"

Set the state of a task run.

Parameters:

Name Type Description Default task_run_id UUID

the id of the task run

required state prefect.states.State

the state to set

required force bool

if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state

False

Returns:

Type Description OrchestrationResult

an OrchestrationResult model representation of state orchestration output

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def set_task_run_state(\n    self,\n    task_run_id: UUID,\n    state: prefect.states.State,\n    force: bool = False,\n) -> OrchestrationResult:\n\"\"\"\n    Set the state of a task run.\n\n    Args:\n        task_run_id: the id of the task run\n        state: the state to set\n        force: if True, disregard orchestration logic when setting the state,\n            forcing the Prefect API to accept the state\n\n    Returns:\n        an OrchestrationResult model representation of state orchestration output\n    \"\"\"\n    state_create = state.to_state_create()\n    state_create.state_details.task_run_id = task_run_id\n    response = await self._client.post(\n        f\"/task_runs/{task_run_id}/set_state\",\n        json=dict(state=state_create.dict(json_compatible=True), force=force),\n    )\n    return OrchestrationResult.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_task_run_states","title":"read_task_run_states async","text":"

Query for the states of a task run

Parameters:

Name Type Description Default task_run_id UUID

the id of the task run

required

Returns:

Type Description List[prefect.states.State]

a list of State model representations of the task run states

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_task_run_states(\n    self, task_run_id: UUID\n) -> List[prefect.states.State]:\n\"\"\"\n    Query for the states of a task run\n\n    Args:\n        task_run_id: the id of the task run\n\n    Returns:\n        a list of State model representations of the task run states\n    \"\"\"\n    response = await self._client.get(\n        \"/task_run_states/\", params=dict(task_run_id=str(task_run_id))\n    )\n    return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_logs","title":"create_logs async","text":"

Create logs for a flow or task run

Parameters:

Name Type Description Default logs Iterable[Union[LogCreate, dict]]

An iterable of LogCreate objects or already json-compatible dicts

required Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_logs(self, logs: Iterable[Union[LogCreate, dict]]) -> None:\n\"\"\"\n    Create logs for a flow or task run\n\n    Args:\n        logs: An iterable of `LogCreate` objects or already json-compatible dicts\n    \"\"\"\n    serialized_logs = [\n        log.dict(json_compatible=True) if isinstance(log, LogCreate) else log\n        for log in logs\n    ]\n    await self._client.post(\"/logs/\", json=serialized_logs)\n
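An illustrative sketch, assuming the dict keys mirror the LogCreate fields (name, level, message, timestamp, flow_run_id): attaching a custom log line to an existing flow run identified by a placeholder UUID.

import asyncio
from uuid import UUID
import pendulum
from prefect import get_client

async def main():
    async with get_client() as client:
        flow_run_id = UUID("00000000-0000-0000-0000-000000000000")  # placeholder
        # Logs may be passed as already json-compatible dicts (or LogCreate objects).
        await client.create_logs(
            [
                {
                    "name": "my.custom.logger",
                    "level": 20,  # logging.INFO
                    "message": "hello from outside the flow",
                    "timestamp": pendulum.now("UTC").isoformat(),
                    "flow_run_id": str(flow_run_id),
                }
            ]
        )

asyncio.run(main())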
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run_notification_policy","title":"create_flow_run_notification_policy async","text":"

Create a notification policy for flow runs

Parameters:

block_document_id (UUID, required): The block document UUID
is_active (bool, default True): Whether the notification policy is active
tags (List[str], default None): List of flow tags
state_names (List[str], default None): List of state names
message_template (Optional[str], default None): Notification message template

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_flow_run_notification_policy(\n    self,\n    block_document_id: UUID,\n    is_active: bool = True,\n    tags: List[str] = None,\n    state_names: List[str] = None,\n    message_template: Optional[str] = None,\n) -> UUID:\n\"\"\"\n    Create a notification policy for flow runs\n\n    Args:\n        block_document_id: The block document UUID\n        is_active: Whether the notification policy is active\n        tags: List of flow tags\n        state_names: List of state names\n        message_template: Notification message template\n    \"\"\"\n    if tags is None:\n        tags = []\n    if state_names is None:\n        state_names = []\n\n    policy = FlowRunNotificationPolicyCreate(\n        block_document_id=block_document_id,\n        is_active=is_active,\n        tags=tags,\n        state_names=state_names,\n        message_template=message_template,\n    )\n    response = await self._client.post(\n        \"/flow_run_notification_policies/\",\n        json=policy.dict(json_compatible=True),\n    )\n\n    policy_id = response.json().get(\"id\")\n    if not policy_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    return UUID(policy_id)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run_notification_policies","title":"read_flow_run_notification_policies async","text":"

Query the Prefect API for flow run notification policies. Only policies matching all criteria will be returned.

Parameters:

flow_run_notification_policy_filter (FlowRunNotificationPolicyFilter, required): filter criteria for notification policies
limit (Optional[int], default None): a limit for the notification policies query
offset (int, default 0): an offset for the notification policies query

Returns:

List[FlowRunNotificationPolicy]: a list of FlowRunNotificationPolicy model representations of the notification policies

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_flow_run_notification_policies(\n    self,\n    flow_run_notification_policy_filter: FlowRunNotificationPolicyFilter,\n    limit: Optional[int] = None,\n    offset: int = 0,\n) -> List[FlowRunNotificationPolicy]:\n\"\"\"\n    Query the Prefect API for flow run notification policies. Only policies matching all criteria will\n    be returned.\n\n    Args:\n        flow_run_notification_policy_filter: filter criteria for notification policies\n        limit: a limit for the notification policies query\n        offset: an offset for the notification policies query\n\n    Returns:\n        a list of FlowRunNotificationPolicy model representations\n            of the notification policies\n    \"\"\"\n    body = {\n        \"flow_run_notification_policy_filter\": (\n            flow_run_notification_policy_filter.dict(json_compatible=True)\n            if flow_run_notification_policy_filter\n            else None\n        ),\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n    response = await self._client.post(\n        \"/flow_run_notification_policies/filter\", json=body\n    )\n    return pydantic.parse_obj_as(List[FlowRunNotificationPolicy], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_logs","title":"read_logs async","text":"

Read flow and task run logs.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_logs(\n    self,\n    log_filter: LogFilter = None,\n    limit: int = None,\n    offset: int = None,\n    sort: LogSort = LogSort.TIMESTAMP_ASC,\n) -> None:\n\"\"\"\n    Read flow and task run logs.\n    \"\"\"\n    body = {\n        \"logs\": log_filter.dict(json_compatible=True) if log_filter else None,\n        \"limit\": limit,\n        \"offset\": offset,\n        \"sort\": sort,\n    }\n\n    response = await self._client.post(\"/logs/filter\", json=body)\n    return pydantic.parse_obj_as(List[Log], response.json())\n
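A small, hedged sketch of reading recent logs without a filter; it assumes a reachable Prefect API and uses only the limit argument.

```python
import asyncio

from prefect.client.orchestration import get_client


async def main():
    async with get_client() as client:
        # With no LogFilter supplied, the query is unrestricted; limit keeps it small.
        logs = await client.read_logs(limit=20)
        for log in logs:
            print(log.timestamp, log.level, log.message)


if __name__ == "__main__":
    asyncio.run(main())
```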
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.resolve_datadoc","title":"resolve_datadoc async","text":"

Recursively decode possibly nested data documents.

\"server\" encoded documents will be retrieved from the server.

Parameters:

datadoc (DataDocument, required): The data document to resolve

Returns:

Any: a decoded object, the innermost data

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def resolve_datadoc(self, datadoc: DataDocument) -> Any:\n\"\"\"\n    Recursively decode possibly nested data documents.\n\n    \"server\" encoded documents will be retrieved from the server.\n\n    Args:\n        datadoc: The data document to resolve\n\n    Returns:\n        a decoded object, the innermost data\n    \"\"\"\n    if not isinstance(datadoc, DataDocument):\n        raise TypeError(\n            f\"`resolve_datadoc` received invalid type {type(datadoc).__name__}\"\n        )\n\n    async def resolve_inner(data):\n        if isinstance(data, bytes):\n            try:\n                data = DataDocument.parse_raw(data)\n            except pydantic.ValidationError:\n                return data\n\n        if isinstance(data, DataDocument):\n            return await resolve_inner(data.decode())\n\n        return data\n\n    return await resolve_inner(datadoc)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.send_worker_heartbeat","title":"send_worker_heartbeat async","text":"

Sends a worker heartbeat for a given work pool.

Parameters:

work_pool_name (str, required): The name of the work pool to heartbeat against.
worker_name (str, required): The name of the worker sending the heartbeat.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def send_worker_heartbeat(self, work_pool_name: str, worker_name: str):\n\"\"\"\n    Sends a worker heartbeat for a given work pool.\n\n    Args:\n        work_pool_name: The name of the work pool to heartbeat against.\n        worker_name: The name of the worker sending the heartbeat.\n    \"\"\"\n    await self._client.post(\n        f\"/work_pools/{work_pool_name}/workers/heartbeat\",\n        json={\"name\": worker_name},\n    )\n
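A minimal sketch; both names below are illustrative, and the work pool is assumed to already exist.

```python
import asyncio

from prefect.client.orchestration import get_client


async def main():
    async with get_client() as client:
        # Record a heartbeat for an (illustrative) worker in an existing pool.
        await client.send_worker_heartbeat(
            work_pool_name="my-process-pool", worker_name="worker-1"
        )


if __name__ == "__main__":
    asyncio.run(main())
```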
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_workers_for_work_pool","title":"read_workers_for_work_pool async","text":"

Reads workers for a given work pool.

Parameters:

work_pool_name (str, required): The name of the work pool for which to get member workers.
worker_filter (Optional[WorkerFilter], default None): Criteria by which to filter workers.
limit (Optional[int], default None): Limit for the worker query.
offset (Optional[int], default None): Offset for the worker query.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_workers_for_work_pool(\n    self,\n    work_pool_name: str,\n    worker_filter: Optional[WorkerFilter] = None,\n    offset: Optional[int] = None,\n    limit: Optional[int] = None,\n) -> List[Worker]:\n\"\"\"\n    Reads workers for a given work pool.\n\n    Args:\n        work_pool_name: The name of the work pool for which to get\n            member workers.\n        worker_filter: Criteria by which to filter workers.\n        limit: Limit for the worker query.\n        offset: Limit for the worker query.\n    \"\"\"\n    response = await self._client.post(\n        f\"/work_pools/{work_pool_name}/workers/filter\",\n        json={\n            \"worker_filter\": (\n                worker_filter.dict(json_compatible=True, exclude_unset=True)\n                if worker_filter\n                else None\n            ),\n            \"offset\": offset,\n            \"limit\": limit,\n        },\n    )\n\n    return pydantic.parse_obj_as(List[Worker], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_pool","title":"read_work_pool async","text":"

Reads information for a given work pool

Parameters:

work_pool_name (str, required): The name of the work pool for which to get information.

Returns:

WorkPool: Information about the requested work pool.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_work_pool(self, work_pool_name: str) -> WorkPool:\n\"\"\"\n    Reads information for a given work pool\n\n    Args:\n        work_pool_name: The name of the work pool to for which to get\n            information.\n\n    Returns:\n        Information about the requested work pool.\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/work_pools/{work_pool_name}\")\n        return pydantic.parse_obj_as(WorkPool, response.json())\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
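A short, hedged sketch showing the ObjectNotFound handling described above; the pool name is illustrative.

```python
import asyncio

from prefect.client.orchestration import get_client
from prefect.exceptions import ObjectNotFound


async def main():
    async with get_client() as client:
        try:
            pool = await client.read_work_pool("my-process-pool")  # illustrative name
            print(pool.name, pool.type)
        except ObjectNotFound:
            print("No work pool with that name exists.")


if __name__ == "__main__":
    asyncio.run(main())
```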
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_pools","title":"read_work_pools async","text":"

Reads work pools.

Parameters:

limit (Optional[int], default None): Limit for the work pool query.
offset (int, default 0): Offset for the work pool query.
work_pool_filter (Optional[WorkPoolFilter], default None): Criteria by which to filter work pools.

Returns:

List[WorkPool]: A list of work pools.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_work_pools(\n    self,\n    limit: Optional[int] = None,\n    offset: int = 0,\n    work_pool_filter: Optional[WorkPoolFilter] = None,\n) -> List[WorkPool]:\n\"\"\"\n    Reads work pools.\n\n    Args:\n        limit: Limit for the work pool query.\n        offset: Offset for the work pool query.\n        work_pool_filter: Criteria by which to filter work pools.\n\n    Returns:\n        A list of work pools.\n    \"\"\"\n\n    body = {\n        \"limit\": limit,\n        \"offset\": offset,\n        \"work_pools\": (\n            work_pool_filter.dict(json_compatible=True)\n            if work_pool_filter\n            else None\n        ),\n    }\n    response = await self._client.post(\"/work_pools/filter\", json=body)\n    return pydantic.parse_obj_as(List[WorkPool], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_work_pool","title":"create_work_pool async","text":"

Creates a work pool with the provided configuration.

Parameters:

work_pool (WorkPoolCreate, required): Desired configuration for the new work pool.

Returns:

WorkPool: Information about the newly created work pool.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_work_pool(\n    self,\n    work_pool: WorkPoolCreate,\n) -> WorkPool:\n\"\"\"\n    Creates a work pool with the provided configuration.\n\n    Args:\n        work_pool: Desired configuration for the new work pool.\n\n    Returns:\n        Information about the newly created work pool.\n    \"\"\"\n    try:\n        response = await self._client.post(\n            \"/work_pools/\",\n            json=work_pool.dict(json_compatible=True, exclude_unset=True),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        else:\n            raise\n\n    return pydantic.parse_obj_as(WorkPool, response.json())\n
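A hedged sketch; it assumes WorkPoolCreate is importable from prefect.client.schemas.actions (as in recent 2.x releases) and uses an illustrative pool name with the process worker type.

```python
import asyncio

from prefect.client.orchestration import get_client
from prefect.client.schemas.actions import WorkPoolCreate  # import path assumed


async def main():
    async with get_client() as client:
        # Create a small process-type pool; the name is illustrative.
        pool = await client.create_work_pool(
            work_pool=WorkPoolCreate(name="my-process-pool", type="process")
        )
        print(pool.id, pool.name)


if __name__ == "__main__":
    asyncio.run(main())
```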
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_work_pool","title":"delete_work_pool async","text":"

Deletes a work pool.

Parameters:

work_pool_name (str, required): Name of the work pool to delete.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def delete_work_pool(\n    self,\n    work_pool_name: str,\n):\n\"\"\"\n    Deletes a work pool.\n\n    Args:\n        work_pool_name: Name of the work pool to delete.\n    \"\"\"\n    try:\n        await self._client.delete(f\"/work_pools/{work_pool_name}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queues","title":"read_work_queues async","text":"

Retrieves queues for a work pool.

Parameters:

work_pool_name (Optional[str], default None): Name of the work pool for which to get queues.
work_queue_filter (Optional[WorkQueueFilter], default None): Criteria by which to filter queues.
limit (Optional[int], default None): Limit for the queue query.
offset (Optional[int], default None): Offset for the queue query.

Returns:

List[WorkQueue]: List of queues for the specified work pool.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_work_queues(\n    self,\n    work_pool_name: Optional[str] = None,\n    work_queue_filter: Optional[WorkQueueFilter] = None,\n    limit: Optional[int] = None,\n    offset: Optional[int] = None,\n) -> List[WorkQueue]:\n\"\"\"\n    Retrieves queues for a work pool.\n\n    Args:\n        work_pool_name: Name of the work pool for which to get queues.\n        work_queue_filter: Criteria by which to filter queues.\n        limit: Limit for the queue query.\n        offset: Limit for the queue query.\n\n    Returns:\n        List of queues for the specified work pool.\n    \"\"\"\n    json = {\n        \"work_queues\": (\n            work_queue_filter.dict(json_compatible=True, exclude_unset=True)\n            if work_queue_filter\n            else None\n        ),\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n\n    if work_pool_name:\n        try:\n            response = await self._client.post(\n                f\"/work_pools/{work_pool_name}/queues/filter\",\n                json=json,\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n    else:\n        response = await self._client.post(\"/work_queues/filter\", json=json)\n\n    return pydantic.parse_obj_as(List[WorkQueue], response.json())\n
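A brief sketch scoped to a single, illustrative work pool; omitting work_pool_name queries work queues across pools.

```python
import asyncio

from prefect.client.orchestration import get_client


async def main():
    async with get_client() as client:
        # List the queues inside one (illustrative) work pool.
        queues = await client.read_work_queues(work_pool_name="my-process-pool")
        for queue in queues:
            print(queue.name, queue.is_paused)


if __name__ == "__main__":
    asyncio.run(main())
```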
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.get_scheduled_flow_runs_for_work_pool","title":"get_scheduled_flow_runs_for_work_pool async","text":"

Retrieves scheduled flow runs for the provided set of work pool queues.

Parameters:

work_pool_name (str, required): The name of the work pool that the work pool queues are associated with.
work_queue_names (Optional[List[str]], default None): The names of the work pool queues from which to get scheduled flow runs.
scheduled_before (Optional[datetime.datetime], default None): Datetime used to filter returned flow runs. Flow runs scheduled for after the given datetime will not be returned.

Returns:

List[WorkerFlowRunResponse]: A list of worker flow run responses containing information about the retrieved flow runs.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def get_scheduled_flow_runs_for_work_pool(\n    self,\n    work_pool_name: str,\n    work_queue_names: Optional[List[str]] = None,\n    scheduled_before: Optional[datetime.datetime] = None,\n) -> List[WorkerFlowRunResponse]:\n\"\"\"\n    Retrieves scheduled flow runs for the provided set of work pool queues.\n\n    Args:\n        work_pool_name: The name of the work pool that the work pool\n            queues are associated with.\n        work_queue_names: The names of the work pool queues from which\n            to get scheduled flow runs.\n        scheduled_before: Datetime used to filter returned flow runs. Flow runs\n            scheduled for after the given datetime string will not be returned.\n\n    Returns:\n        A list of worker flow run responses containing information about the\n        retrieved flow runs.\n    \"\"\"\n    body: Dict[str, Any] = {}\n    if work_queue_names is not None:\n        body[\"work_queue_names\"] = list(work_queue_names)\n    if scheduled_before:\n        body[\"scheduled_before\"] = str(scheduled_before)\n\n    response = await self._client.post(\n        f\"/work_pools/{work_pool_name}/get_scheduled_flow_runs\",\n        json=body,\n    )\n\n    return pydantic.parse_obj_as(List[WorkerFlowRunResponse], response.json())\n
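A hedged sketch; the pool name is illustrative and scheduled_before is set to "now" so only currently due runs are returned.

```python
import asyncio
import datetime

from prefect.client.orchestration import get_client


async def main():
    async with get_client() as client:
        responses = await client.get_scheduled_flow_runs_for_work_pool(
            work_pool_name="my-process-pool",  # illustrative name
            scheduled_before=datetime.datetime.now(datetime.timezone.utc),
        )
        for response in responses:
            # Each response wraps the flow run a worker would pick up.
            print(response.flow_run.id, response.flow_run.expected_start_time)


if __name__ == "__main__":
    asyncio.run(main())
```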
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_artifact","title":"create_artifact async","text":"

Creates an artifact with the provided configuration.

Parameters:

artifact (ArtifactCreate, required): Desired configuration for the new artifact.

Returns:

Artifact: Information about the newly created artifact.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_artifact(\n    self,\n    artifact: ArtifactCreate,\n) -> Artifact:\n\"\"\"\n    Creates an artifact with the provided configuration.\n\n    Args:\n        artifact: Desired configuration for the new artifact.\n    Returns:\n        Information about the newly created artifact.\n    \"\"\"\n\n    response = await self._client.post(\n        \"/artifacts/\",\n        json=artifact.dict(json_compatible=True, exclude_unset=True),\n    )\n\n    return pydantic.parse_obj_as(Artifact, response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_artifacts","title":"read_artifacts async","text":"

Query the Prefect API for artifacts. Only artifacts matching all criteria will be returned.

Parameters:

artifact_filter (ArtifactFilter, default None): filter criteria for artifacts
flow_run_filter (FlowRunFilter, default None): filter criteria for flow runs
task_run_filter (TaskRunFilter, default None): filter criteria for task runs
sort (ArtifactSort, default None): sort criteria for the artifacts
limit (int, default None): limit for the artifact query
offset (int, default 0): offset for the artifact query

Returns:

List[Artifact]: a list of Artifact model representations of the artifacts

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_artifacts(\n    self,\n    *,\n    artifact_filter: ArtifactFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    sort: ArtifactSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[Artifact]:\n\"\"\"\n    Query the Prefect API for artifacts. Only artifacts matching all criteria will\n    be returned.\n    Args:\n        artifact_filter: filter criteria for artifacts\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        sort: sort criteria for the artifacts\n        limit: limit for the artifact query\n        offset: offset for the artifact query\n    Returns:\n        a list of Artifact model representations of the artifacts\n    \"\"\"\n    body = {\n        \"artifacts\": (\n            artifact_filter.dict(json_compatible=True) if artifact_filter else None\n        ),\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n    response = await self._client.post(\"/artifacts/filter\", json=body)\n    return pydantic.parse_obj_as(List[Artifact], response.json())\n
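A minimal sketch with no filters, just a limit; it assumes a reachable Prefect API with at least a few artifacts recorded.

```python
import asyncio

from prefect.client.orchestration import get_client


async def main():
    async with get_client() as client:
        # All filters are optional; this simply pages through a handful of artifacts.
        artifacts = await client.read_artifacts(limit=5)
        for artifact in artifacts:
            print(artifact.key, artifact.type)


if __name__ == "__main__":
    asyncio.run(main())
```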
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_latest_artifacts","title":"read_latest_artifacts async","text":"

Query the Prefect API for artifacts. Only artifacts matching all criteria will be returned.

Parameters:

artifact_filter (ArtifactCollectionFilter, default None): filter criteria for artifacts
flow_run_filter (FlowRunFilter, default None): filter criteria for flow runs
task_run_filter (TaskRunFilter, default None): filter criteria for task runs
sort (ArtifactCollectionSort, default None): sort criteria for the artifacts
limit (int, default None): limit for the artifact query
offset (int, default 0): offset for the artifact query

Returns:

List[ArtifactCollection]: a list of Artifact model representations of the artifacts

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_latest_artifacts(\n    self,\n    *,\n    artifact_filter: ArtifactCollectionFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    sort: ArtifactCollectionSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[ArtifactCollection]:\n\"\"\"\n    Query the Prefect API for artifacts. Only artifacts matching all criteria will\n    be returned.\n    Args:\n        artifact_filter: filter criteria for artifacts\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        sort: sort criteria for the artifacts\n        limit: limit for the artifact query\n        offset: offset for the artifact query\n    Returns:\n        a list of Artifact model representations of the artifacts\n    \"\"\"\n    body = {\n        \"artifacts\": (\n            artifact_filter.dict(json_compatible=True) if artifact_filter else None\n        ),\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n    response = await self._client.post(\"/artifacts/latest/filter\", json=body)\n    return pydantic.parse_obj_as(List[ArtifactCollection], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_artifact","title":"delete_artifact async","text":"

Deletes an artifact with the provided id.

Parameters:

artifact_id (UUID, required): The id of the artifact to delete.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def delete_artifact(self, artifact_id: UUID) -> None:\n\"\"\"\n    Deletes an artifact with the provided id.\n\n    Args:\n        artifact_id: The id of the artifact to delete.\n    \"\"\"\n    try:\n        await self._client.delete(f\"/artifacts/{artifact_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_variable_by_name","title":"read_variable_by_name async","text":"

Reads a variable by name. Returns None if no variable is found.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_variable_by_name(self, name: str) -> Optional[Variable]:\n\"\"\"Reads a variable by name. Returns None if no variable is found.\"\"\"\n    try:\n        response = await self._client.get(f\"/variables/name/{name}\")\n        return pydantic.parse_obj_as(Variable, response.json())\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            return None\n        else:\n            raise\n
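A short sketch; the variable name is illustrative, and the None return for a missing variable is handled explicitly.

```python
import asyncio

from prefect.client.orchestration import get_client


async def main():
    async with get_client() as client:
        variable = await client.read_variable_by_name("my_config")  # illustrative name
        if variable is None:
            print("Variable not found.")
        else:
            print(variable.name, variable.value)


if __name__ == "__main__":
    asyncio.run(main())
```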
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_variable_by_name","title":"delete_variable_by_name async","text":"

Deletes a variable by name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def delete_variable_by_name(self, name: str):\n\"\"\"Deletes a variable by name.\"\"\"\n    try:\n        await self._client.delete(f\"/variables/name/{name}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_variables","title":"read_variables async","text":"

Reads all variables.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_variables(self, limit: int = None) -> List[Variable]:\n\"\"\"Reads all variables.\"\"\"\n    response = await self._client.post(\"/variables/filter\", json={\"limit\": limit})\n    return pydantic.parse_obj_as(List[Variable], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_worker_metadata","title":"read_worker_metadata async","text":"

Reads worker metadata stored in the Prefect collection registry.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_worker_metadata(self) -> Dict[str, Any]:\n\"\"\"Reads worker metadata stored in Prefect collection registry.\"\"\"\n    response = await self._client.get(\"collections/views/aggregate-worker-metadata\")\n    response.raise_for_status()\n    return response.json()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_automation","title":"create_automation async","text":"

Creates an automation in Prefect Cloud.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_automation(self, automation: Automation) -> UUID:\n\"\"\"Creates an automation in Prefect Cloud.\"\"\"\n    if self.server_type != ServerType.CLOUD:\n        raise RuntimeError(\"Automations are only supported for Prefect Cloud.\")\n\n    response = await self._client.post(\n        \"/automations/\",\n        json=automation.dict(json_compatible=True),\n    )\n\n    return UUID(response.json()[\"id\"])\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.OrionClient","title":"OrionClient","text":"

Bases: PrefectClient

Deprecated. Use PrefectClient instead.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
@deprecated_callable(start_date=\"Feb 2023\", help=\"Use `PrefectClient` instead.\")\nclass OrionClient(PrefectClient):\n\"\"\"\n    Deprecated. Use `PrefectClient` instead.\n    \"\"\"\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.get_client","title":"get_client","text":"

Retrieve an HTTP client for communicating with the Prefect REST API.

The client must be context managed; for example:

async with get_client() as client:\n    await client.hello()\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
def get_client(httpx_settings: Optional[dict] = None) -> \"PrefectClient\":\n\"\"\"\n    Retrieve a HTTP client for communicating with the Prefect REST API.\n\n    The client must be context managed; for example:\n\n    ```python\n    async with get_client() as client:\n        await client.hello()\n    ```\n    \"\"\"\n    ctx = prefect.context.get_settings_context()\n    api = PREFECT_API_URL.value()\n\n    if not api:\n        # create an ephemeral API if none was provided\n        from prefect.server.api.server import create_app\n\n        api = create_app(ctx.settings, ephemeral=True)\n\n    return PrefectClient(\n        api,\n        api_key=PREFECT_API_KEY.value(),\n        httpx_settings=httpx_settings,\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/","title":"prefect.client.schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas","title":"prefect.client.schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/","title":"prefect.client.utilities","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/#prefect.client.utilities","title":"prefect.client.utilities","text":"

Utilities for working with clients.

","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/#prefect.client.utilities.inject_client","title":"inject_client","text":"

Simple helper to provide a context managed client to an asynchronous function.

The decorated function must take a client kwarg; if a client is passed when called, it will be used instead of creating a new one, but it will not be context managed, as it is assumed that the caller is managing the context.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/utilities.py
def inject_client(fn):\n\"\"\"\n    Simple helper to provide a context managed client to a asynchronous function.\n\n    The decorated function _must_ take a `client` kwarg and if a client is passed when\n    called it will be used instead of creating a new one, but it will not be context\n    managed as it is assumed that the caller is managing the context.\n    \"\"\"\n\n    @wraps(fn)\n    async def with_injected_client(*args, **kwargs):\n        from prefect.client.orchestration import get_client\n        from prefect.context import FlowRunContext, TaskRunContext\n\n        flow_run_context = FlowRunContext.get()\n        task_run_context = TaskRunContext.get()\n        client = None\n        client_context = asyncnullcontext()\n\n        if \"client\" in kwargs and kwargs[\"client\"] is not None:\n            # Client provided in kwargs\n            client = kwargs[\"client\"]\n        elif flow_run_context and flow_run_context.client._loop == get_running_loop():\n            # Client available from flow run context\n            client = flow_run_context.client\n        elif task_run_context and task_run_context.client._loop == get_running_loop():\n            # Client available from task run context\n            client = task_run_context.client\n\n        else:\n            # A new client is needed\n            client_context = get_client()\n\n        # Removes existing client to allow it to be set by setdefault below\n        kwargs.pop(\"client\", None)\n\n        async with client_context as new_client:\n            kwargs.setdefault(\"client\", new_client or client)\n            return await fn(*args, **kwargs)\n\n    return with_injected_client\n
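A hedged sketch of decorating a small helper; read_flows is an existing PrefectClient method used here only for illustration.

```python
import asyncio

from prefect.client.utilities import inject_client


@inject_client
async def count_flows(client=None) -> int:
    # `client` is supplied by the decorator when the caller does not pass one.
    flows = await client.read_flows(limit=200)
    return len(flows)


if __name__ == "__main__":
    print(asyncio.run(count_flows()))
```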
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/deployments/base/","title":"prefect.deployments.base","text":"","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base","title":"prefect.deployments.base","text":"

Core primitives for managing Prefect projects. Projects provide a minimally opinionated build system for managing flows and deployments.

To get started, follow along with the deployments tutorial.

","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.configure_project_by_recipe","title":"configure_project_by_recipe","text":"

Given a recipe name, returns a dictionary representing base configuration options.

Parameters:

recipe (str, required): the name of the recipe to use
formatting_kwargs (dict, default {}): additional keyword arguments to format the recipe

Raises:

ValueError: if the provided recipe name does not exist.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/base.py
def configure_project_by_recipe(recipe: str, **formatting_kwargs) -> dict:\n\"\"\"\n    Given a recipe name, returns a dictionary representing base configuration options.\n\n    Args:\n        recipe (str): the name of the recipe to use\n        formatting_kwargs (dict, optional): additional keyword arguments to format the recipe\n\n    Raises:\n        ValueError: if provided recipe name does not exist.\n    \"\"\"\n    # load the recipe\n    recipe_path = Path(__file__).parent / \"recipes\" / recipe / \"prefect.yaml\"\n\n    if not recipe_path.exists():\n        raise ValueError(f\"Unknown recipe {recipe!r} provided.\")\n\n    with recipe_path.open(mode=\"r\") as f:\n        config = yaml.safe_load(f)\n\n    config = apply_values(\n        template=config, values=formatting_kwargs, remove_notset=False\n    )\n\n    return config\n
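A small sketch, assuming the built-in "git" recipe; the repository URL, branch, and name below are illustrative values interpolated into the recipe template.

```python
from prefect.deployments.base import configure_project_by_recipe

# "git" is one of the recipes shipped with Prefect; the kwargs fill in its template.
config = configure_project_by_recipe(
    recipe="git",
    repository="https://github.com/example-org/example-repo",  # illustrative
    branch="main",
    name="example-project",
)
print(config)
```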
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.create_default_prefect_yaml","title":"create_default_prefect_yaml","text":"

Creates default prefect.yaml file in the provided path if one does not already exist; returns boolean specifying whether a file was created.

Parameters:

name (str, default None): the name of the project; if not provided, the current directory name will be used
contents (dict, default None): a dictionary of contents to write to the file; if not provided, defaults will be used

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/base.py
def create_default_prefect_yaml(\n    path: str, name: str = None, contents: dict = None\n) -> bool:\n\"\"\"\n    Creates default `prefect.yaml` file in the provided path if one does not already exist;\n    returns boolean specifying whether a file was created.\n\n    Args:\n        name (str, optional): the name of the project; if not provided, the current directory name\n            will be used\n        contents (dict, optional): a dictionary of contents to write to the file; if not provided,\n            defaults will be used\n    \"\"\"\n    path = Path(path)\n    prefect_file = path / \"prefect.yaml\"\n    if prefect_file.exists():\n        return False\n    default_file = Path(__file__).parent / \"templates\" / \"prefect.yaml\"\n\n    with default_file.open(mode=\"r\") as df:\n        default_contents = yaml.safe_load(df)\n\n    import prefect\n\n    contents[\"prefect-version\"] = prefect.__version__\n    contents[\"name\"] = name\n\n    with prefect_file.open(mode=\"w\") as f:\n        # write header\n        f.write(\n            \"# Welcome to your prefect.yaml file! You can you this file for storing and\"\n            \" managing\\n# configuration for deploying your flows. We recommend\"\n            \" committing this file to source\\n# control along with your flow code.\\n\\n\"\n        )\n\n        f.write(\"# Generic metadata about this project\\n\")\n        yaml.dump({\"name\": contents[\"name\"]}, f, sort_keys=False)\n        yaml.dump({\"prefect-version\": contents[\"prefect-version\"]}, f, sort_keys=False)\n        f.write(\"\\n\")\n\n        # build\n        f.write(\"# build section allows you to manage and build docker images\\n\")\n        yaml.dump(\n            {\"build\": contents.get(\"build\", default_contents.get(\"build\"))},\n            f,\n            sort_keys=False,\n        )\n        f.write(\"\\n\")\n\n        # push\n        f.write(\n            \"# push section allows you to manage if and how this project is uploaded to\"\n            \" remote locations\\n\"\n        )\n        yaml.dump(\n            {\"push\": contents.get(\"push\", default_contents.get(\"push\"))},\n            f,\n            sort_keys=False,\n        )\n        f.write(\"\\n\")\n\n        # pull\n        f.write(\n            \"# pull section allows you to provide instructions for cloning this project\"\n            \" in remote locations\\n\"\n        )\n        yaml.dump(\n            {\"pull\": contents.get(\"pull\", default_contents.get(\"pull\"))},\n            f,\n            sort_keys=False,\n        )\n        f.write(\"\\n\")\n\n        # deployments\n        f.write(\n            \"# the deployments section allows you to provide configuration for\"\n            \" deploying flows\\n\"\n        )\n        yaml.dump(\n            {\n                \"deployments\": contents.get(\n                    \"deployments\", default_contents.get(\"deployments\")\n                )\n            },\n            f,\n            sort_keys=False,\n        )\n    return True\n
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.find_prefect_directory","title":"find_prefect_directory","text":"

Given a path, recurses upward looking for .prefect/ directories.

Once found, returns the absolute path to the .prefect/ directory, which is assumed to reside within the root of the current project.

If one is never found, None is returned.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/base.py
def find_prefect_directory(path: Path = None) -> Optional[Path]:\n\"\"\"\n    Given a path, recurses upward looking for .prefect/ directories.\n\n    Once found, returns absolute path to the ./prefect directory, which is assumed to reside within the\n    root for the current project.\n\n    If one is never found, `None` is returned.\n    \"\"\"\n    path = Path(path or \".\").resolve()\n    parent = path.parent.resolve()\n    while path != parent:\n        prefect_dir = path.joinpath(\".prefect\")\n        if prefect_dir.is_dir():\n            return prefect_dir\n\n        path = parent.resolve()\n        parent = path.parent.resolve()\n
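A brief sketch, run from anywhere inside a project tree; the starting path defaults to the current working directory.

```python
from pathlib import Path

from prefect.deployments.base import find_prefect_directory

# Searches from the given path upward; returns None when no .prefect/ directory is found.
prefect_dir = find_prefect_directory(Path("."))
if prefect_dir is None:
    print("Not inside an initialized Prefect project.")
else:
    print(f"Project metadata lives in {prefect_dir}")
```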
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.initialize_project","title":"initialize_project","text":"

Initializes a basic project structure with base files. If no name is provided, the name of the current directory is used. If no recipe is provided, one is inferred.

Parameters:

name (str, default None): the name of the project; if not provided, the current directory name is used
recipe (str, default None): the name of the recipe to use; if not provided, one is inferred
inputs (dict, default None): a dictionary of inputs to use when formatting the recipe

Returns:

List[str]: a list of files / directories that were created

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/base.py
def initialize_project(\n    name: str = None, recipe: str = None, inputs: dict = None\n) -> List[str]:\n\"\"\"\n    Initializes a basic project structure with base files.  If no name is provided, the name\n    of the current directory is used.  If no recipe is provided, one is inferred.\n\n    Args:\n        name (str, optional): the name of the project; if not provided, the current directory name\n        recipe (str, optional): the name of the recipe to use; if not provided, one is inferred\n        inputs (dict, optional): a dictionary of inputs to use when formatting the recipe\n\n    Returns:\n        List[str]: a list of files / directories that were created\n    \"\"\"\n    # determine if in git repo or use directory name as a default\n    is_git_based = False\n    formatting_kwargs = {\"directory\": str(Path(\".\").absolute().resolve())}\n    dir_name = os.path.basename(os.getcwd())\n\n    remote_url = _get_git_remote_origin_url()\n    if remote_url:\n        formatting_kwargs[\"repository\"] = remote_url\n        is_git_based = True\n        branch = _get_git_branch()\n        formatting_kwargs[\"branch\"] = branch or \"main\"\n\n    formatting_kwargs[\"name\"] = dir_name\n\n    has_dockerfile = Path(\"Dockerfile\").exists()\n\n    if has_dockerfile:\n        formatting_kwargs[\"dockerfile\"] = \"Dockerfile\"\n    elif recipe is not None and \"docker\" in recipe:\n        formatting_kwargs[\"dockerfile\"] = \"auto\"\n\n    # hand craft a pull step\n    if is_git_based and recipe is None:\n        if has_dockerfile:\n            recipe = \"docker-git\"\n        else:\n            recipe = \"git\"\n    elif recipe is None and has_dockerfile:\n        recipe = \"docker\"\n    elif recipe is None:\n        recipe = \"local\"\n\n    formatting_kwargs.update(inputs or {})\n    configuration = configure_project_by_recipe(recipe=recipe, **formatting_kwargs)\n\n    project_name = name or dir_name\n\n    files = []\n    if create_default_ignore_file(\".\"):\n        files.append(\".prefectignore\")\n    if create_default_prefect_yaml(\".\", name=project_name, contents=configuration):\n        files.append(\"prefect.yaml\")\n    if set_prefect_hidden_dir():\n        files.append(\".prefect/\")\n\n    return files\n
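A hedged sketch using the built-in "local" recipe; run it from the directory you want to initialize, and note that both arguments are optional.

```python
from prefect.deployments.base import initialize_project

# The project name and recipe here are illustrative choices.
created = initialize_project(name="example-project", recipe="local")
print("Created:", created)  # e.g. ['.prefectignore', 'prefect.yaml', '.prefect/']
```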
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.register_flow","title":"register_flow async","text":"

Register a flow with this project from an entrypoint.

Parameters:

entrypoint (str, required): the entrypoint to the flow to register
force (bool, default False): whether or not to overwrite an existing flow with the same name

Raises:

ValueError: if force is False and registration would overwrite an existing flow

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/base.py
async def register_flow(entrypoint: str, force: bool = False):\n\"\"\"\n    Register a flow with this project from an entrypoint.\n\n    Args:\n        entrypoint (str): the entrypoint to the flow to register\n        force (bool, optional): whether or not to overwrite an existing flow with the same name\n\n    Raises:\n        ValueError: if `force` is `False` and registration would overwrite an existing flow\n    \"\"\"\n    try:\n        fpath, obj_name = entrypoint.rsplit(\":\", 1)\n    except ValueError as exc:\n        if str(exc) == \"not enough values to unpack (expected 2, got 1)\":\n            missing_flow_name_msg = (\n                \"Your flow entrypoint must include the name of the function that is\"\n                f\" the entrypoint to your flow.\\nTry {entrypoint}:<flow_name> as your\"\n                f\" entrypoint. If you meant to specify '{entrypoint}' as the deployment\"\n                f\" name, try `prefect deploy -n {entrypoint}`.\"\n            )\n            raise ValueError(missing_flow_name_msg)\n        else:\n            raise exc\n\n    flow = await run_sync_in_worker_thread(load_flow_from_entrypoint, entrypoint)\n\n    fpath = Path(fpath).absolute()\n    prefect_dir = find_prefect_directory()\n    if not prefect_dir:\n        raise FileNotFoundError(\n            \"No .prefect directory could be found - run `prefect project\"\n            \" init` to create one.\"\n        )\n\n    entrypoint = f\"{fpath.relative_to(prefect_dir.parent)!s}:{obj_name}\"\n\n    flows_file = prefect_dir / \"flows.json\"\n    if flows_file.exists():\n        with flows_file.open(mode=\"r\") as f:\n            flows = json.load(f)\n    else:\n        flows = {}\n\n    ## quality control\n    if flow.name in flows and flows[flow.name] != entrypoint:\n        if not force:\n            raise ValueError(\n                f\"Conflicting entry found for flow with name {flow.name!r}.\\nExisting\"\n                f\" entrypoint: {flows[flow.name]}\\nAttempted entrypoint:\"\n                f\" {entrypoint}\\n\\nYou can try removing the existing entry for\"\n                f\" {flow.name!r} from your [yellow]~/.prefect/flows.json[/yellow].\"\n            )\n\n    flows[flow.name] = entrypoint\n\n    with flows_file.open(mode=\"w\") as f:\n        json.dump(flows, f, sort_keys=True, indent=2)\n\n    return flow\n
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.set_prefect_hidden_dir","title":"set_prefect_hidden_dir","text":"

Creates default .prefect/ directory if one does not already exist. Returns boolean specifying whether or not a directory was created.

If a path is provided, the directory will be created in that location.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/base.py
def set_prefect_hidden_dir(path: str = None) -> bool:\n\"\"\"\n    Creates default `.prefect/` directory if one does not already exist.\n    Returns boolean specifying whether or not a directory was created.\n\n    If a path is provided, the directory will be created in that location.\n    \"\"\"\n    path = Path(path or \".\") / \".prefect\"\n\n    # use exists so that we dont accidentally overwrite a file\n    if path.exists():\n        return False\n    path.mkdir(mode=0o0700)\n    return True\n
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/deployments/","title":"prefect.deployments.deployments","text":"","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments","title":"prefect.deployments.deployments","text":"

Objects for specifying deployments and utilities for loading flows from deployments.

","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment","title":"Deployment","text":"

Bases: BaseModel

A Prefect Deployment definition, used for specifying and building deployments.

Parameters:

name: A name for the deployment (required).
version: An optional version for the deployment; defaults to the flow's version
description: An optional description of the deployment; defaults to the flow's description
tags: An optional list of tags to associate with this deployment; note that tags are used only for organizational purposes. For delegating work to agents, see work_queue_name.
schedule: A schedule to run this deployment on, once registered
is_schedule_active: Whether or not the schedule is active
work_queue_name: The work queue that will handle this deployment's runs
work_pool_name: The work pool for the deployment
flow_name: The name of the flow this deployment encapsulates
parameters: A dictionary of parameter values to pass to runs created from this deployment
infrastructure: An optional infrastructure block used to configure infrastructure for runs; if not provided, will default to running this deployment in Agent subprocesses
infra_overrides: A dictionary of dot delimited infrastructure overrides that will be applied at runtime; for example env.CONFIG_KEY=config_value or namespace='prefect'
storage: An optional remote storage block used to store and retrieve this workflow; if not provided, will default to referencing this flow by its local path
path: The path to the working directory for the workflow, relative to remote storage or, if stored on a local filesystem, an absolute path
entrypoint: The path to the entrypoint for the workflow, always relative to the path
parameter_openapi_schema: The parameter schema of the flow, including defaults.

Examples:

Create a new deployment using configuration defaults for an imported flow:

>>> from my_project.flows import my_flow\n>>> from prefect.deployments import Deployment\n>>>\n>>> deployment = Deployment.build_from_flow(\n...     flow=my_flow,\n...     name=\"example\",\n...     version=\"1\",\n...     tags=[\"demo\"],\n>>> )\n>>> deployment.apply()\n

Create a new deployment with custom storage and an infrastructure override:

>>> from my_project.flows import my_flow\n>>> from prefect.deployments import Deployment\n>>> from prefect.filesystems import S3\n
>>> storage = S3.load(\"dev-bucket\") # load a pre-defined block\n>>> deployment = Deployment.build_from_flow(\n...     flow=my_flow,\n...     name=\"s3-example\",\n...     version=\"2\",\n...     tags=[\"aws\"],\n...     storage=storage,\n...     infra_overrides=dict(\"env.PREFECT_LOGGING_LEVEL\"=\"DEBUG\"),\n>>> )\n>>> deployment.apply()\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@experimental_field(\n    \"work_pool_name\",\n    group=\"work_pools\",\n    when=lambda x: x is not None and x != DEFAULT_AGENT_WORK_POOL_NAME,\n)\nclass Deployment(BaseModel):\n\"\"\"\n    A Prefect Deployment definition, used for specifying and building deployments.\n\n    Args:\n        name: A name for the deployment (required).\n        version: An optional version for the deployment; defaults to the flow's version\n        description: An optional description of the deployment; defaults to the flow's\n            description\n        tags: An optional list of tags to associate with this deployment; note that tags\n            are used only for organizational purposes. For delegating work to agents,\n            see `work_queue_name`.\n        schedule: A schedule to run this deployment on, once registered\n        is_schedule_active: Whether or not the schedule is active\n        work_queue_name: The work queue that will handle this deployment's runs\n        work_pool_name: The work pool for the deployment\n        flow_name: The name of the flow this deployment encapsulates\n        parameters: A dictionary of parameter values to pass to runs created from this\n            deployment\n        infrastructure: An optional infrastructure block used to configure\n            infrastructure for runs; if not provided, will default to running this\n            deployment in Agent subprocesses\n        infra_overrides: A dictionary of dot delimited infrastructure overrides that\n            will be applied at runtime; for example `env.CONFIG_KEY=config_value` or\n            `namespace='prefect'`\n        storage: An optional remote storage block used to store and retrieve this\n            workflow; if not provided, will default to referencing this flow by its\n            local path\n        path: The path to the working directory for the workflow, relative to remote\n            storage or, if stored on a local filesystem, an absolute path\n        entrypoint: The path to the entrypoint for the workflow, always relative to the\n            `path`\n        parameter_openapi_schema: The parameter schema of the flow, including defaults.\n\n    Examples:\n\n        Create a new deployment using configuration defaults for an imported flow:\n\n        >>> from my_project.flows import my_flow\n        >>> from prefect.deployments import Deployment\n        >>>\n        >>> deployment = Deployment.build_from_flow(\n        ...     flow=my_flow,\n        ...     name=\"example\",\n        ...     version=\"1\",\n        ...     tags=[\"demo\"],\n        >>> )\n        >>> deployment.apply()\n\n        Create a new deployment with custom storage and an infrastructure override:\n\n        >>> from my_project.flows import my_flow\n        >>> from prefect.deployments import Deployment\n        >>> from prefect.filesystems import S3\n\n        >>> storage = S3.load(\"dev-bucket\") # load a pre-defined block\n        >>> deployment = Deployment.build_from_flow(\n        ...     flow=my_flow,\n        ...     name=\"s3-example\",\n        ...     version=\"2\",\n        ...     tags=[\"aws\"],\n        ...     storage=storage,\n        ...     
infra_overrides=dict(\"env.PREFECT_LOGGING_LEVEL\"=\"DEBUG\"),\n        >>> )\n        >>> deployment.apply()\n\n    \"\"\"\n\n    class Config:\n        json_encoders = {SecretDict: lambda v: v.dict()}\n        validate_assignment = True\n        extra = \"forbid\"\n\n    @property\n    def _editable_fields(self) -> List[str]:\n        editable_fields = [\n            \"name\",\n            \"description\",\n            \"version\",\n            \"work_queue_name\",\n            \"work_pool_name\",\n            \"tags\",\n            \"parameters\",\n            \"schedule\",\n            \"is_schedule_active\",\n            \"infra_overrides\",\n        ]\n\n        # if infrastructure is baked as a pre-saved block, then\n        # editing its fields will not update anything\n        if self.infrastructure._block_document_id:\n            return editable_fields\n        else:\n            return editable_fields + [\"infrastructure\"]\n\n    @property\n    def location(self) -> str:\n\"\"\"\n        The 'location' that this deployment points to is given by `path` alone\n        in the case of no remote storage, and otherwise by `storage.basepath / path`.\n\n        The underlying flow entrypoint is interpreted relative to this location.\n        \"\"\"\n        location = \"\"\n        if self.storage:\n            location = (\n                self.storage.basepath + \"/\"\n                if not self.storage.basepath.endswith(\"/\")\n                else \"\"\n            )\n        if self.path:\n            location += self.path\n        return location\n\n    @sync_compatible\n    async def to_yaml(self, path: Path) -> None:\n        yaml_dict = self._yaml_dict()\n        schema = self.schema()\n\n        with open(path, \"w\") as f:\n            # write header\n            f.write(\n                \"###\\n### A complete description of a Prefect Deployment for flow\"\n                f\" {self.flow_name!r}\\n###\\n\"\n            )\n\n            # write editable fields\n            for field in self._editable_fields:\n                # write any comments\n                if schema[\"properties\"][field].get(\"yaml_comment\"):\n                    f.write(f\"# {schema['properties'][field]['yaml_comment']}\\n\")\n                # write the field\n                yaml.dump({field: yaml_dict[field]}, f, sort_keys=False)\n\n            # write non-editable fields\n            f.write(\"\\n###\\n### DO NOT EDIT BELOW THIS LINE\\n###\\n\")\n            yaml.dump(\n                {k: v for k, v in yaml_dict.items() if k not in self._editable_fields},\n                f,\n                sort_keys=False,\n            )\n\n    def _yaml_dict(self) -> dict:\n\"\"\"\n        Returns a YAML-compatible representation of this deployment as a dictionary.\n        \"\"\"\n        # avoids issues with UUIDs showing up in YAML\n        all_fields = json.loads(\n            self.json(\n                exclude={\n                    \"storage\": {\"_filesystem\", \"filesystem\", \"_remote_file_system\"}\n                }\n            )\n        )\n        if all_fields[\"storage\"]:\n            all_fields[\"storage\"][\n                \"_block_type_slug\"\n            ] = self.storage.get_block_type_slug()\n        if all_fields[\"infrastructure\"]:\n            all_fields[\"infrastructure\"][\n                \"_block_type_slug\"\n            ] = self.infrastructure.get_block_type_slug()\n        return all_fields\n\n    # top level metadata\n    name: str = Field(..., description=\"The name of 
the deployment.\")\n    description: Optional[str] = Field(\n        default=None, description=\"An optional description of the deployment.\"\n    )\n    version: Optional[str] = Field(\n        default=None, description=\"An optional version for the deployment.\"\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"One of more tags to apply to this deployment.\",\n    )\n    schedule: SCHEDULE_TYPES = None\n    is_schedule_active: Optional[bool] = Field(\n        default=None, description=\"Whether or not the schedule is active.\"\n    )\n    flow_name: Optional[str] = Field(default=None, description=\"The name of the flow.\")\n    work_queue_name: Optional[str] = Field(\n        \"default\",\n        description=\"The work queue for the deployment.\",\n        yaml_comment=\"The work queue that will handle this deployment's runs\",\n    )\n    work_pool_name: Optional[str] = Field(\n        default=None, description=\"The work pool for the deployment\"\n    )\n    # flow data\n    parameters: Dict[str, Any] = Field(default_factory=dict)\n    manifest_path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the flow's manifest file, relative to the chosen storage.\"\n        ),\n    )\n    infrastructure: Infrastructure = Field(default_factory=Process)\n    infra_overrides: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Overrides to apply to the base infrastructure block at runtime.\",\n    )\n    storage: Optional[Block] = Field(\n        None,\n        help=\"The remote storage to use for this workflow.\",\n    )\n    path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the working directory for the workflow, relative to remote\"\n            \" storage or an absolute path.\"\n        ),\n    )\n    entrypoint: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the entrypoint for the workflow, relative to the `path`.\"\n        ),\n    )\n    parameter_openapi_schema: ParameterSchema = Field(\n        default_factory=ParameterSchema,\n        description=\"The parameter schema of the flow, including defaults.\",\n    )\n    timestamp: datetime = Field(default_factory=partial(pendulum.now, \"UTC\"))\n    triggers: List[DeploymentTrigger] = Field(\n        default_factory=list,\n        description=\"The triggers that should cause this deployment to run.\",\n    )\n\n    @validator(\"infrastructure\", pre=True)\n    def infrastructure_must_have_capabilities(cls, value):\n        if isinstance(value, dict):\n            if \"_block_type_slug\" in value:\n                # Replace private attribute with public for dispatch\n                value[\"block_type_slug\"] = value.pop(\"_block_type_slug\")\n            block = Block(**value)\n        elif value is None:\n            return value\n        else:\n            block = value\n\n        if \"run-infrastructure\" not in block.get_block_capabilities():\n            raise ValueError(\n                \"Infrastructure block must have 'run-infrastructure' capabilities.\"\n            )\n        return block\n\n    @validator(\"storage\", pre=True)\n    def storage_must_have_capabilities(cls, value):\n        if isinstance(value, dict):\n            block_type = Block.get_block_class_from_key(value.pop(\"_block_type_slug\"))\n            block = block_type(**value)\n        elif value is None:\n            return value\n        else:\n    
        block = value\n\n        capabilities = block.get_block_capabilities()\n        if \"get-directory\" not in capabilities:\n            raise ValueError(\n                \"Remote Storage block must have 'get-directory' capabilities.\"\n            )\n        return block\n\n    @validator(\"parameter_openapi_schema\", pre=True)\n    def handle_openapi_schema(cls, value):\n\"\"\"\n        This method ensures setting a value of `None` is handled gracefully.\n        \"\"\"\n        if value is None:\n            return ParameterSchema()\n        return value\n\n    @validator(\"triggers\")\n    def validate_automation_names(cls, field_value, values, field, config):\n\"\"\"Ensure that each trigger has a name for its automation if none is provided.\"\"\"\n        for i, trigger in enumerate(field_value, start=1):\n            if trigger.name is None:\n                trigger.name = f\"{values['name']}__automation_{i}\"\n\n        return field_value\n\n    @classmethod\n    @sync_compatible\n    async def load_from_yaml(cls, path: str):\n        data = yaml.safe_load(await anyio.Path(path).read_bytes())\n        # load blocks from server to ensure secret values are properly hydrated\n        if data.get(\"storage\"):\n            block_doc_name = data[\"storage\"].get(\"_block_document_name\")\n            # if no doc name, this block is not stored on the server\n            if block_doc_name:\n                block_slug = data[\"storage\"][\"_block_type_slug\"]\n                block = await Block.load(f\"{block_slug}/{block_doc_name}\")\n                data[\"storage\"] = block\n\n        if data.get(\"infrastructure\"):\n            block_doc_name = data[\"infrastructure\"].get(\"_block_document_name\")\n            # if no doc name, this block is not stored on the server\n            if block_doc_name:\n                block_slug = data[\"infrastructure\"][\"_block_type_slug\"]\n                block = await Block.load(f\"{block_slug}/{block_doc_name}\")\n                data[\"infrastructure\"] = block\n\n            return cls(**data)\n\n    @sync_compatible\n    async def load(self) -> bool:\n\"\"\"\n        Queries the API for a deployment with this name for this flow, and if found,\n        prepopulates any settings that were not set at initialization.\n\n        Returns a boolean specifying whether a load was successful or not.\n\n        Raises:\n            - ValueError: if both name and flow name are not set\n        \"\"\"\n        if not self.name or not self.flow_name:\n            raise ValueError(\"Both a deployment name and flow name must be provided.\")\n        async with get_client() as client:\n            try:\n                deployment = await client.read_deployment_by_name(\n                    f\"{self.flow_name}/{self.name}\"\n                )\n                if deployment.storage_document_id:\n                    Block._from_block_document(\n                        await client.read_block_document(deployment.storage_document_id)\n                    )\n\n                excluded_fields = self.__fields_set__.union(\n                    {\"infrastructure\", \"storage\", \"timestamp\", \"triggers\"}\n                )\n                for field in set(self.__fields__.keys()) - excluded_fields:\n                    new_value = getattr(deployment, field)\n                    setattr(self, field, new_value)\n\n                if \"infrastructure\" not in self.__fields_set__:\n                    if deployment.infrastructure_document_id:\n                       
 self.infrastructure = Block._from_block_document(\n                            await client.read_block_document(\n                                deployment.infrastructure_document_id\n                            )\n                        )\n                if \"storage\" not in self.__fields_set__:\n                    if deployment.storage_document_id:\n                        self.storage = Block._from_block_document(\n                            await client.read_block_document(\n                                deployment.storage_document_id\n                            )\n                        )\n            except ObjectNotFound:\n                return False\n        return True\n\n    @sync_compatible\n    async def update(self, ignore_none: bool = False, **kwargs):\n\"\"\"\n        Performs an in-place update with the provided settings.\n\n        Args:\n            ignore_none: if True, all `None` values are ignored when performing the\n                update\n        \"\"\"\n        unknown_keys = set(kwargs.keys()) - set(self.dict().keys())\n        if unknown_keys:\n            raise ValueError(\n                f\"Received unexpected attributes: {', '.join(unknown_keys)}\"\n            )\n        for key, value in kwargs.items():\n            if ignore_none and value is None:\n                continue\n            setattr(self, key, value)\n\n    @sync_compatible\n    async def upload_to_storage(\n        self, storage_block: str = None, ignore_file: str = \".prefectignore\"\n    ) -> Optional[int]:\n\"\"\"\n        Uploads the workflow this deployment represents using a provided storage block;\n        if no block is provided, defaults to configuring self for local storage.\n\n        Args:\n            storage_block: a string reference a remote storage block slug `$type/$name`;\n                if provided, used to upload the workflow's project\n            ignore_file: an optional path to a `.prefectignore` file that specifies\n                filename patterns to ignore when uploading to remote storage; if not\n                provided, looks for `.prefectignore` in the current working directory\n        \"\"\"\n        file_count = None\n        if storage_block:\n            storage = await Block.load(storage_block)\n\n            if \"put-directory\" not in storage.get_block_capabilities():\n                raise BlockMissingCapabilities(\n                    f\"Storage block {storage!r} missing 'put-directory' capability.\"\n                )\n\n            self.storage = storage\n\n            # upload current directory to storage location\n            file_count = await self.storage.put_directory(\n                ignore_file=ignore_file, to_path=self.path\n            )\n        elif self.storage:\n            if \"put-directory\" not in self.storage.get_block_capabilities():\n                raise BlockMissingCapabilities(\n                    f\"Storage block {self.storage!r} missing 'put-directory'\"\n                    \" capability.\"\n                )\n\n            file_count = await self.storage.put_directory(\n                ignore_file=ignore_file, to_path=self.path\n            )\n\n        # persists storage now in case it contains secret values\n        if self.storage and not self.storage._block_document_id:\n            await self.storage._save(is_anonymous=True)\n\n        return file_count\n\n    @sync_compatible\n    async def apply(\n        self, upload: bool = False, work_queue_concurrency: int = None\n    ) -> UUID:\n\"\"\"\n        
Registers this deployment with the API and returns the deployment's ID.\n\n        Args:\n            upload: if True, deployment files are automatically uploaded to remote\n                storage\n            work_queue_concurrency: If provided, sets the concurrency limit on the\n                deployment's work queue\n        \"\"\"\n        if not self.name or not self.flow_name:\n            raise ValueError(\"Both a deployment name and flow name must be set.\")\n        async with get_client() as client:\n            # prep IDs\n            flow_id = await client.create_flow_from_name(self.flow_name)\n\n            infrastructure_document_id = self.infrastructure._block_document_id\n            if not infrastructure_document_id:\n                # if not building off a block, will create an anonymous block\n                self.infrastructure = self.infrastructure.copy()\n                infrastructure_document_id = await self.infrastructure._save(\n                    is_anonymous=True,\n                )\n\n            if upload:\n                await self.upload_to_storage()\n\n            if self.work_queue_name and work_queue_concurrency is not None:\n                try:\n                    res = await client.create_work_queue(\n                        name=self.work_queue_name, work_pool_name=self.work_pool_name\n                    )\n                except ObjectAlreadyExists:\n                    res = await client.read_work_queue_by_name(\n                        name=self.work_queue_name, work_pool_name=self.work_pool_name\n                    )\n                await client.update_work_queue(\n                    res.id, concurrency_limit=work_queue_concurrency\n                )\n\n            # we assume storage was already saved\n            storage_document_id = getattr(self.storage, \"_block_document_id\", None)\n            deployment_id = await client.create_deployment(\n                flow_id=flow_id,\n                name=self.name,\n                work_queue_name=self.work_queue_name,\n                work_pool_name=self.work_pool_name,\n                version=self.version,\n                schedule=self.schedule,\n                is_schedule_active=self.is_schedule_active,\n                parameters=self.parameters,\n                description=self.description,\n                tags=self.tags,\n                manifest_path=self.manifest_path,  # allows for backwards YAML compat\n                path=self.path,\n                entrypoint=self.entrypoint,\n                infra_overrides=self.infra_overrides,\n                storage_document_id=storage_document_id,\n                infrastructure_document_id=infrastructure_document_id,\n                parameter_openapi_schema=self.parameter_openapi_schema.dict(),\n            )\n\n            if client.server_type == ServerType.CLOUD:\n                # The triggers defined in the deployment spec are, essentially,\n                # anonymous and attempting truly sync them with cloud is not\n                # feasible. 
Instead, we remove all automations that are owned\n                # by the deployment, meaning that they were created via this\n                # mechanism below, and then recreate them.\n                await client.delete_resource_owned_automations(\n                    f\"prefect.deployment.{deployment_id}\"\n                )\n                for trigger in self.triggers:\n                    trigger.set_deployment_id(deployment_id)\n                    await client.create_automation(trigger.as_automation())\n\n            return deployment_id\n\n    @classmethod\n    @sync_compatible\n    async def build_from_flow(\n        cls,\n        flow: Flow,\n        name: str,\n        output: str = None,\n        skip_upload: bool = False,\n        ignore_file: str = \".prefectignore\",\n        apply: bool = False,\n        load_existing: bool = True,\n        **kwargs,\n    ) -> \"Deployment\":\n\"\"\"\n        Configure a deployment for a given flow.\n\n        Args:\n            flow: A flow function to deploy\n            name: A name for the deployment\n            output (optional): if provided, the full deployment specification will be\n                written as a YAML file in the location specified by `output`\n            skip_upload: if True, deployment files are not automatically uploaded to\n                remote storage\n            ignore_file: an optional path to a `.prefectignore` file that specifies\n                filename patterns to ignore when uploading to remote storage; if not\n                provided, looks for `.prefectignore` in the current working directory\n            apply: if True, the deployment is automatically registered with the API\n            load_existing: if True, load any settings that may already be configured for\n                the named deployment server-side (e.g., schedules, default parameter\n                values, etc.)\n            **kwargs: other keyword arguments to pass to the constructor for the\n                `Deployment` class\n        \"\"\"\n        if not name:\n            raise ValueError(\"A deployment name must be provided.\")\n\n        # note that `deployment.load` only updates settings that were *not*\n        # provided at initialization\n        deployment = cls(name=name, **kwargs)\n        deployment.flow_name = flow.name\n        if not deployment.entrypoint:\n            ## first see if an entrypoint can be determined\n            flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n            mod_name = getattr(flow, \"__module__\", None)\n            if not flow_file:\n                if not mod_name:\n                    # todo, check if the file location was manually set already\n                    raise ValueError(\"Could not determine flow's file location.\")\n                module = importlib.import_module(mod_name)\n                flow_file = getattr(module, \"__file__\", None)\n                if not flow_file:\n                    raise ValueError(\"Could not determine flow's file location.\")\n\n            # set entrypoint\n            entry_path = Path(flow_file).absolute().relative_to(Path(\".\").absolute())\n            deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n        if load_existing:\n            await deployment.load()\n\n        # set a few attributes for this flow object\n        deployment.parameter_openapi_schema = parameter_schema(flow)\n\n        # ensure the ignore file exists\n        if not Path(ignore_file).exists():\n            
Path(ignore_file).touch()\n\n        if not deployment.version:\n            deployment.version = flow.version\n        if not deployment.description:\n            deployment.description = flow.description\n\n        # proxy for whether infra is docker-based\n        is_docker_based = hasattr(deployment.infrastructure, \"image\")\n\n        if not deployment.storage and not is_docker_based and not deployment.path:\n            deployment.path = str(Path(\".\").absolute())\n        elif not deployment.storage and is_docker_based:\n            # only update if a path is not already set\n            if not deployment.path:\n                deployment.path = \"/opt/prefect/flows\"\n\n        if not skip_upload:\n            if (\n                deployment.storage\n                and \"put-directory\" in deployment.storage.get_block_capabilities()\n            ):\n                await deployment.upload_to_storage(ignore_file=ignore_file)\n\n        if output:\n            await deployment.to_yaml(output)\n\n        if apply:\n            await deployment.apply()\n\n        return deployment\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.location","title":"location: str property","text":"

The 'location' that this deployment points to is given by path alone in the case of no remote storage, and otherwise by storage.basepath / path.

The underlying flow entrypoint is interpreted relative to this location.

","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.apply","title":"apply async","text":"

Registers this deployment with the API and returns the deployment's ID.

Parameters:

upload (bool, default False): if True, deployment files are automatically uploaded to remote storage

work_queue_concurrency (int, default None): If provided, sets the concurrency limit on the deployment's work queue

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@sync_compatible\nasync def apply(\n    self, upload: bool = False, work_queue_concurrency: int = None\n) -> UUID:\n\"\"\"\n    Registers this deployment with the API and returns the deployment's ID.\n\n    Args:\n        upload: if True, deployment files are automatically uploaded to remote\n            storage\n        work_queue_concurrency: If provided, sets the concurrency limit on the\n            deployment's work queue\n    \"\"\"\n    if not self.name or not self.flow_name:\n        raise ValueError(\"Both a deployment name and flow name must be set.\")\n    async with get_client() as client:\n        # prep IDs\n        flow_id = await client.create_flow_from_name(self.flow_name)\n\n        infrastructure_document_id = self.infrastructure._block_document_id\n        if not infrastructure_document_id:\n            # if not building off a block, will create an anonymous block\n            self.infrastructure = self.infrastructure.copy()\n            infrastructure_document_id = await self.infrastructure._save(\n                is_anonymous=True,\n            )\n\n        if upload:\n            await self.upload_to_storage()\n\n        if self.work_queue_name and work_queue_concurrency is not None:\n            try:\n                res = await client.create_work_queue(\n                    name=self.work_queue_name, work_pool_name=self.work_pool_name\n                )\n            except ObjectAlreadyExists:\n                res = await client.read_work_queue_by_name(\n                    name=self.work_queue_name, work_pool_name=self.work_pool_name\n                )\n            await client.update_work_queue(\n                res.id, concurrency_limit=work_queue_concurrency\n            )\n\n        # we assume storage was already saved\n        storage_document_id = getattr(self.storage, \"_block_document_id\", None)\n        deployment_id = await client.create_deployment(\n            flow_id=flow_id,\n            name=self.name,\n            work_queue_name=self.work_queue_name,\n            work_pool_name=self.work_pool_name,\n            version=self.version,\n            schedule=self.schedule,\n            is_schedule_active=self.is_schedule_active,\n            parameters=self.parameters,\n            description=self.description,\n            tags=self.tags,\n            manifest_path=self.manifest_path,  # allows for backwards YAML compat\n            path=self.path,\n            entrypoint=self.entrypoint,\n            infra_overrides=self.infra_overrides,\n            storage_document_id=storage_document_id,\n            infrastructure_document_id=infrastructure_document_id,\n            parameter_openapi_schema=self.parameter_openapi_schema.dict(),\n        )\n\n        if client.server_type == ServerType.CLOUD:\n            # The triggers defined in the deployment spec are, essentially,\n            # anonymous and attempting truly sync them with cloud is not\n            # feasible. Instead, we remove all automations that are owned\n            # by the deployment, meaning that they were created via this\n            # mechanism below, and then recreate them.\n            await client.delete_resource_owned_automations(\n                f\"prefect.deployment.{deployment_id}\"\n            )\n            for trigger in self.triggers:\n                trigger.set_deployment_id(deployment_id)\n                await client.create_automation(trigger.as_automation())\n\n        return deployment_id\n
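A minimal sketch of the typical call pattern (the flow import path and names here are illustrative assumptions, not part of the source):

from my_project.flows import my_flow\nfrom prefect.deployments import Deployment\n\n# build a deployment from an importable flow, then register it with the API\ndeployment = Deployment.build_from_flow(flow=my_flow, name=\"apply-example\")\ndeployment_id = deployment.apply(work_queue_concurrency=2)\nprint(deployment_id)\n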
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.build_from_flow","title":"build_from_flow async classmethod","text":"

Configure a deployment for a given flow.

Parameters:

flow (Flow, required): A flow function to deploy

name (str, required): A name for the deployment

output (optional, default None): if provided, the full deployment specification will be written as a YAML file in the location specified by output

skip_upload (bool, default False): if True, deployment files are not automatically uploaded to remote storage

ignore_file (str, default '.prefectignore'): an optional path to a .prefectignore file that specifies filename patterns to ignore when uploading to remote storage; if not provided, looks for .prefectignore in the current working directory

apply (bool, default False): if True, the deployment is automatically registered with the API

load_existing (bool, default True): if True, load any settings that may already be configured for the named deployment server-side (e.g., schedules, default parameter values, etc.)

**kwargs (default {}): other keyword arguments to pass to the constructor for the Deployment class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@classmethod\n@sync_compatible\nasync def build_from_flow(\n    cls,\n    flow: Flow,\n    name: str,\n    output: str = None,\n    skip_upload: bool = False,\n    ignore_file: str = \".prefectignore\",\n    apply: bool = False,\n    load_existing: bool = True,\n    **kwargs,\n) -> \"Deployment\":\n\"\"\"\n    Configure a deployment for a given flow.\n\n    Args:\n        flow: A flow function to deploy\n        name: A name for the deployment\n        output (optional): if provided, the full deployment specification will be\n            written as a YAML file in the location specified by `output`\n        skip_upload: if True, deployment files are not automatically uploaded to\n            remote storage\n        ignore_file: an optional path to a `.prefectignore` file that specifies\n            filename patterns to ignore when uploading to remote storage; if not\n            provided, looks for `.prefectignore` in the current working directory\n        apply: if True, the deployment is automatically registered with the API\n        load_existing: if True, load any settings that may already be configured for\n            the named deployment server-side (e.g., schedules, default parameter\n            values, etc.)\n        **kwargs: other keyword arguments to pass to the constructor for the\n            `Deployment` class\n    \"\"\"\n    if not name:\n        raise ValueError(\"A deployment name must be provided.\")\n\n    # note that `deployment.load` only updates settings that were *not*\n    # provided at initialization\n    deployment = cls(name=name, **kwargs)\n    deployment.flow_name = flow.name\n    if not deployment.entrypoint:\n        ## first see if an entrypoint can be determined\n        flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n        mod_name = getattr(flow, \"__module__\", None)\n        if not flow_file:\n            if not mod_name:\n                # todo, check if the file location was manually set already\n                raise ValueError(\"Could not determine flow's file location.\")\n            module = importlib.import_module(mod_name)\n            flow_file = getattr(module, \"__file__\", None)\n            if not flow_file:\n                raise ValueError(\"Could not determine flow's file location.\")\n\n        # set entrypoint\n        entry_path = Path(flow_file).absolute().relative_to(Path(\".\").absolute())\n        deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n    if load_existing:\n        await deployment.load()\n\n    # set a few attributes for this flow object\n    deployment.parameter_openapi_schema = parameter_schema(flow)\n\n    # ensure the ignore file exists\n    if not Path(ignore_file).exists():\n        Path(ignore_file).touch()\n\n    if not deployment.version:\n        deployment.version = flow.version\n    if not deployment.description:\n        deployment.description = flow.description\n\n    # proxy for whether infra is docker-based\n    is_docker_based = hasattr(deployment.infrastructure, \"image\")\n\n    if not deployment.storage and not is_docker_based and not deployment.path:\n        deployment.path = str(Path(\".\").absolute())\n    elif not deployment.storage and is_docker_based:\n        # only update if a path is not already set\n        if not deployment.path:\n            deployment.path = \"/opt/prefect/flows\"\n\n    if not skip_upload:\n        if (\n            deployment.storage\n            and \"put-directory\" in deployment.storage.get_block_capabilities()\n        ):\n   
         await deployment.upload_to_storage(ignore_file=ignore_file)\n\n    if output:\n        await deployment.to_yaml(output)\n\n    if apply:\n        await deployment.apply()\n\n    return deployment\n
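A hedged sketch of producing a YAML specification without uploading or registering anything (the module and file names are illustrative):

from my_project.flows import my_flow\nfrom prefect.deployments import Deployment\n\n# write deployment.yaml but skip upload and registration\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"yaml-example\",\n    output=\"deployment.yaml\",\n    skip_upload=True,\n    apply=False,\n)\n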
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.handle_openapi_schema","title":"handle_openapi_schema","text":"

This method ensures setting a value of None is handled gracefully.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@validator(\"parameter_openapi_schema\", pre=True)\ndef handle_openapi_schema(cls, value):\n\"\"\"\n    This method ensures setting a value of `None` is handled gracefully.\n    \"\"\"\n    if value is None:\n        return ParameterSchema()\n    return value\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.load","title":"load async","text":"

Queries the API for a deployment with this name for this flow, and if found, prepopulates any settings that were not set at initialization.

Returns a boolean specifying whether a load was successful or not.

Raises:

ValueError: if the deployment name or flow name is not set

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@sync_compatible\nasync def load(self) -> bool:\n\"\"\"\n    Queries the API for a deployment with this name for this flow, and if found,\n    prepopulates any settings that were not set at initialization.\n\n    Returns a boolean specifying whether a load was successful or not.\n\n    Raises:\n        - ValueError: if both name and flow name are not set\n    \"\"\"\n    if not self.name or not self.flow_name:\n        raise ValueError(\"Both a deployment name and flow name must be provided.\")\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(\n                f\"{self.flow_name}/{self.name}\"\n            )\n            if deployment.storage_document_id:\n                Block._from_block_document(\n                    await client.read_block_document(deployment.storage_document_id)\n                )\n\n            excluded_fields = self.__fields_set__.union(\n                {\"infrastructure\", \"storage\", \"timestamp\", \"triggers\"}\n            )\n            for field in set(self.__fields__.keys()) - excluded_fields:\n                new_value = getattr(deployment, field)\n                setattr(self, field, new_value)\n\n            if \"infrastructure\" not in self.__fields_set__:\n                if deployment.infrastructure_document_id:\n                    self.infrastructure = Block._from_block_document(\n                        await client.read_block_document(\n                            deployment.infrastructure_document_id\n                        )\n                    )\n            if \"storage\" not in self.__fields_set__:\n                if deployment.storage_document_id:\n                    self.storage = Block._from_block_document(\n                        await client.read_block_document(\n                            deployment.storage_document_id\n                        )\n                    )\n        except ObjectNotFound:\n            return False\n    return True\n
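A short sketch, assuming a deployment named example already exists for a flow named my-flow (both names are illustrative):

from prefect.deployments import Deployment\n\ndeployment = Deployment(name=\"example\", flow_name=\"my-flow\")\n# returns True only if the deployment was found on the server\nfound = deployment.load()\nprint(found, deployment.parameters)\n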
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.update","title":"update async","text":"

Performs an in-place update with the provided settings.

Parameters:

ignore_none (bool, default False): if True, all None values are ignored when performing the update

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@sync_compatible\nasync def update(self, ignore_none: bool = False, **kwargs):\n\"\"\"\n    Performs an in-place update with the provided settings.\n\n    Args:\n        ignore_none: if True, all `None` values are ignored when performing the\n            update\n    \"\"\"\n    unknown_keys = set(kwargs.keys()) - set(self.dict().keys())\n    if unknown_keys:\n        raise ValueError(\n            f\"Received unexpected attributes: {', '.join(unknown_keys)}\"\n        )\n    for key, value in kwargs.items():\n        if ignore_none and value is None:\n            continue\n        setattr(self, key, value)\n
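A small sketch of an in-place update; the field values are illustrative:

from prefect.deployments import Deployment\n\ndeployment = Deployment(name=\"example\", flow_name=\"my-flow\")\n# None values are skipped because ignore_none=True\ndeployment.update(version=\"2\", description=None, ignore_none=True)\n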
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.upload_to_storage","title":"upload_to_storage async","text":"

Uploads the workflow this deployment represents using a provided storage block; if no block is provided, defaults to configuring self for local storage.

Parameters:

storage_block (str, default None): a string reference to a remote storage block slug $type/$name; if provided, used to upload the workflow's project

ignore_file (str, default '.prefectignore'): an optional path to a .prefectignore file that specifies filename patterns to ignore when uploading to remote storage; if not provided, looks for .prefectignore in the current working directory

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@sync_compatible\nasync def upload_to_storage(\n    self, storage_block: str = None, ignore_file: str = \".prefectignore\"\n) -> Optional[int]:\n\"\"\"\n    Uploads the workflow this deployment represents using a provided storage block;\n    if no block is provided, defaults to configuring self for local storage.\n\n    Args:\n        storage_block: a string reference a remote storage block slug `$type/$name`;\n            if provided, used to upload the workflow's project\n        ignore_file: an optional path to a `.prefectignore` file that specifies\n            filename patterns to ignore when uploading to remote storage; if not\n            provided, looks for `.prefectignore` in the current working directory\n    \"\"\"\n    file_count = None\n    if storage_block:\n        storage = await Block.load(storage_block)\n\n        if \"put-directory\" not in storage.get_block_capabilities():\n            raise BlockMissingCapabilities(\n                f\"Storage block {storage!r} missing 'put-directory' capability.\"\n            )\n\n        self.storage = storage\n\n        # upload current directory to storage location\n        file_count = await self.storage.put_directory(\n            ignore_file=ignore_file, to_path=self.path\n        )\n    elif self.storage:\n        if \"put-directory\" not in self.storage.get_block_capabilities():\n            raise BlockMissingCapabilities(\n                f\"Storage block {self.storage!r} missing 'put-directory'\"\n                \" capability.\"\n            )\n\n        file_count = await self.storage.put_directory(\n            ignore_file=ignore_file, to_path=self.path\n        )\n\n    # persists storage now in case it contains secret values\n    if self.storage and not self.storage._block_document_id:\n        await self.storage._save(is_anonymous=True)\n\n    return file_count\n
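A sketch of uploading the current directory to a pre-saved remote storage block; the block slug s3/dev-bucket is an assumption for illustration:

from prefect.deployments import Deployment\n\ndeployment = Deployment(name=\"example\", flow_name=\"my-flow\", path=\"flows\")\n# uploads the current working directory to the referenced storage block\nfile_count = deployment.upload_to_storage(storage_block=\"s3/dev-bucket\")\nprint(f\"Uploaded {file_count} files\")\n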
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.validate_automation_names","title":"validate_automation_names","text":"

Ensure that each trigger has a name for its automation if none is provided.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@validator(\"triggers\")\ndef validate_automation_names(cls, field_value, values, field, config):\n\"\"\"Ensure that each trigger has a name for its automation if none is provided.\"\"\"\n    for i, trigger in enumerate(field_value, start=1):\n        if trigger.name is None:\n            trigger.name = f\"{values['name']}__automation_{i}\"\n\n    return field_value\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.load_deployments_from_yaml","title":"load_deployments_from_yaml","text":"

Load deployments from a yaml file.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
def load_deployments_from_yaml(\n    path: str,\n) -> PrefectObjectRegistry:\n\"\"\"\n    Load deployments from a yaml file.\n    \"\"\"\n    with open(path, \"r\") as f:\n        contents = f.read()\n\n    # Parse into a yaml tree to retrieve separate documents\n    nodes = yaml.compose_all(contents)\n\n    with PrefectObjectRegistry(capture_failures=True) as registry:\n        for node in nodes:\n            with tmpchdir(path):\n                deployment_dict = yaml.safe_load(yaml.serialize(node))\n                # The return value is not necessary, just instantiating the Deployment\n                # is enough to get it recorded on the registry\n                parse_obj_as(Deployment, deployment_dict)\n\n    return registry\n
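A minimal sketch, assuming a deployment.yaml previously produced by Deployment.to_yaml exists in the current directory:

from prefect.deployments.deployments import load_deployments_from_yaml\n\n# instantiating each Deployment is enough to record it on the returned registry\nregistry = load_deployments_from_yaml(\"deployment.yaml\")\n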
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.load_flow_from_flow_run","title":"load_flow_from_flow_run async","text":"

Load a flow from the location/script provided in a deployment's storage document.

If ignore_storage=True is provided, no pull from remote storage occurs. This flag is largely for testing, and assumes the flow is already available locally.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@inject_client\nasync def load_flow_from_flow_run(\n    flow_run: FlowRun, client: PrefectClient, ignore_storage: bool = False\n) -> Flow:\n\"\"\"\n    Load a flow from the location/script provided in a deployment's storage document.\n\n    If `ignore_storage=True` is provided, no pull from remote storage occurs.  This flag\n    is largely for testing, and assumes the flow is already available locally.\n    \"\"\"\n    deployment = await client.read_deployment(flow_run.deployment_id)\n    logger = flow_run_logger(flow_run)\n\n    if not ignore_storage and not deployment.pull_steps:\n        sys.path.insert(0, \".\")\n        if deployment.storage_document_id:\n            storage_document = await client.read_block_document(\n                deployment.storage_document_id\n            )\n            storage_block = Block._from_block_document(storage_document)\n        else:\n            basepath = deployment.path or Path(deployment.manifest_path).parent\n            storage_block = LocalFileSystem(basepath=basepath)\n\n        logger.info(f\"Downloading flow code from storage at {deployment.path!r}\")\n        await storage_block.get_directory(from_path=deployment.path, local_path=\".\")\n\n    if deployment.pull_steps:\n        logger.debug(f\"Running {len(deployment.pull_steps)} deployment pull steps\")\n        output = await run_steps(deployment.pull_steps)\n        if output.get(\"directory\"):\n            logger.debug(f\"Changing working directory to {output['directory']!r}\")\n            os.chdir(output[\"directory\"])\n\n    import_path = relative_path_to_current_platform(deployment.entrypoint)\n    logger.debug(f\"Importing flow code from '{import_path}'\")\n\n    # for backwards compat\n    if deployment.manifest_path:\n        with open(deployment.manifest_path, \"r\") as f:\n            import_path = json.load(f)[\"import_path\"]\n            import_path = (\n                Path(deployment.manifest_path).parent / import_path\n            ).absolute()\n    flow = await run_sync_in_worker_thread(load_flow_from_entrypoint, str(import_path))\n    return flow\n
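A hedged sketch of calling this from your own async code; passing the client explicitly works because the function is client-injected (the flow run id here is an illustrative placeholder):

from prefect import get_client\nfrom prefect.deployments.deployments import load_flow_from_flow_run\n\nasync def fetch_flow(flow_run_id):\n    async with get_client() as client:\n        flow_run = await client.read_flow_run(flow_run_id)\n        # pulls the deployment's code (unless ignore_storage=True) and imports the flow\n        return await load_flow_from_flow_run(flow_run, client=client)\n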
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.run_deployment","title":"run_deployment async","text":"

Create a flow run for a deployment and return it after completion or a timeout.

This function will return when the created flow run enters any terminal state or the timeout is reached. If the timeout is reached and the flow run has not reached a terminal state, it will still be returned. When using a timeout, we suggest checking the state of the flow run if completion is important moving forward.

Parameters:

name (Union[str, UUID], required): The deployment id or deployment name in the form: <slugified-flow-name>/<slugified-deployment-name>

parameters (Optional[dict], default None): Parameter overrides for this flow run. Merged with the deployment defaults.

scheduled_time (Optional[datetime], default None): The time to schedule the flow run for, defaults to scheduling the flow run to start now.

flow_run_name (Optional[str], default None): A name for the created flow run

timeout (Optional[float], default None): The amount of time to wait for the flow run to complete before returning. Setting timeout to 0 will return the flow run immediately. Setting timeout to None will allow this function to poll indefinitely. Defaults to None

poll_interval (Optional[float], default 5): The number of seconds between polls

tags (Optional[Iterable[str]], default None): A list of tags to associate with this flow run; note that tags are used only for organizational purposes.

idempotency_key (Optional[str], default None): A unique value to recognize retries of the same run, and prevent creating multiple flow runs.

work_queue_name (Optional[str], default None): The name of a work queue to use for this run. Defaults to the default work queue for the deployment.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@sync_compatible\n@inject_client\nasync def run_deployment(\n    name: Union[str, UUID],\n    client: Optional[PrefectClient] = None,\n    parameters: Optional[dict] = None,\n    scheduled_time: Optional[datetime] = None,\n    flow_run_name: Optional[str] = None,\n    timeout: Optional[float] = None,\n    poll_interval: Optional[float] = 5,\n    tags: Optional[Iterable[str]] = None,\n    idempotency_key: Optional[str] = None,\n    work_queue_name: Optional[str] = None,\n):\n\"\"\"\n    Create a flow run for a deployment and return it after completion or a timeout.\n\n    This function will return when the created flow run enters any terminal state or\n    the timeout is reached. If the timeout is reached and the flow run has not reached\n    a terminal state, it will still be returned. When using a timeout, we suggest\n    checking the state of the flow run if completion is important moving forward.\n\n    Args:\n        name: The deployment id or deployment name in the form:\n            `<slugified-flow-name>/<slugified-deployment-name>`\n        parameters: Parameter overrides for this flow run. Merged with the deployment\n            defaults.\n        scheduled_time: The time to schedule the flow run for, defaults to scheduling\n            the flow run to start now.\n        flow_run_name: A name for the created flow run\n        timeout: The amount of time to wait for the flow run to complete before\n            returning. Setting `timeout` to 0 will return the flow run immediately.\n            Setting `timeout` to None will allow this function to poll indefinitely.\n            Defaults to None\n        poll_interval: The number of seconds between polls\n        tags: A list of tags to associate with this flow run; note that tags are used\n            only for organizational purposes.\n        idempotency_key: A unique value to recognize retries of the same run, and\n            prevent creating multiple flow runs.\n        work_queue_name: The name of a work queue to use for this run. Defaults to\n            the default work queue for the deployment.\n    \"\"\"\n    if timeout is not None and timeout < 0:\n        raise ValueError(\"`timeout` cannot be negative\")\n\n    if scheduled_time is None:\n        scheduled_time = pendulum.now(\"UTC\")\n\n    parameters = parameters or {}\n\n    deployment_id = None\n\n    if isinstance(name, UUID):\n        deployment_id = name\n    else:\n        try:\n            deployment_id = UUID(name)\n        except ValueError:\n            pass\n\n    if deployment_id:\n        deployment = await client.read_deployment(deployment_id=deployment_id)\n    else:\n        deployment = await client.read_deployment_by_name(name)\n\n    flow_run_ctx = FlowRunContext.get()\n    if flow_run_ctx:\n        # This was called from a flow. 
Link the flow run as a subflow.\n        from prefect.engine import (\n            Pending,\n            _dynamic_key_for_task_run,\n            collect_task_run_inputs,\n        )\n\n        task_inputs = {\n            k: await collect_task_run_inputs(v) for k, v in parameters.items()\n        }\n\n        if deployment_id:\n            flow = await client.read_flow(deployment.flow_id)\n            deployment_name = f\"{flow.name}/{deployment.name}\"\n        else:\n            deployment_name = name\n\n        # Generate a task in the parent flow run to represent the result of the subflow\n        dummy_task = Task(\n            name=deployment_name,\n            fn=lambda: None,\n            version=deployment.version,\n        )\n        # Override the default task key to include the deployment name\n        dummy_task.task_key = f\"{__name__}.run_deployment.{slugify(deployment_name)}\"\n        parent_task_run = await client.create_task_run(\n            task=dummy_task,\n            flow_run_id=flow_run_ctx.flow_run.id,\n            dynamic_key=_dynamic_key_for_task_run(flow_run_ctx, dummy_task),\n            task_inputs=task_inputs,\n            state=Pending(),\n        )\n        parent_task_run_id = parent_task_run.id\n    else:\n        parent_task_run_id = None\n\n    flow_run = await client.create_flow_run_from_deployment(\n        deployment.id,\n        parameters=parameters,\n        state=Scheduled(scheduled_time=scheduled_time),\n        name=flow_run_name,\n        tags=tags,\n        idempotency_key=idempotency_key,\n        parent_task_run_id=parent_task_run_id,\n        work_queue_name=work_queue_name,\n    )\n\n    flow_run_id = flow_run.id\n\n    if timeout == 0:\n        return flow_run\n\n    with anyio.move_on_after(timeout):\n        while True:\n            flow_run = await client.read_flow_run(flow_run_id)\n            flow_state = flow_run.state\n            if flow_state and flow_state.is_final():\n                return flow_run\n            await anyio.sleep(poll_interval)\n\n    return flow_run\n
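A short sketch of triggering a run and returning immediately; the deployment name and parameters are illustrative:

from prefect.deployments import run_deployment\n\n# timeout=0 returns the flow run without waiting for a terminal state\nflow_run = run_deployment(\n    name=\"my-flow/example\",\n    parameters={\"some_param\": \"value\"},\n    timeout=0,\n)\nprint(flow_run.id)\n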
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/steps/core/","title":"prefect.deployments.steps.core","text":"","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/core/#prefect.deployments.steps.core","title":"prefect.deployments.steps.core","text":"

Core primitives for running Prefect project steps.

Project steps are YAML representations of Python functions along with their inputs.

Whenever a step is run, its inputs are first resolved (outputs of upstream steps, block document references, variables, and environment values are applied), the step's function is imported (installing any packages listed under 'requires' if needed), and the function is then called with the resolved inputs; its output is returned for use by subsequent steps.

","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/core/#prefect.deployments.steps.core.StepExecutionError","title":"StepExecutionError","text":"

Bases: Exception

Raised when a step fails to execute.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/steps/core.py
class StepExecutionError(Exception):\n\"\"\"\n    Raised when a step fails to execute.\n    \"\"\"\n
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/core/#prefect.deployments.steps.core.run_step","title":"run_step async","text":"

Runs a step, returns the step's output.

Steps are assumed to be in the format {\"importable.func.name\": {\"kwarg1\": \"value1\", ...}}.

The 'id' and 'requires' keywords are reserved for specific purposes and will be removed from the inputs before they are passed to the step function:

For example, the 'requires' keyword is used to specify packages that should be installed before running the step.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/steps/core.py
async def run_step(step: Dict, upstream_outputs: Optional[Dict] = None) -> Dict:\n\"\"\"\n    Runs a step, returns the step's output.\n\n    Steps are assumed to be in the format `{\"importable.func.name\": {\"kwarg1\": \"value1\", ...}}`.\n\n    The 'id and 'requires' keywords are reserved for specific purposes and will be removed from the\n    inputs before passing to the step function:\n\n    This keyword is used to specify packages that should be installed before running the step.\n    \"\"\"\n    fqn, inputs = _get_step_fully_qualified_name_and_inputs(step)\n    upstream_outputs = upstream_outputs or {}\n\n    if len(step.keys()) > 1:\n        raise ValueError(\n            f\"Step has unexpected additional keys: {', '.join(list(step.keys())[1:])}\"\n        )\n\n    keywords = {\n        keyword: inputs.pop(keyword)\n        for keyword in RESERVED_KEYWORDS\n        if keyword in inputs\n    }\n\n    inputs = apply_values(inputs, upstream_outputs)\n    inputs = await resolve_block_document_references(inputs)\n    inputs = await resolve_variables(inputs)\n    inputs = apply_values(inputs, os.environ)\n    step_func = _get_function_for_step(fqn, requires=keywords.get(\"requires\"))\n    result = await from_async.call_soon_in_new_thread(\n        Call.new(step_func, **inputs)\n    ).aresult()\n    return result\n
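A minimal sketch of running a single step from Python rather than from a project's YAML file; the step chosen here (set_working_directory) is just an example:

import asyncio\n\nfrom prefect.deployments.steps.core import run_step\n\nstep = {\"prefect.deployments.steps.set_working_directory\": {\"directory\": \".\"}}\n# returns the step function's output dictionary\noutput = asyncio.run(run_step(step))\nprint(output[\"directory\"])\n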
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/pull/","title":"prefect.deployments.steps.pull","text":"","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull","title":"prefect.deployments.steps.pull","text":"

Core set of steps for specifying a Prefect project pull step.

","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.git_clone","title":"git_clone","text":"

Clones a git repository into the current working directory.

Parameters:

repository (str, required): the URL of the repository to clone

branch (str, optional, default None): the branch to clone; if not provided, the default branch will be used

include_submodules (bool, default False): whether to include git submodules when cloning the repository

access_token (str, optional, default None): an access token to use for cloning the repository; if not provided the repository will be cloned using the default git credentials

credentials (optional, default None): a GitHubCredentials, GitLabCredentials, or BitBucketCredentials block can be used to specify the credentials to use for cloning the repository

Returns:

dict: a dictionary containing a directory key of the new directory that was created

Raises:

subprocess.CalledProcessError: if the git clone command fails for any reason

Examples:

Clone a public repository:

pull:\n- prefect.deployments.steps.git_clone:\nrepository: https://github.com/PrefectHQ/prefect.git\n

Clone a branch of a public repository:

pull:\n- prefect.deployments.steps.git_clone:\nrepository: https://github.com/PrefectHQ/prefect.git\nbranch: my-branch\n

Clone a private repository using a GitHubCredentials block:

pull:\n- prefect.deployments.steps.git_clone:\nrepository: https://github.com/org/repo.git\ncredentials: \"{{ prefect.blocks.github-credentials.my-github-credentials-block }}\"\n

Clone a private repository using an access token:

pull:\n- prefect.deployments.steps.git_clone:\nrepository: https://github.com/org/repo.git\naccess_token: \"{{ prefect.blocks.secret.github-access-token }}\" # Requires creation of a Secret block\n
Note that you will need to create a Secret block to store the value of your git credentials. You can also store a username/password combo or token prefix (e.g. x-token-auth) in your secret block. Refer to your git provider's documentation for the correct authentication schema.

Clone a repository with submodules:

pull:\n- prefect.deployments.steps.git_clone:\nrepository: https://github.com/org/repo.git\ninclude_submodules: true\n

Clone a repository with an SSH key (note that the SSH key must be added to the worker before executing flows):

pull:\n- prefect.deployments.steps.git_clone:\nrepository: git@github.com:org/repo.git\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/steps/pull.py
def git_clone(\n    repository: str,\n    branch: Optional[str] = None,\n    include_submodules: bool = False,\n    access_token: Optional[str] = None,\n    credentials: Optional[Block] = None,\n) -> dict:\n\"\"\"\n    Clones a git repository into the current working directory.\n\n    Args:\n        repository (str): the URL of the repository to clone\n        branch (str, optional): the branch to clone; if not provided, the default branch will be used\n        include_submodules (bool): whether to include git submodules when cloning the repository\n        access_token (str, optional): an access token to use for cloning the repository; if not provided\n            the repository will be cloned using the default git credentials\n        credentials (optional): a GitHubCredentials, GitLabCredentials, or BitBucketCredentials block can be used to specify the\n        credentials to use for cloning the repository.\n\n    Returns:\n        dict: a dictionary containing a `directory` key of the new directory that was created\n\n    Raises:\n        subprocess.CalledProcessError: if the git clone command fails for any reason\n\n    Examples:\n        Clone a public repository:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/PrefectHQ/prefect.git\n        ```\n\n        Clone a branch of a public repository:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/PrefectHQ/prefect.git\n                branch: my-branch\n        ```\n\n        Clone a private repository using a GitHubCredentials block:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/org/repo.git\n                credentials: \"{{ prefect.blocks.github-credentials.my-github-credentials-block }}\"\n        ```\n\n        Clone a private repository using an access token:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/org/repo.git\n                access_token: \"{{ prefect.blocks.secret.github-access-token }}\" # Requires creation of a Secret block\n        ```\n        Note that you will need to [create a Secret block](/concepts/blocks/#using-existing-block-types) to store the\n        value of your git credentials. You can also store a username/password combo or token prefix (e.g. `x-token-auth`)\n        in your secret block. 
Refer to your git providers documentation for the correct authentication schema.\n\n        Clone a repository with submodules:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/org/repo.git\n                include_submodules: true\n        ```\n\n        Clone a repository with an SSH key (note that the SSH key must be added to the worker\n        before executing flows):\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: git@github.com:org/repo.git\n        ```\n    \"\"\"\n    if access_token and credentials:\n        raise ValueError(\n            \"Please provide either an access token or credentials but not both.\"\n        )\n\n    if credentials:\n        if credentials.get(\"token\"):\n            access_token = credentials[\"token\"]\n        elif credentials.get(\"username\") and credentials.get(\"password\"):\n            access_token = f\"{credentials['username']}:{credentials['password']}\"\n\n    url_components = urllib.parse.urlparse(repository)\n    if url_components.scheme == \"https\" and access_token is not None:\n        updated_components = url_components._replace(\n            netloc=f\"{access_token}@{url_components.netloc}\"\n        )\n        repository_url = urllib.parse.urlunparse(updated_components)\n    else:\n        repository_url = repository\n\n    cmd = [\"git\", \"clone\", repository_url]\n    if branch:\n        cmd += [\"-b\", branch]\n    if include_submodules:\n        cmd += [\"--recurse-submodules\"]\n\n    # Limit git history\n    cmd += [\"--depth\", \"1\"]\n\n    try:\n        subprocess.check_call(\n            cmd, shell=sys.platform == \"win32\", stderr=sys.stderr, stdout=sys.stdout\n        )\n    except subprocess.CalledProcessError as exc:\n        # Hide the command used to avoid leaking the access token\n        exc_chain = None if access_token else exc\n        raise RuntimeError(\n            f\"Failed to clone repository {repository!r} with exit code\"\n            f\" {exc.returncode}.\"\n        ) from exc_chain\n\n    directory = \"/\".join(repository.strip().split(\"/\")[-1:]).replace(\".git\", \"\")\n    deployment_logger.info(f\"Cloned repository {repository!r} into {directory!r}\")\n    return {\"directory\": directory}\n
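The step can also be invoked directly as a Python function; a sketch using the public Prefect repository:

from prefect.deployments.steps.pull import git_clone\n\n# clones a shallow copy into the current working directory\noutput = git_clone(repository=\"https://github.com/PrefectHQ/prefect.git\")\nprint(output[\"directory\"])  # \"prefect\"\n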
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.git_clone_project","title":"git_clone_project","text":"

Deprecated. Use git_clone instead.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/steps/pull.py
@deprecated_callable(start_date=\"Jun 2023\", help=\"Use 'git clone' instead.\")\ndef git_clone_project(\n    repository: str,\n    branch: Optional[str] = None,\n    include_submodules: bool = False,\n    access_token: Optional[str] = None,\n) -> dict:\n\"\"\"Deprecated. Use `git_clone` instead.\"\"\"\n    return git_clone(\n        repository=repository,\n        branch=branch,\n        include_submodules=include_submodules,\n        access_token=access_token,\n    )\n
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.set_working_directory","title":"set_working_directory","text":"

Sets the working directory; works with both absolute and relative paths.

Parameters:

directory (str, required): the directory to set as the working directory

Returns:

dict: a dictionary containing a directory key of the directory that was set

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/steps/pull.py
def set_working_directory(directory: str) -> dict:\n\"\"\"\n    Sets the working directory; works with both absolute and relative paths.\n\n    Args:\n        directory (str): the directory to set as the working directory\n\n    Returns:\n        dict: a dictionary containing a `directory` key of the\n            directory that was set\n    \"\"\"\n    os.chdir(directory)\n    return dict(directory=directory)\n
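
A minimal sketch of calling the step directly from Python (the relative path here is hypothetical and must already exist):
from prefect.deployments.steps import set_working_directory\n\n# hypothetical path inside a previously cloned repository\nresult = set_working_directory(\"flows/etl\")\nprint(result[\"directory\"])\n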
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/utility/","title":"prefect.deployments.steps.utility","text":"","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility","title":"prefect.deployments.steps.utility","text":"

Utility project steps that are useful for managing a project's deployment lifecycle.

Steps within this module can be used within a build, push, or pull deployment action.

Example

Use the run_shell_script step to retrieve the short Git commit hash of the current repository and use it as a Docker image tag:

build:\n- prefect.deployments.steps.run_shell_script:\nid: get-commit-hash\nscript: git rev-parse --short HEAD\nstream_output: false\n- prefect_docker.deployments.steps.build_docker_image:\nrequires: prefect-docker\nimage_name: my-image\nimage_tag: \"{{ get-commit-hash.stdout }}\"\ndockerfile: auto\n

","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility.RunShellScriptResult","title":"RunShellScriptResult","text":"

Bases: TypedDict

The result of a run_shell_script step.

Attributes:

stdout (str): The captured standard output of the script.
stderr (str): The captured standard error of the script.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/steps/utility.py
class RunShellScriptResult(TypedDict):\n\"\"\"\n    The result of a `run_shell_script` step.\n\n    Attributes:\n        stdout: The captured standard output of the script.\n        stderr: The captured standard error of the script.\n    \"\"\"\n\n    stdout: str\n    stderr: str\n
","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility.pip_install_requirements","title":"pip_install_requirements async","text":"

Installs dependencies from a requirements.txt file.

Parameters:

requirements_file (str, default 'requirements.txt'): The requirements.txt to use for installation.
directory (Optional[str], default None): The directory the requirements.txt file is in. Defaults to the current working directory.
stream_output (bool, default True): Whether to stream the output from pip install to the console.

Returns:

A dictionary with the keys stdout and stderr containing the output of the pip install command.

Raises:

subprocess.CalledProcessError: if the pip install command fails for any reason

Example
pull:\n- prefect.deployments.steps.git_clone:\nid: clone-step\nrepository: https://github.com/org/repo.git\n- prefect.deployments.steps.pip_install_requirements:\ndirectory: {{ clone-step.directory }}\nrequirements_file: requirements.txt\nstream_output: False\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/steps/utility.py
async def pip_install_requirements(\n    directory: Optional[str] = None,\n    requirements_file: str = \"requirements.txt\",\n    stream_output: bool = True,\n):\n\"\"\"\n    Installs dependencies from a requirements.txt file.\n\n    Args:\n        requirements_file: The requirements.txt to use for installation.\n        directory: The directory the requirements.txt file is in. Defaults to\n            the current working directory.\n        stream_output: Whether to stream the output from pip install should be\n            streamed to the console\n\n    Returns:\n        A dictionary with the keys `stdout` and `stderr` containing the output\n            the `pip install` command\n\n    Raises:\n        subprocess.CalledProcessError: if the pip install command fails for any reason\n\n    Example:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                id: clone-step\n                repository: https://github.com/org/repo.git\n            - prefect.deployments.steps.pip_install_requirements:\n                directory: {{ clone-step.directory }}\n                requirements_file: requirements.txt\n                stream_output: False\n        ```\n    \"\"\"\n    stdout_sink = io.StringIO()\n    stderr_sink = io.StringIO()\n\n    async with open_process(\n        [sys.executable, \"-m\", \"pip\", \"install\", \"-r\", requirements_file],\n        stdout=subprocess.PIPE,\n        stderr=subprocess.PIPE,\n        cwd=directory,\n    ) as process:\n        await _stream_capture_process_output(\n            process,\n            stdout_sink=stdout_sink,\n            stderr_sink=stderr_sink,\n            stream_output=stream_output,\n        )\n        await process.wait()\n\n    return {\n        \"stdout\": stdout_sink.getvalue().strip(),\n        \"stderr\": stderr_sink.getvalue().strip(),\n    }\n
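
Because this step is an async function, it can also be awaited directly from Python, for example in a test; a small sketch (the directory and file name are placeholders):
import asyncio\n\nfrom prefect.deployments.steps import pip_install_requirements\n\nasync def main():\n    # install dependencies from the requirements file in the current directory\n    result = await pip_install_requirements(\n        directory=\".\", requirements_file=\"requirements.txt\", stream_output=False\n    )\n    print(result[\"stdout\"])\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n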
","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility.run_shell_script","title":"run_shell_script async","text":"

Runs one or more shell commands in a subprocess. Returns the standard output and standard error of the script.

Parameters:

script (str, required): The script to run
directory (Optional[str], default None): The directory to run the script in. Defaults to the current working directory.
env (Optional[Dict[str, str]], default None): A dictionary of environment variables to set for the script
stream_output (bool, default True): Whether to stream the output of the script to stdout/stderr

Returns:

RunShellScriptResult: A dictionary with the keys stdout and stderr containing the output of the script

Examples:

Retrieve the short Git commit hash of the current repository to use as a Docker image tag:

build:\n- prefect.deployments.steps.run_shell_script:\nid: get-commit-hash\nscript: git rev-parse --short HEAD\nstream_output: false\n- prefect_docker.deployments.steps.build_docker_image:\nrequires: prefect-docker\nimage_name: my-image\nimage_tag: \"{{ get-commit-hash.stdout }}\"\ndockerfile: auto\n

Run a multi-line shell script:

build:\n- prefect.deployments.steps.run_shell_script:\nscript: |\necho \"Hello\"\necho \"World\"\n

Run a shell script with environment variables:

build:\n- prefect.deployments.steps.run_shell_script:\nscript: echo \"Hello $NAME\"\nenv:\nNAME: World\n

Run a shell script in a specific directory:

build:\n- prefect.deployments.steps.run_shell_script:\nscript: echo \"Hello\"\ndirectory: /path/to/directory\n

Run a script stored in a file:

build:\n- prefect.deployments.steps.run_shell_script:\nscript: \"bash path/to/script.sh\"\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/steps/utility.py
async def run_shell_script(\n    script: str,\n    directory: Optional[str] = None,\n    env: Optional[Dict[str, str]] = None,\n    stream_output: bool = True,\n) -> RunShellScriptResult:\n\"\"\"\n    Runs one or more shell commands in a subprocess. Returns the standard\n    output and standard error of the script.\n\n    Args:\n        script: The script to run\n        directory: The directory to run the script in. Defaults to the current\n            working directory.\n        env: A dictionary of environment variables to set for the script\n        stream_output: Whether to stream the output of the script to\n            stdout/stderr\n\n    Returns:\n        A dictionary with the keys `stdout` and `stderr` containing the output\n            of the script\n\n    Examples:\n        Retrieve the short Git commit hash of the current repository to use as\n            a Docker image tag:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                id: get-commit-hash\n                script: git rev-parse --short HEAD\n                stream_output: false\n            - prefect_docker.deployments.steps.build_docker_image:\n                requires: prefect-docker\n                image_name: my-image\n                image_tag: \"{{ get-commit-hash.stdout }}\"\n                dockerfile: auto\n        ```\n\n        Run a multi-line shell script:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                script: |\n                    echo \"Hello\"\n                    echo \"World\"\n        ```\n\n        Run a shell script with environment variables:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                script: echo \"Hello $NAME\"\n                env:\n                    NAME: World\n        ```\n\n        Run a shell script in a specific directory:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                script: echo \"Hello\"\n                directory: /path/to/directory\n        ```\n\n        Run a script stored in a file:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                script: \"bash path/to/script.sh\"\n        ```\n    \"\"\"\n    current_env = os.environ.copy()\n    current_env.update(env or {})\n\n    commands = script.splitlines()\n    stdout_sink = io.StringIO()\n    stderr_sink = io.StringIO()\n\n    for command in commands:\n        split_command = shlex.split(command)\n        if not split_command:\n            continue\n        async with open_process(\n            split_command,\n            stdout=subprocess.PIPE,\n            stderr=subprocess.PIPE,\n            cwd=directory,\n            env=current_env,\n        ) as process:\n            await _stream_capture_process_output(\n                process,\n                stdout_sink=stdout_sink,\n                stderr_sink=stderr_sink,\n                stream_output=stream_output,\n            )\n\n            await process.wait()\n\n    return {\n        \"stdout\": stdout_sink.getvalue().strip(),\n        \"stderr\": stderr_sink.getvalue().strip(),\n    }\n
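
As the source above shows, each line of the script is split with shlex and run as its own subprocess, so state such as a cd does not carry over between lines and shell operators are not interpreted. A small sketch of awaiting the step directly (assumes it is run inside a git repository):
import asyncio\n\nfrom prefect.deployments.steps import run_shell_script\n\nasync def main():\n    result = await run_shell_script(\"git rev-parse --short HEAD\", stream_output=False)\n    print(result[\"stdout\"])  # e.g. the short commit hash\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n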
","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/runtime/deployment/","title":"prefect.runtime.deployment","text":"","tags":["Python API","deployment context","context"]},{"location":"api-ref/prefect/runtime/deployment/#prefect.runtime.deployment","title":"prefect.runtime.deployment","text":"

Access attributes of the current deployment run dynamically.

Note that if a deployment is not currently being run, all attributes will return empty values.

You can mock the runtime attributes for testing purposes by setting environment variables prefixed with PREFECT__RUNTIME__DEPLOYMENT.

Example usage
from prefect.runtime import deployment\n\ndef get_task_runner():\n    task_runner_config = deployment.parameters.get(\"runner_config\", \"default config here\")\n    return DummyTaskRunner(task_runner_specs=task_runner_config)\n
Available attributes ","tags":["Python API","deployment context","context"]},{"location":"api-ref/prefect/runtime/flow_run/","title":"prefect.runtime.flow_run","text":"","tags":["Python API","flow run context","context"]},{"location":"api-ref/prefect/runtime/flow_run/#prefect.runtime.flow_run","title":"prefect.runtime.flow_run","text":"

Access attributes of the current flow run dynamically.

Note that if a flow run cannot be discovered, all attributes will return empty values.

You can mock the runtime attributes for testing purposes by setting environment variables prefixed with PREFECT__RUNTIME__FLOW_RUN.
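
A minimal sketch of reading flow run attributes inside a flow; name and parameters are assumed here to be among the available attributes:
from prefect import flow\nfrom prefect.runtime import flow_run\n\n@flow\ndef report(x: int = 1):\n    # empty values are returned when no flow run is active\n    print(f\"run name: {flow_run.name}\")\n    print(f\"parameters: {flow_run.parameters}\")\n\nif __name__ == \"__main__\":\n    report()\n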

Available attributes ","tags":["Python API","flow run context","context"]},{"location":"api-ref/prefect/runtime/task_run/","title":"prefect.runtime.task_run","text":"","tags":["Python API","task run context","context","task run"]},{"location":"api-ref/prefect/runtime/task_run/#prefect.runtime.task_run","title":"prefect.runtime.task_run","text":"

Access attributes of the current task run dynamically.

Note that if a task run cannot be discovered, all attributes will return empty values.

You can mock the runtime attributes for testing purposes by setting environment variables prefixed with PREFECT__RUNTIME__TASK_RUN.
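
A sketch of the mocking pattern described above; the specific attribute name used here is an assumption for illustration:
import os\n\n# mock a runtime attribute via the documented environment variable prefix\nos.environ[\"PREFECT__RUNTIME__TASK_RUN__NAME\"] = \"unit-test-task-run\"\n\nfrom prefect.runtime import task_run\n\nprint(task_run.name)\n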

Available attributes ","tags":["Python API","task run context","context","task run"]},{"location":"api-ref/prefect/utilities/annotations/","title":"prefect.utilities.annotations","text":"","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations","title":"prefect.utilities.annotations","text":"","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.BaseAnnotation","title":"BaseAnnotation","text":"

Bases: namedtuple('BaseAnnotation', field_names='value'), ABC, Generic[T]

Base class for Prefect annotation types.

Inherits from namedtuple for unpacking support in other tools.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/annotations.py
class BaseAnnotation(\n    namedtuple(\"BaseAnnotation\", field_names=\"value\"), ABC, Generic[T]\n):\n\"\"\"\n    Base class for Prefect annotation types.\n\n    Inherits from `namedtuple` for unpacking support in another tools.\n    \"\"\"\n\n    def unwrap(self) -> T:\n        if sys.version_info < (3, 8):\n            # cannot simply return self.value due to recursion error in Python 3.7\n            # also _asdict does not follow convention; it's not an internal method\n            # https://stackoverflow.com/a/26180604\n            return self._asdict()[\"value\"]\n        else:\n            return self.value\n\n    def rewrap(self, value: T) -> \"BaseAnnotation[T]\":\n        return type(self)(value)\n\n    def __eq__(self, other: object) -> bool:\n        if not type(self) == type(other):\n            return False\n        return self.unwrap() == other.unwrap()\n\n    def __repr__(self) -> str:\n        return f\"{type(self).__name__}({self.value!r})\"\n
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.NotSet","title":"NotSet","text":"

Singleton to distinguish None from a value that is not provided by the user.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/annotations.py
class NotSet:\n\"\"\"\n    Singleton to distinguish `None` from a value that is not provided by the user.\n    \"\"\"\n
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.allow_failure","title":"allow_failure","text":"

Bases: BaseAnnotation[T]

Wrapper for states or futures.

Indicates that the upstream run for this input can be failed.

Generally, Prefect will not allow a downstream run to start if any of its inputs are failed. This annotation allows you to opt into receiving a failed input downstream.

If the input is from a failed run, the attached exception will be passed to your function.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/annotations.py
class allow_failure(BaseAnnotation[T]):\n\"\"\"\n    Wrapper for states or futures.\n\n    Indicates that the upstream run for this input can be failed.\n\n    Generally, Prefect will not allow a downstream run to start if any of its inputs\n    are failed. This annotation allows you to opt into receiving a failed input\n    downstream.\n\n    If the input is from a failed run, the attached exception will be passed to your\n    function.\n    \"\"\"\n
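
A minimal sketch of opting into a failed upstream input; note that the flow itself still finishes in a failed state because of the failed task:
from prefect import allow_failure, flow, task\n\n@task\ndef may_fail():\n    raise ValueError(\"boom\")\n\n@task\ndef handle(upstream):\n    # receives the exception raised by the failed upstream run\n    print(f\"upstream result: {upstream!r}\")\n\n@flow\ndef demo():\n    future = may_fail.submit()\n    handle.submit(allow_failure(future))\n\nif __name__ == \"__main__\":\n    # return_state avoids raising when the flow finishes in a failed state\n    demo(return_state=True)\n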
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.quote","title":"quote","text":"

Bases: BaseAnnotation[T]

Simple wrapper to mark an expression as a different type so it will not be coerced by Prefect. For example, if you want to return a state from a flow without having the flow assume that state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/annotations.py
class quote(BaseAnnotation[T]):\n\"\"\"\n    Simple wrapper to mark an expression as a different type so it will not be coerced\n    by Prefect. For example, if you want to return a state from a flow without having\n    the flow assume that state.\n    \"\"\"\n\n    def unquote(self) -> T:\n        return self.unwrap()\n
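
A sketch of the use case from the description above: returning a state from a flow as a plain value rather than having the flow adopt it:
from prefect import flow, task\nfrom prefect.utilities.annotations import quote\n\n@task\ndef add(x, y):\n    return x + y\n\n@flow\ndef demo():\n    state = add(1, 2, return_state=True)\n    # without quote, returning the state would influence the flow's own state\n    return quote(state)\n\nif __name__ == \"__main__\":\n    demo()\n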
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.unmapped","title":"unmapped","text":"

Bases: BaseAnnotation[T]

Wrapper for iterables.

Indicates that this input should be sent as-is to all runs created during a mapping operation instead of being split.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/annotations.py
class unmapped(BaseAnnotation[T]):\n\"\"\"\n    Wrapper for iterables.\n\n    Indicates that this input should be sent as-is to all runs created during a mapping\n    operation instead of being split.\n    \"\"\"\n\n    def __getitem__(self, _) -> T:\n        # Internally, this acts as an infinite array where all items are the same value\n        return self.unwrap()\n
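
A small sketch of mapping with an unmapped argument; the wrapped value is sent as-is to every mapped run:
from prefect import flow, task, unmapped\n\n@task\ndef add(x, y):\n    return x + y\n\n@flow\ndef demo():\n    # x varies per run, y is the same value for every run\n    futures = add.map([1, 2, 3], unmapped(10))\n    return [future.result() for future in futures]\n\nif __name__ == \"__main__\":\n    print(demo())  # [11, 12, 13]\n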
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/asyncutils/","title":"prefect.utilities.asyncutils","text":"","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils","title":"prefect.utilities.asyncutils","text":"

Utilities for interoperability with async functions and workers from various contexts.

","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.GatherIncomplete","title":"GatherIncomplete","text":"

Bases: RuntimeError

Used to indicate retrieving gather results before completion

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
class GatherIncomplete(RuntimeError):\n\"\"\"Used to indicate retrieving gather results before completion\"\"\"\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.GatherTaskGroup","title":"GatherTaskGroup","text":"

Bases: anyio.abc.TaskGroup

A task group that gathers results.

AnyIO does not include support for gather. This class extends the TaskGroup interface to allow simple gathering.

See https://github.com/agronholm/anyio/issues/100

This class should be instantiated with create_gather_task_group.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
class GatherTaskGroup(anyio.abc.TaskGroup):\n\"\"\"\n    A task group that gathers results.\n\n    AnyIO does not include support `gather`. This class extends the `TaskGroup`\n    interface to allow simple gathering.\n\n    See https://github.com/agronholm/anyio/issues/100\n\n    This class should be instantiated with `create_gather_task_group`.\n    \"\"\"\n\n    def __init__(self, task_group: anyio.abc.TaskGroup):\n        self._results: Dict[UUID, Any] = {}\n        # The concrete task group implementation to use\n        self._task_group: anyio.abc.TaskGroup = task_group\n\n    async def _run_and_store(self, key, fn, args):\n        self._results[key] = await fn(*args)\n\n    def start_soon(self, fn, *args) -> UUID:\n        key = uuid4()\n        # Put a placeholder in-case the result is retrieved earlier\n        self._results[key] = GatherIncomplete\n        self._task_group.start_soon(self._run_and_store, key, fn, args)\n        return key\n\n    async def start(self, fn, *args):\n\"\"\"\n        Since `start` returns the result of `task_status.started()` but here we must\n        return the key instead, we just won't support this method for now.\n        \"\"\"\n        raise RuntimeError(\"`GatherTaskGroup` does not support `start`.\")\n\n    def get_result(self, key: UUID) -> Any:\n        result = self._results[key]\n        if result is GatherIncomplete:\n            raise GatherIncomplete(\n                \"Task is not complete. \"\n                \"Results should not be retrieved until the task group exits.\"\n            )\n        return result\n\n    async def __aenter__(self):\n        await self._task_group.__aenter__()\n        return self\n\n    async def __aexit__(self, *tb):\n        try:\n            retval = await self._task_group.__aexit__(*tb)\n            return retval\n        finally:\n            del self._task_group\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.GatherTaskGroup.start","title":"start async","text":"

Since start returns the result of task_status.started() but here we must return the key instead, we just won't support this method for now.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
async def start(self, fn, *args):\n\"\"\"\n    Since `start` returns the result of `task_status.started()` but here we must\n    return the key instead, we just won't support this method for now.\n    \"\"\"\n    raise RuntimeError(\"`GatherTaskGroup` does not support `start`.\")\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.add_event_loop_shutdown_callback","title":"add_event_loop_shutdown_callback async","text":"

Adds the given callable as a callback to run on event loop closure. The callable must be a coroutine function. It will be awaited when the current event loop is shutting down.

Requires use of asyncio.run(), which waits for async generator shutdown by default, or an explicit call to asyncio.shutdown_asyncgens(). If the application is entered with asyncio.run_until_complete() and the user calls asyncio.close() without the generator shutdown call, this will not trigger callbacks.

asyncio does not provide any other way to clean up a resource when the event loop is about to close.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
async def add_event_loop_shutdown_callback(coroutine_fn: Callable[[], Awaitable]):\n\"\"\"\n    Adds a callback to the given callable on event loop closure. The callable must be\n    a coroutine function. It will be awaited when the current event loop is shutting\n    down.\n\n    Requires use of `asyncio.run()` which waits for async generator shutdown by\n    default or explicit call of `asyncio.shutdown_asyncgens()`. If the application\n    is entered with `asyncio.run_until_complete()` and the user calls\n    `asyncio.close()` without the generator shutdown call, this will not trigger\n    callbacks.\n\n    asyncio does not provided _any_ other way to clean up a resource when the event\n    loop is about to close.\n    \"\"\"\n\n    async def on_shutdown(key):\n        # It appears that EVENT_LOOP_GC_REFS is somehow being garbage collected early.\n        # We hold a reference to it so as to preserve it, at least for the lifetime of\n        # this coroutine. See the issue below for the initial report/discussion:\n        # https://github.com/PrefectHQ/prefect/issues/7709#issuecomment-1560021109\n        _ = EVENT_LOOP_GC_REFS\n        try:\n            yield\n        except GeneratorExit:\n            await coroutine_fn()\n            # Remove self from the garbage collection set\n            EVENT_LOOP_GC_REFS.pop(key)\n\n    # Create the iterator and store it in a global variable so it is not garbage\n    # collected. If the iterator is garbage collected before the event loop closes, the\n    # callback will not run. Since this function does not know the scope of the event\n    # loop that is calling it, a reference with global scope is necessary to ensure\n    # garbage collection does not occur until after event loop closure.\n    key = id(on_shutdown)\n    EVENT_LOOP_GC_REFS[key] = on_shutdown(key)\n\n    # Begin iterating so it will be cleaned up as an incomplete generator\n    await EVENT_LOOP_GC_REFS[key].__anext__()\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.create_gather_task_group","title":"create_gather_task_group","text":"

Create a new task group that gathers results

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
def create_gather_task_group() -> GatherTaskGroup:\n\"\"\"Create a new task group that gathers results\"\"\"\n    # This function matches the AnyIO API which uses callables since the concrete\n    # task group class depends on the async library being used and cannot be\n    # determined until runtime\n    return GatherTaskGroup(anyio.create_task_group())\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.gather","title":"gather async","text":"

Run calls concurrently and gather their results.

Unlike asyncio.gather, this expects to receive callables, not coroutines. This matches anyio semantics.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
async def gather(*calls: Callable[[], Coroutine[Any, Any, T]]) -> List[T]:\n\"\"\"\n    Run calls concurrently and gather their results.\n\n    Unlike `asyncio.gather` this expects to receieve _callables_ not _coroutines_.\n    This matches `anyio` semantics.\n    \"\"\"\n    keys = []\n    async with create_gather_task_group() as tg:\n        for call in calls:\n            keys.append(tg.start_soon(call))\n    return [tg.get_result(key) for key in keys]\n
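
A small sketch of gathering results from callables (not coroutines), per the note above:
import asyncio\nfrom functools import partial\n\nfrom prefect.utilities.asyncutils import gather\n\nasync def double(x: int) -> int:\n    return x * 2\n\nasync def main():\n    # pass callables that return coroutines when invoked\n    results = await gather(partial(double, 1), partial(double, 2), partial(double, 3))\n    print(results)  # [2, 4, 6]\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n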
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.is_async_fn","title":"is_async_fn","text":"

Returns True if a function returns a coroutine.

See https://github.com/microsoft/pyright/issues/2142 for an example use

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
def is_async_fn(\n    func: Union[Callable[P, R], Callable[P, Awaitable[R]]]\n) -> TypeGuard[Callable[P, Awaitable[R]]]:\n\"\"\"\n    Returns `True` if a function returns a coroutine.\n\n    See https://github.com/microsoft/pyright/issues/2142 for an example use\n    \"\"\"\n    while hasattr(func, \"__wrapped__\"):\n        func = func.__wrapped__\n\n    return inspect.iscoroutinefunction(func)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.is_async_gen_fn","title":"is_async_gen_fn","text":"

Returns True if a function is an async generator.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
def is_async_gen_fn(func):\n\"\"\"\n    Returns `True` if a function is an async generator.\n    \"\"\"\n    while hasattr(func, \"__wrapped__\"):\n        func = func.__wrapped__\n\n    return inspect.isasyncgenfunction(func)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.raise_async_exception_in_thread","title":"raise_async_exception_in_thread","text":"

Raise an exception in a thread asynchronously.

This will not interrupt long-running system calls like sleep or wait.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
def raise_async_exception_in_thread(thread: Thread, exc_type: Type[BaseException]):\n\"\"\"\n    Raise an exception in a thread asynchronously.\n\n    This will not interrupt long-running system calls like `sleep` or `wait`.\n    \"\"\"\n    ret = ctypes.pythonapi.PyThreadState_SetAsyncExc(\n        ctypes.c_long(thread.ident), ctypes.py_object(exc_type)\n    )\n    if ret == 0:\n        raise ValueError(\"Thread not found.\")\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_async_from_worker_thread","title":"run_async_from_worker_thread","text":"

Runs an async function in the main thread's event loop, blocking the worker thread until completion

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
def run_async_from_worker_thread(\n    __fn: Callable[..., Awaitable[T]], *args: Any, **kwargs: Any\n) -> T:\n\"\"\"\n    Runs an async function in the main thread's event loop, blocking the worker\n    thread until completion\n    \"\"\"\n    call = partial(__fn, *args, **kwargs)\n    return anyio.from_thread.run(call)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_sync_in_interruptible_worker_thread","title":"run_sync_in_interruptible_worker_thread async","text":"

Runs a sync function in a new interruptible worker thread so that the main thread's event loop is not blocked

Unlike the anyio function, this performs best-effort cancellation of the thread using the C API. Cancellation will not interrupt system calls like sleep.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
async def run_sync_in_interruptible_worker_thread(\n    __fn: Callable[..., T], *args: Any, **kwargs: Any\n) -> T:\n\"\"\"\n    Runs a sync function in a new interruptible worker thread so that the main\n    thread's event loop is not blocked\n\n    Unlike the anyio function, this performs best-effort cancellation of the\n    thread using the C API. Cancellation will not interrupt system calls like\n    `sleep`.\n    \"\"\"\n\n    class NotSet:\n        pass\n\n    thread: Thread = None\n    result = NotSet\n    event = asyncio.Event()\n    loop = asyncio.get_running_loop()\n\n    def capture_worker_thread_and_result():\n        # Captures the worker thread that AnyIO is using to execute the function so\n        # the main thread can perform actions on it\n        nonlocal thread, result\n        try:\n            thread = threading.current_thread()\n            result = __fn(*args, **kwargs)\n        except BaseException as exc:\n            result = exc\n            raise\n        finally:\n            loop.call_soon_threadsafe(event.set)\n\n    async def send_interrupt_to_thread():\n        # This task waits until the result is returned from the thread, if cancellation\n        # occurs during that time, we will raise the exception in the thread as well\n        try:\n            await event.wait()\n        except anyio.get_cancelled_exc_class():\n            # NOTE: We could send a SIGINT here which allow us to interrupt system\n            # calls but the interrupt bubbles from the child thread into the main thread\n            # and there is not a clear way to prevent it.\n            raise_async_exception_in_thread(thread, anyio.get_cancelled_exc_class())\n            raise\n\n    async with anyio.create_task_group() as tg:\n        tg.start_soon(send_interrupt_to_thread)\n        tg.start_soon(\n            partial(\n                anyio.to_thread.run_sync,\n                capture_worker_thread_and_result,\n                cancellable=True,\n                limiter=get_thread_limiter(),\n            )\n        )\n\n    assert result is not NotSet\n    return result\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_sync_in_worker_thread","title":"run_sync_in_worker_thread async","text":"

Runs a sync function in a new worker thread so that the main thread's event loop is not blocked

Unlike the anyio function, this defaults to a cancellable thread and does not allow passing arguments to the anyio function so users can pass kwargs to their function.

Note that cancellation of threads will not result in interrupted computation, the thread may continue running \u2014 the outcome will just be ignored.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
async def run_sync_in_worker_thread(\n    __fn: Callable[..., T], *args: Any, **kwargs: Any\n) -> T:\n\"\"\"\n    Runs a sync function in a new worker thread so that the main thread's event loop\n    is not blocked\n\n    Unlike the anyio function, this defaults to a cancellable thread and does not allow\n    passing arguments to the anyio function so users can pass kwargs to their function.\n\n    Note that cancellation of threads will not result in interrupted computation, the\n    thread may continue running \u2014 the outcome will just be ignored.\n    \"\"\"\n    call = partial(__fn, *args, **kwargs)\n    return await anyio.to_thread.run_sync(\n        call, cancellable=True, limiter=get_thread_limiter()\n    )\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.sync","title":"sync","text":"

Call an async function from a synchronous context. Block until completion.

If in an asynchronous context, we will run the code in a separate loop instead of failing but a warning will be displayed since this is not recommended.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
def sync(__async_fn: Callable[P, Awaitable[T]], *args: P.args, **kwargs: P.kwargs) -> T:\n\"\"\"\n    Call an async function from a synchronous context. Block until completion.\n\n    If in an asynchronous context, we will run the code in a separate loop instead of\n    failing but a warning will be displayed since this is not recommended.\n    \"\"\"\n    if in_async_main_thread():\n        warnings.warn(\n            \"`sync` called from an asynchronous context; \"\n            \"you should `await` the async function directly instead.\"\n        )\n        with anyio.start_blocking_portal() as portal:\n            return portal.call(partial(__async_fn, *args, **kwargs))\n    elif in_async_worker_thread():\n        # In a sync context but we can access the event loop thread; send the async\n        # call to the parent\n        return run_async_from_worker_thread(__async_fn, *args, **kwargs)\n    else:\n        # In a sync context and there is no event loop; just create an event loop\n        # to run the async code then tear it down\n        return run_async_in_new_loop(__async_fn, *args, **kwargs)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.sync_compatible","title":"sync_compatible","text":"

Converts an async function into a dual async and sync function.

When the returned function is called, we will attempt to determine the best way to enter the async function.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
def sync_compatible(async_fn: T) -> T:\n\"\"\"\n    Converts an async function into a dual async and sync function.\n\n    When the returned function is called, we will attempt to determine the best way\n    to enter the async function.\n\n    - If in a thread with a running event loop, we will return the coroutine for the\n        caller to await. This is normal async behavior.\n    - If in a blocking worker thread with access to an event loop in another thread, we\n        will submit the async method to the event loop.\n    - If we cannot find an event loop, we will create a new one and run the async method\n        then tear down the loop.\n    \"\"\"\n\n    @wraps(async_fn)\n    def coroutine_wrapper(*args, **kwargs):\n        from prefect._internal.concurrency.api import create_call, from_sync\n        from prefect._internal.concurrency.calls import get_current_call, logger\n        from prefect._internal.concurrency.event_loop import get_running_loop\n        from prefect._internal.concurrency.threads import get_global_loop\n\n        global_thread_portal = get_global_loop()\n        current_thread = threading.current_thread()\n        current_call = get_current_call()\n        current_loop = get_running_loop()\n\n        if current_thread.ident == global_thread_portal.thread.ident:\n            logger.debug(f\"{async_fn} --> return coroutine for internal await\")\n            # In the prefect async context; return the coro for us to await\n            return async_fn(*args, **kwargs)\n        elif in_async_main_thread() and (\n            not current_call or is_async_fn(current_call.fn)\n        ):\n            # In the main async context; return the coro for them to await\n            logger.debug(f\"{async_fn} --> return coroutine for user await\")\n            return async_fn(*args, **kwargs)\n        elif in_async_worker_thread():\n            # In a sync context but we can access the event loop thread; send the async\n            # call to the parent\n            return run_async_from_worker_thread(async_fn, *args, **kwargs)\n        elif current_loop is not None:\n            logger.debug(f\"{async_fn} --> run async in global loop portal\")\n            # An event loop is already present but we are in a sync context, run the\n            # call in Prefect's event loop thread\n            return from_sync.call_soon_in_loop_thread(\n                create_call(async_fn, *args, **kwargs)\n            ).result()\n        else:\n            logger.debug(f\"{async_fn} --> run async in new loop\")\n            # Run in a new event loop, but use a `Call` for nested context detection\n            call = create_call(async_fn, *args, **kwargs)\n            return call()\n\n    # TODO: This is breaking type hints on the callable... mypy is behind the curve\n    #       on argument annotations. We can still fix this for editors though.\n    if is_async_fn(async_fn):\n        wrapper = coroutine_wrapper\n    elif is_async_gen_fn(async_fn):\n        raise ValueError(\"Async generators cannot yet be marked as `sync_compatible`\")\n    else:\n        raise TypeError(\"The decorated function must be async.\")\n\n    wrapper.aio = async_fn\n    return wrapper\n
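
A sketch of wrapping an async function so it can be called from both synchronous and asynchronous code:
import asyncio\n\nfrom prefect.utilities.asyncutils import sync_compatible\n\n@sync_compatible\nasync def fetch_value() -> int:\n    await asyncio.sleep(0)\n    return 42\n\n# sync context: the call runs the coroutine and returns the value\nprint(fetch_value())\n\nasync def main():\n    # async context: the call returns a coroutine to await\n    print(await fetch_value())\n\nasyncio.run(main())\n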
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/callables/","title":"prefect.utilities.callables","text":"","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables","title":"prefect.utilities.callables","text":"

Utilities for working with Python callables.

","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.ParameterSchema","title":"ParameterSchema","text":"

Bases: pydantic.BaseModel

Simple data model corresponding to an OpenAPI Schema.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
class ParameterSchema(pydantic.BaseModel):\n\"\"\"Simple data model corresponding to an OpenAPI `Schema`.\"\"\"\n\n    title: Literal[\"Parameters\"] = \"Parameters\"\n    type: Literal[\"object\"] = \"object\"\n    properties: Dict[str, Any] = pydantic.Field(default_factory=dict)\n    required: List[str] = None\n    definitions: Dict[str, Any] = None\n\n    def dict(self, *args, **kwargs):\n\"\"\"Exclude `None` fields by default to comply with\n        the OpenAPI spec.\n        \"\"\"\n        kwargs.setdefault(\"exclude_none\", True)\n        return super().dict(*args, **kwargs)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.ParameterSchema.dict","title":"dict","text":"

Exclude None fields by default to comply with the OpenAPI spec.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def dict(self, *args, **kwargs):\n\"\"\"Exclude `None` fields by default to comply with\n    the OpenAPI spec.\n    \"\"\"\n    kwargs.setdefault(\"exclude_none\", True)\n    return super().dict(*args, **kwargs)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.call_with_parameters","title":"call_with_parameters","text":"

Call a function with parameters extracted with get_call_parameters

The function must have an identical signature to the original function or this will fail. If you need to send to a function with a different signature, extract the args/kwargs using parameters_to_positional_and_keyword directly

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def call_with_parameters(fn: Callable, parameters: Dict[str, Any]):\n\"\"\"\n    Call a function with parameters extracted with `get_call_parameters`\n\n    The function _must_ have an identical signature to the original function or this\n    will fail. If you need to send to a function with a different signature, extract\n    the args/kwargs using `parameters_to_positional_and_keyword` directly\n    \"\"\"\n    args, kwargs = parameters_to_args_kwargs(fn, parameters)\n    return fn(*args, **kwargs)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.cloudpickle_wrapped_call","title":"cloudpickle_wrapped_call","text":"

Serializes a function call using cloudpickle then returns a callable which will execute that call and return a cloudpickle serialized return value

This is particularly useful for sending calls to libraries that only use the Python built-in pickler (e.g. anyio.to_process and multiprocessing) but may require a wider range of pickling support.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def cloudpickle_wrapped_call(\n    __fn: Callable, *args: Any, **kwargs: Any\n) -> Callable[[], bytes]:\n\"\"\"\n    Serializes a function call using cloudpickle then returns a callable which will\n    execute that call and return a cloudpickle serialized return value\n\n    This is particularly useful for sending calls to libraries that only use the Python\n    built-in pickler (e.g. `anyio.to_process` and `multiprocessing`) but may require\n    a wider range of pickling support.\n    \"\"\"\n    payload = cloudpickle.dumps((__fn, args, kwargs))\n    return partial(_run_serialized_call, payload)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.collapse_variadic_parameters","title":"collapse_variadic_parameters","text":"

Given a parameter dictionary, move any parameters not present in the signature into the variadic keyword argument.

Example
def foo(a, b, **kwargs):\n    pass\n\nparameters = {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\ncollapse_variadic_parameters(foo, parameters)\n# {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def collapse_variadic_parameters(\n    fn: Callable, parameters: Dict[str, Any]\n) -> Dict[str, Any]:\n\"\"\"\n    Given a parameter dictionary, move any parameters stored not present in the\n    signature into the variadic keyword argument.\n\n    Example:\n\n        ```python\n        def foo(a, b, **kwargs):\n            pass\n\n        parameters = {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\n        collapse_variadic_parameters(foo, parameters)\n        # {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\n        ```\n    \"\"\"\n    signature_parameters = inspect.signature(fn).parameters\n    variadic_key = None\n    for key, parameter in signature_parameters.items():\n        if parameter.kind == parameter.VAR_KEYWORD:\n            variadic_key = key\n            break\n\n    missing_parameters = set(parameters.keys()) - set(signature_parameters.keys())\n\n    if not variadic_key and missing_parameters:\n        raise ValueError(\n            f\"Signature for {fn} does not include any variadic keyword argument \"\n            \"but parameters were given that are not present in the signature.\"\n        )\n\n    if variadic_key and not missing_parameters:\n        # variadic key is present but no missing parameters, return parameters unchanged\n        return parameters\n\n    new_parameters = parameters.copy()\n    if variadic_key:\n        new_parameters[variadic_key] = {}\n\n    for key in missing_parameters:\n        new_parameters[variadic_key][key] = new_parameters.pop(key)\n\n    return new_parameters\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.explode_variadic_parameter","title":"explode_variadic_parameter","text":"

Given a parameter dictionary, move any parameters stored in a variadic keyword argument parameter (i.e. **kwargs) into the top level.

Example
def foo(a, b, **kwargs):\n    pass\n\nparameters = {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\nexplode_variadic_parameter(foo, parameters)\n# {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def explode_variadic_parameter(\n    fn: Callable, parameters: Dict[str, Any]\n) -> Dict[str, Any]:\n\"\"\"\n    Given a parameter dictionary, move any parameters stored in a variadic keyword\n    argument parameter (i.e. **kwargs) into the top level.\n\n    Example:\n\n        ```python\n        def foo(a, b, **kwargs):\n            pass\n\n        parameters = {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\n        explode_variadic_parameter(foo, parameters)\n        # {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\n        ```\n    \"\"\"\n    variadic_key = None\n    for key, parameter in inspect.signature(fn).parameters.items():\n        if parameter.kind == parameter.VAR_KEYWORD:\n            variadic_key = key\n            break\n\n    if not variadic_key:\n        return parameters\n\n    new_parameters = parameters.copy()\n    for key, value in new_parameters.pop(variadic_key, {}).items():\n        new_parameters[key] = value\n\n    return new_parameters\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.get_call_parameters","title":"get_call_parameters","text":"

Bind a call to a function to get a parameter/value mapping. Default values on the signature will be included if not overridden.

Raises a ParameterBindError if the arguments/kwargs are not valid for the function

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def get_call_parameters(\n    fn: Callable,\n    call_args: Tuple[Any, ...],\n    call_kwargs: Dict[str, Any],\n    apply_defaults: bool = True,\n) -> Dict[str, Any]:\n\"\"\"\n    Bind a call to a function to get parameter/value mapping. Default values on the\n    signature will be included if not overriden.\n\n    Raises a ParameterBindError if the arguments/kwargs are not valid for the function\n    \"\"\"\n    try:\n        bound_signature = inspect.signature(fn).bind(*call_args, **call_kwargs)\n    except TypeError as exc:\n        raise ParameterBindError.from_bind_failure(fn, exc, call_args, call_kwargs)\n\n    if apply_defaults:\n        bound_signature.apply_defaults()\n\n    # We cast from `OrderedDict` to `dict` because Dask will not convert futures in an\n    # ordered dictionary to values during execution; this is the default behavior in\n    # Python 3.9 anyway.\n    return dict(bound_signature.arguments)\n
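
A small sketch combining this with call_with_parameters from above:
from prefect.utilities.callables import call_with_parameters, get_call_parameters\n\ndef greet(name, punctuation=\"!\"):\n    return f\"Hello, {name}{punctuation}\"\n\n# bind positional/keyword arguments to a name -> value mapping, applying defaults\nparams = get_call_parameters(greet, (\"Marvin\",), {})\nprint(params)  # {'name': 'Marvin', 'punctuation': '!'}\n\n# call the same function back with that mapping\nprint(call_with_parameters(greet, params))  # Hello, Marvin!\n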
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.get_parameter_defaults","title":"get_parameter_defaults","text":"

Get default parameter values for a callable.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def get_parameter_defaults(\n    fn: Callable,\n) -> Dict[str, Any]:\n\"\"\"\n    Get default parameter values for a callable.\n    \"\"\"\n    signature = inspect.signature(fn)\n\n    parameter_defaults = {}\n\n    for name, param in signature.parameters.items():\n        if param.default is not signature.empty:\n            parameter_defaults[name] = param.default\n\n    return parameter_defaults\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.parameter_docstrings","title":"parameter_docstrings","text":"

Given a docstring in Google docstring format, parse the parameter section and return a dictionary that maps parameter names to their docstrings.

Parameters:

docstring (Optional[str], required): The function's docstring.

Returns:

Dict[str, str]: Mapping from parameter names to docstrings.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def parameter_docstrings(docstring: Optional[str]) -> Dict[str, str]:\n\"\"\"\n    Given a docstring in Google docstring format, parse the parameter section\n    and return a dictionary that maps parameter names to docstring.\n\n    Args:\n        docstring: The function's docstring.\n\n    Returns:\n        Mapping from parameter names to docstrings.\n    \"\"\"\n    param_docstrings = {}\n\n    if not docstring:\n        return param_docstrings\n\n    with disable_logger(\"griffe.docstrings.google\"), disable_logger(\n        \"griffe.agents.nodes\"\n    ):\n        parsed = parse(Docstring(docstring), Parser.google)\n        for section in parsed:\n            if section.kind != DocstringSectionKind.parameters:\n                continue\n            param_docstrings = {\n                parameter.name: parameter.description for parameter in section.value\n            }\n\n    return param_docstrings\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.parameter_schema","title":"parameter_schema","text":"

Given a function, generates an OpenAPI-compatible description of the function's arguments, including:

- name
- typing information
- whether it is required
- a default value
- additional constraints (like possible enum values)

Parameters:

fn (function, required): The function whose arguments will be serialized

Returns:

ParameterSchema: the argument schema

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def parameter_schema(fn: Callable) -> ParameterSchema:\n\"\"\"Given a function, generates an OpenAPI-compatible description\n    of the function's arguments, including:\n        - name\n        - typing information\n        - whether it is required\n        - a default value\n        - additional constraints (like possible enum values)\n\n    Args:\n        fn (function): The function whose arguments will be serialized\n\n    Returns:\n        dict: the argument schema\n    \"\"\"\n    signature = inspect.signature(fn)\n    model_fields = {}\n    aliases = {}\n    docstrings = parameter_docstrings(inspect.getdoc(fn))\n\n    class ModelConfig:\n        arbitrary_types_allowed = True\n\n    for position, param in enumerate(signature.parameters.values()):\n        # Pydantic model creation will fail if names collide with the BaseModel type\n        if hasattr(pydantic.BaseModel, param.name):\n            name = param.name + \"__\"\n            aliases[name] = param.name\n        else:\n            name = param.name\n\n        type_, field = (\n            Any if param.annotation is inspect._empty else param.annotation,\n            pydantic.Field(\n                default=... if param.default is param.empty else param.default,\n                title=param.name,\n                description=docstrings.get(param.name, None),\n                alias=aliases.get(name),\n                position=position,\n            ),\n        )\n\n        # Generate a Pydantic model at each step so we can check if this parameter\n        # type is supported schema generation\n        try:\n            pydantic.create_model(\n                \"CheckParameter\", __config__=ModelConfig, **{name: (type_, field)}\n            ).schema(by_alias=True)\n        except ValueError:\n            # This field's type is not valid for schema creation, update it to `Any`\n            type_ = Any\n\n        model_fields[name] = (type_, field)\n\n    # Generate the final model and schema\n    model = pydantic.create_model(\"Parameters\", __config__=ModelConfig, **model_fields)\n    schema = model.schema(by_alias=True)\n    return ParameterSchema(**schema)\n
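
A small sketch of generating a schema for a simple function:
from prefect.utilities.callables import parameter_schema\n\ndef greet(name: str, times: int = 1):\n    pass\n\n# an OpenAPI-style description of the function's parameters\nschema = parameter_schema(greet)\nprint(schema.dict())\n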
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.parameters_to_args_kwargs","title":"parameters_to_args_kwargs","text":"

Convert a parameters dictionary to positional and keyword arguments

The function must have an identical signature to the original function or this will return an empty tuple and dict.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def parameters_to_args_kwargs(\n    fn: Callable,\n    parameters: Dict[str, Any],\n) -> Tuple[Tuple[Any, ...], Dict[str, Any]]:\n\"\"\"\n    Convert a `parameters` dictionary to positional and keyword arguments\n\n    The function _must_ have an identical signature to the original function or this\n    will return an empty tuple and dict.\n    \"\"\"\n    function_params = dict(inspect.signature(fn).parameters).keys()\n    # Check for parameters that are not present in the function signature\n    unknown_params = parameters.keys() - function_params\n    if unknown_params:\n        raise SignatureMismatchError.from_bad_params(\n            list(function_params), list(parameters.keys())\n        )\n    bound_signature = inspect.signature(fn).bind_partial()\n    bound_signature.arguments = parameters\n\n    return bound_signature.args, bound_signature.kwargs\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.raise_for_reserved_arguments","title":"raise_for_reserved_arguments","text":"

Raise a ReservedArgumentError if fn has any parameters that conflict with the names contained in reserved_arguments.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def raise_for_reserved_arguments(fn: Callable, reserved_arguments: Iterable[str]):\n\"\"\"Raise a ReservedArgumentError if `fn` has any parameters that conflict\n    with the names contained in `reserved_arguments`.\"\"\"\n    function_paremeters = inspect.signature(fn).parameters\n\n    for argument in reserved_arguments:\n        if argument in function_paremeters:\n            raise ReservedArgumentError(\n                f\"{argument!r} is a reserved argument name and cannot be used.\"\n            )\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/collections/","title":"prefect.utilities.collections","text":"","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections","title":"prefect.utilities.collections","text":"

Utilities for extensions of and operations on Python collections.

","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.AutoEnum","title":"AutoEnum","text":"

Bases: str, Enum

An enum class that automatically generates values from variable names.

This guards against common errors where variable names are updated but values are not.

In addition, because AutoEnums inherit from str, they are automatically JSON-serializable.

See https://docs.python.org/3/library/enum.html#using-automatic-values

Example
class MyEnum(AutoEnum):\n    RED = AutoEnum.auto() # equivalent to RED = 'RED'\n    BLUE = AutoEnum.auto() # equivalent to BLUE = 'BLUE'\n
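
Because members inherit from str, they can be passed directly to json.dumps; a small sketch of that behavior:

import json\nfrom prefect.utilities.collections import AutoEnum\n\nclass Color(AutoEnum):\n    RED = AutoEnum.auto()\n    BLUE = AutoEnum.auto()\n\nprint(Color.RED.value)        # 'RED'\nprint(json.dumps(Color.RED))  # serializes to the JSON string \"RED\"\n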
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
class AutoEnum(str, Enum):\n\"\"\"\n    An enum class that automatically generates value from variable names.\n\n    This guards against common errors where variable names are updated but values are\n    not.\n\n    In addition, because AutoEnums inherit from `str`, they are automatically\n    JSON-serializable.\n\n    See https://docs.python.org/3/library/enum.html#using-automatic-values\n\n    Example:\n        ```python\n        class MyEnum(AutoEnum):\n            RED = AutoEnum.auto() # equivalent to RED = 'RED'\n            BLUE = AutoEnum.auto() # equivalent to BLUE = 'BLUE'\n        ```\n    \"\"\"\n\n    def _generate_next_value_(name, start, count, last_values):\n        return name\n\n    @staticmethod\n    def auto():\n\"\"\"\n        Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n        \"\"\"\n        return auto()\n\n    def __repr__(self) -> str:\n        return f\"{type(self).__name__}.{self.value}\"\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.AutoEnum.auto","title":"auto staticmethod","text":"

Exposes enum.auto() to avoid requiring a second import to use AutoEnum

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
@staticmethod\ndef auto():\n\"\"\"\n    Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n    \"\"\"\n    return auto()\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.StopVisiting","title":"StopVisiting","text":"

Bases: BaseException

A special exception used to stop recursive visits in visit_collection.

When raised, the expression is returned without modification and recursive visits in that path will end.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
class StopVisiting(BaseException):\n\"\"\"\n    A special exception used to stop recursive visits in `visit_collection`.\n\n    When raised, the expression is returned without modification and recursive visits\n    in that path will end.\n    \"\"\"\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.batched_iterable","title":"batched_iterable","text":"

Yield batches of a certain size from an iterable

Parameters:

Name Type Description Default iterable Iterable

An iterable

required size int

The batch size to return

required

Yields:

Name Type Description tuple T

A batch of the iterable
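
For illustration, a minimal usage sketch:

from prefect.utilities.collections import batched_iterable\n\nfor batch in batched_iterable(range(7), size=3):\n    print(batch)\n# (0, 1, 2)\n# (3, 4, 5)\n# (6,)\n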

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
def batched_iterable(iterable: Iterable[T], size: int) -> Iterator[Tuple[T, ...]]:\n\"\"\"\n    Yield batches of a certain size from an iterable\n\n    Args:\n        iterable (Iterable): An iterable\n        size (int): The batch size to return\n\n    Yields:\n        tuple: A batch of the iterable\n    \"\"\"\n    it = iter(iterable)\n    while True:\n        batch = tuple(itertools.islice(it, size))\n        if not batch:\n            break\n        yield batch\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.dict_to_flatdict","title":"dict_to_flatdict","text":"

Converts a (nested) dictionary to a flattened representation.

Each key of the flat dict will be a CompoundKey tuple containing the \"chain of keys\" for the corresponding value.

Parameters:

Name Type Description Default dct dict

The dictionary to flatten

required _parent Tuple

The current parent for recursion

None

Returns:

Type Description Dict[Tuple[KT, ...], Any]

A flattened dict of the same type as dct
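
For illustration, a minimal usage sketch (the nested dictionary is illustrative):

from prefect.utilities.collections import dict_to_flatdict\n\nsettings = {'db': {'host': 'localhost', 'port': 5432}, 'debug': True}\nprint(dict_to_flatdict(settings))\n# {('db', 'host'): 'localhost', ('db', 'port'): 5432, ('debug',): True}\n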

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
def dict_to_flatdict(\n    dct: Dict[KT, Union[Any, Dict[KT, Any]]], _parent: Tuple[KT, ...] = None\n) -> Dict[Tuple[KT, ...], Any]:\n\"\"\"Converts a (nested) dictionary to a flattened representation.\n\n    Each key of the flat dict will be a CompoundKey tuple containing the \"chain of keys\"\n    for the corresponding value.\n\n    Args:\n        dct (dict): The dictionary to flatten\n        _parent (Tuple, optional): The current parent for recursion\n\n    Returns:\n        A flattened dict of the same type as dct\n    \"\"\"\n    typ = cast(Type[Dict[Tuple[KT, ...], Any]], type(dct))\n    items: List[Tuple[Tuple[KT, ...], Any]] = []\n    parent = _parent or tuple()\n\n    for k, v in dct.items():\n        k_parent = tuple(parent + (k,))\n        # if v is a non-empty dict, recurse\n        if isinstance(v, dict) and v:\n            items.extend(dict_to_flatdict(v, _parent=k_parent).items())\n        else:\n            items.append((k_parent, v))\n    return typ(items)\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.extract_instances","title":"extract_instances","text":"

Extract objects from an iterable and return a dict of type -> instances

Parameters:

Name Type Description Default objects Iterable

An iterable of objects

required types Union[Type[T], Tuple[Type[T], ...]]

A type or tuple of types to extract, defaults to all objects

object

Returns:

Type Description Union[List[T], Dict[Type[T], T]]

If a single type is given: a list of instances of that type

Union[List[T], Dict[Type[T], T]]

If a tuple of types is given: a mapping of type to a list of instances
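
For illustration, a minimal usage sketch:

from prefect.utilities.collections import extract_instances\n\nobjects = [1, 'a', 2.0, 'b', 3]\nprint(extract_instances(objects, types=str))\n# ['a', 'b']\nprint(extract_instances(objects, types=(int, str)))\n# a defaultdict mapping each type to its instances: {int: [1, 3], str: ['a', 'b']}\n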

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
def extract_instances(\n    objects: Iterable,\n    types: Union[Type[T], Tuple[Type[T], ...]] = object,\n) -> Union[List[T], Dict[Type[T], T]]:\n\"\"\"\n    Extract objects from a file and returns a dict of type -> instances\n\n    Args:\n        objects: An iterable of objects\n        types: A type or tuple of types to extract, defaults to all objects\n\n    Returns:\n        If a single type is given: a list of instances of that type\n        If a tuple of types is given: a mapping of type to a list of instances\n    \"\"\"\n    types = ensure_iterable(types)\n\n    # Create a mapping of type -> instance from the exec values\n    ret = defaultdict(list)\n\n    for o in objects:\n        # We iterate here so that the key is the passed type rather than type(o)\n        for type_ in types:\n            if isinstance(o, type_):\n                ret[type_].append(o)\n\n    if len(types) == 1:\n        return ret[types[0]]\n\n    return ret\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.flatdict_to_dict","title":"flatdict_to_dict","text":"

Converts a flattened dictionary back to a nested dictionary.

Parameters:

Name Type Description Default dct dict

The dictionary to be nested. Each key should be a tuple of keys as generated by dict_to_flatdict

required

Returns: A nested dict of the same type as dct
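
For illustration, a minimal round-trip sketch with dict_to_flatdict:

from prefect.utilities.collections import dict_to_flatdict, flatdict_to_dict\n\nflat = dict_to_flatdict({'db': {'host': 'localhost'}, 'debug': True})\nprint(flatdict_to_dict(flat))\n# {'db': {'host': 'localhost'}, 'debug': True}\n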

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
def flatdict_to_dict(\n    dct: Dict[Tuple[KT, ...], VT]\n) -> Dict[KT, Union[VT, Dict[KT, VT]]]:\n\"\"\"Converts a flattened dictionary back to a nested dictionary.\n\n    Args:\n        dct (dict): The dictionary to be nested. Each key should be a tuple of keys\n            as generated by `dict_to_flatdict`\n\n    Returns\n        A nested dict of the same type as dct\n    \"\"\"\n    typ = type(dct)\n    result = cast(Dict[KT, Union[VT, Dict[KT, VT]]], typ())\n    for key_tuple, value in dct.items():\n        current_dict = result\n        for prefix_key in key_tuple[:-1]:\n            # Build nested dictionaries up for the current key tuple\n            # Use `setdefault` in case the nested dict has already been created\n            current_dict = current_dict.setdefault(prefix_key, typ())  # type: ignore\n        # Set the value\n        current_dict[key_tuple[-1]] = value\n\n    return result\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.get_from_dict","title":"get_from_dict","text":"

Fetch a value from a nested dictionary or list using a sequence of keys.

This function allows you to fetch a value from a deeply nested structure of dictionaries and lists using either a dot-separated string or a list of keys. If a requested key does not exist, the function returns the provided default value.

Parameters:

Name Type Description Default dct Dict

The nested dictionary or list from which to fetch the value.

required keys Union[str, List[str]]

The sequence of keys to use for access. Can be a dot-separated string or a list of keys. List indices can be included in the sequence as either integer keys or as string indices in square brackets.

required default Any

The default value to return if the requested key path does not exist. Defaults to None.

None

Returns:

Type Description Any

The fetched value if the key exists, or the default value if it does not.

Examples:

>>> get_from_dict({'a': {'b': {'c': [1, 2, 3, 4]}}}, 'a.b.c[1]')\n2\n>>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, ['a', 'b', 1, 'c', 1])\n2\n>>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, 'a.b.1.c.2', 'default')\n'default'\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
def get_from_dict(dct: Dict, keys: Union[str, List[str]], default: Any = None) -> Any:\n\"\"\"\n    Fetch a value from a nested dictionary or list using a sequence of keys.\n\n    This function allows to fetch a value from a deeply nested structure\n    of dictionaries and lists using either a dot-separated string or a list\n    of keys. If a requested key does not exist, the function returns the\n    provided default value.\n\n    Args:\n        dct: The nested dictionary or list from which to fetch the value.\n        keys: The sequence of keys to use for access. Can be a\n            dot-separated string or a list of keys. List indices can be included\n            in the sequence as either integer keys or as string indices in square\n            brackets.\n        default: The default value to return if the requested key path does not\n            exist. Defaults to None.\n\n    Returns:\n        The fetched value if the key exists, or the default value if it does not.\n\n    Examples:\n    >>> get_from_dict({'a': {'b': {'c': [1, 2, 3, 4]}}}, 'a.b.c[1]')\n    2\n    >>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, ['a', 'b', 1, 'c', 1])\n    2\n    >>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, 'a.b.1.c.2', 'default')\n    'default'\n    \"\"\"\n    if isinstance(keys, str):\n        keys = keys.replace(\"[\", \".\").replace(\"]\", \"\").split(\".\")\n    try:\n        for key in keys:\n            try:\n                # Try to cast to int to handle list indices\n                key = int(key)\n            except ValueError:\n                # If it's not an int, use the key as-is\n                # for dict lookup\n                pass\n            dct = dct[key]\n        return dct\n    except (TypeError, KeyError, IndexError):\n        return default\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.isiterable","title":"isiterable","text":"

Return a boolean indicating if an object is iterable.

Excludes types that are iterable but typically used as singletons: - str - bytes - IO objects
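
For illustration, a minimal usage sketch:

from prefect.utilities.collections import isiterable\n\nprint(isiterable([1, 2, 3]))   # True\nprint(isiterable('a string'))  # False -- strings are treated as singletons\nprint(isiterable(10))          # False\n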

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
def isiterable(obj: Any) -> bool:\n\"\"\"\n    Return a boolean indicating if an object is iterable.\n\n    Excludes types that are iterable but typically used as singletons:\n    - str\n    - bytes\n    - IO objects\n    \"\"\"\n    try:\n        iter(obj)\n    except TypeError:\n        return False\n    else:\n        return not isinstance(obj, (str, bytes, io.IOBase))\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.remove_nested_keys","title":"remove_nested_keys","text":"

Recurses a dictionary and returns a copy without any keys that match an entry in keys_to_remove. Returns obj unchanged if it is not a dictionary.

Parameters:

Name Type Description Default keys_to_remove List[Hashable]

A list of keys to remove from obj; the obj argument is the object to remove the keys from.

required

Returns:

Type Description

obj without keys matching an entry in keys_to_remove if obj is a dictionary. obj if obj is not a dictionary.
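
For illustration, a minimal usage sketch (the configuration dictionary is illustrative):

from prefect.utilities.collections import remove_nested_keys\n\nconfig = {'name': 'demo', 'auth': {'token': 'abc', 'user': 'me'}}\nprint(remove_nested_keys(['token'], config))\n# {'name': 'demo', 'auth': {'user': 'me'}}\n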

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
def remove_nested_keys(keys_to_remove: List[Hashable], obj):\n\"\"\"\n    Recurses a dictionary returns a copy without all keys that match an entry in\n    `key_to_remove`. Return `obj` unchanged if not a dictionary.\n\n    Args:\n        keys_to_remove: A list of keys to remove from obj obj: The object to remove keys\n            from.\n\n    Returns:\n        `obj` without keys matching an entry in `keys_to_remove` if `obj` is a\n            dictionary. `obj` if `obj` is not a dictionary.\n    \"\"\"\n    if not isinstance(obj, dict):\n        return obj\n    return {\n        key: remove_nested_keys(keys_to_remove, value)\n        for key, value in obj.items()\n        if key not in keys_to_remove\n    }\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.visit_collection","title":"visit_collection","text":"

This function visits every element of an arbitrary Python collection. If an element is a Python collection, it will be visited recursively. If an element is not a collection, visit_fn will be called with the element. The return value of visit_fn can be used to alter the element if return_data is set.

Note that when using return_data, a copy of each collection is created to avoid mutating the original object. This may carry a significant performance penalty and should only be used if you intend to transform the collection.

Supported types: - List - Tuple - Set - Dict (note: keys are also visited recursively) - Dataclass - Pydantic model - Prefect annotations

Parameters:

Name Type Description Default expr Any

a Python object or expression

required visit_fn Callable[[Any], Awaitable[Any]]

an async function that will be applied to every non-collection element of expr.

required return_data bool

if True, a copy of expr containing data modified by visit_fn will be returned. This is slower than return_data=False (the default).

False max_depth int

Controls the depth of recursive visitation. If set to zero, no recursion will occur. If set to a positive integer N, visitation will only descend to N layers deep. If set to any negative integer, no limit will be enforced and recursion will continue until terminal items are reached. By default, recursion is unlimited.

-1 context Optional[dict]

An optional dictionary. If passed, the context will be sent to each call to the visit_fn. The context can be mutated by each visitor and will be available for later visits to expressions at the given depth. Values will not be available \"up\" a level from a given expression.

The context will be automatically populated with an 'annotation' key when visiting collections within a BaseAnnotation type. This requires the caller to pass context={} and will not be activated by default.

None remove_annotations bool

If set, annotations will be replaced by their contents. By default, annotations are preserved but their contents are visited.

False Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
def visit_collection(\n    expr,\n    visit_fn: Callable[[Any], Any],\n    return_data: bool = False,\n    max_depth: int = -1,\n    context: Optional[dict] = None,\n    remove_annotations: bool = False,\n):\n\"\"\"\n    This function visits every element of an arbitrary Python collection. If an element\n    is a Python collection, it will be visited recursively. If an element is not a\n    collection, `visit_fn` will be called with the element. The return value of\n    `visit_fn` can be used to alter the element if `return_data` is set.\n\n    Note that when using `return_data` a copy of each collection is created to avoid\n    mutating the original object. This may have significant performance penalities and\n    should only be used if you intend to transform the collection.\n\n    Supported types:\n    - List\n    - Tuple\n    - Set\n    - Dict (note: keys are also visited recursively)\n    - Dataclass\n    - Pydantic model\n    - Prefect annotations\n\n    Args:\n        expr (Any): a Python object or expression\n        visit_fn (Callable[[Any], Awaitable[Any]]): an async function that\n            will be applied to every non-collection element of expr.\n        return_data (bool): if `True`, a copy of `expr` containing data modified\n            by `visit_fn` will be returned. This is slower than `return_data=False`\n            (the default).\n        max_depth: Controls the depth of recursive visitation. If set to zero, no\n            recursion will occur. If set to a positive integer N, visitation will only\n            descend to N layers deep. If set to any negative integer, no limit will be\n            enforced and recursion will continue until terminal items are reached. By\n            default, recursion is unlimited.\n        context: An optional dictionary. If passed, the context will be sent to each\n            call to the `visit_fn`. The context can be mutated by each visitor and will\n            be available for later visits to expressions at the given depth. Values\n            will not be available \"up\" a level from a given expression.\n\n            The context will be automatically populated with an 'annotation' key when\n            visiting collections within a `BaseAnnotation` type. This requires the\n            caller to pass `context={}` and will not be activated by default.\n        remove_annotations: If set, annotations will be replaced by their contents. 
By\n            default, annotations are preserved but their contents are visited.\n    \"\"\"\n\n    def visit_nested(expr):\n        # Utility for a recursive call, preserving options and updating the depth.\n        return visit_collection(\n            expr,\n            visit_fn=visit_fn,\n            return_data=return_data,\n            remove_annotations=remove_annotations,\n            max_depth=max_depth - 1,\n            # Copy the context on nested calls so it does not \"propagate up\"\n            context=context.copy() if context is not None else None,\n        )\n\n    def visit_expression(expr):\n        if context is not None:\n            return visit_fn(expr, context)\n        else:\n            return visit_fn(expr)\n\n    # Visit every expression\n    try:\n        result = visit_expression(expr)\n    except StopVisiting:\n        max_depth = 0\n        result = expr\n\n    if return_data:\n        # Only mutate the expression while returning data, otherwise it could be null\n        expr = result\n\n    # Then, visit every child of the expression recursively\n\n    # If we have reached the maximum depth, do not perform any recursion\n    if max_depth == 0:\n        return result if return_data else None\n\n    # Get the expression type; treat iterators like lists\n    typ = list if isinstance(expr, IteratorABC) and isiterable(expr) else type(expr)\n    typ = cast(type, typ)  # mypy treats this as 'object' otherwise and complains\n\n    # Then visit every item in the expression if it is a collection\n    if isinstance(expr, Mock):\n        # Do not attempt to recurse into mock objects\n        result = expr\n\n    elif isinstance(expr, BaseAnnotation):\n        if context is not None:\n            context[\"annotation\"] = expr\n        value = visit_nested(expr.unwrap())\n\n        if remove_annotations:\n            result = value if return_data else None\n        else:\n            result = expr.rewrap(value) if return_data else None\n\n    elif typ in (list, tuple, set):\n        items = [visit_nested(o) for o in expr]\n        result = typ(items) if return_data else None\n\n    elif typ in (dict, OrderedDict):\n        assert isinstance(expr, (dict, OrderedDict))  # typecheck assertion\n        items = [(visit_nested(k), visit_nested(v)) for k, v in expr.items()]\n        result = typ(items) if return_data else None\n\n    elif is_dataclass(expr) and not isinstance(expr, type):\n        values = [visit_nested(getattr(expr, f.name)) for f in fields(expr)]\n        items = {field.name: value for field, value in zip(fields(expr), values)}\n        result = typ(**items) if return_data else None\n\n    elif isinstance(expr, pydantic.BaseModel):\n        # NOTE: This implementation *does not* traverse private attributes\n        # Pydantic does not expose extras in `__fields__` so we use `__fields_set__`\n        # as well to get all of the relevant attributes\n        # Check for presence of attrs even if they're in the field set due to pydantic#4916\n        model_fields = {\n            f for f in expr.__fields_set__.union(expr.__fields__) if hasattr(expr, f)\n        }\n        items = [visit_nested(getattr(expr, key)) for key in model_fields]\n\n        if return_data:\n            # Collect fields with aliases so reconstruction can use the correct field name\n            aliases = {\n                key: value.alias\n                for key, value in expr.__fields__.items()\n                if value.has_alias\n            }\n\n            model_instance = typ(\n   
             **{\n                    aliases.get(key) or key: value\n                    for key, value in zip(model_fields, items)\n                }\n            )\n\n            # Private attributes are not included in `__fields_set__` but we do not want\n            # to drop them from the model so we restore them after constructing a new\n            # model\n            for attr in expr.__private_attributes__:\n                # Use `object.__setattr__` to avoid errors on immutable models\n                object.__setattr__(model_instance, attr, getattr(expr, attr))\n\n            result = model_instance\n        else:\n            result = None\n\n    else:\n        result = result if return_data else None\n\n    return result\n
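
For illustration, a minimal visit_collection sketch that doubles every integer in a nested structure (the visit function and data are hypothetical):

from prefect.utilities.collections import visit_collection\n\ndef double_ints(value):\n    return value * 2 if isinstance(value, int) else value\n\ndata = {'counts': [1, 2, 3], 'name': 'demo'}\nprint(visit_collection(data, visit_fn=double_ints, return_data=True))\n# {'counts': [2, 4, 6], 'name': 'demo'}\n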
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/filesystem/","title":"prefect.utilities.filesystem","text":"","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem","title":"prefect.utilities.filesystem","text":"

Utilities for working with file systems

","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.create_default_ignore_file","title":"create_default_ignore_file","text":"

Creates a default .prefectignore file in the provided path if one does not already exist; returns a boolean specifying whether a file was created.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/filesystem.py
def create_default_ignore_file(path: str) -> bool:\n\"\"\"\n    Creates default ignore file in the provided path if one does not already exist; returns boolean specifying\n    whether a file was created.\n    \"\"\"\n    path = pathlib.Path(path)\n    ignore_file = path / \".prefectignore\"\n    if ignore_file.exists():\n        return False\n    default_file = pathlib.Path(prefect.__module_path__) / \".prefectignore\"\n    with ignore_file.open(mode=\"w\") as f:\n        f.write(default_file.read_text())\n    return True\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.filename","title":"filename","text":"

Extract the file name from a path with remote file system support

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/filesystem.py
def filename(path: str) -> str:\n\"\"\"Extract the file name from a path with remote file system support\"\"\"\n    try:\n        of: OpenFile = fsspec.open(path)\n        sep = of.fs.sep\n    except (ImportError, AttributeError):\n        sep = \"\\\\\" if \"\\\\\" in path else \"/\"\n    return path.split(sep)[-1]\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.filter_files","title":"filter_files","text":"

This function accepts a root directory path and a list of file patterns to ignore, and returns the set of files that excludes those that should be ignored.

The specification matches that of .gitignore files.
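
For illustration, a minimal usage sketch (the ignore patterns are illustrative):

from prefect.utilities.filesystem import filter_files\n\n# Collect files under the current directory, skipping gitignore-style matches\nfiles = filter_files(root='.', ignore_patterns=['__pycache__/', '*.log'])\nprint(sorted(files))\n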

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/filesystem.py
def filter_files(\n    root: str = \".\", ignore_patterns: list = None, include_dirs: bool = True\n) -> set:\n\"\"\"\n    This function accepts a root directory path and a list of file patterns to ignore, and returns\n    a list of files that excludes those that should be ignored.\n\n    The specification matches that of [.gitignore files](https://git-scm.com/docs/gitignore).\n    \"\"\"\n    if ignore_patterns is None:\n        ignore_patterns = []\n    spec = pathspec.PathSpec.from_lines(\"gitwildmatch\", ignore_patterns)\n    ignored_files = {p.path for p in spec.match_tree_entries(root)}\n    if include_dirs:\n        all_files = {p.path for p in pathspec.util.iter_tree_entries(root)}\n    else:\n        all_files = set(pathspec.util.iter_tree_files(root))\n    included_files = all_files - ignored_files\n    return included_files\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.get_open_file_limit","title":"get_open_file_limit","text":"

Get the maximum number of open files allowed for the current process

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/filesystem.py
def get_open_file_limit() -> int:\n\"\"\"Get the maximum number of open files allowed for the current process\"\"\"\n\n    try:\n        if os.name == \"nt\":\n            import ctypes\n\n            return ctypes.cdll.ucrtbase._getmaxstdio()\n        else:\n            import resource\n\n            soft_limit, _ = resource.getrlimit(resource.RLIMIT_NOFILE)\n            return soft_limit\n    except Exception:\n        # Catch all exceptions, as ctypes can raise several errors\n        # depending on what went wrong. Return a safe default if we\n        # can't get the limit from the OS.\n        return 200\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.is_local_path","title":"is_local_path","text":"

Check if the given path points to a local or remote file system

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/filesystem.py
def is_local_path(path: Union[str, pathlib.Path, OpenFile]):\n\"\"\"Check if the given path points to a local or remote file system\"\"\"\n    if isinstance(path, str):\n        try:\n            of = fsspec.open(path)\n        except ImportError:\n            # The path is a remote file system that uses a lib that is not installed\n            return False\n    elif isinstance(path, pathlib.Path):\n        return True\n    elif isinstance(path, OpenFile):\n        of = path\n    else:\n        raise TypeError(f\"Invalid path of type {type(path).__name__!r}\")\n\n    return type(of.fs) == LocalFileSystem\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.relative_path_to_current_platform","title":"relative_path_to_current_platform","text":"

Converts a relative path generated on any platform to a relative path for the current platform.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/filesystem.py
def relative_path_to_current_platform(path_str: str) -> Path:\n\"\"\"\n    Converts a relative path generated on any platform to a relative path for the\n    current platform.\n    \"\"\"\n\n    return Path(PureWindowsPath(path_str).as_posix())\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.tmpchdir","title":"tmpchdir","text":"

Change the current working directory for the duration of the context
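
For illustration, a minimal usage sketch (the target directory is illustrative and must exist):

import os\nfrom prefect.utilities.filesystem import tmpchdir\n\nwith tmpchdir('/tmp'):\n    print(os.getcwd())  # the temporary working directory\nprint(os.getcwd())      # restored to the original working directory\n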

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/filesystem.py
@contextmanager\ndef tmpchdir(path: str):\n\"\"\"\n    Change current-working directories for the duration of the context\n    \"\"\"\n    path = os.path.abspath(path)\n    if os.path.isfile(path) or (not os.path.exists(path) and not path.endswith(\"/\")):\n        path = os.path.dirname(path)\n\n    owd = os.getcwd()\n\n    try:\n        os.chdir(path)\n        yield path\n    finally:\n        os.chdir(owd)\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.to_display_path","title":"to_display_path","text":"

Convert a path to a displayable path. The absolute path or relative path to the current (or given) directory will be returned, whichever is shorter.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/filesystem.py
def to_display_path(\n    path: Union[pathlib.Path, str], relative_to: Union[pathlib.Path, str] = None\n) -> str:\n\"\"\"\n    Convert a path to a displayable path. The absolute path or relative path to the\n    current (or given) directory will be returned, whichever is shorter.\n    \"\"\"\n    path, relative_to = (\n        pathlib.Path(path).resolve(),\n        pathlib.Path(relative_to or \".\").resolve(),\n    )\n    relative_path = str(path.relative_to(relative_to))\n    absolute_path = str(path)\n    return relative_path if len(relative_path) < len(absolute_path) else absolute_path\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/hashing/","title":"prefect.utilities.hashing","text":"","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing","title":"prefect.utilities.hashing","text":"","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing.file_hash","title":"file_hash","text":"

Given a path to a file, produces a stable hash of the file contents.

Parameters:

Name Type Description Default path str

the path to a file

required hash_algo

Hash algorithm from hashlib to use.

_md5

Returns:

Name Type Description str str

a hash of the file contents
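
For illustration, a minimal usage sketch (the file path is hypothetical):

from prefect.utilities.hashing import file_hash\n\n# The same file contents always produce the same digest\nprint(file_hash('requirements.txt'))\n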

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/hashing.py
def file_hash(path: str, hash_algo=_md5) -> str:\n\"\"\"Given a path to a file, produces a stable hash of the file contents.\n\n    Args:\n        path (str): the path to a file\n        hash_algo: Hash algorithm from hashlib to use.\n\n    Returns:\n        str: a hash of the file contents\n    \"\"\"\n    contents = Path(path).read_bytes()\n    return stable_hash(contents, hash_algo=hash_algo)\n
","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing.hash_objects","title":"hash_objects","text":"

Attempt to hash objects by dumping to JSON or serializing with cloudpickle. On failure of both, None will be returned
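
For illustration, a minimal usage sketch:

from prefect.utilities.hashing import hash_objects\n\nkey = hash_objects({'a': 1}, [1, 2, 3], flag=True)\nprint(key)  # a hex digest, or None if the arguments could not be serialized\n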

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/hashing.py
def hash_objects(*args, hash_algo=_md5, **kwargs) -> Optional[str]:\n\"\"\"\n    Attempt to hash objects by dumping to JSON or serializing with cloudpickle.\n    On failure of both, `None` will be returned\n    \"\"\"\n    try:\n        serializer = JSONSerializer(dumps_kwargs={\"sort_keys\": True})\n        return stable_hash(serializer.dumps((args, kwargs)), hash_algo=hash_algo)\n    except Exception:\n        pass\n\n    try:\n        return stable_hash(cloudpickle.dumps((args, kwargs)), hash_algo=hash_algo)\n    except Exception:\n        pass\n\n    return None\n
","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing.stable_hash","title":"stable_hash","text":"

Given some arguments, produces a stable hash of their contents.

Supports bytes and strings. Strings will be UTF-8 encoded.

Parameters:

Name Type Description Default *args Union[str, bytes]

Items to include in the hash.

() hash_algo

Hash algorithm from hashlib to use.

_md5

Returns:

Type Description str

A hex hash.
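
For illustration, a minimal usage sketch:

from prefect.utilities.hashing import stable_hash\n\n# Identical inputs always produce the same hex digest\nprint(stable_hash('hello', b'world'))\n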

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/hashing.py
def stable_hash(*args: Union[str, bytes], hash_algo=_md5) -> str:\n\"\"\"Given some arguments, produces a stable 64-bit hash of their contents.\n\n    Supports bytes and strings. Strings will be UTF-8 encoded.\n\n    Args:\n        *args: Items to include in the hash.\n        hash_algo: Hash algorithm from hashlib to use.\n\n    Returns:\n        A hex hash.\n    \"\"\"\n    h = hash_algo()\n    for a in args:\n        if isinstance(a, str):\n            a = a.encode()\n        h.update(a)\n    return h.hexdigest()\n
","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/processutils/","title":"prefect.utilities.processutils","text":""},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils","title":"prefect.utilities.processutils","text":""},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.forward_signal_handler","title":"forward_signal_handler","text":"

Forward subsequent signum events (e.g. interrupts) to respective signums.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/processutils.py
def forward_signal_handler(\n    pid: int, signum: int, *signums: int, process_name: str, print_fn: Callable\n):\n\"\"\"Forward subsequent signum events (e.g. interrupts) to respective signums.\"\"\"\n    current_signal, future_signals = signums[0], signums[1:]\n\n    # avoid RecursionError when setting up a direct signal forward to the same signal for the main pid\n    avoid_infinite_recursion = signum == current_signal and pid == os.getpid()\n    if avoid_infinite_recursion:\n        # store the vanilla handler so it can be temporarily restored below\n        original_handler = signal.getsignal(current_signal)\n\n    def handler(*args):\n        print_fn(\n            f\"Received {getattr(signum, 'name', signum)}. \"\n            f\"Sending {getattr(current_signal, 'name', current_signal)} to\"\n            f\" {process_name} (PID {pid})...\"\n        )\n        if avoid_infinite_recursion:\n            signal.signal(current_signal, original_handler)\n        os.kill(pid, current_signal)\n        if future_signals:\n            forward_signal_handler(\n                pid,\n                signum,\n                *future_signals,\n                process_name=process_name,\n                print_fn=print_fn,\n            )\n\n    # register current and future signal handlers\n    signal.signal(signum, handler)\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.open_process","title":"open_process async","text":"

Like anyio.open_process but with: - Support for Windows command joining - Termination of the process on exception during yield - Forced cleanup of process resources during cancellation
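
For illustration, a minimal sketch of running a short-lived command (the command shown is illustrative):

import anyio\nfrom prefect.utilities.processutils import open_process\n\nasync def main():\n    # The command must be a list; the process is terminated and cleaned up on exit\n    async with open_process(['echo', 'hello']) as process:\n        await process.wait()\n        print(process.returncode)  # 0\n\nanyio.run(main)\n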

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/processutils.py
@asynccontextmanager\nasync def open_process(command: List[str], **kwargs):\n\"\"\"\n    Like `anyio.open_process` but with:\n    - Support for Windows command joining\n    - Termination of the process on exception during yield\n    - Forced cleanup of process resources during cancellation\n    \"\"\"\n    # Passing a string to open_process is equivalent to shell=True which is\n    # generally necessary for Unix-like commands on Windows but otherwise should\n    # be avoided\n    if not isinstance(command, list):\n        raise TypeError(\n            \"The command passed to open process must be a list. You passed the command\"\n            f\"'{command}', which is type '{type(command)}'.\"\n        )\n\n    if sys.platform == \"win32\":\n        command = \" \".join(command)\n        process = await _open_anyio_process(command, **kwargs)\n    else:\n        process = await anyio.open_process(command, **kwargs)\n\n    # if there's a creationflags kwarg and it contains CREATE_NEW_PROCESS_GROUP,\n    # use SetConsoleCtrlHandler to handle CTRL-C\n    win32_process_group = False\n    if (\n        sys.platform == \"win32\"\n        and \"creationflags\" in kwargs\n        and kwargs[\"creationflags\"] & subprocess.CREATE_NEW_PROCESS_GROUP\n    ):\n        win32_process_group = True\n        _windows_process_group_pids.add(process.pid)\n        # Add a handler for CTRL-C. Re-adding the handler is safe as Windows\n        # will not add a duplicate handler if _win32_ctrl_handler is\n        # already registered.\n        windll.kernel32.SetConsoleCtrlHandler(_win32_ctrl_handler, 1)\n\n    try:\n        async with process:\n            yield process\n    finally:\n        try:\n            process.terminate()\n            if win32_process_group:\n                _windows_process_group_pids.remove(process.pid)\n\n        except OSError:\n            # Occurs if the process is already terminated\n            pass\n\n        # Ensure the process resource is closed. If not shielded from cancellation,\n        # this resource can be left open and the subprocess output can appear after\n        # the parent process has exited.\n        with anyio.CancelScope(shield=True):\n            await process.aclose()\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.run_process","title":"run_process async","text":"

Like anyio.run_process but with: - Use of our open_process utility to ensure resources are cleaned up - Simple stream_output support to connect the subprocess to the parent stdout/err - Support for submission with TaskGroup.start, marking as 'started' after the process has been created (when used, the PID is returned to the task status)
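
For illustration, a minimal sketch that streams the subprocess output to the parent's stdout/stderr (the command shown is illustrative):

import anyio\nfrom prefect.utilities.processutils import run_process\n\nasync def main():\n    process = await run_process(['echo', 'hello'], stream_output=True)\n    print(process.returncode)  # 0\n\nanyio.run(main)\n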

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/processutils.py
async def run_process(\n    command: List[str],\n    stream_output: Union[bool, Tuple[Optional[TextSink], Optional[TextSink]]] = False,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n    task_status_handler: Optional[Callable[[anyio.abc.Process], Any]] = None,\n    **kwargs,\n):\n\"\"\"\n    Like `anyio.run_process` but with:\n\n    - Use of our `open_process` utility to ensure resources are cleaned up\n    - Simple `stream_output` support to connect the subprocess to the parent stdout/err\n    - Support for submission with `TaskGroup.start` marking as 'started' after the\n        process has been created. When used, the PID is returned to the task status.\n\n    \"\"\"\n    if stream_output is True:\n        stream_output = (sys.stdout, sys.stderr)\n\n    async with open_process(\n        command,\n        stdout=subprocess.PIPE if stream_output else subprocess.DEVNULL,\n        stderr=subprocess.PIPE if stream_output else subprocess.DEVNULL,\n        **kwargs,\n    ) as process:\n        if task_status is not None:\n            if not task_status_handler:\n\n                def task_status_handler(process):\n                    return process.pid\n\n            task_status.started(task_status_handler(process))\n\n        if stream_output:\n            await consume_process_output(\n                process, stdout_sink=stream_output[0], stderr_sink=stream_output[1]\n            )\n\n        await process.wait()\n\n    return process\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.setup_signal_handlers_agent","title":"setup_signal_handlers_agent","text":"

Handle interrupts of the agent gracefully.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/processutils.py
def setup_signal_handlers_agent(pid: int, process_name: str, print_fn: Callable):\n\"\"\"Handle interrupts of the agent gracefully.\"\"\"\n    setup_handler = partial(\n        forward_signal_handler, pid, process_name=process_name, print_fn=print_fn\n    )\n    # when agent receives SIGINT, it stops dequeueing new FlowRuns, and runs until the subprocesses finish\n    # the signal is not forwarded to subprocesses, so they can continue to run and hopefully still complete\n    if sys.platform == \"win32\":\n        # on Windows, use CTRL_BREAK_EVENT as SIGTERM is useless:\n        # https://bugs.python.org/issue26350\n        setup_handler(signal.SIGINT, signal.CTRL_BREAK_EVENT)\n    else:\n        # forward first SIGINT directly, send SIGKILL on subsequent interrupt\n        setup_handler(signal.SIGINT, signal.SIGINT, signal.SIGKILL)\n        # first SIGTERM: send SIGINT, send SIGKILL on subsequent SIGTERM\n        setup_handler(signal.SIGTERM, signal.SIGINT, signal.SIGKILL)\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.setup_signal_handlers_server","title":"setup_signal_handlers_server","text":"

Handle interrupts of the server gracefully.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/processutils.py
def setup_signal_handlers_server(pid: int, process_name: str, print_fn: Callable):\n\"\"\"Handle interrupts of the server gracefully.\"\"\"\n    setup_handler = partial(\n        forward_signal_handler, pid, process_name=process_name, print_fn=print_fn\n    )\n    # when server receives a signal, it needs to be propagated to the uvicorn subprocess\n    if sys.platform == \"win32\":\n        # on Windows, use CTRL_BREAK_EVENT as SIGTERM is useless:\n        # https://bugs.python.org/issue26350\n        setup_handler(signal.SIGINT, signal.CTRL_BREAK_EVENT)\n    else:\n        # first interrupt: SIGTERM, second interrupt: SIGKILL\n        setup_handler(signal.SIGINT, signal.SIGTERM, signal.SIGKILL)\n        # forward first SIGTERM directly, send SIGKILL on subsequent SIGTERM\n        setup_handler(signal.SIGTERM, signal.SIGTERM, signal.SIGKILL)\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.setup_signal_handlers_worker","title":"setup_signal_handlers_worker","text":"

Handle interrupts of workers gracefully.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/processutils.py
def setup_signal_handlers_worker(pid: int, process_name: str, print_fn: Callable):\n\"\"\"Handle interrupts of workers gracefully.\"\"\"\n    setup_handler = partial(\n        forward_signal_handler, pid, process_name=process_name, print_fn=print_fn\n    )\n    # when agent receives SIGINT, it stops dequeueing new FlowRuns, and runs until the subprocesses finish\n    # the signal is not forwarded to subprocesses, so they can continue to run and hopefully still complete\n    if sys.platform == \"win32\":\n        # on Windows, use CTRL_BREAK_EVENT as SIGTERM is useless:\n        # https://bugs.python.org/issue26350\n        setup_handler(signal.SIGINT, signal.CTRL_BREAK_EVENT)\n    else:\n        # forward first SIGINT directly, send SIGKILL on subsequent interrupt\n        setup_handler(signal.SIGINT, signal.SIGINT, signal.SIGKILL)\n        # first SIGTERM: send SIGINT, send SIGKILL on subsequent SIGTERM\n        setup_handler(signal.SIGTERM, signal.SIGINT, signal.SIGKILL)\n
"},{"location":"api-ref/prefect/workers/base/","title":"prefect.workers.base","text":"","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base","title":"prefect.workers.base","text":"","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration","title":"BaseJobConfiguration","text":"

Bases: BaseModel

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
class BaseJobConfiguration(BaseModel):\n    command: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The command to use when starting a flow run. \"\n            \"In most cases, this should be left blank and the command \"\n            \"will be automatically generated by the worker.\"\n        ),\n    )\n    env: Dict[str, Optional[str]] = Field(\n        default_factory=dict,\n        title=\"Environment Variables\",\n        description=\"Environment variables to set when starting a flow run.\",\n    )\n    labels: Dict[str, str] = Field(\n        default_factory=dict,\n        description=(\n            \"Labels applied to infrastructure created by the worker using \"\n            \"this job configuration.\"\n        ),\n    )\n    name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"Name given to infrastructure created by the worker using this \"\n            \"job configuration.\"\n        ),\n    )\n\n    _related_objects: Dict[str, Any] = PrivateAttr(default_factory=dict)\n\n    @validator(\"command\")\n    def _coerce_command(cls, v):\n\"\"\"Make sure that empty strings are treated as None\"\"\"\n        if not v:\n            return None\n        return v\n\n    @staticmethod\n    def _get_base_config_defaults(variables: dict) -> dict:\n\"\"\"Get default values from base config for all variables that have them.\"\"\"\n        defaults = dict()\n        for variable_name, attrs in variables.items():\n            if \"default\" in attrs:\n                defaults[variable_name] = attrs[\"default\"]\n\n        return defaults\n\n    @classmethod\n    @inject_client\n    async def from_template_and_values(\n        cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n    ):\n\"\"\"Creates a valid worker configuration object from the provided base\n        configuration and overrides.\n\n        Important: this method expects that the base_job_template was already\n        validated server-side.\n        \"\"\"\n        job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n        variables_schema = base_job_template[\"variables\"]\n        variables = cls._get_base_config_defaults(\n            variables_schema.get(\"properties\", {})\n        )\n        variables.update(values)\n\n        populated_configuration = apply_values(template=job_config, values=variables)\n        populated_configuration = await resolve_block_document_references(\n            template=populated_configuration, client=client\n        )\n        populated_configuration = await resolve_variables(\n            template=populated_configuration, client=client\n        )\n        return cls(**populated_configuration)\n\n    @classmethod\n    def json_template(cls) -> dict:\n\"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n        Defaults to using the job configuration parameter name as the template variable name.\n\n        e.g.\n        {\n            key1: '{{ key1 }}',     # default variable template\n            key2: '{{ template2 }}', # `template2` specifically provide as template\n        }\n        \"\"\"\n        configuration = {}\n        properties = cls.schema()[\"properties\"]\n        for k, v in properties.items():\n            if v.get(\"template\"):\n                template = v[\"template\"]\n            else:\n                template = \"{{ \" + k + \" }}\"\n            configuration[k] = template\n\n        return 
configuration\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n\"\"\"\n        Prepare the job configuration for a flow run.\n\n        This method is called by the worker before starting a flow run. It\n        should be used to set any configuration values that are dependent on\n        the flow run.\n\n        Args:\n            flow_run: The flow run to be executed.\n            deployment: The deployment that the flow run is associated with.\n            flow: The flow that the flow run is associated with.\n        \"\"\"\n\n        self._related_objects = {\n            \"deployment\": deployment,\n            \"flow\": flow,\n            \"flow-run\": flow_run,\n        }\n        if deployment is not None:\n            deployment_labels = self._base_deployment_labels(deployment)\n        else:\n            deployment_labels = {}\n\n        if flow is not None:\n            flow_labels = self._base_flow_labels(flow)\n        else:\n            flow_labels = {}\n\n        env = {\n            **self._base_environment(),\n            **self._base_flow_run_environment(flow_run),\n            **self.env,\n        }\n        self.env = {key: value for key, value in env.items() if value is not None}\n        self.labels = {\n            **self._base_flow_run_labels(flow_run),\n            **deployment_labels,\n            **flow_labels,\n            **self.labels,\n        }\n        self.name = self.name or flow_run.name\n        self.command = self.command or self._base_flow_run_command()\n\n    @staticmethod\n    def _base_flow_run_command() -> str:\n\"\"\"\n        Generate a command for a flow run job.\n        \"\"\"\n        return \"python -m prefect.engine\"\n\n    @staticmethod\n    def _base_flow_run_labels(flow_run: \"FlowRun\") -> Dict[str, str]:\n\"\"\"\n        Generate a dictionary of labels for a flow run job.\n        \"\"\"\n        return {\n            \"prefect.io/flow-run-id\": str(flow_run.id),\n            \"prefect.io/flow-run-name\": flow_run.name,\n            \"prefect.io/version\": prefect.__version__,\n        }\n\n    @classmethod\n    def _base_environment(cls) -> Dict[str, str]:\n\"\"\"\n        Environment variables that should be passed to all created infrastructure.\n\n        These values should be overridable with the `env` field.\n        \"\"\"\n        return get_current_settings().to_environment_variables(exclude_unset=True)\n\n    @staticmethod\n    def _base_flow_run_environment(flow_run: \"FlowRun\") -> Dict[str, str]:\n\"\"\"\n        Generate a dictionary of environment variables for a flow run job.\n        \"\"\"\n        return {\"PREFECT__FLOW_RUN_ID\": flow_run.id.hex}\n\n    @staticmethod\n    def _base_deployment_labels(deployment: \"DeploymentResponse\") -> Dict[str, str]:\n        labels = {\n            \"prefect.io/deployment-id\": str(deployment.id),\n            \"prefect.io/deployment-name\": deployment.name,\n        }\n        if deployment.updated is not None:\n            labels[\"prefect.io/deployment-updated\"] = deployment.updated.in_timezone(\n                \"utc\"\n            ).to_iso8601_string()\n        return labels\n\n    @staticmethod\n    def _base_flow_labels(flow: \"Flow\") -> Dict[str, str]:\n        return {\n            \"prefect.io/flow-id\": str(flow.id),\n            \"prefect.io/flow-name\": flow.name,\n        }\n\n    def _related_resources(self) -> 
List[RelatedResource]:\n        tags = set()\n        related = []\n\n        for kind, obj in self._related_objects.items():\n            if obj is None:\n                continue\n            if hasattr(obj, \"tags\"):\n                tags.update(obj.tags)\n            related.append(object_as_related_resource(kind=kind, role=kind, object=obj))\n\n        return related + tags_as_related_resources(tags)\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration.from_template_and_values","title":"from_template_and_values async classmethod","text":"

Creates a valid worker configuration object from the provided base configuration and overrides.

Important: this method expects that the base_job_template was already validated server-side.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
@classmethod\n@inject_client\nasync def from_template_and_values(\n    cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n):\n\"\"\"Creates a valid worker configuration object from the provided base\n    configuration and overrides.\n\n    Important: this method expects that the base_job_template was already\n    validated server-side.\n    \"\"\"\n    job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n    variables_schema = base_job_template[\"variables\"]\n    variables = cls._get_base_config_defaults(\n        variables_schema.get(\"properties\", {})\n    )\n    variables.update(values)\n\n    populated_configuration = apply_values(template=job_config, values=variables)\n    populated_configuration = await resolve_block_document_references(\n        template=populated_configuration, client=client\n    )\n    populated_configuration = await resolve_variables(\n        template=populated_configuration, client=client\n    )\n    return cls(**populated_configuration)\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration.json_template","title":"json_template classmethod","text":"

Returns a dict with job configuration as keys and the corresponding templates as values

Defaults to using the job configuration parameter name as the template variable name.

e.g.\n{\n    key1: '{{ key1 }}',      # default variable template\n    key2: '{{ template2 }}',  # 'template2' specifically provided as the template\n}

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
@classmethod\ndef json_template(cls) -> dict:\n\"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n    Defaults to using the job configuration parameter name as the template variable name.\n\n    e.g.\n    {\n        key1: '{{ key1 }}',     # default variable template\n        key2: '{{ template2 }}', # `template2` specifically provide as template\n    }\n    \"\"\"\n    configuration = {}\n    properties = cls.schema()[\"properties\"]\n    for k, v in properties.items():\n        if v.get(\"template\"):\n            template = v[\"template\"]\n        else:\n            template = \"{{ \" + k + \" }}\"\n        configuration[k] = template\n\n    return configuration\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

Prepare the job configuration for a flow run.

This method is called by the worker before starting a flow run. It should be used to set any configuration values that are dependent on the flow run.

Parameters:

Name Type Description Default flow_run FlowRun

The flow run to be executed.

required deployment Optional[DeploymentResponse]

The deployment that the flow run is associated with.

None flow Optional[Flow]

The flow that the flow run is associated with.

None Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
def prepare_for_flow_run(\n    self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"DeploymentResponse\"] = None,\n    flow: Optional[\"Flow\"] = None,\n):\n\"\"\"\n    Prepare the job configuration for a flow run.\n\n    This method is called by the worker before starting a flow run. It\n    should be used to set any configuration values that are dependent on\n    the flow run.\n\n    Args:\n        flow_run: The flow run to be executed.\n        deployment: The deployment that the flow run is associated with.\n        flow: The flow that the flow run is associated with.\n    \"\"\"\n\n    self._related_objects = {\n        \"deployment\": deployment,\n        \"flow\": flow,\n        \"flow-run\": flow_run,\n    }\n    if deployment is not None:\n        deployment_labels = self._base_deployment_labels(deployment)\n    else:\n        deployment_labels = {}\n\n    if flow is not None:\n        flow_labels = self._base_flow_labels(flow)\n    else:\n        flow_labels = {}\n\n    env = {\n        **self._base_environment(),\n        **self._base_flow_run_environment(flow_run),\n        **self.env,\n    }\n    self.env = {key: value for key, value in env.items() if value is not None}\n    self.labels = {\n        **self._base_flow_run_labels(flow_run),\n        **deployment_labels,\n        **flow_labels,\n        **self.labels,\n    }\n    self.name = self.name or flow_run.name\n    self.command = self.command or self._base_flow_run_command()\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker","title":"BaseWorker","text":"

Bases: abc.ABC

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
@register_base_type\nclass BaseWorker(abc.ABC):\n    type: str\n    job_configuration: Type[BaseJobConfiguration] = BaseJobConfiguration\n    job_configuration_variables: Optional[Type[BaseVariables]] = None\n\n    _documentation_url = \"\"\n    _logo_url = \"\"\n    _description = \"\"\n\n    @experimental(feature=\"The workers feature\", group=\"workers\")\n    def __init__(\n        self,\n        work_pool_name: str,\n        work_queues: Optional[List[str]] = None,\n        name: Optional[str] = None,\n        prefetch_seconds: Optional[float] = None,\n        create_pool_if_not_found: bool = True,\n        limit: Optional[int] = None,\n    ):\n\"\"\"\n        Base class for all Prefect workers.\n\n        Args:\n            name: The name of the worker. If not provided, a random one\n                will be generated. If provided, it cannot contain '/' or '%'.\n                The name is used to identify the worker in the UI; if two\n                processes have the same name, they will be treated as the same\n                worker.\n            work_pool_name: The name of the work pool to poll.\n            work_queues: A list of work queues to poll. If not provided, all\n                work queue in the work pool will be polled.\n            prefetch_seconds: The number of seconds to prefetch flow runs for.\n            create_pool_if_not_found: Whether to create the work pool\n                if it is not found. Defaults to `True`, but can be set to `False` to\n                ensure that work pools are not created accidentally.\n            limit: The maximum number of flow runs this worker should be running at\n                a given time.\n        \"\"\"\n        if name and (\"/\" in name or \"%\" in name):\n            raise ValueError(\"Worker name cannot contain '/' or '%'\")\n        self.name = name or f\"{self.__class__.__name__} {uuid4()}\"\n        self._logger = get_logger(f\"worker.{self.__class__.type}.{self.name.lower()}\")\n\n        self.is_setup = False\n        self._create_pool_if_not_found = create_pool_if_not_found\n        self._work_pool_name = work_pool_name\n        self._work_queues: Set[str] = set(work_queues) if work_queues else set()\n\n        self._prefetch_seconds: float = (\n            prefetch_seconds or PREFECT_WORKER_PREFETCH_SECONDS.value()\n        )\n\n        self._work_pool: Optional[WorkPool] = None\n        self._runs_task_group: Optional[anyio.abc.TaskGroup] = None\n        self._client: Optional[PrefectClient] = None\n        self._last_polled_time: pendulum.DateTime = pendulum.now(\"utc\")\n        self._limit = limit\n        self._limiter: Optional[anyio.CapacityLimiter] = None\n        self._submitting_flow_run_ids = set()\n        self._cancelling_flow_run_ids = set()\n        self._scheduled_task_scopes = set()\n\n    @classmethod\n    def get_documentation_url(cls) -> str:\n        return cls._documentation_url\n\n    @classmethod\n    def get_logo_url(cls) -> str:\n        return cls._logo_url\n\n    @classmethod\n    def get_description(cls) -> str:\n        return cls._description\n\n    @classmethod\n    def get_default_base_job_template(cls) -> Dict:\n        if cls.job_configuration_variables is None:\n            schema = cls.job_configuration.schema()\n            # remove \"template\" key from all dicts in schema['properties'] because it is not a\n            # relevant field\n            for key, value in schema[\"properties\"].items():\n                if isinstance(value, dict):\n                    
schema[\"properties\"][key].pop(\"template\", None)\n            variables_schema = schema\n        else:\n            variables_schema = cls.job_configuration_variables.schema()\n        variables_schema.pop(\"title\", None)\n        return {\n            \"job_configuration\": cls.job_configuration.json_template(),\n            \"variables\": variables_schema,\n        }\n\n    @staticmethod\n    def get_worker_class_from_type(type: str) -> Optional[Type[\"BaseWorker\"]]:\n\"\"\"\n        Returns the worker class for a given worker type. If the worker type\n        is not recognized, returns None.\n        \"\"\"\n        load_prefect_collections()\n        worker_registry = get_registry_for_type(BaseWorker)\n        if worker_registry is not None:\n            return worker_registry.get(type)\n\n    @staticmethod\n    def get_all_available_worker_types() -> List[str]:\n\"\"\"\n        Returns all worker types available in the local registry.\n        \"\"\"\n        load_prefect_collections()\n        worker_registry = get_registry_for_type(BaseWorker)\n        if worker_registry is not None:\n            return list(worker_registry.keys())\n        return []\n\n    def get_name_slug(self):\n        return slugify(self.name)\n\n    def get_flow_run_logger(self, flow_run: \"FlowRun\") -> PrefectLogAdapter:\n        return flow_run_logger(flow_run=flow_run).getChild(\n            \"worker\",\n            extra={\n                \"worker_name\": self.name,\n                \"work_pool_name\": (\n                    self._work_pool_name if self._work_pool else \"<unknown>\"\n                ),\n                \"work_pool_id\": str(getattr(self._work_pool, \"id\", \"unknown\")),\n            },\n        )\n\n    @abc.abstractmethod\n    async def run(\n        self,\n        flow_run: \"FlowRun\",\n        configuration: BaseJobConfiguration,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> BaseWorkerResult:\n\"\"\"\n        Runs a given flow run on the current worker.\n        \"\"\"\n        raise NotImplementedError(\n            \"Workers must implement a method for running submitted flow runs\"\n        )\n\n    async def kill_infrastructure(\n        self,\n        infrastructure_pid: str,\n        configuration: BaseJobConfiguration,\n        grace_seconds: int = 30,\n    ):\n\"\"\"\n        Method for killing infrastructure created by a worker. 
Should be implemented by\n        individual workers if they support killing infrastructure.\n        \"\"\"\n        raise NotImplementedError(\n            \"This worker does not support killing infrastructure.\"\n        )\n\n    @classmethod\n    def __dispatch_key__(cls):\n        if cls.__name__ == \"BaseWorker\":\n            return None  # The base class is abstract\n        return cls.type\n\n    async def setup(self):\n\"\"\"Prepares the worker to run.\"\"\"\n        self._logger.debug(\"Setting up worker...\")\n        self._runs_task_group = anyio.create_task_group()\n        self._limiter = (\n            anyio.CapacityLimiter(self._limit) if self._limit is not None else None\n        )\n        self._client = get_client()\n        await self._client.__aenter__()\n        await self._runs_task_group.__aenter__()\n\n        self.is_setup = True\n\n    async def teardown(self, *exc_info):\n\"\"\"Cleans up resources after the worker is stopped.\"\"\"\n        self._logger.debug(\"Tearing down worker...\")\n        self.is_setup = False\n        for scope in self._scheduled_task_scopes:\n            scope.cancel()\n        if self._runs_task_group:\n            await self._runs_task_group.__aexit__(*exc_info)\n        if self._client:\n            await self._client.__aexit__(*exc_info)\n        self._runs_task_group = None\n        self._client = None\n\n    def is_worker_still_polling(self, query_interval_seconds: int) -> bool:\n\"\"\"\n        This method is invoked by a webserver healthcheck handler\n        and returns a boolean indicating if the worker has recorded a\n        scheduled flow run poll within a variable amount of time.\n\n        The `query_interval_seconds` is the same value that is used by\n        the loop services - we will evaluate if the _last_polled_time\n        was within that interval x 30 (so 10s -> 5m)\n\n        The instance property `self._last_polled_time`\n        is currently set/updated in `get_and_submit_flow_runs()`\n        \"\"\"\n        threshold_seconds = query_interval_seconds * 30\n\n        seconds_since_last_poll = (\n            pendulum.now(\"utc\") - self._last_polled_time\n        ).in_seconds()\n\n        is_still_polling = seconds_since_last_poll <= threshold_seconds\n\n        if not is_still_polling:\n            self._logger.error(\n                f\"Worker has not polled in the last {seconds_since_last_poll} seconds \"\n                \"and should be restarted\"\n            )\n\n        return is_still_polling\n\n    async def get_and_submit_flow_runs(self):\n        runs_response = await self._get_scheduled_flow_runs()\n\n        self._last_polled_time = pendulum.now(\"utc\")\n\n        return await self._submit_scheduled_flow_runs(flow_run_response=runs_response)\n\n    async def check_for_cancelled_flow_runs(self):\n        if not self.is_setup:\n            raise RuntimeError(\n                \"Worker is not set up. 
Please make sure you are running this worker \"\n                \"as an async context manager.\"\n            )\n\n        self._logger.debug(\"Checking for cancelled flow runs...\")\n\n        work_queue_filter = (\n            WorkQueueFilter(name=WorkQueueFilterName(any_=list(self._work_queues)))\n            if self._work_queues\n            else None\n        )\n\n        named_cancelling_flow_runs = await self._client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLED]),\n                    name=FlowRunFilterStateName(any_=[\"Cancelling\"]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(not_any_=list(self._cancelling_flow_run_ids)),\n            ),\n            work_pool_filter=WorkPoolFilter(\n                name=WorkPoolFilterName(any_=[self._work_pool_name])\n            ),\n            work_queue_filter=work_queue_filter,\n        )\n\n        typed_cancelling_flow_runs = await self._client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLING]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(not_any_=list(self._cancelling_flow_run_ids)),\n            ),\n            work_pool_filter=WorkPoolFilter(\n                name=WorkPoolFilterName(any_=[self._work_pool_name])\n            ),\n            work_queue_filter=work_queue_filter,\n        )\n\n        cancelling_flow_runs = named_cancelling_flow_runs + typed_cancelling_flow_runs\n\n        if cancelling_flow_runs:\n            self._logger.info(\n                f\"Found {len(cancelling_flow_runs)} flow runs awaiting cancellation.\"\n            )\n\n        for flow_run in cancelling_flow_runs:\n            self._cancelling_flow_run_ids.add(flow_run.id)\n            self._runs_task_group.start_soon(self.cancel_run, flow_run)\n\n        return cancelling_flow_runs\n\n    async def cancel_run(self, flow_run: \"FlowRun\"):\n        run_logger = self.get_flow_run_logger(flow_run)\n\n        if not flow_run.infrastructure_pid:\n            run_logger.error(\n                f\"Flow run '{flow_run.id}' does not have an infrastructure pid\"\n                \" attached. 
Cancellation cannot be guaranteed.\"\n            )\n            await self._mark_flow_run_as_cancelled(\n                flow_run,\n                state_updates={\n                    \"message\": (\n                        \"This flow run is missing infrastructure tracking information\"\n                        \" and cancellation cannot be guaranteed.\"\n                    )\n                },\n            )\n            return\n\n        try:\n            configuration = await self._get_configuration(flow_run)\n        except ObjectNotFound:\n            self._logger.warning(\n                f\"Flow run {flow_run.id!r} cannot be cancelled by this worker:\"\n                f\" associated deployment {flow_run.deployment_id!r} does not exist.\"\n            )\n\n        try:\n            await self.kill_infrastructure(\n                infrastructure_pid=flow_run.infrastructure_pid,\n                configuration=configuration,\n            )\n        except NotImplementedError:\n            self._logger.error(\n                f\"Worker type {self.type!r} does not support killing created \"\n                \"infrastructure. Cancellation cannot be guaranteed.\"\n            )\n        except InfrastructureNotFound as exc:\n            self._logger.warning(f\"{exc} Marking flow run as cancelled.\")\n            await self._mark_flow_run_as_cancelled(flow_run)\n        except InfrastructureNotAvailable as exc:\n            self._logger.warning(f\"{exc} Flow run cannot be cancelled by this worker.\")\n        except Exception:\n            run_logger.exception(\n                \"Encountered exception while killing infrastructure for flow run \"\n                f\"'{flow_run.id}'. Flow run may not be cancelled.\"\n            )\n            # We will try again on generic exceptions\n            self._cancelling_flow_run_ids.remove(flow_run.id)\n            return\n        else:\n            self._emit_flow_run_cancelled_event(\n                flow_run=flow_run, configuration=configuration\n            )\n            await self._mark_flow_run_as_cancelled(flow_run)\n            run_logger.info(f\"Cancelled flow run '{flow_run.id}'!\")\n\n    async def _update_local_work_pool_info(self):\n        try:\n            work_pool = await self._client.read_work_pool(\n                work_pool_name=self._work_pool_name\n            )\n        except ObjectNotFound:\n            if self._create_pool_if_not_found:\n                work_pool = await self._client.create_work_pool(\n                    work_pool=WorkPoolCreate(name=self._work_pool_name, type=self.type)\n                )\n                self._logger.info(f\"Worker pool {self._work_pool_name!r} created.\")\n            else:\n                self._logger.warning(f\"Worker pool {self._work_pool_name!r} not found!\")\n                return\n\n        # if the remote config type changes (or if it's being loaded for the\n        # first time), check if it matches the local type and warn if not\n        if getattr(self._work_pool, \"type\", 0) != work_pool.type:\n            if work_pool.type != self.__class__.type:\n                self._logger.warning(\n                    \"Worker type mismatch! This worker process expects type \"\n                    f\"{self.type!r} but received {work_pool.type!r}\"\n                    \" from the server. 
Unexpected behavior may occur.\"\n                )\n\n        # once the work pool is loaded, verify that it has a `base_job_template` and\n        # set it if not\n        if not work_pool.base_job_template:\n            job_template = self.__class__.get_default_base_job_template()\n            await self._set_work_pool_template(work_pool, job_template)\n            work_pool.base_job_template = job_template\n\n        self._work_pool = work_pool\n\n    async def _send_worker_heartbeat(self):\n        if self._work_pool:\n            await self._client.send_worker_heartbeat(\n                work_pool_name=self._work_pool_name, worker_name=self.name\n            )\n\n    async def sync_with_backend(self):\n\"\"\"\n        Updates the worker's local information about it's current work pool and\n        queues. Sends a worker heartbeat to the API.\n        \"\"\"\n        await self._update_local_work_pool_info()\n\n        await self._send_worker_heartbeat()\n\n        self._logger.debug(\"Worker synchronized with the Prefect API server.\")\n\n    async def _get_scheduled_flow_runs(\n        self,\n    ) -> List[\"WorkerFlowRunResponse\"]:\n\"\"\"\n        Retrieve scheduled flow runs from the work pool's queues.\n        \"\"\"\n        scheduled_before = pendulum.now(\"utc\").add(seconds=int(self._prefetch_seconds))\n        self._logger.debug(\n            f\"Querying for flow runs scheduled before {scheduled_before}\"\n        )\n        try:\n            scheduled_flow_runs = (\n                await self._client.get_scheduled_flow_runs_for_work_pool(\n                    work_pool_name=self._work_pool_name,\n                    scheduled_before=scheduled_before,\n                    work_queue_names=list(self._work_queues),\n                )\n            )\n            self._logger.debug(\n                f\"Discovered {len(scheduled_flow_runs)} scheduled_flow_runs\"\n            )\n            return scheduled_flow_runs\n        except ObjectNotFound:\n            # the pool doesn't exist; it will be created on the next\n            # heartbeat (or an appropriate warning will be logged)\n            return []\n\n    async def _submit_scheduled_flow_runs(\n        self, flow_run_response: List[\"WorkerFlowRunResponse\"]\n    ) -> List[\"FlowRun\"]:\n\"\"\"\n        Takes a list of WorkerFlowRunResponses and submits the referenced flow runs\n        for execution by the worker.\n        \"\"\"\n        submittable_flow_runs = [entry.flow_run for entry in flow_run_response]\n        submittable_flow_runs.sort(key=lambda run: run.next_scheduled_start_time)\n        for flow_run in submittable_flow_runs:\n            if flow_run.id in self._submitting_flow_run_ids:\n                continue\n\n            try:\n                if self._limiter:\n                    self._limiter.acquire_on_behalf_of_nowait(flow_run.id)\n            except anyio.WouldBlock:\n                self._logger.info(\n                    f\"Flow run limit reached; {self._limiter.borrowed_tokens} flow runs\"\n                    \" in progress.\"\n                )\n                break\n            else:\n                run_logger = self.get_flow_run_logger(flow_run)\n                run_logger.info(\n                    f\"Worker '{self.name}' submitting flow run '{flow_run.id}'\"\n                )\n                self._submitting_flow_run_ids.add(flow_run.id)\n                self._runs_task_group.start_soon(\n                    self._submit_run,\n                    flow_run,\n                )\n\n      
  return list(\n            filter(\n                lambda run: run.id in self._submitting_flow_run_ids,\n                submittable_flow_runs,\n            )\n        )\n\n    async def _check_flow_run(self, flow_run: \"FlowRun\") -> None:\n\"\"\"\n        Performs a check on a submitted flow run to warn the user if the flow run\n        was created from a deployment with a storage block.\n        \"\"\"\n        if flow_run.deployment_id:\n            deployment = await self._client.read_deployment(flow_run.deployment_id)\n            if deployment.storage_document_id:\n                raise ValueError(\n                    f\"Flow run {flow_run.id!r} was created from deployment\"\n                    f\" {deployment.name!r} which is configured with a storage block.\"\n                    \" Workers currently only support local storage. Please use an\"\n                    \" agent to execute this flow run.\"\n                )\n\n    async def _submit_run(self, flow_run: \"FlowRun\") -> None:\n\"\"\"\n        Submits a given flow run for execution by the worker.\n        \"\"\"\n        run_logger = self.get_flow_run_logger(flow_run)\n\n        try:\n            await self._check_flow_run(flow_run)\n        except (ValueError, ObjectNotFound):\n            self._logger.exception(\n                (\n                    \"Flow run %s did not pass checks and will not be submitted for\"\n                    \" execution\"\n                ),\n                flow_run.id,\n            )\n            self._submitting_flow_run_ids.remove(flow_run.id)\n            return\n\n        ready_to_submit = await self._propose_pending_state(flow_run)\n\n        if ready_to_submit:\n            readiness_result = await self._runs_task_group.start(\n                self._submit_run_and_capture_errors, flow_run\n            )\n\n            if readiness_result and not isinstance(readiness_result, Exception):\n                try:\n                    await self._client.update_flow_run(\n                        flow_run_id=flow_run.id,\n                        infrastructure_pid=str(readiness_result),\n                    )\n                except Exception:\n                    run_logger.exception(\n                        \"An error occurred while setting the `infrastructure_pid` on \"\n                        f\"flow run {flow_run.id!r}. 
The flow run will \"\n                        \"not be cancellable.\"\n                    )\n\n            run_logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n\n        else:\n            # If the run is not ready to submit, release the concurrency slot\n            if self._limiter:\n                self._limiter.release_on_behalf_of(flow_run.id)\n\n        self._submitting_flow_run_ids.remove(flow_run.id)\n\n    async def _submit_run_and_capture_errors(\n        self, flow_run: \"FlowRun\", task_status: anyio.abc.TaskStatus = None\n    ) -> Union[BaseWorkerResult, Exception]:\n        run_logger = self.get_flow_run_logger(flow_run)\n\n        try:\n            configuration = await self._get_configuration(flow_run)\n            submitted_event = self._emit_flow_run_submitted_event(configuration)\n            result = await self.run(\n                flow_run=flow_run,\n                task_status=task_status,\n                configuration=configuration,\n            )\n        except Exception as exc:\n            if not task_status._future.done():\n                # This flow run was being submitted and did not start successfully\n                run_logger.exception(\n                    f\"Failed to submit flow run '{flow_run.id}' to infrastructure.\"\n                )\n                # Mark the task as started to prevent agent crash\n                task_status.started(exc)\n                await self._propose_crashed_state(\n                    flow_run, \"Flow run could not be submitted to infrastructure\"\n                )\n            else:\n                run_logger.exception(\n                    f\"An error occurred while monitoring flow run '{flow_run.id}'. \"\n                    \"The flow run will not be marked as failed, but an issue may have \"\n                    \"occurred.\"\n                )\n            return exc\n        finally:\n            if self._limiter:\n                self._limiter.release_on_behalf_of(flow_run.id)\n\n        if not task_status._future.done():\n            run_logger.error(\n                f\"Infrastructure returned without reporting flow run '{flow_run.id}' \"\n                \"as started or raising an error. This behavior is not expected and \"\n                \"generally indicates improper implementation of infrastructure. 
The \"\n                \"flow run will not be marked as failed, but an issue may have occurred.\"\n            )\n            # Mark the task as started to prevent agent crash\n            task_status.started()\n\n        if result.status_code != 0:\n            await self._propose_crashed_state(\n                flow_run,\n                (\n                    \"Flow run infrastructure exited with non-zero status code\"\n                    f\" {result.status_code}.\"\n                ),\n            )\n\n        self._emit_flow_run_executed_event(result, configuration, submitted_event)\n\n        return result\n\n    def get_status(self):\n\"\"\"\n        Retrieves the status of the current worker including its name, current worker\n        pool, the work pool queues it is polling, and its local settings.\n        \"\"\"\n        return {\n            \"name\": self.name,\n            \"work_pool\": (\n                self._work_pool.dict(json_compatible=True)\n                if self._work_pool is not None\n                else None\n            ),\n            \"settings\": {\n                \"prefetch_seconds\": self._prefetch_seconds,\n            },\n        }\n\n    async def _get_configuration(\n        self,\n        flow_run: \"FlowRun\",\n    ) -> BaseJobConfiguration:\n        deployment = await self._client.read_deployment(flow_run.deployment_id)\n        flow = await self._client.read_flow(flow_run.flow_id)\n        configuration = await self.job_configuration.from_template_and_values(\n            base_job_template=self._work_pool.base_job_template,\n            values=deployment.infra_overrides or {},\n            client=self._client,\n        )\n        configuration.prepare_for_flow_run(\n            flow_run=flow_run, deployment=deployment, flow=flow\n        )\n        return configuration\n\n    async def _propose_pending_state(self, flow_run: \"FlowRun\") -> bool:\n        run_logger = self.get_flow_run_logger(flow_run)\n        state = flow_run.state\n        try:\n            state = await propose_state(\n                self._client, Pending(), flow_run_id=flow_run.id\n            )\n        except Abort as exc:\n            run_logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}'. 
\"\n                    f\"Server sent an abort signal: {exc}\"\n                ),\n            )\n            return False\n        except Exception:\n            run_logger.exception(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n            )\n            return False\n\n        if not state.is_pending():\n            run_logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}': \"\n                    f\"Server returned a non-pending state {state.type.value!r}\"\n                ),\n            )\n            return False\n\n        return True\n\n    async def _propose_failed_state(self, flow_run: \"FlowRun\", exc: Exception) -> None:\n        run_logger = self.get_flow_run_logger(flow_run)\n        try:\n            await propose_state(\n                self._client,\n                await exception_to_failed_state(message=\"Submission failed.\", exc=exc),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # We've already failed, no need to note the abort but we don't want it to\n            # raise in the agent process\n            pass\n        except Exception:\n            run_logger.error(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n                exc_info=True,\n            )\n\n    async def _propose_crashed_state(self, flow_run: \"FlowRun\", message: str) -> None:\n        run_logger = self.get_flow_run_logger(flow_run)\n        try:\n            state = await propose_state(\n                self._client,\n                Crashed(message=message),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # Flow run already marked as failed\n            pass\n        except Exception:\n            run_logger.exception(f\"Failed to update state of flow run '{flow_run.id}'\")\n        else:\n            if state.is_crashed():\n                run_logger.info(\n                    f\"Reported flow run '{flow_run.id}' as crashed: {message}\"\n                )\n\n    async def _mark_flow_run_as_cancelled(\n        self, flow_run: \"FlowRun\", state_updates: Optional[dict] = None\n    ) -> None:\n        state_updates = state_updates or {}\n        state_updates.setdefault(\"name\", \"Cancelled\")\n        state_updates.setdefault(\"type\", StateType.CANCELLED)\n        state = flow_run.state.copy(update=state_updates)\n\n        await self._client.set_flow_run_state(flow_run.id, state, force=True)\n\n        # Do not remove the flow run from the cancelling set immediately because\n        # the API caches responses for the `read_flow_runs` and we do not want to\n        # duplicate cancellations.\n        await self._schedule_task(\n            60 * 10, self._cancelling_flow_run_ids.remove, flow_run.id\n        )\n\n    async def _set_work_pool_template(self, work_pool, job_template):\n\"\"\"Updates the `base_job_template` for the worker's work pool server side.\"\"\"\n        await self._client.update_work_pool(\n            work_pool_name=work_pool.name,\n            work_pool=WorkPoolUpdate(\n                base_job_template=job_template,\n            ),\n        )\n\n    async def _schedule_task(self, __in_seconds: int, fn, *args, **kwargs):\n\"\"\"\n        Schedule a background task to start after some time.\n\n        These tasks will be run immediately when the worker exits instead of waiting.\n\n        The function may be async or sync. 
Async functions will be awaited.\n        \"\"\"\n\n        async def wrapper(task_status):\n            # If we are shutting down, do not sleep; otherwise sleep until the scheduled\n            # time or shutdown\n            if self.is_setup:\n                with anyio.CancelScope() as scope:\n                    self._scheduled_task_scopes.add(scope)\n                    task_status.started()\n                    await anyio.sleep(__in_seconds)\n\n                self._scheduled_task_scopes.remove(scope)\n            else:\n                task_status.started()\n\n            result = fn(*args, **kwargs)\n            if inspect.iscoroutine(result):\n                await result\n\n        await self._runs_task_group.start(wrapper)\n\n    async def __aenter__(self):\n        self._logger.debug(\"Entering worker context...\")\n        await self.setup()\n        return self\n\n    async def __aexit__(self, *exc_info):\n        self._logger.debug(\"Exiting worker context...\")\n        await self.teardown(*exc_info)\n\n    def __repr__(self):\n        return f\"Worker(pool={self._work_pool_name!r}, name={self.name!r})\"\n\n    def _event_resource(self):\n        return {\n            \"prefect.resource.id\": f\"prefect.worker.{self.type}.{self.get_name_slug()}\",\n            \"prefect.resource.name\": self.name,\n            \"prefect.version\": prefect.__version__,\n            \"prefect.worker-type\": self.type,\n        }\n\n    def _event_related_resources(\n        self,\n        configuration: Optional[BaseJobConfiguration] = None,\n        include_self: bool = False,\n    ) -> List[RelatedResource]:\n        related = []\n        if configuration:\n            related += configuration._related_resources()\n\n        if self._work_pool:\n            related.append(\n                object_as_related_resource(\n                    kind=\"work-pool\", role=\"work-pool\", object=self._work_pool\n                )\n            )\n\n        if include_self:\n            worker_resource = self._event_resource()\n            worker_resource[\"prefect.resource.role\"] = \"worker\"\n            related.append(RelatedResource(__root__=worker_resource))\n\n        return related\n\n    def _emit_flow_run_submitted_event(\n        self, configuration: BaseJobConfiguration\n    ) -> Event:\n        return emit_event(\n            event=\"prefect.worker.submitted-flow-run\",\n            resource=self._event_resource(),\n            related=self._event_related_resources(configuration=configuration),\n        )\n\n    def _emit_flow_run_executed_event(\n        self,\n        result: BaseWorkerResult,\n        configuration: BaseJobConfiguration,\n        submitted_event: Event,\n    ):\n        related = self._event_related_resources(configuration=configuration)\n\n        for resource in related:\n            if resource.role == \"flow-run\":\n                resource[\"prefect.infrastructure.identifier\"] = str(result.identifier)\n                resource[\"prefect.infrastructure.status-code\"] = str(result.status_code)\n\n        emit_event(\n            event=\"prefect.worker.executed-flow-run\",\n            resource=self._event_resource(),\n            related=related,\n            follows=submitted_event,\n        )\n\n    async def _emit_worker_started_event(self) -> Event:\n        return emit_event(\n            \"prefect.worker.started\",\n            resource=self._event_resource(),\n            related=self._event_related_resources(),\n        )\n\n    async def 
_emit_worker_stopped_event(self, started_event: Event):\n        emit_event(\n            \"prefect.worker.stopped\",\n            resource=self._event_resource(),\n            related=self._event_related_resources(),\n            follows=started_event,\n        )\n\n    def _emit_flow_run_cancelled_event(\n        self, flow_run: \"FlowRun\", configuration: BaseJobConfiguration\n    ):\n        related = self._event_related_resources(configuration=configuration)\n\n        for resource in related:\n            if resource.role == \"flow-run\":\n                resource[\"prefect.infrastructure.identifier\"] = str(\n                    flow_run.infrastructure_pid\n                )\n\n        emit_event(\n            event=\"prefect.worker.cancelled-flow-run\",\n            resource=self._event_resource(),\n            related=related,\n        )\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.get_all_available_worker_types","title":"get_all_available_worker_types staticmethod","text":"

Returns all worker types available in the local registry.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
@staticmethod\ndef get_all_available_worker_types() -> List[str]:\n\"\"\"\n    Returns all worker types available in the local registry.\n    \"\"\"\n    load_prefect_collections()\n    worker_registry = get_registry_for_type(BaseWorker)\n    if worker_registry is not None:\n        return list(worker_registry.keys())\n    return []\n
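
As a quick example, a minimal sketch of listing the worker types registered locally (the output depends on which worker collections are installed in your environment):

from prefect.workers.base import BaseWorker\n\n# Returns a list of registered worker type names, e.g. ['process', ...]\nprint(BaseWorker.get_all_available_worker_types())\n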
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.get_status","title":"get_status","text":"

Retrieves the status of the current worker including its name, current worker pool, the work pool queues it is polling, and its local settings.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
def get_status(self):\n\"\"\"\n    Retrieves the status of the current worker including its name, current worker\n    pool, the work pool queues it is polling, and its local settings.\n    \"\"\"\n    return {\n        \"name\": self.name,\n        \"work_pool\": (\n            self._work_pool.dict(json_compatible=True)\n            if self._work_pool is not None\n            else None\n        ),\n        \"settings\": {\n            \"prefetch_seconds\": self._prefetch_seconds,\n        },\n    }\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.get_worker_class_from_type","title":"get_worker_class_from_type staticmethod","text":"

Returns the worker class for a given worker type. If the worker type is not recognized, returns None.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
@staticmethod\ndef get_worker_class_from_type(type: str) -> Optional[Type[\"BaseWorker\"]]:\n\"\"\"\n    Returns the worker class for a given worker type. If the worker type\n    is not recognized, returns None.\n    \"\"\"\n    load_prefect_collections()\n    worker_registry = get_registry_for_type(BaseWorker)\n    if worker_registry is not None:\n        return worker_registry.get(type)\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.is_worker_still_polling","title":"is_worker_still_polling","text":"

This method is invoked by a webserver healthcheck handler and returns a boolean indicating if the worker has recorded a scheduled flow run poll within a variable amount of time.

The query_interval_seconds is the same value that is used by the loop services - we will evaluate if the _last_polled_time was within that interval x 30 (so 10s -> 5m)

The instance property self._last_polled_time is currently set/updated in get_and_submit_flow_runs()

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
def is_worker_still_polling(self, query_interval_seconds: int) -> bool:\n\"\"\"\n    This method is invoked by a webserver healthcheck handler\n    and returns a boolean indicating if the worker has recorded a\n    scheduled flow run poll within a variable amount of time.\n\n    The `query_interval_seconds` is the same value that is used by\n    the loop services - we will evaluate if the _last_polled_time\n    was within that interval x 30 (so 10s -> 5m)\n\n    The instance property `self._last_polled_time`\n    is currently set/updated in `get_and_submit_flow_runs()`\n    \"\"\"\n    threshold_seconds = query_interval_seconds * 30\n\n    seconds_since_last_poll = (\n        pendulum.now(\"utc\") - self._last_polled_time\n    ).in_seconds()\n\n    is_still_polling = seconds_since_last_poll <= threshold_seconds\n\n    if not is_still_polling:\n        self._logger.error(\n            f\"Worker has not polled in the last {seconds_since_last_poll} seconds \"\n            \"and should be restarted\"\n        )\n\n    return is_still_polling\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.kill_infrastructure","title":"kill_infrastructure async","text":"

Method for killing infrastructure created by a worker. Should be implemented by individual workers if they support killing infrastructure.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
async def kill_infrastructure(\n    self,\n    infrastructure_pid: str,\n    configuration: BaseJobConfiguration,\n    grace_seconds: int = 30,\n):\n\"\"\"\n    Method for killing infrastructure created by a worker. Should be implemented by\n    individual workers if they support killing infrastructure.\n    \"\"\"\n    raise NotImplementedError(\n        \"This worker does not support killing infrastructure.\"\n    )\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.run","title":"run abstractmethod async","text":"

Runs a given flow run on the current worker.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
@abc.abstractmethod\nasync def run(\n    self,\n    flow_run: \"FlowRun\",\n    configuration: BaseJobConfiguration,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n) -> BaseWorkerResult:\n\"\"\"\n    Runs a given flow run on the current worker.\n    \"\"\"\n    raise NotImplementedError(\n        \"Workers must implement a method for running submitted flow runs\"\n    )\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.setup","title":"setup async","text":"

Prepares the worker to run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
async def setup(self):\n\"\"\"Prepares the worker to run.\"\"\"\n    self._logger.debug(\"Setting up worker...\")\n    self._runs_task_group = anyio.create_task_group()\n    self._limiter = (\n        anyio.CapacityLimiter(self._limit) if self._limit is not None else None\n    )\n    self._client = get_client()\n    await self._client.__aenter__()\n    await self._runs_task_group.__aenter__()\n\n    self.is_setup = True\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.sync_with_backend","title":"sync_with_backend async","text":"

Updates the worker's local information about its current work pool and queues. Sends a worker heartbeat to the API.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
async def sync_with_backend(self):\n\"\"\"\n    Updates the worker's local information about it's current work pool and\n    queues. Sends a worker heartbeat to the API.\n    \"\"\"\n    await self._update_local_work_pool_info()\n\n    await self._send_worker_heartbeat()\n\n    self._logger.debug(\"Worker synchronized with the Prefect API server.\")\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.teardown","title":"teardown async","text":"

Cleans up resources after the worker is stopped.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
async def teardown(self, *exc_info):\n\"\"\"Cleans up resources after the worker is stopped.\"\"\"\n    self._logger.debug(\"Tearing down worker...\")\n    self.is_setup = False\n    for scope in self._scheduled_task_scopes:\n        scope.cancel()\n    if self._runs_task_group:\n        await self._runs_task_group.__aexit__(*exc_info)\n    if self._client:\n        await self._client.__aexit__(*exc_info)\n    self._runs_task_group = None\n    self._client = None\n
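
To tie these lifecycle methods together, here is a minimal sketch that drives a concrete worker (the Process worker) as an async context manager; it assumes the experimental workers feature is enabled and that a Prefect API is reachable from your environment:

import asyncio\n\nfrom prefect.workers.process import ProcessWorker\n\nasync def main():\n    # __aenter__ calls setup() and __aexit__ calls teardown() for us\n    async with ProcessWorker(work_pool_name=\"my-work-pool\") as worker:\n        await worker.sync_with_backend()\n        await worker.get_and_submit_flow_runs()\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n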
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/process/","title":"prefect.workers.process","text":"","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process","title":"prefect.workers.process","text":"

Module containing the Process worker used for executing flow runs as subprocesses.

Note this module is in beta. The interfaces within may change without notice.

To start a Process worker, run the following command:

prefect worker start --pool 'my-work-pool' --type process\n

Replace my-work-pool with the name of the work pool you want the worker to poll for flow runs.

For more information about work pools and workers, check out the Prefect docs.

","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process.ProcessJobConfiguration","title":"ProcessJobConfiguration","text":"

Bases: BaseJobConfiguration

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/process.py
class ProcessJobConfiguration(BaseJobConfiguration):\n    stream_output: bool = Field(default=True)\n    working_dir: Optional[Path] = Field(default=None)\n\n    @validator(\"working_dir\")\n    def validate_command(cls, v):\n\"\"\"Make sure that the working directory is formatted for the current platform.\"\"\"\n        if v:\n            return relative_path_to_current_platform(v)\n        return v\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        super().prepare_for_flow_run(flow_run, deployment, flow)\n\n        self.env = {**os.environ, **self.env}\n        self.command = (\n            f\"{sys.executable} -m prefect.engine\"\n            if self.command == self._base_flow_run_command()\n            else self.command\n        )\n
","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process.ProcessJobConfiguration.validate_command","title":"validate_command","text":"

Make sure that the working directory is formatted for the current platform.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/process.py
@validator(\"working_dir\")\ndef validate_command(cls, v):\n\"\"\"Make sure that the working directory is formatted for the current platform.\"\"\"\n    if v:\n        return relative_path_to_current_platform(v)\n    return v\n
","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process.ProcessWorkerResult","title":"ProcessWorkerResult","text":"

Bases: BaseWorkerResult

Contains information about the final state of a completed process

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/process.py
class ProcessWorkerResult(BaseWorkerResult):\n\"\"\"Contains information about the final state of a completed process\"\"\"\n
","tags":["Python API","workers","process"]},{"location":"api-ref/python/","title":"Python SDK API","text":"

The Prefect Python SDK API is used to build, test, and execute workflows against the Prefect orchestration engine. This is the primary user-facing API.

Select links in the left navigation menu to explore.

","tags":["API","Python SDK"]},{"location":"api-ref/rest-api/","title":"REST API","text":"

The Prefect REST API is used for communicating data from clients to a Prefect server so that orchestration can be performed. This API is consumed by clients such as the Prefect Python SDK or the server dashboard.

Prefect Cloud and a locally hosted Prefect server each provide a REST API.

","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#interacting-with-the-rest-api","title":"Interacting with the REST API","text":"

You have several options for interacting with the Prefect REST API, including the PrefectClient Python client, standard HTTP libraries such as Requests, and command-line tools such as curl. Examples of each follow.

","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#prefectclient-with-a-prefect-server","title":"PrefectClient with a Prefect server","text":"

This example uses PrefectClient with a locally hosted Prefect server:

import asyncio\nfrom prefect.client import get_client\n\nasync def get_flows():\n    client = get_client()\n    r = await client.read_flows(limit=5)\n    return r\n\nif __name__ == \"__main__\":\n    r = asyncio.run(get_flows())\n\n    for flow in r:\n        print(flow.name, flow.id)\n

Output:

cat-facts 58ed68b1-0201-4f37-adef-0ea24bd2a022\ndog-facts e7c0403d-44e7-45cf-a6c8-79117b7f3766\nsloth-facts 771c0574-f5bf-4f59-a69d-3be3e061a62d\ncapybara-facts fbadaf8b-584f-48b9-b092-07d351edd424\nlemur-facts 53f710e7-3b0f-4b2f-ab6b-44934111818c\n
","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#requests-with-prefect","title":"Requests with Prefect","text":"

This example uses the Requests library with Prefect Cloud to return the five newest artifacts.

import requests\n\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/abc-my-cloud-account-id-is-here/workspaces/123-my-workspace-id-is-here\"\nPREFECT_API_KEY=\"123abc_my_api_key_goes_here\"\ndata = {\n    \"sort\": \"CREATED_DESC\",\n    \"limit\": 5,\n    \"artifacts\": {\n        \"key\": {\n            \"exists_\": True\n        }\n    }\n}\n\nheaders = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\nendpoint = f\"{PREFECT_API_URL}/artifacts/filter\"\n\nresponse = requests.post(endpoint, headers=headers, json=data)\nassert response.status_code == 200\nfor artifact in response.json():\n    print(artifact)\n
","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#curl-with-prefect-cloud","title":"curl with Prefect Cloud","text":"

This example uses curl with Prefect Cloud to create a flow run:

ACCOUNT_ID=\"abc-my-cloud-account-id-goes-here\"\nWORKSPACE_ID=\"123-my-workspace-id-goes-here\"\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/$ACCOUNT_ID/workspaces/$WORKSPACE_ID\"\nPREFECT_API_KEY=\"123abc_my_api_key_goes_here\"\nDEPLOYMENT_ID=\"my_deployment_id\"\n\ncurl --location --request POST \"$PREFECT_API_URL/deployments/$DEPLOYMENT_ID/create_flow_run\" \\\n--header \"Content-Type: application/json\" \\\n--header \"Authorization: Bearer $PREFECT_API_KEY\" \\\n--header \"X-PREFECT-API-VERSION: 0.8.4\" \\\n--data-raw \"{}\"\n

Note that in this example --data-raw \"{}\" is required and is where you can specify other aspects of the flow run such as the state. Windows users should substitute ^ for \\ in multi-line commands.

","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#finding-your-prefect-cloud-details","title":"Finding your Prefect Cloud details","text":"

When working with the Prefect Cloud REST API you will need your Account ID and often the Workspace ID for the workspace you want to interact with. You can find both IDs for a Prefect profile in the CLI with prefect profile inspect my_profile. This command will also display your Prefect API key, as shown below:

PREFECT_API_URL='https://api.prefect.cloud/api/accounts/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here'\nPREFECT_API_KEY='123abc_my_api_key_is_here'\n

Alternatively, view your Account ID and Workspace ID in your browser URL. For example: https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here.

","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#rest-guidelines","title":"REST Guidelines","text":"

The REST APIs adhere to the following guidelines:

","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#http-verbs","title":"HTTP verbs","text":"","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#filtering","title":"Filtering","text":"

Objects can be filtered by providing filter criteria in the body of a POST request. When multiple criteria are specified, logical AND will be applied to the criteria.

Filter criteria are structured as follows:

{\n\"objects\": {\n\"object_field\": {\n\"field_operator_\": <field_value>\n}\n}\n}\n

In this example, objects is the name of the collection to filter over (for example, flows). The collection can be either the object being queried for (flows for POST /flows/filter) or a related object (flow_runs for POST /flows/filter).

object_field is the name of the field over which to filter (name for flows). Note that some objects may have nested object fields, such as {flow_run: {state: {type: {any_: []}}}}.

field_operator_ is the operator to apply to a field when filtering. Common examples include any_ (the field matches any of the provided values), all_ (the field contains all of the provided values), not_any_ (the field matches none of the provided values), and exists_ (whether or not the field is set).

For example, to query for flows with the tag \"database\" and failed flow runs, POST /flows/filter with the following request body:

{\n\"flows\": {\n\"tags\": {\n\"all_\": [\"database\"]\n}\n},\n\"flow_runs\": {\n\"state\": {\n\"type\": {\n\"any_\": [\"FAILED\"]\n}\n}\n}\n}\n
","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#openapi","title":"OpenAPI","text":"

The Prefect REST API can be fully described with an OpenAPI 3.0 compliant document. OpenAPI is a standard specification for describing REST APIs.

To generate the Prefect server's complete OpenAPI document, run the following commands in an interactive Python session:

from prefect.server.api.server import create_app\n\napp = create_app()\nopenapi_doc = app.openapi()\n

This document allows you to generate your own API client, explore the API using an API inspection tool, or write tests to ensure API compliance.
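
For example, a short sketch that writes the generated document to disk so it can be passed to client generators or inspection tools:

import json\n\nfrom prefect.server.api.server import create_app\n\napp = create_app()\nopenapi_doc = app.openapi()\n\n# Persist the OpenAPI 3.0 document for use with code generators or API inspection tools\nwith open(\"prefect-openapi.json\", \"w\") as f:\n    json.dump(openapi_doc, f, indent=2)\n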

","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/server/","title":"Server API","text":"

The Prefect server API is used by the server to work with workflow metadata and enforce orchestration logic. This API is primarily used by Prefect developers.

Select links in the left navigation menu to explore.

","tags":["API","Server API"]},{"location":"api-ref/server/api/admin/","title":"server.api.admin","text":"","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin","title":"prefect.server.api.admin","text":"

Routes for admin-level interactions with the Prefect REST API.

","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.clear_database","title":"clear_database async","text":"

Clear all database tables without dropping them.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/admin.py
@router.post(\"/database/clear\", status_code=status.HTTP_204_NO_CONTENT)\nasync def clear_database(\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    confirm: bool = Body(\n        False,\n        embed=True,\n        description=\"Pass confirm=True to confirm you want to modify the database.\",\n    ),\n    response: Response = None,\n):\n\"\"\"Clear all database tables without dropping them.\"\"\"\n    if not confirm:\n        response.status_code = status.HTTP_400_BAD_REQUEST\n        return\n    async with db.session_context(begin_transaction=True) as session:\n        # work pool has a circular dependency on pool queue; delete it first\n        await session.execute(db.WorkPool.__table__.delete())\n        for table in reversed(db.Base.metadata.sorted_tables):\n            await session.execute(table.delete())\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.create_database","title":"create_database async","text":"

Create all database objects.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/admin.py
@router.post(\"/database/create\", status_code=status.HTTP_204_NO_CONTENT)\nasync def create_database(\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    confirm: bool = Body(\n        False,\n        embed=True,\n        description=\"Pass confirm=True to confirm you want to modify the database.\",\n    ),\n    response: Response = None,\n):\n\"\"\"Create all database objects.\"\"\"\n    if not confirm:\n        response.status_code = status.HTTP_400_BAD_REQUEST\n        return\n\n    await db.create_db()\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.drop_database","title":"drop_database async","text":"

Drop all database objects.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/admin.py
@router.post(\"/database/drop\", status_code=status.HTTP_204_NO_CONTENT)\nasync def drop_database(\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    confirm: bool = Body(\n        False,\n        embed=True,\n        description=\"Pass confirm=True to confirm you want to modify the database.\",\n    ),\n    response: Response = None,\n):\n\"\"\"Drop all database objects.\"\"\"\n    if not confirm:\n        response.status_code = status.HTTP_400_BAD_REQUEST\n        return\n\n    await db.drop_db()\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.read_settings","title":"read_settings async","text":"

Get the current Prefect REST API settings.

Secret setting values will be obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/admin.py
@router.get(\"/settings\")\nasync def read_settings() -> prefect.settings.Settings:\n\"\"\"\n    Get the current Prefect REST API settings.\n\n    Secret setting values will be obfuscated.\n    \"\"\"\n    return prefect.settings.get_current_settings().with_obfuscated_secrets()\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.read_version","title":"read_version async","text":"

Returns the Prefect version number

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/admin.py
@router.get(\"/version\")\nasync def read_version() -> str:\n\"\"\"Returns the Prefect version number\"\"\"\n    return prefect.__version__\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/dependencies/","title":"server.api.dependencies","text":"","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies","title":"prefect.server.api.dependencies","text":"

Utilities for injecting FastAPI dependencies.

","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies.EnforceMinimumAPIVersion","title":"EnforceMinimumAPIVersion","text":"

FastAPI Dependency used to check compatibility between the version of the api and a given request.

Looks for the header 'X-PREFECT-API-VERSION' in the request and compares it to the api's version. Rejects requests that are lower than the minimum version.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/dependencies.py
class EnforceMinimumAPIVersion:\n\"\"\"\n    FastAPI Dependency used to check compatibility between the version of the api\n    and a given request.\n\n    Looks for the header 'X-PREFECT-API-VERSION' in the request and compares it\n    to the api's version. Rejects requests that are lower than the minimum version.\n    \"\"\"\n\n    def __init__(self, minimum_api_version: str, logger: logging.Logger):\n        self.minimum_api_version = minimum_api_version\n        versions = [int(v) for v in minimum_api_version.split(\".\")]\n        self.api_major = versions[0]\n        self.api_minor = versions[1]\n        self.api_patch = versions[2]\n        self.logger = logger\n\n    async def __call__(\n        self,\n        x_prefect_api_version: str = Header(None),\n    ):\n        request_version = x_prefect_api_version\n\n        # if no version header, assume latest and continue\n        if not request_version:\n            return\n\n        # parse version\n        try:\n            major, minor, patch = [int(v) for v in request_version.split(\".\")]\n        except ValueError:\n            await self._notify_of_invalid_value(request_version)\n            raise HTTPException(\n                status_code=status.HTTP_400_BAD_REQUEST,\n                detail=(\n                    \"Invalid X-PREFECT-API-VERSION header format.\"\n                    f\"Expected header in format 'x.y.z' but received {request_version}\"\n                ),\n            )\n\n        if (major, minor, patch) < (self.api_major, self.api_minor, self.api_patch):\n            await self._notify_of_outdated_version(request_version)\n            raise HTTPException(\n                status_code=status.HTTP_400_BAD_REQUEST,\n                detail=(\n                    f\"The request specified API version {request_version} but this \"\n                    f\"server requires version {self.minimum_api_version} or higher.\"\n                ),\n            )\n\n    async def _notify_of_invalid_value(self, request_version: str):\n        self.logger.error(\n            f\"Invalid X-PREFECT-API-VERSION header format: '{request_version}'\"\n        )\n\n    async def _notify_of_outdated_version(self, request_version: str):\n        self.logger.error(\n            f\"X-PREFECT-API-VERSION header specifies version '{request_version}' \"\n            f\"but minimum allowed version is '{self.minimum_api_version}'\"\n        )\n
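
A minimal sketch (assumed wiring for illustration, not the Prefect server's actual setup) of applying the dependency to every route of a FastAPI application:

import logging\n\nfrom fastapi import Depends, FastAPI\n\nfrom prefect.server.api.dependencies import EnforceMinimumAPIVersion\n\nenforce_min_version = EnforceMinimumAPIVersion(\n    minimum_api_version=\"0.8.4\",  # example value; the real server derives this from its API version\n    logger=logging.getLogger(\"api-version\"),\n)\n\napp = FastAPI(dependencies=[Depends(enforce_min_version)])\n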
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies.LimitBody","title":"LimitBody","text":"

A fastapi.Depends factory for pulling a limit: int parameter from the request body while determining the default from the current settings.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/dependencies.py
def LimitBody() -> Depends:\n\"\"\"\n    A `fastapi.Depends` factory for pulling a `limit: int` parameter from the\n    request body while determing the default from the current settings.\n    \"\"\"\n\n    def get_limit(\n        limit: int = Body(\n            None,\n            description=\"Defaults to PREFECT_API_DEFAULT_LIMIT if not provided.\",\n        )\n    ):\n        default_limit = PREFECT_API_DEFAULT_LIMIT.value()\n        limit = limit if limit is not None else default_limit\n        if not limit >= 0:\n            raise HTTPException(\n                status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n                detail=\"Invalid limit: must be greater than or equal to 0.\",\n            )\n        if limit > default_limit:\n            raise HTTPException(\n                status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n                detail=f\"Invalid limit: must be less than or equal to {default_limit}.\",\n            )\n        return limit\n\n    return Depends(get_limit)\n
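
A minimal sketch of how this factory is wired into a route, mirroring its use in the routes below; the router and response shape here are hypothetical:

from typing import List\n\nfrom fastapi import APIRouter\n\nfrom prefect.server.api import dependencies\n\nrouter = APIRouter()\n\n@router.post(\"/filter\")\nasync def read_widgets(limit: int = dependencies.LimitBody()) -> List[dict]:\n    # `limit` is read from the request body and validated against\n    # PREFECT_API_DEFAULT_LIMIT before this function runs\n    return [{\"index\": i} for i in range(limit)]\n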
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/deployments/","title":"server.api.deployments","text":"","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments","title":"prefect.server.api.deployments","text":"

Routes for interacting with Deployment objects.

","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.count_deployments","title":"count_deployments async","text":"

Count deployments.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.post(\"/count\")\nasync def count_deployments(\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_pool_queues: schemas.filters.WorkQueueFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> int:\n\"\"\"\n    Count deployments.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.deployments.count_deployments(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n        )\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.create_deployment","title":"create_deployment async","text":"

Gracefully creates a new deployment from the provided schema. If a deployment with the same name and flow_id already exists, the deployment is updated.

If the deployment has an active schedule, flow runs will be scheduled. When upserting, any scheduled runs from the existing deployment will be deleted.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.post(\"/\")\nasync def create_deployment(\n    deployment: schemas.actions.DeploymentCreate,\n    response: Response,\n    worker_lookups: WorkerLookups = Depends(WorkerLookups),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.DeploymentResponse:\n\"\"\"\n    Gracefully creates a new deployment from the provided schema. If a deployment with\n    the same name and flow_id already exists, the deployment is updated.\n\n    If the deployment has an active schedule, flow runs will be scheduled.\n    When upserting, any scheduled runs from the existing deployment will be deleted.\n    \"\"\"\n\n    async with db.session_context(begin_transaction=True) as session:\n        if (\n            deployment.work_pool_name\n            and deployment.work_pool_name != DEFAULT_AGENT_WORK_POOL_NAME\n        ):\n            # Make sure that deployment is valid before beginning creation process\n            work_pool = await models.workers.read_work_pool_by_name(\n                session=session, work_pool_name=deployment.work_pool_name\n            )\n            if work_pool is None:\n                raise HTTPException(\n                    status_code=status.HTTP_404_NOT_FOUND,\n                    detail=f'Work pool \"{deployment.work_pool_name}\" not found.',\n                )\n            try:\n                deployment.check_valid_configuration(work_pool.base_job_template)\n            except (MissingVariableError, jsonschema.exceptions.ValidationError) as exc:\n                raise HTTPException(\n                    status_code=status.HTTP_409_CONFLICT,\n                    detail=f\"Error creating deployment: {exc!r}\",\n                )\n\n        # hydrate the input model into a full model\n        deployment_dict = deployment.dict(exclude={\"work_pool_name\"})\n        if deployment.work_pool_name and deployment.work_queue_name:\n            # If a specific pool name/queue name combination was provided, get the\n            # ID for that work pool queue.\n            deployment_dict[\"work_queue_id\"] = (\n                await worker_lookups._get_work_queue_id_from_name(\n                    session=session,\n                    work_pool_name=deployment.work_pool_name,\n                    work_queue_name=deployment.work_queue_name,\n                    create_queue_if_not_found=True,\n                )\n            )\n        elif deployment.work_pool_name:\n            # If just a pool name was provided, get the ID for its default\n            # work pool queue.\n            deployment_dict[\"work_queue_id\"] = (\n                await worker_lookups._get_default_work_queue_id_from_work_pool_name(\n                    session=session,\n                    work_pool_name=deployment.work_pool_name,\n                )\n            )\n        elif deployment.work_queue_name:\n            # If just a queue name was provided, ensure that the queue exists and\n            # get its ID.\n            work_queue = await models.work_queues._ensure_work_queue_exists(\n                session=session, name=deployment.work_queue_name\n            )\n            deployment_dict[\"work_queue_id\"] = work_queue.id\n\n        deployment = schemas.core.Deployment(**deployment_dict)\n        # check to see if relevant blocks exist, allowing us throw a useful error message\n        # for debugging\n        if deployment.infrastructure_document_id is not None:\n            infrastructure_block = (\n                await 
models.block_documents.read_block_document_by_id(\n                    session=session,\n                    block_document_id=deployment.infrastructure_document_id,\n                )\n            )\n            if not infrastructure_block:\n                raise HTTPException(\n                    status_code=status.HTTP_409_CONFLICT,\n                    detail=(\n                        \"Error creating deployment. Could not find infrastructure\"\n                        f\" block with id: {deployment.infrastructure_document_id}. This\"\n                        \" usually occurs when applying a deployment specification that\"\n                        \" was built against a different Prefect database / workspace.\"\n                    ),\n                )\n\n        if deployment.storage_document_id is not None:\n            infrastructure_block = (\n                await models.block_documents.read_block_document_by_id(\n                    session=session,\n                    block_document_id=deployment.storage_document_id,\n                )\n            )\n            if not infrastructure_block:\n                raise HTTPException(\n                    status_code=status.HTTP_409_CONFLICT,\n                    detail=(\n                        \"Error creating deployment. Could not find storage block with\"\n                        f\" id: {deployment.storage_document_id}. This usually occurs\"\n                        \" when applying a deployment specification that was built\"\n                        \" against a different Prefect database / workspace.\"\n                    ),\n                )\n\n        now = pendulum.now(\"UTC\")\n        model = await models.deployments.create_deployment(\n            session=session, deployment=deployment\n        )\n\n        if model.created >= now:\n            response.status_code = status.HTTP_201_CREATED\n\n        return schemas.responses.DeploymentResponse.from_orm(model)\n
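
A hedged sketch of the upsert behavior from the client side; the flow_id below is a placeholder and the route prefix is assumed:

import httpx\n\n# Create (or update) a deployment for an existing flow; flow_id is a placeholder UUID\nresponse = httpx.post(\n    \"http://127.0.0.1:4200/api/deployments/\",\n    json={\n        \"name\": \"my-deployment\",\n        \"flow_id\": \"00000000-0000-0000-0000-000000000000\",\n    },\n)\nresponse.raise_for_status()\n# 201 when a new deployment was created, 200 when an existing one was updated\nprint(response.status_code)\n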
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.create_flow_run_from_deployment","title":"create_flow_run_from_deployment async","text":"

Create a flow run from a deployment.

Any parameters not provided will be inferred from the deployment's parameters. If tags are not provided, the deployment's tags will be used.

If no state is provided, the flow run will be created in a SCHEDULED state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.post(\"/{id}/create_flow_run\")\nasync def create_flow_run_from_deployment(\n    flow_run: schemas.actions.DeploymentFlowRunCreate,\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    worker_lookups: WorkerLookups = Depends(WorkerLookups),\n    response: Response = None,\n) -> schemas.responses.FlowRunResponse:\n\"\"\"\n    Create a flow run from a deployment.\n\n    Any parameters not provided will be inferred from the deployment's parameters.\n    If tags are not provided, the deployment's tags will be used.\n\n    If no state is provided, the flow run will be created in a SCHEDULED state.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        # get relevant info from the deployment\n        deployment = await models.deployments.read_deployment(\n            session=session, deployment_id=deployment_id\n        )\n\n        if not deployment:\n            raise HTTPException(\n                status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n\n        parameters = deployment.parameters\n        parameters.update(flow_run.parameters or {})\n\n        work_queue_name = deployment.work_queue_name\n        work_queue_id = deployment.work_queue_id\n\n        if flow_run.work_queue_name:\n            # cant mutate the ORM model or else it will commit the changes back\n            work_queue_id = await worker_lookups._get_work_queue_id_from_name(\n                session=session,\n                work_pool_name=deployment.work_queue.work_pool.name,\n                work_queue_name=flow_run.work_queue_name,\n                create_queue_if_not_found=True,\n            )\n            work_queue_name = flow_run.work_queue_name\n\n        # hydrate the input model into a full flow run / state model\n        flow_run = schemas.core.FlowRun(\n            **flow_run.dict(\n                exclude={\n                    \"parameters\",\n                    \"tags\",\n                    \"infrastructure_document_id\",\n                    \"work_queue_name\",\n                }\n            ),\n            flow_id=deployment.flow_id,\n            deployment_id=deployment.id,\n            parameters=parameters,\n            tags=set(deployment.tags).union(flow_run.tags),\n            infrastructure_document_id=(\n                flow_run.infrastructure_document_id\n                or deployment.infrastructure_document_id\n            ),\n            work_queue_name=work_queue_name,\n            work_queue_id=work_queue_id,\n        )\n\n        if not flow_run.state:\n            flow_run.state = schemas.states.Scheduled()\n\n        now = pendulum.now(\"UTC\")\n        model = await models.flow_runs.create_flow_run(\n            session=session, flow_run=flow_run\n        )\n        if model.created >= now:\n            response.status_code = status.HTTP_201_CREATED\n        return schemas.responses.FlowRunResponse.from_orm(model)\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.delete_deployment","title":"delete_deployment async","text":"

Delete a deployment by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n\"\"\"\n    Delete a deployment by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.deployments.delete_deployment(\n            session=session, deployment_id=deployment_id\n        )\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n        )\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.read_deployment","title":"read_deployment async","text":"

Get a deployment by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.get(\"/{id}\")\nasync def read_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.DeploymentResponse:\n\"\"\"\n    Get a deployment by id.\n    \"\"\"\n    async with db.session_context() as session:\n        deployment = await models.deployments.read_deployment(\n            session=session, deployment_id=deployment_id\n        )\n        if not deployment:\n            raise HTTPException(\n                status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n        return schemas.responses.DeploymentResponse.from_orm(deployment)\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.read_deployment_by_name","title":"read_deployment_by_name async","text":"

Get a deployment using the name of the flow and the deployment.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.get(\"/name/{flow_name}/{deployment_name}\")\nasync def read_deployment_by_name(\n    flow_name: str = Path(..., description=\"The name of the flow\"),\n    deployment_name: str = Path(..., description=\"The name of the deployment\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.DeploymentResponse:\n\"\"\"\n    Get a deployment using the name of the flow and the deployment.\n    \"\"\"\n    async with db.session_context() as session:\n        deployment = await models.deployments.read_deployment_by_name(\n            session=session, name=deployment_name, flow_name=flow_name\n        )\n        if not deployment:\n            raise HTTPException(\n                status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n        return schemas.responses.DeploymentResponse.from_orm(deployment)\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.read_deployments","title":"read_deployments async","text":"

Query for deployments.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.post(\"/filter\")\nasync def read_deployments(\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_pool_queues: schemas.filters.WorkQueueFilter = None,\n    sort: schemas.sorting.DeploymentSort = Body(\n        schemas.sorting.DeploymentSort.NAME_ASC\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.DeploymentResponse]:\n\"\"\"\n    Query for deployments.\n    \"\"\"\n    async with db.session_context() as session:\n        response = await models.deployments.read_deployments(\n            session=session,\n            offset=offset,\n            sort=sort,\n            limit=limit,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n        )\n        return [\n            schemas.responses.DeploymentResponse.from_orm(orm_deployment=deployment)\n            for deployment in response\n        ]\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.schedule_deployment","title":"schedule_deployment async","text":"

Schedule runs for a deployment. For backfills, provide start/end times in the past.

This function will generate the minimum number of runs that satisfy the min and max times, and the min and max counts. Specifically, the following order will be respected.

- Runs will be generated starting on or after the `start_time`\n- No more than `max_runs` runs will be generated\n- No runs will be generated after `end_time` is reached\n- At least `min_runs` runs will be generated\n- Runs will be generated until at least `start_time + min_time` is reached\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.post(\"/{id}/schedule\")\nasync def schedule_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    start_time: DateTimeTZ = Body(None, description=\"The earliest date to schedule\"),\n    end_time: DateTimeTZ = Body(None, description=\"The latest date to schedule\"),\n    min_time: datetime.timedelta = Body(\n        None,\n        description=(\n            \"Runs will be scheduled until at least this long after the `start_time`\"\n        ),\n    ),\n    min_runs: int = Body(None, description=\"The minimum number of runs to schedule\"),\n    max_runs: int = Body(None, description=\"The maximum number of runs to schedule\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> None:\n\"\"\"\n    Schedule runs for a deployment. For backfills, provide start/end times in the past.\n\n    This function will generate the minimum number of runs that satisfy the min\n    and max times, and the min and max counts. Specifically, the following order\n    will be respected.\n\n        - Runs will be generated starting on or after the `start_time`\n        - No more than `max_runs` runs will be generated\n        - No runs will be generated after `end_time` is reached\n        - At least `min_runs` runs will be generated\n        - Runs will be generated until at least `start_time + min_time` is reached\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        await models.deployments.schedule_runs(\n            session=session,\n            deployment_id=deployment_id,\n            start_time=start_time,\n            min_time=min_time,\n            end_time=end_time,\n            min_runs=min_runs,\n            max_runs=max_runs,\n        )\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.set_schedule_active","title":"set_schedule_active async","text":"

Set a deployment schedule to active. Runs will be scheduled immediately.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.post(\"/{id}/set_schedule_active\")\nasync def set_schedule_active(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> None:\n\"\"\"\n    Set a deployment schedule to active. Runs will be scheduled immediately.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        deployment = await models.deployments.read_deployment(\n            session=session, deployment_id=deployment_id\n        )\n        if not deployment:\n            raise HTTPException(\n                status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n        deployment.is_schedule_active = True\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.set_schedule_inactive","title":"set_schedule_inactive async","text":"

Set a deployment schedule to inactive. Any auto-scheduled runs still in a Scheduled state will be deleted.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.post(\"/{id}/set_schedule_inactive\")\nasync def set_schedule_inactive(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> None:\n\"\"\"\n    Set a deployment schedule to inactive. Any auto-scheduled runs still in a Scheduled\n    state will be deleted.\n    \"\"\"\n    async with db.session_context(begin_transaction=False) as session:\n        deployment = await models.deployments.read_deployment(\n            session=session, deployment_id=deployment_id\n        )\n        if not deployment:\n            raise HTTPException(\n                status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n        deployment.is_schedule_active = False\n        # commit here to make the inactive schedule \"visible\" to the scheduler service\n        await session.commit()\n\n        # delete any auto scheduled runs\n        await models.deployments._delete_scheduled_runs(\n            session=session,\n            deployment_id=deployment_id,\n            db=db,\n            auto_scheduled_only=True,\n        )\n\n        await session.commit()\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.work_queue_check_for_deployment","title":"work_queue_check_for_deployment async","text":"

Get a list of work queues that are able to pick up the specified deployment.

This endpoint is intended to be used by the UI to warn users about deployments that cannot be executed because no work queues will pick up their runs, based on existing filter criteria. It may be deprecated in the future because there is not a strict relationship between work queues and deployments.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.get(\"/{id}/work_queue_check\", deprecated=True)\nasync def work_queue_check_for_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.WorkQueue]:\n\"\"\"\n    Get list of work-queues that are able to pick up the specified deployment.\n\n    This endpoint is intended to be used by the UI to provide users warnings\n    about deployments that are unable to be executed because there are no work\n    queues that will pick up their runs, based on existing filter criteria. It\n    may be deprecated in the future because there is not a strict relationship\n    between work queues and deployments.\n    \"\"\"\n    try:\n        async with db.session_context() as session:\n            work_queues = await models.deployments.check_work_queues_for_deployment(\n                session=session, deployment_id=deployment_id\n            )\n    except ObjectNotFoundError:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n        )\n    return work_queues\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/flow_run_states/","title":"server.api.flow_run_states","text":"","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_run_states/#prefect.server.api.flow_run_states","title":"prefect.server.api.flow_run_states","text":"

Routes for interacting with flow run state objects.

","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_run_states/#prefect.server.api.flow_run_states.read_flow_run_state","title":"read_flow_run_state async","text":"

Get a flow run state by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_run_states.py
@router.get(\"/{id}\")\nasync def read_flow_run_state(\n    flow_run_state_id: UUID = Path(\n        ..., description=\"The flow run state id\", alias=\"id\"\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.states.State:\n\"\"\"\n    Get a flow run state by id.\n    \"\"\"\n    async with db.session_context() as session:\n        flow_run_state = await models.flow_run_states.read_flow_run_state(\n            session=session, flow_run_state_id=flow_run_state_id\n        )\n    if not flow_run_state:\n        raise HTTPException(\n            status.HTTP_404_NOT_FOUND, detail=\"Flow run state not found\"\n        )\n    return flow_run_state\n
","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_run_states/#prefect.server.api.flow_run_states.read_flow_run_states","title":"read_flow_run_states async","text":"

Get states associated with a flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_run_states.py
@router.get(\"/\")\nasync def read_flow_run_states(\n    flow_run_id: UUID,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.states.State]:\n\"\"\"\n    Get states associated with a flow run.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flow_run_states.read_flow_run_states(\n            session=session, flow_run_id=flow_run_id\n        )\n
","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_runs/","title":"server.api.flow_runs","text":"","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs","title":"prefect.server.api.flow_runs","text":"

Routes for interacting with flow run objects.

","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.average_flow_run_lateness","title":"average_flow_run_lateness async","text":"

Query for average flow-run lateness in seconds.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.post(\"/lateness\")\nasync def average_flow_run_lateness(\n    flows: Optional[schemas.filters.FlowFilter] = None,\n    flow_runs: Optional[schemas.filters.FlowRunFilter] = None,\n    task_runs: Optional[schemas.filters.TaskRunFilter] = None,\n    deployments: Optional[schemas.filters.DeploymentFilter] = None,\n    work_pools: Optional[schemas.filters.WorkPoolFilter] = None,\n    work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> Optional[float]:\n\"\"\"\n    Query for average flow-run lateness in seconds.\n    \"\"\"\n    async with db.session_context() as session:\n        if db.dialect.name == \"sqlite\":\n            # Since we want an _average_ of the lateness we're unable to use\n            # the existing FlowRun.expected_start_time_delta property as it\n            # returns a timedelta and SQLite is unable to properly deal with it\n            # and always returns 1970.0 as the average. This copies the same\n            # logic but ensures that it returns the number of seconds instead\n            # so it's compatible with SQLite.\n            base_query = sa.case(\n                (\n                    db.FlowRun.start_time > db.FlowRun.expected_start_time,\n                    sa.func.strftime(\"%s\", db.FlowRun.start_time)\n                    - sa.func.strftime(\"%s\", db.FlowRun.expected_start_time),\n                ),\n                (\n                    db.FlowRun.start_time.is_(None)\n                    & db.FlowRun.state_type.notin_(schemas.states.TERMINAL_STATES)\n                    & (db.FlowRun.expected_start_time < sa.func.datetime(\"now\")),\n                    sa.func.strftime(\"%s\", sa.func.datetime(\"now\"))\n                    - sa.func.strftime(\"%s\", db.FlowRun.expected_start_time),\n                ),\n                else_=0,\n            )\n        else:\n            base_query = db.FlowRun.estimated_start_time_delta\n\n        query = await models.flow_runs._apply_flow_run_filters(\n            sa.select(sa.func.avg(base_query)),\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n        )\n        result = await session.execute(query)\n\n        avg_lateness = result.scalar()\n\n        if avg_lateness is None:\n            return None\n        elif isinstance(avg_lateness, datetime.timedelta):\n            return avg_lateness.total_seconds()\n        else:\n            return avg_lateness\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.count_flow_runs","title":"count_flow_runs async","text":"

Count flow runs.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.post(\"/count\")\nasync def count_flow_runs(\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_pool_queues: schemas.filters.WorkQueueFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> int:\n\"\"\"\n    Query for flow runs.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flow_runs.count_flow_runs(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n        )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.create_flow_run","title":"create_flow_run async","text":"

Create a flow run. If a flow run with the same flow_id and idempotency key already exists, the existing flow run will be returned.

If no state is provided, the flow run will be created in a PENDING state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.post(\"/\")\nasync def create_flow_run(\n    flow_run: schemas.actions.FlowRunCreate,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    response: Response = None,\n    orchestration_parameters: dict = Depends(\n        orchestration_dependencies.provide_flow_orchestration_parameters\n    ),\n    api_version=Depends(dependencies.provide_request_api_version),\n) -> schemas.responses.FlowRunResponse:\n\"\"\"\n    Create a flow run. If a flow run with the same flow_id and\n    idempotency key already exists, the existing flow run will be returned.\n\n    If no state is provided, the flow run will be created in a PENDING state.\n    \"\"\"\n    # hydrate the input model into a full flow run / state model\n    flow_run = schemas.core.FlowRun(**flow_run.dict())\n\n    # pass the request version to the orchestration engine to support compatibility code\n    orchestration_parameters.update({\"api-version\": api_version})\n\n    if not flow_run.state:\n        flow_run.state = schemas.states.Pending()\n\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        model = await models.flow_runs.create_flow_run(\n            session=session,\n            flow_run=flow_run,\n            orchestration_parameters=orchestration_parameters,\n        )\n        if model.created >= now:\n            response.status_code = status.HTTP_201_CREATED\n\n        return schemas.responses.FlowRunResponse.from_orm(model)\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.delete_flow_run","title":"delete_flow_run async","text":"

Delete a flow run by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_flow_run(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n\"\"\"\n    Delete a flow run by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.flow_runs.delete_flow_run(\n            session=session, flow_run_id=flow_run_id\n        )\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\"\n        )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.flow_run_history","title":"flow_run_history async","text":"

Query for flow run history data across a given range and interval.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.post(\"/history\")\nasync def flow_run_history(\n    history_start: DateTimeTZ = Body(..., description=\"The history's start time.\"),\n    history_end: DateTimeTZ = Body(..., description=\"The history's end time.\"),\n    history_interval: datetime.timedelta = Body(\n        ...,\n        description=(\n            \"The size of each history interval, in seconds. Must be at least 1 second.\"\n        ),\n        alias=\"history_interval_seconds\",\n    ),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_queues: schemas.filters.WorkQueueFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.HistoryResponse]:\n\"\"\"\n    Query for flow run history data across a given range and interval.\n    \"\"\"\n    if history_interval < datetime.timedelta(seconds=1):\n        raise HTTPException(\n            status.HTTP_422_UNPROCESSABLE_ENTITY,\n            detail=\"History interval must not be less than 1 second.\",\n        )\n\n    async with db.session_context() as session:\n        return await run_history(\n            session=session,\n            run_type=\"flow_run\",\n            history_start=history_start,\n            history_end=history_end,\n            history_interval=history_interval,\n            flows=flows,\n            flow_runs=flow_runs,\n            task_runs=task_runs,\n            deployments=deployments,\n            work_pools=work_pools,\n            work_queues=work_queues,\n        )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run","title":"read_flow_run async","text":"

Get a flow run by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.get(\"/{id}\")\nasync def read_flow_run(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.FlowRunResponse:\n\"\"\"\n    Get a flow run by id.\n    \"\"\"\n    async with db.session_context() as session:\n        flow_run = await models.flow_runs.read_flow_run(\n            session=session, flow_run_id=flow_run_id\n        )\n        if not flow_run:\n            raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\")\n        return schemas.responses.FlowRunResponse.from_orm(flow_run)\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run_graph","title":"read_flow_run_graph async","text":"

Get a task run dependency map for a given flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.get(\"/{id}/graph\")\nasync def read_flow_run_graph(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[DependencyResult]:\n\"\"\"\n    Get a task run dependency map for a given flow run.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flow_runs.read_task_run_dependencies(\n            session=session, flow_run_id=flow_run_id\n        )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_runs","title":"read_flow_runs async","text":"

Query for flow runs.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.post(\"/filter\", response_class=ORJSONResponse)\nasync def read_flow_runs(\n    sort: schemas.sorting.FlowRunSort = Body(schemas.sorting.FlowRunSort.ID_DESC),\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_pool_queues: schemas.filters.WorkQueueFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.FlowRunResponse]:\n\"\"\"\n    Query for flow runs.\n    \"\"\"\n    async with db.session_context() as session:\n        db_flow_runs = await models.flow_runs.read_flow_runs(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n            offset=offset,\n            limit=limit,\n            sort=sort,\n        )\n\n        # Instead of relying on fastapi.encoders.jsonable_encoder to convert the\n        # response to JSON, we do so more efficiently ourselves.\n        # In particular, the FastAPI encoder is very slow for large, nested objects.\n        # See: https://github.com/tiangolo/fastapi/issues/1224\n        encoded = [\n            schemas.responses.FlowRunResponse.from_orm(fr).dict(json_compatible=True)\n            for fr in db_flow_runs\n        ]\n        return ORJSONResponse(content=encoded)\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.resume_flow_run","title":"resume_flow_run async","text":"

Resume a paused flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.post(\"/{id}/resume\")\nasync def resume_flow_run(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    response: Response = None,\n    flow_policy: BaseOrchestrationPolicy = Depends(\n        orchestration_dependencies.provide_flow_policy\n    ),\n    orchestration_parameters: dict = Depends(\n        orchestration_dependencies.provide_flow_orchestration_parameters\n    ),\n    api_version=Depends(dependencies.provide_request_api_version),\n) -> OrchestrationResult:\n\"\"\"\n    Resume a paused flow run.\n    \"\"\"\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        flow_run = await models.flow_runs.read_flow_run(session, flow_run_id)\n        state = flow_run.state\n\n        if state is None or state.type != schemas.states.StateType.PAUSED:\n            result = OrchestrationResult(\n                state=None,\n                status=schemas.responses.SetStateStatus.ABORT,\n                details=schemas.responses.StateAbortDetails(\n                    reason=\"Cannot resume a flow run that is not paused.\"\n                ),\n            )\n            return result\n\n        orchestration_parameters.update({\"api-version\": api_version})\n\n        if state.state_details.pause_reschedule:\n            orchestration_result = await models.flow_runs.set_flow_run_state(\n                session=session,\n                flow_run_id=flow_run_id,\n                state=schemas.states.Scheduled(\n                    name=\"Resuming\", scheduled_time=pendulum.now(\"UTC\")\n                ),\n                flow_policy=flow_policy,\n                orchestration_parameters=orchestration_parameters,\n            )\n        else:\n            orchestration_result = await models.flow_runs.set_flow_run_state(\n                session=session,\n                flow_run_id=flow_run_id,\n                state=schemas.states.Running(),\n                flow_policy=flow_policy,\n                orchestration_parameters=orchestration_parameters,\n            )\n\n        # set the 201 if a new state was created\n        if orchestration_result.state and orchestration_result.state.timestamp >= now:\n            response.status_code = status.HTTP_201_CREATED\n        else:\n            response.status_code = status.HTTP_200_OK\n\n        return orchestration_result\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.set_flow_run_state","title":"set_flow_run_state async","text":"

Set a flow run state, invoking any orchestration rules.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.post(\"/{id}/set_state\")\nasync def set_flow_run_state(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    state: schemas.actions.StateCreate = Body(..., description=\"The intended state.\"),\n    force: bool = Body(\n        False,\n        description=(\n            \"If false, orchestration rules will be applied that may alter or prevent\"\n            \" the state transition. If True, orchestration rules are not applied.\"\n        ),\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    response: Response = None,\n    flow_policy: BaseOrchestrationPolicy = Depends(\n        orchestration_dependencies.provide_flow_policy\n    ),\n    orchestration_parameters: dict = Depends(\n        orchestration_dependencies.provide_flow_orchestration_parameters\n    ),\n    api_version=Depends(dependencies.provide_request_api_version),\n) -> OrchestrationResult:\n\"\"\"Set a flow run state, invoking any orchestration rules.\"\"\"\n\n    # pass the request version to the orchestration engine to support compatibility code\n    orchestration_parameters.update({\"api-version\": api_version})\n\n    now = pendulum.now(\"UTC\")\n\n    # create the state\n    async with db.session_context(\n        begin_transaction=True, with_for_update=True\n    ) as session:\n        orchestration_result = await models.flow_runs.set_flow_run_state(\n            session=session,\n            flow_run_id=flow_run_id,\n            # convert to a full State object\n            state=schemas.states.State.parse_obj(state),\n            force=force,\n            flow_policy=flow_policy,\n            orchestration_parameters=orchestration_parameters,\n        )\n\n    # set the 201 if a new state was created\n    if orchestration_result.state and orchestration_result.state.timestamp >= now:\n        response.status_code = status.HTTP_201_CREATED\n    else:\n        response.status_code = status.HTTP_200_OK\n\n    return orchestration_result\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.update_flow_run","title":"update_flow_run async","text":"

Updates a flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.patch(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def update_flow_run(\n    flow_run: schemas.actions.FlowRunUpdate,\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n\"\"\"\n    Updates a flow run.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.flow_runs.update_flow_run(\n            session=session, flow_run=flow_run, flow_run_id=flow_run_id\n        )\n    if not result:\n        raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\")\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flows/","title":"server.api.flows","text":"","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows","title":"prefect.server.api.flows","text":"

Routes for interacting with flow objects.

","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.count_flows","title":"count_flows async","text":"

Count flows.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flows.py
@router.post(\"/count\")\nasync def count_flows(\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> int:\n\"\"\"\n    Count flows.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flows.count_flows(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n        )\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.create_flow","title":"create_flow async","text":"

Gracefully creates a new flow from the provided schema. If a flow with the same name already exists, the existing flow is returned.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flows.py
@router.post(\"/\")\nasync def create_flow(\n    flow: schemas.actions.FlowCreate,\n    response: Response,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.Flow:\n\"\"\"Gracefully creates a new flow from the provided schema. If a flow with the\n    same name already exists, the existing flow is returned.\n    \"\"\"\n    # hydrate the input model into a full flow model\n    flow = schemas.core.Flow(**flow.dict())\n\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        model = await models.flows.create_flow(session=session, flow=flow)\n\n    if model.created >= now:\n        response.status_code = status.HTTP_201_CREATED\n    return model\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.delete_flow","title":"delete_flow async","text":"

Delete a flow by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flows.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_flow(\n    flow_id: UUID = Path(..., description=\"The flow id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n\"\"\"\n    Delete a flow by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.flows.delete_flow(session=session, flow_id=flow_id)\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n        )\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.read_flow","title":"read_flow async","text":"

Get a flow by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flows.py
@router.get(\"/{id}\")\nasync def read_flow(\n    flow_id: UUID = Path(..., description=\"The flow id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.Flow:\n\"\"\"\n    Get a flow by id.\n    \"\"\"\n    async with db.session_context() as session:\n        flow = await models.flows.read_flow(session=session, flow_id=flow_id)\n    if not flow:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n        )\n    return flow\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.read_flow_by_name","title":"read_flow_by_name async","text":"

Get a flow by name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flows.py
@router.get(\"/name/{name}\")\nasync def read_flow_by_name(\n    name: str = Path(..., description=\"The name of the flow\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.Flow:\n\"\"\"\n    Get a flow by name.\n    \"\"\"\n    async with db.session_context() as session:\n        flow = await models.flows.read_flow_by_name(session=session, name=name)\n    if not flow:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n        )\n    return flow\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.read_flows","title":"read_flows async","text":"

Query for flows.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flows.py
@router.post(\"/filter\")\nasync def read_flows(\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    sort: schemas.sorting.FlowSort = Body(schemas.sorting.FlowSort.NAME_ASC),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.Flow]:\n\"\"\"\n    Query for flows.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flows.read_flows(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            sort=sort,\n            offset=offset,\n            limit=limit,\n        )\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.update_flow","title":"update_flow async","text":"

Updates a flow.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flows.py
@router.patch(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def update_flow(\n    flow: schemas.actions.FlowUpdate,\n    flow_id: UUID = Path(..., description=\"The flow id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n\"\"\"\n    Updates a flow.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.flows.update_flow(\n            session=session, flow=flow, flow_id=flow_id\n        )\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n        )\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/run_history/","title":"server.api.run_history","text":"","tags":["Prefect API","flow runs","task runs","observability"]},{"location":"api-ref/server/api/run_history/#prefect.server.api.run_history","title":"prefect.server.api.run_history","text":"

Utilities for querying flow and task run history.

","tags":["Prefect API","flow runs","task runs","observability"]},{"location":"api-ref/server/api/run_history/#prefect.server.api.run_history.run_history","title":"run_history async","text":"

Produce a history of runs aggregated by interval and state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/run_history.py
@inject_db\nasync def run_history(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    run_type: Literal[\"flow_run\", \"task_run\"],\n    history_start: DateTimeTZ,\n    history_end: DateTimeTZ,\n    history_interval: datetime.timedelta,\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_queues: schemas.filters.WorkQueueFilter = None,\n) -> List[schemas.responses.HistoryResponse]:\n\"\"\"\n    Produce a history of runs aggregated by interval and state\n    \"\"\"\n\n    # SQLite has issues with very small intervals\n    # (by 0.001 seconds it stops incrementing the interval)\n    if history_interval < datetime.timedelta(seconds=1):\n        raise ValueError(\"History interval must not be less than 1 second.\")\n\n    # prepare run-specific models\n    if run_type == \"flow_run\":\n        run_model = db.FlowRun\n        run_filter_function = models.flow_runs._apply_flow_run_filters\n    elif run_type == \"task_run\":\n        run_model = db.TaskRun\n        run_filter_function = models.task_runs._apply_task_run_filters\n    else:\n        raise ValueError(\n            f\"Unknown run type {run_type!r}. Expected 'flow_run' or 'task_run'.\"\n        )\n\n    # create a CTE for timestamp intervals\n    intervals = db.make_timestamp_intervals(\n        history_start,\n        history_end,\n        history_interval,\n    ).cte(\"intervals\")\n\n    # apply filters to the flow runs (and related states)\n    runs = (\n        await run_filter_function(\n            sa.select(\n                run_model.id,\n                run_model.expected_start_time,\n                run_model.estimated_run_time,\n                run_model.estimated_start_time_delta,\n                run_model.state_type,\n                run_model.state_name,\n            ).select_from(run_model),\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_queues,\n        )\n    ).alias(\"runs\")\n    # outer join intervals to the filtered runs to create a dataset composed of\n    # every interval and the aggregate of all its runs. 
The runs aggregate is represented\n    # by a descriptive JSON object\n    counts = (\n        sa.select(\n            intervals.c.interval_start,\n            intervals.c.interval_end,\n            # build a JSON object, ignoring the case where the count of runs is 0\n            sa.case(\n                (sa.func.count(runs.c.id) == 0, None),\n                else_=db.build_json_object(\n                    \"state_type\",\n                    runs.c.state_type,\n                    \"state_name\",\n                    runs.c.state_name,\n                    \"count_runs\",\n                    sa.func.count(runs.c.id),\n                    # estimated run times only includes positive run times (to avoid any unexpected corner cases)\n                    \"sum_estimated_run_time\",\n                    sa.func.sum(\n                        db.greatest(0, sa.extract(\"epoch\", runs.c.estimated_run_time))\n                    ),\n                    # estimated lateness is the sum of any positive start time deltas\n                    \"sum_estimated_lateness\",\n                    sa.func.sum(\n                        db.greatest(\n                            0, sa.extract(\"epoch\", runs.c.estimated_start_time_delta)\n                        )\n                    ),\n                ),\n            ).label(\"state_agg\"),\n        )\n        .select_from(intervals)\n        .join(\n            runs,\n            sa.and_(\n                runs.c.expected_start_time >= intervals.c.interval_start,\n                runs.c.expected_start_time < intervals.c.interval_end,\n            ),\n            isouter=True,\n        )\n        .group_by(\n            intervals.c.interval_start,\n            intervals.c.interval_end,\n            runs.c.state_type,\n            runs.c.state_name,\n        )\n    ).alias(\"counts\")\n\n    # aggregate all state-aggregate objects into a single array for each interval,\n    # ensuring that intervals with no runs have an empty array\n    query = (\n        sa.select(\n            counts.c.interval_start,\n            counts.c.interval_end,\n            sa.func.coalesce(\n                db.json_arr_agg(db.cast_to_json(counts.c.state_agg)).filter(\n                    counts.c.state_agg.is_not(None)\n                ),\n                sa.text(\"'[]'\"),\n            ).label(\"states\"),\n        )\n        .group_by(counts.c.interval_start, counts.c.interval_end)\n        .order_by(counts.c.interval_start)\n        # return no more than 500 bars\n        .limit(500)\n    )\n\n    # issue the query\n    result = await session.execute(query)\n    records = result.mappings()\n\n    # load and parse the record if the database returns JSON as strings\n    if db.uses_json_strings:\n        records = [dict(r) for r in records]\n        for r in records:\n            r[\"states\"] = json.loads(r[\"states\"])\n\n    return pydantic.parse_obj_as(List[schemas.responses.HistoryResponse], list(records))\n
","tags":["Prefect API","flow runs","task runs","observability"]},{"location":"api-ref/server/api/saved_searches/","title":"server.api.saved_searches","text":"","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches","title":"prefect.server.api.saved_searches","text":"

Routes for interacting with saved search objects.

","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.create_saved_search","title":"create_saved_search async","text":"

Gracefully creates a new saved search from the provided schema.

If a saved search with the same name already exists, the saved search's fields are replaced.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/saved_searches.py
@router.put(\"/\")\nasync def create_saved_search(\n    saved_search: schemas.actions.SavedSearchCreate,\n    response: Response,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.SavedSearch:\n\"\"\"Gracefully creates a new saved search from the provided schema.\n\n    If a saved search with the same name already exists, the saved search's fields are\n    replaced.\n    \"\"\"\n\n    # hydrate the input model into a full model\n    saved_search = schemas.core.SavedSearch(**saved_search.dict())\n\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        model = await models.saved_searches.create_saved_search(\n            session=session, saved_search=saved_search\n        )\n\n    if model.created >= now:\n        response.status_code = status.HTTP_201_CREATED\n\n    return model\n
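As an illustration (not part of the documented source), the upsert behavior can be exercised over HTTP against a local server; the request body is an assumption based on the SavedSearchCreate schema, and http://127.0.0.1:4200/api is the conventional local API address:

import httpx

# hypothetical sketch: issuing the same PUT twice replaces the saved search's fields
payload = {"name": "late-runs", "filters": []}
with httpx.Client(base_url="http://127.0.0.1:4200/api") as client:
    first = client.put("/saved_searches/", json=payload)   # 201 when newly created
    second = client.put("/saved_searches/", json=payload)  # 200 when fields are replaced
    print(first.status_code, second.status_code)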
","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.delete_saved_search","title":"delete_saved_search async","text":"

Delete a saved search by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/saved_searches.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_saved_search(\n    saved_search_id: UUID = Path(..., description=\"The saved search id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n\"\"\"\n    Delete a saved search by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.saved_searches.delete_saved_search(\n            session=session, saved_search_id=saved_search_id\n        )\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Saved search not found\"\n        )\n
","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.read_saved_search","title":"read_saved_search async","text":"

Get a saved search by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/saved_searches.py
@router.get(\"/{id}\")\nasync def read_saved_search(\n    saved_search_id: UUID = Path(..., description=\"The saved search id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.SavedSearch:\n\"\"\"\n    Get a saved search by id.\n    \"\"\"\n    async with db.session_context() as session:\n        saved_search = await models.saved_searches.read_saved_search(\n            session=session, saved_search_id=saved_search_id\n        )\n    if not saved_search:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Saved search not found\"\n        )\n    return saved_search\n
","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.read_saved_searches","title":"read_saved_searches async","text":"

Query for saved searches.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/saved_searches.py
@router.post(\"/filter\")\nasync def read_saved_searches(\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.SavedSearch]:\n\"\"\"\n    Query for saved searches.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.saved_searches.read_saved_searches(\n            session=session,\n            offset=offset,\n            limit=limit,\n        )\n
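A hedged sketch of calling the filter route with explicit paging, again assuming a local server at the conventional address:

import httpx

# hypothetical sketch: read saved searches ten at a time
with httpx.Client(base_url="http://127.0.0.1:4200/api") as client:
    response = client.post("/saved_searches/filter", json={"limit": 10, "offset": 0})
    for saved_search in response.json():
        print(saved_search["name"])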
","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/server/","title":"server.api.server","text":"","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server","title":"prefect.server.api.server","text":"

Defines the Prefect REST API FastAPI app.

","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.RequestLimitMiddleware","title":"RequestLimitMiddleware","text":"

A middleware that limits the number of concurrent requests handled by the API.

This is a blunt tool for limiting SQLite concurrent writes which will cause failures at high volume. Ideally, we would only apply the limit to routes that perform writes.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/server.py
class RequestLimitMiddleware:\n\"\"\"\n    A middleware that limits the number of concurrent requests handled by the API.\n\n    This is a blunt tool for limiting SQLite concurrent writes which will cause failures\n    at high volume. Ideally, we would only apply the limit to routes that perform\n    writes.\n    \"\"\"\n\n    def __init__(self, app, limit: float):\n        self.app = app\n        self._limiter = anyio.CapacityLimiter(limit)\n\n    async def __call__(self, scope, receive, send) -> None:\n        async with self._limiter:\n            await self.app(scope, receive, send)\n
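Because the class is a plain ASGI wrapper, it is attached like any other Starlette middleware; the limit of 100 below mirrors the value create_app uses when a SQLite database is detected:

from fastapi import FastAPI

app = FastAPI()
# cap the number of concurrent in-flight requests to reduce SQLite write contention
app.add_middleware(RequestLimitMiddleware, limit=100)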
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.SPAStaticFiles","title":"SPAStaticFiles","text":"

Bases: StaticFiles

Implementation of StaticFiles for serving single page applications.

Adds get_response handling to ensure that when a resource isn't found the application still returns the index.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/server.py
class SPAStaticFiles(StaticFiles):\n\"\"\"\n    Implementation of `StaticFiles` for serving single page applications.\n\n    Adds `get_response` handling to ensure that when a resource isn't found the\n    application still returns the index.\n    \"\"\"\n\n    async def get_response(self, path: str, scope):\n        try:\n            return await super().get_response(path, scope)\n        except HTTPException:\n            return await super().get_response(\"./index.html\", scope)\n
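A minimal sketch of serving a built single-page app with this class; the ui_build/ directory name is an assumption. Requests for unknown client-side routes fall back to ./index.html rather than returning a 404:

from fastapi import FastAPI

ui = FastAPI()
# paths that do not exist under ui_build/ are answered with the index page
ui.mount("/", SPAStaticFiles(directory="ui_build"), name="ui")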
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.create_api_app","title":"create_api_app","text":"

Create a FastAPI app that includes the Prefect REST API

Parameters:

Name Type Description Default router_prefix Optional[str]

a prefix to apply to all included routers

'' dependencies Optional[List[Depends]]

a list of global dependencies to add to each Prefect REST API router

None health_check_path str

the health check route path

'/health' fast_api_app_kwargs dict

kwargs to pass to the FastAPI constructor

None router_overrides Mapping[str, Optional[APIRouter]]

a mapping of route prefixes (i.e. \"/admin\") to new routers, allowing the caller to override the default routers. If None is provided as a value, the default router will be dropped from the application.


None

Returns:

Type Description FastAPI

a FastAPI app that serves the Prefect REST API

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/server.py
def create_api_app(\n    router_prefix: Optional[str] = \"\",\n    dependencies: Optional[List[Depends]] = None,\n    health_check_path: str = \"/health\",\n    version_check_path: str = \"/version\",\n    fast_api_app_kwargs: dict = None,\n    router_overrides: Mapping[str, Optional[APIRouter]] = None,\n) -> FastAPI:\n\"\"\"\n    Create a FastAPI app that includes the Prefect REST API\n\n    Args:\n        router_prefix: a prefix to apply to all included routers\n        dependencies: a list of global dependencies to add to each Prefect REST API router\n        health_check_path: the health check route path\n        fast_api_app_kwargs: kwargs to pass to the FastAPI constructor\n        router_overrides: a mapping of route prefixes (i.e. \"/admin\") to new routers\n            allowing the caller to override the default routers. If `None` is provided\n            as a value, the default router will be dropped from the application.\n\n    Returns:\n        a FastAPI app that serves the Prefect REST API\n    \"\"\"\n    fast_api_app_kwargs = fast_api_app_kwargs or {}\n    api_app = FastAPI(title=API_TITLE, **fast_api_app_kwargs)\n    api_app.add_middleware(GZipMiddleware)\n\n    @api_app.get(health_check_path, tags=[\"Root\"])\n    async def health_check():\n        return True\n\n    @api_app.get(version_check_path, tags=[\"Root\"])\n    async def orion_info():\n        return SERVER_API_VERSION\n\n    # always include version checking\n    if dependencies is None:\n        dependencies = [Depends(enforce_minimum_version)]\n    else:\n        dependencies.append(Depends(enforce_minimum_version))\n\n    routers = {router.prefix: router for router in API_ROUTERS}\n\n    if router_overrides:\n        for prefix, router in router_overrides.items():\n            # We may want to allow this behavior in the future to inject new routes, but\n            # for now this will be treated an as an exception\n            if prefix not in routers:\n                raise KeyError(\n                    \"Router override provided for prefix that does not exist:\"\n                    f\" {prefix!r}\"\n                )\n\n            # Drop the existing router\n            existing_router = routers.pop(prefix)\n\n            # Replace it with a new router if provided\n            if router is not None:\n                if prefix != router.prefix:\n                    # We may want to allow this behavior in the future, but it will\n                    # break expectations without additional routing and is banned for\n                    # now\n                    raise ValueError(\n                        f\"Router override for {prefix!r} defines a different prefix \"\n                        f\"{router.prefix!r}.\"\n                    )\n\n                existing_paths = method_paths_from_routes(existing_router.routes)\n                new_paths = method_paths_from_routes(router.routes)\n                if not existing_paths.issubset(new_paths):\n                    raise ValueError(\n                        f\"Router override for {prefix!r} is missing paths defined by \"\n                        f\"the original router: {existing_paths.difference(new_paths)}\"\n                    )\n\n                routers[prefix] = router\n\n    for router in routers.values():\n        api_app.include_router(router, prefix=router_prefix, dependencies=dependencies)\n\n    return api_app\n
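A hedged sketch of overriding one of the default routers, using the "/admin" prefix that the docstring gives as its example; per the docstring, passing None for a prefix drops that router from the application:

# hypothetical sketch: build the API app without the default /admin router
api_app = create_api_app(
    router_prefix="",
    router_overrides={"/admin": None},
)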
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.create_app","title":"create_app","text":"

Create a FastAPI app that includes the Prefect REST API and UI

Parameters:

Name Type Description Default settings prefect.settings.Settings

The settings to use to create the app. If not set, settings are pulled from the context.

None ignore_cache bool

If set, a new application will be created even if the settings match. Otherwise, an application is returned from the cache.

False ephemeral bool

If set, the application will be treated as ephemeral. The UI and services will be disabled.

False Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/server.py
def create_app(\n    settings: prefect.settings.Settings = None,\n    ephemeral: bool = False,\n    ignore_cache: bool = False,\n) -> FastAPI:\n\"\"\"\n    Create an FastAPI app that includes the Prefect REST API and UI\n\n    Args:\n        settings: The settings to use to create the app. If not set, settings are pulled\n            from the context.\n        ignore_cache: If set, a new application will be created even if the settings\n            match. Otherwise, an application is returned from the cache.\n        ephemeral: If set, the application will be treated as ephemeral. The UI\n            and services will be disabled.\n    \"\"\"\n    settings = settings or prefect.settings.get_current_settings()\n    cache_key = (settings, ephemeral)\n\n    if cache_key in APP_CACHE and not ignore_cache:\n        return APP_CACHE[cache_key]\n\n    # TODO: Move these startup functions out of this closure into the top-level or\n    #       another dedicated location\n    async def run_migrations():\n\"\"\"Ensure the database is created and up to date with the current migrations\"\"\"\n        if prefect.settings.PREFECT_API_DATABASE_MIGRATE_ON_START:\n            from prefect.server.database.dependencies import provide_database_interface\n\n            db = provide_database_interface()\n            await db.create_db()\n\n    @_memoize_block_auto_registration\n    async def add_block_types():\n\"\"\"Add all registered blocks to the database\"\"\"\n        if not prefect.settings.PREFECT_API_BLOCKS_REGISTER_ON_START:\n            return\n\n        from prefect.server.database.dependencies import provide_database_interface\n        from prefect.server.models.block_registration import run_block_auto_registration\n\n        db = provide_database_interface()\n        session = await db.session()\n\n        try:\n            async with session:\n                await run_block_auto_registration(session=session)\n        except Exception as exc:\n            logger.warn(f\"Error occurred during block auto-registration: {exc!r}\")\n\n    async def start_services():\n\"\"\"Start additional services when the Prefect REST API starts up.\"\"\"\n\n        if ephemeral:\n            app.state.services = None\n            return\n\n        service_instances = []\n\n        if prefect.settings.PREFECT_API_SERVICES_SCHEDULER_ENABLED.value():\n            service_instances.append(services.scheduler.Scheduler())\n            service_instances.append(services.scheduler.RecentDeploymentsScheduler())\n\n        if prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_ENABLED.value():\n            service_instances.append(services.late_runs.MarkLateRuns())\n\n        if prefect.settings.PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED.value():\n            service_instances.append(services.pause_expirations.FailExpiredPauses())\n\n        if prefect.settings.PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED.value():\n            service_instances.append(\n                services.cancellation_cleanup.CancellationCleanup()\n            )\n\n        if prefect.settings.PREFECT_SERVER_ANALYTICS_ENABLED.value():\n            service_instances.append(services.telemetry.Telemetry())\n\n        if prefect.settings.PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED.value():\n            service_instances.append(\n                services.flow_run_notifications.FlowRunNotifications()\n            )\n\n        loop = asyncio.get_running_loop()\n\n        app.state.services = {\n            service: loop.create_task(service.start()) 
for service in service_instances\n        }\n\n        for service, task in app.state.services.items():\n            logger.info(f\"{service.name} service scheduled to start in-app\")\n            task.add_done_callback(partial(on_service_exit, service))\n\n    async def stop_services():\n\"\"\"Ensure services are stopped before the Prefect REST API shuts down.\"\"\"\n        if hasattr(app.state, \"services\") and app.state.services:\n            await asyncio.gather(*[service.stop() for service in app.state.services])\n            try:\n                await asyncio.gather(\n                    *[task.stop() for task in app.state.services.values()]\n                )\n            except Exception:\n                # `on_service_exit` should handle logging exceptions on exit\n                pass\n\n    @asynccontextmanager\n    async def lifespan(app):\n        try:\n            await run_migrations()\n            await add_block_types()\n            await start_services()\n            yield\n        finally:\n            await stop_services()\n\n    def on_service_exit(service, task):\n\"\"\"\n        Added as a callback for completion of services to log exit\n        \"\"\"\n        try:\n            # Retrieving the result will raise the exception\n            task.result()\n        except Exception:\n            logger.error(f\"{service.name} service failed!\", exc_info=True)\n        else:\n            logger.info(f\"{service.name} service stopped!\")\n\n    app = FastAPI(\n        title=TITLE,\n        version=API_VERSION,\n        lifespan=lifespan,\n    )\n    api_app = create_api_app(\n        fast_api_app_kwargs={\n            \"exception_handlers\": {\n                # NOTE: FastAPI special cases the generic `Exception` handler and\n                #       registers it as a separate middleware from the others\n                Exception: custom_internal_exception_handler,\n                RequestValidationError: validation_exception_handler,\n                sa.exc.IntegrityError: integrity_exception_handler,\n                ObjectNotFoundError: prefect_object_not_found_exception_handler,\n            }\n        },\n    )\n    ui_app = create_ui_app(ephemeral)\n\n    # middleware\n    app.add_middleware(\n        CORSMiddleware,\n        allow_origins=[\"*\"],\n        allow_methods=[\"*\"],\n        allow_headers=[\"*\"],\n    )\n\n    # Limit the number of concurrent requests when using a SQLite datbase to reduce\n    # chance of errors where the database cannot be opened due to a high number of\n    # concurrent writes\n    if (\n        get_dialect(prefect.settings.PREFECT_API_DATABASE_CONNECTION_URL.value()).name\n        == \"sqlite\"\n    ):\n        app.add_middleware(RequestLimitMiddleware, limit=100)\n\n    api_app.mount(\n        \"/static\",\n        StaticFiles(\n            directory=os.path.join(\n                os.path.dirname(os.path.realpath(__file__)), \"static\"\n            )\n        ),\n        name=\"static\",\n    )\n    app.api_app = api_app\n    app.mount(\"/api\", app=api_app, name=\"api\")\n    app.mount(\"/\", app=ui_app, name=\"ui\")\n\n    def openapi():\n\"\"\"\n        Convenience method for extracting the user facing OpenAPI schema from the API app.\n\n        This method is attached to the global public app for easy access.\n        \"\"\"\n        partial_schema = get_openapi(\n            title=API_TITLE,\n            version=API_VERSION,\n            routes=api_app.routes,\n        )\n        new_schema = partial_schema.copy()\n        
new_schema[\"paths\"] = {}\n        for path, value in partial_schema[\"paths\"].items():\n            new_schema[\"paths\"][f\"/api{path}\"] = value\n\n        new_schema[\"info\"][\"x-logo\"] = {\"url\": \"static/prefect-logo-mark-gradient.png\"}\n        return new_schema\n\n    app.openapi = openapi\n\n    APP_CACHE[cache_key] = app\n    return app\n
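For reference, a minimal sketch of building the combined app and serving it with uvicorn; port 4200 is the conventional local Prefect server port and is an assumption here:

import uvicorn

app = create_app()  # settings are pulled from the current Prefect context
uvicorn.run(app, host="127.0.0.1", port=4200)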
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.custom_internal_exception_handler","title":"custom_internal_exception_handler async","text":"

Log a detailed exception for internal server errors before returning.

Send 503 for errors clients can retry on.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/server.py
async def custom_internal_exception_handler(request: Request, exc: Exception):\n\"\"\"\n    Log a detailed exception for internal server errors before returning.\n\n    Send 503 for errors clients can retry on.\n    \"\"\"\n    logger.error(\"Encountered exception in request:\", exc_info=True)\n\n    if is_client_retryable_exception(exc):\n        return JSONResponse(\n            content={\"exception_message\": \"Service Unavailable\"},\n            status_code=status.HTTP_503_SERVICE_UNAVAILABLE,\n        )\n\n    return JSONResponse(\n        content={\"exception_message\": \"Internal Server Error\"},\n        status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,\n    )\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.integrity_exception_handler","title":"integrity_exception_handler async","text":"

Capture database integrity errors.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/server.py
async def integrity_exception_handler(request: Request, exc: Exception):\n\"\"\"Capture database integrity errors.\"\"\"\n    logger.error(\"Encountered exception in request:\", exc_info=True)\n    return JSONResponse(\n        content={\n            \"detail\": (\n                \"Data integrity conflict. This usually means a \"\n                \"unique or foreign key constraint was violated. \"\n                \"See server logs for details.\"\n            )\n        },\n        status_code=status.HTTP_409_CONFLICT,\n    )\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.prefect_object_not_found_exception_handler","title":"prefect_object_not_found_exception_handler async","text":"

Return 404 status code on object not found exceptions.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/server.py
async def prefect_object_not_found_exception_handler(\n    request: Request, exc: ObjectNotFoundError\n):\n\"\"\"Return 404 status code on object not found exceptions.\"\"\"\n    return JSONResponse(\n        content={\"exception_message\": str(exc)}, status_code=status.HTTP_404_NOT_FOUND\n    )\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.validation_exception_handler","title":"validation_exception_handler async","text":"

Provide a detailed message for request validation errors.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/server.py
async def validation_exception_handler(request: Request, exc: RequestValidationError):\n\"\"\"Provide a detailed message for request validation errors.\"\"\"\n    return JSONResponse(\n        status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n        content=jsonable_encoder(\n            {\n                \"exception_message\": \"Invalid request received.\",\n                \"exception_detail\": exc.errors(),\n                \"request_body\": exc.body,\n            }\n        ),\n    )\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/task_run_states/","title":"server.api.task_run_states","text":"","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_run_states/#prefect.server.api.task_run_states","title":"prefect.server.api.task_run_states","text":"

Routes for interacting with task run state objects.

","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_run_states/#prefect.server.api.task_run_states.read_task_run_state","title":"read_task_run_state async","text":"

Get a task run state by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_run_states.py
@router.get(\"/{id}\")\nasync def read_task_run_state(\n    task_run_state_id: UUID = Path(\n        ..., description=\"The task run state id\", alias=\"id\"\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.states.State:\n\"\"\"\n    Get a task run state by id.\n    \"\"\"\n    async with db.session_context() as session:\n        task_run_state = await models.task_run_states.read_task_run_state(\n            session=session, task_run_state_id=task_run_state_id\n        )\n    if not task_run_state:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run state not found\"\n        )\n    return task_run_state\n
","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_run_states/#prefect.server.api.task_run_states.read_task_run_states","title":"read_task_run_states async","text":"

Get states associated with a task run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_run_states.py
@router.get(\"/\")\nasync def read_task_run_states(\n    task_run_id: UUID,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.states.State]:\n\"\"\"\n    Get states associated with a task run.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.task_run_states.read_task_run_states(\n            session=session, task_run_id=task_run_id\n        )\n
","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_runs/","title":"server.api.task_runs","text":"","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs","title":"prefect.server.api.task_runs","text":"

Routes for interacting with task run objects.

","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.count_task_runs","title":"count_task_runs async","text":"

Count task runs.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_runs.py
@router.post(\"/count\")\nasync def count_task_runs(\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n) -> int:\n\"\"\"\n    Count task runs.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.task_runs.count_task_runs(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n        )\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.create_task_run","title":"create_task_run async","text":"

Create a task run. If a task run with the same flow_run_id, task_key, and dynamic_key already exists, the existing task run will be returned.

If no state is provided, the task run will be created in a PENDING state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_runs.py
@router.post(\"/\")\nasync def create_task_run(\n    task_run: schemas.actions.TaskRunCreate,\n    response: Response,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    orchestration_parameters: dict = Depends(\n        orchestration_dependencies.provide_task_orchestration_parameters\n    ),\n) -> schemas.core.TaskRun:\n\"\"\"\n    Create a task run. If a task run with the same flow_run_id,\n    task_key, and dynamic_key already exists, the existing task\n    run will be returned.\n\n    If no state is provided, the task run will be created in a PENDING state.\n    \"\"\"\n    # hydrate the input model into a full task run / state model\n    task_run = schemas.core.TaskRun(**task_run.dict())\n\n    if not task_run.state:\n        task_run.state = schemas.states.Pending()\n\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        model = await models.task_runs.create_task_run(\n            session=session,\n            task_run=task_run,\n            orchestration_parameters=orchestration_parameters,\n        )\n\n    if model.created >= now:\n        response.status_code = status.HTTP_201_CREATED\n    return model\n
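The idempotency described above can be illustrated over HTTP; the body shape is an assumption based on the TaskRunCreate schema, and the flow run id is a placeholder:

import httpx

# hypothetical sketch: posting the same flow_run_id / task_key / dynamic_key twice
# returns the existing task run rather than creating a duplicate
payload = {
    "flow_run_id": "00000000-0000-0000-0000-000000000000",  # an existing flow run id
    "task_key": "my_task",
    "dynamic_key": "0",
}
with httpx.Client(base_url="http://127.0.0.1:4200/api") as client:
    first = client.post("/task_runs/", json=payload)
    second = client.post("/task_runs/", json=payload)
    assert first.json()["id"] == second.json()["id"]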
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.delete_task_run","title":"delete_task_run async","text":"

Delete a task run by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_runs.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_task_run(\n    task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n\"\"\"\n    Delete a task run by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.task_runs.delete_task_run(\n            session=session, task_run_id=task_run_id\n        )\n    if not result:\n        raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Task not found\")\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.read_task_run","title":"read_task_run async","text":"

Get a task run by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_runs.py
@router.get(\"/{id}\")\nasync def read_task_run(\n    task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.TaskRun:\n\"\"\"\n    Get a task run by id.\n    \"\"\"\n    async with db.session_context() as session:\n        task_run = await models.task_runs.read_task_run(\n            session=session, task_run_id=task_run_id\n        )\n    if not task_run:\n        raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Task not found\")\n    return task_run\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.read_task_runs","title":"read_task_runs async","text":"

Query for task runs.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_runs.py
@router.post(\"/filter\")\nasync def read_task_runs(\n    sort: schemas.sorting.TaskRunSort = Body(schemas.sorting.TaskRunSort.ID_DESC),\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.TaskRun]:\n\"\"\"\n    Query for task runs.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.task_runs.read_task_runs(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            offset=offset,\n            limit=limit,\n            sort=sort,\n        )\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.set_task_run_state","title":"set_task_run_state async","text":"

Set a task run state, invoking any orchestration rules.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_runs.py
@router.post(\"/{id}/set_state\")\nasync def set_task_run_state(\n    task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n    state: schemas.actions.StateCreate = Body(..., description=\"The intended state.\"),\n    force: bool = Body(\n        False,\n        description=(\n            \"If false, orchestration rules will be applied that may alter or prevent\"\n            \" the state transition. If True, orchestration rules are not applied.\"\n        ),\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    response: Response = None,\n    task_policy: BaseOrchestrationPolicy = Depends(\n        orchestration_dependencies.provide_task_policy\n    ),\n    orchestration_parameters: dict = Depends(\n        orchestration_dependencies.provide_task_orchestration_parameters\n    ),\n) -> OrchestrationResult:\n\"\"\"Set a task run state, invoking any orchestration rules.\"\"\"\n\n    now = pendulum.now(\"UTC\")\n\n    # create the state\n    async with db.session_context(\n        begin_transaction=True, with_for_update=True\n    ) as session:\n        orchestration_result = await models.task_runs.set_task_run_state(\n            session=session,\n            task_run_id=task_run_id,\n            state=schemas.states.State.parse_obj(\n                state\n            ),  # convert to a full State object\n            force=force,\n            task_policy=task_policy,\n            orchestration_parameters=orchestration_parameters,\n        )\n\n    # set the 201 if a new state was created\n    if orchestration_result.state and orchestration_result.state.timestamp >= now:\n        response.status_code = status.HTTP_201_CREATED\n    else:\n        response.status_code = status.HTTP_200_OK\n\n    return orchestration_result\n
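A hedged sketch of proposing a state over HTTP; the body mirrors the state and force parameters above, and the task run id is a placeholder:

import httpx

# hypothetical sketch: propose a Completed state without forcing the transition
task_run_id = "00000000-0000-0000-0000-000000000000"
body = {"state": {"type": "COMPLETED"}, "force": False}
with httpx.Client(base_url="http://127.0.0.1:4200/api") as client:
    result = client.post(f"/task_runs/{task_run_id}/set_state", json=body)
    print(result.json()["status"])  # the orchestration decision, e.g. ACCEPT or REJECT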
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.task_run_history","title":"task_run_history async","text":"

Query for task run history data across a given range and interval.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_runs.py
@router.post(\"/history\")\nasync def task_run_history(\n    history_start: DateTimeTZ = Body(..., description=\"The history's start time.\"),\n    history_end: DateTimeTZ = Body(..., description=\"The history's end time.\"),\n    history_interval: datetime.timedelta = Body(\n        ...,\n        description=(\n            \"The size of each history interval, in seconds. Must be at least 1 second.\"\n        ),\n        alias=\"history_interval_seconds\",\n    ),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.HistoryResponse]:\n\"\"\"\n    Query for task run history data across a given range and interval.\n    \"\"\"\n    if history_interval < datetime.timedelta(seconds=1):\n        raise HTTPException(\n            status.HTTP_422_UNPROCESSABLE_ENTITY,\n            detail=\"History interval must not be less than 1 second.\",\n        )\n\n    async with db.session_context() as session:\n        return await run_history(\n            session=session,\n            run_type=\"task_run\",\n            history_start=history_start,\n            history_end=history_end,\n            history_interval=history_interval,\n            flows=flows,\n            flow_runs=flow_runs,\n            task_runs=task_runs,\n            deployments=deployments,\n        )\n
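A hedged sketch of requesting history over HTTP; note that the interval is supplied under the history_interval_seconds alias and must be at least one second:

import httpx

# hypothetical sketch: hourly task run history for a one-day window
body = {
    "history_start": "2023-01-01T00:00:00+00:00",
    "history_end": "2023-01-02T00:00:00+00:00",
    "history_interval_seconds": 3600,
}
with httpx.Client(base_url="http://127.0.0.1:4200/api") as client:
    for interval in client.post("/task_runs/history", json=body).json():
        print(interval["interval_start"], interval["states"])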
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.update_task_run","title":"update_task_run async","text":"

Updates a task run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_runs.py
@router.patch(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def update_task_run(\n    task_run: schemas.actions.TaskRunUpdate,\n    task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n\"\"\"\n    Updates a task run.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.task_runs.update_task_run(\n            session=session, task_run=task_run, task_run_id=task_run_id\n        )\n    if not result:\n        raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Task run not found\")\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/models/deployments/","title":"server.models.deployments","text":""},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments","title":"prefect.server.models.deployments","text":"

Functions for interacting with deployment ORM objects. Intended for internal use by the Prefect REST API.

"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.check_work_queues_for_deployment","title":"check_work_queues_for_deployment async","text":"

Get work queues that can pick up the specified deployment.

Work queues will pick up a deployment when all of the following are met:

- The deployment has ALL tags that the work queue has (i.e. the work queue's tags must be a subset of the deployment's tags).
- The work queue's specified deployment IDs match the deployment's ID, or the work queue does NOT have specified deployment IDs.
- The work queue's specified flow runners match the deployment's flow runner, or the work queue does NOT have a specified flow runner.

Notes on the query:

- The database currently allows both \"null\" and empty lists as null values in filters, so both cases are caught with \"or\".
- `json_contains(A, B)` should be interpreted as \"True if A contains B\".

Returns:

Type Description List[schemas.core.WorkQueue]

List[db.WorkQueue]: WorkQueues

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/deployments.py
@inject_db\nasync def check_work_queues_for_deployment(\n    db: PrefectDBInterface, session: sa.orm.Session, deployment_id: UUID\n) -> List[schemas.core.WorkQueue]:\n\"\"\"\n    Get work queues that can pick up the specified deployment.\n\n    Work queues will pick up a deployment when all of the following are met.\n\n    - The deployment has ALL tags that the work queue has (i.e. the work\n    queue's tags must be a subset of the deployment's tags).\n    - The work queue's specified deployment IDs match the deployment's ID,\n    or the work queue does NOT have specified deployment IDs.\n    - The work queue's specified flow runners match the deployment's flow\n    runner or the work queue does NOT have a specified flow runner.\n\n    Notes on the query:\n\n    - Our database currently allows either \"null\" and empty lists as\n    null values in filters, so we need to catch both cases with \"or\".\n    - `json_contains(A, B)` should be interepreted as \"True if A\n    contains B\".\n\n    Returns:\n        List[db.WorkQueue]: WorkQueues\n    \"\"\"\n    deployment = await session.get(db.Deployment, deployment_id)\n    if not deployment:\n        raise ObjectNotFoundError(f\"Deployment with id {deployment_id} not found\")\n\n    query = (\n        select(db.WorkQueue)\n        # work queue tags are a subset of deployment tags\n        .filter(\n            or_(\n                json_contains(deployment.tags, db.WorkQueue.filter[\"tags\"]),\n                json_contains([], db.WorkQueue.filter[\"tags\"]),\n                json_contains(None, db.WorkQueue.filter[\"tags\"]),\n            )\n        )\n        # deployment_ids is null or contains the deployment's ID\n        .filter(\n            or_(\n                json_contains(\n                    db.WorkQueue.filter[\"deployment_ids\"],\n                    str(deployment.id),\n                ),\n                json_contains(None, db.WorkQueue.filter[\"deployment_ids\"]),\n                json_contains([], db.WorkQueue.filter[\"deployment_ids\"]),\n            )\n        )\n    )\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
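A minimal sketch of calling this model function directly, assuming a database interface from provide_database_interface() and an existing deployment id:

from prefect.server.database.dependencies import provide_database_interface
from prefect.server.models.deployments import check_work_queues_for_deployment


async def queue_names_for(deployment_id):
    # hypothetical sketch: list the work queues that would pick up the deployment
    db = provide_database_interface()
    async with db.session_context() as session:
        queues = await check_work_queues_for_deployment(
            session=session, deployment_id=deployment_id
        )
        return [queue.name for queue in queues]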
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.count_deployments","title":"count_deployments async","text":"

Count deployments.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required flow_filter schemas.filters.FlowFilter

only count deployments whose flows match these criteria

None flow_run_filter schemas.filters.FlowRunFilter

only count deployments whose flow runs match these criteria

None task_run_filter schemas.filters.TaskRunFilter

only count deployments whose task runs match these criteria

None deployment_filter schemas.filters.DeploymentFilter

only count deployments that match these filters

None work_pool_filter schemas.filters.WorkPoolFilter

only count deployments that match these work pool filters

None work_queue_filter schemas.filters.WorkQueueFilter

only count deployments that match these work pool queue filters

None

Returns:

Name Type Description int int

the number of deployments matching filters

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/deployments.py
@inject_db\nasync def count_deployments(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n    work_queue_filter: schemas.filters.WorkQueueFilter = None,\n) -> int:\n\"\"\"\n    Count deployments.\n\n    Args:\n        session: A database session\n        flow_filter: only count deployments whose flows match these criteria\n        flow_run_filter: only count deployments whose flow runs match these criteria\n        task_run_filter: only count deployments whose task runs match these criteria\n        deployment_filter: only count deployment that match these filters\n        work_pool_filter: only count deployments that match these work pool filters\n        work_queue_filter: only count deployments that match these work pool queue filters\n\n    Returns:\n        int: the number of deployments matching filters\n    \"\"\"\n\n    query = select(sa.func.count(sa.text(\"*\"))).select_from(db.Deployment)\n\n    query = await _apply_deployment_filters(\n        query=query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        work_queue_filter=work_queue_filter,\n        db=db,\n    )\n\n    result = await session.execute(query)\n    return result.scalar()\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.create_deployment","title":"create_deployment async","text":"

Upserts a deployment.

Parameters:

Name Type Description Default session sa.orm.Session

a database session

required deployment schemas.core.Deployment

a deployment model

required

Returns:

Type Description

db.Deployment: the newly-created or updated deployment

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/deployments.py
@inject_db\nasync def create_deployment(\n    session: sa.orm.Session, deployment: schemas.core.Deployment, db: PrefectDBInterface\n):\n\"\"\"Upserts a deployment.\n\n    Args:\n        session: a database session\n        deployment: a deployment model\n\n    Returns:\n        db.Deployment: the newly-created or updated deployment\n\n    \"\"\"\n\n    # set `updated` manually\n    # known limitation of `on_conflict_do_update`, will not use `Column.onupdate`\n    # https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#the-set-clause\n    deployment.updated = pendulum.now(\"UTC\")\n\n    insert_values = deployment.dict(shallow=True, exclude_unset=True)\n\n    insert_stmt = (\n        (await db.insert(db.Deployment))\n        .values(**insert_values)\n        .on_conflict_do_update(\n            index_elements=db.deployment_unique_upsert_columns,\n            set_={\n                **deployment.dict(\n                    shallow=True,\n                    exclude_unset=True,\n                    exclude={\"id\", \"created\", \"created_by\"},\n                ),\n            },\n        )\n    )\n\n    await session.execute(insert_stmt)\n\n    query = (\n        sa.select(db.Deployment)\n        .where(\n            sa.and_(\n                db.Deployment.flow_id == deployment.flow_id,\n                db.Deployment.name == deployment.name,\n            )\n        )\n        .execution_options(populate_existing=True)\n    )\n    result = await session.execute(query)\n    model = result.scalar()\n\n    # because this could upsert a different schedule, delete any runs from the old\n    # deployment\n    await _delete_scheduled_runs(\n        session=session, deployment_id=model.id, db=db, auto_scheduled_only=True\n    )\n\n    return model\n
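A minimal sketch of the upsert behavior, assuming a database interface from provide_database_interface(); calling it a second time with the same flow_id and name updates the existing row in place:

from prefect.server.database.dependencies import provide_database_interface
from prefect.server.models.deployments import create_deployment


async def upsert_deployment(deployment):
    # hypothetical sketch: deployment is a schemas.core.Deployment instance
    db = provide_database_interface()
    async with db.session_context(begin_transaction=True) as session:
        return await create_deployment(session=session, deployment=deployment)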
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.delete_deployment","title":"delete_deployment async","text":"

Delete a deployment by id.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required deployment_id UUID

a deployment id

required

Returns:

Name Type Description bool bool

whether or not the deployment was deleted

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/deployments.py
@inject_db\nasync def delete_deployment(\n    session: sa.orm.Session, deployment_id: UUID, db: PrefectDBInterface\n) -> bool:\n\"\"\"\n    Delete a deployment by id.\n\n    Args:\n        session: A database session\n        deployment_id: a deployment id\n\n    Returns:\n        bool: whether or not the deployment was deleted\n    \"\"\"\n\n    # delete scheduled runs, both auto- and user- created.\n    await _delete_scheduled_runs(\n        session=session, deployment_id=deployment_id, auto_scheduled_only=False\n    )\n\n    result = await session.execute(\n        delete(db.Deployment).where(db.Deployment.id == deployment_id)\n    )\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployment","title":"read_deployment async","text":"

Reads a deployment by id.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required deployment_id UUID

a deployment id

required

Returns:

Type Description

db.Deployment: the deployment

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/deployments.py
@inject_db\nasync def read_deployment(\n    session: sa.orm.Session, deployment_id: UUID, db: PrefectDBInterface\n):\n\"\"\"Reads a deployment by id.\n\n    Args:\n        session: A database session\n        deployment_id: a deployment id\n\n    Returns:\n        db.Deployment: the deployment\n    \"\"\"\n\n    return await session.get(db.Deployment, deployment_id)\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployment_by_name","title":"read_deployment_by_name async","text":"

Reads a deployment by name.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required name str

a deployment name

required flow_name str

the name of the flow the deployment belongs to

required

Returns:

Type Description

db.Deployment: the deployment

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/deployments.py
@inject_db\nasync def read_deployment_by_name(\n    session: sa.orm.Session, name: str, flow_name: str, db: PrefectDBInterface\n):\n\"\"\"Reads a deployment by name.\n\n    Args:\n        session: A database session\n        name: a deployment name\n        flow_name: the name of the flow the deployment belongs to\n\n    Returns:\n        db.Deployment: the deployment\n    \"\"\"\n\n    result = await session.execute(\n        select(db.Deployment)\n        .join(db.Flow, db.Deployment.flow_id == db.Flow.id)\n        .where(\n            sa.and_(\n                db.Flow.name == flow_name,\n                db.Deployment.name == name,\n            )\n        )\n        .limit(1)\n    )\n    return result.scalar()\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployments","title":"read_deployments async","text":"

Read deployments.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required offset int

Query offset

None limit int

Query limit

None flow_filter schemas.filters.FlowFilter

only select deployments whose flows match these criteria

None flow_run_filter schemas.filters.FlowRunFilter

only select deployments whose flow runs match these criteria

None task_run_filter schemas.filters.TaskRunFilter

only select deployments whose task runs match these criteria

None deployment_filter schemas.filters.DeploymentFilter

only select deployments that match these filters

None work_pool_filter schemas.filters.WorkPoolFilter

only select deployments whose work pools match these criteria

None work_queue_filter schemas.filters.WorkQueueFilter

only select deployments whose work pool queues match these criteria

None sort schemas.sorting.DeploymentSort

the sort criteria for selected deployments. Defaults to name ASC.

schemas.sorting.DeploymentSort.NAME_ASC

Returns:

Type Description

List[db.Deployment]: deployments

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/deployments.py
@inject_db\nasync def read_deployments(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    offset: int = None,\n    limit: int = None,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n    work_queue_filter: schemas.filters.WorkQueueFilter = None,\n    sort: schemas.sorting.DeploymentSort = schemas.sorting.DeploymentSort.NAME_ASC,\n):\n\"\"\"\n    Read deployments.\n\n    Args:\n        session: A database session\n        offset: Query offset\n        limit: Query limit\n        flow_filter: only select deployments whose flows match these criteria\n        flow_run_filter: only select deployments whose flow runs match these criteria\n        task_run_filter: only select deployments whose task runs match these criteria\n        deployment_filter: only select deployment that match these filters\n        work_pool_filter: only select deployments whose work pools match these criteria\n        work_queue_filter: only select deployments whose work pool queues match these criteria\n        sort: the sort criteria for selected deployments. Defaults to `name` ASC.\n\n    Returns:\n        List[db.Deployment]: deployments\n    \"\"\"\n\n    query = select(db.Deployment).order_by(sort.as_sql_sort(db=db))\n\n    query = await _apply_deployment_filters(\n        query=query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        work_queue_filter=work_queue_filter,\n        db=db,\n    )\n\n    if offset is not None:\n        query = query.offset(offset)\n    if limit is not None:\n        query = query.limit(limit)\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.schedule_runs","title":"schedule_runs async","text":"

Schedule flow runs for a deployment

Parameters:

Name Type Description Default session sa.orm.Session

a database session

required deployment_id UUID

the id of the deployment to schedule

required start_time datetime.datetime

the time from which to start scheduling runs

None end_time datetime.datetime

runs will be scheduled until at most this time

None min_time datetime.timedelta

runs will be scheduled until at least this far in the future

None min_runs int

a minimum number of runs to schedule

None max_runs int

a maximum number of runs to schedule

None

This function will generate the minimum number of runs that satisfy the min and max times, and the min and max counts. Specifically, the following order will be respected.

- Runs will be generated starting on or after the `start_time`
- No more than `max_runs` runs will be generated
- No runs will be generated after `end_time` is reached
- At least `min_runs` runs will be generated
- Runs will be generated until at least `start_time` + `min_time` is reached

Returns:

Type Description List[UUID]

a list of flow run ids scheduled for the deployment

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/deployments.py
async def schedule_runs(\n    session: sa.orm.Session,\n    deployment_id: UUID,\n    start_time: datetime.datetime = None,\n    end_time: datetime.datetime = None,\n    min_time: datetime.timedelta = None,\n    min_runs: int = None,\n    max_runs: int = None,\n    auto_scheduled: bool = True,\n) -> List[UUID]:\n\"\"\"\n    Schedule flow runs for a deployment\n\n    Args:\n        session: a database session\n        deployment_id: the id of the deployment to schedule\n        start_time: the time from which to start scheduling runs\n        end_time: runs will be scheduled until at most this time\n        min_time: runs will be scheduled until at least this far in the future\n        min_runs: a minimum amount of runs to schedule\n        max_runs: a maximum amount of runs to schedule\n\n    This function will generate the minimum number of runs that satisfy the min\n    and max times, and the min and max counts. Specifically, the following order\n    will be respected.\n\n        - Runs will be generated starting on or after the `start_time`\n        - No more than `max_runs` runs will be generated\n        - No runs will be generated after `end_time` is reached\n        - At least `min_runs` runs will be generated\n        - Runs will be generated until at least `start_time` + `min_time` is reached\n\n    Returns:\n        a list of flow run ids scheduled for the deployment\n    \"\"\"\n    if min_runs is None:\n        min_runs = PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS.value()\n    if max_runs is None:\n        max_runs = PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS.value()\n    if start_time is None:\n        start_time = pendulum.now(\"UTC\")\n    if end_time is None:\n        end_time = start_time + (\n            PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME.value()\n        )\n    if min_time is None:\n        min_time = PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME.value()\n\n    start_time = pendulum.instance(start_time)\n    end_time = pendulum.instance(end_time)\n\n    runs = await _generate_scheduled_flow_runs(\n        session=session,\n        deployment_id=deployment_id,\n        start_time=start_time,\n        end_time=end_time,\n        min_time=min_time,\n        min_runs=min_runs,\n        max_runs=max_runs,\n        auto_scheduled=auto_scheduled,\n    )\n    return await _insert_scheduled_flow_runs(session=session, runs=runs)\n
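A minimal sketch of calling the scheduler directly, assuming a database interface from provide_database_interface(); the explicit bounds below override the scheduler settings described above:

import datetime

from prefect.server.database.dependencies import provide_database_interface
from prefect.server.models.deployments import schedule_runs


async def schedule_next_runs(deployment_id):
    # hypothetical sketch: schedule at most five runs covering at least the next six hours
    db = provide_database_interface()
    async with db.session_context(begin_transaction=True) as session:
        return await schedule_runs(
            session=session,
            deployment_id=deployment_id,
            min_time=datetime.timedelta(hours=6),
            min_runs=1,
            max_runs=5,
        )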
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.update_deployment","title":"update_deployment async","text":"

Updates a deployment.

Parameters:

Name Type Description Default session sa.orm.Session

a database session

required deployment_id UUID

the ID of the deployment to modify

required deployment schemas.actions.DeploymentUpdate

changes to a deployment model

required

Returns:

Name Type Description bool bool

whether the deployment was updated

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/deployments.py
@inject_db\nasync def update_deployment(\n    session: sa.orm.Session,\n    deployment_id: UUID,\n    deployment: schemas.actions.DeploymentUpdate,\n    db: PrefectDBInterface,\n) -> bool:\n\"\"\"Updates a deployment.\n\n    Args:\n        session: a database session\n        deployment_id: the ID of the deployment to modify\n        deployment: changes to a deployment model\n\n    Returns:\n        bool: whether the deployment was updated\n\n    \"\"\"\n\n    # exclude_unset=True allows us to only update values provided by\n    # the user, ignoring any defaults on the model\n    update_data = deployment.dict(\n        shallow=True,\n        exclude_unset=True,\n        exclude={\"work_pool_name\"},\n    )\n    if deployment.work_pool_name and deployment.work_queue_name:\n        # If a specific pool name/queue name combination was provided, get the\n        # ID for that work pool queue.\n        update_data[\n            \"work_queue_id\"\n        ] = await WorkerLookups()._get_work_queue_id_from_name(\n            session=session,\n            work_pool_name=deployment.work_pool_name,\n            work_queue_name=deployment.work_queue_name,\n            create_queue_if_not_found=True,\n        )\n    elif deployment.work_pool_name:\n        # If just a pool name was provided, get the ID for its default\n        # work pool queue.\n        update_data[\n            \"work_queue_id\"\n        ] = await WorkerLookups()._get_default_work_queue_id_from_work_pool_name(\n            session=session,\n            work_pool_name=deployment.work_pool_name,\n        )\n    elif deployment.work_queue_name:\n        # If just a queue name was provided, ensure the queue exists and\n        # get its ID.\n        work_queue = await models.work_queues._ensure_work_queue_exists(\n            session=session, name=update_data[\"work_queue_name\"], db=db\n        )\n        update_data[\"work_queue_id\"] = work_queue.id\n\n    update_stmt = (\n        sa.update(db.Deployment)\n        .where(db.Deployment.id == deployment_id)\n        .values(**update_data)\n    )\n    result = await session.execute(update_stmt)\n\n    # delete any auto scheduled runs that would have reflected the old deployment config\n    await _delete_scheduled_runs(\n        session=session, deployment_id=deployment_id, db=db, auto_scheduled_only=True\n    )\n\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flow_run_states/","title":"server.models.flow_run_states","text":""},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states","title":"prefect.server.models.flow_run_states","text":"

Functions for interacting with flow run state ORM objects. Intended for internal use by the Prefect REST API.

"},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states.delete_flow_run_state","title":"delete_flow_run_state async","text":"

Delete a flow run state by id.

Parameters:

- `session` (`sa.orm.Session`): A database session (required)
- `flow_run_state_id` (`UUID`): a flow run state id (required)

Returns:

- `bool`: whether or not the flow run state was deleted

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_run_states.py
@inject_db\nasync def delete_flow_run_state(\n    session: sa.orm.Session, flow_run_state_id: UUID, db: PrefectDBInterface\n) -> bool:\n\"\"\"\n    Delete a flow run state by id.\n\n    Args:\n        session: A database session\n        flow_run_state_id: a flow run state id\n\n    Returns:\n        bool: whether or not the flow run state was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.FlowRunState).where(db.FlowRunState.id == flow_run_state_id)\n    )\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states.read_flow_run_state","title":"read_flow_run_state async","text":"

Reads a flow run state by id.

Parameters:

- `session` (`sa.orm.Session`): A database session (required)
- `flow_run_state_id` (`UUID`): a flow run state id (required)

Returns:

- `db.FlowRunState`: the flow state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_run_states.py
@inject_db\nasync def read_flow_run_state(\n    session: sa.orm.Session, flow_run_state_id: UUID, db: PrefectDBInterface\n):\n\"\"\"\n    Reads a flow run state by id.\n\n    Args:\n        session: A database session\n        flow_run_state_id: a flow run state id\n\n    Returns:\n        db.FlowRunState: the flow state\n    \"\"\"\n\n    return await session.get(db.FlowRunState, flow_run_state_id)\n
"},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states.read_flow_run_states","title":"read_flow_run_states async","text":"

Reads flow runs states for a flow run.

Parameters:

- `session` (`sa.orm.Session`): A database session (required)
- `flow_run_id` (`UUID`): the flow run id (required)

Returns:

- `List[db.FlowRunState]`: the flow run states

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_run_states.py
@inject_db\nasync def read_flow_run_states(\n    session: sa.orm.Session, flow_run_id: UUID, db: PrefectDBInterface\n):\n\"\"\"\n    Reads flow runs states for a flow run.\n\n    Args:\n        session: A database session\n        flow_run_id: the flow run id\n\n    Returns:\n        List[db.FlowRunState]: the flow run states\n    \"\"\"\n\n    query = (\n        select(db.FlowRunState)\n        .filter_by(flow_run_id=flow_run_id)\n        .order_by(db.FlowRunState.timestamp)\n    )\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
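A short sketch of reading a run's state history, assuming `session` is an open server database session and `flow_run_id` is a known flow run id:

```python
from prefect.server import models

async def print_state_history(session, flow_run_id):
    # States come back ordered by timestamp, oldest first
    states = await models.flow_run_states.read_flow_run_states(
        session=session, flow_run_id=flow_run_id
    )
    for state in states:
        print(state.timestamp, state.type, state.name)
```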
"},{"location":"api-ref/server/models/flow_runs/","title":"server.models.flow_runs","text":""},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs","title":"prefect.server.models.flow_runs","text":"

Functions for interacting with flow run ORM objects. Intended for internal use by the Prefect REST API.

"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.count_flow_runs","title":"count_flow_runs async","text":"

Count flow runs.

Parameters:

- `session` (`AsyncSession`): a database session (required)
- `flow_filter` (`schemas.filters.FlowFilter`): only count flow runs whose flows match these filters (default `None`)
- `flow_run_filter` (`schemas.filters.FlowRunFilter`): only count flow runs that match these filters (default `None`)
- `task_run_filter` (`schemas.filters.TaskRunFilter`): only count flow runs whose task runs match these filters (default `None`)
- `deployment_filter` (`schemas.filters.DeploymentFilter`): only count flow runs whose deployments match these filters (default `None`)

Returns:

- `int`: count of flow runs

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_runs.py
@inject_db\nasync def count_flow_runs(\n    session: AsyncSession,\n    db: PrefectDBInterface,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n    work_queue_filter: schemas.filters.WorkQueueFilter = None,\n) -> int:\n\"\"\"\n    Count flow runs.\n\n    Args:\n        session: a database session\n        flow_filter: only count flow runs whose flows match these filters\n        flow_run_filter: only count flow runs that match these filters\n        task_run_filter: only count flow runs whose task runs match these filters\n        deployment_filter: only count flow runs whose deployments match these filters\n\n    Returns:\n        int: count of flow runs\n    \"\"\"\n\n    query = select(sa.func.count(sa.text(\"*\"))).select_from(db.FlowRun)\n\n    query = await _apply_flow_run_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        work_queue_filter=work_queue_filter,\n        db=db,\n    )\n\n    result = await session.execute(query)\n    return result.scalar()\n
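As a hedged example, the sketch below counts a specific set of runs by id, reusing the `FlowRunFilter(id=FlowRunFilterId(any_=...))` construction that appears elsewhere in this module; `session` and `flow_run_ids` are assumed to be supplied by the caller.

```python
from prefect.server import models, schemas

async def count_specific_runs(session, flow_run_ids):
    flow_run_filter = schemas.filters.FlowRunFilter(
        id=schemas.filters.FlowRunFilterId(any_=list(flow_run_ids))
    )
    return await models.flow_runs.count_flow_runs(
        session=session, flow_run_filter=flow_run_filter
    )
```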
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.create_flow_run","title":"create_flow_run async","text":"

Creates a new flow run.

If the provided flow run has a state attached, it will also be created.

Parameters:

- `session` (`AsyncSession`): a database session (required)
- `flow_run` (`schemas.core.FlowRun`): a flow run model (required)

Returns:

- `db.FlowRun`: the newly-created flow run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_runs.py
@inject_db\nasync def create_flow_run(\n    session: AsyncSession,\n    flow_run: schemas.core.FlowRun,\n    db: PrefectDBInterface,\n    orchestration_parameters: dict = None,\n):\n\"\"\"Creates a new flow run.\n\n    If the provided flow run has a state attached, it will also be created.\n\n    Args:\n        session: a database session\n        flow_run: a flow run model\n\n    Returns:\n        db.FlowRun: the newly-created flow run\n    \"\"\"\n    now = pendulum.now(\"UTC\")\n\n    flow_run_dict = dict(\n        **flow_run.dict(\n            shallow=True,\n            exclude={\n                \"created\",\n                \"state\",\n                \"estimated_run_time\",\n                \"estimated_start_time_delta\",\n            },\n            exclude_unset=True,\n        ),\n        created=now,\n    )\n\n    # if no idempotency key was provided, create the run directly\n    if not flow_run.idempotency_key:\n        model = db.FlowRun(**flow_run_dict)\n        session.add(model)\n        await session.flush()\n\n    # otherwise let the database take care of enforcing idempotency\n    else:\n        insert_stmt = (\n            (await db.insert(db.FlowRun))\n            .values(**flow_run_dict)\n            .on_conflict_do_nothing(\n                index_elements=db.flow_run_unique_upsert_columns,\n            )\n        )\n        await session.execute(insert_stmt)\n\n        # read the run to see if idempotency was applied or not\n        query = (\n            sa.select(db.FlowRun)\n            .where(\n                sa.and_(\n                    db.FlowRun.flow_id == flow_run.flow_id,\n                    db.FlowRun.idempotency_key == flow_run.idempotency_key,\n                )\n            )\n            .limit(1)\n            .execution_options(populate_existing=True)\n            .options(\n                selectinload(db.FlowRun.work_queue).selectinload(db.WorkQueue.work_pool)\n            )\n        )\n        result = await session.execute(query)\n        model = result.scalar()\n\n    # if the flow run was created in this function call then we need to set the\n    # state. If it was created idempotently, the created time won't match.\n    if model.created == now and flow_run.state:\n        await models.flow_runs.set_flow_run_state(\n            session=session,\n            flow_run_id=model.id,\n            state=flow_run.state,\n            force=True,\n            orchestration_parameters=orchestration_parameters,\n        )\n    return model\n
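A minimal sketch of creating a run with an idempotency key, assuming `session` is a server database session, `flow_id` is an existing flow, and that `schemas.states.Pending()` is available as a state constructor; the key value is a placeholder.

```python
from prefect.server import models, schemas

async def create_pending_run(session, flow_id):
    flow_run = schemas.core.FlowRun(
        flow_id=flow_id,
        idempotency_key="nightly-2023-01-01",  # repeated calls return the same run
        state=schemas.states.Pending(),
    )
    return await models.flow_runs.create_flow_run(
        session=session, flow_run=flow_run
    )
```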
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.delete_flow_run","title":"delete_flow_run async","text":"

Delete a flow run by flow_run_id.

Parameters:

- `session` (`AsyncSession`): A database session (required)
- `flow_run_id` (`UUID`): a flow run id (required)

Returns:

- `bool`: whether or not the flow run was deleted

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_runs.py
@inject_db\nasync def delete_flow_run(\n    session: AsyncSession, flow_run_id: UUID, db: PrefectDBInterface\n) -> bool:\n\"\"\"\n    Delete a flow run by flow_run_id.\n\n    Args:\n        session: A database session\n        flow_run_id: a flow run id\n\n    Returns:\n        bool: whether or not the flow run was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.FlowRun).where(db.FlowRun.id == flow_run_id)\n    )\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_flow_run","title":"read_flow_run async","text":"

Reads a flow run by id.

Parameters:

- `session` (`AsyncSession`): A database session (required)
- `flow_run_id` (`UUID`): a flow run id (required)

Returns:

- `db.FlowRun`: the flow run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_runs.py
@inject_db\nasync def read_flow_run(\n    session: AsyncSession,\n    flow_run_id: UUID,\n    db: PrefectDBInterface,\n    for_update: bool = False,\n):\n\"\"\"\n    Reads a flow run by id.\n\n    Args:\n        session: A database session\n        flow_run_id: a flow run id\n\n    Returns:\n        db.FlowRun: the flow run\n    \"\"\"\n    select = (\n        sa.select(db.FlowRun)\n        .where(db.FlowRun.id == flow_run_id)\n        .options(\n            selectinload(db.FlowRun.work_queue).selectinload(db.WorkQueue.work_pool)\n        )\n    )\n\n    if for_update:\n        select = select.with_for_update()\n\n    result = await session.execute(select)\n    return result.scalar()\n
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_flow_runs","title":"read_flow_runs async","text":"

Read flow runs.

Parameters:

- `session` (`AsyncSession`): a database session (required)
- `columns` (`List`): a list of the flow run ORM columns to load, for performance (default `None`)
- `flow_filter` (`schemas.filters.FlowFilter`): only select flow runs whose flows match these filters (default `None`)
- `flow_run_filter` (`schemas.filters.FlowRunFilter`): only select flow runs that match these filters (default `None`)
- `task_run_filter` (`schemas.filters.TaskRunFilter`): only select flow runs whose task runs match these filters (default `None`)
- `deployment_filter` (`schemas.filters.DeploymentFilter`): only select flow runs whose deployments match these filters (default `None`)
- `offset` (`int`): Query offset (default `None`)
- `limit` (`int`): Query limit (default `None`)
- `sort` (`schemas.sorting.FlowRunSort`): Query sort (default `schemas.sorting.FlowRunSort.ID_DESC`)

Returns:

- `List[db.FlowRun]`: flow runs

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_runs.py
@inject_db\nasync def read_flow_runs(\n    session: AsyncSession,\n    db: PrefectDBInterface,\n    columns: List = None,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n    work_queue_filter: schemas.filters.WorkQueueFilter = None,\n    offset: int = None,\n    limit: int = None,\n    sort: schemas.sorting.FlowRunSort = schemas.sorting.FlowRunSort.ID_DESC,\n):\n\"\"\"\n    Read flow runs.\n\n    Args:\n        session: a database session\n        columns: a list of the flow run ORM columns to load, for performance\n        flow_filter: only select flow runs whose flows match these filters\n        flow_run_filter: only select flow runs match these filters\n        task_run_filter: only select flow runs whose task runs match these filters\n        deployment_filter: only sleect flow runs whose deployments match these filters\n        offset: Query offset\n        limit: Query limit\n        sort: Query sort\n\n    Returns:\n        List[db.FlowRun]: flow runs\n    \"\"\"\n    query = (\n        select(db.FlowRun)\n        .order_by(sort.as_sql_sort(db))\n        .options(\n            selectinload(db.FlowRun.work_queue).selectinload(db.WorkQueue.work_pool)\n        )\n    )\n\n    if columns:\n        query = query.options(load_only(*columns))\n\n    query = await _apply_flow_run_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        work_queue_filter=work_queue_filter,\n        db=db,\n    )\n\n    if offset is not None:\n        query = query.offset(offset)\n\n    if limit is not None:\n        query = query.limit(limit)\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
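The sketch below pages through recent runs for one deployment. It assumes `session` and `deployment_id` are provided, and that `DeploymentFilterId(any_=...)` follows the same pattern as the other `any_`-style filters shown in this module.

```python
from prefect.server import models, schemas

async def recent_runs_for_deployment(session, deployment_id, limit=10):
    deployment_filter = schemas.filters.DeploymentFilter(
        id=schemas.filters.DeploymentFilterId(any_=[deployment_id])
    )
    return await models.flow_runs.read_flow_runs(
        session=session,
        deployment_filter=deployment_filter,
        limit=limit,
        sort=schemas.sorting.FlowRunSort.ID_DESC,
    )
```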
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_task_run_dependencies","title":"read_task_run_dependencies async","text":"

Get a task run dependency map for a given flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_runs.py
async def read_task_run_dependencies(\n    session: AsyncSession,\n    flow_run_id: UUID,\n) -> List[DependencyResult]:\n\"\"\"\n    Get a task run dependency map for a given flow run.\n    \"\"\"\n    flow_run = await models.flow_runs.read_flow_run(\n        session=session, flow_run_id=flow_run_id\n    )\n    if not flow_run:\n        raise ObjectNotFoundError(f\"Flow run with id {flow_run_id} not found\")\n\n    task_runs = await models.task_runs.read_task_runs(\n        session=session,\n        flow_run_filter=schemas.filters.FlowRunFilter(\n            id=schemas.filters.FlowRunFilterId(any_=[flow_run_id])\n        ),\n    )\n\n    dependency_graph = []\n\n    for task_run in task_runs:\n        inputs = list(set(chain(*task_run.task_inputs.values())))\n        untrackable_result_status = (\n            False\n            if task_run.state is None\n            else task_run.state.state_details.untrackable_result\n        )\n        dependency_graph.append(\n            {\n                \"id\": task_run.id,\n                \"upstream_dependencies\": inputs,\n                \"state\": task_run.state,\n                \"expected_start_time\": task_run.expected_start_time,\n                \"name\": task_run.name,\n                \"start_time\": task_run.start_time,\n                \"end_time\": task_run.end_time,\n                \"total_run_time\": task_run.total_run_time,\n                \"estimated_run_time\": task_run.estimated_run_time,\n                \"untrackable_result\": untrackable_result_status,\n            }\n        )\n\n    return dependency_graph\n
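For example, a caller could summarize the returned dependency map like this (a sketch; `session` and `flow_run_id` are assumed to be supplied):

```python
from prefect.server import models

async def summarize_dependencies(session, flow_run_id):
    graph = await models.flow_runs.read_task_run_dependencies(
        session=session, flow_run_id=flow_run_id
    )
    for node in graph:
        upstream = len(node["upstream_dependencies"])
        print(f"{node['name']}: {upstream} upstream task run(s)")
```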
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.set_flow_run_state","title":"set_flow_run_state async","text":"

Creates a new orchestrated flow run state.

Setting a new state on a run is one of the principal actions governed by Prefect's orchestration logic. Setting a new run state does not guarantee that state will be created; instead, it triggers orchestration rules that govern the proposed state input. If the state is considered valid, it will be written to the database. Otherwise, a different state, or no state, may be created. A force flag is supplied to bypass a subset of orchestration logic.

Parameters:

- `session` (`AsyncSession`): a database session (required)
- `flow_run_id` (`UUID`): the flow run id (required)
- `state` (`schemas.states.State`): a flow run state model (required)
- `force` (`bool`): if False, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied. (default `False`)

Returns:

- `OrchestrationResult`: OrchestrationResult object

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_runs.py
async def set_flow_run_state(\n    session: AsyncSession,\n    flow_run_id: UUID,\n    state: schemas.states.State,\n    force: bool = False,\n    flow_policy: BaseOrchestrationPolicy = None,\n    orchestration_parameters: dict = None,\n) -> OrchestrationResult:\n\"\"\"\n    Creates a new orchestrated flow run state.\n\n    Setting a new state on a run is the one of the principal actions that is governed by\n    Prefect's orchestration logic. Setting a new run state will not guarantee creation,\n    but instead trigger orchestration rules to govern the proposed `state` input. If\n    the state is considered valid, it will be written to the database. Otherwise, a\n    it's possible a different state, or no state, will be created. A `force` flag is\n    supplied to bypass a subset of orchestration logic.\n\n    Args:\n        session: a database session\n        flow_run_id: the flow run id\n        state: a flow run state model\n        force: if False, orchestration rules will be applied that may alter or prevent\n            the state transition. If True, orchestration rules are not applied.\n\n    Returns:\n        OrchestrationResult object\n    \"\"\"\n\n    # load the flow run\n    run = await models.flow_runs.read_flow_run(\n        session=session,\n        flow_run_id=flow_run_id,\n        # Lock the row to prevent orchestration race conditions\n        for_update=True,\n    )\n\n    if not run:\n        raise ObjectNotFoundError(f\"Flow run with id {flow_run_id} not found\")\n\n    initial_state = run.state.as_state() if run.state else None\n    initial_state_type = initial_state.type if initial_state else None\n    proposed_state_type = state.type if state else None\n    intended_transition = (initial_state_type, proposed_state_type)\n\n    if force or flow_policy is None:\n        flow_policy = MinimalFlowPolicy\n\n    orchestration_rules = flow_policy.compile_transition_rules(*intended_transition)\n    global_rules = GlobalFlowPolicy.compile_transition_rules(*intended_transition)\n\n    context = FlowOrchestrationContext(\n        session=session,\n        run=run,\n        initial_state=initial_state,\n        proposed_state=state,\n    )\n\n    if orchestration_parameters is not None:\n        context.parameters = orchestration_parameters\n\n    # apply orchestration rules and create the new flow run state\n    async with contextlib.AsyncExitStack() as stack:\n        for rule in orchestration_rules:\n            context = await stack.enter_async_context(\n                rule(context, *intended_transition)\n            )\n\n        for rule in global_rules:\n            context = await stack.enter_async_context(\n                rule(context, *intended_transition)\n            )\n\n        await context.validate_proposed_state()\n\n    if context.orchestration_error is not None:\n        raise context.orchestration_error\n\n    result = OrchestrationResult(\n        state=context.validated_state,\n        status=context.response_status,\n        details=context.response_details,\n    )\n\n    # if a new state is being set (either ACCEPTED from user or REJECTED\n    # and set by the server), check for any notification policies\n    if result.status in (SetStateStatus.ACCEPT, SetStateStatus.REJECT):\n        await models.flow_run_notification_policies.queue_flow_run_notifications(\n            session=session, flow_run=run\n        )\n\n    return result\n
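A minimal sketch of proposing a terminal state and inspecting the orchestration result, assuming `session` and `flow_run_id` are supplied and that `schemas.states.Completed()` is available as a state constructor:

```python
from prefect.server import models, schemas

async def propose_completed(session, flow_run_id):
    # Orchestration rules may accept, reject, delay, or abort the proposal
    result = await models.flow_runs.set_flow_run_state(
        session=session,
        flow_run_id=flow_run_id,
        state=schemas.states.Completed(),
    )
    print(result.status, result.state.type if result.state else None)
    return result
```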
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.update_flow_run","title":"update_flow_run async","text":"

Updates a flow run.

Parameters:

- `session` (`AsyncSession`): a database session (required)
- `flow_run_id` (`UUID`): the flow run id to update (required)
- `flow_run` (`schemas.actions.FlowRunUpdate`): a flow run model (required)

Returns:

- `bool`: whether or not matching rows were found to update

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_runs.py
@inject_db\nasync def update_flow_run(\n    session: AsyncSession,\n    flow_run_id: UUID,\n    flow_run: schemas.actions.FlowRunUpdate,\n    db: PrefectDBInterface,\n) -> bool:\n\"\"\"\n    Updates a flow run.\n\n    Args:\n        session: a database session\n        flow_run_id: the flow run id to update\n        flow_run: a flow run model\n\n    Returns:\n        bool: whether or not matching rows were found to update\n    \"\"\"\n    update_stmt = (\n        sa.update(db.FlowRun).where(db.FlowRun.id == flow_run_id)\n        # exclude_unset=True allows us to only update values provided by\n        # the user, ignoring any defaults on the model\n        .values(**flow_run.dict(shallow=True, exclude_unset=True))\n    )\n    result = await session.execute(update_stmt)\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flows/","title":"server.models.flows","text":""},{"location":"api-ref/server/models/flows/#prefect.server.models.flows","title":"prefect.server.models.flows","text":"

Functions for interacting with flow ORM objects. Intended for internal use by the Prefect REST API.

"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.count_flows","title":"count_flows async","text":"

Count flows.

Parameters:

- `session` (`sa.orm.Session`): A database session (required)
- `flow_filter` (`schemas.filters.FlowFilter`): only count flows that match these filters (default `None`)
- `flow_run_filter` (`schemas.filters.FlowRunFilter`): only count flows whose flow runs match these filters (default `None`)
- `task_run_filter` (`schemas.filters.TaskRunFilter`): only count flows whose task runs match these filters (default `None`)
- `deployment_filter` (`schemas.filters.DeploymentFilter`): only count flows whose deployments match these filters (default `None`)
- `work_pool_filter` (`schemas.filters.WorkPoolFilter`): only count flows whose work pools match these filters (default `None`)

Returns:

- `int`: count of flows

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flows.py
@inject_db\nasync def count_flows(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n) -> int:\n\"\"\"\n    Count flows.\n\n    Args:\n        session: A database session\n        flow_filter: only count flows that match these filters\n        flow_run_filter: only count flows whose flow runs match these filters\n        task_run_filter: only count flows whose task runs match these filters\n        deployment_filter: only count flows whose deployments match these filters\n        work_pool_filter: only count flows whose work pools match these filters\n\n    Returns:\n        int: count of flows\n    \"\"\"\n\n    query = select(sa.func.count(sa.text(\"*\"))).select_from(db.Flow)\n\n    query = await _apply_flow_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        db=db,\n    )\n\n    result = await session.execute(query)\n    return result.scalar()\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.create_flow","title":"create_flow async","text":"

Creates a new flow.

If a flow with the same name already exists, the existing flow is returned.

Parameters:

- `session` (`sa.orm.Session`): a database session (required)
- `flow` (`schemas.core.Flow`): a flow model (required)

Returns:

- `db.Flow`: the newly-created or existing flow

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flows.py
@inject_db\nasync def create_flow(\n    session: sa.orm.Session, flow: schemas.core.Flow, db: PrefectDBInterface\n):\n\"\"\"\n    Creates a new flow.\n\n    If a flow with the same name already exists, the existing flow is returned.\n\n    Args:\n        session: a database session\n        flow: a flow model\n\n    Returns:\n        db.Flow: the newly-created or existing flow\n    \"\"\"\n\n    insert_stmt = (\n        (await db.insert(db.Flow))\n        .values(**flow.dict(shallow=True, exclude_unset=True))\n        .on_conflict_do_nothing(\n            index_elements=db.flow_unique_upsert_columns,\n        )\n    )\n    await session.execute(insert_stmt)\n\n    query = (\n        sa.select(db.Flow)\n        .where(\n            db.Flow.name == flow.name,\n        )\n        .limit(1)\n        .execution_options(populate_existing=True)\n    )\n    result = await session.execute(query)\n    model = result.scalar()\n    return model\n
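Because creation is idempotent on the flow name, calling this twice with the same name returns the same row. A sketch, assuming `session` is a server database session and the flow name is a placeholder:

```python
from prefect.server import models, schemas

async def get_or_create_flow(session, name="etl-pipeline"):
    flow = await models.flows.create_flow(
        session=session, flow=schemas.core.Flow(name=name)
    )
    return flow.id  # stable across repeated calls with the same name
```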
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.delete_flow","title":"delete_flow async","text":"

Delete a flow by id.

Parameters:

- `session` (`sa.orm.Session`): A database session (required)
- `flow_id` (`UUID`): a flow id (required)

Returns:

- `bool`: whether or not the flow was deleted

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flows.py
@inject_db\nasync def delete_flow(\n    session: sa.orm.Session, flow_id: UUID, db: PrefectDBInterface\n) -> bool:\n\"\"\"\n    Delete a flow by id.\n\n    Args:\n        session: A database session\n        flow_id: a flow id\n\n    Returns:\n        bool: whether or not the flow was deleted\n    \"\"\"\n\n    result = await session.execute(delete(db.Flow).where(db.Flow.id == flow_id))\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.read_flow","title":"read_flow async","text":"

Reads a flow by id.

Parameters:

- `session` (`sa.orm.Session`): A database session (required)
- `flow_id` (`UUID`): a flow id (required)

Returns:

- `db.Flow`: the flow

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flows.py
@inject_db\nasync def read_flow(session: sa.orm.Session, flow_id: UUID, db: PrefectDBInterface):\n\"\"\"\n    Reads a flow by id.\n\n    Args:\n        session: A database session\n        flow_id: a flow id\n\n    Returns:\n        db.Flow: the flow\n    \"\"\"\n    return await session.get(db.Flow, flow_id)\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.read_flow_by_name","title":"read_flow_by_name async","text":"

Reads a flow by name.

Parameters:

- `session` (`sa.orm.Session`): A database session (required)
- `name` (`str`): a flow name (required)

Returns:

- `db.Flow`: the flow

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flows.py
@inject_db\nasync def read_flow_by_name(session: sa.orm.Session, name: str, db: PrefectDBInterface):\n\"\"\"\n    Reads a flow by name.\n\n    Args:\n        session: A database session\n        name: a flow name\n\n    Returns:\n        db.Flow: the flow\n    \"\"\"\n\n    result = await session.execute(select(db.Flow).filter_by(name=name))\n    return result.scalar()\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.read_flows","title":"read_flows async","text":"

Read multiple flows.

Parameters:

- `session` (`sa.orm.Session`): A database session (required)
- `flow_filter` (`schemas.filters.FlowFilter`): only select flows that match these filters (default `None`)
- `flow_run_filter` (`schemas.filters.FlowRunFilter`): only select flows whose flow runs match these filters (default `None`)
- `task_run_filter` (`schemas.filters.TaskRunFilter`): only select flows whose task runs match these filters (default `None`)
- `deployment_filter` (`schemas.filters.DeploymentFilter`): only select flows whose deployments match these filters (default `None`)
- `work_pool_filter` (`schemas.filters.WorkPoolFilter`): only select flows whose work pools match these filters (default `None`)
- `offset` (`int`): Query offset (default `None`)
- `limit` (`int`): Query limit (default `None`)

Returns:

- `List[db.Flow]`: flows

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flows.py
@inject_db\nasync def read_flows(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n    sort: schemas.sorting.FlowSort = schemas.sorting.FlowSort.NAME_ASC,\n    offset: int = None,\n    limit: int = None,\n):\n\"\"\"\n    Read multiple flows.\n\n    Args:\n        session: A database session\n        flow_filter: only select flows that match these filters\n        flow_run_filter: only select flows whose flow runs match these filters\n        task_run_filter: only select flows whose task runs match these filters\n        deployment_filter: only select flows whose deployments match these filters\n        work_pool_filter: only select flows whose work pools match these filters\n        offset: Query offset\n        limit: Query limit\n\n    Returns:\n        List[db.Flow]: flows\n    \"\"\"\n\n    query = select(db.Flow).order_by(sort.as_sql_sort(db=db))\n\n    query = await _apply_flow_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        db=db,\n    )\n\n    if offset is not None:\n        query = query.offset(offset)\n\n    if limit is not None:\n        query = query.limit(limit)\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
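The sketch below selects flows by name. It assumes `session` is provided and that `FlowFilterName(any_=...)` mirrors the `any_`-style filters used elsewhere in these models.

```python
from prefect.server import models, schemas

async def find_flows_by_name(session, names):
    flow_filter = schemas.filters.FlowFilter(
        name=schemas.filters.FlowFilterName(any_=list(names))
    )
    return await models.flows.read_flows(
        session=session, flow_filter=flow_filter, limit=50
    )
```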
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.update_flow","title":"update_flow async","text":"

Updates a flow.

Parameters:

- `session` (`sa.orm.Session`): a database session (required)
- `flow_id` (`UUID`): the flow id to update (required)
- `flow` (`schemas.actions.FlowUpdate`): a flow update model (required)

Returns:

- `bool`: whether or not matching rows were found to update

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flows.py
@inject_db\nasync def update_flow(\n    session: sa.orm.Session,\n    flow_id: UUID,\n    flow: schemas.actions.FlowUpdate,\n    db: PrefectDBInterface,\n):\n\"\"\"\n    Updates a flow.\n\n    Args:\n        session: a database session\n        flow_id: the flow id to update\n        flow: a flow update model\n\n    Returns:\n        bool: whether or not matching rows were found to update\n    \"\"\"\n    update_stmt = (\n        sa.update(db.Flow).where(db.Flow.id == flow_id)\n        # exclude_unset=True allows us to only update values provided by\n        # the user, ignoring any defaults on the model\n        .values(**flow.dict(shallow=True, exclude_unset=True))\n    )\n    result = await session.execute(update_stmt)\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/saved_searches/","title":"server.models.saved_searches","text":""},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches","title":"prefect.server.models.saved_searches","text":"

Functions for interacting with saved search ORM objects. Intended for internal use by the Prefect REST API.

"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.create_saved_search","title":"create_saved_search async","text":"

Upserts a SavedSearch.

If a SavedSearch with the same name exists, all properties will be updated.

Parameters:

- `session` (`sa.orm.Session`): a database session (required)
- `saved_search` (`schemas.core.SavedSearch`): a SavedSearch model (required)

Returns:

- `db.SavedSearch`: the newly-created or updated SavedSearch

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/saved_searches.py
@inject_db\nasync def create_saved_search(\n    session: sa.orm.Session,\n    saved_search: schemas.core.SavedSearch,\n    db: PrefectDBInterface,\n):\n\"\"\"\n    Upserts a SavedSearch.\n\n    If a SavedSearch with the same name exists, all properties will be updated.\n\n    Args:\n        session (sa.orm.Session): a database session\n        saved_search (schemas.core.SavedSearch): a SavedSearch model\n\n    Returns:\n        db.SavedSearch: the newly-created or updated SavedSearch\n\n    \"\"\"\n\n    insert_stmt = (\n        (await db.insert(db.SavedSearch))\n        .values(**saved_search.dict(shallow=True, exclude_unset=True))\n        .on_conflict_do_update(\n            index_elements=db.saved_search_unique_upsert_columns,\n            set_=saved_search.dict(shallow=True, include={\"filters\"}),\n        )\n    )\n\n    await session.execute(insert_stmt)\n\n    query = (\n        sa.select(db.SavedSearch)\n        .where(\n            db.SavedSearch.name == saved_search.name,\n        )\n        .execution_options(populate_existing=True)\n    )\n    result = await session.execute(query)\n    model = result.scalar()\n\n    return model\n
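A minimal sketch of the upsert behavior, assuming `session` is a server database session; the empty `filters` list and the search name are placeholders.

```python
from prefect.server import models, schemas

async def upsert_search(session):
    # Keyed on name: re-running this updates the stored filters
    # instead of creating a duplicate row.
    search = schemas.core.SavedSearch(name="my-failed-runs", filters=[])
    return await models.saved_searches.create_saved_search(
        session=session, saved_search=search
    )
```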
"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.delete_saved_search","title":"delete_saved_search async","text":"

Delete a SavedSearch by id.

Parameters:

- `session` (`sa.orm.Session`): A database session (required)
- `saved_search_id` (`str`): a SavedSearch id (required)

Returns:

- `bool`: whether or not the SavedSearch was deleted

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/saved_searches.py
@inject_db\nasync def delete_saved_search(\n    session: sa.orm.Session, saved_search_id: UUID, db: PrefectDBInterface\n) -> bool:\n\"\"\"\n    Delete a SavedSearch by id.\n\n    Args:\n        session (sa.orm.Session): A database session\n        saved_search_id (str): a SavedSearch id\n\n    Returns:\n        bool: whether or not the SavedSearch was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.SavedSearch).where(db.SavedSearch.id == saved_search_id)\n    )\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.read_saved_search","title":"read_saved_search async","text":"

Reads a SavedSearch by id.

Parameters:

- `session` (`sa.orm.Session`): A database session (required)
- `saved_search_id` (`str`): a SavedSearch id (required)

Returns:

- `db.SavedSearch`: the SavedSearch

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/saved_searches.py
@inject_db\nasync def read_saved_search(\n    session: sa.orm.Session, saved_search_id: UUID, db: PrefectDBInterface\n):\n\"\"\"\n    Reads a SavedSearch by id.\n\n    Args:\n        session (sa.orm.Session): A database session\n        saved_search_id (str): a SavedSearch id\n\n    Returns:\n        db.SavedSearch: the SavedSearch\n    \"\"\"\n\n    return await session.get(db.SavedSearch, saved_search_id)\n
"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.read_saved_search_by_name","title":"read_saved_search_by_name async","text":"

Reads a SavedSearch by name.

Parameters:

- `session` (`sa.orm.Session`): A database session (required)
- `name` (`str`): a SavedSearch name (required)

Returns:

- `db.SavedSearch`: the SavedSearch

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/saved_searches.py
@inject_db\nasync def read_saved_search_by_name(\n    session: sa.orm.Session, name: str, db: PrefectDBInterface\n):\n\"\"\"\n    Reads a SavedSearch by name.\n\n    Args:\n        session (sa.orm.Session): A database session\n        name (str): a SavedSearch name\n\n    Returns:\n        db.SavedSearch: the SavedSearch\n    \"\"\"\n    result = await session.execute(\n        select(db.SavedSearch).where(db.SavedSearch.name == name).limit(1)\n    )\n    return result.scalar()\n
"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.read_saved_searches","title":"read_saved_searches async","text":"

Read SavedSearches.

Parameters:

- `session` (`sa.orm.Session`): A database session (required)
- `offset` (`int`): Query offset (default `None`)
- `limit` (`int`): Query limit (default `None`)

Returns:

- `List[db.SavedSearch]`: SavedSearches

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/saved_searches.py
@inject_db\nasync def read_saved_searches(\n    db: PrefectDBInterface,\n    session: sa.orm.Session,\n    offset: int = None,\n    limit: int = None,\n):\n\"\"\"\n    Read SavedSearches.\n\n    Args:\n        session (sa.orm.Session): A database session\n        offset (int): Query offset\n        limit(int): Query limit\n\n    Returns:\n        List[db.SavedSearch]: SavedSearches\n    \"\"\"\n\n    query = select(db.SavedSearch).order_by(db.SavedSearch.name)\n\n    if offset is not None:\n        query = query.offset(offset)\n    if limit is not None:\n        query = query.limit(limit)\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/task_run_states/","title":"server.models.task_run_states","text":""},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states","title":"prefect.server.models.task_run_states","text":"

Functions for interacting with task run state ORM objects. Intended for internal use by the Prefect REST API.

"},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states.delete_task_run_state","title":"delete_task_run_state async","text":"

Delete a task run state by id.

Parameters:

- `session` (`sa.orm.Session`): A database session (required)
- `task_run_state_id` (`UUID`): a task run state id (required)

Returns:

- `bool`: whether or not the task run state was deleted

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_run_states.py
@inject_db\nasync def delete_task_run_state(\n    session: sa.orm.Session, task_run_state_id: UUID, db: PrefectDBInterface\n) -> bool:\n\"\"\"\n    Delete a task run state by id.\n\n    Args:\n        session: A database session\n        task_run_state_id: a task run state id\n\n    Returns:\n        bool: whether or not the task run state was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.TaskRunState).where(db.TaskRunState.id == task_run_state_id)\n    )\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states.read_task_run_state","title":"read_task_run_state async","text":"

Reads a task run state by id.

Parameters:

- `session` (`sa.orm.Session`): A database session (required)
- `task_run_state_id` (`UUID`): a task run state id (required)

Returns:

- `db.TaskRunState`: the task state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_run_states.py
@inject_db\nasync def read_task_run_state(\n    session: sa.orm.Session, task_run_state_id: UUID, db: PrefectDBInterface\n):\n\"\"\"\n    Reads a task run state by id.\n\n    Args:\n        session: A database session\n        task_run_state_id: a task run state id\n\n    Returns:\n        db.TaskRunState: the task state\n    \"\"\"\n\n    return await session.get(db.TaskRunState, task_run_state_id)\n
"},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states.read_task_run_states","title":"read_task_run_states async","text":"

Reads task runs states for a task run.

Parameters:

- `session` (`sa.orm.Session`): A database session (required)
- `task_run_id` (`UUID`): the task run id (required)

Returns:

- `List[db.TaskRunState]`: the task run states

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_run_states.py
@inject_db\nasync def read_task_run_states(\n    session: sa.orm.Session, task_run_id: UUID, db: PrefectDBInterface\n):\n\"\"\"\n    Reads task runs states for a task run.\n\n    Args:\n        session: A database session\n        task_run_id: the task run id\n\n    Returns:\n        List[db.TaskRunState]: the task run states\n    \"\"\"\n\n    query = (\n        select(db.TaskRunState)\n        .filter_by(task_run_id=task_run_id)\n        .order_by(db.TaskRunState.timestamp)\n    )\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/task_runs/","title":"server.models.task_runs","text":""},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs","title":"prefect.server.models.task_runs","text":"

Functions for interacting with task run ORM objects. Intended for internal use by the Prefect REST API.

"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.count_task_runs","title":"count_task_runs async","text":"

Count task runs.

Parameters:

- `session` (`sa.orm.Session`): a database session (required)
- `flow_filter` (`schemas.filters.FlowFilter`): only count task runs whose flows match these filters (default `None`)
- `flow_run_filter` (`schemas.filters.FlowRunFilter`): only count task runs whose flow runs match these filters (default `None`)
- `task_run_filter` (`schemas.filters.TaskRunFilter`): only count task runs that match these filters (default `None`)
- `deployment_filter` (`schemas.filters.DeploymentFilter`): only count task runs whose deployments match these filters (default `None`)

Returns:

- `int`: count of task runs

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_runs.py
@inject_db\nasync def count_task_runs(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n) -> int:\n\"\"\"\n    Count task runs.\n\n    Args:\n        session: a database session\n        flow_filter: only count task runs whose flows match these filters\n        flow_run_filter: only count task runs whose flow runs match these filters\n        task_run_filter: only count task runs that match these filters\n        deployment_filter: only count task runs whose deployments match these filters\n    Returns:\n        int: count of task runs\n    \"\"\"\n\n    query = select(sa.func.count(sa.text(\"*\"))).select_from(db.TaskRun)\n\n    query = await _apply_task_run_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        db=db,\n    )\n\n    result = await session.execute(query)\n    return result.scalar()\n
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.create_task_run","title":"create_task_run async","text":"

Creates a new task run.

If a task run with the same flow_run_id, task_key, and dynamic_key already exists, the existing task run will be returned. If the provided task run has a state attached, it will also be created.

Parameters:

- `session` (`sa.orm.Session`): a database session (required)
- `task_run` (`schemas.core.TaskRun`): a task run model (required)

Returns:

- `db.TaskRun`: the newly-created or existing task run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_runs.py
@inject_db\nasync def create_task_run(\n    session: sa.orm.Session,\n    task_run: schemas.core.TaskRun,\n    db: PrefectDBInterface,\n    orchestration_parameters: dict = None,\n):\n\"\"\"\n    Creates a new task run.\n\n    If a task run with the same flow_run_id, task_key, and dynamic_key already exists,\n    the existing task run will be returned. If the provided task run has a state\n    attached, it will also be created.\n\n    Args:\n        session: a database session\n        task_run: a task run model\n\n    Returns:\n        db.TaskRun: the newly-created or existing task run\n    \"\"\"\n\n    now = pendulum.now(\"UTC\")\n\n    # if a dynamic key exists, we need to guard against conflicts\n    insert_stmt = (\n        (await db.insert(db.TaskRun))\n        .values(\n            created=now,\n            **task_run.dict(\n                shallow=True, exclude={\"state\", \"created\"}, exclude_unset=True\n            ),\n        )\n        .on_conflict_do_nothing(\n            index_elements=db.task_run_unique_upsert_columns,\n        )\n    )\n    await session.execute(insert_stmt)\n\n    query = (\n        sa.select(db.TaskRun)\n        .where(\n            sa.and_(\n                db.TaskRun.flow_run_id == task_run.flow_run_id,\n                db.TaskRun.task_key == task_run.task_key,\n                db.TaskRun.dynamic_key == task_run.dynamic_key,\n            )\n        )\n        .limit(1)\n        .execution_options(populate_existing=True)\n    )\n    result = await session.execute(query)\n    model = result.scalar()\n\n    if model.created == now and task_run.state:\n        await models.task_runs.set_task_run_state(\n            session=session,\n            task_run_id=model.id,\n            state=task_run.state,\n            force=True,\n            orchestration_parameters=orchestration_parameters,\n        )\n    return model\n
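A sketch of recording a task run against an existing flow run, assuming `session` and `flow_run_id` are supplied; the `task_key` and `dynamic_key` values are placeholders.

```python
from prefect.server import models, schemas

async def record_task_run(session, flow_run_id):
    task_run = schemas.core.TaskRun(
        flow_run_id=flow_run_id,
        task_key="my_task",  # identifies the task within the flow
        dynamic_key="0",     # distinguishes repeated calls of the same task
    )
    # If a run with this (flow_run_id, task_key, dynamic_key) already exists,
    # the existing row is returned instead of a new one.
    return await models.task_runs.create_task_run(
        session=session, task_run=task_run
    )
```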
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.delete_task_run","title":"delete_task_run async","text":"

Delete a task run by id.

Parameters:

- `session` (`sa.orm.Session`): a database session (required)
- `task_run_id` (`UUID`): the task run id to delete (required)

Returns:

- `bool`: whether or not the task run was deleted

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_runs.py
@inject_db\nasync def delete_task_run(\n    session: sa.orm.Session, task_run_id: UUID, db: PrefectDBInterface\n) -> bool:\n\"\"\"\n    Delete a task run by id.\n\n    Args:\n        session: a database session\n        task_run_id: the task run id to delete\n\n    Returns:\n        bool: whether or not the task run was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.TaskRun).where(db.TaskRun.id == task_run_id)\n    )\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.read_task_run","title":"read_task_run async","text":"

Read a task run by id.

Parameters:

- `session` (`sa.orm.Session`): a database session (required)
- `task_run_id` (`UUID`): the task run id (required)

Returns:

- `db.TaskRun`: the task run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_runs.py
@inject_db\nasync def read_task_run(\n    session: sa.orm.Session, task_run_id: UUID, db: PrefectDBInterface\n):\n\"\"\"\n    Read a task run by id.\n\n    Args:\n        session: a database session\n        task_run_id: the task run id\n\n    Returns:\n        db.TaskRun: the task run\n    \"\"\"\n\n    model = await session.get(db.TaskRun, task_run_id)\n    return model\n
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.read_task_runs","title":"read_task_runs async","text":"

Read task runs.

Parameters:

- `session` (`sa.orm.Session`): a database session (required)
- `flow_filter` (`schemas.filters.FlowFilter`): only select task runs whose flows match these filters (default `None`)
- `flow_run_filter` (`schemas.filters.FlowRunFilter`): only select task runs whose flow runs match these filters (default `None`)
- `task_run_filter` (`schemas.filters.TaskRunFilter`): only select task runs that match these filters (default `None`)
- `deployment_filter` (`schemas.filters.DeploymentFilter`): only select task runs whose deployments match these filters (default `None`)
- `offset` (`int`): Query offset (default `None`)
- `limit` (`int`): Query limit (default `None`)
- `sort` (`schemas.sorting.TaskRunSort`): Query sort (default `schemas.sorting.TaskRunSort.ID_DESC`)

Returns:

- `List[db.TaskRun]`: the task runs

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_runs.py
@inject_db\nasync def read_task_runs(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    offset: int = None,\n    limit: int = None,\n    sort: schemas.sorting.TaskRunSort = schemas.sorting.TaskRunSort.ID_DESC,\n):\n\"\"\"\n    Read task runs.\n\n    Args:\n        session: a database session\n        flow_filter: only select task runs whose flows match these filters\n        flow_run_filter: only select task runs whose flow runs match these filters\n        task_run_filter: only select task runs that match these filters\n        deployment_filter: only select task runs whose deployments match these filters\n        offset: Query offset\n        limit: Query limit\n        sort: Query sort\n\n    Returns:\n        List[db.TaskRun]: the task runs\n    \"\"\"\n\n    query = select(db.TaskRun).order_by(sort.as_sql_sort(db))\n\n    query = await _apply_task_run_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        db=db,\n    )\n\n    if offset is not None:\n        query = query.offset(offset)\n\n    if limit is not None:\n        query = query.limit(limit)\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.set_task_run_state","title":"set_task_run_state async","text":"

Creates a new orchestrated task run state.

Setting a new state on a run is one of the principal actions governed by Prefect's orchestration logic. Setting a new run state does not guarantee that state will be created; instead, it triggers orchestration rules that govern the proposed state input. If the state is considered valid, it will be written to the database. Otherwise, a different state, or no state, may be created. A force flag is supplied to bypass a subset of orchestration logic.

Parameters:

- `session` (`sa.orm.Session`): a database session (required)
- `task_run_id` (`UUID`): the task run id (required)
- `state` (`schemas.states.State`): a task run state model (required)
- `force` (`bool`): if False, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied. (default `False`)

Returns:

- `OrchestrationResult`: OrchestrationResult object

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_runs.py
async def set_task_run_state(\n    session: sa.orm.Session,\n    task_run_id: UUID,\n    state: schemas.states.State,\n    force: bool = False,\n    task_policy: BaseOrchestrationPolicy = None,\n    orchestration_parameters: dict = None,\n) -> OrchestrationResult:\n\"\"\"\n    Creates a new orchestrated task run state.\n\n    Setting a new state on a run is the one of the principal actions that is governed by\n    Prefect's orchestration logic. Setting a new run state will not guarantee creation,\n    but instead trigger orchestration rules to govern the proposed `state` input. If\n    the state is considered valid, it will be written to the database. Otherwise, a\n    it's possible a different state, or no state, will be created. A `force` flag is\n    supplied to bypass a subset of orchestration logic.\n\n    Args:\n        session: a database session\n        task_run_id: the task run id\n        state: a task run state model\n        force: if False, orchestration rules will be applied that may alter or prevent\n            the state transition. If True, orchestration rules are not applied.\n\n    Returns:\n        OrchestrationResult object\n    \"\"\"\n\n    # load the task run\n    run = await models.task_runs.read_task_run(session=session, task_run_id=task_run_id)\n\n    if not run:\n        raise ObjectNotFoundError(f\"Task run with id {task_run_id} not found\")\n\n    initial_state = run.state.as_state() if run.state else None\n    initial_state_type = initial_state.type if initial_state else None\n    proposed_state_type = state.type if state else None\n    intended_transition = (initial_state_type, proposed_state_type)\n\n    if force or task_policy is None:\n        task_policy = MinimalTaskPolicy\n\n    orchestration_rules = task_policy.compile_transition_rules(*intended_transition)\n    global_rules = GlobalTaskPolicy.compile_transition_rules(*intended_transition)\n\n    context = TaskOrchestrationContext(\n        session=session,\n        run=run,\n        initial_state=initial_state,\n        proposed_state=state,\n    )\n\n    if orchestration_parameters is not None:\n        context.parameters = orchestration_parameters\n\n    # apply orchestration rules and create the new task run state\n    async with contextlib.AsyncExitStack() as stack:\n        for rule in orchestration_rules:\n            context = await stack.enter_async_context(\n                rule(context, *intended_transition)\n            )\n\n        for rule in global_rules:\n            context = await stack.enter_async_context(\n                rule(context, *intended_transition)\n            )\n\n        await context.validate_proposed_state()\n\n    if context.orchestration_error is not None:\n        raise context.orchestration_error\n\n    result = OrchestrationResult(\n        state=context.validated_state,\n        status=context.response_status,\n        details=context.response_details,\n    )\n\n    return result\n
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.update_task_run","title":"update_task_run async","text":"

Updates a task run.

Parameters:

- `session` (`AsyncSession`): a database session (required)
- `task_run_id` (`UUID`): the task run id to update (required)
- `task_run` (`schemas.actions.TaskRunUpdate`): a task run model (required)

Returns:

- `bool`: whether or not matching rows were found to update

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_runs.py
@inject_db\nasync def update_task_run(\n    session: AsyncSession,\n    task_run_id: UUID,\n    task_run: schemas.actions.TaskRunUpdate,\n    db: PrefectDBInterface,\n) -> bool:\n\"\"\"\n    Updates a task run.\n\n    Args:\n        session: a database session\n        task_run_id: the task run id to update\n        task_run: a task run model\n\n    Returns:\n        bool: whether or not matching rows were found to update\n    \"\"\"\n    update_stmt = (\n        sa.update(db.TaskRun).where(db.TaskRun.id == task_run_id)\n        # exclude_unset=True allows us to only update values provided by\n        # the user, ignoring any defaults on the model\n        .values(**task_run.dict(shallow=True, exclude_unset=True))\n    )\n    result = await session.execute(update_stmt)\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/orchestration/core_policy/","title":"server.orchestration.core_policy","text":""},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy","title":"prefect.server.orchestration.core_policy","text":"

Orchestration logic that fires on state transitions.

CoreFlowPolicy and CoreTaskPolicy contain all default orchestration rules that Prefect enforces on a state transition.

"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CoreFlowPolicy","title":"CoreFlowPolicy","text":"

Bases: BaseOrchestrationPolicy

Orchestration rules that run against flow-run-state transitions in priority order.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class CoreFlowPolicy(BaseOrchestrationPolicy):\n\"\"\"\n    Orchestration rules that run against flow-run-state transitions in priority order.\n    \"\"\"\n\n    def priority():\n        return [\n            HandleFlowTerminalStateTransitions,\n            EnforceCancellingToCancelledTransition,\n            BypassCancellingScheduledFlowRuns,\n            PreventPendingTransitions,\n            HandlePausingFlows,\n            HandleResumingPausedFlows,\n            CopyScheduledTime,\n            WaitForScheduledTime,\n            RetryFailedFlows,\n        ]\n
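A policy is just an ordered list of rule classes. As a loose sketch (the class name and rule selection are hypothetical, and the import paths are assumed), a narrower policy could be defined by subclassing BaseOrchestrationPolicy and returning a different priority() list:

from prefect.server.orchestration.policies import BaseOrchestrationPolicy
from prefect.server.orchestration.core_policy import (
    HandleFlowTerminalStateTransitions,
    WaitForScheduledTime,
)

class LenientFlowPolicy(BaseOrchestrationPolicy):
    """Hypothetical policy that only enforces terminal-state and scheduling rules."""

    def priority():
        return [
            HandleFlowTerminalStateTransitions,
            WaitForScheduledTime,
        ]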
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CoreTaskPolicy","title":"CoreTaskPolicy","text":"

Bases: BaseOrchestrationPolicy

Orchestration rules that run against task-run-state transitions in priority order.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class CoreTaskPolicy(BaseOrchestrationPolicy):\n\"\"\"\n    Orchestration rules that run against task-run-state transitions in priority order.\n    \"\"\"\n\n    def priority():\n        return [\n            CacheRetrieval,\n            HandleTaskTerminalStateTransitions,\n            PreventRunningTasksFromStoppedFlows,\n            SecureTaskConcurrencySlots,  # retrieve cached states even if slots are full\n            CopyScheduledTime,\n            WaitForScheduledTime,\n            RetryFailedTasks,\n            RenameReruns,\n            UpdateFlowRunTrackerOnTasks,\n            CacheInsertion,\n            ReleaseTaskConcurrencySlots,\n        ]\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.SecureTaskConcurrencySlots","title":"SecureTaskConcurrencySlots","text":"

Bases: BaseOrchestrationRule

Checks relevant concurrency slots are available before entering a Running state.

This rule checks if concurrency limits have been set on the tags associated with a TaskRun. If so, a concurrency slot will be secured against each concurrency limit before being allowed to transition into a running state. If a concurrency limit has been reached, the client will be instructed to delay the transition for 30 seconds before trying again. If the concurrency limit set on a tag is 0, the transition will be aborted to prevent deadlocks.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class SecureTaskConcurrencySlots(BaseOrchestrationRule):\n\"\"\"\n    Checks relevant concurrency slots are available before entering a Running state.\n\n    This rule checks if concurrency limits have been set on the tags associated with a\n    TaskRun. If so, a concurrency slot will be secured against each concurrency limit\n    before being allowed to transition into a running state. If a concurrency limit has\n    been reached, the client will be instructed to delay the transition for 30 seconds\n    before trying again. If the concurrency limit set on a tag is 0, the transition will\n    be aborted to prevent deadlocks.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        self._applied_limits = []\n        filtered_limits = (\n            await concurrency_limits.filter_concurrency_limits_for_orchestration(\n                context.session, tags=context.run.tags\n            )\n        )\n        run_limits = {limit.tag: limit for limit in filtered_limits}\n        for tag, cl in run_limits.items():\n            limit = cl.concurrency_limit\n            if limit == 0:\n                # limits of 0 will deadlock, and the transition needs to abort\n                for stale_tag in self._applied_limits:\n                    stale_limit = run_limits.get(stale_tag, None)\n                    active_slots = set(stale_limit.active_slots)\n                    active_slots.discard(str(context.run.id))\n                    stale_limit.active_slots = list(active_slots)\n\n                await self.abort_transition(\n                    reason=(\n                        f'The concurrency limit on tag \"{tag}\" is 0 and will deadlock'\n                        \" if the task tries to run again.\"\n                    ),\n                )\n            elif len(cl.active_slots) >= limit:\n                # if the limit has already been reached, delay the transition\n                for stale_tag in self._applied_limits:\n                    stale_limit = run_limits.get(stale_tag, None)\n                    active_slots = set(stale_limit.active_slots)\n                    active_slots.discard(str(context.run.id))\n                    stale_limit.active_slots = list(active_slots)\n\n                await self.delay_transition(\n                    30,\n                    f\"Concurrency limit for the {tag} tag has been reached\",\n                )\n            else:\n                # log the TaskRun ID to active_slots\n                self._applied_limits.append(tag)\n                active_slots = set(cl.active_slots)\n                active_slots.add(str(context.run.id))\n                cl.active_slots = list(active_slots)\n\n    async def cleanup(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        for tag in self._applied_limits:\n            cl = await concurrency_limits.read_concurrency_limit_by_tag(\n                context.session, tag\n            )\n            active_slots = set(cl.active_slots)\n            active_slots.discard(str(context.run.id))\n            cl.active_slots = list(active_slots)\n
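This rule only has an effect when a concurrency limit exists for one of the task run's tags; limits are typically created ahead of time (for example with the prefect concurrency-limit create CLI command), and tasks opt in simply by carrying the tag. A minimal sketch:

from prefect import flow, task

# Tasks tagged "database" will compete for slots if a concurrency
# limit has been created for that tag on the server.
@task(tags=["database"])
def query_warehouse():
    ...

@flow
def nightly_sync():
    for _ in range(10):
        query_warehouse.submit()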
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.ReleaseTaskConcurrencySlots","title":"ReleaseTaskConcurrencySlots","text":"

Bases: BaseUniversalTransform

Releases any concurrency slots held by a run upon exiting a Running or Cancelling state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class ReleaseTaskConcurrencySlots(BaseUniversalTransform):\n\"\"\"\n    Releases any concurrency slots held by a run upon exiting a Running or\n    Cancelling state.\n    \"\"\"\n\n    async def after_transition(\n        self,\n        context: OrchestrationContext,\n    ):\n        if self.nullified_transition():\n            return\n\n        if context.validated_state and context.validated_state.type not in [\n            states.StateType.RUNNING,\n            states.StateType.CANCELLING,\n        ]:\n            filtered_limits = (\n                await concurrency_limits.filter_concurrency_limits_for_orchestration(\n                    context.session, tags=context.run.tags\n                )\n            )\n            run_limits = {limit.tag: limit for limit in filtered_limits}\n            for tag, cl in run_limits.items():\n                active_slots = set(cl.active_slots)\n                active_slots.discard(str(context.run.id))\n                cl.active_slots = list(active_slots)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CacheInsertion","title":"CacheInsertion","text":"

Bases: BaseOrchestrationRule

Caches completed states with cache keys after they are validated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class CacheInsertion(BaseOrchestrationRule):\n\"\"\"\n    Caches completed states with cache keys after they are validated.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.COMPLETED]\n\n    @inject_db\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n        db: PrefectDBInterface,\n    ) -> None:\n        if not validated_state or not context.session:\n            return\n\n        cache_key = validated_state.state_details.cache_key\n        if cache_key:\n            new_cache_item = db.TaskRunStateCache(\n                cache_key=cache_key,\n                cache_expiration=validated_state.state_details.cache_expiration,\n                task_run_state_id=validated_state.id,\n            )\n            context.session.add(new_cache_item)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CacheRetrieval","title":"CacheRetrieval","text":"

Bases: BaseOrchestrationRule

Rejects running states if a completed state has been cached.

This rule rejects transitions into a running state with a cache key if the key has already been associated with a completed state in the cache table. The client will be instructed to transition into the cached completed state instead.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class CacheRetrieval(BaseOrchestrationRule):\n\"\"\"\n    Rejects running states if a completed state has been cached.\n\n    This rule rejects transitions into a running state with a cache key if the key\n    has already been associated with a completed state in the cache table. The client\n    will be instructed to transition into the cached completed state instead.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    @inject_db\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n        db: PrefectDBInterface,\n    ) -> None:\n        cache_key = proposed_state.state_details.cache_key\n        if cache_key and not proposed_state.state_details.refresh_cache:\n            # Check for cached states matching the cache key\n            cached_state_id = (\n                select(db.TaskRunStateCache.task_run_state_id)\n                .where(\n                    sa.and_(\n                        db.TaskRunStateCache.cache_key == cache_key,\n                        sa.or_(\n                            db.TaskRunStateCache.cache_expiration.is_(None),\n                            db.TaskRunStateCache.cache_expiration > pendulum.now(\"utc\"),\n                        ),\n                    ),\n                )\n                .order_by(db.TaskRunStateCache.created.desc())\n                .limit(1)\n            ).scalar_subquery()\n            query = select(db.TaskRunState).where(db.TaskRunState.id == cached_state_id)\n            cached_state = (await context.session.execute(query)).scalar()\n            if cached_state:\n                new_state = cached_state.as_state().copy(reset_fields=True)\n                new_state.name = \"Cached\"\n                await self.reject_transition(\n                    state=new_state, reason=\"Retrieved state from cache\"\n                )\n
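Together, CacheInsertion and CacheRetrieval back Prefect's task caching: a task supplies a cache_key_fn (and optionally a cache_expiration), the completed state is stored under that key, and later runs with the same key are redirected to the cached result. A brief sketch:

from datetime import timedelta

from prefect import flow, task
from prefect.tasks import task_input_hash

@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))
def expensive_lookup(x: int) -> int:
    return x * 2

@flow
def pipeline():
    expensive_lookup(2)  # computed
    expensive_lookup(2)  # served from the cache as a "Cached" completed state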
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.RetryFailedFlows","title":"RetryFailedFlows","text":"

Bases: BaseOrchestrationRule

Rejects failed states and schedules a retry if the retry limit has not been reached.

This rule rejects transitions into a failed state if retries has been set and the run count has not reached the specified limit. The client will be instructed to transition into a scheduled state to retry flow execution.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class RetryFailedFlows(BaseOrchestrationRule):\n\"\"\"\n    Rejects failed states and schedules a retry if the retry limit has not been reached.\n\n    This rule rejects transitions into a failed state if `retries` has been\n    set and the run count has not reached the specified limit. The client will be\n    instructed to transition into a scheduled state to retry flow execution.\n    \"\"\"\n\n    FROM_STATES = [StateType.RUNNING]\n    TO_STATES = [StateType.FAILED]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: FlowOrchestrationContext,\n    ) -> None:\n        run_settings = context.run_settings\n        run_count = context.run.run_count\n\n        if run_settings.retries is None or run_count > run_settings.retries:\n            return  # Retry count exceeded, allow transition to failed\n\n        scheduled_start_time = pendulum.now(\"UTC\").add(\n            seconds=run_settings.retry_delay or 0\n        )\n\n        # support old-style flow run retries for older clients\n        # older flow retries require us to loop over failed tasks to update their state\n        # this is not required after API version 0.8.3\n        api_version = context.parameters.get(\"api-version\", None)\n        if api_version and api_version < Version(\"0.8.3\"):\n            failed_task_runs = await models.task_runs.read_task_runs(\n                context.session,\n                flow_run_filter=filters.FlowRunFilter(id={\"any_\": [context.run.id]}),\n                task_run_filter=filters.TaskRunFilter(\n                    state={\"type\": {\"any_\": [\"FAILED\"]}}\n                ),\n            )\n            for run in failed_task_runs:\n                await models.task_runs.set_task_run_state(\n                    context.session,\n                    run.id,\n                    state=states.AwaitingRetry(scheduled_time=scheduled_start_time),\n                    force=True,\n                )\n                # Reset the run count so that the task run retries still work correctly\n                run.run_count = 0\n\n        # Reset pause metadata on retry\n        # Pauses as a concept only exist after API version 0.8.4\n        api_version = context.parameters.get(\"api-version\", None)\n        if api_version is None or api_version >= Version(\"0.8.4\"):\n            updated_policy = context.run.empirical_policy.dict()\n            updated_policy[\"resuming\"] = False\n            updated_policy[\"pause_keys\"] = set()\n            context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n\n        # Generate a new state for the flow\n        retry_state = states.AwaitingRetry(\n            scheduled_time=scheduled_start_time,\n            message=proposed_state.message,\n            data=proposed_state.data,\n        )\n        await self.reject_transition(state=retry_state, reason=\"Retrying\")\n
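Flow run retries are configured on the flow itself; when retries is set and the run count has not been exhausted, this rule converts a would-be Failed state into an AwaitingRetry scheduled state. For example:

from prefect import flow

@flow(retries=2, retry_delay_seconds=30)
def fragile_pipeline():
    ...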
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.RetryFailedTasks","title":"RetryFailedTasks","text":"

Bases: BaseOrchestrationRule

Rejects failed states and schedules a retry if the retry limit has not been reached.

This rule rejects transitions into a failed state if retries has been set and the run count has not reached the specified limit. The client will be instructed to transition into a scheduled state to retry task execution.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class RetryFailedTasks(BaseOrchestrationRule):\n\"\"\"\n    Rejects failed states and schedules a retry if the retry limit has not been reached.\n\n    This rule rejects transitions into a failed state if `retries` has been\n    set and the run count has not reached the specified limit. The client will be\n    instructed to transition into a scheduled state to retry task execution.\n    \"\"\"\n\n    FROM_STATES = [StateType.RUNNING]\n    TO_STATES = [StateType.FAILED]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        run_settings = context.run_settings\n        run_count = context.run.run_count\n        delay = run_settings.retry_delay\n\n        if isinstance(delay, list):\n            base_delay = delay[min(run_count - 1, len(delay) - 1)]\n        else:\n            base_delay = run_settings.retry_delay or 0\n\n        # guard against negative relative jitter inputs\n        if run_settings.retry_jitter_factor:\n            delay = clamped_poisson_interval(\n                base_delay, clamping_factor=run_settings.retry_jitter_factor\n            )\n        else:\n            delay = base_delay\n\n        if run_settings.retries is not None and run_count <= run_settings.retries:\n            retry_state = states.AwaitingRetry(\n                scheduled_time=pendulum.now(\"UTC\").add(seconds=delay),\n                message=proposed_state.message,\n                data=proposed_state.data,\n            )\n            await self.reject_transition(state=retry_state, reason=\"Retrying\")\n
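Task retries work the same way and, in newer Prefect 2 releases, additionally support a list of per-attempt delays and a jitter factor, which this rule consults when computing the scheduled retry time:

from prefect import task

@task(retries=3, retry_delay_seconds=[1, 10, 60], retry_jitter_factor=0.5)
def flaky_api_call():
    ...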
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.RenameReruns","title":"RenameReruns","text":"

Bases: BaseOrchestrationRule

Renames states when a run has executed more than once.

In the special case where the initial state is an \"AwaitingRetry\" scheduled state, the proposed state will be renamed to \"Retrying\" instead.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class RenameReruns(BaseOrchestrationRule):\n\"\"\"\n    Name the states if they have run more than once.\n\n    In the special case where the initial state is an \"AwaitingRetry\" scheduled state,\n    the proposed state will be renamed to \"Retrying\" instead.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        run_count = context.run.run_count\n        if run_count > 0:\n            if initial_state.name == \"AwaitingRetry\":\n                await self.rename_state(\"Retrying\")\n            else:\n                await self.rename_state(\"Rerunning\")\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CopyScheduledTime","title":"CopyScheduledTime","text":"

Bases: BaseOrchestrationRule

Ensures scheduled time is copied from scheduled states to pending states.

If a new scheduled time has been proposed on the pending state, the scheduled time on the scheduled state will be ignored.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class CopyScheduledTime(BaseOrchestrationRule):\n\"\"\"\n    Ensures scheduled time is copied from scheduled states to pending states.\n\n    If a new scheduled time has been proposed on the pending state, the scheduled time\n    on the scheduled state will be ignored.\n    \"\"\"\n\n    FROM_STATES = [StateType.SCHEDULED]\n    TO_STATES = [StateType.PENDING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        if not proposed_state.state_details.scheduled_time:\n            proposed_state.state_details.scheduled_time = (\n                initial_state.state_details.scheduled_time\n            )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.WaitForScheduledTime","title":"WaitForScheduledTime","text":"

Bases: BaseOrchestrationRule

Prevents transitions to running states from happening too early.

This rule enforces that scheduled states only start according to the machine clock used by the Prefect REST API instance. It identifies transitions from scheduled states that arrive too early and nullifies them: no state is written to the database, and the client is instructed to wait for delay_seconds before attempting the transition again.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class WaitForScheduledTime(BaseOrchestrationRule):\n\"\"\"\n    Prevents transitions to running states from happening to early.\n\n    This rule enforces that all scheduled states will only start with the machine clock\n    used by the Prefect REST API instance. This rule will identify transitions from scheduled\n    states that are too early and nullify them. Instead, no state will be written to the\n    database and the client will be sent an instruction to wait for `delay_seconds`\n    before attempting the transition again.\n    \"\"\"\n\n    FROM_STATES = [StateType.SCHEDULED, StateType.PENDING]\n    TO_STATES = [StateType.RUNNING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        scheduled_time = initial_state.state_details.scheduled_time\n        if not scheduled_time:\n            return\n\n        # At this moment, we round delay to the nearest second as the API schema\n        # specifies an integer return value.\n        delay = scheduled_time - pendulum.now(\"UTC\")\n        delay_seconds = delay.in_seconds()\n        delay_seconds += round(delay.microseconds / 1e6)\n        if delay_seconds > 0:\n            await self.delay_transition(\n                delay_seconds, reason=\"Scheduled time is in the future\"\n            )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandlePausingFlows","title":"HandlePausingFlows","text":"

Bases: BaseOrchestrationRule

Governs runs attempting to enter a Paused state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class HandlePausingFlows(BaseOrchestrationRule):\n\"\"\"\n    Governs runs attempting to enter a Paused state\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.PAUSED]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        if initial_state is None:\n            await self.abort_transition(\"Cannot pause flows with no state.\")\n            return\n\n        if not initial_state.is_running():\n            await self.reject_transition(\n                state=None, reason=\"Cannot pause flows that are not currently running.\"\n            )\n            return\n\n        self.key = proposed_state.state_details.pause_key\n        if self.key is None:\n            # if no pause key is provided, default to a UUID\n            self.key = str(uuid4())\n\n        if self.key in context.run.empirical_policy.pause_keys:\n            await self.reject_transition(\n                state=None, reason=\"This pause has already fired.\"\n            )\n            return\n\n        if proposed_state.state_details.pause_reschedule:\n            if context.run.parent_task_run_id:\n                await self.abort_transition(\n                    reason=\"Cannot pause subflows with the reschedule option.\",\n                )\n                return\n\n            if context.run.deployment_id is None:\n                await self.abort_transition(\n                    reason=(\n                        \"Cannot pause flows without a deployment with the reschedule\"\n                        \" option.\"\n                    ),\n                )\n                return\n\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        updated_policy = context.run.empirical_policy.dict()\n        updated_policy[\"pause_keys\"].add(self.key)\n        context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n
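From the client side, this rule is exercised whenever a running flow pauses itself (or is paused through the API); a sketch using pause_flow_run, assuming a Prefect 2 release where blocking pauses are available:

from prefect import flow, pause_flow_run

@flow
def needs_approval():
    # ... do some work, then wait up to 10 minutes for a manual resume
    pause_flow_run(timeout=600)
    # ... continue here after the run is resumed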
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandleResumingPausedFlows","title":"HandleResumingPausedFlows","text":"

Bases: BaseOrchestrationRule

Governs runs attempting to leave a Paused state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class HandleResumingPausedFlows(BaseOrchestrationRule):\n\"\"\"\n    Governs runs attempting to leave a Paused state\n    \"\"\"\n\n    FROM_STATES = [StateType.PAUSED]\n    TO_STATES = ALL_ORCHESTRATION_STATES\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        if not (\n            proposed_state.is_running()\n            or proposed_state.is_scheduled()\n            or proposed_state.is_final()\n        ):\n            await self.reject_transition(\n                state=None,\n                reason=(\n                    f\"This run cannot transition to the {proposed_state.type} state\"\n                    f\" from the {initial_state.type} state.\"\n                ),\n            )\n            return\n\n        if initial_state.state_details.pause_reschedule:\n            if not context.run.deployment_id:\n                await self.reject_transition(\n                    state=None,\n                    reason=\"Cannot reschedule a paused flow run without a deployment.\",\n                )\n                return\n        pause_timeout = initial_state.state_details.pause_timeout\n        if pause_timeout and pause_timeout < pendulum.now(\"UTC\"):\n            pause_timeout_failure = states.Failed(\n                message=\"The flow was paused and never resumed.\",\n            )\n            await self.reject_transition(\n                state=pause_timeout_failure,\n                reason=\"The flow run pause has timed out and can no longer resume.\",\n            )\n            return\n\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        updated_policy = context.run.empirical_policy.dict()\n        updated_policy[\"resuming\"] = True\n        context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.UpdateFlowRunTrackerOnTasks","title":"UpdateFlowRunTrackerOnTasks","text":"

Bases: BaseOrchestrationRule

Tracks the flow run attempt a task run state is associated with.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class UpdateFlowRunTrackerOnTasks(BaseOrchestrationRule):\n\"\"\"\n    Tracks the flow run attempt a task run state is associated with.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        self.flow_run = await context.flow_run()\n        if self.flow_run:\n            context.run.flow_run_run_count = self.flow_run.run_count\n        else:\n            raise ObjectNotFoundError(\n                (\n                    \"Unable to read flow run associated with task run:\"\n                    f\" {context.run.id}, this flow run might have been deleted\"\n                ),\n            )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandleTaskTerminalStateTransitions","title":"HandleTaskTerminalStateTransitions","text":"

Bases: BaseOrchestrationRule

We do not allow tasks to leave terminal states if:
- The task is completed and has a persisted result
- The task is going to CANCELLING / PAUSED / CRASHED

We reset the run count when a task leaves a terminal state for a non-terminal state, which resets task run retries; this is particularly relevant for flow run retries.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class HandleTaskTerminalStateTransitions(BaseOrchestrationRule):\n\"\"\"\n    We do not allow tasks to leave terminal states if:\n    - The task is completed and has a persisted result\n    - The task is going to CANCELLING / PAUSED / CRASHED\n\n    We reset the run count when a task leaves a terminal state for a non-terminal state\n    which resets task run retries; this is particularly relevant for flow run retries.\n    \"\"\"\n\n    FROM_STATES = TERMINAL_STATES\n    TO_STATES = ALL_ORCHESTRATION_STATES\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        self.original_run_count = context.run.run_count\n\n        # Do not allow runs to be marked as crashed, paused, or cancelling if already terminal\n        if proposed_state.type in {\n            StateType.CANCELLING,\n            StateType.PAUSED,\n            StateType.CRASHED,\n        }:\n            await self.abort_transition(f\"Run is already {initial_state.type.value}.\")\n            return\n\n        # Only allow departure from a happily completed state if the result is not persisted\n        if (\n            initial_state.is_completed()\n            and initial_state.data\n            and initial_state.data.get(\"type\") != \"unpersisted\"\n        ):\n            await self.reject_transition(None, \"This run is already completed.\")\n            return\n\n        if not proposed_state.is_final():\n            # Reset run count to reset retries\n            context.run.run_count = 0\n\n        # Change the name of the state to retrying if its a flow run retry\n        if proposed_state.is_running():\n            self.flow_run = await context.flow_run()\n            flow_retrying = context.run.flow_run_run_count < self.flow_run.run_count\n            if flow_retrying:\n                await self.rename_state(\"Retrying\")\n\n    async def cleanup(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ):\n        # reset run count\n        context.run.run_count = self.original_run_count\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandleFlowTerminalStateTransitions","title":"HandleFlowTerminalStateTransitions","text":"

Bases: BaseOrchestrationRule

We do not allow flows to leave terminal states if:
- The flow is completed and has a persisted result
- The flow is going to CANCELLING / PAUSED / CRASHED
- The flow is going to scheduled and has no deployment

We reset the pause metadata when a flow leaves a terminal state for a non-terminal state. This resets pause behavior during manual flow run retries.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class HandleFlowTerminalStateTransitions(BaseOrchestrationRule):\n\"\"\"\n    We do not allow flows to leave terminal states if:\n    - The flow is completed and has a persisted result\n    - The flow is going to CANCELLING / PAUSED / CRASHED\n    - The flow is going to scheduled and has no deployment\n\n    We reset the pause metadata when a flow leaves a terminal state for a non-terminal\n    state. This resets pause behavior during manual flow run retries.\n    \"\"\"\n\n    FROM_STATES = TERMINAL_STATES\n    TO_STATES = ALL_ORCHESTRATION_STATES\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: FlowOrchestrationContext,\n    ) -> None:\n        self.original_flow_policy = context.run.empirical_policy.dict()\n\n        # Do not allow runs to be marked as crashed, paused, or cancelling if already terminal\n        if proposed_state.type in {\n            StateType.CANCELLING,\n            StateType.PAUSED,\n            StateType.CRASHED,\n        }:\n            await self.abort_transition(\n                f\"Run is already in terminal state {initial_state.type.value}.\"\n            )\n            return\n\n        # Only allow departure from a happily completed state if the result is not\n        # persisted and the a rerun is being proposed\n        if (\n            initial_state.is_completed()\n            and not proposed_state.is_final()\n            and initial_state.data\n            and initial_state.data.get(\"type\") != \"unpersisted\"\n        ):\n            await self.reject_transition(None, \"Run is already COMPLETED.\")\n            return\n\n        # Do not allows runs to be rescheduled without a deployment\n        if proposed_state.is_scheduled() and not context.run.deployment_id:\n            await self.abort_transition(\n                \"Cannot reschedule a run without an associated deployment.\"\n            )\n            return\n\n        if not proposed_state.is_final():\n            # Reset pause metadata when leaving a terminal state\n            api_version = context.parameters.get(\"api-version\", None)\n            if api_version is None or api_version >= Version(\"0.8.4\"):\n                updated_policy = context.run.empirical_policy.dict()\n                updated_policy[\"resuming\"] = False\n                updated_policy[\"pause_keys\"] = set()\n                context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n\n    async def cleanup(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ):\n        context.run.empirical_policy = core.FlowRunPolicy(**self.original_flow_policy)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.PreventPendingTransitions","title":"PreventPendingTransitions","text":"

Bases: BaseOrchestrationRule

Prevents transitions to PENDING.

This rule is only used for flow runs.

This is intended to prevent race conditions during duplicate submissions of runs. Before a run is submitted to its execution environment, it should be placed in a PENDING state. If two workers attempt to submit the same run, one of them should encounter a PENDING -> PENDING transition and abort orchestration of the run.

Similarly, if the execution environment starts quickly, the run may be in a RUNNING state when the second worker attempts the PENDING transition. We deny these state changes as well to prevent duplicate submission. If a run has transitioned to a RUNNING state, a worker should not attempt to submit it again unless it has moved into a terminal state.

CANCELLING and CANCELLED runs should not be allowed to transition to PENDING. For re-runs of deployed runs, they should transition to SCHEDULED first. For re-runs of ad-hoc runs, they should transition directly to RUNNING.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class PreventPendingTransitions(BaseOrchestrationRule):\n\"\"\"\n    Prevents transitions to PENDING.\n\n    This rule is only used for flow runs.\n\n    This is intended to prevent race conditions during duplicate submissions of runs.\n    Before a run is submitted to its execution environment, it should be placed in a\n    PENDING state. If two workers attempt to submit the same run, one of them should\n    encounter a PENDING -> PENDING transition and abort orchestration of the run.\n\n    Similarly, if the execution environment starts quickly the run may be in a RUNNING\n    state when the second worker attempts the PENDING transition. We deny these state\n    changes as well to prevent duplicate submission. If a run has transitioned to a\n    RUNNING state a worker should not attempt to submit it again unless it has moved\n    into a terminal state.\n\n    CANCELLING and CANCELLED runs should not be allowed to transition to PENDING.\n    For re-runs of deployed runs, they should transition to SCHEDULED first.\n    For re-runs of ad-hoc runs, they should transition directly to RUNNING.\n    \"\"\"\n\n    FROM_STATES = [\n        StateType.PENDING,\n        StateType.CANCELLING,\n        StateType.RUNNING,\n        StateType.CANCELLED,\n    ]\n    TO_STATES = [StateType.PENDING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        await self.abort_transition(\n            reason=(\n                f\"This run is in a {initial_state.type.name} state and cannot\"\n                \" transition to a PENDING state.\"\n            )\n        )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.PreventRunningTasksFromStoppedFlows","title":"PreventRunningTasksFromStoppedFlows","text":"

Bases: BaseOrchestrationRule

Prevents running tasks from stopped flows.

A running state implies execution, and the converse holds as well: this rule ensures that a flow's tasks cannot run unless the flow itself is running.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class PreventRunningTasksFromStoppedFlows(BaseOrchestrationRule):\n\"\"\"\n    Prevents running tasks from stopped flows.\n\n    A running state implies execution, but also the converse. This rule ensures that a\n    flow's tasks cannot be run unless the flow is also running.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        flow_run = await context.flow_run()\n        if flow_run.state is None:\n            await self.abort_transition(\n                reason=\"The enclosing flow must be running to begin task execution.\"\n            )\n        elif flow_run.state.type == StateType.PAUSED:\n            await self.reject_transition(\n                state=states.Paused(name=\"NotReady\"),\n                reason=(\n                    \"The flow is paused, new tasks can execute after resuming flow\"\n                    f\" run: {flow_run.id}.\"\n                ),\n            )\n        elif not flow_run.state.type == StateType.RUNNING:\n            # task runners should abort task run execution\n            await self.abort_transition(\n                reason=\"The enclosing flow must be running to begin task execution.\",\n            )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.EnforceCancellingToCancelledTransition","title":"EnforceCancellingToCancelledTransition","text":"

Bases: BaseOrchestrationRule

Rejects transitions from Cancelling to any terminal state except for Cancelled.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class EnforceCancellingToCancelledTransition(BaseOrchestrationRule):\n\"\"\"\n    Rejects transitions from Cancelling to any terminal state except for Cancelled.\n    \"\"\"\n\n    FROM_STATES = {StateType.CANCELLED, StateType.CANCELLING}\n    TO_STATES = ALL_ORCHESTRATION_STATES - {StateType.CANCELLED}\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        await self.reject_transition(\n            state=None,\n            reason=(\n                \"Cannot transition flows that are cancelling to a state other \"\n                \"than Cancelled.\"\n            ),\n        )\n        return\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.BypassCancellingScheduledFlowRuns","title":"BypassCancellingScheduledFlowRuns","text":"

Bases: BaseOrchestrationRule

Rejects transitions from Scheduled to Cancelling, and instead sets the state to Cancelled, if the flow run has no associated infrastructure process ID.

The Cancelling state is used to clean up infrastructure. If there is no infrastructure to clean up, we can transition directly to Cancelled. Runs that are AwaitingRetry are in a Scheduled state that may have associated infrastructure.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class BypassCancellingScheduledFlowRuns(BaseOrchestrationRule):\n\"\"\"Rejects transitions from Scheduled to Cancelling, and instead sets the state to Cancelled,\n    if the flow run has no associated infrastructure process ID.\n\n    The `Cancelling` state is used to clean up infrastructure. If there is not infrastructure\n    to clean up, we can transition directly to `Cancelled`. Runs that are `AwaitingRetry` are\n    a `Scheduled` state that may have associated infrastructure.\n    \"\"\"\n\n    FROM_STATES = {StateType.SCHEDULED}\n    TO_STATES = {StateType.CANCELLING}\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: FlowOrchestrationContext,\n    ) -> None:\n        if not context.run.infrastructure_pid:\n            await self.reject_transition(\n                state=states.Cancelled(),\n                reason=\"Scheduled flow run has no infrastructure to terminate.\",\n            )\n
"},{"location":"api-ref/server/orchestration/global_policy/","title":"server.orchestration.global_policy","text":""},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy","title":"prefect.server.orchestration.global_policy","text":"

Bookkeeping logic that fires on every state transition.

For clarity, GlobalFlowPolicy and GlobalTaskPolicy contain all transition logic implemented using BaseUniversalTransform. None of these operations modify state, and regardless of what orchestration the Prefect REST API might enforce on a transition, the global policies contain Prefect's necessary bookkeeping. Because these transforms record information about the validated state committed to the state database, they should be the most deeply nested contexts in the orchestration loop.

"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.GlobalFlowPolicy","title":"GlobalFlowPolicy","text":"

Bases: BaseOrchestrationPolicy

Global transforms that run against flow-run-state transitions in priority order.

These transforms are intended to run immediately before and after a state transition is validated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class GlobalFlowPolicy(BaseOrchestrationPolicy):\n\"\"\"\n    Global transforms that run against flow-run-state transitions in priority order.\n\n    These transforms are intended to run immediately before and after a state transition\n    is validated.\n    \"\"\"\n\n    def priority():\n        return COMMON_GLOBAL_TRANSFORMS() + [\n            UpdateSubflowParentTask,\n            UpdateSubflowStateDetails,\n            IncrementFlowRunCount,\n            RemoveResumingIndicator,\n        ]\n
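Because these are universal transforms, every transition passes through them. A hypothetical snippet (import paths assumed) showing how a policy is compiled for a specific transition, mirroring what the state-setting functions do internally:

from prefect.server.orchestration.global_policy import GlobalFlowPolicy
from prefect.server.schemas.states import StateType

# List the global transforms that would wrap a PENDING -> RUNNING flow run transition
rules = GlobalFlowPolicy.compile_transition_rules(StateType.PENDING, StateType.RUNNING)
print([rule.__name__ for rule in rules])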
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.GlobalTaskPolicy","title":"GlobalTaskPolicy","text":"

Bases: BaseOrchestrationPolicy

Global transforms that run against task-run-state transitions in priority order.

These transforms are intended to run immediately before and after a state transition is validated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class GlobalTaskPolicy(BaseOrchestrationPolicy):\n\"\"\"\n    Global transforms that run against task-run-state transitions in priority order.\n\n    These transforms are intended to run immediately before and after a state transition\n    is validated.\n    \"\"\"\n\n    def priority():\n        return COMMON_GLOBAL_TRANSFORMS() + [\n            IncrementTaskRunCount,\n        ]\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetRunStateType","title":"SetRunStateType","text":"

Bases: BaseUniversalTransform

Updates the state type of a run on a state transition.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class SetRunStateType(BaseUniversalTransform):\n\"\"\"\n    Updates the state type of a run on a state transition.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # record the new state's type\n        context.run.state_type = context.proposed_state.type\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetRunStateName","title":"SetRunStateName","text":"

Bases: BaseUniversalTransform

Updates the state name of a run on a state transition.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class SetRunStateName(BaseUniversalTransform):\n\"\"\"\n    Updates the state name of a run on a state transition.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # record the new state's name\n        context.run.state_name = context.proposed_state.name\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetStartTime","title":"SetStartTime","text":"

Bases: BaseUniversalTransform

Records the time a run enters a running state for the first time.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class SetStartTime(BaseUniversalTransform):\n\"\"\"\n    Records the time a run enters a running state for the first time.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if entering a running state and no start time is set...\n        if context.proposed_state.is_running() and context.run.start_time is None:\n            # set the start time\n            context.run.start_time = context.proposed_state.timestamp\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetRunStateTimestamp","title":"SetRunStateTimestamp","text":"

Bases: BaseUniversalTransform

Records the time a run changes states.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class SetRunStateTimestamp(BaseUniversalTransform):\n\"\"\"\n    Records the time a run changes states.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # record the new state's timestamp\n        context.run.state_timestamp = context.proposed_state.timestamp\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetEndTime","title":"SetEndTime","text":"

Bases: BaseUniversalTransform

Records the time a run enters a terminal state.

With normal client usage, a run will not transition out of a terminal state. However, it's possible to force these transitions manually via the API. When leaving a terminal state, the end time will be unset.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class SetEndTime(BaseUniversalTransform):\n\"\"\"\n    Records the time a run enters a terminal state.\n\n    With normal client usage, a run will not transition out of a terminal state.\n    However, it's possible to force these transitions manually via the API. While\n    leaving a terminal state, the end time will be unset.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if exiting a final state for a non-final state...\n        if (\n            context.initial_state\n            and context.initial_state.is_final()\n            and not context.proposed_state.is_final()\n        ):\n            # clear the end time\n            context.run.end_time = None\n\n        # if entering a final state...\n        if context.proposed_state.is_final():\n            # if the run has a start time and no end time, give it one\n            if context.run.start_time and not context.run.end_time:\n                context.run.end_time = context.proposed_state.timestamp\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.IncrementRunTime","title":"IncrementRunTime","text":"

Bases: BaseUniversalTransform

Records the amount of time a run spends in the running state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class IncrementRunTime(BaseUniversalTransform):\n\"\"\"\n    Records the amount of time a run spends in the running state.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if exiting a running state...\n        if context.initial_state and context.initial_state.is_running():\n            # increment the run time by the time spent in the previous state\n            context.run.total_run_time += (\n                context.proposed_state.timestamp - context.initial_state.timestamp\n            )\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.IncrementFlowRunCount","title":"IncrementFlowRunCount","text":"

Bases: BaseUniversalTransform

Records the number of times a run enters a running state. For use with retries.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class IncrementFlowRunCount(BaseUniversalTransform):\n\"\"\"\n    Records the number of times a run enters a running state. For use with retries.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if entering a running state...\n        if context.proposed_state.is_running():\n            # do not increment the run count if resuming a paused flow\n            api_version = context.parameters.get(\"api-version\", None)\n            if api_version is None or api_version >= Version(\"0.8.4\"):\n                if context.run.empirical_policy.resuming:\n                    return\n\n            # increment the run count\n            context.run.run_count += 1\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.RemoveResumingIndicator","title":"RemoveResumingIndicator","text":"

Bases: BaseUniversalTransform

Removes the indicator on a flow run that marks it as resuming.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class RemoveResumingIndicator(BaseUniversalTransform):\n\"\"\"\n    Removes the indicator on a flow run that marks it as resuming.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        proposed_state = context.proposed_state\n\n        api_version = context.parameters.get(\"api-version\", None)\n        if api_version is None or api_version >= Version(\"0.8.4\"):\n            if proposed_state.is_running() or proposed_state.is_final():\n                if context.run.empirical_policy.resuming:\n                    updated_policy = context.run.empirical_policy.dict()\n                    updated_policy[\"resuming\"] = False\n                    context.run.empirical_policy = FlowRunPolicy(**updated_policy)\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.IncrementTaskRunCount","title":"IncrementTaskRunCount","text":"

Bases: BaseUniversalTransform

Records the number of times a run enters a running state. For use with retries.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class IncrementTaskRunCount(BaseUniversalTransform):\n\"\"\"\n    Records the number of times a run enters a running state. For use with retries.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if entering a running state...\n        if context.proposed_state.is_running():\n            # increment the run count\n            context.run.run_count += 1\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetExpectedStartTime","title":"SetExpectedStartTime","text":"

Bases: BaseUniversalTransform

Estimates the time a state is expected to start running if not set.

For scheduled states, this estimate is simply the scheduled time. For other states, this is set to the time the proposed state was created by Prefect.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class SetExpectedStartTime(BaseUniversalTransform):\n\"\"\"\n    Estimates the time a state is expected to start running if not set.\n\n    For scheduled states, this estimate is simply the scheduled time. For other states,\n    this is set to the time the proposed state was created by Prefect.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # set expected start time if this is the first state\n        if not context.run.expected_start_time:\n            if context.proposed_state.is_scheduled():\n                context.run.expected_start_time = (\n                    context.proposed_state.state_details.scheduled_time\n                )\n            else:\n                context.run.expected_start_time = context.proposed_state.timestamp\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetNextScheduledStartTime","title":"SetNextScheduledStartTime","text":"

Bases: BaseUniversalTransform

Records the scheduled time on a run.

When a run enters a scheduled state, run.next_scheduled_start_time is set to the state's scheduled time. When leaving a scheduled state, run.next_scheduled_start_time is unset.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class SetNextScheduledStartTime(BaseUniversalTransform):\n\"\"\"\n    Records the scheduled time on a run.\n\n    When a run enters a scheduled state, `run.next_scheduled_start_time` is set to\n    the state's scheduled time. When leaving a scheduled state,\n    `run.next_scheduled_start_time` is unset.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # remove the next scheduled start time if exiting a scheduled state\n        if context.initial_state and context.initial_state.is_scheduled():\n            context.run.next_scheduled_start_time = None\n\n        # set next scheduled start time if entering a scheduled state\n        if context.proposed_state.is_scheduled():\n            context.run.next_scheduled_start_time = (\n                context.proposed_state.state_details.scheduled_time\n            )\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.UpdateSubflowParentTask","title":"UpdateSubflowParentTask","text":"

Bases: BaseUniversalTransform

Whenever a subflow changes state, it must update its parent task run's state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class UpdateSubflowParentTask(BaseUniversalTransform):\n\"\"\"\n    Whenever a subflow changes state, it must update its parent task run's state.\n    \"\"\"\n\n    async def after_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # only applies to flow runs with a parent task run id\n        if context.run.parent_task_run_id is not None:\n            # avoid mutation of the flow run state\n            subflow_parent_task_state = context.validated_state.copy(\n                reset_fields=True,\n                include={\n                    \"type\",\n                    \"timestamp\",\n                    \"name\",\n                    \"message\",\n                    \"state_details\",\n                    \"data\",\n                },\n            )\n\n            # set the task's \"child flow run id\" to be the subflow run id\n            subflow_parent_task_state.state_details.child_flow_run_id = context.run.id\n\n            await models.task_runs.set_task_run_state(\n                session=context.session,\n                task_run_id=context.run.parent_task_run_id,\n                state=subflow_parent_task_state,\n                force=True,\n            )\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.UpdateSubflowStateDetails","title":"UpdateSubflowStateDetails","text":"

Bases: BaseUniversalTransform

Update a child subflow state's references to a corresponding tracking task run id in the parent flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class UpdateSubflowStateDetails(BaseUniversalTransform):\n\"\"\"\n    Update a child subflow state's references to a corresponding tracking task run id\n    in the parent flow run\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # only applies to flow runs with a parent task run id\n        if context.run.parent_task_run_id is not None:\n            context.proposed_state.state_details.task_run_id = (\n                context.run.parent_task_run_id\n            )\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.UpdateStateDetails","title":"UpdateStateDetails","text":"

Bases: BaseUniversalTransform

Update a state's references to a corresponding flow- or task- run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class UpdateStateDetails(BaseUniversalTransform):\n\"\"\"\n    Update a state's references to a corresponding flow- or task- run.\n    \"\"\"\n\n    async def before_transition(\n        self,\n        context: OrchestrationContext,\n    ) -> None:\n        if self.nullified_transition():\n            return\n\n        if isinstance(context, FlowOrchestrationContext):\n            flow_run = await context.flow_run()\n            context.proposed_state.state_details.flow_run_id = flow_run.id\n\n        elif isinstance(context, TaskOrchestrationContext):\n            task_run = await context.task_run()\n            context.proposed_state.state_details.flow_run_id = task_run.flow_run_id\n            context.proposed_state.state_details.task_run_id = task_run.id\n
"},{"location":"api-ref/server/orchestration/policies/","title":"server.orchestration.policies","text":""},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies","title":"prefect.server.orchestration.policies","text":"

Policies are collections of orchestration rules and transforms.

Prefect implements (most) orchestration with logic that governs a Prefect flow or task changing state. Policies organize this orchestration logic, both to provide an ordering mechanism and to provide observability into the orchestration process.

While Prefect's orchestration rules can gracefully run independently of one another, ordering can still have an impact on the observed behavior of the system. For example, it makes no sense to secure a concurrency slot for a run if a cached state exists. Furthermore, policies provide a mechanism to configure and observe exactly what logic will fire against a transition.

"},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies.BaseOrchestrationPolicy","title":"BaseOrchestrationPolicy","text":"

Bases: ABC

An abstract base class used to organize orchestration rules in priority order.

Different collections of orchestration rules might be used to govern various kinds of transitions. For example, flow-run states and task-run states might require different orchestration logic.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/policies.py
class BaseOrchestrationPolicy(ABC):\n\"\"\"\n    An abstract base class used to organize orchestration rules in priority order.\n\n    Different collections of orchestration rules might be used to govern various kinds\n    of transitions. For example, flow-run states and task-run states might require\n    different orchestration logic.\n    \"\"\"\n\n    @staticmethod\n    @abstractmethod\n    def priority():\n\"\"\"\n        A list of orchestration rules in priority order.\n        \"\"\"\n\n        return []\n\n    @classmethod\n    def compile_transition_rules(cls, from_state=None, to_state=None):\n\"\"\"\n        Returns rules in policy that are valid for the specified state transition.\n        \"\"\"\n\n        transition_rules = []\n        for rule in cls.priority():\n            if from_state in rule.FROM_STATES and to_state in rule.TO_STATES:\n                transition_rules.append(rule)\n        return transition_rules\n
"},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies.BaseOrchestrationPolicy.priority","title":"priority abstractmethod staticmethod","text":"

A list of orchestration rules in priority order.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/policies.py
@staticmethod\n@abstractmethod\ndef priority():\n\"\"\"\n    A list of orchestration rules in priority order.\n    \"\"\"\n\n    return []\n
"},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies.BaseOrchestrationPolicy.compile_transition_rules","title":"compile_transition_rules classmethod","text":"

Returns rules in policy that are valid for the specified state transition.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/policies.py
@classmethod\ndef compile_transition_rules(cls, from_state=None, to_state=None):\n\"\"\"\n    Returns rules in policy that are valid for the specified state transition.\n    \"\"\"\n\n    transition_rules = []\n    for rule in cls.priority():\n        if from_state in rule.FROM_STATES and to_state in rule.TO_STATES:\n            transition_rules.append(rule)\n    return transition_rules\n
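
As a usage sketch only (the rule and policy names below are hypothetical, and the StateType import path is an assumption), a policy lists its rules in priority() and compile_transition_rules then filters them down to the rules whose FROM_STATES/TO_STATES match a particular transition:

from prefect.server.orchestration.policies import BaseOrchestrationPolicy
from prefect.server.orchestration.rules import BaseOrchestrationRule
from prefect.server.schemas.states import StateType


class MarkCompletion(BaseOrchestrationRule):
    # hypothetical rule: only governs RUNNING -> COMPLETED transitions
    FROM_STATES = [StateType.RUNNING]
    TO_STATES = [StateType.COMPLETED]


class MinimalPolicy(BaseOrchestrationPolicy):
    @staticmethod
    def priority():
        # rules fire in this order against each governed transition
        return [MarkCompletion]


# only rules whose FROM_STATES/TO_STATES contain the transition are returned,
# so this yields [MarkCompletion]
rules = MinimalPolicy.compile_transition_rules(
    from_state=StateType.RUNNING, to_state=StateType.COMPLETED
)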
"},{"location":"api-ref/server/orchestration/rules/","title":"server.orchestration.rules","text":""},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules","title":"prefect.server.orchestration.rules","text":"

Prefect's flow and task-run orchestration machinery.

This module contains all the core concepts necessary to implement Prefect's state orchestration engine. These states correspond to intuitive descriptions of all the points at which Prefect can observe a flow or task executing user code and intervene, if necessary. A detailed description of states can be found in our concept documentation.

Prefect's orchestration engine operates under the assumption that no governed user code will execute without first requesting that the Prefect REST API validate a change in state and record metadata about the run. With all attempts to run user code being checked against a Prefect instance, the Prefect REST API database becomes the unambiguous source of truth for managing the execution of complex interacting workflows. Orchestration rules can be implemented as discrete units of logic that operate against each state transition and can be fully observable, extensible, and customizable -- all without needing to store or parse a single line of user code.

"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext","title":"OrchestrationContext","text":"

Bases: PrefectBaseModel

A container for a state transition, governed by orchestration rules.

Note

An OrchestrationContext should not be instantiated directly; instead, use the flow- or task-specific subclasses, FlowOrchestrationContext and TaskOrchestrationContext.

When a flow- or task- run attempts to change state, the Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an OrchestrationContext, which is subsequently governed by nested orchestration rules implemented using the BaseOrchestrationRule ABC.

OrchestrationContext introduces the concept of a state being None in the context of an intended state transition. An initial state can be None if a run is attempting to set a state for the first time. The proposed state might be None if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.

Attributes:

session (Optional[Union[sa.orm.Session, AsyncSession]]): a SQLAlchemy database session

initial_state (Optional[states.State]): the initial state of a run

proposed_state (Optional[states.State]): the proposed state a run is transitioning into

validated_state (Optional[states.State]): a proposed state that has been committed to the database

rule_signature (List[str]): a record of rules that have fired on entry into a managed context, currently only used for debugging purposes

finalization_signature (List[str]): a record of rules that have fired on exit from a managed context, currently only used for debugging purposes

response_status (SetStateStatus): a SetStateStatus object used to build the API response

response_details (StateResponseDetails): a StateResponseDetails object used to build the API response

Parameters:

session: a SQLAlchemy database session (required)

initial_state: the initial state of a run (required)

proposed_state: the proposed state a run is transitioning into (required)

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
class OrchestrationContext(PrefectBaseModel):\n\"\"\"\n    A container for a state transition, governed by orchestration rules.\n\n    Note:\n        An `OrchestrationContext` should not be instantiated directly, instead\n        use the flow- or task- specific subclasses, `FlowOrchestrationContext` and\n        `TaskOrchestrationContext`.\n\n    When a flow- or task- run attempts to change state, Prefect REST API has an opportunity\n    to decide whether this transition can proceed. All the relevant information\n    associated with the state transition is stored in an `OrchestrationContext`,\n    which is subsequently governed by nested orchestration rules implemented using\n    the `BaseOrchestrationRule` ABC.\n\n    `OrchestrationContext` introduces the concept of a state being `None` in the\n    context of an intended state transition. An initial state can be `None` if a run\n    is is attempting to set a state for the first time. The proposed state might be\n    `None` if a rule governing the transition determines that no state change\n    should occur at all and nothing is written to the database.\n\n    Attributes:\n        session: a SQLAlchemy database session\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n        validated_state: a proposed state that has committed to the database\n        rule_signature: a record of rules that have fired on entry into a\n            managed context, currently only used for debugging purposes\n        finalization_signature: a record of rules that have fired on exit from a\n            managed context, currently only used for debugging purposes\n        response_status: a SetStateStatus object used to build the API response\n        response_details:a StateResponseDetails object use to build the API response\n\n    Args:\n        session: a SQLAlchemy database session\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n    \"\"\"\n\n    class Config:\n        arbitrary_types_allowed = True\n\n    session: Optional[Union[sa.orm.Session, AsyncSession]] = ...\n    initial_state: Optional[states.State] = ...\n    proposed_state: Optional[states.State] = ...\n    validated_state: Optional[states.State]\n    rule_signature: List[str] = Field(default_factory=list)\n    finalization_signature: List[str] = Field(default_factory=list)\n    response_status: SetStateStatus = Field(default=SetStateStatus.ACCEPT)\n    response_details: StateResponseDetails = Field(default_factory=StateAcceptDetails)\n    orchestration_error: Optional[Exception] = Field(default=None)\n    parameters: Dict[Any, Any] = Field(default_factory=dict)\n\n    @property\n    def initial_state_type(self) -> Optional[states.StateType]:\n\"\"\"The state type of `self.initial_state` if it exists.\"\"\"\n\n        return self.initial_state.type if self.initial_state else None\n\n    @property\n    def proposed_state_type(self) -> Optional[states.StateType]:\n\"\"\"The state type of `self.proposed_state` if it exists.\"\"\"\n\n        return self.proposed_state.type if self.proposed_state else None\n\n    @property\n    def validated_state_type(self) -> Optional[states.StateType]:\n\"\"\"The state type of `self.validated_state` if it exists.\"\"\"\n        return self.validated_state.type if self.validated_state else None\n\n    def safe_copy(self):\n\"\"\"\n        Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n        
Orchestration rules govern state transitions using information stored in\n        an `OrchestrationContext`. However, mutating objects stored on the context\n        directly can have unintended side-effects. To guard against this,\n        `self.safe_copy` can be used to pass information to orchestration rules\n        without risking mutation.\n\n        Returns:\n            A mutation-safe copy of the `OrchestrationContext`\n        \"\"\"\n\n        safe_copy = self.copy()\n\n        safe_copy.initial_state = (\n            self.initial_state.copy() if self.initial_state else None\n        )\n        safe_copy.proposed_state = (\n            self.proposed_state.copy() if self.proposed_state else None\n        )\n        safe_copy.validated_state = (\n            self.validated_state.copy() if self.validated_state else None\n        )\n        safe_copy.parameters = self.parameters.copy()\n        return safe_copy\n\n    def entry_context(self):\n\"\"\"\n        A convenience method that generates input parameters for orchestration rules.\n\n        An `OrchestrationContext` defines a state transition that is managed by\n        orchestration rules which can fire hooks before a transition has been committed\n        to the database. These hooks have a consistent interface which can be generated\n        with this method.\n        \"\"\"\n\n        safe_context = self.safe_copy()\n        return safe_context.initial_state, safe_context.proposed_state, safe_context\n\n    def exit_context(self):\n\"\"\"\n        A convenience method that generates input parameters for orchestration rules.\n\n        An `OrchestrationContext` defines a state transition that is managed by\n        orchestration rules which can fire hooks after a transition has been committed\n        to the database. These hooks have a consistent interface which can be generated\n        with this method.\n        \"\"\"\n\n        safe_context = self.safe_copy()\n        return safe_context.initial_state, safe_context.validated_state, safe_context\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.initial_state_type","title":"initial_state_type: Optional[states.StateType] property","text":"

The state type of self.initial_state if it exists.

"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.proposed_state_type","title":"proposed_state_type: Optional[states.StateType] property","text":"

The state type of self.proposed_state if it exists.

"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.validated_state_type","title":"validated_state_type: Optional[states.StateType] property","text":"

The state type of self.validated_state if it exists.

"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.safe_copy","title":"safe_copy","text":"

Creates a mostly-mutation-safe copy for use in orchestration rules.

Orchestration rules govern state transitions using information stored in an OrchestrationContext. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, self.safe_copy can be used to pass information to orchestration rules without risking mutation.

Returns:

A mutation-safe copy of the OrchestrationContext

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
def safe_copy(self):\n\"\"\"\n    Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n    Orchestration rules govern state transitions using information stored in\n    an `OrchestrationContext`. However, mutating objects stored on the context\n    directly can have unintended side-effects. To guard against this,\n    `self.safe_copy` can be used to pass information to orchestration rules\n    without risking mutation.\n\n    Returns:\n        A mutation-safe copy of the `OrchestrationContext`\n    \"\"\"\n\n    safe_copy = self.copy()\n\n    safe_copy.initial_state = (\n        self.initial_state.copy() if self.initial_state else None\n    )\n    safe_copy.proposed_state = (\n        self.proposed_state.copy() if self.proposed_state else None\n    )\n    safe_copy.validated_state = (\n        self.validated_state.copy() if self.validated_state else None\n    )\n    safe_copy.parameters = self.parameters.copy()\n    return safe_copy\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.entry_context","title":"entry_context","text":"

A convenience method that generates input parameters for orchestration rules.

An OrchestrationContext defines a state transition that is managed by orchestration rules which can fire hooks before a transition has been committed to the database. These hooks have a consistent interface which can be generated with this method.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
def entry_context(self):\n\"\"\"\n    A convenience method that generates input parameters for orchestration rules.\n\n    An `OrchestrationContext` defines a state transition that is managed by\n    orchestration rules which can fire hooks before a transition has been committed\n    to the database. These hooks have a consistent interface which can be generated\n    with this method.\n    \"\"\"\n\n    safe_context = self.safe_copy()\n    return safe_context.initial_state, safe_context.proposed_state, safe_context\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.exit_context","title":"exit_context","text":"

A convenience method that generates input parameters for orchestration rules.

An OrchestrationContext defines a state transition that is managed by orchestration rules which can fire hooks after a transition has been committed to the database. These hooks have a consistent interface which can be generated with this method.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
def exit_context(self):\n\"\"\"\n    A convenience method that generates input parameters for orchestration rules.\n\n    An `OrchestrationContext` defines a state transition that is managed by\n    orchestration rules which can fire hooks after a transition has been committed\n    to the database. These hooks have a consistent interface which can be generated\n    with this method.\n    \"\"\"\n\n    safe_context = self.safe_copy()\n    return safe_context.initial_state, safe_context.validated_state, safe_context\n
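
Taken together, entry_context and exit_context produce the (initial state, proposed or validated state, safe context) tuples that rule hooks consume. The sketch below mirrors what BaseOrchestrationRule.__aenter__ and __aexit__ do internally (see the rule source further down); run_rule_hooks is a hypothetical helper shown for illustration only.

from prefect.server.orchestration.rules import (
    BaseOrchestrationRule,
    OrchestrationContext,
)


async def run_rule_hooks(
    rule: BaseOrchestrationRule, context: OrchestrationContext
) -> None:
    # before the transition is committed: (initial_state, proposed_state, safe copy)
    await rule.before_transition(*context.entry_context())

    # ...the proposed state would be validated and committed here...

    # after the transition is committed: (initial_state, validated_state, safe copy)
    await rule.after_transition(*context.exit_context())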
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext","title":"FlowOrchestrationContext","text":"

Bases: OrchestrationContext

A container for a flow run state transition, governed by orchestration rules.

When a flow run attempts to change state, the Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an OrchestrationContext, which is subsequently governed by nested orchestration rules implemented using the BaseOrchestrationRule ABC.

FlowOrchestrationContext introduces the concept of a state being None in the context of an intended state transition. An initial state can be None if a run is attempting to set a state for the first time. The proposed state might be None if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.

Attributes:

session: a SQLAlchemy database session

run (Any): the flow run attempting to change state

initial_state (Any): the initial state of the run

proposed_state (Any): the proposed state the run is transitioning into

validated_state (Any): a proposed state that has been committed to the database

rule_signature (Any): a record of rules that have fired on entry into a managed context, currently only used for debugging purposes

finalization_signature (Any): a record of rules that have fired on exit from a managed context, currently only used for debugging purposes

response_status (Any): a SetStateStatus object used to build the API response

response_details (Any): a StateResponseDetails object used to build the API response

Parameters:

session: a SQLAlchemy database session (required)

run: the flow run attempting to change state (required)

initial_state: the initial state of a run (required)

proposed_state: the proposed state a run is transitioning into (required)

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
class FlowOrchestrationContext(OrchestrationContext):\n\"\"\"\n    A container for a flow run state transition, governed by orchestration rules.\n\n    When a flow- run attempts to change state, Prefect REST API has an opportunity\n    to decide whether this transition can proceed. All the relevant information\n    associated with the state transition is stored in an `OrchestrationContext`,\n    which is subsequently governed by nested orchestration rules implemented using\n    the `BaseOrchestrationRule` ABC.\n\n    `FlowOrchestrationContext` introduces the concept of a state being `None` in the\n    context of an intended state transition. An initial state can be `None` if a run\n    is is attempting to set a state for the first time. The proposed state might be\n    `None` if a rule governing the transition determines that no state change\n    should occur at all and nothing is written to the database.\n\n    Attributes:\n        session: a SQLAlchemy database session\n        run: the flow run attempting to change state\n        initial_state: the initial state of the run\n        proposed_state: the proposed state the run is transitioning into\n        validated_state: a proposed state that has committed to the database\n        rule_signature: a record of rules that have fired on entry into a\n            managed context, currently only used for debugging purposes\n        finalization_signature: a record of rules that have fired on exit from a\n            managed context, currently only used for debugging purposes\n        response_status: a SetStateStatus object used to build the API response\n        response_details:a StateResponseDetails object use to build the API response\n\n    Args:\n        session: a SQLAlchemy database session\n        run: the flow run attempting to change state\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n    \"\"\"\n\n    # run: db.FlowRun = ...\n    run: Any = ...\n\n    @inject_db\n    async def validate_proposed_state(\n        self,\n        db: PrefectDBInterface,\n    ):\n\"\"\"\n        Validates a proposed state by committing it to the database.\n\n        After the `FlowOrchestrationContext` is governed by orchestration rules, the\n        proposed state can be validated: the proposed state is added to the current\n        SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n        state. 
The state on the run is set to the validated state as well.\n\n        If the proposed state is `None` when this method is called, no state will be\n        written and `self.validated_state` will be set to the run's current state.\n\n        Returns:\n            None\n        \"\"\"\n        # (circular import)\n        from prefect.server.api.server import is_client_retryable_exception\n\n        try:\n            await self._validate_proposed_state()\n            return\n        except Exception as exc:\n            logger.exception(\"Encountered error during state validation\")\n            self.proposed_state = None\n\n            if is_client_retryable_exception(exc):\n                # Do not capture retryable database exceptions, this exception will be\n                # raised as a 503 in the API layer\n                raise\n\n            reason = f\"Error validating state: {exc!r}\"\n            self.response_status = SetStateStatus.ABORT\n            self.response_details = StateAbortDetails(reason=reason)\n\n    @inject_db\n    async def _validate_proposed_state(\n        self,\n        db: PrefectDBInterface,\n    ):\n        if self.proposed_state is None:\n            validated_orm_state = self.run.state\n            # We cannot access `self.run.state.data` directly for unknown reasons\n            state_data = (\n                (\n                    await artifacts.read_artifact(\n                        self.session, self.run.state.result_artifact_id\n                    )\n                ).data\n                if self.run.state.result_artifact_id\n                else None\n            )\n        else:\n            state_payload = self.proposed_state.dict(shallow=True)\n            state_data = state_payload.pop(\"data\", None)\n\n            if state_data is not None:\n                state_result_artifact = core.Artifact.from_result(state_data)\n                state_result_artifact.flow_run_id = self.run.id\n                await artifacts.create_artifact(self.session, state_result_artifact)\n                state_payload[\"result_artifact_id\"] = state_result_artifact.id\n\n            validated_orm_state = db.FlowRunState(\n                flow_run_id=self.run.id,\n                **state_payload,\n            )\n\n        self.session.add(validated_orm_state)\n        self.run.set_state(validated_orm_state)\n\n        await self.session.flush()\n        if validated_orm_state:\n            self.validated_state = states.State.from_orm_without_result(\n                validated_orm_state, with_data=state_data\n            )\n        else:\n            self.validated_state = None\n\n    def safe_copy(self):\n\"\"\"\n        Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n        Orchestration rules govern state transitions using information stored in\n        an `OrchestrationContext`. However, mutating objects stored on the context\n        directly can have unintended side-effects. 
To guard against this,\n        `self.safe_copy` can be used to pass information to orchestration rules\n        without risking mutation.\n\n        Note:\n            `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n        Returns:\n            A mutation-safe copy of `FlowOrchestrationContext`\n        \"\"\"\n\n        return super().safe_copy()\n\n    @property\n    def run_settings(self) -> Dict:\n\"\"\"Run-level settings used to orchestrate the state transition.\"\"\"\n\n        return self.run.empirical_policy\n\n    async def task_run(self):\n        return None\n\n    async def flow_run(self):\n        return self.run\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext.run_settings","title":"run_settings: Dict property","text":"

Run-level settings used to orchestrate the state transition.

"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext.validate_proposed_state","title":"validate_proposed_state async","text":"

Validates a proposed state by committing it to the database.

After the FlowOrchestrationContext is governed by orchestration rules, the proposed state can be validated: the proposed state is added to the current SQLAlchemy session and is flushed. self.validated_state is set to the flushed state. The state on the run is set to the validated state as well.

If the proposed state is None when this method is called, no state will be written and self.validated_state will be set to the run's current state.

Returns:

None

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
@inject_db\nasync def validate_proposed_state(\n    self,\n    db: PrefectDBInterface,\n):\n\"\"\"\n    Validates a proposed state by committing it to the database.\n\n    After the `FlowOrchestrationContext` is governed by orchestration rules, the\n    proposed state can be validated: the proposed state is added to the current\n    SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n    state. The state on the run is set to the validated state as well.\n\n    If the proposed state is `None` when this method is called, no state will be\n    written and `self.validated_state` will be set to the run's current state.\n\n    Returns:\n        None\n    \"\"\"\n    # (circular import)\n    from prefect.server.api.server import is_client_retryable_exception\n\n    try:\n        await self._validate_proposed_state()\n        return\n    except Exception as exc:\n        logger.exception(\"Encountered error during state validation\")\n        self.proposed_state = None\n\n        if is_client_retryable_exception(exc):\n            # Do not capture retryable database exceptions, this exception will be\n            # raised as a 503 in the API layer\n            raise\n\n        reason = f\"Error validating state: {exc!r}\"\n        self.response_status = SetStateStatus.ABORT\n        self.response_details = StateAbortDetails(reason=reason)\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext.safe_copy","title":"safe_copy","text":"

Creates a mostly-mutation-safe copy for use in orchestration rules.

Orchestration rules govern state transitions using information stored in an OrchestrationContext. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, self.safe_copy can be used to pass information to orchestration rules without risking mutation.

Note

self.run is an ORM model, and even when copied is unsafe to mutate

Returns:

A mutation-safe copy of FlowOrchestrationContext

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
def safe_copy(self):\n\"\"\"\n    Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n    Orchestration rules govern state transitions using information stored in\n    an `OrchestrationContext`. However, mutating objects stored on the context\n    directly can have unintended side-effects. To guard against this,\n    `self.safe_copy` can be used to pass information to orchestration rules\n    without risking mutation.\n\n    Note:\n        `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n    Returns:\n        A mutation-safe copy of `FlowOrchestrationContext`\n    \"\"\"\n\n    return super().safe_copy()\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext","title":"TaskOrchestrationContext","text":"

Bases: OrchestrationContext

A container for a task run state transition, governed by orchestration rules.

When a task run attempts to change state, the Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an OrchestrationContext, which is subsequently governed by nested orchestration rules implemented using the BaseOrchestrationRule ABC.

TaskOrchestrationContext introduces the concept of a state being None in the context of an intended state transition. An initial state can be None if a run is attempting to set a state for the first time. The proposed state might be None if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.

Attributes:

session: a SQLAlchemy database session

run (Any): the task run attempting to change state

initial_state (Any): the initial state of the run

proposed_state (Any): the proposed state the run is transitioning into

validated_state (Any): a proposed state that has been committed to the database

rule_signature (Any): a record of rules that have fired on entry into a managed context, currently only used for debugging purposes

finalization_signature (Any): a record of rules that have fired on exit from a managed context, currently only used for debugging purposes

response_status (Any): a SetStateStatus object used to build the API response

response_details (Any): a StateResponseDetails object used to build the API response

Parameters:

session: a SQLAlchemy database session (required)

run: the task run attempting to change state (required)

initial_state: the initial state of a run (required)

proposed_state: the proposed state a run is transitioning into (required)

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
class TaskOrchestrationContext(OrchestrationContext):\n\"\"\"\n    A container for a task run state transition, governed by orchestration rules.\n\n    When a task- run attempts to change state, Prefect REST API has an opportunity\n    to decide whether this transition can proceed. All the relevant information\n    associated with the state transition is stored in an `OrchestrationContext`,\n    which is subsequently governed by nested orchestration rules implemented using\n    the `BaseOrchestrationRule` ABC.\n\n    `TaskOrchestrationContext` introduces the concept of a state being `None` in the\n    context of an intended state transition. An initial state can be `None` if a run\n    is is attempting to set a state for the first time. The proposed state might be\n    `None` if a rule governing the transition determines that no state change\n    should occur at all and nothing is written to the database.\n\n    Attributes:\n        session: a SQLAlchemy database session\n        run: the task run attempting to change state\n        initial_state: the initial state of the run\n        proposed_state: the proposed state the run is transitioning into\n        validated_state: a proposed state that has committed to the database\n        rule_signature: a record of rules that have fired on entry into a\n            managed context, currently only used for debugging purposes\n        finalization_signature: a record of rules that have fired on exit from a\n            managed context, currently only used for debugging purposes\n        response_status: a SetStateStatus object used to build the API response\n        response_details:a StateResponseDetails object use to build the API response\n\n    Args:\n        session: a SQLAlchemy database session\n        run: the task run attempting to change state\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n    \"\"\"\n\n    # run: db.TaskRun = ...\n    run: Any = ...\n\n    @inject_db\n    async def validate_proposed_state(\n        self,\n        db: PrefectDBInterface,\n    ):\n\"\"\"\n        Validates a proposed state by committing it to the database.\n\n        After the `TaskOrchestrationContext` is governed by orchestration rules, the\n        proposed state can be validated: the proposed state is added to the current\n        SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n        state. 
The state on the run is set to the validated state as well.\n\n        If the proposed state is `None` when this method is called, no state will be\n        written and `self.validated_state` will be set to the run's current state.\n\n        Returns:\n            None\n        \"\"\"\n        # (circular import)\n        from prefect.server.api.server import is_client_retryable_exception\n\n        try:\n            await self._validate_proposed_state()\n            return\n        except Exception as exc:\n            logger.exception(\"Encountered error during state validation\")\n            self.proposed_state = None\n\n            if is_client_retryable_exception(exc):\n                # Do not capture retryable database exceptions, this exception will be\n                # raised as a 503 in the API layer\n                raise\n\n            reason = f\"Error validating state: {exc!r}\"\n            self.response_status = SetStateStatus.ABORT\n            self.response_details = StateAbortDetails(reason=reason)\n\n    @inject_db\n    async def _validate_proposed_state(\n        self,\n        db: PrefectDBInterface,\n    ):\n        if self.proposed_state is None:\n            validated_orm_state = self.run.state\n            # We cannot access `self.run.state.data` directly for unknown reasons\n            state_data = (\n                (\n                    await artifacts.read_artifact(\n                        self.session, self.run.state.result_artifact_id\n                    )\n                ).data\n                if self.run.state.result_artifact_id\n                else None\n            )\n        else:\n            state_payload = self.proposed_state.dict(shallow=True)\n            state_data = state_payload.pop(\"data\", None)\n\n            if state_data is not None:\n                state_result_artifact = core.Artifact.from_result(state_data)\n                state_result_artifact.task_run_id = self.run.id\n\n                flow_run = await self.flow_run()\n                state_result_artifact.flow_run_id = flow_run.id\n\n                await artifacts.create_artifact(self.session, state_result_artifact)\n                state_payload[\"result_artifact_id\"] = state_result_artifact.id\n\n            validated_orm_state = db.TaskRunState(\n                task_run_id=self.run.id,\n                **state_payload,\n            )\n\n        self.session.add(validated_orm_state)\n        self.run.set_state(validated_orm_state)\n\n        await self.session.flush()\n        if validated_orm_state:\n            self.validated_state = states.State.from_orm_without_result(\n                validated_orm_state, with_data=state_data\n            )\n        else:\n            self.validated_state = None\n\n    def safe_copy(self):\n\"\"\"\n        Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n        Orchestration rules govern state transitions using information stored in\n        an `OrchestrationContext`. However, mutating objects stored on the context\n        directly can have unintended side-effects. 
To guard against this,\n        `self.safe_copy` can be used to pass information to orchestration rules\n        without risking mutation.\n\n        Note:\n            `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n        Returns:\n            A mutation-safe copy of `TaskOrchestrationContext`\n        \"\"\"\n\n        return super().safe_copy()\n\n    @property\n    def run_settings(self) -> Dict:\n\"\"\"Run-level settings used to orchestrate the state transition.\"\"\"\n\n        return self.run.empirical_policy\n\n    async def task_run(self):\n        return self.run\n\n    async def flow_run(self):\n        return await flow_runs.read_flow_run(\n            session=self.session,\n            flow_run_id=self.run.flow_run_id,\n        )\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext.run_settings","title":"run_settings: Dict property","text":"

Run-level settings used to orchestrate the state transition.

"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext.validate_proposed_state","title":"validate_proposed_state async","text":"

Validates a proposed state by committing it to the database.

After the TaskOrchestrationContext is governed by orchestration rules, the proposed state can be validated: the proposed state is added to the current SQLAlchemy session and is flushed. self.validated_state is set to the flushed state. The state on the run is set to the validated state as well.

If the proposed state is None when this method is called, no state will be written and self.validated_state will be set to the run's current state.

Returns:

None

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
@inject_db\nasync def validate_proposed_state(\n    self,\n    db: PrefectDBInterface,\n):\n\"\"\"\n    Validates a proposed state by committing it to the database.\n\n    After the `TaskOrchestrationContext` is governed by orchestration rules, the\n    proposed state can be validated: the proposed state is added to the current\n    SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n    state. The state on the run is set to the validated state as well.\n\n    If the proposed state is `None` when this method is called, no state will be\n    written and `self.validated_state` will be set to the run's current state.\n\n    Returns:\n        None\n    \"\"\"\n    # (circular import)\n    from prefect.server.api.server import is_client_retryable_exception\n\n    try:\n        await self._validate_proposed_state()\n        return\n    except Exception as exc:\n        logger.exception(\"Encountered error during state validation\")\n        self.proposed_state = None\n\n        if is_client_retryable_exception(exc):\n            # Do not capture retryable database exceptions, this exception will be\n            # raised as a 503 in the API layer\n            raise\n\n        reason = f\"Error validating state: {exc!r}\"\n        self.response_status = SetStateStatus.ABORT\n        self.response_details = StateAbortDetails(reason=reason)\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext.safe_copy","title":"safe_copy","text":"

Creates a mostly-mutation-safe copy for use in orchestration rules.

Orchestration rules govern state transitions using information stored in an OrchestrationContext. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, self.safe_copy can be used to pass information to orchestration rules without risking mutation.

Note

self.run is an ORM model, and even when copied is unsafe to mutate

Returns:

A mutation-safe copy of TaskOrchestrationContext

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
def safe_copy(self):\n\"\"\"\n    Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n    Orchestration rules govern state transitions using information stored in\n    an `OrchestrationContext`. However, mutating objects stored on the context\n    directly can have unintended side-effects. To guard against this,\n    `self.safe_copy` can be used to pass information to orchestration rules\n    without risking mutation.\n\n    Note:\n        `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n    Returns:\n        A mutation-safe copy of `TaskOrchestrationContext`\n    \"\"\"\n\n    return super().safe_copy()\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule","title":"BaseOrchestrationRule","text":"

Bases: contextlib.AbstractAsyncContextManager

An abstract base class used to implement a discrete piece of orchestration logic.

An OrchestrationRule is a stateful context manager that directly governs a state transition. Complex orchestration is achieved by nesting multiple rules. Each rule runs against an OrchestrationContext that contains the transition details; this context is then passed to subsequent rules. The context can be modified by hooks that fire before and after a new state is validated and committed to the database. These hooks will fire as long as the state transition is considered \"valid\" and govern a transition by either modifying the proposed state before it is validated or by producing a side-effect.

A state transition occurs whenever a flow- or task- run changes state, prompting the Prefect REST API to decide whether or not this transition can proceed. The current state of the run is referred to as the \"initial state\", and the state a run is attempting to transition into is the \"proposed state\". Together, the initial state transitioning into the proposed state is the intended transition that is governed by these orchestration rules. After using rules to enter a runtime context, the OrchestrationContext will contain a proposed state that has been governed by each rule, and at that point can validate the proposed state and commit it to the database. The validated state will be set on the context as context.validated_state, and rules will call the self.after_transition hook upon exiting the managed context.

Examples:

Create a rule:

>>> class BasicRule(BaseOrchestrationRule):\n>>>     # allowed initial state types\n>>>     FROM_STATES = [StateType.RUNNING]\n>>>     # allowed proposed state types\n>>>     TO_STATES = [StateType.COMPLETED, StateType.FAILED]\n>>>\n>>>     async def before_transition(initial_state, proposed_state, ctx):\n>>>         # side effects and proposed state mutation can happen here\n>>>         ...\n>>>\n>>>     async def after_transition(initial_state, validated_state, ctx):\n>>>         # operations on states that have been validated can happen here\n>>>         ...\n>>>\n>>>     async def cleanup(intitial_state, validated_state, ctx):\n>>>         # reverts side effects generated by `before_transition` if necessary\n>>>         ...\n

Use a rule:

>>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n>>> async with BasicRule(context, *intended_transition):\n>>>     # context.proposed_state has been governed by BasicRule\n>>>     ...\n

Use multiple rules:

>>> rules = [BasicRule, BasicRule]\n>>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n>>> async with contextlib.AsyncExitStack() as stack:\n>>>     for rule in rules:\n>>>         stack.enter_async_context(rule(context, *intended_transition))\n>>>\n>>>     # context.proposed_state has been governed by all rules\n>>>     ...\n

Attributes:

FROM_STATES (Iterable): list of valid initial state types this rule governs

TO_STATES (Iterable): list of valid proposed state types this rule governs

context: the orchestration context

from_state_type: the state type a run is currently in

to_state_type: the intended proposed state type prior to any orchestration

Parameters:

context (OrchestrationContext): a FlowOrchestrationContext or TaskOrchestrationContext that is passed between rules (required)

from_state_type (Optional[states.StateType]): the state type of the initial state of a run; if this state type is not contained in FROM_STATES, no hooks will fire (required)

to_state_type (Optional[states.StateType]): the state type of the proposed state before orchestration; if this state type is not contained in TO_STATES, no hooks will fire (required)

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
class BaseOrchestrationRule(contextlib.AbstractAsyncContextManager):\n\"\"\"\n    An abstract base class used to implement a discrete piece of orchestration logic.\n\n    An `OrchestrationRule` is a stateful context manager that directly governs a state\n    transition. Complex orchestration is achieved by nesting multiple rules.\n    Each rule runs against an `OrchestrationContext` that contains the transition\n    details; this context is then passed to subsequent rules. The context can be\n    modified by hooks that fire before and after a new state is validated and committed\n    to the database. These hooks will fire as long as the state transition is\n    considered \"valid\" and govern a transition by either modifying the proposed state\n    before it is validated or by producing a side-effect.\n\n    A state transition occurs whenever a flow- or task- run changes state, prompting\n    Prefect REST API to decide whether or not this transition can proceed. The current state of\n    the run is referred to as the \"initial state\", and the state a run is\n    attempting to transition into is the \"proposed state\". Together, the initial state\n    transitioning into the proposed state is the intended transition that is governed\n    by these orchestration rules. After using rules to enter a runtime context, the\n    `OrchestrationContext` will contain a proposed state that has been governed by\n    each rule, and at that point can validate the proposed state and commit it to\n    the database. The validated state will be set on the context as\n    `context.validated_state`, and rules will call the `self.after_transition` hook\n    upon exiting the managed context.\n\n    Examples:\n\n        Create a rule:\n\n        >>> class BasicRule(BaseOrchestrationRule):\n        >>>     # allowed initial state types\n        >>>     FROM_STATES = [StateType.RUNNING]\n        >>>     # allowed proposed state types\n        >>>     TO_STATES = [StateType.COMPLETED, StateType.FAILED]\n        >>>\n        >>>     async def before_transition(initial_state, proposed_state, ctx):\n        >>>         # side effects and proposed state mutation can happen here\n        >>>         ...\n        >>>\n        >>>     async def after_transition(initial_state, validated_state, ctx):\n        >>>         # operations on states that have been validated can happen here\n        >>>         ...\n        >>>\n        >>>     async def cleanup(intitial_state, validated_state, ctx):\n        >>>         # reverts side effects generated by `before_transition` if necessary\n        >>>         ...\n\n        Use a rule:\n\n        >>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n        >>> async with BasicRule(context, *intended_transition):\n        >>>     # context.proposed_state has been governed by BasicRule\n        >>>     ...\n\n        Use multiple rules:\n\n        >>> rules = [BasicRule, BasicRule]\n        >>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n        >>> async with contextlib.AsyncExitStack() as stack:\n        >>>     for rule in rules:\n        >>>         stack.enter_async_context(rule(context, *intended_transition))\n        >>>\n        >>>     # context.proposed_state has been governed by all rules\n        >>>     ...\n\n    Attributes:\n        FROM_STATES: list of valid initial state types this rule governs\n        TO_STATES: list of valid proposed state types this rule governs\n        context: the orchestration context\n        
from_state_type: the state type a run is currently in\n        to_state_type: the intended proposed state type prior to any orchestration\n\n    Args:\n        context: A `FlowOrchestrationContext` or `TaskOrchestrationContext` that is\n            passed between rules\n        from_state_type: The state type of the initial state of a run, if this\n            state type is not contained in `FROM_STATES`, no hooks will fire\n        to_state_type: The state type of the proposed state before orchestration, if\n            this state type is not contained in `TO_STATES`, no hooks will fire\n    \"\"\"\n\n    FROM_STATES: Iterable = []\n    TO_STATES: Iterable = []\n\n    def __init__(\n        self,\n        context: OrchestrationContext,\n        from_state_type: Optional[states.StateType],\n        to_state_type: Optional[states.StateType],\n    ):\n        self.context = context\n        self.from_state_type = from_state_type\n        self.to_state_type = to_state_type\n        self._invalid_on_entry = None\n\n    async def __aenter__(self) -> OrchestrationContext:\n\"\"\"\n        Enter an async runtime context governed by this rule.\n\n        The `with` statement will bind a governed `OrchestrationContext` to the target\n        specified by the `as` clause. If the transition proposed by the\n        `OrchestrationContext` is considered invalid on entry, entering this context\n        will do nothing. Otherwise, `self.before_transition` will fire.\n        \"\"\"\n\n        if await self.invalid():\n            pass\n        else:\n            try:\n                entry_context = self.context.entry_context()\n                await self.before_transition(*entry_context)\n                self.context.rule_signature.append(str(self.__class__))\n            except Exception as before_transition_error:\n                reason = (\n                    f\"Aborting orchestration due to error in {self.__class__!r}:\"\n                    f\" !{before_transition_error!r}\"\n                )\n                logger.exception(\n                    f\"Error running before-transition hook in rule {self.__class__!r}:\"\n                    f\" !{before_transition_error!r}\"\n                )\n\n                self.context.proposed_state = None\n                self.context.response_status = SetStateStatus.ABORT\n                self.context.response_details = StateAbortDetails(reason=reason)\n                self.context.orchestration_error = before_transition_error\n\n        return self.context\n\n    async def __aexit__(\n        self,\n        exc_type: Optional[Type[BaseException]],\n        exc_val: Optional[BaseException],\n        exc_tb: Optional[TracebackType],\n    ) -> None:\n\"\"\"\n        Exit the async runtime context governed by this rule.\n\n        One of three outcomes can happen upon exiting this rule's context depending on\n        the state of the rule. If the rule was found to be invalid on entry, nothing\n        happens. If the rule was valid on entry and continues to be valid on exit,\n        `self.after_transition` will fire. 
If the rule was valid on entry but invalid\n        on exit, the rule will \"fizzle\" and `self.cleanup` will fire in order to revert\n        any side-effects produced by `self.before_transition`.\n        \"\"\"\n\n        exit_context = self.context.exit_context()\n        if await self.invalid():\n            pass\n        elif await self.fizzled():\n            await self.cleanup(*exit_context)\n        else:\n            await self.after_transition(*exit_context)\n            self.context.finalization_signature.append(str(self.__class__))\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n\"\"\"\n        Implements a hook that can fire before a state is committed to the database.\n\n        This hook may produce side-effects or mutate the proposed state of a\n        transition using one of four methods: `self.reject_transition`,\n        `self.delay_transition`, `self.abort_transition`, and `self.rename_state`.\n\n        Note:\n            As currently implemented, the `before_transition` hook is not\n            perfectly isolated from mutating the transition. It is a standard instance\n            method that has access to `self`, and therefore `self.context`. This should\n            never be modified directly. Furthermore, `context.run` is an ORM model, and\n            mutating the run can also cause unintended writes to the database.\n\n        Args:\n            initial_state: The initial state of a transtion\n            proposed_state: The proposed state of a transition\n            context: A safe copy of the `OrchestrationContext`, with the exception of\n                `context.run`, mutating this context will have no effect on the broader\n                orchestration environment.\n\n        Returns:\n            None\n        \"\"\"\n\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n\"\"\"\n        Implements a hook that can fire after a state is committed to the database.\n\n        Args:\n            initial_state: The initial state of a transtion\n            validated_state: The governed state that has been committed to the database\n            context: A safe copy of the `OrchestrationContext`, with the exception of\n                `context.run`, mutating this context will have no effect on the broader\n                orchestration environment.\n\n        Returns:\n            None\n        \"\"\"\n\n    async def cleanup(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n\"\"\"\n        Implements a hook that can fire after a state is committed to the database.\n\n        The intended use of this method is to revert side-effects produced by\n        `self.before_transition` when the transition is found to be invalid on exit.\n        This allows multiple rules to be gracefully run in sequence, without logic that\n        keeps track of all other rules that might govern a transition.\n\n        Args:\n            initial_state: The initial state of a transtion\n            validated_state: The governed state that has been committed to the database\n            context: A safe copy of the `OrchestrationContext`, with the exception of\n      
          `context.run`, mutating this context will have no effect on the broader\n                orchestration environment.\n\n        Returns:\n            None\n        \"\"\"\n\n    async def invalid(self) -> bool:\n\"\"\"\n        Determines if a rule is invalid.\n\n        Invalid rules do nothing and no hooks fire upon entering or exiting a governed\n        context. Rules are invalid if the transition states types are not contained in\n        `self.FROM_STATES` and `self.TO_STATES`, or if the context is proposing\n        a transition that differs from the transition the rule was instantiated with.\n\n        Returns:\n            True if the rules in invalid, False otherwise.\n        \"\"\"\n        # invalid and fizzled states are mutually exclusive,\n        # `_invalid_on_entry` holds this statefulness\n        if self.from_state_type not in self.FROM_STATES:\n            self._invalid_on_entry = True\n        if self.to_state_type not in self.TO_STATES:\n            self._invalid_on_entry = True\n\n        if self._invalid_on_entry is None:\n            self._invalid_on_entry = await self.invalid_transition()\n        return self._invalid_on_entry\n\n    async def fizzled(self) -> bool:\n\"\"\"\n        Determines if a rule is fizzled and side-effects need to be reverted.\n\n        Rules are fizzled if the transitions were valid on entry (thus firing\n        `self.before_transition`) but are invalid upon exiting the governed context,\n        most likely caused by another rule mutating the transition.\n\n        Returns:\n            True if the rule is fizzled, False otherwise.\n        \"\"\"\n\n        if self._invalid_on_entry:\n            return False\n        return await self.invalid_transition()\n\n    async def invalid_transition(self) -> bool:\n\"\"\"\n        Determines if the transition proposed by the `OrchestrationContext` is invalid.\n\n        If the `OrchestrationContext` is attempting to manage a transition with this\n        rule that differs from the transition the rule was instantiated with, the\n        transition is considered to be invalid. Depending on the context, a rule with an\n        invalid transition is either \"invalid\" or \"fizzled\".\n\n        Returns:\n            True if the transition is invalid, False otherwise.\n        \"\"\"\n\n        initial_state_type = self.context.initial_state_type\n        proposed_state_type = self.context.proposed_state_type\n        return (self.from_state_type != initial_state_type) or (\n            self.to_state_type != proposed_state_type\n        )\n\n    async def reject_transition(self, state: Optional[states.State], reason: str):\n\"\"\"\n        Rejects a proposed transition before the transition is validated.\n\n        This method will reject a proposed transition, mutating the proposed state to\n        the provided `state`. A reason for rejecting the transition is also passed on\n        to the `OrchestrationContext`. Rules that reject the transition will not fizzle,\n        despite the proposed state type changing.\n\n        Args:\n            state: The new proposed state. 
If `None`, the current run state will be\n                returned in the result instead.\n            reason: The reason for rejecting the transition\n        \"\"\"\n\n        # don't run if the transition is already validated\n        if self.context.validated_state:\n            raise RuntimeError(\"The transition is already validated\")\n\n        # the current state will be used if a new one is not provided\n        if state is None:\n            if self.from_state_type is None:\n                raise OrchestrationError(\n                    \"The current run has no state; this transition cannot be \"\n                    \"rejected without providing a new state.\"\n                )\n            self.to_state_type = None\n            self.context.proposed_state = None\n        else:\n            # a rule that mutates state should not fizzle itself\n            self.to_state_type = state.type\n            self.context.proposed_state = state\n\n        self.context.response_status = SetStateStatus.REJECT\n        self.context.response_details = StateRejectDetails(reason=reason)\n\n    async def delay_transition(\n        self,\n        delay_seconds: int,\n        reason: str,\n    ):\n\"\"\"\n        Delays a proposed transition before the transition is validated.\n\n        This method will delay a proposed transition, setting the proposed state to\n        `None`, signaling to the `OrchestrationContext` that no state should be\n        written to the database. The number of seconds a transition should be delayed is\n        passed to the `OrchestrationContext`. A reason for delaying the transition is\n        also provided. Rules that delay the transition will not fizzle, despite the\n        proposed state type changing.\n\n        Args:\n            delay_seconds: The number of seconds the transition should be delayed\n            reason: The reason for delaying the transition\n        \"\"\"\n\n        # don't run if the transition is already validated\n        if self.context.validated_state:\n            raise RuntimeError(\"The transition is already validated\")\n\n        # a rule that mutates state should not fizzle itself\n        self.to_state_type = None\n        self.context.proposed_state = None\n        self.context.response_status = SetStateStatus.WAIT\n        self.context.response_details = StateWaitDetails(\n            delay_seconds=delay_seconds, reason=reason\n        )\n\n    async def abort_transition(self, reason: str):\n\"\"\"\n        Aborts a proposed transition before the transition is validated.\n\n        This method will abort a proposed transition, expecting no further action to\n        occur for this run. The proposed state is set to `None`, signaling to the\n        `OrchestrationContext` that no state should be written to the database. A\n        reason for aborting the transition is also provided. 
Rules that abort the\n        transition will not fizzle, despite the proposed state type changing.\n\n        Args:\n            reason: The reason for aborting the transition\n        \"\"\"\n\n        # don't run if the transition is already validated\n        if self.context.validated_state:\n            raise RuntimeError(\"The transition is already validated\")\n\n        # a rule that mutates state should not fizzle itself\n        self.to_state_type = None\n        self.context.proposed_state = None\n        self.context.response_status = SetStateStatus.ABORT\n        self.context.response_details = StateAbortDetails(reason=reason)\n\n    async def rename_state(self, state_name):\n\"\"\"\n        Sets the \"name\" attribute on a proposed state.\n\n        The name of a state is an annotation intended to provide rich, human-readable\n        context for how a run is progressing. This method only updates the name and not\n        the canonical state TYPE, and will not fizzle or invalidate any other rules\n        that might govern this state transition.\n        \"\"\"\n        if self.context.proposed_state is not None:\n            self.context.proposed_state.name = state_name\n\n    async def update_context_parameters(self, key, value):\n\"\"\"\n        Updates the \"parameters\" dictionary attribute with the specified key-value pair.\n\n        This mechanism streamlines the process of passing messages and information\n        between orchestration rules if necessary and is simpler and more ephemeral than\n        message-passing via the database or some other side-effect. This mechanism can\n        be used to break up large rules for ease of testing or comprehension, but note\n        that any rules coupled this way (or any other way) are no longer independent and\n        the order in which they appear in the orchestration policy priority will matter.\n        \"\"\"\n\n        self.context.parameters.update({key: value})\n
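Taken together, these hooks support small, composable rules. A minimal, hedged sketch of a custom rule built on this base class follows; the rule name, the state choices, the states.Failed constructor, and the context.run.run_count attribute are illustrative assumptions rather than part of the Prefect source.

# hypothetical rule, not part of the Prefect source\nfrom typing import Optional\n\nfrom prefect.server.orchestration.rules import (\n    BaseOrchestrationRule,\n    OrchestrationContext,\n)\nfrom prefect.server.schemas import states\n\n\nclass CapRetries(BaseOrchestrationRule):\n    # only consider transitions that propose a retry of a failed run\n    FROM_STATES = [states.StateType.FAILED]\n    TO_STATES = [states.StateType.SCHEDULED]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        # assumes the run model exposes a run_count attribute\n        if context.run.run_count > 3:\n            await self.reject_transition(\n                state=states.Failed(message="Retry limit exceeded"),\n                reason="Retry limit exceeded",\n            )\n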
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.before_transition","title":"before_transition async","text":"

Implements a hook that can fire before a state is committed to the database.

This hook may produce side-effects or mutate the proposed state of a transition using one of four methods: self.reject_transition, self.delay_transition, self.abort_transition, and self.rename_state.

Note

As currently implemented, the before_transition hook is not perfectly isolated from mutating the transition. It is a standard instance method that has access to self, and therefore self.context. This should never be modified directly. Furthermore, context.run is an ORM model, and mutating the run can also cause unintended writes to the database.

Parameters:

Name Type Description Default initial_state Optional[states.State]

The initial state of a transition

required proposed_state Optional[states.State]

The proposed state of a transition

required context OrchestrationContext

A safe copy of the OrchestrationContext; with the exception of context.run, mutating this context will have no effect on the broader orchestration environment.

required

Returns:

Type Description None

None

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def before_transition(\n    self,\n    initial_state: Optional[states.State],\n    proposed_state: Optional[states.State],\n    context: OrchestrationContext,\n) -> None:\n\"\"\"\n    Implements a hook that can fire before a state is committed to the database.\n\n    This hook may produce side-effects or mutate the proposed state of a\n    transition using one of four methods: `self.reject_transition`,\n    `self.delay_transition`, `self.abort_transition`, and `self.rename_state`.\n\n    Note:\n        As currently implemented, the `before_transition` hook is not\n        perfectly isolated from mutating the transition. It is a standard instance\n        method that has access to `self`, and therefore `self.context`. This should\n        never be modified directly. Furthermore, `context.run` is an ORM model, and\n        mutating the run can also cause unintended writes to the database.\n\n    Args:\n        initial_state: The initial state of a transtion\n        proposed_state: The proposed state of a transition\n        context: A safe copy of the `OrchestrationContext`, with the exception of\n            `context.run`, mutating this context will have no effect on the broader\n            orchestration environment.\n\n    Returns:\n        None\n    \"\"\"\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.after_transition","title":"after_transition async","text":"

Implements a hook that can fire after a state is committed to the database.

Parameters:

Name Type Description Default initial_state Optional[states.State]

The initial state of a transition

required validated_state Optional[states.State]

The governed state that has been committed to the database

required context OrchestrationContext

A safe copy of the OrchestrationContext; with the exception of context.run, mutating this context will have no effect on the broader orchestration environment.

required

Returns:

Type Description None

None

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def after_transition(\n    self,\n    initial_state: Optional[states.State],\n    validated_state: Optional[states.State],\n    context: OrchestrationContext,\n) -> None:\n\"\"\"\n    Implements a hook that can fire after a state is committed to the database.\n\n    Args:\n        initial_state: The initial state of a transtion\n        validated_state: The governed state that has been committed to the database\n        context: A safe copy of the `OrchestrationContext`, with the exception of\n            `context.run`, mutating this context will have no effect on the broader\n            orchestration environment.\n\n    Returns:\n        None\n    \"\"\"\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.cleanup","title":"cleanup async","text":"

Implements a hook that can fire after a state is committed to the database.

The intended use of this method is to revert side-effects produced by self.before_transition when the transition is found to be invalid on exit. This allows multiple rules to be gracefully run in sequence, without logic that keeps track of all other rules that might govern a transition.

Parameters:

Name Type Description Default initial_state Optional[states.State]

The initial state of a transition

required validated_state Optional[states.State]

The governed state that has been committed to the database

required context OrchestrationContext

A safe copy of the OrchestrationContext; with the exception of context.run, mutating this context will have no effect on the broader orchestration environment.

required

Returns:

Type Description None

None

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def cleanup(\n    self,\n    initial_state: Optional[states.State],\n    validated_state: Optional[states.State],\n    context: OrchestrationContext,\n) -> None:\n\"\"\"\n    Implements a hook that can fire after a state is committed to the database.\n\n    The intended use of this method is to revert side-effects produced by\n    `self.before_transition` when the transition is found to be invalid on exit.\n    This allows multiple rules to be gracefully run in sequence, without logic that\n    keeps track of all other rules that might govern a transition.\n\n    Args:\n        initial_state: The initial state of a transtion\n        validated_state: The governed state that has been committed to the database\n        context: A safe copy of the `OrchestrationContext`, with the exception of\n            `context.run`, mutating this context will have no effect on the broader\n            orchestration environment.\n\n    Returns:\n        None\n    \"\"\"\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.invalid","title":"invalid async","text":"

Determines if a rule is invalid.

Invalid rules do nothing and no hooks fire upon entering or exiting a governed context. Rules are invalid if the transition's state types are not contained in self.FROM_STATES and self.TO_STATES, or if the context is proposing a transition that differs from the transition the rule was instantiated with.

Returns:

Type Description bool

True if the rule is invalid, False otherwise.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def invalid(self) -> bool:\n\"\"\"\n    Determines if a rule is invalid.\n\n    Invalid rules do nothing and no hooks fire upon entering or exiting a governed\n    context. Rules are invalid if the transition's state types are not contained in\n    `self.FROM_STATES` and `self.TO_STATES`, or if the context is proposing\n    a transition that differs from the transition the rule was instantiated with.\n\n    Returns:\n        True if the rule is invalid, False otherwise.\n    \"\"\"\n    # invalid and fizzled states are mutually exclusive,\n    # `_invalid_on_entry` holds this statefulness\n    if self.from_state_type not in self.FROM_STATES:\n        self._invalid_on_entry = True\n    if self.to_state_type not in self.TO_STATES:\n        self._invalid_on_entry = True\n\n    if self._invalid_on_entry is None:\n        self._invalid_on_entry = await self.invalid_transition()\n    return self._invalid_on_entry\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.fizzled","title":"fizzled async","text":"

Determines if a rule is fizzled and side-effects need to be reverted.

Rules are fizzled if the transitions were valid on entry (thus firing self.before_transition) but are invalid upon exiting the governed context, most likely caused by another rule mutating the transition.

Returns:

Type Description bool

True if the rule is fizzled, False otherwise.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def fizzled(self) -> bool:\n\"\"\"\n    Determines if a rule is fizzled and side-effects need to be reverted.\n\n    Rules are fizzled if the transitions were valid on entry (thus firing\n    `self.before_transition`) but are invalid upon exiting the governed context,\n    most likely caused by another rule mutating the transition.\n\n    Returns:\n        True if the rule is fizzled, False otherwise.\n    \"\"\"\n\n    if self._invalid_on_entry:\n        return False\n    return await self.invalid_transition()\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.invalid_transition","title":"invalid_transition async","text":"

Determines if the transition proposed by the OrchestrationContext is invalid.

If the OrchestrationContext is attempting to manage a transition with this rule that differs from the transition the rule was instantiated with, the transition is considered to be invalid. Depending on the context, a rule with an invalid transition is either \"invalid\" or \"fizzled\".

Returns:

Type Description bool

True if the transition is invalid, False otherwise.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def invalid_transition(self) -> bool:\n\"\"\"\n    Determines if the transition proposed by the `OrchestrationContext` is invalid.\n\n    If the `OrchestrationContext` is attempting to manage a transition with this\n    rule that differs from the transition the rule was instantiated with, the\n    transition is considered to be invalid. Depending on the context, a rule with an\n    invalid transition is either \"invalid\" or \"fizzled\".\n\n    Returns:\n        True if the transition is invalid, False otherwise.\n    \"\"\"\n\n    initial_state_type = self.context.initial_state_type\n    proposed_state_type = self.context.proposed_state_type\n    return (self.from_state_type != initial_state_type) or (\n        self.to_state_type != proposed_state_type\n    )\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.reject_transition","title":"reject_transition async","text":"

Rejects a proposed transition before the transition is validated.

This method will reject a proposed transition, mutating the proposed state to the provided state. A reason for rejecting the transition is also passed on to the OrchestrationContext. Rules that reject the transition will not fizzle, despite the proposed state type changing.

Parameters:

Name Type Description Default state Optional[states.State]

The new proposed state. If None, the current run state will be returned in the result instead.

required reason str

The reason for rejecting the transition

required Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def reject_transition(self, state: Optional[states.State], reason: str):\n\"\"\"\n    Rejects a proposed transition before the transition is validated.\n\n    This method will reject a proposed transition, mutating the proposed state to\n    the provided `state`. A reason for rejecting the transition is also passed on\n    to the `OrchestrationContext`. Rules that reject the transition will not fizzle,\n    despite the proposed state type changing.\n\n    Args:\n        state: The new proposed state. If `None`, the current run state will be\n            returned in the result instead.\n        reason: The reason for rejecting the transition\n    \"\"\"\n\n    # don't run if the transition is already validated\n    if self.context.validated_state:\n        raise RuntimeError(\"The transition is already validated\")\n\n    # the current state will be used if a new one is not provided\n    if state is None:\n        if self.from_state_type is None:\n            raise OrchestrationError(\n                \"The current run has no state; this transition cannot be \"\n                \"rejected without providing a new state.\"\n            )\n        self.to_state_type = None\n        self.context.proposed_state = None\n    else:\n        # a rule that mutates state should not fizzle itself\n        self.to_state_type = state.type\n        self.context.proposed_state = state\n\n    self.context.response_status = SetStateStatus.REJECT\n    self.context.response_details = StateRejectDetails(reason=reason)\n
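A hedged usage sketch of reject_transition inside a rule's before_transition hook; the class name, state choices, and the states.Pending constructor are illustrative, and the imports are assumed to match the CapRetries sketch above.

# hypothetical rule; imports as in the CapRetries sketch above\nclass HoldAtPending(BaseOrchestrationRule):\n    FROM_STATES = [states.StateType.PENDING]\n    TO_STATES = [states.StateType.RUNNING]\n\n    async def before_transition(self, initial_state, proposed_state, context):\n        # swap in a new proposed state and record why the original was rejected\n        await self.reject_transition(\n            state=states.Pending(message="Waiting for a concurrency slot"),\n            reason="A concurrency limit was reached",\n        )\n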
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.delay_transition","title":"delay_transition async","text":"

Delays a proposed transition before the transition is validated.

This method will delay a proposed transition, setting the proposed state to None, signaling to the OrchestrationContext that no state should be written to the database. The number of seconds a transition should be delayed is passed to the OrchestrationContext. A reason for delaying the transition is also provided. Rules that delay the transition will not fizzle, despite the proposed state type changing.

Parameters:

Name Type Description Default delay_seconds int

The number of seconds the transition should be delayed

required reason str

The reason for delaying the transition

required Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def delay_transition(\n    self,\n    delay_seconds: int,\n    reason: str,\n):\n\"\"\"\n    Delays a proposed transition before the transition is validated.\n\n    This method will delay a proposed transition, setting the proposed state to\n    `None`, signaling to the `OrchestrationContext` that no state should be\n    written to the database. The number of seconds a transition should be delayed is\n    passed to the `OrchestrationContext`. A reason for delaying the transition is\n    also provided. Rules that delay the transition will not fizzle, despite the\n    proposed state type changing.\n\n    Args:\n        delay_seconds: The number of seconds the transition should be delayed\n        reason: The reason for delaying the transition\n    \"\"\"\n\n    # don't run if the transition is already validated\n    if self.context.validated_state:\n        raise RuntimeError(\"The transition is already validated\")\n\n    # a rule that mutates state should not fizzle itself\n    self.to_state_type = None\n    self.context.proposed_state = None\n    self.context.response_status = SetStateStatus.WAIT\n    self.context.response_details = StateWaitDetails(\n        delay_seconds=delay_seconds, reason=reason\n    )\n
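A hedged usage sketch of delay_transition; the class name, state choices, and the thirty-second delay are illustrative, and the imports are assumed to match the CapRetries sketch above.

# hypothetical rule; imports as in the CapRetries sketch above\nclass DelayWhenBusy(BaseOrchestrationRule):\n    FROM_STATES = [states.StateType.PENDING]\n    TO_STATES = [states.StateType.RUNNING]\n\n    async def before_transition(self, initial_state, proposed_state, context):\n        # ask the client to wait and re-propose this transition in 30 seconds\n        await self.delay_transition(\n            delay_seconds=30, reason="All concurrency slots are in use"\n        )\n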
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.abort_transition","title":"abort_transition async","text":"

Aborts a proposed transition before the transition is validated.

This method will abort a proposed transition, expecting no further action to occur for this run. The proposed state is set to None, signaling to the OrchestrationContext that no state should be written to the database. A reason for aborting the transition is also provided. Rules that abort the transition will not fizzle, despite the proposed state type changing.

Parameters:

Name Type Description Default reason str

The reason for aborting the transition

required Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def abort_transition(self, reason: str):\n\"\"\"\n    Aborts a proposed transition before the transition is validated.\n\n    This method will abort a proposed transition, expecting no further action to\n    occur for this run. The proposed state is set to `None`, signaling to the\n    `OrchestrationContext` that no state should be written to the database. A\n    reason for aborting the transition is also provided. Rules that abort the\n    transition will not fizzle, despite the proposed state type changing.\n\n    Args:\n        reason: The reason for aborting the transition\n    \"\"\"\n\n    # don't run if the transition is already validated\n    if self.context.validated_state:\n        raise RuntimeError(\"The transition is already validated\")\n\n    # a rule that mutates state should not fizzle itself\n    self.to_state_type = None\n    self.context.proposed_state = None\n    self.context.response_status = SetStateStatus.ABORT\n    self.context.response_details = StateAbortDetails(reason=reason)\n
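A hedged usage sketch of abort_transition; the class name and state choices are illustrative, and the imports are assumed to match the CapRetries sketch above.

# hypothetical rule; imports as in the CapRetries sketch above\nclass AbortFinishedRuns(BaseOrchestrationRule):\n    FROM_STATES = [states.StateType.COMPLETED]\n    TO_STATES = [states.StateType.RUNNING]\n\n    async def before_transition(self, initial_state, proposed_state, context):\n        # stop orchestrating this transition; nothing will be written to the database\n        await self.abort_transition(reason="This run has already finished")\n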
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.rename_state","title":"rename_state async","text":"

Sets the \"name\" attribute on a proposed state.

The name of a state is an annotation intended to provide rich, human-readable context for how a run is progressing. This method only updates the name and not the canonical state TYPE, and will not fizzle or invalidate any other rules that might govern this state transition.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def rename_state(self, state_name):\n\"\"\"\n    Sets the \"name\" attribute on a proposed state.\n\n    The name of a state is an annotation intended to provide rich, human-readable\n    context for how a run is progressing. This method only updates the name and not\n    the canonical state TYPE, and will not fizzle or invalidate any other rules\n    that might govern this state transition.\n    \"\"\"\n    if self.context.proposed_state is not None:\n        self.context.proposed_state.name = state_name\n
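A hedged usage sketch of rename_state; the class name, state choices, and the AwaitingRetry label are illustrative, and the imports are assumed to match the CapRetries sketch above.

# hypothetical rule; imports as in the CapRetries sketch above\nclass LabelRetries(BaseOrchestrationRule):\n    FROM_STATES = [states.StateType.FAILED]\n    TO_STATES = [states.StateType.SCHEDULED]\n\n    async def before_transition(self, initial_state, proposed_state, context):\n        # annotate the proposed state without changing its canonical type\n        await self.rename_state("AwaitingRetry")\n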
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.update_context_parameters","title":"update_context_parameters async","text":"

Updates the \"parameters\" dictionary attribute with the specified key-value pair.

This mechanism streamlines the process of passing messages and information between orchestration rules when necessary, and it is simpler and more ephemeral than message-passing via the database or some other side-effect. It can be used to break up large rules for ease of testing or comprehension, but note that any rules coupled this way (or any other way) are no longer independent, and the order in which they appear in the orchestration policy priority will matter.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def update_context_parameters(self, key, value):\n\"\"\"\n    Updates the \"parameters\" dictionary attribute with the specified key-value pair.\n\n    This mechanism streamlines the process of passing messages and information\n    between orchestration rules if necessary and is simpler and more ephemeral than\n    message-passing via the database or some other side-effect. This mechanism can\n    be used to break up large rules for ease of testing or comprehension, but note\n    that any rules coupled this way (or any other way) are no longer independent and\n    the order in which they appear in the orchestration policy priority will matter.\n    \"\"\"\n\n    self.context.parameters.update({key: value})\n
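A hedged usage sketch of update_context_parameters; the class name, state choices, and parameter key are illustrative, and the imports are assumed to match the CapRetries sketch above.

# hypothetical rule; imports as in the CapRetries sketch above\nclass RecordProposedName(BaseOrchestrationRule):\n    FROM_STATES = [states.StateType.PENDING]\n    TO_STATES = [states.StateType.RUNNING]\n\n    async def before_transition(self, initial_state, proposed_state, context):\n        # stash a value for a later rule in the same policy to read\n        name = proposed_state.name if proposed_state else None\n        await self.update_context_parameters("proposed-name", name)\n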
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform","title":"BaseUniversalTransform","text":"

Bases: contextlib.AbstractAsyncContextManager

An abstract base class used to implement privileged bookkeeping logic.

Warning

In almost all cases, use the BaseOrchestrationRule base class instead.

Unlike the orchestration rules implemented with the BaseOrchestrationRule ABC, universal transforms are not stateful and fire their before- and after-transition hooks on every state transition unless the proposed state is None, indicating that no state should be written to the database. Because there are no guardrails in place to prevent directly mutating state or other parts of the orchestration context, universal transforms should be used with care.

Attributes:

Name Type Description FROM_STATES Iterable

for compatibility with BaseOrchestrationPolicy

TO_STATES Iterable

for compatibility with BaseOrchestrationPolicy

context

the orchestration context

from_state_type

the state type a run is currently in

to_state_type

the intended proposed state type prior to any orchestration

Parameters:

Name Type Description Default context OrchestrationContext

A FlowOrchestrationContext or TaskOrchestrationContext that is passed between transforms

required Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
class BaseUniversalTransform(contextlib.AbstractAsyncContextManager):\n\"\"\"\n    An abstract base class used to implement privileged bookkeeping logic.\n\n    Warning:\n        In almost all cases, use the `BaseOrchestrationRule` base class instead.\n\n    Beyond the orchestration rules implemented with the `BaseOrchestrationRule` ABC,\n    Universal transforms are not stateful, and fire their before- and after- transition\n    hooks on every state transition unless the proposed state is `None`, indicating that\n    no state should be written to the database. Because there are no guardrails in place\n    to prevent directly mutating state or other parts of the orchestration context,\n    universal transforms should only be used with care.\n\n    Attributes:\n        FROM_STATES: for compatibility with `BaseOrchestrationPolicy`\n        TO_STATES: for compatibility with `BaseOrchestrationPolicy`\n        context: the orchestration context\n        from_state_type: the state type a run is currently in\n        to_state_type: the intended proposed state type prior to any orchestration\n\n    Args:\n        context: A `FlowOrchestrationContext` or `TaskOrchestrationContext` that is\n            passed between transforms\n    \"\"\"\n\n    # `BaseUniversalTransform` will always fire on non-null transitions\n    FROM_STATES: Iterable = ALL_ORCHESTRATION_STATES\n    TO_STATES: Iterable = ALL_ORCHESTRATION_STATES\n\n    def __init__(\n        self,\n        context: OrchestrationContext,\n        from_state_type: Optional[states.StateType],\n        to_state_type: Optional[states.StateType],\n    ):\n        self.context = context\n        self.from_state_type = from_state_type\n        self.to_state_type = to_state_type\n\n    async def __aenter__(self):\n\"\"\"\n        Enter an async runtime context governed by this transform.\n\n        The `with` statement will bind a governed `OrchestrationContext` to the target\n        specified by the `as` clause. If the transition proposed by the\n        `OrchestrationContext` has been nullified on entry and `context.proposed_state`\n        is `None`, entering this context will do nothing. Otherwise\n        `self.before_transition` will fire.\n        \"\"\"\n\n        await self.before_transition(self.context)\n        self.context.rule_signature.append(str(self.__class__))\n        return self.context\n\n    async def __aexit__(\n        self,\n        exc_type: Optional[Type[BaseException]],\n        exc_val: Optional[BaseException],\n        exc_tb: Optional[TracebackType],\n    ) -> None:\n\"\"\"\n        Exit the async runtime context governed by this transform.\n\n        If the transition has been nullified or errorred upon exiting this transforms's context,\n        nothing happens. 
Otherwise, `self.after_transition` will fire on every non-null\n        proposed state.\n        \"\"\"\n\n        if not self.exception_in_transition():\n            await self.after_transition(self.context)\n            self.context.finalization_signature.append(str(self.__class__))\n\n    async def before_transition(self, context) -> None:\n\"\"\"\n        Implements a hook that fires before a state is committed to the database.\n\n        Args:\n            context: the `OrchestrationContext` that contains transition details\n\n        Returns:\n            None\n        \"\"\"\n\n    async def after_transition(self, context) -> None:\n\"\"\"\n        Implements a hook that can fire after a state is committed to the database.\n\n        Args:\n            context: the `OrchestrationContext` that contains transition details\n\n        Returns:\n            None\n        \"\"\"\n\n    def nullified_transition(self) -> bool:\n\"\"\"\n        Determines if the transition has been nullified.\n\n        Transitions are nullified if the proposed state is `None`, indicating that\n        nothing should be written to the database.\n\n        Returns:\n            True if the transition is nullified, False otherwise.\n        \"\"\"\n\n        return self.context.proposed_state is None\n\n    def exception_in_transition(self) -> bool:\n\"\"\"\n        Determines if the transition has encountered an exception.\n\n        Returns:\n            True if the transition is encountered an exception, False otherwise.\n        \"\"\"\n\n        return self.context.orchestration_error is not None\n
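A hedged sketch of a bookkeeping transform built on this base class; the transform name and the parameters it records are illustrative assumptions, and pendulum is used only to produce timestamps.

# hypothetical universal transform, not part of the Prefect source\nimport pendulum\n\nfrom prefect.server.orchestration.rules import BaseUniversalTransform\n\n\nclass StampTransitionTimes(BaseUniversalTransform):\n    async def before_transition(self, context) -> None:\n        # universal transforms may touch the context directly, so guard nullified transitions\n        if self.nullified_transition():\n            return\n        context.parameters["proposed-at"] = pendulum.now("UTC")\n\n    async def after_transition(self, context) -> None:\n        if self.nullified_transition():\n            return\n        context.parameters["validated-at"] = pendulum.now("UTC")\n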
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.before_transition","title":"before_transition async","text":"

Implements a hook that fires before a state is committed to the database.

Parameters:

Name Type Description Default context

the OrchestrationContext that contains transition details

required

Returns:

Type Description None

None

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def before_transition(self, context) -> None:\n\"\"\"\n    Implements a hook that fires before a state is committed to the database.\n\n    Args:\n        context: the `OrchestrationContext` that contains transition details\n\n    Returns:\n        None\n    \"\"\"\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.after_transition","title":"after_transition async","text":"

Implements a hook that can fire after a state is committed to the database.

Parameters:

Name Type Description Default context

the OrchestrationContext that contains transition details

required

Returns:

Type Description None

None

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def after_transition(self, context) -> None:\n\"\"\"\n    Implements a hook that can fire after a state is committed to the database.\n\n    Args:\n        context: the `OrchestrationContext` that contains transition details\n\n    Returns:\n        None\n    \"\"\"\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.nullified_transition","title":"nullified_transition","text":"

Determines if the transition has been nullified.

Transitions are nullified if the proposed state is None, indicating that nothing should be written to the database.

Returns:

Type Description bool

True if the transition is nullified, False otherwise.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
def nullified_transition(self) -> bool:\n\"\"\"\n    Determines if the transition has been nullified.\n\n    Transitions are nullified if the proposed state is `None`, indicating that\n    nothing should be written to the database.\n\n    Returns:\n        True if the transition is nullified, False otherwise.\n    \"\"\"\n\n    return self.context.proposed_state is None\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.exception_in_transition","title":"exception_in_transition","text":"

Determines if the transition has encountered an exception.

Returns:

Type Description bool

True if the transition encountered an exception, False otherwise.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
def exception_in_transition(self) -> bool:\n\"\"\"\n    Determines if the transition has encountered an exception.\n\n    Returns:\n        True if the transition encountered an exception, False otherwise.\n    \"\"\"\n\n    return self.context.orchestration_error is not None\n
"},{"location":"api-ref/server/schemas/actions/","title":"server.schemas.actions","text":""},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions","title":"prefect.server.schemas.actions","text":"

Reduced schemas for accepting API actions.

"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactCreate","title":"ArtifactCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create an artifact.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass ArtifactCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create an artifact.\"\"\"\n\n    key: Optional[str] = FieldFrom(schemas.core.Artifact)\n    type: Optional[str] = FieldFrom(schemas.core.Artifact)\n    description: Optional[str] = FieldFrom(schemas.core.Artifact)\n    data: Optional[Union[Dict[str, Any], Any]] = FieldFrom(schemas.core.Artifact)\n    metadata_: Optional[Dict[str, str]] = FieldFrom(schemas.core.Artifact)\n    flow_run_id: Optional[UUID] = FieldFrom(schemas.core.Artifact)\n    task_run_id: Optional[UUID] = FieldFrom(schemas.core.Artifact)\n\n    _validate_artifact_format = validator(\"key\", allow_reuse=True)(\n        validate_artifact_key\n    )\n
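A hedged sketch of constructing this action model; the field values and the randomly generated flow run id are illustrative, and the key is assumed to satisfy the artifact key validator.

# hypothetical request body for creating a markdown artifact\nfrom uuid import uuid4\n\nfrom prefect.server.schemas.actions import ArtifactCreate\n\nartifact = ArtifactCreate(\n    key="daily-report",\n    type="markdown",\n    description="Summary of the latest run",\n    data="# All tasks completed",\n    flow_run_id=uuid4(),\n)\n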
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
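A hedged usage sketch of the subclass helper; the new model name and field selection are illustrative, and it assumes schemas.core.Artifact inherits this classmethod from the server schema base class.

# hypothetical trimmed-down model built from an existing schema\nfrom prefect.server import schemas\n\nSlimArtifact = schemas.core.Artifact.subclass(\n    name="SlimArtifact",\n    include_fields=["key", "type", "data"],\n)\n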
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactUpdate","title":"ArtifactUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update an artifact.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass ArtifactUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update an artifact.\"\"\"\n\n    data: Optional[Union[Dict[str, Any], Any]] = FieldFrom(schemas.core.Artifact)\n    description: Optional[str] = FieldFrom(schemas.core.Artifact)\n    metadata_: Optional[Dict[str, str]] = FieldFrom(schemas.core.Artifact)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentCreate","title":"BlockDocumentCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a block document.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass BlockDocumentCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a block document.\"\"\"\n\n    name: Optional[str] = FieldFrom(schemas.core.BlockDocument)\n    data: dict = FieldFrom(schemas.core.BlockDocument)\n    block_schema_id: UUID = FieldFrom(schemas.core.BlockDocument)\n    block_type_id: UUID = FieldFrom(schemas.core.BlockDocument)\n    is_anonymous: bool = FieldFrom(schemas.core.BlockDocument)\n\n    _validate_name_format = validator(\"name\", allow_reuse=True)(\n        validate_block_document_name\n    )\n\n    @root_validator\n    def validate_name_is_present_if_not_anonymous(cls, values):\n        # TODO: We should find an elegant way to reuse this logic from the origin model\n        if not values.get(\"is_anonymous\") and not values.get(\"name\"):\n            raise ValueError(\"Names must be provided for block documents.\")\n        return values\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentReferenceCreate","title":"BlockDocumentReferenceCreate","text":"

Bases: ActionBaseModel

Data used to create a block document reference.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass BlockDocumentReferenceCreate(ActionBaseModel):\n\"\"\"Data used to create block document reference.\"\"\"\n\n    id: UUID = FieldFrom(schemas.core.BlockDocumentReference)\n    parent_block_document_id: UUID = FieldFrom(schemas.core.BlockDocumentReference)\n    reference_block_document_id: UUID = FieldFrom(schemas.core.BlockDocumentReference)\n    name: str = FieldFrom(schemas.core.BlockDocumentReference)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentReferenceCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentReferenceCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentUpdate","title":"BlockDocumentUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a block document.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass BlockDocumentUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a block document.\"\"\"\n\n    block_schema_id: Optional[UUID] = Field(\n        default=None, description=\"A block schema ID\"\n    )\n    data: dict = FieldFrom(schemas.core.BlockDocument)\n    merge_existing_data: bool = True\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockSchemaCreate","title":"BlockSchemaCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a block schema.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass BlockSchemaCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a block schema.\"\"\"\n\n    fields: dict = FieldFrom(schemas.core.BlockSchema)\n    block_type_id: Optional[UUID] = FieldFrom(schemas.core.BlockSchema)\n    capabilities: List[str] = FieldFrom(schemas.core.BlockSchema)\n    version: str = FieldFrom(schemas.core.BlockSchema)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockSchemaCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockSchemaCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeCreate","title":"BlockTypeCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a block type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass BlockTypeCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a block type.\"\"\"\n\n    name: str = FieldFrom(schemas.core.BlockType)\n    slug: str = FieldFrom(schemas.core.BlockType)\n    logo_url: Optional[schemas.core.HttpUrl] = FieldFrom(schemas.core.BlockType)\n    documentation_url: Optional[schemas.core.HttpUrl] = FieldFrom(\n        schemas.core.BlockType\n    )\n    description: Optional[str] = FieldFrom(schemas.core.BlockType)\n    code_example: Optional[str] = FieldFrom(schemas.core.BlockType)\n\n    # validators\n    _validate_slug_format = validator(\"slug\", allow_reuse=True)(\n        validate_block_type_slug\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeUpdate","title":"BlockTypeUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a block type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass BlockTypeUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a block type.\"\"\"\n\n    logo_url: Optional[schemas.core.HttpUrl] = FieldFrom(schemas.core.BlockType)\n    documentation_url: Optional[schemas.core.HttpUrl] = FieldFrom(\n        schemas.core.BlockType\n    )\n    description: Optional[str] = FieldFrom(schemas.core.BlockType)\n    code_example: Optional[str] = FieldFrom(schemas.core.BlockType)\n\n    @classmethod\n    def updatable_fields(cls) -> set:\n        return get_class_fields_only(cls)\n
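A small sketch showing the intent of updatable_fields and a partial update; the values are illustrative.

```python
from prefect.server.schemas.actions import BlockTypeUpdate

# Fields declared directly on BlockTypeUpdate, i.e. the ones a client may change.
print(BlockTypeUpdate.updatable_fields())

update = BlockTypeUpdate(description="Updated description.")
```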
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitCreate","title":"ConcurrencyLimitCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a concurrency limit.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass ConcurrencyLimitCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a concurrency limit.\"\"\"\n\n    tag: str = FieldFrom(schemas.core.ConcurrencyLimit)\n    concurrency_limit: int = FieldFrom(schemas.core.ConcurrencyLimit)\n
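A minimal construction example; the tag and limit values are illustrative.

```python
from prefect.server.schemas.actions import ConcurrencyLimitCreate

# Allow at most 10 concurrent task runs across tasks tagged "database".
limit = ConcurrencyLimitCreate(tag="database", concurrency_limit=10)
print(limit.json())
```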
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate","title":"DeploymentCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a deployment.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@experimental_field(\n    \"work_pool_name\",\n    group=\"work_pools\",\n    when=lambda x: x is not None,\n)\n@copy_model_fields\nclass DeploymentCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a deployment.\"\"\"\n\n    @root_validator(pre=True)\n    def remove_old_fields(cls, values):\n        # 2.7.7 removed worker_pool_queue_id in lieu of worker_pool_name and\n        # worker_pool_queue_name. Those fields were later renamed to work_pool_name\n        # and work_queue_name. This validator removes old fields provided\n        # by older clients to avoid 422 errors.\n        values_copy = copy(values)\n        worker_pool_queue_id = values_copy.pop(\"worker_pool_queue_id\", None)\n        worker_pool_name = values_copy.pop(\"worker_pool_name\", None)\n        worker_pool_queue_name = values_copy.pop(\"worker_pool_queue_name\", None)\n        work_pool_queue_name = values_copy.pop(\"work_pool_queue_name\", None)\n        if worker_pool_queue_id:\n            warnings.warn(\n                (\n                    \"`worker_pool_queue_id` is no longer supported for creating \"\n                    \"deployments. Please use `work_pool_name` and \"\n                    \"`work_queue_name` instead.\"\n                ),\n                UserWarning,\n            )\n        if worker_pool_name or worker_pool_queue_name or work_pool_queue_name:\n            warnings.warn(\n                (\n                    \"`worker_pool_name`, `worker_pool_queue_name`, and \"\n                    \"`work_pool_name` are\"\n                    \"no longer supported for creating \"\n                    \"deployments. Please use `work_pool_name` and \"\n                    \"`work_queue_name` instead.\"\n                ),\n                UserWarning,\n            )\n        return values_copy\n\n    name: str = FieldFrom(schemas.core.Deployment)\n    flow_id: UUID = FieldFrom(schemas.core.Deployment)\n    is_schedule_active: Optional[bool] = FieldFrom(schemas.core.Deployment)\n    parameters: Dict[str, Any] = FieldFrom(schemas.core.Deployment)\n    tags: List[str] = FieldFrom(schemas.core.Deployment)\n    pull_steps: Optional[List[dict]] = FieldFrom(schemas.core.Deployment)\n\n    manifest_path: Optional[str] = FieldFrom(schemas.core.Deployment)\n    work_queue_name: Optional[str] = FieldFrom(schemas.core.Deployment)\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the deployment's work pool.\",\n        example=\"my-work-pool\",\n    )\n    storage_document_id: Optional[UUID] = FieldFrom(schemas.core.Deployment)\n    infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.Deployment)\n    schedule: Optional[schemas.schedules.SCHEDULE_TYPES] = FieldFrom(\n        schemas.core.Deployment\n    )\n    description: Optional[str] = FieldFrom(schemas.core.Deployment)\n    parameter_openapi_schema: Optional[Dict[str, Any]] = FieldFrom(\n        schemas.core.Deployment\n    )\n    path: Optional[str] = FieldFrom(schemas.core.Deployment)\n    version: Optional[str] = FieldFrom(schemas.core.Deployment)\n    entrypoint: Optional[str] = FieldFrom(schemas.core.Deployment)\n    infra_overrides: Optional[Dict[str, Any]] = FieldFrom(schemas.core.Deployment)\n\n    def check_valid_configuration(self, base_job_template: dict):\n\"\"\"Check that the combination of base_job_template defaults\n        and infra_overrides conforms to the specified schema.\n        \"\"\"\n        variables_schema = 
deepcopy(base_job_template.get(\"variables\"))\n\n        if variables_schema is not None:\n            # jsonschema considers required fields, even if that field has a default,\n            # to still be required. To get around this we remove the fields from\n            # required if there is a default present.\n            required = variables_schema.get(\"required\")\n            properties = variables_schema.get(\"properties\")\n            if required is not None and properties is not None:\n                for k, v in properties.items():\n                    if \"default\" in v and k in required:\n                        required.remove(k)\n\n            jsonschema.validate(self.infra_overrides, variables_schema)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate.check_valid_configuration","title":"check_valid_configuration","text":"

Check that the combination of base_job_template defaults and infra_overrides conforms to the specified schema.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
def check_valid_configuration(self, base_job_template: dict):\n\"\"\"Check that the combination of base_job_template defaults\n    and infra_overrides conforms to the specified schema.\n    \"\"\"\n    variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n    if variables_schema is not None:\n        # jsonschema considers required fields, even if that field has a default,\n        # to still be required. To get around this we remove the fields from\n        # required if there is a default present.\n        required = variables_schema.get(\"required\")\n        properties = variables_schema.get(\"properties\")\n        if required is not None and properties is not None:\n            for k, v in properties.items():\n                if \"default\" in v and k in required:\n                    required.remove(k)\n\n        jsonschema.validate(self.infra_overrides, variables_schema)\n
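A sketch of how the default-pruning behaviour plays out, using a hypothetical base job template; because image carries a default, it is dropped from required before validation, so empty infra_overrides still validate.

```python
from uuid import uuid4
from prefect.server.schemas.actions import DeploymentCreate

# Hypothetical work pool base job template with a single variable.
base_job_template = {
    "variables": {
        "type": "object",
        "required": ["image"],
        "properties": {
            "image": {"type": "string", "default": "prefecthq/prefect:2-latest"},
        },
    }
}

deployment = DeploymentCreate(
    name="example-deployment",
    flow_id=uuid4(),        # illustrative; normally an existing flow's id
    infra_overrides={},     # nothing overridden; the default satisfies the schema
)

# Raises jsonschema.ValidationError if infra_overrides do not conform.
deployment.check_valid_configuration(base_job_template)
```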
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentFlowRunCreate","title":"DeploymentFlowRunCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a flow run from a deployment.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass DeploymentFlowRunCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a flow run from a deployment.\"\"\"\n\n    # FlowRunCreate states must be provided as StateCreate objects\n    state: Optional[StateCreate] = Field(\n        default=None, description=\"The state of the flow run to create\"\n    )\n\n    name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    parameters: dict = FieldFrom(schemas.core.FlowRun)\n    context: dict = FieldFrom(schemas.core.FlowRun)\n    infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n    empirical_policy: schemas.core.FlowRunPolicy = FieldFrom(schemas.core.FlowRun)\n    tags: List[str] = FieldFrom(schemas.core.FlowRun)\n    idempotency_key: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    parent_task_run_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n    work_queue_name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n
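A minimal sketch of requesting an ad-hoc run from a deployment; the state is supplied as a StateCreate object, and the parameters and tags are illustrative.

```python
from prefect.server.schemas.actions import DeploymentFlowRunCreate, StateCreate
from prefect.server.schemas.states import StateType

flow_run = DeploymentFlowRunCreate(
    state=StateCreate(type=StateType.SCHEDULED),  # run as soon as possible
    parameters={"x": 1},
    tags=["ad-hoc"],
)
print(flow_run.json())
```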
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentFlowRunCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentFlowRunCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate","title":"DeploymentUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a deployment.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@experimental_field(\n    \"work_pool_name\",\n    group=\"work_pools\",\n    when=lambda x: x is not None,\n)\n@copy_model_fields\nclass DeploymentUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a deployment.\"\"\"\n\n    @root_validator(pre=True)\n    def remove_old_fields(cls, values):\n        # 2.7.7 removed worker_pool_queue_id in lieu of worker_pool_name and\n        # worker_pool_queue_name. Those fields were later renamed to work_pool_name\n        # and work_queue_name. This validator removes old fields provided\n        # by older clients to avoid 422 errors.\n        values_copy = copy(values)\n        worker_pool_queue_id = values_copy.pop(\"worker_pool_queue_id\", None)\n        worker_pool_name = values_copy.pop(\"worker_pool_name\", None)\n        worker_pool_queue_name = values_copy.pop(\"worker_pool_queue_name\", None)\n        work_pool_queue_name = values_copy.pop(\"work_pool_queue_name\", None)\n        if worker_pool_queue_id:\n            warnings.warn(\n                (\n                    \"`worker_pool_queue_id` is no longer supported for updating \"\n                    \"deployments. Please use `work_pool_name` and \"\n                    \"`work_queue_name` instead.\"\n                ),\n                UserWarning,\n            )\n        if worker_pool_name or worker_pool_queue_name or work_pool_queue_name:\n            warnings.warn(\n                (\n                    \"`worker_pool_name`, `worker_pool_queue_name`, and \"\n                    \"`work_pool_name` are\"\n                    \"no longer supported for creating \"\n                    \"deployments. Please use `work_pool_name` and \"\n                    \"`work_queue_name` instead.\"\n                ),\n                UserWarning,\n            )\n        return values_copy\n\n    version: Optional[str] = FieldFrom(schemas.core.Deployment)\n    schedule: Optional[schemas.schedules.SCHEDULE_TYPES] = FieldFrom(\n        schemas.core.Deployment\n    )\n    description: Optional[str] = FieldFrom(schemas.core.Deployment)\n    is_schedule_active: bool = FieldFrom(schemas.core.Deployment)\n    parameters: Dict[str, Any] = FieldFrom(schemas.core.Deployment)\n    tags: List[str] = FieldFrom(schemas.core.Deployment)\n    work_queue_name: Optional[str] = FieldFrom(schemas.core.Deployment)\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the deployment's work pool.\",\n        example=\"my-work-pool\",\n    )\n    path: Optional[str] = FieldFrom(schemas.core.Deployment)\n    infra_overrides: Optional[Dict[str, Any]] = FieldFrom(schemas.core.Deployment)\n    entrypoint: Optional[str] = FieldFrom(schemas.core.Deployment)\n    manifest_path: Optional[str] = FieldFrom(schemas.core.Deployment)\n    storage_document_id: Optional[UUID] = FieldFrom(schemas.core.Deployment)\n    infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.Deployment)\n\n    def check_valid_configuration(self, base_job_template: dict):\n\"\"\"Check that the combination of base_job_template defaults\n        and infra_overrides conforms to the specified schema.\n        \"\"\"\n        variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n        if variables_schema is not None:\n            # jsonschema considers required fields, even if that field has a default,\n            # to still be required. 
To get around this we remove the fields from\n            # required if there is a default present.\n            required = variables_schema.get(\"required\")\n            properties = variables_schema.get(\"properties\")\n            if required is not None and properties is not None:\n                for k, v in properties.items():\n                    if \"default\" in v and k in required:\n                        required.remove(k)\n\n        if variables_schema is not None:\n            jsonschema.validate(self.infra_overrides, variables_schema)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate.check_valid_configuration","title":"check_valid_configuration","text":"

Check that the combination of base_job_template defaults and infra_overrides conforms to the specified schema.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
def check_valid_configuration(self, base_job_template: dict):\n\"\"\"Check that the combination of base_job_template defaults\n    and infra_overrides conforms to the specified schema.\n    \"\"\"\n    variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n    if variables_schema is not None:\n        # jsonschema considers required fields, even if that field has a default,\n        # to still be required. To get around this we remove the fields from\n        # required if there is a default present.\n        required = variables_schema.get(\"required\")\n        properties = variables_schema.get(\"properties\")\n        if required is not None and properties is not None:\n            for k, v in properties.items():\n                if \"default\" in v and k in required:\n                    required.remove(k)\n\n    if variables_schema is not None:\n        jsonschema.validate(self.infra_overrides, variables_schema)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowCreate","title":"FlowCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a flow.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass FlowCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a flow.\"\"\"\n\n    name: str = FieldFrom(schemas.core.Flow)\n    tags: List[str] = FieldFrom(schemas.core.Flow)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunCreate","title":"FlowRunCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass FlowRunCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a flow run.\"\"\"\n\n    # FlowRunCreate states must be provided as StateCreate objects\n    state: Optional[StateCreate] = Field(\n        default=None, description=\"The state of the flow run to create\"\n    )\n\n    name: str = FieldFrom(schemas.core.FlowRun)\n    flow_id: UUID = FieldFrom(schemas.core.FlowRun)\n    flow_version: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    parameters: dict = FieldFrom(schemas.core.FlowRun)\n    context: dict = FieldFrom(schemas.core.FlowRun)\n    parent_task_run_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n    infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n    empirical_policy: schemas.core.FlowRunPolicy = FieldFrom(schemas.core.FlowRun)\n    tags: List[str] = FieldFrom(schemas.core.FlowRun)\n    idempotency_key: Optional[str] = FieldFrom(schemas.core.FlowRun)\n\n    # DEPRECATED\n\n    deployment_id: Optional[UUID] = Field(\n        None,\n        description=(\n            \"DEPRECATED: The id of the deployment associated with this flow run, if\"\n            \" available.\"\n        ),\n        deprecated=True,\n    )\n\n    class Config(ActionBaseModel.Config):\n        json_dumps = orjson_dumps_extra_compatible\n
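A hedged sketch of creating a flow run directly against a registered flow; the flow id is illustrative, and the state must be provided as a StateCreate object as noted in the source above.

```python
from uuid import uuid4
from prefect.server.schemas.actions import FlowRunCreate, StateCreate
from prefect.server.schemas.states import StateType

flow_run = FlowRunCreate(
    name="manual-run",
    flow_id=uuid4(),  # illustrative; normally the id of an existing flow
    state=StateCreate(type=StateType.PENDING),
    tags=["manual"],
)
```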
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyCreate","title":"FlowRunNotificationPolicyCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a flow run notification policy.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass FlowRunNotificationPolicyCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a flow run notification policy.\"\"\"\n\n    is_active: bool = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n    state_names: List[str] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n    tags: List[str] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n    block_document_id: UUID = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n    message_template: Optional[str] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n
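A hedged example; block_document_id is illustrative and would normally reference an existing notification block document, and the template placeholders are assumed to be among the allowed notification variables.

```python
from uuid import uuid4
from prefect.server.schemas.actions import FlowRunNotificationPolicyCreate

policy = FlowRunNotificationPolicyCreate(
    is_active=True,
    state_names=["Failed", "Crashed"],
    tags=[],
    block_document_id=uuid4(),  # illustrative; points at e.g. a Slack webhook block
    message_template=(
        "Flow run {flow_run_name} entered state {flow_run_state_name}"
    ),
)
```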
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyUpdate","title":"FlowRunNotificationPolicyUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a flow run notification policy.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass FlowRunNotificationPolicyUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a flow run notification policy.\"\"\"\n\n    is_active: Optional[bool] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n    state_names: Optional[List[str]] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n    tags: Optional[List[str]] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n    block_document_id: Optional[UUID] = FieldFrom(\n        schemas.core.FlowRunNotificationPolicy\n    )\n    message_template: Optional[str] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunUpdate","title":"FlowRunUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass FlowRunUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a flow run.\"\"\"\n\n    name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    flow_version: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    parameters: dict = FieldFrom(schemas.core.FlowRun)\n    empirical_policy: schemas.core.FlowRunPolicy = FieldFrom(schemas.core.FlowRun)\n    tags: List[str] = FieldFrom(schemas.core.FlowRun)\n    infrastructure_pid: Optional[str] = FieldFrom(schemas.core.FlowRun)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowUpdate","title":"FlowUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a flow.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass FlowUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a flow.\"\"\"\n\n    tags: List[str] = FieldFrom(schemas.core.Flow)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.LogCreate","title":"LogCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a log.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass LogCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a log.\"\"\"\n\n    name: str = FieldFrom(schemas.core.Log)\n    level: int = FieldFrom(schemas.core.Log)\n    message: str = FieldFrom(schemas.core.Log)\n    timestamp: schemas.core.DateTimeTZ = FieldFrom(schemas.core.Log)\n    flow_run_id: UUID = FieldFrom(schemas.core.Log)\n    task_run_id: Optional[UUID] = FieldFrom(schemas.core.Log)\n
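A minimal sketch of a log entry; the flow run id is illustrative, level follows the stdlib logging numeric scale, and pendulum (a Prefect dependency) supplies a timezone-aware timestamp.

```python
from uuid import uuid4
import pendulum
from prefect.server.schemas.actions import LogCreate

log = LogCreate(
    name="prefect.flow_runs",
    level=20,  # INFO
    message="Hello from a worker",
    timestamp=pendulum.now("UTC"),
    flow_run_id=uuid4(),  # illustrative; normally an existing flow run's id
)
```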
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.LogCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.LogCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.SavedSearchCreate","title":"SavedSearchCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a saved search.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass SavedSearchCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a saved search.\"\"\"\n\n    name: str = FieldFrom(schemas.core.SavedSearch)\n    filters: List[schemas.core.SavedSearchFilter] = FieldFrom(schemas.core.SavedSearch)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.SavedSearchCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.SavedSearchCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.StateCreate","title":"StateCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a new state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass StateCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a new state.\"\"\"\n\n    type: schemas.states.StateType = FieldFrom(schemas.states.State)\n    name: Optional[str] = FieldFrom(schemas.states.State)\n    message: Optional[str] = FieldFrom(schemas.states.State)\n    data: Optional[Any] = FieldFrom(schemas.states.State)\n    state_details: schemas.states.StateDetails = FieldFrom(schemas.states.State)\n\n    # DEPRECATED\n\n    timestamp: Optional[schemas.core.DateTimeTZ] = Field(\n        default=None,\n        repr=False,\n        ignored=True,\n    )\n    id: Optional[UUID] = Field(default=None, repr=False, ignored=True)\n
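A minimal construction example; the message is illustrative and the deprecated timestamp and id fields are simply left unset.

```python
from prefect.server.schemas.actions import StateCreate
from prefect.server.schemas.states import StateType

state = StateCreate(
    type=StateType.COMPLETED,
    name="Completed",
    message="All tasks finished successfully.",
)
print(state.json())
```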
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.StateCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.StateCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunCreate","title":"TaskRunCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a task run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass TaskRunCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a task run\"\"\"\n\n    # TaskRunCreate states must be provided as StateCreate objects\n    state: Optional[StateCreate] = Field(\n        default=None, description=\"The state of the task run to create\"\n    )\n\n    name: str = FieldFrom(schemas.core.TaskRun)\n    flow_run_id: UUID = FieldFrom(schemas.core.TaskRun)\n    task_key: str = FieldFrom(schemas.core.TaskRun)\n    dynamic_key: str = FieldFrom(schemas.core.TaskRun)\n    cache_key: Optional[str] = FieldFrom(schemas.core.TaskRun)\n    cache_expiration: Optional[schemas.core.DateTimeTZ] = FieldFrom(\n        schemas.core.TaskRun\n    )\n    task_version: Optional[str] = FieldFrom(schemas.core.TaskRun)\n    empirical_policy: schemas.core.TaskRunPolicy = FieldFrom(schemas.core.TaskRun)\n    tags: List[str] = FieldFrom(schemas.core.TaskRun)\n    task_inputs: Dict[\n        str,\n        List[\n            Union[\n                schemas.core.TaskRunResult,\n                schemas.core.Parameter,\n                schemas.core.Constant,\n            ]\n        ],\n    ] = FieldFrom(schemas.core.TaskRun)\n
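
As the comment in the schema notes, the initial state must be supplied as a StateCreate object rather than a full State. A sketch under that assumption, with an illustrative flow run id:

from uuid import uuid4\n\nfrom prefect.server.schemas.actions import StateCreate, TaskRunCreate\nfrom prefect.server.schemas.states import StateType\n\ntask_run = TaskRunCreate(\n    flow_run_id=uuid4(),  # would normally reference an existing flow run\n    task_key=\"my_task\",\n    dynamic_key=\"0\",\n    state=StateCreate(type=StateType.PENDING),\n)\n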
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunUpdate","title":"TaskRunUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a task run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass TaskRunUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a task run\"\"\"\n\n    name: str = FieldFrom(schemas.core.TaskRun)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableCreate","title":"VariableCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a Variable.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass VariableCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a Variable.\"\"\"\n\n    name: str = FieldFrom(schemas.core.Variable)\n    value: str = FieldFrom(schemas.core.Variable)\n    tags: Optional[List[str]] = FieldFrom(schemas.core.Variable)\n\n    # validators\n    _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
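
A small usage sketch; the validator above enforces the variable-name format, so the name below follows the my_variable style used elsewhere in these schemas:

from prefect.server.schemas.actions import VariableCreate\n\n# Values are stored as strings; tags are optional.\nvariable = VariableCreate(name=\"my_variable\", value=\"my-value\", tags=[\"config\"])\nprint(variable.json())\n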
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableUpdate","title":"VariableUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a Variable.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass VariableUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a Variable.\"\"\"\n\n    name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the variable\",\n        example=\"my_variable\",\n        max_length=schemas.core.MAX_VARIABLE_NAME_LENGTH,\n    )\n    value: Optional[str] = Field(\n        default=None,\n        description=\"The value of the variable\",\n        example=\"my-value\",\n        max_length=schemas.core.MAX_VARIABLE_VALUE_LENGTH,\n    )\n    tags: Optional[List[str]] = FieldFrom(schemas.core.Variable)\n\n    # validators\n    _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolCreate","title":"WorkPoolCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a work pool.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass WorkPoolCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a work pool.\"\"\"\n\n    name: str = FieldFrom(schemas.core.WorkPool)\n    description: Optional[str] = FieldFrom(schemas.core.WorkPool)\n    type: str = Field(description=\"The work pool type.\", default=\"prefect-agent\")\n    base_job_template: Dict[str, Any] = FieldFrom(schemas.core.WorkPool)\n    is_paused: bool = FieldFrom(schemas.core.WorkPool)\n    concurrency_limit: Optional[int] = FieldFrom(schemas.core.WorkPool)\n
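
For instance, a payload for a new work pool could look like the sketch below; the field names and the default \"prefect-agent\" type come from the schema above, while the concrete values are illustrative:

from prefect.server.schemas.actions import WorkPoolCreate\n\npool = WorkPoolCreate(\n    name=\"my-pool\",\n    description=\"Pool for example runs\",\n    type=\"prefect-agent\",  # the default work pool type\n    concurrency_limit=10,\n)\n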
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolUpdate","title":"WorkPoolUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a work pool.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass WorkPoolUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a work pool.\"\"\"\n\n    description: Optional[str] = FieldFrom(schemas.core.WorkPool)\n    is_paused: Optional[bool] = FieldFrom(schemas.core.WorkPool)\n    base_job_template: Optional[Dict[str, Any]] = FieldFrom(schemas.core.WorkPool)\n    concurrency_limit: Optional[int] = FieldFrom(schemas.core.WorkPool)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueCreate","title":"WorkQueueCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a work queue.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass WorkQueueCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a work queue.\"\"\"\n\n    name: str = FieldFrom(schemas.core.WorkQueue)\n    description: Optional[str] = FieldFrom(schemas.core.WorkQueue)\n    is_paused: bool = FieldFrom(schemas.core.WorkQueue)\n    concurrency_limit: Optional[int] = FieldFrom(schemas.core.WorkQueue)\n    priority: Optional[int] = Field(\n        default=None,\n        description=(\n            \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n        ),\n    )\n\n    # DEPRECATED\n\n    filter: Optional[schemas.core.QueueFilter] = Field(\n        None,\n        description=\"DEPRECATED: Filter criteria for the work queue.\",\n        deprecated=True,\n    )\n
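
Note that lower priority values are served first. A minimal sketch of a high-priority queue payload, with illustrative values:

from prefect.server.schemas.actions import WorkQueueCreate\n\nqueue = WorkQueueCreate(\n    name=\"critical\",\n    description=\"Queue for urgent runs\",\n    priority=1,  # lower values are higher priority; 1 is the highest\n    concurrency_limit=5,\n)\n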
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueUpdate","title":"WorkQueueUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a work queue.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass WorkQueueUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a work queue.\"\"\"\n\n    name: str = FieldFrom(schemas.core.WorkQueue)\n    description: Optional[str] = FieldFrom(schemas.core.WorkQueue)\n    is_paused: bool = FieldFrom(schemas.core.WorkQueue)\n    concurrency_limit: Optional[int] = FieldFrom(schemas.core.WorkQueue)\n    priority: Optional[int] = FieldFrom(schemas.core.WorkQueue)\n    last_polled: Optional[DateTimeTZ] = FieldFrom(schemas.core.WorkQueue)\n\n    # DEPRECATED\n\n    filter: Optional[schemas.core.QueueFilter] = Field(\n        None,\n        description=\"DEPRECATED: Filter criteria for the work queue.\",\n        deprecated=True,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/core/","title":"server.schemas.core","text":""},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core","title":"prefect.server.schemas.core","text":"

Full schemas of Prefect REST API objects.

"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Agent","title":"Agent","text":"

Bases: ORMBaseModel

An ORM representation of an agent

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class Agent(ORMBaseModel):\n\"\"\"An ORM representation of an agent\"\"\"\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the agent. If a name is not provided, it will be\"\n            \" auto-generated.\"\n        ),\n    )\n    work_queue_id: UUID = Field(\n        default=..., description=\"The work queue with which the agent is associated.\"\n    )\n    last_activity_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The last time this agent polled for work.\"\n    )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockDocument","title":"BlockDocument","text":"

Bases: ORMBaseModel

An ORM representation of a block document.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class BlockDocument(ORMBaseModel):\n\"\"\"An ORM representation of a block document.\"\"\"\n\n    name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The block document's name. Not required for anonymous block documents.\"\n        ),\n    )\n    data: dict = Field(default_factory=dict, description=\"The block document's data\")\n    block_schema_id: UUID = Field(default=..., description=\"A block schema ID\")\n    block_schema: Optional[BlockSchema] = Field(\n        default=None, description=\"The associated block schema\"\n    )\n    block_type_id: UUID = Field(default=..., description=\"A block type ID\")\n    block_type: Optional[BlockType] = Field(\n        default=None, description=\"The associated block type\"\n    )\n    block_document_references: Dict[str, Dict[str, Any]] = Field(\n        default_factory=dict, description=\"Record of the block document's references\"\n    )\n    is_anonymous: bool = Field(\n        default=False,\n        description=(\n            \"Whether the block is anonymous (anonymous blocks are usually created by\"\n            \" Prefect automatically)\"\n        ),\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        # the BlockDocumentCreate subclass allows name=None\n        # and will inherit this validator\n        if v is not None:\n            raise_on_name_with_banned_characters(v)\n        return v\n\n    @root_validator\n    def validate_name_is_present_if_not_anonymous(cls, values):\n        # anonymous blocks may have no name prior to actually being\n        # stored in the database\n        if not values.get(\"is_anonymous\") and not values.get(\"name\"):\n            raise ValueError(\"Names must be provided for block documents.\")\n        return values\n\n    @classmethod\n    async def from_orm_model(\n        cls,\n        session,\n        orm_block_document: \"prefect.server.database.orm_models.ORMBlockDocument\",\n        include_secrets: bool = False,\n    ):\n        data = await orm_block_document.decrypt_data(session=session)\n        # if secrets are not included, obfuscate them based on the schema's\n        # `secret_fields`. Note this walks any nested blocks as well. If the\n        # nested blocks were recovered from named blocks, they will already\n        # be obfuscated, but if nested fields were hardcoded into the parent\n        # blocks data, this is the only opportunity to obfuscate them.\n        if not include_secrets:\n            flat_data = dict_to_flatdict(data)\n            # iterate over the (possibly nested) secret fields\n            # and obfuscate their data\n            for secret_field in orm_block_document.block_schema.fields.get(\n                \"secret_fields\", []\n            ):\n                secret_key = tuple(secret_field.split(\".\"))\n                if flat_data.get(secret_key) is not None:\n                    flat_data[secret_key] = obfuscate_string(flat_data[secret_key])\n                # If a wildcard (*) is in the current secret key path, we take the portion\n                # of the path before the wildcard and compare it to the same level of each\n                # key. 
A match means that the field is nested under the secret key and should\n                # be obfuscated.\n                elif \"*\" in secret_key:\n                    wildcard_index = secret_key.index(\"*\")\n                    for data_key in flat_data.keys():\n                        if secret_key[0:wildcard_index] == data_key[0:wildcard_index]:\n                            flat_data[data_key] = obfuscate(flat_data[data_key])\n            data = flatdict_to_dict(flat_data)\n\n        return cls(\n            id=orm_block_document.id,\n            created=orm_block_document.created,\n            updated=orm_block_document.updated,\n            name=orm_block_document.name,\n            data=data,\n            block_schema_id=orm_block_document.block_schema_id,\n            block_schema=orm_block_document.block_schema,\n            block_type_id=orm_block_document.block_type_id,\n            block_type=orm_block_document.block_type,\n            is_anonymous=orm_block_document.is_anonymous,\n        )\n
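
The wildcard branch above compares the portion of the secret key before the * with the same-length prefix of each flattened data key. A self-contained sketch of that matching logic, without Prefect internals:

secret_key = (\"credentials\", \"*\")\nflat_keys = [(\"credentials\", \"token\"), (\"credentials\", \"password\"), (\"name\",)]\n\nwildcard_index = secret_key.index(\"*\")\nmatches = [\n    key for key in flat_keys if secret_key[:wildcard_index] == key[:wildcard_index]\n]\n# matches -> [(\"credentials\", \"token\"), (\"credentials\", \"password\")]\n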
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockDocumentReference","title":"BlockDocumentReference","text":"

Bases: ORMBaseModel

An ORM representation of a block document reference.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class BlockDocumentReference(ORMBaseModel):\n\"\"\"An ORM representation of a block document reference.\"\"\"\n\n    parent_block_document_id: UUID = Field(\n        default=..., description=\"ID of block document the reference is nested within\"\n    )\n    parent_block_document: Optional[BlockDocument] = Field(\n        default=None, description=\"The block document the reference is nested within\"\n    )\n    reference_block_document_id: UUID = Field(\n        default=..., description=\"ID of the nested block document\"\n    )\n    reference_block_document: Optional[BlockDocument] = Field(\n        default=None, description=\"The nested block document\"\n    )\n    name: str = Field(\n        default=..., description=\"The name that the reference is nested under\"\n    )\n\n    @root_validator\n    def validate_parent_and_ref_are_different(cls, values):\n        parent_id = values.get(\"parent_block_document_id\")\n        ref_id = values.get(\"reference_block_document_id\")\n        if parent_id and ref_id and parent_id == ref_id:\n            raise ValueError(\n                \"`parent_block_document_id` and `reference_block_document_id` cannot be\"\n                \" the same\"\n            )\n        return values\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockSchema","title":"BlockSchema","text":"

Bases: ORMBaseModel

An ORM representation of a block schema.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class BlockSchema(ORMBaseModel):\n\"\"\"An ORM representation of a block schema.\"\"\"\n\n    checksum: str = Field(default=..., description=\"The block schema's unique checksum\")\n    fields: dict = Field(\n        default_factory=dict, description=\"The block schema's field schema\"\n    )\n    block_type_id: Optional[UUID] = Field(default=..., description=\"A block type ID\")\n    block_type: Optional[BlockType] = Field(\n        default=None, description=\"The associated block type\"\n    )\n    capabilities: List[str] = Field(\n        default_factory=list,\n        description=\"A list of Block capabilities\",\n    )\n    version: str = Field(\n        default=DEFAULT_BLOCK_SCHEMA_VERSION,\n        description=\"Human readable identifier for the block schema\",\n    )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockSchemaReference","title":"BlockSchemaReference","text":"

Bases: ORMBaseModel

An ORM representation of a block schema reference.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class BlockSchemaReference(ORMBaseModel):\n\"\"\"An ORM representation of a block schema reference.\"\"\"\n\n    parent_block_schema_id: UUID = Field(\n        default=..., description=\"ID of block schema the reference is nested within\"\n    )\n    parent_block_schema: Optional[BlockSchema] = Field(\n        default=None, description=\"The block schema the reference is nested within\"\n    )\n    reference_block_schema_id: UUID = Field(\n        default=..., description=\"ID of the nested block schema\"\n    )\n    reference_block_schema: Optional[BlockSchema] = Field(\n        default=None, description=\"The nested block schema\"\n    )\n    name: str = Field(\n        default=..., description=\"The name that the reference is nested under\"\n    )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockType","title":"BlockType","text":"

Bases: ORMBaseModel

An ORM representation of a block type

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class BlockType(ORMBaseModel):\n\"\"\"An ORM representation of a block type\"\"\"\n\n    name: str = Field(default=..., description=\"A block type's name\")\n    slug: str = Field(default=..., description=\"A block type's slug\")\n    logo_url: Optional[HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's logo\"\n    )\n    documentation_url: Optional[HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's documentation\"\n    )\n    description: Optional[str] = Field(\n        default=None,\n        description=\"A short blurb about the corresponding block's intended use\",\n    )\n    code_example: Optional[str] = Field(\n        default=None,\n        description=\"A code snippet demonstrating use of the corresponding block\",\n    )\n    is_protected: bool = Field(\n        default=False, description=\"Protected block types cannot be modified via API.\"\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        raise_on_name_with_banned_characters(v)\n        return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.ConcurrencyLimit","title":"ConcurrencyLimit","text":"

Bases: ORMBaseModel

An ORM representation of a concurrency limit.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class ConcurrencyLimit(ORMBaseModel):\n\"\"\"An ORM representation of a concurrency limit.\"\"\"\n\n    tag: str = Field(\n        default=..., description=\"A tag the concurrency limit is applied to.\"\n    )\n    concurrency_limit: int = Field(default=..., description=\"The concurrency limit.\")\n    active_slots: List[UUID] = Field(\n        default_factory=list,\n        description=\"A list of active run ids using a concurrency slot\",\n    )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Configuration","title":"Configuration","text":"

Bases: ORMBaseModel

An ORM representation of account info.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class Configuration(ORMBaseModel):\n\"\"\"An ORM representation of account info.\"\"\"\n\n    key: str = Field(default=..., description=\"Account info key\")\n    value: dict = Field(default=..., description=\"Account info\")\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Deployment","title":"Deployment","text":"

Bases: ORMBaseModel

An ORM representation of deployment data.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class Deployment(ORMBaseModel):\n\"\"\"An ORM representation of deployment data.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the deployment.\")\n    version: Optional[str] = Field(\n        default=None, description=\"An optional version for the deployment.\"\n    )\n    description: Optional[str] = Field(\n        default=None, description=\"A description for the deployment.\"\n    )\n    flow_id: UUID = Field(\n        default=..., description=\"The flow id associated with the deployment.\"\n    )\n    schedule: Optional[schedules.SCHEDULE_TYPES] = Field(\n        default=None, description=\"A schedule for the deployment.\"\n    )\n    is_schedule_active: bool = Field(\n        default=True, description=\"Whether or not the deployment schedule is active.\"\n    )\n    infra_overrides: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Overrides to apply to the base infrastructure block at runtime.\",\n    )\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Parameters for flow runs scheduled by the deployment.\",\n    )\n    pull_steps: Optional[List[dict]] = Field(\n        default=None,\n        description=\"Pull steps for cloning and running this deployment.\",\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags for the deployment\",\n        example=[\"tag-1\", \"tag-2\"],\n    )\n    work_queue_name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The work queue for the deployment. If no work queue is set, work will not\"\n            \" be scheduled.\"\n        ),\n    )\n    parameter_openapi_schema: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"The parameter schema of the flow, including defaults.\",\n    )\n    path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the working directory for the workflow, relative to remote\"\n            \" storage or an absolute path.\"\n        ),\n    )\n    entrypoint: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the entrypoint for the workflow, relative to the `path`.\"\n        ),\n    )\n    manifest_path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the flow's manifest file, relative to the chosen storage.\"\n        ),\n    )\n    storage_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining storage used for this flow.\",\n    )\n    infrastructure_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining infrastructure to use for flow runs.\",\n    )\n    created_by: Optional[CreatedBy] = Field(\n        default=None,\n        description=\"Optional information about the creator of this deployment.\",\n    )\n    updated_by: Optional[UpdatedBy] = Field(\n        default=None,\n        description=\"Optional information about the updater of this deployment.\",\n    )\n    work_queue_id: UUID = Field(\n        default=None,\n        description=(\n            \"The id of the work pool queue to which this deployment is assigned.\"\n        ),\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        raise_on_name_with_banned_characters(v)\n        return v\n
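
A sketch of a deployment record using only a handful of the fields above; the flow id is illustrative and would normally reference an existing flow:

from uuid import uuid4\n\nfrom prefect.server.schemas.core import Deployment\n\ndeployment = Deployment(\n    name=\"daily-etl\",\n    flow_id=uuid4(),  # would normally be the id of an existing flow\n    work_queue_name=\"default\",\n    parameters={\"limit\": 100},\n    tags=[\"tag-1\", \"tag-2\"],\n)\n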
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Flow","title":"Flow","text":"

Bases: ORMBaseModel

An ORM representation of flow data.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class Flow(ORMBaseModel):\n\"\"\"An ORM representation of flow data.\"\"\"\n\n    name: str = Field(\n        default=..., description=\"The name of the flow\", example=\"my-flow\"\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of flow tags\",\n        example=[\"tag-1\", \"tag-2\"],\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        raise_on_name_with_banned_characters(v)\n        return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRun","title":"FlowRun","text":"

Bases: ORMBaseModel

An ORM representation of flow run data.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class FlowRun(ORMBaseModel):\n\"\"\"An ORM representation of flow run data.\"\"\"\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the flow run. Defaults to a random slug if not specified.\"\n        ),\n        example=\"my-flow-run\",\n    )\n    flow_id: UUID = Field(default=..., description=\"The id of the flow being run.\")\n    state_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the flow run's current state.\"\n    )\n    deployment_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"The id of the deployment associated with this flow run, if available.\"\n        ),\n    )\n    work_queue_name: Optional[str] = Field(\n        default=None, description=\"The work queue that handled this flow run.\"\n    )\n    flow_version: Optional[str] = Field(\n        default=None,\n        description=\"The version of the flow executed in this flow run.\",\n        example=\"1.0\",\n    )\n    parameters: dict = Field(\n        default_factory=dict, description=\"Parameters for the flow run.\"\n    )\n    idempotency_key: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An optional idempotency key for the flow run. Used to ensure the same flow\"\n            \" run is not created multiple times.\"\n        ),\n    )\n    context: dict = Field(\n        default_factory=dict,\n        description=\"Additional context for the flow run.\",\n        example={\"my_var\": \"my_val\"},\n    )\n    empirical_policy: FlowRunPolicy = Field(\n        default_factory=FlowRunPolicy,\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags on the flow run\",\n        example=[\"tag-1\", \"tag-2\"],\n    )\n    parent_task_run_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"If the flow run is a subflow, the id of the 'dummy' task in the parent\"\n            \" flow used to track subflow state.\"\n        ),\n    )\n\n    state_type: Optional[states.StateType] = Field(\n        default=None, description=\"The type of the current flow run state.\"\n    )\n    state_name: Optional[str] = Field(\n        default=None, description=\"The name of the current flow run state.\"\n    )\n    run_count: int = Field(\n        default=0, description=\"The number of times the flow run was executed.\"\n    )\n    expected_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The flow run's expected start time.\",\n    )\n    next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The next time the flow run is scheduled to start.\",\n    )\n    start_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual start time.\"\n    )\n    end_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual end time.\"\n    )\n    total_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=(\n            \"Total run time. 
If the flow run was executed multiple times, the time of\"\n            \" each run will be summed.\"\n        ),\n    )\n    estimated_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"A real-time estimate of the total run time.\",\n    )\n    estimated_start_time_delta: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"The difference between actual and expected start time.\",\n    )\n    auto_scheduled: bool = Field(\n        default=False,\n        description=\"Whether or not the flow run was automatically scheduled.\",\n    )\n    infrastructure_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining infrastructure to use this flow run.\",\n    )\n    infrastructure_pid: Optional[str] = Field(\n        default=None,\n        description=\"The id of the flow run as returned by an infrastructure block.\",\n    )\n    created_by: Optional[CreatedBy] = Field(\n        default=None,\n        description=\"Optional information about the creator of this flow run.\",\n    )\n    work_queue_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the run's work pool queue.\"\n    )\n\n    # relationships\n    # flow: Flow = None\n    # task_runs: List[\"TaskRun\"] = Field(default_factory=list)\n    state: Optional[states.State] = Field(\n        default=None, description=\"The current state of the flow run.\"\n    )\n    # parent_task_run: \"TaskRun\" = None\n\n    @validator(\"name\", pre=True)\n    def set_name(cls, name):\n        return name or generate_slug(2)\n\n    def __eq__(self, other: Any) -> bool:\n\"\"\"\n        Check for \"equality\" to another flow run schema\n\n        Estimates times are rolling and will always change with repeated queries for\n        a flow run so we ignore them during equality checks.\n        \"\"\"\n        if isinstance(other, FlowRun):\n            exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n            return self.dict(exclude=exclude_fields) == other.dict(\n                exclude=exclude_fields\n            )\n        return super().__eq__(other)\n
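
Because __eq__ excludes the rolling estimate fields, two snapshots of the same flow run compare equal even if the estimates have drifted between queries. A sketch of that behavior:

import datetime\nfrom uuid import uuid4\n\nfrom prefect.server.schemas.core import FlowRun\n\nrun_a = FlowRun(flow_id=uuid4(), name=\"my-flow-run\")\nrun_b = run_a.copy(update={\"estimated_run_time\": datetime.timedelta(seconds=5)})\n\n# Estimates are ignored during the comparison, so the snapshots are still equal.\nassert run_a == run_b\n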
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunNotificationPolicy","title":"FlowRunNotificationPolicy","text":"

Bases: ORMBaseModel

An ORM representation of a flow run notification.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class FlowRunNotificationPolicy(ORMBaseModel):\n\"\"\"An ORM representation of a flow run notification.\"\"\"\n\n    is_active: bool = Field(\n        default=True, description=\"Whether the policy is currently active\"\n    )\n    state_names: List[str] = Field(\n        default=..., description=\"The flow run states that trigger notifications\"\n    )\n    tags: List[str] = Field(\n        default=...,\n        description=\"The flow run tags that trigger notifications (set [] to disable)\",\n    )\n    block_document_id: UUID = Field(\n        default=..., description=\"The block document ID used for sending notifications\"\n    )\n    message_template: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A templatable notification message. Use {braces} to add variables.\"\n            \" Valid variables include:\"\n            f\" {listrepr(sorted(FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS), sep=', ')}\"\n        ),\n        example=(\n            \"Flow run {flow_run_name} with id {flow_run_id} entered state\"\n            \" {flow_run_state_name}.\"\n        ),\n    )\n\n    @validator(\"message_template\")\n    def validate_message_template_variables(cls, v):\n        if v is not None:\n            try:\n                v.format(**{k: \"test\" for k in FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS})\n            except KeyError as exc:\n                raise ValueError(f\"Invalid template variable provided: '{exc.args[0]}'\")\n        return v\n
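
The validator rejects templates that reference unknown variables; only the documented keyword names are allowed. A sketch using variables taken from the example above, with an illustrative block document id:

from uuid import uuid4\n\nfrom prefect.server.schemas.core import FlowRunNotificationPolicy\n\npolicy = FlowRunNotificationPolicy(\n    state_names=[\"Failed\", \"Crashed\"],\n    tags=[],  # see the tags field description above\n    block_document_id=uuid4(),  # illustrative; a real notification block id\n    message_template=\"Flow run {flow_run_name} entered state {flow_run_state_name}.\",\n)\n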
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunPolicy","title":"FlowRunPolicy","text":"

Bases: PrefectBaseModel

Defines how a flow run should retry.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class FlowRunPolicy(PrefectBaseModel):\n\"\"\"Defines of how a flow run should retry.\"\"\"\n\n    # TODO: Determine how to separate between infrastructure and within-process level\n    #       retries\n    max_retries: int = Field(\n        default=0,\n        description=(\n            \"The maximum number of retries. Field is not used. Please use `retries`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retry_delay_seconds: float = Field(\n        default=0,\n        description=(\n            \"The delay between retries. Field is not used. Please use `retry_delay`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retries: Optional[int] = Field(default=None, description=\"The number of retries.\")\n    retry_delay: Optional[int] = Field(\n        default=None, description=\"The delay time between retries, in seconds.\"\n    )\n    pause_keys: Optional[set] = Field(\n        default_factory=set, description=\"Tracks pauses this run has observed.\"\n    )\n    resuming: Optional[bool] = Field(\n        default=False, description=\"Indicates if this run is resuming from a pause.\"\n    )\n\n    @root_validator\n    def populate_deprecated_fields(cls, values):\n\"\"\"\n        If deprecated fields are provided, populate the corresponding new fields\n        to preserve orchestration behavior.\n        \"\"\"\n        if not values.get(\"retries\", None) and values.get(\"max_retries\", 0) != 0:\n            values[\"retries\"] = values[\"max_retries\"]\n        if (\n            not values.get(\"retry_delay\", None)\n            and values.get(\"retry_delay_seconds\", 0) != 0\n        ):\n            values[\"retry_delay\"] = values[\"retry_delay_seconds\"]\n        return values\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunPolicy.populate_deprecated_fields","title":"populate_deprecated_fields","text":"

If deprecated fields are provided, populate the corresponding new fields to preserve orchestration behavior.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
@root_validator\ndef populate_deprecated_fields(cls, values):\n\"\"\"\n    If deprecated fields are provided, populate the corresponding new fields\n    to preserve orchestration behavior.\n    \"\"\"\n    if not values.get(\"retries\", None) and values.get(\"max_retries\", 0) != 0:\n        values[\"retries\"] = values[\"max_retries\"]\n    if (\n        not values.get(\"retry_delay\", None)\n        and values.get(\"retry_delay_seconds\", 0) != 0\n    ):\n        values[\"retry_delay\"] = values[\"retry_delay_seconds\"]\n    return values\n
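
A sketch showing the effect of the validator: when only the deprecated fields are supplied, the replacement fields are populated from them:

from prefect.server.schemas.core import FlowRunPolicy\n\npolicy = FlowRunPolicy(max_retries=3, retry_delay_seconds=10)\n\n# The deprecated values were copied onto the replacement fields.\nassert policy.retries == 3\nassert policy.retry_delay == 10\n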
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunnerSettings","title":"FlowRunnerSettings","text":"

Bases: PrefectBaseModel

An API schema for passing details about the flow runner.

This schema is agnostic to the types and configuration provided by clients

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class FlowRunnerSettings(PrefectBaseModel):\n\"\"\"\n    An API schema for passing details about the flow runner.\n\n    This schema is agnostic to the types and configuration provided by clients\n    \"\"\"\n\n    type: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The type of the flow runner which can be used by the client for\"\n            \" dispatching.\"\n        ),\n    )\n    config: Optional[dict] = Field(\n        default=None, description=\"The configuration for the given flow runner type.\"\n    )\n\n    # The following is required for composite compatibility in the ORM\n\n    def __init__(self, type: str = None, config: dict = None, **kwargs) -> None:\n        # Pydantic does not support positional arguments so they must be converted to\n        # keyword arguments\n        super().__init__(type=type, config=config, **kwargs)\n\n    def __composite_values__(self):\n        return self.type, self.config\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Log","title":"Log","text":"

Bases: ORMBaseModel

An ORM representation of log data.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class Log(ORMBaseModel):\n\"\"\"An ORM representation of log data.\"\"\"\n\n    name: str = Field(default=..., description=\"The logger name.\")\n    level: int = Field(default=..., description=\"The log level.\")\n    message: str = Field(default=..., description=\"The log message.\")\n    timestamp: DateTimeTZ = Field(default=..., description=\"The log timestamp.\")\n    flow_run_id: UUID = Field(\n        default=..., description=\"The flow run ID associated with the log.\"\n    )\n    task_run_id: Optional[UUID] = Field(\n        default=None, description=\"The task run ID associated with the log.\"\n    )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.QueueFilter","title":"QueueFilter","text":"

Bases: PrefectBaseModel

Filter criteria definition for a work queue.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class QueueFilter(PrefectBaseModel):\n\"\"\"Filter criteria definition for a work queue.\"\"\"\n\n    tags: Optional[List[str]] = Field(\n        default=None,\n        description=\"Only include flow runs with these tags in the work queue.\",\n    )\n    deployment_ids: Optional[List[UUID]] = Field(\n        default=None,\n        description=\"Only include flow runs from these deployments in the work queue.\",\n    )\n
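For example, a filter that restricts a work queue to tagged runs from specific deployments might look like this (values are illustrative):

from uuid import uuid4
from prefect.server.schemas.core import QueueFilter

queue_filter = QueueFilter(
    tags=["prod"],                 # only runs carrying this tag
    deployment_ids=[uuid4()],      # only runs from these deployments
)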
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.SavedSearch","title":"SavedSearch","text":"

Bases: ORMBaseModel

An ORM representation of saved search data. Represents a set of filter criteria.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class SavedSearch(ORMBaseModel):\n\"\"\"An ORM representation of saved search data. Represents a set of filter criteria.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the saved search.\")\n    filters: List[SavedSearchFilter] = Field(\n        default_factory=list, description=\"The filter set for the saved search.\"\n    )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.SavedSearchFilter","title":"SavedSearchFilter","text":"

Bases: PrefectBaseModel

A filter for a saved search model. Intended for use by the Prefect UI.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class SavedSearchFilter(PrefectBaseModel):\n\"\"\"A filter for a saved search model. Intended for use by the Prefect UI.\"\"\"\n\n    object: str = Field(default=..., description=\"The object over which to filter.\")\n    property: str = Field(\n        default=..., description=\"The property of the object on which to filter.\"\n    )\n    type: str = Field(default=..., description=\"The type of the property.\")\n    operation: str = Field(\n        default=...,\n        description=\"The operator to apply to the object. For example, `equals`.\",\n    )\n    value: Any = Field(\n        default=..., description=\"A JSON-compatible value for the filter.\"\n    )\n
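A sketch of how a saved search is assembled from individual filters; the object, property, and value choices below are assumptions for illustration:

from prefect.server.schemas.core import SavedSearch, SavedSearchFilter

search = SavedSearch(
    name="failed-runs",
    filters=[
        SavedSearchFilter(
            object="flow_run",
            property="state",
            type="string",
            operation="equals",
            value="FAILED",
        )
    ],
)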
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.TaskRun","title":"TaskRun","text":"

Bases: ORMBaseModel

An ORM representation of task run data.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class TaskRun(ORMBaseModel):\n\"\"\"An ORM representation of task run data.\"\"\"\n\n    name: str = Field(default_factory=lambda: generate_slug(2), example=\"my-task-run\")\n    flow_run_id: UUID = Field(\n        default=..., description=\"The flow run id of the task run.\"\n    )\n    task_key: str = Field(\n        default=..., description=\"A unique identifier for the task being run.\"\n    )\n    dynamic_key: str = Field(\n        default=...,\n        description=(\n            \"A dynamic key used to differentiate between multiple runs of the same task\"\n            \" within the same flow run.\"\n        ),\n    )\n    cache_key: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An optional cache key. If a COMPLETED state associated with this cache key\"\n            \" is found, the cached COMPLETED state will be used instead of executing\"\n            \" the task run.\"\n        ),\n    )\n    cache_expiration: Optional[DateTimeTZ] = Field(\n        default=None, description=\"Specifies when the cached state should expire.\"\n    )\n    task_version: Optional[str] = Field(\n        default=None, description=\"The version of the task being run.\"\n    )\n    empirical_policy: TaskRunPolicy = Field(\n        default_factory=TaskRunPolicy,\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags for the task run.\",\n        example=[\"tag-1\", \"tag-2\"],\n    )\n    state_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the current task run state.\"\n    )\n    task_inputs: Dict[str, List[Union[TaskRunResult, Parameter, Constant]]] = Field(\n        default_factory=dict,\n        description=(\n            \"Tracks the source of inputs to a task run. Used for internal bookkeeping.\"\n        ),\n    )\n    state_type: Optional[states.StateType] = Field(\n        default=None, description=\"The type of the current task run state.\"\n    )\n    state_name: Optional[str] = Field(\n        default=None, description=\"The name of the current task run state.\"\n    )\n    run_count: int = Field(\n        default=0, description=\"The number of times the task run has been executed.\"\n    )\n    flow_run_run_count: int = Field(\n        default=0,\n        description=(\n            \"If the parent flow has retried, this indicates the flow retry this run is\"\n            \" associated with.\"\n        ),\n    )\n    expected_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The task run's expected start time.\",\n    )\n\n    # the next scheduled start time will be populated\n    # whenever the run is in a scheduled state\n    next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The next time the task run is scheduled to start.\",\n    )\n    start_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual start time.\"\n    )\n    end_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual end time.\"\n    )\n    total_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=(\n            \"Total run time. 
If the task run was executed multiple times, the time of\"\n            \" each run will be summed.\"\n        ),\n    )\n    estimated_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"A real-time estimate of total run time.\",\n    )\n    estimated_start_time_delta: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"The difference between actual and expected start time.\",\n    )\n\n    # relationships\n    # flow_run: FlowRun = None\n    # subflow_runs: List[FlowRun] = Field(default_factory=list)\n    state: Optional[states.State] = Field(\n        default=None, description=\"The current task run state.\"\n    )\n\n    @validator(\"name\", pre=True)\n    def set_name(cls, name):\n        return name or generate_slug(2)\n\n    @validator(\"cache_key\")\n    def validate_cache_key_length(cls, cache_key):\n        if cache_key and len(cache_key) > PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH.value():\n            raise ValueError(\n                \"Cache key exceeded maximum allowed length of\"\n                f\" {PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH.value()} characters.\"\n            )\n        return cache_key\n
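A minimal construction sketch: only flow_run_id, task_key, and dynamic_key are required, and the cache key validator rejects keys longer than PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH (the values below are illustrative):

from uuid import uuid4
from prefect.server.schemas.core import TaskRun

task_run = TaskRun(
    flow_run_id=uuid4(),
    task_key="my_module.my_task",  # identifies the task definition
    dynamic_key="0",               # distinguishes repeat runs within one flow run
    tags=["example"],
)
# `name` is auto-generated as a two-word slug when not provided.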
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkPool","title":"WorkPool","text":"

Bases: ORMBaseModel

An ORM representation of a work pool

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class WorkPool(ORMBaseModel):\n\"\"\"An ORM representation of a work pool\"\"\"\n\n    name: str = Field(\n        description=\"The name of the work pool.\",\n    )\n    description: Optional[str] = Field(\n        default=None, description=\"A description of the work pool.\"\n    )\n    type: str = Field(description=\"The work pool type.\")\n    base_job_template: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The work pool's base job template.\"\n    )\n    is_paused: bool = Field(\n        default=False,\n        description=\"Pausing the work pool stops the delivery of all work.\",\n    )\n    concurrency_limit: Optional[conint(ge=0)] = Field(\n        default=None, description=\"A concurrency limit for the work pool.\"\n    )\n\n    # this required field has a default of None so that the custom validator\n    # below will be called and produce a more helpful error message\n    default_queue_id: UUID = Field(\n        None, description=\"The id of the pool's default queue.\"\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        raise_on_name_with_banned_characters(v)\n        return v\n\n    @validator(\"default_queue_id\", always=True)\n    def helpful_error_for_missing_default_queue_id(cls, v):\n\"\"\"\n        Default queue ID is required because all pools must have a default queue\n        ID, but it represents a circular foreign key relationship to a\n        WorkQueue (which can't be created until the work pool exists).\n        Therefore, while this field can *technically* be null, it shouldn't be.\n        This should only be an issue when creating new pools, as reading\n        existing ones will always have this field populated. This custom error\n        message will help users understand that they should use the\n        `actions.WorkPoolCreate` model in that case.\n        \"\"\"\n        if v is None:\n            raise ValueError(\n                \"`default_queue_id` is a required field. If you are \"\n                \"creating a new WorkPool and don't have a queue \"\n                \"ID yet, use the `actions.WorkPoolCreate` model instead.\"\n            )\n        return v\n\n    @validator(\"base_job_template\")\n    def validate_base_job_template(cls, v):\n        if v == dict():\n            return v\n\n        job_config = v.get(\"job_configuration\")\n        variables = v.get(\"variables\")\n        if not (job_config and variables):\n            raise ValueError(\n                \"The `base_job_template` must contain both a `job_configuration` key\"\n                \" and a `variables` key.\"\n            )\n        template_variables = set()\n        for template in job_config.values():\n            # find any variables inside of double curly braces, minus any whitespace\n            # e.g. \"{{ var1 }}.{{var2}}\" -> [\"var1\", \"var2\"]\n            # convert to json string to handle nested objects and lists\n            found_variables = find_placeholders(json.dumps(template))\n            template_variables = {placeholder.name for placeholder in found_variables}\n\n        provided_variables = set(variables[\"properties\"].keys())\n        if not template_variables.issubset(provided_variables):\n            missing_variables = template_variables - provided_variables\n            raise ValueError(\n                \"The variables specified in the job configuration template must be \"\n                \"present as properties in the variables schema. 
\"\n                \"Your job configuration uses the following undeclared \"\n                f\"variable(s): {' ,'.join(missing_variables)}.\"\n            )\n        return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkPool.helpful_error_for_missing_default_queue_id","title":"helpful_error_for_missing_default_queue_id","text":"

Default queue ID is required because all pools must have a default queue ID, but it represents a circular foreign key relationship to a WorkQueue (which can't be created until the work pool exists). Therefore, while this field can technically be null, it shouldn't be. This should only be an issue when creating new pools, as reading existing ones will always have this field populated. This custom error message will help users understand that they should use the actions.WorkPoolCreate model in that case.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
@validator(\"default_queue_id\", always=True)\ndef helpful_error_for_missing_default_queue_id(cls, v):\n\"\"\"\n    Default queue ID is required because all pools must have a default queue\n    ID, but it represents a circular foreign key relationship to a\n    WorkQueue (which can't be created until the work pool exists).\n    Therefore, while this field can *technically* be null, it shouldn't be.\n    This should only be an issue when creating new pools, as reading\n    existing ones will always have this field populated. This custom error\n    message will help users understand that they should use the\n    `actions.WorkPoolCreate` model in that case.\n    \"\"\"\n    if v is None:\n        raise ValueError(\n            \"`default_queue_id` is a required field. If you are \"\n            \"creating a new WorkPool and don't have a queue \"\n            \"ID yet, use the `actions.WorkPoolCreate` model instead.\"\n        )\n    return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkQueue","title":"WorkQueue","text":"

Bases: ORMBaseModel

An ORM representation of a work queue

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class WorkQueue(ORMBaseModel):\n\"\"\"An ORM representation of a work queue\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the work queue.\")\n    description: Optional[str] = Field(\n        default=\"\", description=\"An optional description for the work queue.\"\n    )\n    is_paused: bool = Field(\n        default=False, description=\"Whether or not the work queue is paused.\"\n    )\n    concurrency_limit: Optional[conint(ge=0)] = Field(\n        default=None, description=\"An optional concurrency limit for the work queue.\"\n    )\n    priority: conint(ge=1) = Field(\n        default=1,\n        description=(\n            \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n        ),\n    )\n    # Will be required after a future migration\n    work_pool_id: Optional[UUID] = Field(\n        default=None, description=\"The work pool with which the queue is associated.\"\n    )\n    filter: Optional[QueueFilter] = Field(\n        default=None,\n        description=\"DEPRECATED: Filter criteria for the work queue.\",\n        deprecated=True,\n    )\n    last_polled: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The last time an agent polled this queue for work.\"\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        raise_on_name_with_banned_characters(v)\n        return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkQueueHealthPolicy","title":"WorkQueueHealthPolicy","text":"

Bases: PrefectBaseModel

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class WorkQueueHealthPolicy(PrefectBaseModel):\n    maximum_late_runs: Optional[int] = Field(\n        default=0,\n        description=(\n            \"The maximum number of late runs in the work queue before it is deemed\"\n            \" unhealthy. Defaults to `0`.\"\n        ),\n    )\n    maximum_seconds_since_last_polled: Optional[int] = Field(\n        default=60,\n        description=(\n            \"The maximum number of time in seconds elapsed since work queue has been\"\n            \" polled before it is deemed unhealthy. Defaults to `60`.\"\n        ),\n    )\n\n    def evaluate_health_status(\n        self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n    ) -> bool:\n\"\"\"\n        Given empirical information about the state of the work queue, evaulate its health status.\n\n        Args:\n            late_runs: the count of late runs for the work queue.\n            last_polled: the last time the work queue was polled, if available.\n\n        Returns:\n            bool: whether or not the work queue is healthy.\n        \"\"\"\n        healthy = True\n        if (\n            self.maximum_late_runs is not None\n            and late_runs_count > self.maximum_late_runs\n        ):\n            healthy = False\n\n        if self.maximum_seconds_since_last_polled is not None:\n            if (\n                last_polled is None\n                or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n                > self.maximum_seconds_since_last_polled\n            ):\n                healthy = False\n\n        return healthy\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkQueueHealthPolicy.evaluate_health_status","title":"evaluate_health_status","text":"

Given empirical information about the state of the work queue, evaluate its health status.

Parameters:

late_runs (required): the count of late runs for the work queue.
last_polled (Optional[DateTimeTZ], default None): the last time the work queue was polled, if available.

Returns:

bool: whether or not the work queue is healthy.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
def evaluate_health_status(\n    self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n) -> bool:\n\"\"\"\n    Given empirical information about the state of the work queue, evaulate its health status.\n\n    Args:\n        late_runs: the count of late runs for the work queue.\n        last_polled: the last time the work queue was polled, if available.\n\n    Returns:\n        bool: whether or not the work queue is healthy.\n    \"\"\"\n    healthy = True\n    if (\n        self.maximum_late_runs is not None\n        and late_runs_count > self.maximum_late_runs\n    ):\n        healthy = False\n\n    if self.maximum_seconds_since_last_polled is not None:\n        if (\n            last_polled is None\n            or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n            > self.maximum_seconds_since_last_polled\n        ):\n            healthy = False\n\n    return healthy\n
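A short usage sketch of the policy defaults (timestamps are illustrative):

import pendulum
from prefect.server.schemas.core import WorkQueueHealthPolicy

policy = WorkQueueHealthPolicy()  # maximum_late_runs=0, maximum_seconds_since_last_polled=60

policy.evaluate_health_status(late_runs_count=0, last_polled=pendulum.now("UTC"))
# True: no late runs and a recent poll

policy.evaluate_health_status(late_runs_count=2, last_polled=None)
# False: late runs present and the queue has never been polled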
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Worker","title":"Worker","text":"

Bases: ORMBaseModel

An ORM representation of a worker

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class Worker(ORMBaseModel):\n\"\"\"An ORM representation of a worker\"\"\"\n\n    name: str = Field(description=\"The name of the worker.\")\n    work_pool_id: UUID = Field(\n        description=\"The work pool with which the queue is associated.\"\n    )\n    last_heartbeat_time: datetime.datetime = Field(\n        None, description=\"The last time the worker process sent a heartbeat.\"\n    )\n
"},{"location":"api-ref/server/schemas/filters/","title":"server.schemas.filters","text":""},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters","title":"prefect.server.schemas.filters","text":"

Schemas that define Prefect REST API filtering operations.

Each filter schema includes logic for transforming itself into a SQL where clause.
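For example, sub-filters can be composed into a higher-level filter and then rendered into SQL clauses by the server. A sketch, assuming the as_sql_filter(db) hook shown in the source below (filter values are illustrative):

from prefect.server.schemas import filters

artifact_filter = filters.ArtifactFilter(
    key=filters.ArtifactFilterKey(like_="report-"),
    type=filters.ArtifactFilterType(not_any_=["result"]),
)
# Server code passes the database interface in to build the WHERE clause:
# artifact_filter.as_sql_filter(db)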

"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilter","title":"ArtifactCollectionFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter artifact collections. Only artifact collections matching all criteria will be returned

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactCollectionFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter artifact collections. Only artifact collections matching all criteria will be returned\"\"\"\n\n    latest_id: Optional[ArtifactCollectionFilterLatestId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.id`\"\n    )\n    key: Optional[ArtifactCollectionFilterKey] = Field(\n        default=None, description=\"Filter criteria for `Artifact.key`\"\n    )\n    flow_run_id: Optional[ArtifactCollectionFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n    )\n    task_run_id: Optional[ArtifactCollectionFilterTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n    )\n    type: Optional[ArtifactCollectionFilterType] = Field(\n        default=None, description=\"Filter criteria for `Artifact.type`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.latest_id is not None:\n            filters.append(self.latest_id.as_sql_filter(db))\n        if self.key is not None:\n            filters.append(self.key.as_sql_filter(db))\n        if self.flow_run_id is not None:\n            filters.append(self.flow_run_id.as_sql_filter(db))\n        if self.task_run_id is not None:\n            filters.append(self.task_run_id.as_sql_filter(db))\n        if self.type is not None:\n            filters.append(self.type.as_sql_filter(db))\n\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass
include_fields (List[str], default None): fields to include
exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
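A sketch of narrowing a filter model to a subset of its fields (the subclass name and field choice are illustrative):

from prefect.server.schemas import filters

KeyOnlyFilter = filters.ArtifactCollectionFilter.subclass(
    name="KeyOnlyFilter",
    include_fields=["key"],
)
key_only = KeyOnlyFilter(key=filters.ArtifactCollectionFilterKey(any_=["my-artifact"]))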
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterFlowRunId","title":"ArtifactCollectionFilterFlowRunId","text":"

Bases: PrefectFilterBaseModel

Filter by ArtifactCollection.flow_run_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactCollectionFilterFlowRunId(PrefectFilterBaseModel):\n\"\"\"Filter by `ArtifactCollection.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.flow_run_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterFlowRunId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterFlowRunId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass
include_fields (List[str], default None): fields to include
exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterKey","title":"ArtifactCollectionFilterKey","text":"

Bases: PrefectFilterBaseModel

Filter by ArtifactCollection.key.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactCollectionFilterKey(PrefectFilterBaseModel):\n\"\"\"Filter by `ArtifactCollection.key`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact keys to include\"\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match artifact keys against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        example=\"my-artifact-%\",\n    )\n\n    exists_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If `true`, only include artifacts with a non-null key. If `false`, \"\n            \"only include artifacts with a null key. Should return all rows in \"\n            \"the ArtifactCollection table if specified.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.key.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.ArtifactCollection.key.ilike(f\"%{self.like_}%\"))\n        if self.exists_ is not None:\n            filters.append(\n                db.ArtifactCollection.key.isnot(None)\n                if self.exists_\n                else db.ArtifactCollection.key.is_(None)\n            )\n        return filters\n
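A few illustrative instances showing how the three criteria differ:

from prefect.server.schemas import filters

# Exact membership, SQL-wildcard matching, and null/non-null checks:
exact = filters.ArtifactCollectionFilterKey(any_=["daily-report"])
wildcard = filters.ArtifactCollectionFilterKey(like_="daily-%")
keyed_only = filters.ArtifactCollectionFilterKey(exists_=True)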
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterKey.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterKey.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass
include_fields (List[str], default None): fields to include
exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterLatestId","title":"ArtifactCollectionFilterLatestId","text":"

Bases: PrefectFilterBaseModel

Filter by ArtifactCollection.latest_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactCollectionFilterLatestId(PrefectFilterBaseModel):\n\"\"\"Filter by `ArtifactCollection.latest_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of artifact ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.latest_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterLatestId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterLatestId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass
include_fields (List[str], default None): fields to include
exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterTaskRunId","title":"ArtifactCollectionFilterTaskRunId","text":"

Bases: PrefectFilterBaseModel

Filter by ArtifactCollection.task_run_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactCollectionFilterTaskRunId(PrefectFilterBaseModel):\n\"\"\"Filter by `ArtifactCollection.task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.task_run_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterTaskRunId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterTaskRunId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass
include_fields (List[str], default None): fields to include
exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterType","title":"ArtifactCollectionFilterType","text":"

Bases: PrefectFilterBaseModel

Filter by ArtifactCollection.type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactCollectionFilterType(PrefectFilterBaseModel):\n\"\"\"Filter by `ArtifactCollection.type`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to include\"\n    )\n    not_any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to exclude\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.type.in_(self.any_))\n        if self.not_any_ is not None:\n            filters.append(db.ArtifactCollection.type.notin_(self.not_any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterType.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterType.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass
include_fields (List[str], default None): fields to include
exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilter","title":"ArtifactFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter artifacts. Only artifacts matching all criteria will be returned

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter artifacts. Only artifacts matching all criteria will be returned\"\"\"\n\n    id: Optional[ArtifactFilterId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.id`\"\n    )\n    key: Optional[ArtifactFilterKey] = Field(\n        default=None, description=\"Filter criteria for `Artifact.key`\"\n    )\n    flow_run_id: Optional[ArtifactFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n    )\n    task_run_id: Optional[ArtifactFilterTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n    )\n    type: Optional[ArtifactFilterType] = Field(\n        default=None, description=\"Filter criteria for `Artifact.type`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.key is not None:\n            filters.append(self.key.as_sql_filter(db))\n        if self.flow_run_id is not None:\n            filters.append(self.flow_run_id.as_sql_filter(db))\n        if self.task_run_id is not None:\n            filters.append(self.task_run_id.as_sql_filter(db))\n        if self.type is not None:\n            filters.append(self.type.as_sql_filter(db))\n\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass
include_fields (List[str], default None): fields to include
exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterFlowRunId","title":"ArtifactFilterFlowRunId","text":"

Bases: PrefectFilterBaseModel

Filter by Artifact.flow_run_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactFilterFlowRunId(PrefectFilterBaseModel):\n\"\"\"Filter by `Artifact.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.flow_run_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterFlowRunId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterFlowRunId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass
include_fields (List[str], default None): fields to include
exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterId","title":"ArtifactFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by Artifact.id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by `Artifact.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of artifact ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass
include_fields (List[str], default None): fields to include
exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterKey","title":"ArtifactFilterKey","text":"

Bases: PrefectFilterBaseModel

Filter by Artifact.key.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactFilterKey(PrefectFilterBaseModel):\n\"\"\"Filter by `Artifact.key`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact keys to include\"\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match artifact keys against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        example=\"my-artifact-%\",\n    )\n\n    exists_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If `true`, only include artifacts with a non-null key. If `false`, \"\n            \"only include artifacts with a null key.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.key.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Artifact.key.ilike(f\"%{self.like_}%\"))\n        if self.exists_ is not None:\n            filters.append(\n                db.Artifact.key.isnot(None)\n                if self.exists_\n                else db.Artifact.key.is_(None)\n            )\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterKey.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterKey.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass
include_fields (List[str], default None): fields to include
exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterTaskRunId","title":"ArtifactFilterTaskRunId","text":"

Bases: PrefectFilterBaseModel

Filter by Artifact.task_run_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactFilterTaskRunId(PrefectFilterBaseModel):\n\"\"\"Filter by `Artifact.task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.task_run_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterTaskRunId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterTaskRunId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterType","title":"ArtifactFilterType","text":"

Bases: PrefectFilterBaseModel

Filter by Artifact.type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactFilterType(PrefectFilterBaseModel):\n\"\"\"Filter by `Artifact.type`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to include\"\n    )\n    not_any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to exclude\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.type.in_(self.any_))\n        if self.not_any_ is not None:\n            filters.append(db.Artifact.type.notin_(self.not_any_))\n        return filters\n
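For example, a sketch combining any_ and not_any_; the artifact type strings are illustrative.

from prefect.server.schemas.filters import ArtifactFilterType

# Include "table" artifacts while explicitly excluding "markdown" artifacts.
type_filter = ArtifactFilterType(any_=["table"], not_any_=["markdown"])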
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterType.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterType.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilter","title":"BlockDocumentFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockDocumentFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned\"\"\"\n\n    id: Optional[BlockDocumentFilterId] = Field(\n        default=None, description=\"Filter criteria for `BlockDocument.id`\"\n    )\n    is_anonymous: Optional[BlockDocumentFilterIsAnonymous] = Field(\n        # default is to exclude anonymous blocks\n        BlockDocumentFilterIsAnonymous(eq_=False),\n        description=(\n            \"Filter criteria for `BlockDocument.is_anonymous`. \"\n            \"Defaults to excluding anonymous blocks.\"\n        ),\n    )\n    block_type_id: Optional[BlockDocumentFilterBlockTypeId] = Field(\n        default=None, description=\"Filter criteria for `BlockDocument.block_type_id`\"\n    )\n    name: Optional[BlockDocumentFilterName] = Field(\n        default=None, description=\"Filter criteria for `BlockDocument.name`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.is_anonymous is not None:\n            filters.append(self.is_anonymous.as_sql_filter(db))\n        if self.block_type_id is not None:\n            filters.append(self.block_type_id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        return filters\n
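A sketch of a composite block document filter; the block document names are hypothetical. Note that anonymous block documents are already excluded by the is_anonymous default described above.

from prefect.server.schemas.filters import (
    BlockDocumentFilter,
    BlockDocumentFilterName,
)

# Match only the named block documents; anonymous documents are excluded
# by default via `is_anonymous`.
document_filter = BlockDocumentFilter(
    name=BlockDocumentFilterName(any_=["prod-database", "prod-bucket"]),
)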
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterBlockTypeId","title":"BlockDocumentFilterBlockTypeId","text":"

Bases: PrefectFilterBaseModel

Filter by BlockDocument.block_type_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockDocumentFilterBlockTypeId(PrefectFilterBaseModel):\n\"\"\"Filter by `BlockDocument.block_type_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of block type ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockDocument.block_type_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterBlockTypeId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterBlockTypeId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterId","title":"BlockDocumentFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by BlockDocument.id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockDocumentFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by `BlockDocument.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of block ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockDocument.id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterIsAnonymous","title":"BlockDocumentFilterIsAnonymous","text":"

Bases: PrefectFilterBaseModel

Filter by BlockDocument.is_anonymous.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockDocumentFilterIsAnonymous(PrefectFilterBaseModel):\n\"\"\"Filter by `BlockDocument.is_anonymous`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Filter block documents for only those that are or are not anonymous.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.eq_ is not None:\n            filters.append(db.BlockDocument.is_anonymous.is_(self.eq_))\n        return filters\n
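A sketch of the three useful settings for eq_:

from prefect.server.schemas.filters import BlockDocumentFilterIsAnonymous

only_anonymous = BlockDocumentFilterIsAnonymous(eq_=True)   # anonymous documents only
only_named = BlockDocumentFilterIsAnonymous(eq_=False)      # named documents only
no_constraint = BlockDocumentFilterIsAnonymous(eq_=None)    # no filtering on anonymity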
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterIsAnonymous.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterIsAnonymous.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterName","title":"BlockDocumentFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by BlockDocument.name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockDocumentFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `BlockDocument.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of block names to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockDocument.name.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilter","title":"BlockSchemaFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter BlockSchemas

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockSchemaFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter BlockSchemas\"\"\"\n\n    block_type_id: Optional[BlockSchemaFilterBlockTypeId] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.block_type_id`\"\n    )\n    block_capabilities: Optional[BlockSchemaFilterCapabilities] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.capabilities`\"\n    )\n    id: Optional[BlockSchemaFilterId] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.id`\"\n    )\n    version: Optional[BlockSchemaFilterVersion] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.version`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.block_type_id is not None:\n            filters.append(self.block_type_id.as_sql_filter(db))\n        if self.block_capabilities is not None:\n            filters.append(self.block_capabilities.as_sql_filter(db))\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.version is not None:\n            filters.append(self.version.as_sql_filter(db))\n\n        return filters\n
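A sketch combining capability and version criteria (both sub-filters are documented below); the capability and version strings mirror the examples given on those fields.

from prefect.server.schemas.filters import (
    BlockSchemaFilter,
    BlockSchemaFilterCapabilities,
    BlockSchemaFilterVersion,
)

# Block schemas must satisfy every provided criterion to match.
schema_filter = BlockSchemaFilter(
    block_capabilities=BlockSchemaFilterCapabilities(all_=["read-storage"]),
    version=BlockSchemaFilterVersion(any_=["2.0.0", "2.1.0"]),
)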
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterBlockTypeId","title":"BlockSchemaFilterBlockTypeId","text":"

Bases: PrefectFilterBaseModel

Filter by BlockSchema.block_type_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockSchemaFilterBlockTypeId(PrefectFilterBaseModel):\n\"\"\"Filter by `BlockSchema.block_type_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of block type ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockSchema.block_type_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterBlockTypeId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterBlockTypeId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterCapabilities","title":"BlockSchemaFilterCapabilities","text":"

Bases: PrefectFilterBaseModel

Filter by BlockSchema.capabilities

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockSchemaFilterCapabilities(PrefectFilterBaseModel):\n\"\"\"Filter by `BlockSchema.capabilities`\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        example=[\"write-storage\", \"read-storage\"],\n        description=(\n            \"A list of block capabilities. Block entities will be returned only if an\"\n            \" associated block schema has a superset of the defined capabilities.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.BlockSchema.capabilities, self.all_))\n        return filters\n
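The superset semantics in practice, as a sketch:

from prefect.server.schemas.filters import BlockSchemaFilterCapabilities

# Matches only schemas whose capabilities include BOTH keys: a schema with
# ["read-storage", "write-storage", "list-storage"] would match, while one
# with only ["read-storage"] would not.
capabilities_filter = BlockSchemaFilterCapabilities(
    all_=["write-storage", "read-storage"]
)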
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterCapabilities.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterCapabilities.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterId","title":"BlockSchemaFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by BlockSchema.id

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockSchemaFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by BlockSchema.id\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockSchema.id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterVersion","title":"BlockSchemaFilterVersion","text":"

Bases: PrefectFilterBaseModel

Filter by BlockSchema.version

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockSchemaFilterVersion(PrefectFilterBaseModel):\n\"\"\"Filter by `BlockSchema.version`\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        example=[\"2.0.0\", \"2.1.0\"],\n        description=\"A list of block schema versions.\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockSchema.version.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterVersion.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterVersion.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilter","title":"BlockTypeFilter","text":"

Bases: PrefectFilterBaseModel

Filter BlockTypes

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockTypeFilter(PrefectFilterBaseModel):\n\"\"\"Filter BlockTypes\"\"\"\n\n    name: Optional[BlockTypeFilterName] = Field(\n        default=None, description=\"Filter criteria for `BlockType.name`\"\n    )\n\n    slug: Optional[BlockTypeFilterSlug] = Field(\n        default=None, description=\"Filter criteria for `BlockType.slug`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.slug is not None:\n            filters.append(self.slug.as_sql_filter(db))\n\n        return filters\n
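A sketch of combining both criteria; the slug value is an illustrative block type slug, and both criteria must match.

from prefect.server.schemas.filters import (
    BlockTypeFilter,
    BlockTypeFilterName,
    BlockTypeFilterSlug,
)

# Case-insensitive partial match on the name, restricted to specific slugs.
block_type_filter = BlockTypeFilter(
    name=BlockTypeFilterName(like_="secret"),
    slug=BlockTypeFilterSlug(any_=["secret"]),
)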
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterName","title":"BlockTypeFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by BlockType.name

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockTypeFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `BlockType.name`\"\"\"\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        example=\"marvin\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.like_ is not None:\n            filters.append(db.BlockType.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterSlug","title":"BlockTypeFilterSlug","text":"

Bases: PrefectFilterBaseModel

Filter by BlockType.slug

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockTypeFilterSlug(PrefectFilterBaseModel):\n\"\"\"Filter by `BlockType.slug`\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of slugs to match\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockType.slug.in_(self.any_))\n\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterSlug.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterSlug.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilter","title":"DeploymentFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter for deployments. Only deployments matching all criteria will be returned.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class DeploymentFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter for deployments. Only deployments matching all criteria will be returned.\"\"\"\n\n    id: Optional[DeploymentFilterId] = Field(\n        default=None, description=\"Filter criteria for `Deployment.id`\"\n    )\n    name: Optional[DeploymentFilterName] = Field(\n        default=None, description=\"Filter criteria for `Deployment.name`\"\n    )\n    is_schedule_active: Optional[DeploymentFilterIsScheduleActive] = Field(\n        default=None, description=\"Filter criteria for `Deployment.is_schedule_active`\"\n    )\n    tags: Optional[DeploymentFilterTags] = Field(\n        default=None, description=\"Filter criteria for `Deployment.tags`\"\n    )\n    work_queue_name: Optional[DeploymentFilterWorkQueueName] = Field(\n        default=None, description=\"Filter criteria for `Deployment.work_queue_name`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.is_schedule_active is not None:\n            filters.append(self.is_schedule_active.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n        if self.work_queue_name is not None:\n            filters.append(self.work_queue_name.as_sql_filter(db))\n\n        return filters\n
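A sketch using the sub-filters documented on this page; the name fragment is hypothetical. These server-side schemas back the filter API routes; when querying from the Python client you would typically use the analogous classes in prefect.client.schemas.filters (an assumption worth verifying against the client API reference).

from prefect.server.schemas.filters import (
    DeploymentFilter,
    DeploymentFilterIsScheduleActive,
    DeploymentFilterName,
)

# Deployments must match every provided criterion: a name containing "etl"
# and an active schedule.
deployment_filter = DeploymentFilter(
    name=DeploymentFilterName(like_="etl"),
    is_schedule_active=DeploymentFilterIsScheduleActive(eq_=True),
)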
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterId","title":"DeploymentFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by Deployment.id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class DeploymentFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by `Deployment.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of deployment ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Deployment.id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterIsScheduleActive","title":"DeploymentFilterIsScheduleActive","text":"

Bases: PrefectFilterBaseModel

Filter by Deployment.is_schedule_active.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class DeploymentFilterIsScheduleActive(PrefectFilterBaseModel):\n\"\"\"Filter by `Deployment.is_schedule_active`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=\"Only returns where deployment schedule is/is not active\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.eq_ is not None:\n            filters.append(db.Deployment.is_schedule_active.is_(self.eq_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterIsScheduleActive.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterIsScheduleActive.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterName","title":"DeploymentFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by Deployment.name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class DeploymentFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `Deployment.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of deployment names to include\",\n        example=[\"my-deployment-1\", \"my-deployment-2\"],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        example=\"marvin\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Deployment.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Deployment.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
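A small usage sketch for this filter, using the fields shown in the source above; such criteria are typically attached to the `name` field of a `DeploymentFilter`.

```python
from prefect.server.schemas.filters import DeploymentFilterName

# Exact-name matching against a list of deployment names.
by_name = DeploymentFilterName(any_=["my-deployment-1", "my-deployment-2"])

# Case-insensitive partial matching: matches 'marvin', 'sad-Marvin', 'marvin-robot', ...
by_pattern = DeploymentFilterName(like_="marvin")
```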
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterTags","title":"DeploymentFilterTags","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by Deployment.tags.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class DeploymentFilterTags(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `Deployment.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        example=[\"tag-1\", \"tag-2\"],\n        description=(\n            \"A list of tags. Deployments will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include deployments without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.Deployment.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.Deployment.tags == [] if self.is_null_ else db.Deployment.tags != []\n            )\n        return filters\n
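A minimal sketch of the two ways this tags filter can be used, based on the fields shown above.

```python
from prefect.server.schemas.filters import DeploymentFilterTags

# Deployments whose tags are a superset of both "prod" and "etl".
tagged = DeploymentFilterTags(all_=["prod", "etl"])

# Deployments with no tags at all.
untagged = DeploymentFilterTags(is_null_=True)
```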
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterTags.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterTags.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterWorkQueueName","title":"DeploymentFilterWorkQueueName","text":"

Bases: PrefectFilterBaseModel

Filter by Deployment.work_queue_name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class DeploymentFilterWorkQueueName(PrefectFilterBaseModel):\n\"\"\"Filter by `Deployment.work_queue_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of work queue names to include\",\n        example=[\"work_queue_1\", \"work_queue_2\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Deployment.work_queue_name.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterWorkQueueName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterWorkQueueName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FilterSet","title":"FilterSet","text":"

Bases: PrefectBaseModel

A collection of filters for common objects

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FilterSet(PrefectBaseModel):\n\"\"\"A collection of filters for common objects\"\"\"\n\n    flows: FlowFilter = Field(\n        default_factory=FlowFilter, description=\"Filters that apply to flows\"\n    )\n    flow_runs: FlowRunFilter = Field(\n        default_factory=FlowRunFilter, description=\"Filters that apply to flow runs\"\n    )\n    task_runs: TaskRunFilter = Field(\n        default_factory=TaskRunFilter, description=\"Filters that apply to task runs\"\n    )\n    deployments: DeploymentFilter = Field(\n        default_factory=DeploymentFilter,\n        description=\"Filters that apply to deployments\",\n    )\n
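A minimal sketch of composing a `FilterSet` from the flow and flow-run filter classes documented elsewhere on this page; members left unspecified fall back to empty filters via their `default_factory`.

```python
from prefect.server.schemas.filters import (
    FilterSet,
    FlowFilter,
    FlowFilterTags,
    FlowRunFilter,
    FlowRunFilterName,
)

# Bundle flow and flow-run criteria in one object.
filter_set = FilterSet(
    flows=FlowFilter(tags=FlowFilterTags(all_=["prod"])),
    flow_runs=FlowRunFilter(name=FlowRunFilterName(like_="etl")),
)

print(filter_set.json())
```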
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FilterSet.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FilterSet.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilter","title":"FlowFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter for flows. Only flows matching all criteria will be returned.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter for flows. Only flows matching all criteria will be returned.\"\"\"\n\n    id: Optional[FlowFilterId] = Field(\n        default=None, description=\"Filter criteria for `Flow.id`\"\n    )\n    name: Optional[FlowFilterName] = Field(\n        default=None, description=\"Filter criteria for `Flow.name`\"\n    )\n    tags: Optional[FlowFilterTags] = Field(\n        default=None, description=\"Filter criteria for `Flow.tags`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n\n        return filters\n
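A short sketch of combining criteria on this filter, using the name and tags criteria classes documented further down this page. Only flows satisfying all supplied criteria match.

```python
from prefect.server.schemas.filters import (
    FlowFilter,
    FlowFilterName,
    FlowFilterTags,
)

# Flows whose name contains "marvin" (case-insensitive) AND whose tags include "prod".
flow_filter = FlowFilter(
    name=FlowFilterName(like_="marvin"),
    tags=FlowFilterTags(all_=["prod"]),
)
```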
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterId","title":"FlowFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by Flow.id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by `Flow.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Flow.id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterName","title":"FlowFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by Flow.name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `Flow.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of flow names to include\",\n        example=[\"my-flow-1\", \"my-flow-2\"],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        example=\"marvin\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Flow.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Flow.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterTags","title":"FlowFilterTags","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by Flow.tags.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowFilterTags(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `Flow.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        example=[\"tag-1\", \"tag-2\"],\n        description=(\n            \"A list of tags. Flows will be returned only if their tags are a superset\"\n            \" of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include flows without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.Flow.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(db.Flow.tags == [] if self.is_null_ else db.Flow.tags != [])\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterTags.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterTags.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilter","title":"FlowRunFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter flow runs. Only flow runs matching all criteria will be returned.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter flow runs. Only flow runs matching all criteria will be returned\"\"\"\n\n    id: Optional[FlowRunFilterId] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.id`\"\n    )\n    name: Optional[FlowRunFilterName] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.name`\"\n    )\n    tags: Optional[FlowRunFilterTags] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.tags`\"\n    )\n    deployment_id: Optional[FlowRunFilterDeploymentId] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.deployment_id`\"\n    )\n    work_queue_name: Optional[FlowRunFilterWorkQueueName] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.work_queue_name\"\n    )\n    state: Optional[FlowRunFilterState] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.state`\"\n    )\n    flow_version: Optional[FlowRunFilterFlowVersion] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.flow_version`\"\n    )\n    start_time: Optional[FlowRunFilterStartTime] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.start_time`\"\n    )\n    expected_start_time: Optional[FlowRunFilterExpectedStartTime] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.expected_start_time`\"\n    )\n    next_scheduled_start_time: Optional[FlowRunFilterNextScheduledStartTime] = Field(\n        default=None,\n        description=\"Filter criteria for `FlowRun.next_scheduled_start_time`\",\n    )\n    parent_task_run_id: Optional[FlowRunFilterParentTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.parent_task_run_id`\"\n    )\n    idempotency_key: Optional[FlowRunFilterIdempotencyKey] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.idempotency_key`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n        if self.deployment_id is not None:\n            filters.append(self.deployment_id.as_sql_filter(db))\n        if self.work_queue_name is not None:\n            filters.append(self.work_queue_name.as_sql_filter(db))\n        if self.flow_version is not None:\n            filters.append(self.flow_version.as_sql_filter(db))\n        if self.state is not None:\n            filters.append(self.state.as_sql_filter(db))\n        if self.start_time is not None:\n            filters.append(self.start_time.as_sql_filter(db))\n        if self.expected_start_time is not None:\n            filters.append(self.expected_start_time.as_sql_filter(db))\n        if self.next_scheduled_start_time is not None:\n            filters.append(self.next_scheduled_start_time.as_sql_filter(db))\n        if self.parent_task_run_id is not None:\n            filters.append(self.parent_task_run_id.as_sql_filter(db))\n        if self.idempotency_key is not None:\n            filters.append(self.idempotency_key.as_sql_filter(db))\n\n        return filters\n
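A minimal composition sketch using two of the criteria classes documented on this page; the UUID value is a placeholder for an actual deployment id.

```python
from uuid import UUID
from prefect.server.schemas.filters import (
    FlowRunFilter,
    FlowRunFilterDeploymentId,
    FlowRunFilterIdempotencyKey,
)

# Flow runs belonging to one deployment, excluding a known idempotency key.
run_filter = FlowRunFilter(
    deployment_id=FlowRunFilterDeploymentId(
        any_=[UUID("00000000-0000-0000-0000-000000000000")]
    ),
    idempotency_key=FlowRunFilterIdempotencyKey(not_any_=["manual-backfill"]),
)
```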
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterDeploymentId","title":"FlowRunFilterDeploymentId","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by FlowRun.deployment_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterDeploymentId(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `FlowRun.deployment_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run deployment ids to include\"\n    )\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flow runs without deployment ids\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.deployment_id.in_(self.any_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.deployment_id.is_(None)\n                if self.is_null_\n                else db.FlowRun.deployment_id.is_not(None)\n            )\n        return filters\n
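A quick sketch of the `is_null_` option on this filter, based on the fields shown above.

```python
from prefect.server.schemas.filters import FlowRunFilterDeploymentId

# Ad-hoc runs: flow runs that were not created from any deployment.
no_deployment = FlowRunFilterDeploymentId(is_null_=True)

# The inverse: only runs that do have a deployment id.
has_deployment = FlowRunFilterDeploymentId(is_null_=False)
```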
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterDeploymentId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterDeploymentId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterExpectedStartTime","title":"FlowRunFilterExpectedStartTime","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRun.expected_start_time.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterExpectedStartTime(PrefectFilterBaseModel):\n\"\"\"Filter by `FlowRun.expected_start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs scheduled to start at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs scheduled to start at or after this time\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.FlowRun.expected_start_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.FlowRun.expected_start_time >= self.after_)\n        return filters\n
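A small time-window sketch for this filter; it assumes timezone-aware `datetime` values are accepted for the `DateTimeTZ` fields.

```python
from datetime import datetime, timedelta, timezone
from prefect.server.schemas.filters import FlowRunFilterExpectedStartTime

now = datetime.now(timezone.utc)

# Runs expected to start within the last 24 hours.
recent = FlowRunFilterExpectedStartTime(
    after_=now - timedelta(days=1),
    before_=now,
)
```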
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterExpectedStartTime.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterExpectedStartTime.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterFlowVersion","title":"FlowRunFilterFlowVersion","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRun.flow_version.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterFlowVersion(PrefectFilterBaseModel):\n\"\"\"Filter by `FlowRun.flow_version`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run flow_versions to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.flow_version.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterFlowVersion.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterFlowVersion.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterId","title":"FlowRunFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRun.id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by `FlowRun.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run ids to include\"\n    )\n    not_any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run ids to exclude\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.id.in_(self.any_))\n        if self.not_any_ is not None:\n            filters.append(db.FlowRun.id.not_in(self.not_any_))\n        return filters\n
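A brief sketch of combining the include and exclude lists on this filter; the UUID values are placeholders.

```python
from uuid import UUID
from prefect.server.schemas.filters import FlowRunFilterId

keep = [UUID("11111111-1111-1111-1111-111111111111")]
drop = [UUID("22222222-2222-2222-2222-222222222222")]

# Include an explicit set of run ids while excluding another.
id_filter = FlowRunFilterId(any_=keep, not_any_=drop)
```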
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterIdempotencyKey","title":"FlowRunFilterIdempotencyKey","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRun.idempotency_key.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterIdempotencyKey(PrefectFilterBaseModel):\n\"\"\"Filter by FlowRun.idempotency_key.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run idempotency keys to include\"\n    )\n    not_any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run idempotency keys to exclude\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.idempotency_key.in_(self.any_))\n        if self.not_any_ is not None:\n            filters.append(db.FlowRun.idempotency_key.not_in(self.not_any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterIdempotencyKey.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterIdempotencyKey.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterName","title":"FlowRunFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRun.name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `FlowRun.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of flow run names to include\",\n        example=[\"my-flow-run-1\", \"my-flow-run-2\"],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        example=\"marvin\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.FlowRun.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterNextScheduledStartTime","title":"FlowRunFilterNextScheduledStartTime","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRun.next_scheduled_start_time.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterNextScheduledStartTime(PrefectFilterBaseModel):\n\"\"\"Filter by `FlowRun.next_scheduled_start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include flow runs with a next_scheduled_start_time or before this\"\n            \" time\"\n        ),\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include flow runs with a next_scheduled_start_time at or after this\"\n            \" time\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.FlowRun.next_scheduled_start_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.FlowRun.next_scheduled_start_time >= self.after_)\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterNextScheduledStartTime.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterNextScheduledStartTime.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.

include_fields (List[str], optional): fields to include. Default: None.

exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentTaskRunId","title":"FlowRunFilterParentTaskRunId","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by FlowRun.parent_task_run_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterParentTaskRunId(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `FlowRun.parent_task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run parent_task_run_ids to include\"\n    )\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flow runs without parent_task_run_id\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.parent_task_run_id.in_(self.any_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.parent_task_run_id.is_(None)\n                if self.is_null_\n                else db.FlowRun.parent_task_run_id.is_not(None)\n            )\n        return filters\n
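
A short sketch inferred from the field descriptions above: is_null_=True keeps only top-level flow runs (those without a parent task run), which excludes subflow runs.

from prefect.server.schemas.filters import FlowRunFilterParentTaskRunId\n\n# Only flow runs that are not subflows (no parent task run)\ntop_level_only = FlowRunFilterParentTaskRunId(is_null_=True)\n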
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentTaskRunId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentTaskRunId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.

include_fields (List[str], optional): fields to include. Default: None.

exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStartTime","title":"FlowRunFilterStartTime","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRun.start_time.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterStartTime(PrefectFilterBaseModel):\n\"\"\"Filter by `FlowRun.start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs starting at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs starting at or after this time\",\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only return flow runs without a start time\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.FlowRun.start_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.FlowRun.start_time >= self.after_)\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.start_time.is_(None)\n                if self.is_null_\n                else db.FlowRun.start_time.is_not(None)\n            )\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStartTime.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStartTime.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.

include_fields (List[str], optional): fields to include. Default: None.

exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterState","title":"FlowRunFilterState","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by FlowRun.state_type and FlowRun.state_name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterState(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `FlowRun.state_type` and `FlowRun.state_name`.\"\"\"\n\n    type: Optional[FlowRunFilterStateType]\n    name: Optional[FlowRunFilterStateName]\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.type is not None:\n            filters.extend(self.type._get_filter_list(db))\n        if self.name is not None:\n            filters.extend(self.name._get_filter_list(db))\n        return filters\n
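
A minimal sketch showing how the nested type and name criteria compose; the state types come from prefect.server.schemas.states.StateType, and both criteria are combined with the default and_ operator.

from prefect.server.schemas.filters import (\n    FlowRunFilterState,\n    FlowRunFilterStateName,\n    FlowRunFilterStateType,\n)\nfrom prefect.server.schemas.states import StateType\n\n# Flow runs that failed or crashed\nunhealthy = FlowRunFilterState(\n    type=FlowRunFilterStateType(any_=[StateType.FAILED, StateType.CRASHED]),\n    name=FlowRunFilterStateName(any_=[\"Failed\", \"Crashed\"]),\n)\n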
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterState.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterState.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.

include_fields (List[str], optional): fields to include. Default: None.

exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateName","title":"FlowRunFilterStateName","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRun.state_name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterStateName(PrefectFilterBaseModel):\n\"\"\"Filter by `FlowRun.state_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run state names to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.state_name.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.

include_fields (List[str], optional): fields to include. Default: None.

exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateType","title":"FlowRunFilterStateType","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRun.state_type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterStateType(PrefectFilterBaseModel):\n\"\"\"Filter by `FlowRun.state_type`.\"\"\"\n\n    any_: Optional[List[schemas.states.StateType]] = Field(\n        default=None, description=\"A list of flow run state types to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.state_type.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateType.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateType.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.

include_fields (List[str], optional): fields to include. Default: None.

exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterTags","title":"FlowRunFilterTags","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by FlowRun.tags.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterTags(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `FlowRun.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        example=[\"tag-1\", \"tag-2\"],\n        description=(\n            \"A list of tags. Flow runs will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include flow runs without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.FlowRun.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.tags == [] if self.is_null_ else db.FlowRun.tags != []\n            )\n        return filters\n
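
A minimal sketch based on the field descriptions above: all_ matches runs whose tags are a superset of the given list, while is_null_ selects untagged runs.

from prefect.server.schemas.filters import FlowRunFilterTags\n\n# Runs tagged with both \"prod\" and \"etl\"\nprod_etl = FlowRunFilterTags(all_=[\"prod\", \"etl\"])\n\n# Runs with no tags at all\nuntagged = FlowRunFilterTags(is_null_=True)\n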
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterTags.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterTags.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.

include_fields (List[str], optional): fields to include. Default: None.

exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterWorkQueueName","title":"FlowRunFilterWorkQueueName","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by FlowRun.work_queue_name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterWorkQueueName(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `FlowRun.work_queue_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of work queue names to include\",\n        example=[\"work_queue_1\", \"work_queue_2\"],\n    )\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flow runs without work queue names\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.work_queue_name.in_(self.any_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.work_queue_name.is_(None)\n                if self.is_null_\n                else db.FlowRun.work_queue_name.is_not(None)\n            )\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterWorkQueueName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterWorkQueueName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.

include_fields (List[str], optional): fields to include. Default: None.

exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilter","title":"FlowRunNotificationPolicyFilter","text":"

Bases: PrefectFilterBaseModel

Filter FlowRunNotificationPolicies.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunNotificationPolicyFilter(PrefectFilterBaseModel):\n\"\"\"Filter FlowRunNotificationPolicies.\"\"\"\n\n    is_active: Optional[FlowRunNotificationPolicyFilterIsActive] = Field(\n        default=FlowRunNotificationPolicyFilterIsActive(eq_=False),\n        description=\"Filter criteria for `FlowRunNotificationPolicy.is_active`. \",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.is_active is not None:\n            filters.append(self.is_active.as_sql_filter(db))\n\n        return filters\n
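
A short sketch: note that is_active defaults to eq_=False (inactive policies only), so selecting active policies means overriding it explicitly.

from prefect.server.schemas.filters import (\n    FlowRunNotificationPolicyFilter,\n    FlowRunNotificationPolicyFilterIsActive,\n)\n\n# Only notification policies that are currently active\nactive_policies = FlowRunNotificationPolicyFilter(\n    is_active=FlowRunNotificationPolicyFilterIsActive(eq_=True),\n)\n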
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.

include_fields (List[str], optional): fields to include. Default: None.

exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilterIsActive","title":"FlowRunNotificationPolicyFilterIsActive","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRunNotificationPolicy.is_active.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunNotificationPolicyFilterIsActive(PrefectFilterBaseModel):\n\"\"\"Filter by `FlowRunNotificationPolicy.is_active`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Filter notification policies for only those that are or are not active.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.eq_ is not None:\n            filters.append(db.FlowRunNotificationPolicy.is_active.is_(self.eq_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilterIsActive.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilterIsActive.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.

include_fields (List[str], optional): fields to include. Default: None.

exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilter","title":"LogFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter logs. Only logs matching all criteria will be returned

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class LogFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter logs. Only logs matching all criteria will be returned\"\"\"\n\n    level: Optional[LogFilterLevel] = Field(\n        default=None, description=\"Filter criteria for `Log.level`\"\n    )\n    timestamp: Optional[LogFilterTimestamp] = Field(\n        default=None, description=\"Filter criteria for `Log.timestamp`\"\n    )\n    flow_run_id: Optional[LogFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `Log.flow_run_id`\"\n    )\n    task_run_id: Optional[LogFilterTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `Log.task_run_id`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.level is not None:\n            filters.append(self.level.as_sql_filter(db))\n        if self.timestamp is not None:\n            filters.append(self.timestamp.as_sql_filter(db))\n        if self.flow_run_id is not None:\n            filters.append(self.flow_run_id.as_sql_filter(db))\n        if self.task_run_id is not None:\n            filters.append(self.task_run_id.as_sql_filter(db))\n\n        return filters\n
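
A minimal sketch combining level and flow-run criteria; the UUID below is a freshly generated placeholder rather than a real flow run ID.

import logging\nfrom uuid import uuid4\n\nfrom prefect.server.schemas.filters import (\n    LogFilter,\n    LogFilterFlowRunId,\n    LogFilterLevel,\n)\n\n# Warnings and above for a single (placeholder) flow run\nlog_filter = LogFilter(\n    level=LogFilterLevel(ge_=logging.WARNING),\n    flow_run_id=LogFilterFlowRunId(any_=[uuid4()]),\n)\n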
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.

include_fields (List[str], optional): fields to include. Default: None.

exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterFlowRunId","title":"LogFilterFlowRunId","text":"

Bases: PrefectFilterBaseModel

Filter by Log.flow_run_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class LogFilterFlowRunId(PrefectFilterBaseModel):\n\"\"\"Filter by `Log.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Log.flow_run_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterFlowRunId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterFlowRunId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.

include_fields (List[str], optional): fields to include. Default: None.

exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterLevel","title":"LogFilterLevel","text":"

Bases: PrefectFilterBaseModel

Filter by Log.level.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class LogFilterLevel(PrefectFilterBaseModel):\n\"\"\"Filter by `Log.level`.\"\"\"\n\n    ge_: Optional[int] = Field(\n        default=None,\n        description=\"Include logs with a level greater than or equal to this level\",\n        example=20,\n    )\n\n    le_: Optional[int] = Field(\n        default=None,\n        description=\"Include logs with a level less than or equal to this level\",\n        example=50,\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.ge_ is not None:\n            filters.append(db.Log.level >= self.ge_)\n        if self.le_ is not None:\n            filters.append(db.Log.level <= self.le_)\n        return filters\n
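
A short sketch: the ge_ and le_ bounds are standard Python logging level numbers (20 is INFO, 40 is ERROR), so the stdlib constants can be used directly.

import logging\n\nfrom prefect.server.schemas.filters import LogFilterLevel\n\n# INFO (20) up to and including ERROR (40)\ninfo_to_error = LogFilterLevel(ge_=logging.INFO, le_=logging.ERROR)\n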
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterLevel.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterLevel.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.

include_fields (List[str], optional): fields to include. Default: None.

exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterName","title":"LogFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by Log.name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class LogFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `Log.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of log names to include\",\n        example=[\"prefect.logger.flow_runs\", \"prefect.logger.task_runs\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Log.name.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.

include_fields (List[str], optional): fields to include. Default: None.

exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTaskRunId","title":"LogFilterTaskRunId","text":"

Bases: PrefectFilterBaseModel

Filter by Log.task_run_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class LogFilterTaskRunId(PrefectFilterBaseModel):\n\"\"\"Filter by `Log.task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Log.task_run_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTaskRunId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTaskRunId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.

include_fields (List[str], optional): fields to include. Default: None.

exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTimestamp","title":"LogFilterTimestamp","text":"

Bases: PrefectFilterBaseModel

Filter by Log.timestamp.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class LogFilterTimestamp(PrefectFilterBaseModel):\n\"\"\"Filter by `Log.timestamp`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include logs with a timestamp at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include logs with a timestamp at or after this time\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.Log.timestamp <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.Log.timestamp >= self.after_)\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTimestamp.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTimestamp.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.

include_fields (List[str], optional): fields to include. Default: None.

exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.Operator","title":"Operator","text":"

Bases: AutoEnum

Operators for combining filter criteria.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class Operator(AutoEnum):\n\"\"\"Operators for combining filter criteria.\"\"\"\n\n    and_ = AutoEnum.auto()\n    or_ = AutoEnum.auto()\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.Operator.auto","title":"auto staticmethod","text":"

Exposes enum.auto() to avoid requiring a second import to use AutoEnum

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
@staticmethod\ndef auto():\n\"\"\"\n    Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n    \"\"\"\n    return auto()\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectFilterBaseModel","title":"PrefectFilterBaseModel","text":"

Bases: PrefectBaseModel

Base model for Prefect filters

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class PrefectFilterBaseModel(PrefectBaseModel):\n\"\"\"Base model for Prefect filters\"\"\"\n\n    class Config:\n        extra = \"forbid\"\n\n    def as_sql_filter(self, db: \"PrefectDBInterface\") -> \"BooleanClauseList\":\n\"\"\"Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter.\"\"\"\n        filters = self._get_filter_list(db)\n        if not filters:\n            return True\n        return sa.and_(*filters)\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n\"\"\"Return a list of boolean filter statements based on filter parameters\"\"\"\n        raise NotImplementedError(\"_get_filter_list must be implemented\")\n
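
A hedged sketch of the contract this base class defines: a subclass declares its criteria as pydantic fields and implements _get_filter_list to turn them into SQLAlchemy clauses. The filter below is hypothetical and not part of Prefect, and the Field import may differ by Prefect/pydantic version.

from typing import List, Optional\n\nfrom pydantic import Field  # assumption: pydantic v1-style import, matching this Prefect version\n\nfrom prefect.server.schemas.filters import PrefectFilterBaseModel\n\n\nclass FlowRunFilterNameContains(PrefectFilterBaseModel):\n    \"\"\"Hypothetical filter: flow runs whose name contains a substring.\"\"\"\n\n    contains_: Optional[str] = Field(\n        default=None,\n        description=\"Only include flow runs whose name contains this text\",\n    )\n\n    def _get_filter_list(self, db) -> List:\n        filters = []\n        if self.contains_ is not None:\n            filters.append(db.FlowRun.name.ilike(f\"%{self.contains_}%\"))\n        return filters\n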
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectFilterBaseModel.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectFilterBaseModel.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
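
For instance, a narrowed variant of a filter can be derived at runtime with the documented signature; a minimal sketch:

from prefect.server.schemas.filters import TaskRunFilter\n\n# Keep only the `id` and `tags` criteria from TaskRunFilter\nSlimTaskRunFilter = TaskRunFilter.subclass(\n    name='SlimTaskRunFilter',\n    include_fields=['id', 'tags'],\n)\nprint(sorted(SlimTaskRunFilter.__fields__))  # expected: ['id', 'tags']\n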
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectOperatorFilterBaseModel","title":"PrefectOperatorFilterBaseModel","text":"

Bases: PrefectFilterBaseModel

Base model for Prefect filters that combines criteria with a user-provided operator

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class PrefectOperatorFilterBaseModel(PrefectFilterBaseModel):\n\"\"\"Base model for Prefect filters that combines criteria with a user-provided operator\"\"\"\n\n    operator: Operator = Field(\n        default=Operator.and_,\n        description=\"Operator for combining filter criteria. Defaults to 'and_'.\",\n    )\n\n    def as_sql_filter(self, db: \"PrefectDBInterface\") -> \"BooleanClauseList\":\n        filters = self._get_filter_list(db)\n        if not filters:\n            return True\n        return sa.and_(*filters) if self.operator == Operator.and_ else sa.or_(*filters)\n
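
By way of example (sketch): passing operator=Operator.or_ switches the combination of the individual criteria from sa.and_ to sa.or_, so a tags filter can match either criterion:

from prefect.server.schemas.filters import Operator, TaskRunFilterTags\n\n# Default `and_`: task runs must carry both tags\nboth = TaskRunFilterTags(all_=['prod', 'etl'])\n\n# `or_`: task runs either include the 'prod' tag or have no tags at all\neither = TaskRunFilterTags(all_=['prod'], is_null_=True, operator=Operator.or_)\n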
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectOperatorFilterBaseModel.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectOperatorFilterBaseModel.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilter","title":"TaskRunFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter task runs. Only task runs matching all criteria will be returned

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class TaskRunFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter task runs. Only task runs matching all criteria will be returned\"\"\"\n\n    id: Optional[TaskRunFilterId] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.id`\"\n    )\n    name: Optional[TaskRunFilterName] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.name`\"\n    )\n    tags: Optional[TaskRunFilterTags] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.tags`\"\n    )\n    state: Optional[TaskRunFilterState] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.state`\"\n    )\n    start_time: Optional[TaskRunFilterStartTime] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.start_time`\"\n    )\n    subflow_runs: Optional[TaskRunFilterSubFlowRuns] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.subflow_run`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n        if self.state is not None:\n            filters.append(self.state.as_sql_filter(db))\n        if self.start_time is not None:\n            filters.append(self.start_time.as_sql_filter(db))\n        if self.subflow_runs is not None:\n            filters.append(self.subflow_runs.as_sql_filter(db))\n\n        return filters\n
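
Putting the nested criteria together, a composite filter might look like the following sketch (field names follow the source above; the tag and state values are only examples):

import pendulum\n\nfrom prefect.server.schemas.filters import (\n    TaskRunFilter,\n    TaskRunFilterStartTime,\n    TaskRunFilterState,\n    TaskRunFilterStateType,\n    TaskRunFilterTags,\n)\nfrom prefect.server.schemas.states import StateType\n\ntask_run_filter = TaskRunFilter(\n    tags=TaskRunFilterTags(all_=['etl']),\n    state=TaskRunFilterState(type=TaskRunFilterStateType(any_=[StateType.FAILED])),\n    start_time=TaskRunFilterStartTime(after_=pendulum.now('UTC').subtract(days=1)),\n)\n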
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterId","title":"TaskRunFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by TaskRun.id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class TaskRunFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by `TaskRun.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.TaskRun.id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterName","title":"TaskRunFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by TaskRun.name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class TaskRunFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `TaskRun.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of task run names to include\",\n        example=[\"my-task-run-1\", \"my-task-run-2\"],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        example=\"marvin\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.TaskRun.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.TaskRun.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
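
For example (sketch): any_ matches exact names, while like_ performs the case-insensitive partial match described above (the filter wraps the value in % wildcards itself):

from prefect.server.schemas.filters import TaskRunFilterName\n\nexact = TaskRunFilterName(any_=['my-task-run-1', 'my-task-run-2'])\npartial = TaskRunFilterName(like_='marvin')  # matches 'marvin', 'sad-Marvin', 'marvin-robot', ...\n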
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStartTime","title":"TaskRunFilterStartTime","text":"

Bases: PrefectFilterBaseModel

Filter by TaskRun.start_time.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class TaskRunFilterStartTime(PrefectFilterBaseModel):\n\"\"\"Filter by `TaskRun.start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include task runs starting at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include task runs starting at or after this time\",\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only return task runs without a start time\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.TaskRun.start_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.TaskRun.start_time >= self.after_)\n        if self.is_null_ is not None:\n            filters.append(\n                db.TaskRun.start_time.is_(None)\n                if self.is_null_\n                else db.TaskRun.start_time.is_not(None)\n            )\n        return filters\n
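
For example, a window over the last six hours can be expressed as below (sketch; pendulum datetimes satisfy the DateTimeTZ fields):

import pendulum\n\nfrom prefect.server.schemas.filters import TaskRunFilterStartTime\n\nnow = pendulum.now('UTC')\nrecent = TaskRunFilterStartTime(after_=now.subtract(hours=6), before_=now)\n\n# Only task runs that never started\nnever_started = TaskRunFilterStartTime(is_null_=True)\n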
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStartTime.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStartTime.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterState","title":"TaskRunFilterState","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by task run state. Combines criteria for TaskRun.state_type and TaskRun.state_name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class TaskRunFilterState(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `TaskRun.type` and `TaskRun.name`.\"\"\"\n\n    type: Optional[TaskRunFilterStateType]\n    name: Optional[TaskRunFilterStateName]\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.type is not None:\n            filters.extend(self.type._get_filter_list(db))\n        if self.name is not None:\n            filters.extend(self.name._get_filter_list(db))\n        return filters\n
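
For example (sketch), state type and state name criteria can be combined; types are StateType enum values, while names are the human-readable labels such as 'Failed':

from prefect.server.schemas.filters import (\n    TaskRunFilterState,\n    TaskRunFilterStateName,\n    TaskRunFilterStateType,\n)\nfrom prefect.server.schemas.states import StateType\n\nstate_filter = TaskRunFilterState(\n    type=TaskRunFilterStateType(any_=[StateType.FAILED, StateType.CRASHED]),\n    name=TaskRunFilterStateName(any_=['Failed', 'Crashed']),\n)\n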
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterState.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterState.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateName","title":"TaskRunFilterStateName","text":"

Bases: PrefectFilterBaseModel

Filter by TaskRun.state_name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class TaskRunFilterStateName(PrefectFilterBaseModel):\n\"\"\"Filter by `TaskRun.state_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of task run state names to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.TaskRun.state_name.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateType","title":"TaskRunFilterStateType","text":"

Bases: PrefectFilterBaseModel

Filter by TaskRun.state_type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class TaskRunFilterStateType(PrefectFilterBaseModel):\n\"\"\"Filter by `TaskRun.state_type`.\"\"\"\n\n    any_: Optional[List[schemas.states.StateType]] = Field(\n        default=None, description=\"A list of task run state types to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.TaskRun.state_type.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateType.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateType.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterSubFlowRuns","title":"TaskRunFilterSubFlowRuns","text":"

Bases: PrefectFilterBaseModel

Filter by TaskRun.subflow_run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class TaskRunFilterSubFlowRuns(PrefectFilterBaseModel):\n\"\"\"Filter by `TaskRun.subflow_run`.\"\"\"\n\n    exists_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If true, only include task runs that are subflow run parents; if false,\"\n            \" exclude parent task runs\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.exists_ is True:\n            filters.append(db.TaskRun.subflow_run.has())\n        elif self.exists_ is False:\n            filters.append(sa.not_(db.TaskRun.subflow_run.has()))\n        return filters\n
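
For example (sketch): exists_=True keeps only task runs that wrap a subflow run, while exists_=False removes those parent task runs from the results:

from prefect.server.schemas.filters import TaskRunFilterSubFlowRuns\n\nonly_subflow_parents = TaskRunFilterSubFlowRuns(exists_=True)\nexclude_subflow_parents = TaskRunFilterSubFlowRuns(exists_=False)\n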
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterSubFlowRuns.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterSubFlowRuns.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterTags","title":"TaskRunFilterTags","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by TaskRun.tags.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class TaskRunFilterTags(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `TaskRun.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        example=[\"tag-1\", \"tag-2\"],\n        description=(\n            \"A list of tags. Task runs will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include task runs without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.TaskRun.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.TaskRun.tags == [] if self.is_null_ else db.TaskRun.tags != []\n            )\n        return filters\n
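
As the description notes, all_ is a superset match: a run tagged ['etl', 'prod', 'nightly'] satisfies all_=['etl', 'prod'], while a run tagged only ['etl'] does not. A short sketch:

from prefect.server.schemas.filters import TaskRunFilterTags\n\ntagged = TaskRunFilterTags(all_=['etl', 'prod'])  # tags must include both\nuntagged = TaskRunFilterTags(is_null_=True)  # only task runs without tags\n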
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterTags.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterTags.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilter","title":"VariableFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter variables. Only variables matching all criteria will be returned

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class VariableFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter variables. Only variables matching all criteria will be returned\"\"\"\n\n    id: Optional[VariableFilterId] = Field(\n        default=None, description=\"Filter criteria for `Variable.id`\"\n    )\n    name: Optional[VariableFilterName] = Field(\n        default=None, description=\"Filter criteria for `Variable.name`\"\n    )\n    value: Optional[VariableFilterValue] = Field(\n        default=None, description=\"Filter criteria for `Variable.value`\"\n    )\n    tags: Optional[VariableFilterTags] = Field(\n        default=None, description=\"Filter criteria for `Variable.tags`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.value is not None:\n            filters.append(self.value.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n        return filters\n
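
A composite variable filter might look like this sketch (the name pattern and tag are only examples):

from prefect.server.schemas.filters import (\n    VariableFilter,\n    VariableFilterName,\n    VariableFilterTags,\n)\n\nvariable_filter = VariableFilter(\n    name=VariableFilterName(like_='db_'),\n    tags=VariableFilterTags(all_=['config']),\n)\n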
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterId","title":"VariableFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by Variable.id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class VariableFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by `Variable.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of variable ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Variable.id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterName","title":"VariableFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by Variable.name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class VariableFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `Variable.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of variables names to include\"\n    )\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match variable names against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        example=\"my_variable_%\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Variable.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Variable.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterTags","title":"VariableFilterTags","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by Variable.tags.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class VariableFilterTags(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `Variable.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        example=[\"tag-1\", \"tag-2\"],\n        description=(\n            \"A list of tags. Variables will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include Variables without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.Variable.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.Variable.tags == [] if self.is_null_ else db.Variable.tags != []\n            )\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterTags.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterTags.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterValue","title":"VariableFilterValue","text":"

Bases: PrefectFilterBaseModel

Filter by Variable.value.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class VariableFilterValue(PrefectFilterBaseModel):\n\"\"\"Filter by `Variable.value`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of variables value to include\"\n    )\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match variable value against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        example=\"my-value-%\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Variable.value.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Variable.value.ilike(f\"%{self.like_}%\"))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterValue.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterValue.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilter","title":"WorkPoolFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter work pools. Only work pools matching all criteria will be returned

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkPoolFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter work pools. Only work pools matching all criteria will be returned\"\"\"\n\n    id: Optional[WorkPoolFilterId] = Field(\n        default=None, description=\"Filter criteria for `WorkPool.id`\"\n    )\n    name: Optional[WorkPoolFilterName] = Field(\n        default=None, description=\"Filter criteria for `WorkPool.name`\"\n    )\n    type: Optional[WorkPoolFilterType] = Field(\n        default=None, description=\"Filter criteria for `WorkPool.type`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.type is not None:\n            filters.append(self.type.as_sql_filter(db))\n\n        return filters\n
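
For example (sketch; the pool names are only placeholders):

from prefect.server.schemas.filters import WorkPoolFilter, WorkPoolFilterName\n\npool_filter = WorkPoolFilter(\n    name=WorkPoolFilterName(any_=['default-agent-pool', 'kubernetes-pool']),\n)\n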
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterId","title":"WorkPoolFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by WorkPool.id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkPoolFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by `WorkPool.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of work pool ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkPool.id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass

include_fields (List[str], default None): fields to include

exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterName","title":"WorkPoolFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by WorkPool.name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkPoolFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `WorkPool.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of work pool names to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkPool.name.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass

include_fields (List[str], default None): fields to include

exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterType","title":"WorkPoolFilterType","text":"

Bases: PrefectFilterBaseModel

Filter by WorkPool.type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkPoolFilterType(PrefectFilterBaseModel):\n\"\"\"Filter by `WorkPool.type`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of work pool types to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkPool.type.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterType.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterType.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass

include_fields (List[str], default None): fields to include

exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilter","title":"WorkQueueFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter work queues. Only work queues matching all criteria will be returned

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkQueueFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter work queues. Only work queues matching all criteria will be\n    returned\"\"\"\n\n    id: Optional[WorkQueueFilterId] = Field(\n        default=None, description=\"Filter criteria for `WorkQueue.id`\"\n    )\n\n    name: Optional[WorkQueueFilterName] = Field(\n        default=None, description=\"Filter criteria for `WorkQueue.name`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass

include_fields (List[str], default None): fields to include

exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterId","title":"WorkQueueFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by WorkQueue.id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkQueueFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by `WorkQueue.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None,\n        description=\"A list of work queue ids to include\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkQueue.id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass

include_fields (List[str], default None): fields to include

exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterName","title":"WorkQueueFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by WorkQueue.name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkQueueFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `WorkQueue.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of work queue names to include\",\n        example=[\"wq-1\", \"wq-2\"],\n    )\n\n    startswith_: Optional[List[str]] = Field(\n        default=None,\n        description=(\n            \"A list of case-insensitive starts-with matches. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', and 'Marvin-robot', but not 'sad-marvin'.\"\n        ),\n        example=[\"marvin\", \"Marvin-robot\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkQueue.name.in_(self.any_))\n        if self.startswith_ is not None:\n            filters.append(\n                sa.or_(\n                    *[db.WorkQueue.name.ilike(f\"{item}%\") for item in self.startswith_]\n                )\n            )\n        return filters\n
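A short sketch combining exact-name and prefix matching; the queue names are illustrative.

from prefect.server.schemas.filters import WorkQueueFilter, WorkQueueFilterName

# `any_` matches names exactly; `startswith_` is a case-insensitive prefix
# match, so "marvin" matches "marvin" and "Marvin-robot" but not "sad-marvin".
queue_filter = WorkQueueFilter(
    name=WorkQueueFilterName(
        any_=["wq-1", "wq-2"],
        startswith_=["marvin"],
    )
)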
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass

include_fields (List[str], default None): fields to include

exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilter","title":"WorkerFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by Worker.last_heartbeat_time.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkerFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `Worker.last_heartbeat_time`.\"\"\"\n\n    # worker_config_id: Optional[WorkerFilterWorkPoolId] = Field(\n    #     default=None, description=\"Filter criteria for `Worker.worker_config_id`\"\n    # )\n\n    last_heartbeat_time: Optional[WorkerFilterLastHeartbeatTime] = Field(\n        default=None,\n        description=\"Filter criteria for `Worker.last_heartbeat_time`\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.last_heartbeat_time is not None:\n            filters.append(self.last_heartbeat_time.as_sql_filter(db))\n\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass

include_fields (List[str], default None): fields to include

exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterLastHeartbeatTime","title":"WorkerFilterLastHeartbeatTime","text":"

Bases: PrefectFilterBaseModel

Filter by Worker.last_heartbeat_time.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkerFilterLastHeartbeatTime(PrefectFilterBaseModel):\n\"\"\"Filter by `Worker.last_heartbeat_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include processes whose last heartbeat was at or before this time\"\n        ),\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include processes whose last heartbeat was at or after this time\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.Worker.last_heartbeat_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.Worker.last_heartbeat_time >= self.after_)\n        return filters\n
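For example, a sketch of selecting only workers seen recently; the five-minute window is arbitrary.

import pendulum

from prefect.server.schemas.filters import (
    WorkerFilter,
    WorkerFilterLastHeartbeatTime,
)

# Workers whose last heartbeat was at or after five minutes ago.
recently_seen = WorkerFilter(
    last_heartbeat_time=WorkerFilterLastHeartbeatTime(
        after_=pendulum.now("UTC").subtract(minutes=5),
    )
)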
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterLastHeartbeatTime.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterLastHeartbeatTime.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass

include_fields (List[str], default None): fields to include

exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterWorkPoolId","title":"WorkerFilterWorkPoolId","text":"

Bases: PrefectFilterBaseModel

Filter by Worker.worker_config_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkerFilterWorkPoolId(PrefectFilterBaseModel):\n\"\"\"Filter by `Worker.worker_config_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of work pool ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Worker.worker_config_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterWorkPoolId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterWorkPoolId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, default None): a name for the subclass

include_fields (List[str], default None): fields to include

exclude_fields (List[str], default None): fields to exclude

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/responses/","title":"server.schemas.responses","text":""},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses","title":"prefect.server.schemas.responses","text":"

Schemas for special responses from the Prefect REST API.

"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.FlowRunResponse","title":"FlowRunResponse","text":"

Bases: ORMBaseModel

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/responses.py
@copy_model_fields\nclass FlowRunResponse(ORMBaseModel):\n    name: str = FieldFrom(schemas.core.FlowRun)\n    flow_id: UUID = FieldFrom(schemas.core.FlowRun)\n    state_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n    deployment_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n    work_queue_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n    work_queue_name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    flow_version: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    parameters: dict = FieldFrom(schemas.core.FlowRun)\n    idempotency_key: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    context: dict = FieldFrom(schemas.core.FlowRun)\n    empirical_policy: FlowRunPolicy = FieldFrom(schemas.core.FlowRun)\n    tags: List[str] = FieldFrom(schemas.core.FlowRun)\n    parent_task_run_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n    state_type: Optional[schemas.states.StateType] = FieldFrom(schemas.core.FlowRun)\n    state_name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    run_count: int = FieldFrom(schemas.core.FlowRun)\n    expected_start_time: Optional[DateTimeTZ] = FieldFrom(schemas.core.FlowRun)\n    next_scheduled_start_time: Optional[DateTimeTZ] = FieldFrom(schemas.core.FlowRun)\n    start_time: Optional[DateTimeTZ] = FieldFrom(schemas.core.FlowRun)\n    end_time: Optional[DateTimeTZ] = FieldFrom(schemas.core.FlowRun)\n    total_run_time: datetime.timedelta = FieldFrom(schemas.core.FlowRun)\n    estimated_run_time: datetime.timedelta = FieldFrom(schemas.core.FlowRun)\n    estimated_start_time_delta: datetime.timedelta = FieldFrom(schemas.core.FlowRun)\n    auto_scheduled: bool = FieldFrom(schemas.core.FlowRun)\n    infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n    infrastructure_pid: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    created_by: Optional[CreatedBy] = FieldFrom(schemas.core.FlowRun)\n    work_pool_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The id of the flow run's work pool.\",\n    )\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the flow run's work pool.\",\n        example=\"my-work-pool\",\n    )\n    state: Optional[schemas.states.State] = FieldFrom(schemas.core.FlowRun)\n\n    @classmethod\n    def from_orm(cls, orm_flow_run: \"prefect.server.database.orm_models.ORMFlowRun\"):\n        response = super().from_orm(orm_flow_run)\n        if orm_flow_run.work_queue:\n            response.work_queue_id = orm_flow_run.work_queue.id\n            response.work_queue_name = orm_flow_run.work_queue.name\n            if orm_flow_run.work_queue.work_pool:\n                response.work_pool_id = orm_flow_run.work_queue.work_pool.id\n                response.work_pool_name = orm_flow_run.work_queue.work_pool.name\n\n        return response\n\n    def __eq__(self, other: Any) -> bool:\n\"\"\"\n        Check for \"equality\" to another flow run schema\n\n        Estimates times are rolling and will always change with repeated queries for\n        a flow run so we ignore them during equality checks.\n        \"\"\"\n        if isinstance(other, FlowRunResponse):\n            exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n            return self.dict(exclude=exclude_fields) == other.dict(\n                exclude=exclude_fields\n            )\n        return super().__eq__(other)\n
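A brief sketch of the equality behavior implemented in __eq__ above: the rolling estimate fields are excluded, so two responses that differ only in those fields still compare equal. Constructing the model directly like this is purely illustrative.

import datetime
from uuid import uuid4

from prefect.server.schemas.responses import FlowRunResponse

a = FlowRunResponse(name="demo-run", flow_id=uuid4())
b = a.copy(update={"estimated_run_time": datetime.timedelta(seconds=42)})

# estimated_run_time and estimated_start_time_delta are ignored in comparisons.
assert a == b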
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.HistoryResponse","title":"HistoryResponse","text":"

Bases: PrefectBaseModel

Represents a history of aggregation states over an interval

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/responses.py
class HistoryResponse(PrefectBaseModel):\n\"\"\"Represents a history of aggregation states over an interval\"\"\"\n\n    interval_start: DateTimeTZ = Field(\n        default=..., description=\"The start date of the interval.\"\n    )\n    interval_end: DateTimeTZ = Field(\n        default=..., description=\"The end date of the interval.\"\n    )\n    states: List[HistoryResponseState] = Field(\n        default=..., description=\"A list of state histories during the interval.\"\n    )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.HistoryResponseState","title":"HistoryResponseState","text":"

Bases: PrefectBaseModel

Represents a single state's history over an interval.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/responses.py
class HistoryResponseState(PrefectBaseModel):\n\"\"\"Represents a single state's history over an interval.\"\"\"\n\n    state_type: schemas.states.StateType = Field(\n        default=..., description=\"The state type.\"\n    )\n    state_name: str = Field(default=..., description=\"The state name.\")\n    count_runs: int = Field(\n        default=...,\n        description=\"The number of runs in the specified state during the interval.\",\n    )\n    sum_estimated_run_time: datetime.timedelta = Field(\n        default=...,\n        description=\"The total estimated run time of all runs during the interval.\",\n    )\n    sum_estimated_lateness: datetime.timedelta = Field(\n        default=...,\n        description=(\n            \"The sum of differences between actual and expected start time during the\"\n            \" interval.\"\n        ),\n    )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.OrchestrationResult","title":"OrchestrationResult","text":"

Bases: PrefectBaseModel

A container for the output of state orchestration.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/responses.py
class OrchestrationResult(PrefectBaseModel):\n\"\"\"\n    A container for the output of state orchestration.\n    \"\"\"\n\n    state: Optional[schemas.states.State]\n    status: SetStateStatus\n    details: StateResponseDetails\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.SetStateStatus","title":"SetStateStatus","text":"

Bases: AutoEnum

Enumerates return statuses for setting run states.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/responses.py
class SetStateStatus(AutoEnum):\n\"\"\"Enumerates return statuses for setting run states.\"\"\"\n\n    ACCEPT = AutoEnum.auto()\n    REJECT = AutoEnum.auto()\n    ABORT = AutoEnum.auto()\n    WAIT = AutoEnum.auto()\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateAbortDetails","title":"StateAbortDetails","text":"

Bases: PrefectBaseModel

Details associated with an ABORT state transition.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/responses.py
class StateAbortDetails(PrefectBaseModel):\n\"\"\"Details associated with an ABORT state transition.\"\"\"\n\n    type: Literal[\"abort_details\"] = Field(\n        default=\"abort_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n    reason: Optional[str] = Field(\n        default=None, description=\"The reason why the state transition was aborted.\"\n    )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateAcceptDetails","title":"StateAcceptDetails","text":"

Bases: PrefectBaseModel

Details associated with an ACCEPT state transition.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/responses.py
class StateAcceptDetails(PrefectBaseModel):\n\"\"\"Details associated with an ACCEPT state transition.\"\"\"\n\n    type: Literal[\"accept_details\"] = Field(\n        default=\"accept_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateRejectDetails","title":"StateRejectDetails","text":"

Bases: PrefectBaseModel

Details associated with a REJECT state transition.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/responses.py
class StateRejectDetails(PrefectBaseModel):\n\"\"\"Details associated with a REJECT state transition.\"\"\"\n\n    type: Literal[\"reject_details\"] = Field(\n        default=\"reject_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n    reason: Optional[str] = Field(\n        default=None, description=\"The reason why the state transition was rejected.\"\n    )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateWaitDetails","title":"StateWaitDetails","text":"

Bases: PrefectBaseModel

Details associated with a WAIT state transition.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/responses.py
class StateWaitDetails(PrefectBaseModel):\n\"\"\"Details associated with a WAIT state transition.\"\"\"\n\n    type: Literal[\"wait_details\"] = Field(\n        default=\"wait_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n    delay_seconds: int = Field(\n        default=...,\n        description=(\n            \"The length of time in seconds the client should wait before transitioning\"\n            \" states.\"\n        ),\n    )\n    reason: Optional[str] = Field(\n        default=None, description=\"The reason why the state transition should wait.\"\n    )\n
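Taken together, these response schemas describe what a caller receives after proposing a state transition. A minimal sketch of how a client might branch on the result; building the OrchestrationResult is normally the server's job, and the handler below is illustrative only.

from prefect.server.schemas.responses import (
    OrchestrationResult,
    SetStateStatus,
    StateWaitDetails,
)

def handle(result: OrchestrationResult) -> None:
    if result.status == SetStateStatus.ACCEPT:
        # The proposed state was recorded; `result.state` holds the final state.
        print(f"accepted: {result.state.type if result.state else None}")
    elif result.status == SetStateStatus.WAIT:
        # WAIT carries a delay the client should respect before retrying.
        assert isinstance(result.details, StateWaitDetails)
        print(f"wait {result.details.delay_seconds}s: {result.details.reason}")
    else:
        # REJECT and ABORT details both carry an optional `reason`.
        print(f"not applied: {result.status}")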
"},{"location":"api-ref/server/schemas/schedules/","title":"server.schemas.schedules","text":""},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules","title":"prefect.server.schemas.schedules","text":"

Schedule schemas

"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.CronSchedule","title":"CronSchedule","text":"

Bases: PrefectBaseModel

Cron schedule

NOTE: If the timezone is a DST-observing one, then the schedule will adjust itself appropriately. Cron's rules for DST are based on schedule times, not intervals. This means that an hourly cron schedule will fire on every new schedule hour, not every elapsed hour; for example, when clocks are set back this will result in a two-hour pause as the schedule will fire the first time 1am is reached and the first time 2am is reached, 120 minutes later. Longer schedules, such as one that fires at 9am every morning, will automatically adjust for DST.

Parameters:

cron (str, required): a valid cron string

timezone (str, required): a valid timezone string in IANA tzdata format (for example, America/New_York).

day_or (bool, required): Control how croniter handles day and day_of_week entries. Defaults to True, matching cron which connects those values using OR. If the switch is set to False, the values are connected using AND. This behaves like fcron and enables you to e.g. define a job that executes each 2nd friday of a month by setting the days of month and the weekday.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/schedules.py
class CronSchedule(PrefectBaseModel):\n\"\"\"\n    Cron schedule\n\n    NOTE: If the timezone is a DST-observing one, then the schedule will adjust\n    itself appropriately. Cron's rules for DST are based on schedule times, not\n    intervals. This means that an hourly cron schedule will fire on every new\n    schedule hour, not every elapsed hour; for example, when clocks are set back\n    this will result in a two-hour pause as the schedule will fire *the first\n    time* 1am is reached and *the first time* 2am is reached, 120 minutes later.\n    Longer schedules, such as one that fires at 9am every morning, will\n    automatically adjust for DST.\n\n    Args:\n        cron (str): a valid cron string\n        timezone (str): a valid timezone string in IANA tzdata format (for example,\n            America/New_York).\n        day_or (bool, optional): Control how croniter handles `day` and `day_of_week`\n            entries. Defaults to True, matching cron which connects those values using\n            OR. If the switch is set to False, the values are connected using AND. This\n            behaves like fcron and enables you to e.g. define a job that executes each\n            2nd friday of a month by setting the days of month and the weekday.\n\n    \"\"\"\n\n    class Config:\n        extra = \"forbid\"\n\n    cron: str = Field(default=..., example=\"0 0 * * *\")\n    timezone: Optional[str] = Field(default=None, example=\"America/New_York\")\n    day_or: bool = Field(\n        default=True,\n        description=(\n            \"Control croniter behavior for handling day and day_of_week entries.\"\n        ),\n    )\n\n    @validator(\"timezone\")\n    def valid_timezone(cls, v):\n        if v and v not in pendulum.tz.timezones:\n            raise ValueError(\n                f'Invalid timezone: \"{v}\" (specify in IANA tzdata format, for example,'\n                \" America/New_York)\"\n            )\n        return v\n\n    @validator(\"cron\")\n    def valid_cron_string(cls, v):\n        # croniter allows \"random\" and \"hashed\" expressions\n        # which we do not support https://github.com/kiorky/croniter\n        if not croniter.is_valid(v):\n            raise ValueError(f'Invalid cron string: \"{v}\"')\n        elif any(c for c in v.split() if c.casefold() in [\"R\", \"H\", \"r\", \"h\"]):\n            raise ValueError(\n                f'Random and Hashed expressions are unsupported, received: \"{v}\"'\n            )\n        return v\n\n    async def get_dates(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> List[pendulum.DateTime]:\n\"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to None.  If a timezone-naive datetime is\n                provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): The maximum scheduled date to return. If\n                a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: A list of dates\n        \"\"\"\n        return sorted(self._get_dates_generator(n=n, start=start, end=end))\n\n    def _get_dates_generator(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> Generator[pendulum.DateTime, None, None]:\n\"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to the current date. If a timezone-naive\n                datetime is provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): No returned date will exceed this date.\n                If a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: a list of dates\n        \"\"\"\n        if start is None:\n            start = pendulum.now(\"UTC\")\n\n        start, end = _prepare_scheduling_start_and_end(start, end, self.timezone)\n\n        if n is None:\n            # if an end was supplied, we do our best to supply all matching dates (up to\n            # MAX_ITERATIONS)\n            if end is not None:\n                n = MAX_ITERATIONS\n            else:\n                n = 1\n\n        elif self.timezone:\n            start = start.in_tz(self.timezone)\n\n        # subtract one second from the start date, so that croniter returns it\n        # as an event (if it meets the cron criteria)\n        start = start.subtract(seconds=1)\n\n        # croniter's DST logic interferes with all other datetime libraries except pytz\n        start_localized = pytz.timezone(start.tz.name).localize(\n            datetime.datetime(\n                year=start.year,\n                month=start.month,\n                day=start.day,\n                hour=start.hour,\n                minute=start.minute,\n                second=start.second,\n                microsecond=start.microsecond,\n            )\n        )\n\n        # Respect microseconds by rounding up\n        if start_localized.microsecond > 0:\n            start_localized += datetime.timedelta(seconds=1)\n\n        cron = croniter(self.cron, start_localized, day_or=self.day_or)  # type: ignore\n        dates = set()\n        counter = 0\n\n        while True:\n            next_date = pendulum.instance(cron.get_next(datetime.datetime))\n            # if the end date was exceeded, exit\n            if end and next_date > end:\n                break\n            # ensure no duplicates; weird things can happen with DST\n            if next_date not in dates:\n                dates.add(next_date)\n                yield next_date\n\n            # if enough dates have been collected or enough attempts were made, exit\n            if len(dates) >= n or counter > MAX_ITERATIONS:\n                break\n\n            counter += 1\n
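As a quick sketch of using this schedule (the cron expression and timezone are illustrative; note that get_dates is a coroutine):

import asyncio

from prefect.server.schemas.schedules import CronSchedule

# 09:00 every morning, adjusting across DST boundaries as described above.
schedule = CronSchedule(cron="0 9 * * *", timezone="America/New_York")

print(asyncio.run(schedule.get_dates(n=3)))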
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.CronSchedule.get_dates","title":"get_dates async","text":"

Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date.

Parameters:

n (int, default None): The number of dates to generate

start (datetime.datetime, default None): The first returned date will be on or after this date. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.

end (datetime.datetime, default None): The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.

Returns:

List[pendulum.DateTime]: A list of dates

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/schedules.py
async def get_dates(\n    self,\n    n: int = None,\n    start: datetime.datetime = None,\n    end: datetime.datetime = None,\n) -> List[pendulum.DateTime]:\n\"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n    following the start date.\n\n    Args:\n        n (int): The number of dates to generate\n        start (datetime.datetime, optional): The first returned date will be on or\n            after this date. Defaults to None.  If a timezone-naive datetime is\n            provided, it is assumed to be in the schedule's timezone.\n        end (datetime.datetime, optional): The maximum scheduled date to return. If\n            a timezone-naive datetime is provided, it is assumed to be in the\n            schedule's timezone.\n\n    Returns:\n        List[pendulum.DateTime]: A list of dates\n    \"\"\"\n    return sorted(self._get_dates_generator(n=n, start=start, end=end))\n
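When an end date is supplied, a count is not required; the schedule returns every matching date in the window (up to the 1,000-candidate limit noted above). A sketch with an arbitrary one-week window:

import asyncio

import pendulum

from prefect.server.schemas.schedules import CronSchedule

schedule = CronSchedule(cron="0 9 * * *", timezone="America/New_York")

# Bound the query to a window instead of asking for a fixed count.
start = pendulum.datetime(2023, 11, 1, tz="America/New_York")
dates = asyncio.run(schedule.get_dates(start=start, end=start.add(days=7)))

print(len(dates), dates[0])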
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.IntervalSchedule","title":"IntervalSchedule","text":"

Bases: PrefectBaseModel

A schedule formed by adding interval increments to an anchor_date. If no anchor_date is supplied, the current UTC time is used. If a timezone-naive datetime is provided for anchor_date, it is assumed to be in the schedule's timezone (or UTC). Even if supplied with an IANA timezone, anchor dates are always stored as UTC offsets, so a timezone can be provided to determine localization behaviors like DST boundary handling. If none is provided it will be inferred from the anchor date.

NOTE: If the IntervalSchedule anchor_date or timezone is provided in a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals. For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that appear to both be scheduled for 1am local time, even though they are an hour apart in UTC time. For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone.

Parameters:

interval (datetime.timedelta, required): an interval to schedule on

anchor_date (DateTimeTZ, required): an anchor date to schedule increments against; if not provided, the current timestamp will be used

timezone (str, required): a valid timezone string

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/schedules.py
class IntervalSchedule(PrefectBaseModel):\n\"\"\"\n    A schedule formed by adding `interval` increments to an `anchor_date`. If no\n    `anchor_date` is supplied, the current UTC time is used.  If a\n    timezone-naive datetime is provided for `anchor_date`, it is assumed to be\n    in the schedule's timezone (or UTC). Even if supplied with an IANA timezone,\n    anchor dates are always stored as UTC offsets, so a `timezone` can be\n    provided to determine localization behaviors like DST boundary handling. If\n    none is provided it will be inferred from the anchor date.\n\n    NOTE: If the `IntervalSchedule` `anchor_date` or `timezone` is provided in a\n    DST-observing timezone, then the schedule will adjust itself appropriately.\n    Intervals greater than 24 hours will follow DST conventions, while intervals\n    of less than 24 hours will follow UTC intervals. For example, an hourly\n    schedule will fire every UTC hour, even across DST boundaries. When clocks\n    are set back, this will result in two runs that *appear* to both be\n    scheduled for 1am local time, even though they are an hour apart in UTC\n    time. For longer intervals, like a daily schedule, the interval schedule\n    will adjust for DST boundaries so that the clock-hour remains constant. This\n    means that a daily schedule that always fires at 9am will observe DST and\n    continue to fire at 9am in the local time zone.\n\n    Args:\n        interval (datetime.timedelta): an interval to schedule on\n        anchor_date (DateTimeTZ, optional): an anchor date to schedule increments against;\n            if not provided, the current timestamp will be used\n        timezone (str, optional): a valid timezone string\n    \"\"\"\n\n    class Config:\n        extra = \"forbid\"\n        exclude_none = True\n\n    interval: datetime.timedelta\n    anchor_date: DateTimeTZ = None\n    timezone: Optional[str] = Field(default=None, example=\"America/New_York\")\n\n    @validator(\"interval\")\n    def interval_must_be_positive(cls, v):\n        if v.total_seconds() <= 0:\n            raise ValueError(\"The interval must be positive\")\n        return v\n\n    @validator(\"anchor_date\", always=True)\n    def default_anchor_date(cls, v):\n        if v is None:\n            return pendulum.now(\"UTC\")\n        return pendulum.instance(v)\n\n    @validator(\"timezone\", always=True)\n    def default_timezone(cls, v, *, values, **kwargs):\n        # if was provided, make sure its a valid IANA string\n        if v and v not in pendulum.tz.timezones:\n            raise ValueError(f'Invalid timezone: \"{v}\"')\n\n        # otherwise infer the timezone from the anchor date\n        elif v is None and values.get(\"anchor_date\"):\n            tz = values[\"anchor_date\"].tz.name\n            if tz in pendulum.tz.timezones:\n                return tz\n            # sometimes anchor dates have \"timezones\" that are UTC offsets\n            # like \"-04:00\". This happens when parsing ISO8601 strings.\n            # In this case we, the correct inferred localization is \"UTC\".\n            else:\n                return \"UTC\"\n\n        return v\n\n    async def get_dates(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> List[pendulum.DateTime]:\n\"\"\"Retrieves dates from the schedule. 
Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to None.  If a timezone-naive datetime is\n                provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): The maximum scheduled date to return. If\n                a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: A list of dates\n        \"\"\"\n        return sorted(self._get_dates_generator(n=n, start=start, end=end))\n\n    def _get_dates_generator(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> Generator[pendulum.DateTime, None, None]:\n\"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to None.  If a timezone-naive datetime is\n                provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): The maximum scheduled date to return. If\n                a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: a list of dates\n        \"\"\"\n        if n is None:\n            # if an end was supplied, we do our best to supply all matching dates (up to\n            # MAX_ITERATIONS)\n            if end is not None:\n                n = MAX_ITERATIONS\n            else:\n                n = 1\n\n        if start is None:\n            start = pendulum.now(\"UTC\")\n\n        anchor_tz = self.anchor_date.in_tz(self.timezone)\n        start, end = _prepare_scheduling_start_and_end(start, end, self.timezone)\n\n        # compute the offset between the anchor date and the start date to jump to the\n        # next date\n        offset = (start - anchor_tz).total_seconds() / self.interval.total_seconds()\n        next_date = anchor_tz.add(seconds=self.interval.total_seconds() * int(offset))\n\n        # break the interval into `days` and `seconds` because pendulum\n        # will handle DST boundaries properly if days are provided, but not\n        # if we add `total seconds`. 
Therefore, `next_date + self.interval`\n        # fails while `next_date.add(days=days, seconds=seconds)` works.\n        interval_days = self.interval.days\n        interval_seconds = self.interval.total_seconds() - (\n            interval_days * 24 * 60 * 60\n        )\n\n        # daylight saving time boundaries can create a situation where the next date is\n        # before the start date, so we advance it if necessary\n        while next_date < start:\n            next_date = next_date.add(days=interval_days, seconds=interval_seconds)\n\n        counter = 0\n        dates = set()\n\n        while True:\n            # if the end date was exceeded, exit\n            if end and next_date > end:\n                break\n\n            # ensure no duplicates; weird things can happen with DST\n            if next_date not in dates:\n                dates.add(next_date)\n                yield next_date\n\n            # if enough dates have been collected or enough attempts were made, exit\n            if len(dates) >= n or counter > MAX_ITERATIONS:\n                break\n\n            counter += 1\n\n            next_date = next_date.add(days=interval_days, seconds=interval_seconds)\n
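A minimal, hedged sketch of constructing this schedule, using the `prefect.server.schemas.schedules` import path shown in the source location above (the dates and timezones below are illustrative):

import datetime

import pendulum
from prefect.server.schemas.schedules import IntervalSchedule

# A daily (>= 24h) interval keeps the local clock-hour constant across DST
# boundaries, as described above.
daily = IntervalSchedule(
    interval=datetime.timedelta(days=1),
    anchor_date=pendulum.datetime(2023, 1, 1, 9, tz="America/Chicago"),
    timezone="America/Chicago",
)

# An hourly (< 24h) interval instead follows UTC hours across DST boundaries.
hourly = IntervalSchedule(interval=datetime.timedelta(hours=1))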
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.IntervalSchedule.get_dates","title":"get_dates async","text":"

Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date.

Parameters:

Name Type Description Default n int

The number of dates to generate

None start datetime.datetime

The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.

None end datetime.datetime

The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.

None

Returns:

Type Description List[pendulum.DateTime]

List[pendulum.DateTime]: A list of dates

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/schedules.py
async def get_dates(\n    self,\n    n: int = None,\n    start: datetime.datetime = None,\n    end: datetime.datetime = None,\n) -> List[pendulum.DateTime]:\n\"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n    following the start date.\n\n    Args:\n        n (int): The number of dates to generate\n        start (datetime.datetime, optional): The first returned date will be on or\n            after this date. Defaults to None.  If a timezone-naive datetime is\n            provided, it is assumed to be in the schedule's timezone.\n        end (datetime.datetime, optional): The maximum scheduled date to return. If\n            a timezone-naive datetime is provided, it is assumed to be in the\n            schedule's timezone.\n\n    Returns:\n        List[pendulum.DateTime]: A list of dates\n    \"\"\"\n    return sorted(self._get_dates_generator(n=n, start=start, end=end))\n
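A hedged usage sketch for get_dates; the schedule and start date below are illustrative:

import asyncio
import datetime

import pendulum
from prefect.server.schemas.schedules import IntervalSchedule

schedule = IntervalSchedule(interval=datetime.timedelta(hours=6))

# Preview the next four occurrences on or after the given start date
dates = asyncio.run(
    schedule.get_dates(n=4, start=pendulum.datetime(2023, 6, 1, tz="UTC"))
)
for date in dates:
    print(date.isoformat())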
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.RRuleSchedule","title":"RRuleSchedule","text":"

Bases: PrefectBaseModel

RRule schedule, based on the iCalendar standard (RFC 5545) as implemented in dateutil.rrule.

RRules are appropriate for any kind of calendar-date manipulation, including irregular intervals, repetition, exclusions, week day or day-of-month adjustments, and more.

Note that as a calendar-oriented standard, RRuleSchedules are sensitive to the initial timezone provided. A 9am daily schedule with a daylight saving time-aware start date will maintain a local 9am time through DST boundaries; a 9am daily schedule with a UTC start date will maintain a 9am UTC time.

Parameters:

Name Type Description Default rrule str

a valid RRule string

required timezone str

a valid timezone string

required Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/schedules.py
class RRuleSchedule(PrefectBaseModel):\n\"\"\"\n    RRule schedule, based on the iCalendar standard\n    ([RFC 5545](https://datatracker.ietf.org/doc/html/rfc5545)) as\n    implemented in `dateutils.rrule`.\n\n    RRules are appropriate for any kind of calendar-date manipulation, including\n    irregular intervals, repetition, exclusions, week day or day-of-month\n    adjustments, and more.\n\n    Note that as a calendar-oriented standard, `RRuleSchedules` are sensitive to\n    to the initial timezone provided. A 9am daily schedule with a daylight saving\n    time-aware start date will maintain a local 9am time through DST boundaries;\n    a 9am daily schedule with a UTC start date will maintain a 9am UTC time.\n\n    Args:\n        rrule (str): a valid RRule string\n        timezone (str, optional): a valid timezone string\n    \"\"\"\n\n    class Config:\n        extra = \"forbid\"\n\n    rrule: str\n    timezone: Optional[str] = Field(default=None, example=\"America/New_York\")\n\n    @validator(\"rrule\")\n    def validate_rrule_str(cls, v):\n        # attempt to parse the rrule string as an rrule object\n        # this will error if the string is invalid\n        try:\n            dateutil.rrule.rrulestr(v, cache=True)\n        except ValueError as exc:\n            # rrules errors are a mix of cryptic and informative\n            # so reraise to be clear that the string was invalid\n            raise ValueError(f'Invalid RRule string \"{v}\": {exc}')\n        if len(v) > MAX_RRULE_LENGTH:\n            raise ValueError(\n                f'Invalid RRule string \"{v[:40]}...\"\\n'\n                f\"Max length is {MAX_RRULE_LENGTH}, got {len(v)}\"\n            )\n        return v\n\n    @classmethod\n    def from_rrule(cls, rrule: dateutil.rrule.rrule):\n        if isinstance(rrule, dateutil.rrule.rrule):\n            if rrule._dtstart.tzinfo is not None:\n                timezone = rrule._dtstart.tzinfo.name\n            else:\n                timezone = \"UTC\"\n            return RRuleSchedule(rrule=str(rrule), timezone=timezone)\n        elif isinstance(rrule, dateutil.rrule.rruleset):\n            dtstarts = [rr._dtstart for rr in rrule._rrule if rr._dtstart is not None]\n            unique_dstarts = set(pendulum.instance(d).in_tz(\"UTC\") for d in dtstarts)\n            unique_timezones = set(d.tzinfo for d in dtstarts if d.tzinfo is not None)\n\n            if len(unique_timezones) > 1:\n                raise ValueError(\n                    f\"rruleset has too many dtstart timezones: {unique_timezones}\"\n                )\n\n            if len(unique_dstarts) > 1:\n                raise ValueError(f\"rruleset has too many dtstarts: {unique_dstarts}\")\n\n            if unique_dstarts and unique_timezones:\n                timezone = dtstarts[0].tzinfo.name\n            else:\n                timezone = \"UTC\"\n\n            rruleset_string = \"\"\n            if rrule._rrule:\n                rruleset_string += \"\\n\".join(str(r) for r in rrule._rrule)\n            if rrule._exrule:\n                rruleset_string += \"\\n\" if rruleset_string else \"\"\n                rruleset_string += \"\\n\".join(str(r) for r in rrule._exrule).replace(\n                    \"RRULE\", \"EXRULE\"\n                )\n            if rrule._rdate:\n                rruleset_string += \"\\n\" if rruleset_string else \"\"\n                rruleset_string += \"RDATE:\" + \",\".join(\n                    rd.strftime(\"%Y%m%dT%H%M%SZ\") for rd in rrule._rdate\n                )\n            if 
rrule._exdate:\n                rruleset_string += \"\\n\" if rruleset_string else \"\"\n                rruleset_string += \"EXDATE:\" + \",\".join(\n                    exd.strftime(\"%Y%m%dT%H%M%SZ\") for exd in rrule._exdate\n                )\n            return RRuleSchedule(rrule=rruleset_string, timezone=timezone)\n        else:\n            raise ValueError(f\"Invalid RRule object: {rrule}\")\n\n    def to_rrule(self) -> dateutil.rrule.rrule:\n\"\"\"\n        Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n        here\n        \"\"\"\n        rrule = dateutil.rrule.rrulestr(\n            self.rrule,\n            dtstart=DEFAULT_ANCHOR_DATE,\n            cache=True,\n        )\n        timezone = dateutil.tz.gettz(self.timezone)\n        if isinstance(rrule, dateutil.rrule.rrule):\n            kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n            if rrule._until:\n                kwargs.update(\n                    until=rrule._until.replace(tzinfo=timezone),\n                )\n            return rrule.replace(**kwargs)\n        elif isinstance(rrule, dateutil.rrule.rruleset):\n            # update rrules\n            localized_rrules = []\n            for rr in rrule._rrule:\n                kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n                if rr._until:\n                    kwargs.update(\n                        until=rr._until.replace(tzinfo=timezone),\n                    )\n                localized_rrules.append(rr.replace(**kwargs))\n            rrule._rrule = localized_rrules\n\n            # update exrules\n            localized_exrules = []\n            for exr in rrule._exrule:\n                kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n                if exr._until:\n                    kwargs.update(\n                        until=exr._until.replace(tzinfo=timezone),\n                    )\n                localized_exrules.append(exr.replace(**kwargs))\n            rrule._exrule = localized_exrules\n\n            # update rdates\n            localized_rdates = []\n            for rd in rrule._rdate:\n                localized_rdates.append(rd.replace(tzinfo=timezone))\n            rrule._rdate = localized_rdates\n\n            # update exdates\n            localized_exdates = []\n            for exd in rrule._exdate:\n                localized_exdates.append(exd.replace(tzinfo=timezone))\n            rrule._exdate = localized_exdates\n\n            return rrule\n\n    @validator(\"timezone\", always=True)\n    def valid_timezone(cls, v):\n        if v and v not in pytz.all_timezones_set:\n            raise ValueError(f'Invalid timezone: \"{v}\"')\n        elif v is None:\n            return \"UTC\"\n        return v\n\n    async def get_dates(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> List[pendulum.DateTime]:\n\"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to None.  If a timezone-naive datetime is\n                provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): The maximum scheduled date to return. 
If\n                a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: A list of dates\n        \"\"\"\n        return sorted(self._get_dates_generator(n=n, start=start, end=end))\n\n    def _get_dates_generator(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> Generator[pendulum.DateTime, None, None]:\n\"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to the current date. If a timezone-naive\n                datetime is provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): No returned date will exceed this date.\n                If a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: a list of dates\n        \"\"\"\n        if start is None:\n            start = pendulum.now(\"UTC\")\n\n        start, end = _prepare_scheduling_start_and_end(start, end, self.timezone)\n\n        if n is None:\n            # if an end was supplied, we do our best to supply all matching dates (up\n            # to MAX_ITERATIONS)\n            if end is not None:\n                n = MAX_ITERATIONS\n            else:\n                n = 1\n\n        dates = set()\n        counter = 0\n\n        # pass count = None to account for discrepancies with duplicates around DST\n        # boundaries\n        for next_date in self.to_rrule().xafter(start, count=None, inc=True):\n            next_date = pendulum.instance(next_date).in_tz(self.timezone)\n\n            # if the end date was exceeded, exit\n            if end and next_date > end:\n                break\n\n            # ensure no duplicates; weird things can happen with DST\n            if next_date not in dates:\n                dates.add(next_date)\n                yield next_date\n\n            # if enough dates have been collected or enough attempts were made, exit\n            if len(dates) >= n or counter > MAX_ITERATIONS:\n                break\n\n            counter += 1\n
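A minimal sketch of a calendar-oriented schedule built from an RRule string (the rule below is illustrative); pairing it with an IANA timezone keeps the 9am clock-hour across DST boundaries, as noted above:

from prefect.server.schemas.schedules import RRuleSchedule

# 09:00 local time every weekday, maintained through DST transitions
weekday_mornings = RRuleSchedule(
    rrule="FREQ=WEEKLY;BYDAY=MO,TU,WE,TH,FR;BYHOUR=9;BYMINUTE=0;BYSECOND=0",
    timezone="America/New_York",
)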
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.RRuleSchedule.get_dates","title":"get_dates async","text":"

Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date.

Parameters:

Name Type Description Default n int

The number of dates to generate

None start datetime.datetime

The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.

None end datetime.datetime

The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.

None

Returns:

Type Description List[pendulum.DateTime]

List[pendulum.DateTime]: A list of dates

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/schedules.py
async def get_dates(\n    self,\n    n: int = None,\n    start: datetime.datetime = None,\n    end: datetime.datetime = None,\n) -> List[pendulum.DateTime]:\n\"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n    following the start date.\n\n    Args:\n        n (int): The number of dates to generate\n        start (datetime.datetime, optional): The first returned date will be on or\n            after this date. Defaults to None.  If a timezone-naive datetime is\n            provided, it is assumed to be in the schedule's timezone.\n        end (datetime.datetime, optional): The maximum scheduled date to return. If\n            a timezone-naive datetime is provided, it is assumed to be in the\n            schedule's timezone.\n\n    Returns:\n        List[pendulum.DateTime]: A list of dates\n    \"\"\"\n    return sorted(self._get_dates_generator(n=n, start=start, end=end))\n
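A hedged sketch showing the end parameter, which returns every matching date in the window (up to the documented 1,000-candidate cap); the rule and window below are illustrative:

import asyncio

import pendulum
from prefect.server.schemas.schedules import RRuleSchedule

schedule = RRuleSchedule(
    rrule="FREQ=DAILY;BYHOUR=9;BYMINUTE=0;BYSECOND=0", timezone="UTC"
)

# All daily 09:00 occurrences within a one-week window
dates = asyncio.run(
    schedule.get_dates(
        start=pendulum.datetime(2023, 6, 1, tz="UTC"),
        end=pendulum.datetime(2023, 6, 8, tz="UTC"),
    )
)
print(len(dates))  # expect one date per day in the window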
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.RRuleSchedule.to_rrule","title":"to_rrule","text":"

Since rrule doesn't properly serialize/deserialize timezones, we localize dates here

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/schedules.py
def to_rrule(self) -> dateutil.rrule.rrule:\n\"\"\"\n    Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n    here\n    \"\"\"\n    rrule = dateutil.rrule.rrulestr(\n        self.rrule,\n        dtstart=DEFAULT_ANCHOR_DATE,\n        cache=True,\n    )\n    timezone = dateutil.tz.gettz(self.timezone)\n    if isinstance(rrule, dateutil.rrule.rrule):\n        kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n        if rrule._until:\n            kwargs.update(\n                until=rrule._until.replace(tzinfo=timezone),\n            )\n        return rrule.replace(**kwargs)\n    elif isinstance(rrule, dateutil.rrule.rruleset):\n        # update rrules\n        localized_rrules = []\n        for rr in rrule._rrule:\n            kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n            if rr._until:\n                kwargs.update(\n                    until=rr._until.replace(tzinfo=timezone),\n                )\n            localized_rrules.append(rr.replace(**kwargs))\n        rrule._rrule = localized_rrules\n\n        # update exrules\n        localized_exrules = []\n        for exr in rrule._exrule:\n            kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n            if exr._until:\n                kwargs.update(\n                    until=exr._until.replace(tzinfo=timezone),\n                )\n            localized_exrules.append(exr.replace(**kwargs))\n        rrule._exrule = localized_exrules\n\n        # update rdates\n        localized_rdates = []\n        for rd in rrule._rdate:\n            localized_rdates.append(rd.replace(tzinfo=timezone))\n        rrule._rdate = localized_rdates\n\n        # update exdates\n        localized_exdates = []\n        for exd in rrule._exdate:\n            localized_exdates.append(exd.replace(tzinfo=timezone))\n        rrule._exdate = localized_exdates\n\n        return rrule\n
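A hedged round-trip sketch using from_rrule and to_rrule (the rule is illustrative; a naive dtstart is treated as UTC, per from_rrule above):

from dateutil import rrule
from prefect.server.schemas.schedules import RRuleSchedule

# Every other day, built directly with dateutil
rule = rrule.rrule(freq=rrule.DAILY, interval=2)

schedule = RRuleSchedule.from_rrule(rule)  # serialized string plus inferred "UTC" timezone
restored = schedule.to_rrule()             # dateutil rrule localized to the schedule timezone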
"},{"location":"api-ref/server/schemas/sorting/","title":"server.schemas.sorting","text":""},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting","title":"prefect.server.schemas.sorting","text":"

Schemas for sorting Prefect REST API objects.

"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactCollectionSort","title":"ArtifactCollectionSort","text":"

Bases: AutoEnum

Defines artifact collection sorting options.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
class ArtifactCollectionSort(AutoEnum):\n\"\"\"Defines artifact collection sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    ID_DESC = AutoEnum.auto()\n    KEY_DESC = AutoEnum.auto()\n    KEY_ASC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort artifact collections\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.ArtifactCollection.created.desc(),\n            \"UPDATED_DESC\": db.ArtifactCollection.updated.desc(),\n            \"ID_DESC\": db.ArtifactCollection.id.desc(),\n            \"KEY_DESC\": db.ArtifactCollection.key.desc(),\n            \"KEY_ASC\": db.ArtifactCollection.key.asc(),\n        }\n        return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactCollectionSort.as_sql_sort","title":"as_sql_sort","text":"

Return an expression used to sort artifact collections

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort artifact collections\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.ArtifactCollection.created.desc(),\n        \"UPDATED_DESC\": db.ArtifactCollection.updated.desc(),\n        \"ID_DESC\": db.ArtifactCollection.id.desc(),\n        \"KEY_DESC\": db.ArtifactCollection.key.desc(),\n        \"KEY_ASC\": db.ArtifactCollection.key.asc(),\n    }\n    return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactSort","title":"ArtifactSort","text":"

Bases: AutoEnum

Defines artifact sorting options.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
class ArtifactSort(AutoEnum):\n\"\"\"Defines artifact sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    ID_DESC = AutoEnum.auto()\n    KEY_DESC = AutoEnum.auto()\n    KEY_ASC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort artifacts\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.Artifact.created.desc(),\n            \"UPDATED_DESC\": db.Artifact.updated.desc(),\n            \"ID_DESC\": db.Artifact.id.desc(),\n            \"KEY_DESC\": db.Artifact.key.desc(),\n            \"KEY_ASC\": db.Artifact.key.asc(),\n        }\n        return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactSort.as_sql_sort","title":"as_sql_sort","text":"

Return an expression used to sort artifacts

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort artifacts\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.Artifact.created.desc(),\n        \"UPDATED_DESC\": db.Artifact.updated.desc(),\n        \"ID_DESC\": db.Artifact.id.desc(),\n        \"KEY_DESC\": db.Artifact.key.desc(),\n        \"KEY_ASC\": db.Artifact.key.asc(),\n    }\n    return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.DeploymentSort","title":"DeploymentSort","text":"

Bases: AutoEnum

Defines deployment sorting options.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
class DeploymentSort(AutoEnum):\n\"\"\"Defines deployment sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort deployments\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.Deployment.created.desc(),\n            \"UPDATED_DESC\": db.Deployment.updated.desc(),\n            \"NAME_ASC\": db.Deployment.name.asc(),\n            \"NAME_DESC\": db.Deployment.name.desc(),\n        }\n        return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.DeploymentSort.as_sql_sort","title":"as_sql_sort","text":"

Return an expression used to sort deployments

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort deployments\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.Deployment.created.desc(),\n        \"UPDATED_DESC\": db.Deployment.updated.desc(),\n        \"NAME_ASC\": db.Deployment.name.asc(),\n        \"NAME_DESC\": db.Deployment.name.desc(),\n    }\n    return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.FlowRunSort","title":"FlowRunSort","text":"

Bases: AutoEnum

Defines flow run sorting options.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
class FlowRunSort(AutoEnum):\n\"\"\"Defines flow run sorting options.\"\"\"\n\n    ID_DESC = AutoEnum.auto()\n    START_TIME_ASC = AutoEnum.auto()\n    START_TIME_DESC = AutoEnum.auto()\n    EXPECTED_START_TIME_ASC = AutoEnum.auto()\n    EXPECTED_START_TIME_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n    NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n    END_TIME_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        from sqlalchemy.sql.functions import coalesce\n\n\"\"\"Return an expression used to sort flow runs\"\"\"\n        sort_mapping = {\n            \"ID_DESC\": db.FlowRun.id.desc(),\n            \"START_TIME_ASC\": coalesce(\n                db.FlowRun.start_time, db.FlowRun.expected_start_time\n            ).asc(),\n            \"START_TIME_DESC\": coalesce(\n                db.FlowRun.start_time, db.FlowRun.expected_start_time\n            ).desc(),\n            \"EXPECTED_START_TIME_ASC\": db.FlowRun.expected_start_time.asc(),\n            \"EXPECTED_START_TIME_DESC\": db.FlowRun.expected_start_time.desc(),\n            \"NAME_ASC\": db.FlowRun.name.asc(),\n            \"NAME_DESC\": db.FlowRun.name.desc(),\n            \"NEXT_SCHEDULED_START_TIME_ASC\": db.FlowRun.next_scheduled_start_time.asc(),\n            \"END_TIME_DESC\": db.FlowRun.end_time.desc(),\n        }\n        return sort_mapping[self.value]\n
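These enums are consumed when listing objects through the API. A hedged sketch, assuming the client-side counterpart in prefect.client.schemas.sorting and the sort/limit parameters of the client's read_flow_runs method:

import asyncio

from prefect.client.orchestration import get_client
from prefect.client.schemas.sorting import FlowRunSort

async def most_recent_runs():
    async with get_client() as client:
        # Newest runs first, using the coalesced start/expected-start ordering
        return await client.read_flow_runs(
            sort=FlowRunSort.START_TIME_DESC, limit=5
        )

runs = asyncio.run(most_recent_runs())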
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.FlowSort","title":"FlowSort","text":"

Bases: AutoEnum

Defines flow sorting options.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
class FlowSort(AutoEnum):\n\"\"\"Defines flow sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort flows\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.Flow.created.desc(),\n            \"UPDATED_DESC\": db.Flow.updated.desc(),\n            \"NAME_ASC\": db.Flow.name.asc(),\n            \"NAME_DESC\": db.Flow.name.desc(),\n        }\n        return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.FlowSort.as_sql_sort","title":"as_sql_sort","text":"

Return an expression used to sort flows

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort flows\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.Flow.created.desc(),\n        \"UPDATED_DESC\": db.Flow.updated.desc(),\n        \"NAME_ASC\": db.Flow.name.asc(),\n        \"NAME_DESC\": db.Flow.name.desc(),\n    }\n    return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.LogSort","title":"LogSort","text":"

Bases: AutoEnum

Defines log sorting options.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
class LogSort(AutoEnum):\n\"\"\"Defines log sorting options.\"\"\"\n\n    TIMESTAMP_ASC = AutoEnum.auto()\n    TIMESTAMP_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort task runs\"\"\"\n        sort_mapping = {\n            \"TIMESTAMP_ASC\": db.Log.timestamp.asc(),\n            \"TIMESTAMP_DESC\": db.Log.timestamp.desc(),\n        }\n        return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.LogSort.as_sql_sort","title":"as_sql_sort","text":"

Return an expression used to sort logs

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort task runs\"\"\"\n    sort_mapping = {\n        \"TIMESTAMP_ASC\": db.Log.timestamp.asc(),\n        \"TIMESTAMP_DESC\": db.Log.timestamp.desc(),\n    }\n    return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.TaskRunSort","title":"TaskRunSort","text":"

Bases: AutoEnum

Defines task run sorting options.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
class TaskRunSort(AutoEnum):\n\"\"\"Defines task run sorting options.\"\"\"\n\n    ID_DESC = AutoEnum.auto()\n    EXPECTED_START_TIME_ASC = AutoEnum.auto()\n    EXPECTED_START_TIME_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n    NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n    END_TIME_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort task runs\"\"\"\n        sort_mapping = {\n            \"ID_DESC\": db.TaskRun.id.desc(),\n            \"EXPECTED_START_TIME_ASC\": db.TaskRun.expected_start_time.asc(),\n            \"EXPECTED_START_TIME_DESC\": db.TaskRun.expected_start_time.desc(),\n            \"NAME_ASC\": db.TaskRun.name.asc(),\n            \"NAME_DESC\": db.TaskRun.name.desc(),\n            \"NEXT_SCHEDULED_START_TIME_ASC\": db.TaskRun.next_scheduled_start_time.asc(),\n            \"END_TIME_DESC\": db.TaskRun.end_time.desc(),\n        }\n        return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.TaskRunSort.as_sql_sort","title":"as_sql_sort","text":"

Return an expression used to sort task runs

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort task runs\"\"\"\n    sort_mapping = {\n        \"ID_DESC\": db.TaskRun.id.desc(),\n        \"EXPECTED_START_TIME_ASC\": db.TaskRun.expected_start_time.asc(),\n        \"EXPECTED_START_TIME_DESC\": db.TaskRun.expected_start_time.desc(),\n        \"NAME_ASC\": db.TaskRun.name.asc(),\n        \"NAME_DESC\": db.TaskRun.name.desc(),\n        \"NEXT_SCHEDULED_START_TIME_ASC\": db.TaskRun.next_scheduled_start_time.asc(),\n        \"END_TIME_DESC\": db.TaskRun.end_time.desc(),\n    }\n    return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.VariableSort","title":"VariableSort","text":"

Bases: AutoEnum

Defines variables sorting options.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
class VariableSort(AutoEnum):\n\"\"\"Defines variables sorting options.\"\"\"\n\n    CREATED_DESC = \"CREATED_DESC\"\n    UPDATED_DESC = \"UPDATED_DESC\"\n    NAME_DESC = \"NAME_DESC\"\n    NAME_ASC = \"NAME_ASC\"\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort variables\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.Variable.created.desc(),\n            \"UPDATED_DESC\": db.Variable.updated.desc(),\n            \"NAME_DESC\": db.Variable.name.desc(),\n            \"NAME_ASC\": db.Variable.name.asc(),\n        }\n        return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.VariableSort.as_sql_sort","title":"as_sql_sort","text":"

Return an expression used to sort variables

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort variables\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.Variable.created.desc(),\n        \"UPDATED_DESC\": db.Variable.updated.desc(),\n        \"NAME_DESC\": db.Variable.name.desc(),\n        \"NAME_ASC\": db.Variable.name.asc(),\n    }\n    return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/states/","title":"server.schemas.states","text":""},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states","title":"prefect.server.schemas.states","text":"

State schemas.

"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State","title":"State","text":"

Bases: StateBaseModel, Generic[R]

Represents the state of a run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
class State(StateBaseModel, Generic[R]):\n\"\"\"Represents the state of a run.\"\"\"\n\n    class Config:\n        orm_mode = True\n\n    type: StateType\n    name: Optional[str] = Field(default=None)\n    timestamp: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n    message: Optional[str] = Field(default=None, example=\"Run started\")\n    data: Optional[Any] = Field(\n        default=None,\n        description=(\n            \"Data associated with the state, e.g. a result. \"\n            \"Content must be storable as JSON.\"\n        ),\n    )\n    state_details: StateDetails = Field(default_factory=StateDetails)\n\n    @classmethod\n    def from_orm_without_result(\n        cls,\n        orm_state: Union[\n            \"prefect.server.database.orm_models.ORMFlowRunState\",\n            \"prefect.server.database.orm_models.ORMTaskRunState\",\n        ],\n        with_data: Optional[Any] = None,\n    ):\n\"\"\"\n        During orchestration, ORM states can be instantiated prior to inserting results\n        into the artifact table and the `data` field will not be eagerly loaded. In\n        these cases, sqlalchemy will attept to lazily load the the relationship, which\n        will fail when called within a synchronous pydantic method.\n\n        This method will construct a `State` object from an ORM model without a loaded\n        artifact and attach data passed using the `with_data` argument to the `data`\n        field.\n        \"\"\"\n\n        field_keys = cls.schema()[\"properties\"].keys()\n        state_data = {\n            field: getattr(orm_state, field, None)\n            for field in field_keys\n            if field != \"data\"\n        }\n        state_data[\"data\"] = with_data\n        return cls(**state_data)\n\n    @validator(\"name\", always=True)\n    def default_name_from_type(cls, v, *, values, **kwargs):\n\"\"\"If a name is not provided, use the type\"\"\"\n\n        # if `type` is not in `values` it means the `type` didn't pass its own\n        # validation check and an error will be raised after this function is called\n        if v is None and values.get(\"type\"):\n            v = \" \".join([v.capitalize() for v in values.get(\"type\").value.split(\"_\")])\n        return v\n\n    @root_validator\n    def default_scheduled_start_time(cls, values):\n\"\"\"\n        TODO: This should throw an error instead of setting a default but is out of\n              scope for https://github.com/PrefectHQ/orion/pull/174/ and can be rolled\n              into work refactoring state initialization\n        \"\"\"\n        if values.get(\"type\") == StateType.SCHEDULED:\n            state_details = values.setdefault(\n                \"state_details\", cls.__fields__[\"state_details\"].get_default()\n            )\n            if not state_details.scheduled_time:\n                state_details.scheduled_time = pendulum.now(\"utc\")\n        return values\n\n    def is_scheduled(self) -> bool:\n        return self.type == StateType.SCHEDULED\n\n    def is_pending(self) -> bool:\n        return self.type == StateType.PENDING\n\n    def is_running(self) -> bool:\n        return self.type == StateType.RUNNING\n\n    def is_completed(self) -> bool:\n        return self.type == StateType.COMPLETED\n\n    def is_failed(self) -> bool:\n        return self.type == StateType.FAILED\n\n    def is_crashed(self) -> bool:\n        return self.type == StateType.CRASHED\n\n    def is_cancelled(self) -> bool:\n        return self.type == StateType.CANCELLED\n\n    def 
is_cancelling(self) -> bool:\n        return self.type == StateType.CANCELLING\n\n    def is_final(self) -> bool:\n        return self.type in TERMINAL_STATES\n\n    def is_paused(self) -> bool:\n        return self.type == StateType.PAUSED\n\n    def copy(self, *, update: dict = None, reset_fields: bool = False, **kwargs):\n\"\"\"\n        Copying API models should return an object that could be inserted into the\n        database again. The 'timestamp' is reset using the default factory.\n        \"\"\"\n        update = update or {}\n        update.setdefault(\"timestamp\", self.__fields__[\"timestamp\"].get_default())\n        return super().copy(reset_fields=reset_fields, update=update, **kwargs)\n\n    def result(self, raise_on_failure: bool = True, fetch: Optional[bool] = None):\n        # Backwards compatible `result` handling on the server-side schema\n        from prefect.states import State\n\n        warnings.warn(\n            (\n                \"`result` is no longer supported by\"\n                \" `prefect.server.schemas.states.State` and will be removed in a future\"\n                \" release. When result retrieval is needed, use `prefect.states.State`.\"\n            ),\n            DeprecationWarning,\n            stacklevel=2,\n        )\n\n        state = State.parse_obj(self)\n        return state.result(raise_on_failure=raise_on_failure, fetch=fetch)\n\n    def to_state_create(self):\n        # Backwards compatibility for `to_state_create`\n        from prefect.client.schemas import State\n\n        warnings.warn(\n            (\n                \"Use of `prefect.server.schemas.states.State` from the client is\"\n                \" deprecated and support will be removed in a future release. Use\"\n                \" `prefect.states.State` instead.\"\n            ),\n            DeprecationWarning,\n            stacklevel=2,\n        )\n\n        state = State.parse_obj(self)\n        return state.to_state_create()\n\n    def __repr__(self) -> str:\n\"\"\"\n        Generates a complete state representation appropriate for introspection\n        and debugging, including the result:\n\n        `MyCompletedState(message=\"my message\", type=COMPLETED, result=...)`\n        \"\"\"\n        from prefect.deprecated.data_documents import DataDocument\n\n        if isinstance(self.data, DataDocument):\n            result = self.data.decode()\n        else:\n            result = self.data\n\n        display = dict(\n            message=repr(self.message),\n            type=str(self.type.value),\n            result=repr(result),\n        )\n\n        return f\"{self.name}({', '.join(f'{k}={v}' for k, v in display.items())})\"\n\n    def __str__(self) -> str:\n\"\"\"\n        Generates a simple state representation appropriate for logging:\n\n        `MyCompletedState(\"my message\", type=COMPLETED)`\n        \"\"\"\n\n        display = []\n\n        if self.message:\n            display.append(repr(self.message))\n\n        if self.type.value.lower() != self.name.lower():\n            display.append(f\"type={self.type.value}\")\n\n        return f\"{self.name}({', '.join(display)})\"\n\n    def __hash__(self) -> int:\n        return hash(\n            (\n                getattr(self.state_details, \"flow_run_id\", None),\n                getattr(self.state_details, \"task_run_id\", None),\n                self.timestamp,\n                self.type,\n            )\n        )\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State.from_orm_without_result","title":"from_orm_without_result classmethod","text":"

During orchestration, ORM states can be instantiated prior to inserting results into the artifact table and the data field will not be eagerly loaded. In these cases, sqlalchemy will attempt to lazily load the relationship, which will fail when called within a synchronous pydantic method.

This method will construct a State object from an ORM model without a loaded artifact and attach data passed using the with_data argument to the data field.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
@classmethod\ndef from_orm_without_result(\n    cls,\n    orm_state: Union[\n        \"prefect.server.database.orm_models.ORMFlowRunState\",\n        \"prefect.server.database.orm_models.ORMTaskRunState\",\n    ],\n    with_data: Optional[Any] = None,\n):\n\"\"\"\n    During orchestration, ORM states can be instantiated prior to inserting results\n    into the artifact table and the `data` field will not be eagerly loaded. In\n    these cases, sqlalchemy will attept to lazily load the the relationship, which\n    will fail when called within a synchronous pydantic method.\n\n    This method will construct a `State` object from an ORM model without a loaded\n    artifact and attach data passed using the `with_data` argument to the `data`\n    field.\n    \"\"\"\n\n    field_keys = cls.schema()[\"properties\"].keys()\n    state_data = {\n        field: getattr(orm_state, field, None)\n        for field in field_keys\n        if field != \"data\"\n    }\n    state_data[\"data\"] = with_data\n    return cls(**state_data)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
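A small sketch of the include_secrets switch; the state below is illustrative and has no secret fields, so both calls print the same JSON, but models with SecretStr or SecretBytes fields would differ:

from prefect.server.schemas.states import Completed

state = Completed(message="done")

print(state.json())                      # secret fields, if any, stay obfuscated
print(state.json(include_secrets=True))  # secret fields are revealed; log with care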
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
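A hedged sketch of subclass; the trimmed model name below is hypothetical:

from prefect.server.schemas.states import State

# Hypothetical reduced model: every State field except `data`
StateWithoutData = State.subclass(name="StateWithoutData", exclude_fields=["data"])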
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateBaseModel","title":"StateBaseModel","text":"

Bases: IDBaseModel

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
class StateBaseModel(IDBaseModel):\n    def orm_dict(\n        self, *args, shallow: bool = False, json_compatible: bool = False, **kwargs\n    ) -> dict:\n\"\"\"\n        This method is used as a convenience method for constructing fixtues by first\n        building a `State` schema object and converting it into an ORM-compatible\n        format. Because the `data` field is not writable on ORM states, this method\n        omits the `data` field entirely for the purposes of constructing an ORM model.\n        If state data is required, an artifact must be created separately.\n        \"\"\"\n\n        schema_dict = self.dict(\n            *args, shallow=shallow, json_compatible=json_compatible, **kwargs\n        )\n        # remove the data field in order to construct a state ORM model\n        schema_dict.pop(\"data\", None)\n        return schema_dict\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateBaseModel.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateBaseModel.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateType","title":"StateType","text":"

Bases: AutoEnum

Enumeration of state types.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
class StateType(AutoEnum):\n\"\"\"Enumeration of state types.\"\"\"\n\n    SCHEDULED = AutoEnum.auto()\n    PENDING = AutoEnum.auto()\n    RUNNING = AutoEnum.auto()\n    COMPLETED = AutoEnum.auto()\n    FAILED = AutoEnum.auto()\n    CANCELLED = AutoEnum.auto()\n    CRASHED = AutoEnum.auto()\n    PAUSED = AutoEnum.auto()\n    CANCELLING = AutoEnum.auto()\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateType.auto","title":"auto staticmethod","text":"

Exposes enum.auto() to avoid requiring a second import to use AutoEnum

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
@staticmethod\ndef auto():\n\"\"\"\n    Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n    \"\"\"\n    return auto()\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.AwaitingRetry","title":"AwaitingRetry","text":"

Convenience function for creating AwaitingRetry states.

Returns:

Name Type Description State State

an AwaitingRetry state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def AwaitingRetry(\n    scheduled_time: datetime.datetime = None, cls: Type[State] = State, **kwargs\n) -> State:\n\"\"\"Convenience function for creating `AwaitingRetry` states.\n\n    Returns:\n        State: a AwaitingRetry state\n    \"\"\"\n    return Scheduled(\n        cls=cls, scheduled_time=scheduled_time, name=\"AwaitingRetry\", **kwargs\n    )\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Cancelled","title":"Cancelled","text":"

Convenience function for creating Cancelled states.

Returns:

Name Type Description State State

a Cancelled state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Cancelled(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Cancelled` states.\n\n    Returns:\n        State: a Cancelled state\n    \"\"\"\n    return cls(type=StateType.CANCELLED, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Cancelling","title":"Cancelling","text":"

Convenience function for creating Cancelling states.

Returns:

Name Type Description State State

a Cancelling state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Cancelling(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Cancelling` states.\n\n    Returns:\n        State: a Cancelling state\n    \"\"\"\n    return cls(type=StateType.CANCELLING, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Completed","title":"Completed","text":"

Convenience function for creating Completed states.

Returns:

Name Type Description State State

a Completed state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Completed(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Completed` states.\n\n    Returns:\n        State: a Completed state\n    \"\"\"\n    return cls(type=StateType.COMPLETED, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Crashed","title":"Crashed","text":"

Convenience function for creating Crashed states.

Returns:

Name Type Description State State

a Crashed state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Crashed(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Crashed` states.\n\n    Returns:\n        State: a Crashed state\n    \"\"\"\n    return cls(type=StateType.CRASHED, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Failed","title":"Failed","text":"

Convenience function for creating Failed states.

Returns:

Name Type Description State State

a Failed state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Failed(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Failed` states.\n\n    Returns:\n        State: a Failed state\n    \"\"\"\n    return cls(type=StateType.FAILED, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Late","title":"Late","text":"

Convenience function for creating Late states.

Returns:

Name Type Description State State

a Late state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Late(\n    scheduled_time: datetime.datetime = None, cls: Type[State] = State, **kwargs\n) -> State:\n\"\"\"Convenience function for creating `Late` states.\n\n    Returns:\n        State: a Late state\n    \"\"\"\n    return Scheduled(cls=cls, scheduled_time=scheduled_time, name=\"Late\", **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Paused","title":"Paused","text":"

Convenience function for creating Paused states.

Returns:

Name Type Description State State

a Paused state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Paused(\n    cls: Type[State] = State,\n    timeout_seconds: int = None,\n    pause_expiration_time: datetime.datetime = None,\n    reschedule: bool = False,\n    pause_key: str = None,\n    **kwargs,\n) -> State:\n\"\"\"Convenience function for creating `Paused` states.\n\n    Returns:\n        State: a Paused state\n    \"\"\"\n    state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n\n    if state_details.pause_timeout:\n        raise ValueError(\"An extra pause timeout was provided in state_details\")\n\n    if pause_expiration_time is not None and timeout_seconds is not None:\n        raise ValueError(\n            \"Cannot supply both a pause_expiration_time and timeout_seconds\"\n        )\n\n    if pause_expiration_time is None and timeout_seconds is None:\n        pass\n    else:\n        state_details.pause_timeout = pause_expiration_time or (\n            pendulum.now(\"UTC\") + pendulum.Duration(seconds=timeout_seconds)\n        )\n\n    state_details.pause_reschedule = reschedule\n    state_details.pause_key = pause_key\n\n    return cls(type=StateType.PAUSED, state_details=state_details, **kwargs)\n
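A hedged sketch of Paused; note that supplying both timeout_seconds and pause_expiration_time raises a ValueError, per the source above:

import pendulum
from prefect.server.schemas.states import Paused

# Pause with a one-hour timeout...
timed = Paused(timeout_seconds=3600)

# ...or with an explicit expiration time, but never both
until_noon = Paused(pause_expiration_time=pendulum.tomorrow("UTC").at(12))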
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Pending","title":"Pending","text":"

Convenience function for creating Pending states.

Returns:

Name Type Description State State

a Pending state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Pending(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Pending` states.\n\n    Returns:\n        State: a Pending state\n    \"\"\"\n    return cls(type=StateType.PENDING, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Retrying","title":"Retrying","text":"

Convenience function for creating Retrying states.

Returns:

Name Type Description State State

a Retrying state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Retrying(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Retrying` states.\n\n    Returns:\n        State: a Retrying state\n    \"\"\"\n    return cls(type=StateType.RUNNING, name=\"Retrying\", **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Running","title":"Running","text":"

Convenience function for creating Running states.

Returns:

Name Type Description State State

a Running state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Running(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Running` states.\n\n    Returns:\n        State: a Running state\n    \"\"\"\n    return cls(type=StateType.RUNNING, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Scheduled","title":"Scheduled","text":"

Convenience function for creating Scheduled states.

Returns:

Name Type Description State State

a Scheduled state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Scheduled(\n    scheduled_time: datetime.datetime = None, cls: Type[State] = State, **kwargs\n) -> State:\n\"\"\"Convenience function for creating `Scheduled` states.\n\n    Returns:\n        State: a Scheduled state\n    \"\"\"\n    # NOTE: `scheduled_time` must come first for backwards compatibility\n\n    state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n    if scheduled_time is None:\n        scheduled_time = pendulum.now(\"UTC\")\n    elif state_details.scheduled_time:\n        raise ValueError(\"An extra scheduled_time was provided in state_details\")\n    state_details.scheduled_time = scheduled_time\n\n    return cls(type=StateType.SCHEDULED, state_details=state_details, **kwargs)\n
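A short sketch of Scheduled; omitting scheduled_time falls back to the current UTC time:

import pendulum
from prefect.server.schemas.states import Scheduled

# Schedule for a specific future time...
later = Scheduled(scheduled_time=pendulum.tomorrow("UTC").at(9))

# ...or default to "now"
asap = Scheduled()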
"},{"location":"api-ref/server/services/late_runs/","title":"server.services.late_runs","text":""},{"location":"api-ref/server/services/late_runs/#prefect.server.services.late_runs","title":"prefect.server.services.late_runs","text":"

The MarkLateRuns service is responsible for putting flow runs in a Late state if they do not start on time. The threshold for a late run can be configured by changing PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS.

"},{"location":"api-ref/server/services/late_runs/#prefect.server.services.late_runs.MarkLateRuns","title":"MarkLateRuns","text":"

Bases: LoopService

A simple loop service responsible for identifying flow runs that are \"late\".

A flow run is defined as "late" if it has not started within a certain amount of time after its scheduled start time. The exact amount is configurable in Prefect REST API Settings.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/late_runs.py
class MarkLateRuns(LoopService):\n\"\"\"\n    A simple loop service responsible for identifying flow runs that are \"late\".\n\n    A flow run is defined as \"late\" if has not scheduled within a certain amount\n    of time after its scheduled start time. The exact amount is configurable in\n    Prefect REST API Settings.\n    \"\"\"\n\n    def __init__(self, loop_seconds: float = None, **kwargs):\n        super().__init__(\n            loop_seconds=loop_seconds\n            or PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS.value(),\n            **kwargs,\n        )\n\n        # mark runs late if they are this far past their expected start time\n        self.mark_late_after: datetime.timedelta = (\n            PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS.value()\n        )\n\n        # query for this many runs to mark as late at once\n        self.batch_size = 400\n\n    @inject_db\n    async def run_once(self, db: PrefectDBInterface):\n\"\"\"\n        Mark flow runs as late by:\n\n        - Querying for flow runs in a scheduled state that are Scheduled to start in the past\n        - For any runs past the \"late\" threshold, setting the flow run state to a new `Late` state\n        \"\"\"\n        scheduled_to_start_before = pendulum.now(\"UTC\").subtract(\n            seconds=self.mark_late_after.total_seconds()\n        )\n\n        while True:\n            async with db.session_context(begin_transaction=True) as session:\n                query = self._get_select_late_flow_runs_query(\n                    scheduled_to_start_before=scheduled_to_start_before, db=db\n                )\n\n                result = await session.execute(query)\n                runs = result.all()\n\n                # mark each run as late\n                for run in runs:\n                    await self._mark_flow_run_as_late(session=session, flow_run=run)\n\n                # if no runs were found, exit the loop\n                if len(runs) < self.batch_size:\n                    break\n\n        self.logger.info(\"Finished monitoring for late runs.\")\n\n    @inject_db\n    def _get_select_late_flow_runs_query(\n        self, scheduled_to_start_before: datetime.datetime, db: PrefectDBInterface\n    ):\n\"\"\"\n        Returns a sqlalchemy query for late flow runs.\n\n        Args:\n            scheduled_to_start_before: the maximum next scheduled start time of\n                scheduled flow runs to consider in the returned query\n        \"\"\"\n        query = (\n            sa.select(\n                db.FlowRun.id,\n                db.FlowRun.next_scheduled_start_time,\n            )\n            .where(\n                # The next scheduled start time is in the past, including the mark late\n                # after buffer\n                (db.FlowRun.next_scheduled_start_time <= scheduled_to_start_before),\n                db.FlowRun.state_type == states.StateType.SCHEDULED,\n                db.FlowRun.state_name == \"Scheduled\",\n            )\n            .limit(self.batch_size)\n        )\n        return query\n\n    async def _mark_flow_run_as_late(\n        self, session: AsyncSession, flow_run: PrefectDBInterface.FlowRun\n    ) -> None:\n\"\"\"\n        Mark a flow run as late.\n\n        Pass-through method for overrides.\n        \"\"\"\n        await models.flow_runs.set_flow_run_state(\n            session=session,\n            flow_run_id=flow_run.id,\n            state=states.Late(scheduled_time=flow_run.next_scheduled_start_time),\n            force=True,\n        )\n
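
As a sketch only (it assumes a configured Prefect server database), the service can be exercised for a single pass through the LoopService API:

import asyncio\nfrom prefect.server.services.late_runs import MarkLateRuns\n\n# run one pass of the late-run check, then exit; handle_signals=False avoids\n# registering global signal handlers in an interactive session\nasyncio.run(MarkLateRuns(handle_signals=False).start(loops=1))\n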
"},{"location":"api-ref/server/services/late_runs/#prefect.server.services.late_runs.MarkLateRuns.run_once","title":"run_once async","text":"

Mark flow runs as late by querying for flow runs in a scheduled state that are scheduled to start in the past and, for any runs past the "late" threshold, setting the flow run state to a new Late state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/late_runs.py
@inject_db\nasync def run_once(self, db: PrefectDBInterface):\n\"\"\"\n    Mark flow runs as late by:\n\n    - Querying for flow runs in a scheduled state that are Scheduled to start in the past\n    - For any runs past the \"late\" threshold, setting the flow run state to a new `Late` state\n    \"\"\"\n    scheduled_to_start_before = pendulum.now(\"UTC\").subtract(\n        seconds=self.mark_late_after.total_seconds()\n    )\n\n    while True:\n        async with db.session_context(begin_transaction=True) as session:\n            query = self._get_select_late_flow_runs_query(\n                scheduled_to_start_before=scheduled_to_start_before, db=db\n            )\n\n            result = await session.execute(query)\n            runs = result.all()\n\n            # mark each run as late\n            for run in runs:\n                await self._mark_flow_run_as_late(session=session, flow_run=run)\n\n            # if no runs were found, exit the loop\n            if len(runs) < self.batch_size:\n                break\n\n    self.logger.info(\"Finished monitoring for late runs.\")\n
"},{"location":"api-ref/server/services/loop_service/","title":"server.services.loop_service","text":""},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service","title":"prefect.server.services.loop_service","text":"

The base class for all Prefect REST API loop services.

"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService","title":"LoopService","text":"

Loop services are relatively lightweight maintenance routines that need to run periodically.

This class makes it straightforward to design and integrate them. Users only need to define the run_once coroutine to describe the behavior of the service on each loop.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/loop_service.py
class LoopService:\n\"\"\"\n    Loop services are relatively lightweight maintenance routines that need to run periodically.\n\n    This class makes it straightforward to design and integrate them. Users only need to\n    define the `run_once` coroutine to describe the behavior of the service on each loop.\n    \"\"\"\n\n    loop_seconds = 60\n\n    def __init__(self, loop_seconds: float = None, handle_signals: bool = True):\n\"\"\"\n        Args:\n            loop_seconds (float): if provided, overrides the loop interval\n                otherwise specified as a class variable\n            handle_signals (bool): if True (default), SIGINT and SIGTERM are\n                gracefully intercepted and shut down the running service.\n        \"\"\"\n        if loop_seconds:\n            self.loop_seconds = loop_seconds  # seconds between runs\n        self._should_stop = False  # flag for whether the service should stop running\n        self._is_running = False  # flag for whether the service is running\n        self.name = type(self).__name__\n        self.logger = get_logger(f\"server.services.{self.name.lower()}\")\n\n        if handle_signals:\n            signal.signal(signal.SIGINT, self._stop)\n            signal.signal(signal.SIGTERM, self._stop)\n\n    @inject_db\n    async def _on_start(self, db: PrefectDBInterface) -> None:\n\"\"\"\n        Called prior to running the service\n        \"\"\"\n        # reset the _should_stop flag\n        self._should_stop = False\n        # set the _is_running flag\n        self._is_running = True\n\n    async def _on_stop(self) -> None:\n\"\"\"\n        Called after running the service\n        \"\"\"\n        # reset the _is_running flag\n        self._is_running = False\n\n    async def start(self, loops=None) -> None:\n\"\"\"\n        Run the service `loops` time. 
Pass loops=None to run forever.\n\n        Args:\n            loops (int, optional): the number of loops to run before exiting.\n        \"\"\"\n\n        await self._on_start()\n\n        i = 0\n        while not self._should_stop:\n            start_time = pendulum.now(\"UTC\")\n\n            try:\n                self.logger.debug(f\"About to run {self.name}...\")\n                await self.run_once()\n\n            except NotImplementedError as exc:\n                raise exc from None\n\n            # if an error is raised, log and continue\n            except Exception as exc:\n                self.logger.error(f\"Unexpected error in: {repr(exc)}\", exc_info=True)\n\n            end_time = pendulum.now(\"UTC\")\n\n            # if service took longer than its loop interval, log a warning\n            # that the interval might be too short\n            if (end_time - start_time).total_seconds() > self.loop_seconds:\n                self.logger.warning(\n                    f\"{self.name} took {(end_time-start_time).total_seconds()} seconds\"\n                    \" to run, which is longer than its loop interval of\"\n                    f\" {self.loop_seconds} seconds.\"\n                )\n\n            # check if early stopping was requested\n            i += 1\n            if loops is not None and i == loops:\n                self.logger.debug(f\"{self.name} exiting after {loops} loop(s).\")\n                await self.stop(block=False)\n\n            # next run is every \"loop seconds\" after each previous run *started*.\n            # note that if the loop took unexpectedly long, the \"next_run\" time\n            # might be in the past, which will result in an instant start\n            next_run = max(\n                start_time.add(seconds=self.loop_seconds), pendulum.now(\"UTC\")\n            )\n            self.logger.debug(f\"Finished running {self.name}. Next run at {next_run}\")\n\n            # check the `_should_stop` flag every 1 seconds until the next run time is reached\n            while pendulum.now(\"UTC\") < next_run and not self._should_stop:\n                await asyncio.sleep(\n                    min(1, (next_run - pendulum.now(\"UTC\")).total_seconds())\n                )\n\n        await self._on_stop()\n\n    async def stop(self, block=True) -> None:\n\"\"\"\n        Gracefully stops a running LoopService and optionally blocks until the\n        service stops.\n\n        Args:\n            block (bool): if True, blocks until the service is\n                finished running. Otherwise it requests a stop and returns but\n                the service may still be running a final loop.\n\n        \"\"\"\n        self._stop()\n\n        if block:\n            # if block=True, sleep until the service stops running,\n            # but no more than `loop_seconds` to avoid a deadlock\n            with anyio.move_on_after(self.loop_seconds):\n                while self._is_running:\n                    await asyncio.sleep(0.1)\n\n            # if the service is still running after `loop_seconds`, something's wrong\n            if self._is_running:\n                self.logger.warning(\n                    f\"`stop(block=True)` was called on {self.name} but more than one\"\n                    f\" loop interval ({self.loop_seconds} seconds) has passed. This\"\n                    \" usually means something is wrong. 
If `stop()` was called from\"\n                    \" inside the loop service, use `stop(block=False)` isntead.\"\n                )\n\n    def _stop(self, *_) -> None:\n\"\"\"\n        Private, synchronous method for setting the `_should_stop` flag. Takes arbitrary\n        arguments so it can be used as a signal handler.\n        \"\"\"\n        self._should_stop = True\n\n    async def run_once(self) -> None:\n\"\"\"\n        Represents one loop of the service.\n\n        Users should override this method.\n\n        To actually run the service once, call `LoopService().start(loops=1)`\n        instead of `LoopService().run_once()`, because this method will not invoke setup\n        and teardown methods properly.\n        \"\"\"\n        raise NotImplementedError(\"LoopService subclasses must implement this method.\")\n
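
A minimal sketch of a custom loop service; the HeartbeatService name and its behavior are hypothetical:

import asyncio\nfrom prefect.server.services.loop_service import LoopService\n\nclass HeartbeatService(LoopService):\n    # run every 10 seconds\n    loop_seconds = 10\n\n    async def run_once(self) -> None:\n        self.logger.info(\"heartbeat\")\n\n# run three loops, then stop\nasyncio.run(HeartbeatService(handle_signals=False).start(loops=3))\n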
"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService.run_once","title":"run_once async","text":"

Represents one loop of the service.

Users should override this method.

To actually run the service once, call LoopService().start(loops=1) instead of LoopService().run_once(), because this method will not invoke setup and teardown methods properly.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/loop_service.py
async def run_once(self) -> None:\n\"\"\"\n    Represents one loop of the service.\n\n    Users should override this method.\n\n    To actually run the service once, call `LoopService().start(loops=1)`\n    instead of `LoopService().run_once()`, because this method will not invoke setup\n    and teardown methods properly.\n    \"\"\"\n    raise NotImplementedError(\"LoopService subclasses must implement this method.\")\n
"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService.start","title":"start async","text":"

Run the service loops times. Pass loops=None to run forever.

Parameters:

Name Type Description Default loops int

the number of loops to run before exiting.

None Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/loop_service.py
async def start(self, loops=None) -> None:\n\"\"\"\n    Run the service `loops` time. Pass loops=None to run forever.\n\n    Args:\n        loops (int, optional): the number of loops to run before exiting.\n    \"\"\"\n\n    await self._on_start()\n\n    i = 0\n    while not self._should_stop:\n        start_time = pendulum.now(\"UTC\")\n\n        try:\n            self.logger.debug(f\"About to run {self.name}...\")\n            await self.run_once()\n\n        except NotImplementedError as exc:\n            raise exc from None\n\n        # if an error is raised, log and continue\n        except Exception as exc:\n            self.logger.error(f\"Unexpected error in: {repr(exc)}\", exc_info=True)\n\n        end_time = pendulum.now(\"UTC\")\n\n        # if service took longer than its loop interval, log a warning\n        # that the interval might be too short\n        if (end_time - start_time).total_seconds() > self.loop_seconds:\n            self.logger.warning(\n                f\"{self.name} took {(end_time-start_time).total_seconds()} seconds\"\n                \" to run, which is longer than its loop interval of\"\n                f\" {self.loop_seconds} seconds.\"\n            )\n\n        # check if early stopping was requested\n        i += 1\n        if loops is not None and i == loops:\n            self.logger.debug(f\"{self.name} exiting after {loops} loop(s).\")\n            await self.stop(block=False)\n\n        # next run is every \"loop seconds\" after each previous run *started*.\n        # note that if the loop took unexpectedly long, the \"next_run\" time\n        # might be in the past, which will result in an instant start\n        next_run = max(\n            start_time.add(seconds=self.loop_seconds), pendulum.now(\"UTC\")\n        )\n        self.logger.debug(f\"Finished running {self.name}. Next run at {next_run}\")\n\n        # check the `_should_stop` flag every 1 seconds until the next run time is reached\n        while pendulum.now(\"UTC\") < next_run and not self._should_stop:\n            await asyncio.sleep(\n                min(1, (next_run - pendulum.now(\"UTC\")).total_seconds())\n            )\n\n    await self._on_stop()\n
"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService.stop","title":"stop async","text":"

Gracefully stops a running LoopService and optionally blocks until the service stops.

Parameters:

Name Type Description Default block bool

if True, blocks until the service is finished running. Otherwise it requests a stop and returns but the service may still be running a final loop.

True Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/loop_service.py
async def stop(self, block=True) -> None:\n\"\"\"\n    Gracefully stops a running LoopService and optionally blocks until the\n    service stops.\n\n    Args:\n        block (bool): if True, blocks until the service is\n            finished running. Otherwise it requests a stop and returns but\n            the service may still be running a final loop.\n\n    \"\"\"\n    self._stop()\n\n    if block:\n        # if block=True, sleep until the service stops running,\n        # but no more than `loop_seconds` to avoid a deadlock\n        with anyio.move_on_after(self.loop_seconds):\n            while self._is_running:\n                await asyncio.sleep(0.1)\n\n        # if the service is still running after `loop_seconds`, something's wrong\n        if self._is_running:\n            self.logger.warning(\n                f\"`stop(block=True)` was called on {self.name} but more than one\"\n                f\" loop interval ({self.loop_seconds} seconds) has passed. This\"\n                \" usually means something is wrong. If `stop()` was called from\"\n                \" inside the loop service, use `stop(block=False)` isntead.\"\n            )\n
"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.run_multiple_services","title":"run_multiple_services async","text":"

Only one signal handler can be active at a time, so this function takes a list of loop services and runs all of them with a global signal handler.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/loop_service.py
async def run_multiple_services(loop_services: List[LoopService]):\n\"\"\"\n    Only one signal handler can be active at a time, so this function takes a list\n    of loop services and runs all of them with a global signal handler.\n    \"\"\"\n\n    def stop_all_services(self, *_):\n        for service in loop_services:\n            service._stop()\n\n    signal.signal(signal.SIGINT, stop_all_services)\n    signal.signal(signal.SIGTERM, stop_all_services)\n    await asyncio.gather(*[service.start() for service in loop_services])\n
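
For example, a sketch of running two built-in services under the shared signal handler (again assuming a configured Prefect server database):

import asyncio\nfrom prefect.server.services.late_runs import MarkLateRuns\nfrom prefect.server.services.loop_service import run_multiple_services\nfrom prefect.server.services.scheduler import Scheduler\n\n# run both services until SIGINT or SIGTERM stops them\nasyncio.run(\n    run_multiple_services(\n        [Scheduler(handle_signals=False), MarkLateRuns(handle_signals=False)]\n    )\n)\n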
"},{"location":"api-ref/server/services/scheduler/","title":"server.services.scheduler","text":""},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler","title":"prefect.server.services.scheduler","text":"

The Scheduler service.

"},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.RecentDeploymentsScheduler","title":"RecentDeploymentsScheduler","text":"

Bases: Scheduler

A scheduler that only schedules deployments that were updated very recently. This scheduler can run on a tight loop and ensure that runs from newly-created or updated deployments are rapidly scheduled without having to wait for the \"main\" scheduler to complete its loop.

Note that scheduling is idempotent, so it's OK for this scheduler to attempt to schedule the same deployments as the main scheduler. Its purpose is to accelerate scheduling for any deployments that users are interacting with.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/scheduler.py
class RecentDeploymentsScheduler(Scheduler):\n\"\"\"\n    A scheduler that only schedules deployments that were updated very recently.\n    This scheduler can run on a tight loop and ensure that runs from\n    newly-created or updated deployments are rapidly scheduled without having to\n    wait for the \"main\" scheduler to complete its loop.\n\n    Note that scheduling is idempotent, so its ok for this scheduler to attempt\n    to schedule the same deployments as the main scheduler. It's purpose is to\n    accelerate scheduling for any deployments that users are interacting with.\n    \"\"\"\n\n    # this scheduler runs on a tight loop\n    loop_seconds = 5\n\n    @inject_db\n    def _get_select_deployments_to_schedule_query(self, db: PrefectDBInterface):\n\"\"\"\n        Returns a sqlalchemy query for selecting deployments to schedule\n        \"\"\"\n        query = (\n            sa.select(db.Deployment.id)\n            .where(\n                db.Deployment.is_schedule_active.is_(True),\n                db.Deployment.schedule.is_not(None),\n                # use a slightly larger window than the loop interval to pick up\n                # any deployments that were created *while* the scheduler was\n                # last running (assuming the scheduler takes less than one\n                # second to run). Scheduling is idempotent so picking up schedules\n                # multiple times is not a concern.\n                db.Deployment.updated\n                >= pendulum.now(\"UTC\").subtract(seconds=self.loop_seconds + 1),\n            )\n            .order_by(db.Deployment.id)\n            .limit(self.deployment_batch_size)\n        )\n        return query\n
"},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.Scheduler","title":"Scheduler","text":"

Bases: LoopService

A loop service that schedules flow runs from deployments.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/scheduler.py
class Scheduler(LoopService):\n\"\"\"\n    A loop service that schedules flow runs from deployments.\n    \"\"\"\n\n    # the main scheduler takes its loop interval from\n    # PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS\n    loop_seconds = None\n\n    def __init__(self, loop_seconds: float = None, **kwargs):\n        super().__init__(\n            loop_seconds=(\n                loop_seconds\n                or self.loop_seconds\n                or PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS.value()\n            ),\n            **kwargs,\n        )\n        self.deployment_batch_size: int = (\n            PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE.value()\n        )\n        self.max_runs: int = PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS.value()\n        self.min_runs: int = PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS.value()\n        self.max_scheduled_time: datetime.timedelta = (\n            PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME.value()\n        )\n        self.min_scheduled_time: datetime.timedelta = (\n            PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME.value()\n        )\n        self.insert_batch_size = (\n            PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE.value()\n        )\n\n    @inject_db\n    async def run_once(self, db: PrefectDBInterface):\n\"\"\"\n        Schedule flow runs by:\n\n        - Querying for deployments with active schedules\n        - Generating the next set of flow runs based on each deployments schedule\n        - Inserting all scheduled flow runs into the database\n\n        All inserted flow runs are committed to the database at the termination of the\n        loop.\n        \"\"\"\n        total_inserted_runs = 0\n\n        last_id = None\n        while True:\n            async with db.session_context(begin_transaction=False) as session:\n                query = self._get_select_deployments_to_schedule_query()\n\n                # use cursor based pagination\n                if last_id:\n                    query = query.where(db.Deployment.id > last_id)\n\n                result = await session.execute(query)\n                deployment_ids = result.scalars().unique().all()\n\n                # collect runs across all deployments\n                try:\n                    runs_to_insert = await self._collect_flow_runs(\n                        session=session, deployment_ids=deployment_ids\n                    )\n                except TryAgain:\n                    continue\n\n            # bulk insert the runs based on batch size setting\n            for batch in batched_iterable(runs_to_insert, self.insert_batch_size):\n                async with db.session_context(begin_transaction=True) as session:\n                    inserted_runs = await self._insert_scheduled_flow_runs(\n                        session=session, runs=batch\n                    )\n                    total_inserted_runs += len(inserted_runs)\n\n            # if this is the last page of deployments, exit the loop\n            if len(deployment_ids) < self.deployment_batch_size:\n                break\n            else:\n                # record the last deployment ID\n                last_id = deployment_ids[-1]\n\n        self.logger.info(f\"Scheduled {total_inserted_runs} runs.\")\n\n    @inject_db\n    def _get_select_deployments_to_schedule_query(self, db: PrefectDBInterface):\n\"\"\"\n        Returns a sqlalchemy query for selecting deployments to schedule.\n\n        The query gets the IDs of any deployments with:\n\n            - an active 
schedule\n            - EITHER:\n                - fewer than `min_runs` auto-scheduled runs\n                - OR the max scheduled time is less than `max_scheduled_time` in the future\n        \"\"\"\n        now = pendulum.now(\"UTC\")\n        query = (\n            sa.select(db.Deployment.id)\n            .select_from(db.Deployment)\n            # TODO: on Postgres, this could be replaced with a lateral join that\n            # sorts by `next_scheduled_start_time desc` and limits by\n            # `self.min_runs` for a ~ 50% speedup. At the time of writing,\n            # performance of this universal query appears to be fast enough that\n            # this optimization is not worth maintaining db-specific queries\n            .join(\n                db.FlowRun,\n                # join on matching deployments, only picking up future scheduled runs\n                sa.and_(\n                    db.Deployment.id == db.FlowRun.deployment_id,\n                    db.FlowRun.state_type == StateType.SCHEDULED,\n                    db.FlowRun.next_scheduled_start_time >= now,\n                    db.FlowRun.auto_scheduled.is_(True),\n                ),\n                isouter=True,\n            )\n            .where(\n                db.Deployment.is_schedule_active.is_(True),\n                db.Deployment.schedule.is_not(None),\n            )\n            .group_by(db.Deployment.id)\n            # having EITHER fewer than three runs OR runs not scheduled far enough out\n            .having(\n                sa.or_(\n                    sa.func.count(db.FlowRun.next_scheduled_start_time) < self.min_runs,\n                    sa.func.max(db.FlowRun.next_scheduled_start_time)\n                    < now + self.min_scheduled_time,\n                )\n            )\n            .order_by(db.Deployment.id)\n            .limit(self.deployment_batch_size)\n        )\n        return query\n\n    async def _collect_flow_runs(\n        self,\n        session: sa.orm.Session,\n        deployment_ids: List[UUID],\n    ) -> List[Dict]:\n        runs_to_insert = []\n        for deployment_id in deployment_ids:\n            now = pendulum.now(\"UTC\")\n            # guard against erroneously configured schedules\n            try:\n                runs_to_insert.extend(\n                    await self._generate_scheduled_flow_runs(\n                        session=session,\n                        deployment_id=deployment_id,\n                        start_time=now,\n                        end_time=now + self.max_scheduled_time,\n                        min_time=self.min_scheduled_time,\n                        min_runs=self.min_runs,\n                        max_runs=self.max_runs,\n                    )\n                )\n            except Exception:\n                self.logger.exception(\n                    f\"Error scheduling deployment {deployment_id!r}.\",\n                )\n            finally:\n                connection = await session.connection()\n                if connection.invalidated:\n                    # If the error we handled above was the kind of database error that\n                    # causes underlying transaction to rollback and the connection to\n                    # become invalidated, rollback this session.  
Errors that may cause\n                    # this are connection drops, database restarts, and things of the\n                    # sort.\n                    #\n                    # This rollback _does not rollback a transaction_, since that has\n                    # actually already happened due to the error above.  It brings the\n                    # Python session in sync with underlying connection so that when we\n                    # exec the outer with block, the context manager will not attempt to\n                    # commit the session.\n                    #\n                    # Then, raise TryAgain to break out of these nested loops, back to\n                    # the outer loop, where we'll begin a new transaction with\n                    # session.begin() in the next loop iteration.\n                    await session.rollback()\n                    raise TryAgain()\n        return runs_to_insert\n\n    @inject_db\n    async def _generate_scheduled_flow_runs(\n        self,\n        session: sa.orm.Session,\n        deployment_id: UUID,\n        start_time: datetime.datetime,\n        end_time: datetime.datetime,\n        min_time: datetime.timedelta,\n        min_runs: int,\n        max_runs: int,\n        db: PrefectDBInterface,\n    ) -> List[Dict]:\n\"\"\"\n        Given a `deployment_id` and schedule params, generates a list of flow run\n        objects and associated scheduled states that represent scheduled flow runs.\n\n        Pass-through method for overrides.\n\n\n        Args:\n            session: a database session\n            deployment_id: the id of the deployment to schedule\n            start_time: the time from which to start scheduling runs\n            end_time: runs will be scheduled until at most this time\n            min_time: runs will be scheduled until at least this far in the future\n            min_runs: a minimum amount of runs to schedule\n            max_runs: a maximum amount of runs to schedule\n\n        This function will generate the minimum number of runs that satisfy the min\n        and max times, and the min and max counts. Specifically, the following order\n        will be respected:\n\n            - Runs will be generated starting on or after the `start_time`\n            - No more than `max_runs` runs will be generated\n            - No runs will be generated after `end_time` is reached\n            - At least `min_runs` runs will be generated\n            - Runs will be generated until at least `start_time + min_time` is reached\n\n        \"\"\"\n        return await models.deployments._generate_scheduled_flow_runs(\n            session=session,\n            deployment_id=deployment_id,\n            start_time=start_time,\n            end_time=end_time,\n            min_time=min_time,\n            min_runs=min_runs,\n            max_runs=max_runs,\n        )\n\n    @inject_db\n    async def _insert_scheduled_flow_runs(\n        self,\n        session: sa.orm.Session,\n        runs: List[Dict],\n        db: PrefectDBInterface,\n    ) -> List[UUID]:\n\"\"\"\n        Given a list of flow runs to schedule, as generated by\n        `_generate_scheduled_flow_runs`, inserts them into the database. Note this is a\n        separate method to facilitate batch operations on many scheduled runs.\n\n        Pass-through method for overrides.\n        \"\"\"\n        return await models.deployments._insert_scheduled_flow_runs(\n            session=session, runs=runs\n        )\n
"},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.Scheduler.run_once","title":"run_once async","text":"

Schedule flow runs by querying for deployments with active schedules, generating the next set of flow runs based on each deployment's schedule, and inserting all scheduled flow runs into the database.

All inserted flow runs are committed to the database at the termination of the loop.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/scheduler.py
@inject_db\nasync def run_once(self, db: PrefectDBInterface):\n\"\"\"\n    Schedule flow runs by:\n\n    - Querying for deployments with active schedules\n    - Generating the next set of flow runs based on each deployments schedule\n    - Inserting all scheduled flow runs into the database\n\n    All inserted flow runs are committed to the database at the termination of the\n    loop.\n    \"\"\"\n    total_inserted_runs = 0\n\n    last_id = None\n    while True:\n        async with db.session_context(begin_transaction=False) as session:\n            query = self._get_select_deployments_to_schedule_query()\n\n            # use cursor based pagination\n            if last_id:\n                query = query.where(db.Deployment.id > last_id)\n\n            result = await session.execute(query)\n            deployment_ids = result.scalars().unique().all()\n\n            # collect runs across all deployments\n            try:\n                runs_to_insert = await self._collect_flow_runs(\n                    session=session, deployment_ids=deployment_ids\n                )\n            except TryAgain:\n                continue\n\n        # bulk insert the runs based on batch size setting\n        for batch in batched_iterable(runs_to_insert, self.insert_batch_size):\n            async with db.session_context(begin_transaction=True) as session:\n                inserted_runs = await self._insert_scheduled_flow_runs(\n                    session=session, runs=batch\n                )\n                total_inserted_runs += len(inserted_runs)\n\n        # if this is the last page of deployments, exit the loop\n        if len(deployment_ids) < self.deployment_batch_size:\n            break\n        else:\n            # record the last deployment ID\n            last_id = deployment_ids[-1]\n\n    self.logger.info(f\"Scheduled {total_inserted_runs} runs.\")\n
"},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.TryAgain","title":"TryAgain","text":"

Bases: Exception

Internal control-flow exception used to retry the Scheduler's main loop

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/scheduler.py
class TryAgain(Exception):\n\"\"\"Internal control-flow exception used to retry the Scheduler's main loop\"\"\"\n
"},{"location":"api-ref/server/utilities/database/","title":"server.utilities.database","text":""},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database","title":"prefect.server.utilities.database","text":"

Utilities for interacting with Prefect REST API database and ORM layer.

Prefect supports both SQLite and Postgres. Many of these utilities allow the Prefect REST API to seamlessly switch between the two.

"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.GenerateUUID","title":"GenerateUUID","text":"

Bases: FunctionElement

Platform-independent UUID default generator. Note the actual functionality for this class is specified in the compiles-decorated functions below

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class GenerateUUID(FunctionElement):\n\"\"\"\n    Platform-independent UUID default generator.\n    Note the actual functionality for this class is specified in the\n    `compiles`-decorated functions below\n    \"\"\"\n\n    name = \"uuid_default\"\n
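
A sketch of how this default might be attached to a column, using a hypothetical table definition:

import sqlalchemy as sa\nfrom sqlalchemy.orm import declarative_base\nfrom prefect.server.utilities.database import UUID, GenerateUUID\n\nBase = declarative_base()\n\nclass Widget(Base):\n    # hypothetical table for illustration\n    __tablename__ = \"widget\"\n    # database-side UUID default that works on both Postgres and SQLite\n    id = sa.Column(UUID(), primary_key=True, server_default=GenerateUUID())\n    name = sa.Column(sa.String)\n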
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.JSON","title":"JSON","text":"

Bases: TypeDecorator

JSON type that returns SQLAlchemy's dialect-specific JSON types, where possible. Uses generic JSON otherwise.

The \"base\" type is postgresql.JSONB to expose useful methods prior to SQL compilation

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class JSON(TypeDecorator):\n\"\"\"\n    JSON type that returns SQLAlchemy's dialect-specific JSON types, where\n    possible. Uses generic JSON otherwise.\n\n    The \"base\" type is postgresql.JSONB to expose useful methods prior\n    to SQL compilation\n    \"\"\"\n\n    impl = postgresql.JSONB\n    cache_ok = True\n\n    def load_dialect_impl(self, dialect):\n        if dialect.name == \"postgresql\":\n            return dialect.type_descriptor(postgresql.JSONB(none_as_null=True))\n        elif dialect.name == \"sqlite\":\n            return dialect.type_descriptor(sqlite.JSON(none_as_null=True))\n        else:\n            return dialect.type_descriptor(sa.JSON(none_as_null=True))\n\n    def process_bind_param(self, value, dialect):\n\"\"\"Prepares the given value to be used as a JSON field in a parameter binding\"\"\"\n        if not value:\n            return value\n\n        # PostgreSQL does not support the floating point extrema values `NaN`,\n        # `-Infinity`, or `Infinity`\n        # https://www.postgresql.org/docs/current/datatype-json.html#JSON-TYPE-MAPPING-TABLE\n        #\n        # SQLite supports storing and retrieving full JSON values that include\n        # `NaN`, `-Infinity`, or `Infinity`, but any query that requires SQLite to parse\n        # the value (like `json_extract`) will fail.\n        #\n        # Replace any `NaN`, `-Infinity`, or `Infinity` values with `None` in the\n        # returned value.  See more about `parse_constant` at\n        # https://docs.python.org/3/library/json.html#json.load.\n        return json.loads(json.dumps(value), parse_constant=lambda c: None)\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.Pydantic","title":"Pydantic","text":"

Bases: TypeDecorator

A pydantic type that converts inserted parameters to json and converts read values to the pydantic type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class Pydantic(TypeDecorator):\n\"\"\"\n    A pydantic type that converts inserted parameters to\n    json and converts read values to the pydantic type.\n    \"\"\"\n\n    impl = JSON\n    cache_ok = True\n\n    def __init__(self, pydantic_type, sa_column_type=None):\n        super().__init__()\n        self._pydantic_type = pydantic_type\n        if sa_column_type is not None:\n            self.impl = sa_column_type\n\n    def process_bind_param(self, value, dialect):\n        if value is None:\n            return None\n        # parse the value to ensure it complies with the schema\n        # (this will raise validation errors if not)\n        value = pydantic.parse_obj_as(self._pydantic_type, value)\n        # sqlalchemy requires the bind parameter's value to be a python-native\n        # collection of JSON-compatible objects. we achieve that by dumping the\n        # value to a json string using the pydantic JSON encoder and re-parsing\n        # it into a python-native form.\n        return json.loads(json.dumps(value, default=pydantic.json.pydantic_encoder))\n\n    def process_result_value(self, value, dialect):\n        if value is not None:\n            # load the json object into a fully hydrated typed object\n            return pydantic.parse_obj_as(self._pydantic_type, value)\n
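
A sketch of mapping a column to a Pydantic model with this type; the Point model and Marker table are hypothetical:

import pydantic\nimport sqlalchemy as sa\nfrom sqlalchemy.orm import declarative_base\nfrom prefect.server.utilities.database import Pydantic\n\nclass Point(pydantic.BaseModel):\n    x: int\n    y: int\n\nBase = declarative_base()\n\nclass Marker(Base):\n    __tablename__ = \"marker\"\n    id = sa.Column(sa.Integer, primary_key=True)\n    # written to the database as JSON, read back as a Point instance\n    point = sa.Column(Pydantic(Point))\n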
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.Timestamp","title":"Timestamp","text":"

Bases: TypeDecorator

TypeDecorator that ensures that timestamps have a timezone.

For SQLite, all timestamps are converted to UTC (since they are stored as naive timestamps without timezones) and recovered as UTC.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class Timestamp(TypeDecorator):\n\"\"\"TypeDecorator that ensures that timestamps have a timezone.\n\n    For SQLite, all timestamps are converted to UTC (since they are stored\n    as naive timestamps without timezones) and recovered as UTC.\n    \"\"\"\n\n    impl = sa.TIMESTAMP(timezone=True)\n    cache_ok = True\n\n    def load_dialect_impl(self, dialect):\n        if dialect.name == \"postgresql\":\n            return dialect.type_descriptor(postgresql.TIMESTAMP(timezone=True))\n        elif dialect.name == \"sqlite\":\n            return dialect.type_descriptor(\n                sqlite.DATETIME(\n                    # SQLite is very particular about datetimes, and performs all comparisons\n                    # as alphanumeric comparisons without regard for actual timestamp\n                    # semantics or timezones. Therefore, it's important to have uniform\n                    # and sortable datetime representations. The default is an ISO8601-compatible\n                    # string with NO time zone and a space (\" \") delimeter between the date\n                    # and the time. The below settings can be used to add a \"T\" delimiter but\n                    # will require all other sqlite datetimes to be set similarly, including\n                    # the custom default value for datetime columns and any handwritten SQL\n                    # formed with `strftime()`.\n                    #\n                    # store with \"T\" separator for time\n                    # storage_format=(\n                    #     \"%(year)04d-%(month)02d-%(day)02d\"\n                    #     \"T%(hour)02d:%(minute)02d:%(second)02d.%(microsecond)06d\"\n                    # ),\n                    # handle ISO 8601 with \"T\" or \" \" as the time separator\n                    # regexp=r\"(\\d+)-(\\d+)-(\\d+)[T ](\\d+):(\\d+):(\\d+).(\\d+)\",\n                )\n            )\n        else:\n            return dialect.type_descriptor(sa.TIMESTAMP(timezone=True))\n\n    def process_bind_param(self, value, dialect):\n        if value is None:\n            return None\n        else:\n            if value.tzinfo is None:\n                raise ValueError(\"Timestamps must have a timezone.\")\n            elif dialect.name == \"sqlite\":\n                return pendulum.instance(value).in_timezone(\"UTC\")\n            else:\n                return value\n\n    def process_result_value(self, value, dialect):\n        # retrieve timestamps in their native timezone (or UTC)\n        if value is not None:\n            return pendulum.instance(value).in_timezone(\"utc\")\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.UUID","title":"UUID","text":"

Bases: TypeDecorator

Platform-independent UUID type.

Uses PostgreSQL's UUID type, otherwise uses CHAR(36), storing as stringified hex values with hyphens.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class UUID(TypeDecorator):\n\"\"\"\n    Platform-independent UUID type.\n\n    Uses PostgreSQL's UUID type, otherwise uses\n    CHAR(36), storing as stringified hex values with\n    hyphens.\n    \"\"\"\n\n    impl = TypeEngine\n    cache_ok = True\n\n    def load_dialect_impl(self, dialect):\n        if dialect.name == \"postgresql\":\n            return dialect.type_descriptor(postgresql.UUID())\n        else:\n            return dialect.type_descriptor(CHAR(36))\n\n    def process_bind_param(self, value, dialect):\n        if value is None:\n            return None\n        elif dialect.name == \"postgresql\":\n            return str(value)\n        elif isinstance(value, uuid.UUID):\n            return str(value)\n        else:\n            return str(uuid.UUID(value))\n\n    def process_result_value(self, value, dialect):\n        if value is None:\n            return value\n        else:\n            if not isinstance(value, uuid.UUID):\n                value = uuid.UUID(value)\n            return value\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.date_add","title":"date_add","text":"

Bases: FunctionElement

Platform-independent way to add a date and an interval.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class date_add(FunctionElement):\n\"\"\"\n    Platform-independent way to add a date and an interval.\n    \"\"\"\n\n    type = Timestamp()\n    name = \"date_add\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, dt, interval):\n        self.dt = dt\n        self.interval = interval\n        super().__init__()\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.date_diff","title":"date_diff","text":"

Bases: FunctionElement

Platform-independent difference of dates. Computes d1 - d2.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class date_diff(FunctionElement):\n\"\"\"\n    Platform-independent difference of dates. Computes d1 - d2.\n    \"\"\"\n\n    type = sa.Interval()\n    name = \"date_diff\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, d1, d2):\n        self.d1 = d1\n        self.d2 = d2\n        super().__init__()\n
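
A sketch of composing date_add and date_diff into column expressions (the concrete values are illustrative):

import datetime\nimport pendulum\nimport sqlalchemy as sa\nfrom prefect.server.utilities.database import date_add, date_diff\n\nnow = pendulum.now(\"UTC\")\n\n# expressions that compile to dialect-specific SQL on Postgres and SQLite\nfive_minutes_later = date_add(now, datetime.timedelta(minutes=5))\nelapsed = date_diff(now, now.subtract(hours=1))\nquery = sa.select(five_minutes_later, elapsed)\n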
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.interval_add","title":"interval_add","text":"

Bases: FunctionElement

Platform-independent way to add two intervals.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class interval_add(FunctionElement):\n\"\"\"\n    Platform-independent way to add two intervals.\n    \"\"\"\n\n    type = sa.Interval()\n    name = \"interval_add\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, i1, i2):\n        self.i1 = i1\n        self.i2 = i2\n        super().__init__()\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_contains","title":"json_contains","text":"

Bases: FunctionElement

Platform independent json_contains operator, tests if the left expression contains the right expression.

On postgres this is equivalent to the @> containment operator. https://www.postgresql.org/docs/current/functions-json.html

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class json_contains(FunctionElement):\n\"\"\"\n    Platform independent json_contains operator, tests if the\n    `left` expression contains the `right` expression.\n\n    On postgres this is equivalent to the @> containment operator.\n    https://www.postgresql.org/docs/current/functions-json.html\n    \"\"\"\n\n    type = BOOLEAN\n    name = \"json_contains\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, left, right):\n        self.left = left\n        self.right = right\n        super().__init__()\n
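
A sketch of using this element in a query against a hypothetical table with a JSON column:

import sqlalchemy as sa\nfrom prefect.server.utilities.database import JSON, json_contains\n\nmetadata = sa.MetaData()\n\n# hypothetical table for illustration\nitems = sa.Table(\n    \"items\",\n    metadata,\n    sa.Column(\"id\", sa.Integer, primary_key=True),\n    sa.Column(\"labels\", JSON),\n)\n\n# rows whose labels JSON contains [\"prod\"]\nquery = sa.select(items.c.id).where(json_contains(items.c.labels, [\"prod\"]))\n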
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_has_all_keys","title":"json_has_all_keys","text":"

Bases: FunctionElement

Platform independent json_has_all_keys operator.

On postgres this is equivalent to the ?& existence operator. https://www.postgresql.org/docs/current/functions-json.html

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class json_has_all_keys(FunctionElement):\n\"\"\"Platform independent json_has_all_keys operator.\n\n    On postgres this is equivalent to the ?& existence operator.\n    https://www.postgresql.org/docs/current/functions-json.html\n    \"\"\"\n\n    type = BOOLEAN\n    name = \"json_has_all_keys\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, json_expr, values: List):\n        self.json_expr = json_expr\n        if isinstance(values, list) and not all(isinstance(v, str) for v in values):\n            raise ValueError(\n                \"json_has_all_key values must be strings if provided as a literal list\"\n            )\n        self.values = values\n        super().__init__()\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_has_any_key","title":"json_has_any_key","text":"

Bases: FunctionElement

Platform independent json_has_any_key operator.

On postgres this is equivalent to the ?| existence operator. https://www.postgresql.org/docs/current/functions-json.html

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class json_has_any_key(FunctionElement):\n\"\"\"\n    Platform independent json_has_any_key operator.\n\n    On postgres this is equivalent to the ?| existence operator.\n    https://www.postgresql.org/docs/current/functions-json.html\n    \"\"\"\n\n    type = BOOLEAN\n    name = \"json_has_any_key\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, json_expr, values: List):\n        self.json_expr = json_expr\n        if not all(isinstance(v, str) for v in values):\n            raise ValueError(\"json_has_any_key values must be strings\")\n        self.values = values\n        super().__init__()\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.now","title":"now","text":"

Bases: FunctionElement

Platform-independent \"now\" generator.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class now(FunctionElement):\n\"\"\"\n    Platform-independent \"now\" generator.\n    \"\"\"\n\n    type = Timestamp()\n    name = \"now\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = True\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.get_dialect","title":"get_dialect","text":"

Get the dialect of a session, engine, or connection url.

Primary use case is figuring out whether the Prefect REST API is communicating with SQLite or Postgres.

Example
import prefect.settings\nfrom prefect.server.utilities.database import get_dialect\n\ndialect = get_dialect(PREFECT_API_DATABASE_CONNECTION_URL.value())\nif dialect == \"sqlite\":\n    print(\"Using SQLite!\")\nelse:\n    print(\"Using Postgres!\")\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
def get_dialect(\n    obj: Union[str, sa.orm.Session, sa.engine.Engine],\n) -> sa.engine.Dialect:\n\"\"\"\n    Get the dialect of a session, engine, or connection url.\n\n    Primary use case is figuring out whether the Prefect REST API is communicating with\n    SQLite or Postgres.\n\n    Example:\n        ```python\n        import prefect.settings\n        from prefect.server.utilities.database import get_dialect\n\n        dialect = get_dialect(PREFECT_API_DATABASE_CONNECTION_URL.value())\n        if dialect == \"sqlite\":\n            print(\"Using SQLite!\")\n        else:\n            print(\"Using Postgres!\")\n        ```\n    \"\"\"\n    if isinstance(obj, sa.orm.Session):\n        url = obj.bind.url\n    elif isinstance(obj, sa.engine.Engine):\n        url = obj.url\n    else:\n        url = sa.engine.url.make_url(obj)\n\n    return url.get_dialect()\n
"},{"location":"api-ref/server/utilities/schemas/","title":"server.utilities.schemas","text":""},{"location":"api-ref/server/utilities/schemas/#prefect.server.utilities.schemas","title":"prefect.server.utilities.schemas","text":"

Utilities for creating and working with Prefect REST API schemas.

"},{"location":"api-ref/server/utilities/schemas/#prefect.server.utilities.schemas.SubclassMethodMixin","title":"SubclassMethodMixin","text":"Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
class SubclassMethodMixin:\n    @classmethod\n    def subclass(\n        cls: Type[B],\n        name: str = None,\n        include_fields: List[str] = None,\n        exclude_fields: List[str] = None,\n    ) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n        See `pydantic_subclass()`.\n\n        Args:\n            name (str, optional): a name for the subclass\n            include_fields (List[str], optional): fields to include\n            exclude_fields (List[str], optional): fields to exclude\n\n        Returns:\n            BaseModel: a subclass of this class\n        \"\"\"\n        return pydantic_subclass(\n            base=cls,\n            name=name,\n            include_fields=include_fields,\n            exclude_fields=exclude_fields,\n        )\n
"},{"location":"api-ref/server/utilities/schemas/#prefect.server.utilities.schemas.SubclassMethodMixin.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/utilities/schemas/#prefect.server.utilities.schemas.pydantic_subclass","title":"pydantic_subclass","text":"

Creates a subclass of a Pydantic model that excludes certain fields. Pydantic models use the __fields__ attribute of their parent class to determine inherited fields, so to create a subclass without certain fields, we temporarily remove those fields from the parent's __fields__ and use create_model to dynamically generate a new subclass.

Parameters:

Name Type Description Default base pydantic.BaseModel

a Pydantic BaseModel

required name str

a name for the subclass. If not provided it will have the same name as the base class.

None include_fields List[str]

a set of field names to include. If None, all fields are included.

None exclude_fields List[str]

a list of field names to exclude. If None, no fields are excluded.

None

Returns:

Type Description Type[B]

pydantic.BaseModel: a new model subclass that contains only the specified fields.

Example

To subclass a model with a subset of fields:

class Parent(pydantic.BaseModel):\n    x: int = 1\n    y: int = 2\n\nChild = pydantic_subclass(Parent, 'Child', exclude_fields=['y'])\nassert hasattr(Child(), 'x')\nassert not hasattr(Child(), 'y')\n

To subclass a model with a subset of fields but include a new field:

class Child(pydantic_subclass(Parent, exclude_fields=['y'])):\n    z: int = 3\n\nassert hasattr(Child(), 'x')\nassert not hasattr(Child(), 'y')\nassert hasattr(Child(), 'z')\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
def pydantic_subclass(\n    base: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of a Pydantic model that excludes certain fields.\n    Pydantic models use the __fields__ attribute of their parent class to\n    determine inherited fields, so to create a subclass without fields, we\n    temporarily remove those fields from the parent __fields__ and use\n    `create_model` to dynamically generate a new subclass.\n\n    Args:\n        base (pydantic.BaseModel): a Pydantic BaseModel\n        name (str): a name for the subclass. If not provided\n            it will have the same name as the base class.\n        include_fields (List[str]): a set of field names to include.\n            If `None`, all fields are included.\n        exclude_fields (List[str]): a list of field names to exclude.\n            If `None`, no fields are excluded.\n\n    Returns:\n        pydantic.BaseModel: a new model subclass that contains only the specified fields.\n\n    Example:\n        To subclass a model with a subset of fields:\n        ```python\n        class Parent(pydantic.BaseModel):\n            x: int = 1\n            y: int = 2\n\n        Child = pydantic_subclass(Parent, 'Child', exclude_fields=['y'])\n        assert hasattr(Child(), 'x')\n        assert not hasattr(Child(), 'y')\n        ```\n\n        To subclass a model with a subset of fields but include a new field:\n        ```python\n        class Child(pydantic_subclass(Parent, exclude_fields=['y'])):\n            z: int = 3\n\n        assert hasattr(Child(), 'x')\n        assert not hasattr(Child(), 'y')\n        assert hasattr(Child(), 'z')\n        ```\n    \"\"\"\n\n    # collect field names\n    field_names = set(include_fields or base.__fields__)\n    excluded_fields = set(exclude_fields or [])\n    if field_names.difference(base.__fields__):\n        raise ValueError(\n            \"Included fields not found on base class: \"\n            f\"{field_names.difference(base.__fields__)}\"\n        )\n    elif excluded_fields.difference(base.__fields__):\n        raise ValueError(\n            \"Excluded fields not found on base class: \"\n            f\"{excluded_fields.difference(base.__fields__)}\"\n        )\n    field_names.difference_update(excluded_fields)\n\n    # create a new class that inherits from `base` but only contains the specified\n    # pydantic __fields__\n    new_cls = type(\n        name or base.__name__,\n        (base,),\n        {\n            \"__fields__\": {\n                k: copy.copy(v) for k, v in base.__fields__.items() if k in field_names\n            }\n        },\n    )\n\n    return new_cls\n
"},{"location":"api-ref/server/utilities/server/","title":"server.utilities.server","text":""},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server","title":"prefect.server.utilities.server","text":"

Utilities for the Prefect REST API server.

"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.OrionAPIRoute","title":"OrionAPIRoute","text":"

Bases: PrefectAPIRoute

Deprecated. Use PrefectAPIRoute instead.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/server.py
@deprecated_callable(start_date=\"Feb 2023\", help=\"Use `PrefectAPIRoute` instead.\")\nclass OrionAPIRoute(PrefectAPIRoute):\n\"\"\"\n    Deprecated. Use `PrefectAPIRoute` instead.\n    \"\"\"\n
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.OrionRouter","title":"OrionRouter","text":"

Bases: PrefectRouter

Deprecated. Use PrefectRouter instead.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/server.py
@deprecated_callable(start_date=\"Feb 2023\", help=\"Use `PrefectRouter` instead.\")\nclass OrionRouter(PrefectRouter):\n\"\"\"\n    Deprecated. Use `PrefectRouter` instead.\n    \"\"\"\n
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.PrefectAPIRoute","title":"PrefectAPIRoute","text":"

Bases: APIRoute

A FastAPIRoute class which attaches an async stack to requests that exits before a response is returned.

Requests already have request.scope['fastapi_astack'] which is an async stack for the full scope of the request. This stack is used for managing contexts of FastAPI dependencies. If we want to close a dependency before the request is complete (i.e. before returning a response to the user), we need a stack with a different scope. This extension adds this stack at request.state.response_scoped_stack.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/server.py
class PrefectAPIRoute(APIRoute):\n\"\"\"\n    A FastAPIRoute class which attaches an async stack to requests that exits before\n    a response is returned.\n\n    Requests already have `request.scope['fastapi_astack']` which is an async stack for\n    the full scope of the request. This stack is used for managing contexts of FastAPI\n    dependencies. If we want to close a dependency before the request is complete\n    (i.e. before returning a response to the user), we need a stack with a different\n    scope. This extension adds this stack at `request.state.response_scoped_stack`.\n    \"\"\"\n\n    def get_route_handler(self) -> Callable[[Request], Coroutine[Any, Any, Response]]:\n        default_handler = super().get_route_handler()\n\n        async def handle_response_scoped_depends(request: Request) -> Response:\n            # Create a new stack scoped to exit before the response is returned\n            async with AsyncExitStack() as stack:\n                request.state.response_scoped_stack = stack\n                response = await default_handler(request)\n\n            return response\n\n        return handle_response_scoped_depends\n
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.PrefectRouter","title":"PrefectRouter","text":"

Bases: APIRouter

A base class for Prefect REST API routers.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/server.py
class PrefectRouter(APIRouter):\n\"\"\"\n    A base class for Prefect REST API routers.\n    \"\"\"\n\n    def __init__(self, **kwargs: Any) -> None:\n        kwargs.setdefault(\"route_class\", PrefectAPIRoute)\n        super().__init__(**kwargs)\n\n    def add_api_route(\n        self, path: str, endpoint: Callable[..., Any], **kwargs: Any\n    ) -> None:\n\"\"\"\n        Add an API route.\n\n        For routes that return content and have not specified a `response_model`,\n        use return type annotation to infer the response model.\n\n        For routes that return No-Content status codes, explicitly set\n        a `response_class` to ensure nothing is returned in the response body.\n        \"\"\"\n        if kwargs.get(\"status_code\") == status.HTTP_204_NO_CONTENT:\n            # any routes that return No-Content status codes must\n            # explicilty set a response_class that will handle status codes\n            # and not return anything in the body\n            kwargs[\"response_class\"] = Response\n        if kwargs.get(\"response_model\") is None:\n            kwargs[\"response_model\"] = get_type_hints(endpoint).get(\"return\")\n        return super().add_api_route(path, endpoint, **kwargs)\n
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.PrefectRouter.add_api_route","title":"add_api_route","text":"

Add an API route.

For routes that return content and have not specified a response_model, use return type annotation to infer the response model.

For routes that return No-Content status codes, explicitly set a response_class to ensure nothing is returned in the response body.
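
For illustration only, here is a minimal sketch (not taken from the Prefect source) of registering a route on a PrefectRouter: because the endpoint declares a dict return annotation and passes no response_model, the annotation is used to infer the response model.

from prefect.server.utilities.server import PrefectRouter\n\nrouter = PrefectRouter(prefix=\"/examples\", tags=[\"Examples\"])\n\n@router.get(\"/hello\")\nasync def say_hello() -> dict:\n    # No response_model is passed, so the dict return annotation above\n    # is used to infer the response model for this route.\n    return {\"message\": \"hello\"}\n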

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/server.py
def add_api_route(\n    self, path: str, endpoint: Callable[..., Any], **kwargs: Any\n) -> None:\n\"\"\"\n    Add an API route.\n\n    For routes that return content and have not specified a `response_model`,\n    use return type annotation to infer the response model.\n\n    For routes that return No-Content status codes, explicitly set\n    a `response_class` to ensure nothing is returned in the response body.\n    \"\"\"\n    if kwargs.get(\"status_code\") == status.HTTP_204_NO_CONTENT:\n        # any routes that return No-Content status codes must\n        # explicilty set a response_class that will handle status codes\n        # and not return anything in the body\n        kwargs[\"response_class\"] = Response\n    if kwargs.get(\"response_model\") is None:\n        kwargs[\"response_model\"] = get_type_hints(endpoint).get(\"return\")\n    return super().add_api_route(path, endpoint, **kwargs)\n
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.method_paths_from_routes","title":"method_paths_from_routes","text":"

Generate a set of strings describing the given routes in the format: <method> <path>

For example, \"GET /logs/\"

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/server.py
def method_paths_from_routes(routes: Iterable[APIRoute]) -> Set[str]:\n\"\"\"\n    Generate a set of strings describing the given routes in the format: <method> <path>\n\n    For example, \"GET /logs/\"\n    \"\"\"\n    method_paths = set()\n    for route in routes:\n        for method in route.methods:\n            method_paths.add(f\"{method} {route.path}\")\n\n    return method_paths\n
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.response_scoped_dependency","title":"response_scoped_dependency","text":"

Ensure that this dependency closes before the response is returned to the client. By default, FastAPI closes dependencies after sending the response.

Uses an async stack that is exited before the response is returned. This is particularly useful for database sessions which must be committed before the client can do more work.

Do not use a response-scoped dependency within a FastAPI background task.

Background tasks run after FastAPI sends the response, so a response-scoped dependency will already be closed. Use a normal FastAPI dependency instead.

Parameters:

Name Type Description Default dependency Callable

An async callable. FastAPI dependencies may still be used.

required

Returns:

Type Description

A wrapped dependency which will push the dependency context manager onto a stack when called.
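
As an illustrative sketch (not taken from the Prefect source, and using a stand-in fake_session in place of a real database session), an async generator dependency can be wrapped with response_scoped_dependency and injected with FastAPI's Depends as usual; routes created by a PrefectRouter provide the response-scoped stack the wrapper needs:

from contextlib import asynccontextmanager\n\nfrom fastapi import Depends\nfrom prefect.server.utilities.server import PrefectRouter, response_scoped_dependency\n\n\n@asynccontextmanager\nasync def fake_session():\n    # Stand-in for a database session context manager\n    yield {\"connected\": True}\n\n\n@response_scoped_dependency\nasync def get_session():\n    # The response-scoped stack exits this context before the response is returned\n    async with fake_session() as session:\n        yield session\n\n\nrouter = PrefectRouter()\n\n@router.get(\"/items\")\nasync def list_items(session: dict = Depends(get_session)) -> dict:\n    return {\"session\": session}\n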

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/server.py
def response_scoped_dependency(dependency: Callable):\n\"\"\"\n    Ensure that this dependency closes before the response is returned to the client. By\n    default, FastAPI closes dependencies after sending the response.\n\n    Uses an async stack that is exited before the response is returned. This is\n    particularly useful for database sesssions which must be committed before the client\n    can do more work.\n\n    NOTE: Do not use a response-scoped dependency within a FastAPI background task.\n          Background tasks run after FastAPI sends the response, so a response-scoped\n          dependency will already be closed. Use a normal FastAPI dependency instead.\n\n    Args:\n        dependency: An async callable. FastAPI dependencies may still be used.\n\n    Returns:\n        A wrapped `dependency` which will push the `dependency` context manager onto\n        a stack when called.\n    \"\"\"\n    signature = inspect.signature(dependency)\n\n    async def wrapper(*args, request: Request, **kwargs):\n        # Replicate FastAPI behavior of auto-creating a context manager\n        if inspect.isasyncgenfunction(dependency):\n            context_manager = asynccontextmanager(dependency)\n        else:\n            context_manager = dependency\n\n        # Ensure request is provided if requested\n        if \"request\" in signature.parameters:\n            kwargs[\"request\"] = request\n\n        # Enter the route handler provided stack that is closed before responding,\n        # return the value yielded by the wrapped dependency\n        return await request.state.response_scoped_stack.enter_async_context(\n            context_manager(*args, **kwargs)\n        )\n\n    # Ensure that the signature includes `request: Request` to ensure that FastAPI will\n    # inject the request as a dependency; maintain the old signature so those depends\n    # work\n    request_parameter = inspect.signature(wrapper).parameters[\"request\"]\n    functools.update_wrapper(wrapper, dependency)\n\n    if \"request\" not in signature.parameters:\n        new_parameters = signature.parameters.copy()\n        new_parameters[\"request\"] = request_parameter\n        wrapper.__signature__ = signature.replace(\n            parameters=tuple(new_parameters.values())\n        )\n\n    return wrapper\n
"},{"location":"cloud/","title":"Welcome to Prefect Cloud","text":"

Prefect Cloud is a workflow orchestration platform. Prefect Cloud provides all the capabilities of the Prefect server and UI in a hosted environment, plus additional features such as automations, workspaces, and organizations.

Getting Started with Prefect Cloud

Ready to jump right in and start running with Prefect Cloud? See the Cloud Getting Started Guide to create a workspace, configure a local execution environment, and write your first Prefect Cloud-monitored flow run.

Prefect Cloud includes all the features in the open-source Prefect server plus the following:

Prefect Cloud features

Features only available on Prefect Cloud include:

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#user-accounts","title":"User accounts","text":"

When you sign up for Prefect Cloud, a personal account is automatically provisioned for you. A personal account gives you access to profile settings where you can view and administer your:

As a personal account owner, you can create a workspace and invite collaborators to your workspace.

Organizations in Prefect Cloud enable you to invite users to collaborate in your workspaces with the ability to set role-based access controls (RBAC) for organization members. Organizations may also configure service accounts with API keys for non-user access to the Prefect Cloud API.

Prefect Cloud plans for teams of every size

See the Prefect Cloud plans for details on options for individual users and teams.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#workspaces","title":"Workspaces","text":"

A workspace is an isolated environment within Prefect Cloud for your flows, deployments, and block configuration. See the Workspaces documentation for more information about configuring and using workspaces.

Each workspace keeps track of its own:

When you first log into Prefect Cloud and create your workspace, it will most likely be empty. Don't Panic \u2014 you just haven't run any flows tracked by this workspace yet. See the Prefect Cloud Quickstart to configure a local execution environment and start tracking flow runs in Prefect Cloud.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#events","title":"Events","text":"

Prefect Cloud allows you to observe events: a feed of activity recording what's happening across your stack, powering features such as flow run logs, audit logs, and automations.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#automations","title":"Automations","text":"

Prefect Cloud automations provide notification capabilities beyond those available in the open-source Prefect server. Automations enable you to configure triggers and actions that can kick off flow runs, pause deployments, or send custom notifications in response to real-time monitoring events.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#organizations","title":"Organizations","text":"

A Prefect Cloud organization is a type of account available on Prefect Cloud that enables more extensive and granular control over workspace collaboration. Within an organization account you can:

See the Organizations documentation for more information about managing users, service accounts, and workspaces in a Prefect Cloud organization.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#service-accounts","title":"Service accounts","text":"

Service accounts enable you to create a Prefect Cloud API key that is not associated with a user account. Service accounts are typically used to configure API access for running agents or executing flow runs on remote infrastructure.

See the service accounts documentation for more information about creating and managing service accounts in a Prefect Cloud organization.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#roles-and-custom-permissions","title":"Roles and custom permissions","text":"

Role-based access control (RBAC) functionality in Prefect Cloud enables you to assign users granular permissions to perform certain activities within an organization or a workspace.

See the role-based access controls (RBAC) documentation for more information about managing user roles in a Prefect Cloud organization.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#single-sign-on-sso","title":"Single Sign-on (SSO)","text":"

Prefect Cloud's Organization and Enterprise plans offer single sign-on (SSO) authentication integration with your team\u2019s identity provider. SSO integration can be set up with identity providers that support OIDC and SAML.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#audit-log","title":"Audit Log","text":"

Prefect Cloud's Organization and Enterprise plans offer Audit Log compliance and transparency tools. Audit logs provide a chronological record of activities performed by users in your organization, allowing you to monitor detailed actions for security and compliance purposes.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#prefect-cloud-rest-api","title":"Prefect Cloud REST API","text":"

The Prefect REST API is used for communicating data from Prefect clients to Prefect Cloud or a local Prefect server so that orchestration can be performed. This API is mainly consumed by Prefect clients like the Prefect Python Client or the Prefect UI.

Prefect Cloud REST API interactive documentation

Prefect Cloud REST API documentation is available at https://app.prefect.cloud/api/docs.
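
As an illustrative sketch, the Prefect Python Client can query this same API using the PREFECT_API_URL and PREFECT_API_KEY configured in your environment; the example below assumes the workspace already contains at least one flow run.

import asyncio\n\nfrom prefect import get_client\n\nasync def list_recent_flow_runs():\n    async with get_client() as client:\n        # Read flow runs from the Prefect Cloud REST API for the current workspace\n        flow_runs = await client.read_flow_runs(limit=5)\n        for run in flow_runs:\n            print(run.name, run.state_type)\n\nasyncio.run(list_recent_flow_runs())\n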

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#start-using-prefect-cloud","title":"Start using Prefect Cloud","text":"

To create an account or sign in with an existing Prefect Cloud account, go to https://app.prefect.cloud/.

Then follow the steps in our Prefect Cloud Quickstart to create a workspace, configure a local execution environment, and start running workflows with Prefect Cloud.

Need help?

Get your questions answered with a Prefect product advocate by Booking A Rubber Duck!

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/automations/","title":"Automations","text":"

Automations in Prefect Cloud enable you to configure actions that Prefect executes automatically based on trigger conditions related to your flows and work pools.

Using triggers and actions you can automatically kick off flow runs, pause deployments, or send custom notifications in response to real-time monitoring events.

Automations are only available in Prefect Cloud

Notifications in an open-source Prefect server provide a subset of the notification message-sending features available in Automations.

","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/automations/#automations-overview","title":"Automations overview","text":"

The Automations page provides an overview of all configured automations for your workspace.

Selecting the toggle next to an automation pauses execution of the automation.

The button next to the toggle provides commands to copy the automation ID, edit the automation, or delete the automation.

Select the name of an automation to view Details about it and relevant Events.

","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/automations/#create-an-automation","title":"Create an automation","text":"

On the Automations page, select the + icon to create a new automation. You'll be prompted to configure:

","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/automations/#triggers","title":"Triggers","text":"

Triggers specify the conditions under which your action should be performed. Triggers can be of several types, including triggers based on:

Automations API

The automations API enables further programmatic customization of trigger and action policies based on arbitrary events.

Importantly, triggers can be configured not only in reaction to events, but also proactively: to trigger in the absence of an event you expect to see.

For example, in the case of flow run state change triggers, you might expect production flows to finish in no longer than thirty minutes. But transient infrastructure or network issues could cause your flow to get \u201cstuck\u201d in a running state. A trigger could kick off an action if the flow stays in a running state for more than 30 minutes. This action could be on the flow itself, such as canceling or restarting it, or it could take the form of a notification so someone can take manual remediation steps.

Work queue health

A work queue is \"unhealthy\" if it has not been polled in over 60 seconds OR if it has one or more late runs.

Custom Triggers

Custom triggers allow advanced configuration of the conditions on which a trigger executes its actions.

For example, if you would only like a trigger to execute an action if it receives 2 flow run failure events of a specific deployment within 10 seconds, you could paste in the following trigger configuration:

{\n\"match\": {\n\"prefect.resource.id\": \"prefect.flow-run.*\"\n},\n\"match_related\": {\n\"prefect.resource.id\": \"prefect.deployment.70cb25fe-e33d-4f96-b1bc-74aa4e50b761\",\n\"prefect.resource.role\": \"deployment\"\n},\n\"for_each\": [\n\"prefect.resource.id\"\n],\n\"after\": [],\n\"expect\": [\n\"prefect.flow-run.Failed\"\n],\n\"posture\": \"Reactive\",\n\"threshold\": 2,\n\"within\": 10\n}\n

Matching on multiple resources

Each key in match and match_related can accept a list of multiple values that are OR'd together. For example, to match on multiple deployments:

\"match_related\": {\n\"prefect.resource.id\": [\n\"prefect.deployment.70cb25fe-e33d-4f96-b1bc-74aa4e50b761\",\n\"prefect.deployment.c33b8eaa-1ba7-43c4-ac43-7904a9550611\"\n],\n\"prefect.resource.role\": \"deployment\"\n},\n

Or, if your work queue enters an unhealthy state and you want your trigger to execute an action if it doesn't recover within 30 minutes, you could paste in the following trigger configuration:

{\n\"match\": {\n\"prefect.resource.id\": \"prefect.work-queue.70cb25fe-e33d-4f96-b1bc-74aa4e50b761\"\n},\n\"match_related\": {},\n\"for_each\": [\n\"prefect.resource.id\"\n],\n\"after\": [\n\"prefect.work-queue.unhealthy\"\n],\n\"expect\": [\n\"prefect.work-queue.healthy\"\n],\n\"posture\": \"Proactive\",\n\"threshold\": 1,\n\"within\": 1800\n}\n
","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/automations/#actions","title":"Actions","text":"

Actions specify what your automation does when its trigger criteria are met. Current action types include:

","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/automations/#selected-and-inferred-action-targets","title":"Selected and inferred action targets","text":"

Some actions require you to either select the target of the action, or specify that the target of the action should be inferred.

Selected targets are simple, and useful for when you know exactly what object your action should act on \u2014 for example, the case of a cleanup flow you want to run or a specific notification you\u2019d like to send.

Inferred targets are deduced from the trigger itself.

For example, if a trigger fires on a flow run that is stuck in a running state, and the action is to cancel an inferred flow run, the flow run to cancel is inferred as the stuck run that caused the trigger to fire.

Similarly, if a trigger fires on work queue health and the action is to pause an inferred work queue, the work queue to pause is inferred as the unhealthy work queue that caused the trigger to fire.

Prefect tries to infer the relevant event whenever possible, but sometimes one does not exist.

","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/automations/#details","title":"Details","text":"

Specify a name and, optionally, a description for the automation.

","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/automations/#create-an-automation-via-deployment-triggers","title":"Create an automation via deployment triggers","text":"

To enable the simple configuration of event-driven deployments, Prefect provides deployment triggers - a shorthand for creating automations that are linked to specific deployments to run them based on the presence or absence of events.

To define a deployment trigger, add a triggers section to the deployment's YAML definition, for example:

triggers:\n- enabled: true\nmatch:\nprefect.resource.id: my.external.resource\nexpect:\n- external.resource.pinged\nparameters:\nparam_1: \"{{ event }}\"\n

When applied, this will create a linked automation that responds to events from an external resource, and passes that event into the parameters of the executed flow run.

","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/automations/#automation-notifications","title":"Automation notifications","text":"

Notifications enable you to set up automation actions that send a message.

Automation notifications support sending notifications via any predefined block that is capable of and configured to send a message. That includes, for example:

Notification blocks must be pre-configured

Notification blocks must be pre-configured prior to creating a notification action. Any existing blocks capable of sending messages will be shown in the block drop-down list.

The Add + button cancels the current automation creation process and enables configuration of a notification block.

","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/automations/#templating-notifications-with-jinja","title":"Templating notifications with Jinja","text":"

The notification body can include templated variables using Jinja syntax. Templated variables enable you to include details relevant to the automation trigger, such as a flow or queue name.

Jinja templated variable syntax wraps the variable name in double curly brackets, like {{ variable }}.

You can access properties of the underlying flow run objects including:

In addition to its native properties, each object includes an id along with created and updated timestamps.

The flow_run|ui_url token returns the URL for viewing the flow run in Prefect Cloud.

Here\u2019s an example for something that would be relevant to a flow run state-based notification:

Flow run {{ flow_run.name }} entered state {{ flow_run.state.name }}. \n\n    Timestamp: {{ flow_run.state.timestamp }}\n    Flow ID: {{ flow_run.flow_id }}\n    Flow Run ID: {{ flow_run.id }}\n    State message: {{ flow_run.state.message }}\n

The resulting Slack webhook notification would look something like this:

You could include flow and deployment properties.

Flow run {{ flow_run.name }} for flow {{ flow.name }}\nentered state {{ flow_run.state.name }}\nwith message {{ flow_run.state.message }}\n\nFlow tags: {{ flow_run.tags }}\nDeployment name: {{ deployment.name }}\nDeployment version: {{ deployment.version }}\nDeployment parameters: {{ deployment.parameters }}\n

An automation that reports on work queue health might include notifications using work_queue properties.

Work queue health alert!\n\nName: {{ work_queue.name }}\nLast polled: {{ work_queue.last_polled }}\n

In addition to those shortcuts for flows, deployments, and work pools, you have access to the automation and the event that triggered the automation. See the Automations API for additional details.

Automation: {{ automation.name }}\nDescription: {{ automation.description }}\n\nEvent: {{ event.id }}\nResource:\n{% for label, value in event.resource %}\n{{ label }}: {{ value }}\n{% endfor %}\nRelated Resources:\n{% for related in event.related %}\n    Role: {{ related.role }}\n    {% for label, value in related %}\n    {{ label }}: {{ value }}\n    {% endfor %}\n{% endfor %}\n

Note that this example also illustrates the ability to use Jinja features such as iterator and for loop control structures when templating notifications.

","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/cloud-quickstart/","title":"Getting Started with Prefect Cloud","text":"

Get started with Prefect Cloud in just a few steps:

  1. Sign in or register a Prefect Cloud account.
  2. Create a workspace for your account.
  3. Install Prefect in your local environment.
  4. Log into Prefect Cloud from a local terminal session.
  5. Run a flow locally and view flow run execution in Prefect Cloud.

Prefer to follow this tutorial in a video? We've got exactly what you need. Happy engineering!

","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#sign-in-or-register","title":"Sign in or register","text":"

To sign in with an existing account or register an account, go to https://app.prefect.cloud/.

You can create an account with:

","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#create-a-workspace","title":"Create a workspace","text":"

A workspace is an isolated environment within Prefect Cloud for your flows and deployments. You can use workspaces to organize or compartmentalize your workflows.

When you register a new account, you'll be prompted to create a workspace.

Select Create Workspace. You'll be prompted to provide a name and description for your workspace.

Note that the Owner setting applies only to users who are members of Prefect Cloud organizations and have permission to create workspaces within the organization.

Select Save to create the workspace. If you change your mind, select Edit from the options menu to modify the workspace details or to delete it.

The Workspace Settings page for your new workspace displays the commands that enable you to install Prefect and log into Prefect Cloud in a local execution environment.

","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#install-prefect","title":"Install Prefect","text":"

Configure a local execution environment to use Prefect Cloud as the API server for flow runs. In other words, \"log in\" to Prefect Cloud from a local environment where you want to run a flow.

Open a new terminal session.

Install Prefect in the environment in which you want to execute flow runs.

$ pip install -U prefect\n

Installation requirements

Prefect requires Python 3.8 or later. If you have any questions about Prefect installation requirements or dependencies in your preferred development environment, check out the Installation documentation.

","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#log-into-prefect-cloud-from-a-terminal","title":"Log into Prefect Cloud from a terminal","text":"

Use the prefect cloud login Prefect CLI command to log into Prefect Cloud from your environment.

$ prefect cloud login\n

The prefect cloud login command, used on its own, provides an interactive login experience. Using this command, you may log in with either an API key or through a browser.

$ prefect cloud login\n? How would you like to authenticate? [Use arrows to move; enter to select]\n> Log in with a web browser\n  Paste an API key\nOpening browser...\nWaiting for response...\n? Which workspace would you like to use? [Use arrows to move; enter to select]\n> prefect/terry-prefect-workspace\n  g-gadflow/g-workspace\nAuthenticated with Prefect Cloud! Using workspace 'prefect/terry-prefect-workspace'.\n

If you choose to log in via the browser, Prefect opens a new tab in your default browser and enables you to log in and authenticate the session. Use the same login information as you originally used to create your Prefect Cloud account.

","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#run-a-flow-with-prefect-cloud","title":"Run a flow with Prefect Cloud","text":"

Okay, you're all set to run a flow locally, orchestrated with Prefect Cloud. Notice that everything works just like running local flows. However, because you logged into Prefect Cloud, your local flow runs show up in Prefect Cloud!

In your local environment, where you configured the previous steps, create a file named quickstart_flow.py with the following contents:

from prefect import flow, get_run_logger\n\n@flow(name=\"Prefect Cloud Quickstart\")\ndef quickstart_flow():\n    logger = get_run_logger()\n    logger.warning(\"Local quickstart flow is running!\")\n\nif __name__ == \"__main__\":\n    quickstart_flow()\n

Now run quickstart_flow.py. You'll see the following log messages in the terminal, indicating that the flow is running correctly.

$ python quickstart_flow.py\n17:52:38.741 | INFO    | prefect.engine - Created flow run 'aquamarine-deer' for flow 'Prefect Cloud Quickstart'\n17:52:39.487 | WARNING | Flow run 'aquamarine-deer' - Local quickstart flow is running!\n17:52:39.592 | INFO    | Flow run 'aquamarine-deer' - Finished in state Completed()\n

Go to the Flow Runs pages in your workspace in Prefect Cloud. You'll see the flow run results right there in Prefect Cloud!

Prefect Cloud automatically tracks any flow runs in a local execution environment logged into Prefect Cloud.

Select the name of the flow run to see details about this run. In this example, the randomly generated flow run name is aquamarine-deer. Your flow run name is likely to be different.

Congratulations! You successfully ran a local flow and, because you're logged into Prefect Cloud, the local flow run results were captured by Prefect Cloud.

","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#next-steps","title":"Next steps","text":"

If you're new to Prefect, learn more about writing and running flows in the Prefect Flows First Steps tutorial. If you're already familiar with Prefect flows and want to try creating deployments and kicking off flow runs with Prefect Cloud, check out the Deployments tutorial.

Want to learn more about the features available in Prefect Cloud? Start with the Prefect Cloud Overview.

If you ran into any issues getting your first flow run with Prefect Cloud working, please join our community to ask questions or provide feedback:

","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/connecting/","title":"Connecting & Troubleshooting Prefect Cloud","text":"

To create flow runs in a local or remote execution environment and use either Prefect Cloud or a Prefect server as the backend API server, you need to

","tags":["Prefect Cloud","API keys","configuration","agents","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#log-into-prefect-cloud-from-a-terminal","title":"Log into Prefect Cloud from a terminal","text":"

Configure a local execution environment to use Prefect Cloud as the API server for flow runs. In other words, \"log in\" to Prefect Cloud from a local environment where you want to run a flow.

  1. Open a new terminal session.
  2. Install Prefect in the environment in which you want to execute flow runs.
$ pip install -U prefect\n
  3. Use the prefect cloud login Prefect CLI command to log into Prefect Cloud from your environment.
$ prefect cloud login\n

The prefect cloud login command, used on its own, provides an interactive login experience. Using this command, you can log in with either an API key or through a browser.

$ prefect cloud login\n? How would you like to authenticate? [Use arrows to move; enter to select]\n> Log in with a web browser\n    Paste an API key\nPaste your authentication key:\n? Which workspace would you like to use? [Use arrows to move; enter to select]\n> prefect/terry-prefect-workspace\n    g-gadflow/g-workspace\nAuthenticated with Prefect Cloud! Using workspace 'prefect/terry-prefect-workspace'.\n

You can also log in by providing a Prefect Cloud API key that you create.

","tags":["Prefect Cloud","API keys","configuration","agents","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#change-workspaces","title":"Change workspaces","text":"

If you need to change which workspace you're syncing with, use the prefect cloud workspace set Prefect CLI command while logged in, passing the account handle and workspace name.

$ prefect cloud workspace set --workspace \"prefect/my-workspace\"\n

If no workspace is provided, you will be prompted to select one.

Workspace Settings also shows you the prefect cloud workspace set Prefect CLI command you can use to sync a local execution environment with a given workspace.

You may also use the prefect cloud login command with the --workspace or -w option to set the current workspace.

$ prefect cloud login --workspace \"prefect/my-workspace\"\n
","tags":["Prefect Cloud","API keys","configuration","agents","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#manually-configure-prefect-api-settings","title":"Manually configure Prefect API settings","text":"

You can also manually configure the PREFECT_API_URL setting to specify the Prefect Cloud API.

For Prefect Cloud, you can configure the PREFECT_API_URL and PREFECT_API_KEY settings to authenticate with Prefect Cloud by using an account ID, workspace ID, and API key.

$ prefect config set PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n$ prefect config set PREFECT_API_KEY=\"[API-KEY]\"\n

When you're in a Prefect Cloud workspace, you can copy the PREFECT_API_URL value directly from the page URL.

In this example, we configured PREFECT_API_URL and PREFECT_API_KEY in the default profile. You can use prefect profile CLI commands to create settings profiles for different configurations. For example, you could have a \"cloud\" profile configured to use the Prefect Cloud API URL and API key, and another \"local\" profile for local development using a local Prefect API server started with prefect server start. See Settings for details.

Environment variables

You can also set PREFECT_API_URL and PREFECT_API_KEY as you would any other environment variable. See Overriding defaults with environment variables for more information.

See the Flow orchestration with Prefect tutorial for examples.

","tags":["Prefect Cloud","API keys","configuration","agents","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#install-requirements-in-execution-environments","title":"Install requirements in execution environments","text":"

In local and remote execution environments \u2014 such as VMs and containers \u2014 you must make sure any flow requirements or dependencies have been installed before creating a flow run.

","tags":["Prefect Cloud","API keys","configuration","agents","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#troubleshooting-prefect-cloud","title":"Troubleshooting Prefect Cloud","text":"

This section provides tips that may be helpful if you run into problems using Prefect Cloud.

","tags":["Prefect Cloud","API keys","configuration","agents","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#prefect-cloud-and-proxies","title":"Prefect Cloud and proxies","text":"

Proxies intermediate network requests between a server and a client.

To communicate with Prefect Cloud, the Prefect client library makes HTTPS requests. These requests are made using the httpx Python library. httpx respects accepted proxy environment variables, so the Prefect client is able to communicate through proxies.

To enable communication via proxies, simply set the HTTPS_PROXY and SSL_CERT_FILE environment variables as appropriate in your execution environment and things should \u201cjust work.\u201d
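
For example, a minimal sketch of setting these variables from Python before any Prefect Cloud requests are made; the proxy address and certificate path are hypothetical placeholders, not values required by Prefect.

import os\n\n# Hypothetical proxy address and CA bundle path - replace with your own values\nos.environ[\"HTTPS_PROXY\"] = \"http://proxy.example.com:3128\"\nos.environ[\"SSL_CERT_FILE\"] = \"/etc/ssl/certs/corporate-ca.pem\"\n\n# httpx, which the Prefect client uses for API calls, reads these variables when it\n# opens connections, so flow runs and CLI commands started in this environment\n# will route through the proxy.\n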

See the Using Prefect Cloud with proxies topic in Prefect Discourse for examples of proxy configuration.

","tags":["Prefect Cloud","API keys","configuration","agents","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#prefect-cloud-access-via-api","title":"Prefect Cloud access via API","text":"

If the Prefect Cloud API key, environment variable settings, or account login for your execution environment are not configured correctly, you may experience errors or unexpected flow run results when using Prefect CLI commands, running flows, or observing flow run results in Prefect Cloud.

Use the prefect config view CLI command to make sure your execution environment is correctly configured to access Prefect Cloud.

$ prefect config view\nPREFECT_PROFILE='cloud'\nPREFECT_API_KEY='pnu_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' (from profile)\nPREFECT_API_URL='https://api-beta.prefect.io/api/accounts/...' (from profile)\n

Make sure PREFECT_API_URL is configured to use https://api-beta.prefect.io/api/....

Make sure PREFECT_API_KEY is configured to use a valid API key.

You can use the prefect cloud workspace ls CLI command to view or set the active workspace.

$ prefect cloud workspace ls\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503   Available Workspaces: \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502   g-gadflow/g-workspace \u2502\n\u2502    * prefect/workinonit \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n    * active workspace\n

You can also check that the account and workspace IDs specified in the URL for PREFECT_API_URL match those shown in the URL bar for your Prefect Cloud workspace.

","tags":["Prefect Cloud","API keys","configuration","agents","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#prefect-cloud-login-errors","title":"Prefect Cloud login errors","text":"

If you're having difficulty logging in to Prefect Cloud, the following troubleshooting steps may resolve the issue, or will provide more information when sharing your case to the support channel.

Other tips to help with login difficulties:

In some cases, logging in to Prefect Cloud results in a \"404 Page Not Found\" page or fails with the error: \"Failed to load module script: Expected a JavaScript module script but the server responded with a MIME type of \u201ctext/html\u201d. Strict MIME type checking is enforced for module scripts per HTML spec.\"

This error may be caused by a bad service worker.

To resolve the problem, we recommend unregistering service workers.

In your browser, start by opening the developer console.

Once the developer console is open:

  1. Go to the Application tab in the developer console.
  2. Select Storage.
  3. Make sure Unregister service workers is selected.
  4. Select Clear site data, then hard refresh the page with CMD+Shift+R (CTRL+Shift+R on Windows).

See the Login to Prefect Cloud fails... topic in Prefect Discourse for a video demonstrating these steps.

None of this worked?

Email us at help@prefect.io and provide answers to the questions above in your email to make it faster to troubleshoot and unblock you. Make sure you add the email address with which you were trying to log in, your Prefect Cloud account name, and, if applicable, the organization to which it belongs.

","tags":["Prefect Cloud","API keys","configuration","agents","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/events/","title":"Events","text":"

An event is a notification of a change. Together, events form a feed of activity recording what's happening across your stack.

Events power several features in Prefect Cloud, including flow run logs, audit logs, and automations.

Events can represent API calls, state transitions, or changes in your execution environment or infrastructure.

Events enable observability into your data stack via the event feed, and the configuration of Prefect's reactivity via automations.

","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-specification","title":"Event Specification","text":"

Events adhere to a structured specification.

Name Type Required? Description occurred String yes When the event happened event String yes The name of the event that happened resource Object yes The primary Resource this event concerns related Array no A list of additional Resources involved in this event payload Object no An open-ended set of data describing what happened id String yes The client-provided identifier of this event follows String no The ID of an event that is known to have occurred prior to this one.","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-grammar","title":"Event Grammar","text":"

Generally, events have a consistent and informative grammar - an event describes a resource and an action that the resource took or that was taken on that resource. For example, events emitted by Prefect objects take the form of:

prefect.block.write-method.called\nprefect-cloud.automation.action.executed\nprefect-cloud.user.logged-in\n
","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-sources","title":"Event Sources","text":"

Events are automatically emitted by all Prefect objects, including flows, tasks, deployments, work queues, and logs. Prefect-emitted events will contain the prefect or prefect-cloud resource prefix. Events can also be sent to the Prefect events API via an authenticated HTTP request.

The Prefect SDK provides a method that emits events, for use in arbitrary python code that may not be a task or flow. Running the following code will emit events to Prefect Cloud, which will validate and ingest the event data.

from prefect.events import emit_event\n\ndef some_function(name: str=\"kiki\") -> None:\n    print(f\"hi {name}!\")\n    emit_event(event=f\"{name}.sent.event!\", resource={\"prefect.resource.id\": f\"coder.{name}\"})\n\nsome_function()\n

Prefect Cloud offers programmable webhooks to receive HTTP requests from other systems and translate them into events within your workspace. Webhooks can emit pre-defined static events, dynamic events that use portions of the incoming HTTP request, or events derived from CloudEvents.
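
For example, an illustrative sketch of another system posting to a webhook with httpx; the URL and request body below are made-up placeholders for the unique endpoint and payload of your own webhook.

import httpx\n\n# Placeholder URL - copy the real endpoint from your webhook in Prefect Cloud\nwebhook_url = \"https://api.prefect.cloud/hooks/EXAMPLE-SLUG\"\n\n# The request body is available to dynamic webhook templates when building the event\nhttpx.post(webhook_url, json={\"model\": \"recommendations\", \"friendly_name\": \"Nightly refresh\"})\n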

Events emitted from any source will appear in the event feed, where you can visualize activity in context and configure automations to react to the presence or absence of it in the future.

","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#resources","title":"Resources","text":"

Every event has a primary resource, which describes the object that emitted an event. Resources are used as quasi-stable identifiers for sources of events, and are constructed as dot-delimited strings, for example:

prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3.action.0\nacme.user.kiki.elt_script_1\nprefect.flow-run.e3755d32-cec5-42ca-9bcd-af236e308ba6\n

Resources can optionally have additional arbitrary labels which can be used in event aggregation queries, such as:

\"resource\": {\n\"prefect.resource.id\": \"prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3\",\n\"prefect-cloud.action.type\": \"call-webhook\"\n}\n

Events can optionally contain related resources, used to associate the event with other resources, such as in the case that the primary resource acted on or with another resource:

\"resource\": {\n\"prefect.resource.id\": \"prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3.action.0\",\n\"prefect-cloud.action.type\": \"call-webhook\"\n},\n\"related\": [\n{\n\"prefect.resource.id\": \"prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3\",\n\"prefect.resource.role\": \"automation\",\n\"prefect-cloud.name\": \"webhook_body_demo\",\n\"prefect-cloud.posture\": \"Reactive\"\n}\n]\n
","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#events-in-the-cloud-ui","title":"Events in the Cloud UI","text":"

Prefect Cloud provides an interactive dashboard to analyze and take action on events that occurred in your workspace on the event feed page.

","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-feed","title":"Event feed","text":"

The event feed is the primary place to view, search, and filter events to understand activity across your stack. Each entry displays data on the resource, related resource, and event that took place.

","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-details","title":"Event details","text":"

You can view more information about an event by clicking into it, where you can view the full details of an event's resource, related resources, and its payload.

","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#reacting-to-events","title":"Reacting to events","text":"

From an event page, you can easily configure an automation to trigger on the observation of matching events or a lack of matching events by clicking the automate button in the overflow menu:

The default trigger configuration will fire every time it sees an event with a matching resource identifier. Advanced configuration is possible via custom triggers.

","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/organizations/","title":"Organizations","text":"

For larger teams or companies with more complex needs around user and access management, organizations in Prefect Cloud provide several features that enable you to collaborate securely at scale, including:

See the Prefect Cloud plans to learn more about options for supporting more users, service accounts, and workspaces.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/organizations/#organizations-overview","title":"Organizations overview","text":"

An organization is a type of account available on Prefect Cloud that enables more extensive and granular control over workspace collaboration. Organizations are only available on Prefect Cloud.

Within an organization account you can:

For example, you might create a workspace for a specific team. Within that workspace, give a developer member full Collaborator role access, and invite a data scientist with Read-only Collaborator permissions to monitor the status of scheduled and completed flow runs.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/organizations/#navigating-organizations","title":"Navigating organizations","text":"

You can see the organizations you're a member of, or create a new organization, by selecting the Organizations icon in the left navigation bar.

When you select an organization, the Profile page provides an overview of the organization.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/organizations/#organization-workspaces","title":"Organization workspaces","text":"

Workspaces shows you a list of workspaces you can access within the organization. If you have been given the organization Admin role, you can create and manage workspaces here.

You can also select the Prefect icon to see all workspaces you have been invited to access, across personal accounts and organizations.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/organizations/#organization-members","title":"Organization members","text":"

Members shows you a list of users who are members of the organization. If you have been given the organization Admin role, you can invite new members and set organization roles for users here.

You can control granular access to workspaces by setting default access for all organization members, inviting specific members to collaborate in an organization workspace, or adding service account permissions.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/organizations/#inviting-organization-members","title":"Inviting organization members","text":"

To invite new members to an organization in Prefect Cloud, select the + icon. Provide an email address and organization role. You may add multiple email addresses.

The user will receive an invite email with a confirmation link at the address provided. If the user does not already have a Prefect Cloud account, they must create an account before accessing the organization.

When the user accepts the invite, they become a member of your organization and are assigned the role from the invite. The member's role can be changed (or access removed entirely) by an organization Admin.

The maximum number of organization members varies. See the Prefect Cloud plans to learn more about options for supporting more users, service accounts, and workspaces.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/organizations/#service-accounts","title":"Service accounts","text":"

Service accounts enable you to create a Prefect Cloud API key that is not associated with a user account. Service accounts are typically used to configure API access for running agents or executing flow runs on remote infrastructure.

Select Service Accounts to view, create, or edit service accounts for your organization.

Service accounts are created at the organization level, but may be shared to individual workspaces within the organization. See workspace sharing for more information.

Service account credentials

When you create a service account, Prefect Cloud creates a new API key for the account and provides the API configuration command for the execution environment. Save these to a safe location for future use. If the access credentials are lost or compromised, you should regenerate the credentials from the service account page.

Service account roles

Service accounts are created at the organization level, and may then become members of workspaces within the organization.

A service account may only be a Member of an organization. It can never be an organization Admin. You may apply any valid workspace-level role to a service account.

See the service accounts documentation for more information.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/organizations/#workspace-sharing","title":"Workspace sharing","text":"

Workspace sharing enables you to give organization members and service accounts access to workspaces within an organization. Each workspace within an organization may have its own members and service accounts with roles and permissions specific to that workspace. Organization Admins have full access to all workspaces in an organization.

Within a workspace, select Workspace Sharing, then select the + icon to add new members or service accounts to the workspace. Only organization Admins and workspace Owners may add members or service accounts to a workspace.

Members and service accounts must already be configured for the organization. An Admin or Owner may configure a different role for the user or service account as needed.

Default workspace role

You may make a workspace available to any user in an organization by setting a default role for \"Anyone at...\". Users in the organization may access the workspace with the specified default role permissions. Default workspace roles do not apply to service accounts.

The role given to users specifically added to the workspace is the union of workspace scopes given by the default workspace role and that user's role in the workspace.

You may set this to \"No Access\" if you do not want organization members to access the workspace unless they have been specifically added to it (organization Admins excepted).

See the Workspace sharing documentation for more information.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/organizations/#organization-and-workspace-roles","title":"Organization and workspace roles","text":"

Prefect Cloud enables you to configure both organization and workspace roles for users.

Select Roles within an organization to see the configured workspace roles for your organization.

Prefect Cloud provides default workspace roles that cover most use cases. You may also create custom workspace roles to suit your specific organization needs.

See the Roles (RBAC) documentation for more information on default and custom role permissions.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/organizations/#single-sign-on-sso","title":"Single Sign-on (SSO)","text":"

Prefect Cloud's Organization and Enterprise plans offer single sign-on (SSO) authentication integration with your team\u2019s identity provider. SSO integration can be set up with identity providers that support OIDC and SAML.

See the Single Sign-on (SSO) documentation for more information on configuring SSO.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/rate-limits/","title":"API Rate Limits & Retention Periods","text":"

API rate limits restrict the number of requests that a single client can make in a given time period. They ensure Prefect Cloud's stability, so that when you make an API call, you always get a response.

Prefect Cloud rate limits are subject to change

The following rate limits are in effect currently, but are subject to change. Contact Prefect support at help@prefect.io if you have questions about current rate limits.

Prefect Cloud enforces the following rate limits:

","tags":["API","Prefect Cloud","rate limits"],"boost":2},{"location":"cloud/rate-limits/#flow-flow-run-and-task-run-rate-limits","title":"Flow, flow run, and task run rate limits","text":"

Prefect Cloud limits the flow_runs, task_runs, and flows endpoints and their subroutes at the following levels:

The Prefect Cloud API will return a 429 response with an appropriate Retry-After header if these limits are triggered.
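
If your own scripts poll these endpoints heavily, it can help to honor the Retry-After header rather than retrying immediately. The sketch below is illustrative only: the workspace URL and API key are placeholders (mirroring the examples used elsewhere in these docs), and path is whatever endpoint you are calling.

import time\nimport requests\n\nPREFECT_API_URL = \"https://api.prefect.cloud/api/accounts/abc/workspaces/xyz\"  # placeholder\nPREFECT_API_KEY = \"pnu_ghijk\"  # placeholder\nHEADERS = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\n\ndef get_with_backoff(path: str, max_attempts: int = 5):\n    # Retry on 429 responses, waiting as long as the Retry-After header suggests\n    for attempt in range(max_attempts):\n        response = requests.get(f\"{PREFECT_API_URL}{path}\", headers=HEADERS)\n        if response.status_code != 429:\n            response.raise_for_status()\n            return response.json()\n        time.sleep(int(response.headers.get(\"Retry-After\", 2 ** attempt)))\n    raise RuntimeError(\"Still rate limited after retries\")\n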

","tags":["API","Prefect Cloud","rate limits"],"boost":2},{"location":"cloud/rate-limits/#log-service-rate-limits","title":"Log service rate limits","text":"

Prefect Cloud limits the number of logs accepted:

The Prefect Cloud API will return a 429 response if these limits are triggered.

","tags":["API","Prefect Cloud","rate limits"],"boost":2},{"location":"cloud/rate-limits/#flow-run-retention","title":"Flow run retention","text":"

Prefect Cloud feature

The Flow Run Retention Policy setting is only applicable in Prefect Cloud.

Flow runs in Prefect Cloud are retained according to the Flow Run Retention Policy setting in your personal account or organization profile. The policy setting applies to all workspaces owned by the personal account or organization respectively.

The flow run retention policy represents the number of days each flow run is available in the Prefect Cloud UI, and via the Prefect CLI and API after it ends. Once a flow run reaches a terminal state (detailed in the chart here), it will be retained until the end of the flow run retention period.

Flow Run Retention Policy keys on terminal state

Note that, because the Flow Run Retention Policy keys on terminal state, if two flow runs start at the same time but reach a terminal state at different times, they will be removed at different times, according to when each reached its terminal state.

This retention policy applies to all details about a flow run, including its task runs. Subflow runs follow the retention policy independently from their parent flow runs, and are removed based on the time each subflow run reaches a terminal state.

If you or your organization have needs that require a tailored retention period, contact the Prefect Sales team.

","tags":["API","Prefect Cloud","rate limits"],"boost":2},{"location":"cloud/webhooks/","title":"Webhooks","text":"

Use webhooks in your Prefect Cloud workspace to receive, observe, and react to events from other systems in your ecosystem. Each webhook exposes a unique URL endpoint to receive events from other systems and transforms them into Prefect events for use in automations.

Webhooks are defined by two essential components: a unique URL and a template which translates incoming web requests to a Prefect event.

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#configuring-webhooks","title":"Configuring webhooks","text":"","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#via-the-prefect-cloud-api","title":"Via the Prefect Cloud API","text":"

Webhooks are managed via the Webhooks API endpoints. This is a Prefect Cloud-only feature. You authenticate API calls using the standard authentication methods you use with Prefect Cloud.
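
For example, listing webhooks might look like the following sketch, which reuses the standard Bearer-token pattern shown elsewhere in these docs. The /webhooks/filter route, workspace URL, and API key below are assumptions for illustration; consult the Webhooks API reference for the exact endpoints.

import requests\n\nPREFECT_API_URL = \"https://api.prefect.cloud/api/accounts/abc/workspaces/xyz\"  # placeholder\nPREFECT_API_KEY = \"pnu_ghijk\"  # placeholder\nheaders = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\n\n# NOTE: the \"/webhooks/filter\" route is assumed here for illustration;\n# consult the Webhooks API reference for the exact paths\nresponse = requests.post(f\"{PREFECT_API_URL}/webhooks/filter\", headers=headers, json={})\nresponse.raise_for_status()\nfor webhook in response.json():\n    print(webhook)\n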

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#via-prefect-cloud","title":"Via Prefect Cloud","text":"

Webhooks can be created and managed from the Prefect Cloud UI.

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#via-the-prefect-cli","title":"Via the Prefect CLI","text":"

Webhooks can be managed and interacted with via the prefect cloud webhook command group.

prefect cloud webhook --help\n

You can create your first webhook by invoking create:

prefect cloud webhook create your-webhook-name \\\n--description \"Receives webhooks from your system\" \\\n--template '{ \"event\": \"your.event.name\", \"resource\": { \"prefect.resource.id\": \"your.resource.id\" } }'\n

Note the template string, which is discussed in greater detail below.

You can retrieve details for a specific webhook by ID using get, or optionally query all webhooks in your workspace via ls:

# get webhook by ID\nprefect cloud webhook get <webhook-id>\n\n# list all configured webhooks in your workspace\nprefect cloud webhook ls\n

If you ever need to disable an existing webhook without deleting it, use toggle:

prefect cloud webhook toggle <webhook-id>\nWebhook is now disabled\n\nprefect cloud webhook toggle <webhook-id>\nWebhook is now enabled\n

If you are concerned that your webhook endpoint may have been compromised, use rotate to generate a new, random endpoint:

prefect cloud webhook rotate <webhook-url-slug>\n
","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#webhook-endpoints","title":"Webhook endpoints","text":"

The webhook endpoints have randomly generated opaque URLs that do not divulge any information about your Prefect Cloud workspace. They are rooted at https://api.prefect.cloud/hooks/. For example: https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ. Prefect Cloud assigns this URL when you create a webhook; it cannot be set via the API. You may rotate your webhook URL at any time without losing the associated configuration.

All webhooks may accept requests via the most common HTTP methods:

Prefect Cloud webhooks are deliberately quiet to the outside world: they return only a 204 No Content response when successful, and a 400 Bad Request error when there is any problem interpreting the request. For more visibility when your webhooks fail, see the Troubleshooting section below.

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#webhook-templates","title":"Webhook templates","text":"

The purpose of a webhook is to accept an HTTP request from another system and produce a Prefect event from it. You may find that you often have little influence or control over the format of those requests, so Prefect's webhook system gives you full control over how you turn those notifications from other systems into meaningful events in your Prefect Cloud workspace. The template you define for each webhook will determine how individual components of the incoming HTTP request become the event name and resource labels of the resulting Prefect event.

As with the templates available in Prefect Cloud Automations for defining notifications and other parameters, webhook templates are written in Jinja2. All of the built-in Jinja2 blocks and filters are available, as well as the filters from the jinja2-humanize-extensions package.

Your goal when defining your event template is to produce a valid JSON object that defines (at minimum) the event name and the resource[\"prefect.resource.id\"], which are required of all events. The simplest template is one in which these are statically defined.

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#static-webhook-events","title":"Static webhook events","text":"

Let's see a static webhook template example. Say you want to configure a webhook that will notify Prefect when your recommendations machine learning model has been updated, so you can then send a Slack notification to your team and run a few subsequent deployments. Those models are produced on a daily schedule by another team that is using cron for scheduling. They aren't able to use Prefect for their flows (yet!), but they are happy to add a curl to the end of their daily script to notify you. Because this webhook will only be used for a single event from a single resource, your template can be entirely static:

{\n\"event\": \"model.refreshed\",\n\"resource\": {\n\"prefect.resource.id\": \"product.models.recommendations\",\n\"prefect.resource.name\": \"Recommendations [Products]\",\n\"producing-team\": \"Data Science\"\n}\n}\n

Make sure to produce valid JSON

The output of your template, when rendered, should be a valid string that can be parsed, for example, with json.loads.
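
One way to catch template problems early is to render the template locally with sample inputs and confirm the output parses as JSON. The sketch below uses plain jinja2 and the static template above; note that Prefect Cloud's rendering environment also includes the jinja2-humanize-extensions filters, so this local check is only an approximation.

import json\nfrom jinja2 import Template\n\ntemplate = \"\"\"\n{\n    \"event\": \"model.refreshed\",\n    \"resource\": {\n        \"prefect.resource.id\": \"product.models.recommendations\",\n        \"prefect.resource.name\": \"Recommendations [Products]\",\n        \"producing-team\": \"Data Science\"\n    }\n}\n\"\"\"\n\n# Render the template (static here, but the same check works with body/headers variables)\nrendered = Template(template).render()\nevent = json.loads(rendered)  # raises if the rendered output is not valid JSON\nprint(event[\"event\"], event[\"resource\"][\"prefect.resource.id\"])\n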

A webhook with this template may be invoked via any of the HTTP methods, including a GET request with no body, so the team you are integrating with can include this line at the end of their daily script:

curl https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\n

Each time the script hits the webhook, the webhook will produce a single Prefect event with that name and resource in your workspace.
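
If the producing script happens to be Python rather than shell, the same invocation is just as simple. This is a sketch using the requests library and the example endpoint above.

import requests\n\n# Fire the static webhook; Prefect Cloud responds with 204 No Content on success\nresponse = requests.get(\"https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\")\nresponse.raise_for_status()\n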

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#event-fields-that-prefect-cloud-populates-for-you","title":"Event fields that Prefect Cloud populates for you","text":"

You may notice that you only had to provide the event and resource definition, which is not a completely fleshed out event. Prefect Cloud will set default values for any missing fields, such as occurred and id, so you don't need to set them in your template. Additionally, Prefect Cloud will add the webhook itself as a related resource on all of the events it produces.

If your template does not produce a payload field, the payload will default to a standard set of debugging information, including the HTTP method, headers, and body.

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#dynamic-webhook-events","title":"Dynamic webhook events","text":"

Now let's say that after a few days you and the Data Science team are getting a lot of value from the automations you have set up with the static webhook. You've agreed to upgrade this webhook to handle all of the various models that the team produces. It's time to add some dynamic information to your webhook template.

Your colleagues on the team have adjusted their daily cron scripts to POST a small body that includes the ID and name of the model that was updated:

curl \\\n-d \"model=recommendations\" \\\n-d \"friendly_name=Recommendations%20[Products]\" \\\n-X POST https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\n

This script sends a POST request whose body is a traditional URL-encoded form with two fields describing the model that was updated: model and friendly_name. Here's a webhook template that uses Jinja to receive these values and produce different events for the different models:

{\n    \"event\": \"model.refreshed\",\n    \"resource\": {\n        \"prefect.resource.id\": \"product.models.{{ body.model }}\",\n        \"prefect.resource.name\": \"{{ body.friendly_name }}\",\n        \"producing-team\": \"Data Science\"\n    }\n}\n

All subsequent POST requests will produce events with those variable resource IDs and names. The other statically defined parts, such as the event name or the producing-team label you included earlier, will still be used.
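
For reference, the same form-encoded POST could also be sent from Python, as in this sketch: requests encodes a data dict as application/x-www-form-urlencoded, and the endpoint is the example one above.

import requests\n\nresponse = requests.post(\n    \"https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\",\n    data={\"model\": \"recommendations\", \"friendly_name\": \"Recommendations [Products]\"},\n)\nresponse.raise_for_status()\n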

Use Jinja2's default filter to handle missing values

Jinja2 has a helpful default filter that can compensate for missing values in the request. In this example, you may want to use the model's ID in place of the friendly name when the friendly name is not provided: {{ body.friendly_name|default(body.model) }}.

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#how-http-request-components-are-handled","title":"How HTTP request components are handled","text":"

The Jinja2 template context includes the three parts of the incoming HTTP request:

HTTP headers are available without any alteration as a dict-like object, and you may access them using header names in any capitalization. For example, these template expressions all return the value of the Content-Length header:

{{ headers['Content-Length'] }}\n\n{{ headers['content-length'] }}\n\n{{ headers['CoNtEnt-LeNgTh'] }}\n

The HTTP request body goes through some light preprocessing to make it more useful in templates. If the Content-Type of the request is application/json, the body will be parsed as a JSON object and made available to the webhook templates. If the Content-Type is application/x-www-form-urlencoded (as in our example above), the body is parsed into a flat dict-like object of key-value pairs. Jinja2 supports both index and attribute access to the fields of these objects, so the following two expressions are equivalent:

{{ body['friendly_name'] }}\n\n{{ body.friendly_name }}\n

Only for Python identifiers

Jinja2's syntax only allows attribute-like access if the key is a valid Python identifier, so body.friendly-name will not work. Use body['friendly-name'] in those cases.

You may not have much control over the client invoking your webhook, but you may still want bodies that look like JSON to be parsed as such. Prefect Cloud will attempt to parse any other content type (like text/plain) as JSON first. If the body cannot be transformed into JSON, it will be made available to your templates as a Python str.
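
For example, the following sketch posts a JSON-formatted body with a text/plain content type; based on the behavior described above, Prefect Cloud should still parse it as JSON. The endpoint is the example one used throughout this page.

import json\nimport requests\n\npayload = {\"model\": \"recommendations\", \"friendly_name\": \"Recommendations [Products]\"}\nresponse = requests.post(\n    \"https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\",\n    data=json.dumps(payload),\n    headers={\"Content-Type\": \"text/plain\"},  # still parsed as JSON if the body looks like JSON\n)\nresponse.raise_for_status()\n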

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#accepting-prefect-events-directly","title":"Accepting Prefect events directly","text":"

In cases where you have more control over the client, your webhook can accept Prefect events directly with a simple pass-through template:

{{ body|tojson }}\n

This template accepts the incoming body (assuming it was in JSON format) and just passes it through unmodified. This allows a POST of a partial Prefect event as in this example:

POST /hooks/AERylZ_uewzpDx-8fcweHQ HTTP/1.1\nHost: api.prefect.cloud\nContent-Type: application/json\nContent-Length: 228\n\n{\n    \"event\": \"model.refreshed\",\n    \"resource\": {\n        \"prefect.resource.id\": \"product.models.recommendations\",\n        \"prefect.resource.name\": \"Recommendations [Products]\",\n        \"producing-team\": \"Data Science\"\n    }\n}\n

The resulting event will be filled out with the default values for occurred, id, and other fields as described above.
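
The same request can be issued from Python; in this sketch the json argument sets the application/json content type and serializes the partial event for you.

import requests\n\nevent = {\n    \"event\": \"model.refreshed\",\n    \"resource\": {\n        \"prefect.resource.id\": \"product.models.recommendations\",\n        \"prefect.resource.name\": \"Recommendations [Products]\",\n        \"producing-team\": \"Data Science\"\n    }\n}\nresponse = requests.post(\"https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\", json=event)\nresponse.raise_for_status()\n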

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#accepting-cloudevents","title":"Accepting CloudEvents","text":"

The Cloud Native Computing Foundation has standardized CloudEvents for use by systems to exchange event information in a common format. These events are supported by major cloud providers and a growing number of cloud-native systems. Prefect Cloud can interpret a webhook containing a CloudEvent natively with the following template:

{{ body|from_cloud_event(headers) }}\n

The resulting event will use the CloudEvent's subject as the resource (or the source if no subject is available). The CloudEvent's data attribute will become the Prefect event's payload['data'], and the other CloudEvent metadata will be at payload['cloudevents']. If you would like to handle CloudEvents in a more specific way tailored to your use case, use a dynamic template to interpret the incoming body.
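
As an illustration, a sender might post a structured-mode CloudEvent like the following sketch. The field values are hypothetical, and the endpoint is the example one used throughout this page.

import requests\n\ncloud_event = {\n    \"specversion\": \"1.0\",\n    \"id\": \"8f2a1c2e-0001\",\n    \"source\": \"https://example.com/model-trainer\",\n    \"type\": \"com.example.model.refreshed\",\n    \"subject\": \"product.models.recommendations\",\n    \"data\": {\"friendly_name\": \"Recommendations [Products]\"}\n}\nresponse = requests.post(\"https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\", json=cloud_event)\nresponse.raise_for_status()\n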

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#troubleshooting","title":"Troubleshooting","text":"

The initial configuration of your webhook may require some trial and error as you get the sender and your receiving webhook speaking a compatible language. While you are in this phase, you may find the Event Feed in the UI to be indispensable for seeing the events as they are happening.

When Prefect Cloud encounters an error during receipt of a webhook, it will produce a prefect-cloud.webhook.failed event in your workspace. This event will include critical information about the HTTP method, headers, and body it received, as well as what the template rendered. Keep an eye out for these events when something goes wrong.

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/workspaces/","title":"Workspaces","text":"

A workspace is a discrete environment within Prefect Cloud for your flows and deployments. Workspaces are only available to Prefect Cloud accounts.

Workspaces could be used in any way you like to organize or compartmentalize your workflows. For example, you could use separate workspaces to isolate dev, staging, and prod environments, or to provide separation between different teams.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspaces-overview","title":"Workspaces overview","text":"

When you first log into Prefect Cloud, you will be prompted to create your own initial workspace. After creating your workspace, you'll be able to view flow runs, flows, deployments, and other workspace-specific features in the Prefect Cloud UI.

Select the Workspaces Icon to see all of the workspaces you can access.

Your list of available workspaces may include:

Workspace-specific features

Each workspace keeps track of its own:

Your user permissions within workspaces may vary. Organizations can assign roles and permissions at the workspace level.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#create-a-workspace","title":"Create a workspace","text":"

On the Workspaces page, select the + icon to create a new workspace. You'll be prompted to configure:

Select Create to create the new workspace. The number of available workspaces varies by Prefect Cloud plan. See Pricing if you need additional workspaces or users.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-settings","title":"Workspace settings","text":"

Within a workspace, select Workspace Settings to view or edit workspace details.

The options menu enables you to edit workspace details or delete the workspace.

Deleting a workspace

Deleting a workspace deletes all deployments, flow run history, work pools, and notifications configured in the workspace.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-collaborators","title":"Workspace collaborators","text":"

Personal account users may invite workspace collaborators, users who can join, view, and run flows in your workspaces.

In your workspace, select Workspace Collaborators. If you've previously invited collaborators, you'll see them listed.

To invite a user to become a workspace collaborator, select the + icon. You'll be prompted for the email address of the person you'd like to invite. Add the email address, then select Send to initiate the invitation.

If the user does not already have a Prefect Cloud account, they will be able to create one when accepting the workspace collaborator invitation.

To delete a workspace collaborator, select Remove from the menu on the left side of the user's information on this page.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-sharing","title":"Workspace sharing","text":"

Within a Prefect Cloud organization, Admins and workspace Owners may invite users and service accounts to work in an organization workspace. In addition to giving the user access to the workspace, the Admin or Owner assigns a workspace role to the user. The role specifies the scope of permissions for the user within the workspace.

In an organization workspace, select Workspace Sharing to manage users and service accounts for the workspace. If you've previously invited users and service accounts, you'll see them listed.

To invite a user to become a workspace collaborator, select the Members + icon. You can select from a list of existing organization members.

Select a Workspace Role for the user. This will be the initial role for the user within the workspace. A workspace Owner can change this role at any time.

Select Send to initiate the invitation.

To add a service account to a workspace, select the Service Accounts + icon. You can select from a list of existing service accounts configured for the organization. Select a Workspace Role for the service account. This will be the initial role for the service account within the workspace. A workspace Owner can change this role at any time. Select Share to finalize adding the service account.

To delete a workspace collaborator or service account, select Remove from the menu on the right side of the user or service account information on this page.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-transfer","title":"Workspace transfer","text":"

Workspace transfer enables you to move an existing workspace from one account to another. For example, you may transfer a workspace from a personal account to an organization.

Workspace transfer retains existing workspace configuration and flow run history, including blocks, deployments, notifications, work pools, and logs.

Workspace transfer permissions

Workspace transfer must be initiated or approved by a user with admin privileges for the workspace to be transferred.

For example, if you are transferring a personal workspace to an organization, the owner of the personal account is the default admin for that account and must initiate or approve the transfer.

To initiate a workspace transfer between personal accounts, contact support@prefect.io.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#transfer-a-workspace","title":"Transfer a workspace","text":"

To transfer a workspace, select Workspace Settings within the workspace. Then, from the options menu, select Transfer to initiate the workspace transfer process.

The Transfer Workspace page shows the workspace to be transferred on the left. Select the target account or organization for the workspace on the right. You may also change the handle of the workspace during the transfer process.

Select Transfer to transfer the workspace.

Workspace transfer impact on accounts

Workspace transfer may impact resource usage and costs for the source and target accounts or organizations.

When you transfer a workspace, users, API keys, and service accounts may lose access to the workspace. Audit logs will no longer track activity on the workspace. Flow runs ending outside of the destination account\u2019s flow run retention period will be removed. You may also need to update Prefect CLI profiles and execution environment settings to access the workspace's new location.

You may also incur new charges in the target account to accommodate the transferred workspace.

The Transfer Workspace page outlines the impacts of transferring the selected workspace to the selected target. Please review these notes carefully before selecting Transfer to transfer the workspace.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/","title":"Users","text":"

Prefect Cloud users are individual accounts created by signing up at app.prefect.cloud. Users can also be added as Members of other Organizations and Workspaces.

","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#user-settings","title":"User Settings","text":"

Users can access their account settings from the profile menu, including:

","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#roles","title":"Roles","text":"

Users can be granted roles with their respective permission sets at the Account and Workspace levels.

","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/api-keys/","title":"Manage Prefect Cloud API Keys","text":"

API keys enable you to authenticate a local environment to work with Prefect Cloud. See Configure execution environment for details on how API keys are configured in your execution environment.

See Configure Local Environments for details on setting up access to Prefect Cloud from a local development environment or remote execution environment.

","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/api-keys/#create-an-api-key","title":"Create an API key","text":"

To create an API key, select the account icon at the bottom-left corner of the UI and select your account name. This displays your account profile.

Select the API Keys tab. This displays a list of previously generated keys and lets you create new API keys or delete keys.

Select the + button to create a new API key. You're prompted to provide a name for the key and, optionally, an expiration date. Select Create API Key to generate the key.

Note that API keys cannot be revealed again in the UI after you generate them, so copy the key to a secure location.

","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/api-keys/#log-into-prefect-cloud-with-an-api-key","title":"Log into Prefect Cloud with an API Key","text":"
prefect cloud login -k '<my-api-key>'\n
","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/api-keys/#service-account-api-keys","title":"Service account API keys","text":"

Service accounts are a feature of Prefect Cloud organizations that enable you to create a Prefect Cloud API key that is not associated with a user account.

Service accounts are typically used to configure API access for running agents or executing flow runs on remote infrastructure. Events and logs for flow runs in those environments are then associated with the service account rather than a user, and API access may be managed or revoked by configuring or removing the service account without disrupting user access.

See the service accounts documentation for more information about creating and managing service accounts in a Prefect Cloud organization.

","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/audit-log/","title":"Audit Log","text":"

Prefect Cloud's Organization and Enterprise plans offer enhanced compliance and transparency tools with Audit Log. Audit logs provide a chronological record of activities performed by Prefect Cloud users in your organization, allowing you to monitor detailed Prefect Cloud actions for security and compliance purposes.

Audit logs enable you to identify who took what action, when, and using what resources within your Prefect Cloud organization. In conjunction with appropriate tools and procedures, audit logs can assist in detecting potential security violations and investigating application errors.

Audit logs can be used to identify changes in:

See the Prefect Cloud plans to learn more about options for supporting audit logs.

","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"cloud/users/audit-log/#viewing-audit-logs","title":"Viewing audit logs","text":"

Within your organization, select the Audit Log page to view audit logs.

Organization admins can view audit logs for:

Admins can filter audit logs on multiple dimensions to restrict the results they see by workspace, user, or event type. Available audit log events are displayed in the Events drop-down menu.

Audit logs may also be filtered by date range. Audit log retention period varies by Prefect Cloud plan. See your organization profile page for the current audit log retention period.

","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"cloud/users/roles/","title":"User and Service Account Roles","text":"

Organizations in Prefect Cloud let you give members of your organization access to the appropriate Prefect functionality, both across the organization and within specific workspaces.

Role-based access control (RBAC) functionality in Prefect Cloud enables you to assign users granular permissions to perform certain activities within an organization or a workspace.

To give users access to functionality beyond the scope of Prefect\u2019s built-in workspace roles, you may also create custom roles for users.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#built-in-roles","title":"Built-in roles","text":"

Roles give users abilities at either the organization level or at the individual workspace level.

The following sections outline the abilities of the built-in, Prefect-defined organizational and workspace roles.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#organization-level-roles","title":"Organization-level roles","text":"

The following built-in roles have permissions across an organization in Prefect Cloud.

Role Abilities Admin \u2022 Set/change all organization profile settings allowed to be set/changed by a Prefect user. \u2022 Add and remove organization members, and their organization roles. \u2022 Create and delete service accounts in the organization. \u2022 Create workspaces in the organization. \u2022 Implicit workspace owner access on all workspaces in the organization. Member \u2022 View organization profile settings. \u2022 View workspaces I have access to in the organization. \u2022 View organization members and their roles. \u2022 View service accounts in the organization.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#workspace-level-roles","title":"Workspace-level roles","text":"

The following built-in roles have permissions within a given workspace in Prefect Cloud.

Role Abilities Viewer \u2022 View flow runs within a workspace. \u2022 View deployments within a workspace. \u2022 View all work pools within a workspace. \u2022 View all blocks within a workspace. \u2022 View all automations within a workspace. \u2022 View workspace handle and description. Runner All Viewer abilities, plus: \u2022 Run deployments within a workspace. Developer All Runner abilities, plus: \u2022 Run flows within a workspace. \u2022 Delete flow runs within a workspace. \u2022 Create, edit, and delete deployments within a workspace. \u2022 Create, edit, and delete work pools within a workspace. \u2022 Create, edit, and delete all blocks and their secrets within a workspace. \u2022 Create, edit, and delete automations within a workspace. \u2022 View all workspace settings. Owner All Developer abilities, plus: \u2022 Add and remove organization members, and set their role within a workspace. \u2022 Set the workspace\u2019s default workspace role for all users in the organization. \u2022 Set, view, edit workspace settings. Worker The minimum scopes required for a worker to poll for and submit work.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#custom-workspace-roles","title":"Custom workspace roles","text":"

The built-in roles will serve the needs of most users, but your team may need to configure custom roles, giving users access to specific permissions within a workspace.

Custom roles can inherit permissions from a built-in role. This enables tweaks to meet your organization\u2019s needs, while ensuring users can still benefit from Prefect\u2019s default workspace role permission curation as new functionality becomes available.

Custom workspace roles can also be created independent of Prefect\u2019s built-in roles. This option gives workspace admins full control of user access to workspace functionality. However, for non-inherited custom roles, the workspace admin takes on the responsibility for monitoring and setting permissions for new functionality as it is released.

See Role permissions for details of permissions you may set for custom roles.

After you create a new role, it becomes available in the organization Members page and the Workspace Sharing page for you to apply to users.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#inherited-roles","title":"Inherited roles","text":"

A custom role may be configured as an Inherited Role. Using an inherited role allows you to create a custom role using a set of initial permissions associated with a built-in Prefect role. Additional permissions can be added to the custom role. Permissions included in the inherited role cannot be removed.

Custom roles created using an inherited role will follow Prefect's default workspace role permission curation as new functionality becomes available.

To configure an inherited role when configuring a custom role, select the Inherit permission from a default role check box, then select the role from which the new role should inherit permissions.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#workspace-role-permissions","title":"Workspace role permissions","text":"

The following permissions are available for custom roles.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#automations","title":"Automations","text":"Permission Description View automations User can see configured automations within a workspace. Create, edit, and delete automations User can create, edit, and delete automations within a workspace. Includes permissions of View automations.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#blocks","title":"Blocks","text":"Permission Description View blocks User can see configured blocks within a workspace. View secret block data User can see configured blocks and their secrets within a workspace. Includes permissions of\u00a0View blocks. Create, edit, and delete blocks User can create, edit, and delete blocks within a workspace. Includes permissions of View blocks and View secret block data.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#deployments","title":"Deployments","text":"Permission Description View deployments User can see configured deployments within a workspace. Run deployments User can run deployments within a workspace. This does not give a user permission to execute the flow associated with the deployment. This only gives a user (via their key) the ability to run a deployment \u2014 another user/key must actually execute that flow, such as a service account with an appropriate role. Includes permissions of View deployments. Create and edit deployments User can create and edit deployments within a workspace. Includes permissions of View deployments and Run deployments. Delete deployments User can delete deployments within a workspace. Includes permissions of View deployments, Run deployments, and Create and edit deployments.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#flows","title":"Flows","text":"Permission Description View flows and flow runs User can see flows and flow runs within a workspace. Create, update, and delete saved search filters User can create, update, and delete saved flow run search filters configured within a workspace. Includes permissions of View flows and flow runs. Create, update, and run flows User can create, update, and run flows within a workspace. Includes permissions of View flows and flow runs. Delete flows User can delete flows within a workspace. Includes permissions of View flows and flow runs and Create, update, and run flows.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#notifications","title":"Notifications","text":"Permission Description View notification policies User can see notification policies configured within a workspace. Create and edit notification policies User can create and edit notification policies configured within a workspace. Includes permissions of View notification policies. Delete notification policies User can delete notification policies configured within a workspace. 
Includes permissions of View notification policies and Create and edit notification policies.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#task-run-concurrency","title":"Task run concurrency","text":"Permission Description View concurrency limits User can see configured task run concurrency limits within a workspace. Create, edit, and delete concurrency limits User can create, edit, and delete task run concurrency limits within a workspace. Includes permissions of View concurrency limits.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#work-pools","title":"Work pools","text":"Permission Description View work pools User can see work pools configured within a workspace. Create, edit, and pause work pools User can create, edit, and pause work pools configured within a workspace. Includes permissions of View work pools. Delete work pools User can delete work pools configured within a workspace. Includes permissions of View work pools and Create, edit, and pause work pools.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#workspace-management","title":"Workspace management","text":"Permission Description View information about workspace service accounts User can see service accounts configured within a workspace. View information about workspace users User can see user accounts for users invited to the workspace. View workspace settings User can see settings configured within a workspace. Edit workspace settings User can edit settings for a workspace. Includes permissions of View workspace settings. Delete the workspace User can delete a workspace. Includes permissions of View workspace settings and Edit workspace settings.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/service-accounts/","title":"Service Accounts","text":"

Service accounts enable you to create a Prefect Cloud API key that is not associated with a user account. Service accounts are typically used to configure API access for running agents or executing deployment flow runs on remote infrastructure.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/service-accounts/#service-accounts-overview","title":"Service accounts overview","text":"

Service accounts are non-user organization accounts that have the following:

Using service account credentials, you can configure an execution environment to interact with your Prefect Cloud organization workspaces without a user having to manually log in from that environment. Service accounts may be created, added to workspaces, have their roles changed, or deleted without affecting organization user accounts.

Select Service Accounts to view, create, or edit service accounts for your organization.

Service accounts are created at the organization level, but individual workspaces within the organization may be shared with the account. See workspace sharing for more information.

Service account credentials

When you create a service account, Prefect Cloud creates a new API key for the account and provides the API configuration command for the execution environment. Save these to a safe location for future use. If the access credentials are lost or compromised, you should regenerate the credentials from the service account page.

Service account roles

Service accounts are created at the organization level, and may then become members of workspaces within the organization.

A service account may only be a Member of an organization. It can never be an organization Admin. You may apply any valid workspace-level role to a service account.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/service-accounts/#create-a-service-account","title":"Create a service account","text":"

Within your organization, on the Service Accounts page, select the + icon to create a new service account. You'll be prompted to configure:

Service account roles

A service account may only be a Member of an organization. You may apply any valid workspace-level role to a service account when it is added to a workspace.

Select Create to create the new service account.

Note that API keys cannot be revealed again in the UI after you generate them, so copy the key to a secure location.

You can change the API key and expiration for a service account by rotating the API key. Select Rotate API Key from the menu on the left side of the service account's information on this page.

To delete a service account, select Remove from the menu on the left side of the service account's information.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/sso/","title":"Single Sign-on (SSO)","text":"

Prefect Cloud's Organization and Enterprise plans offer single sign-on (SSO) integration with your team\u2019s identity provider. SSO integration can be set up with any identity provider that supports:

When using SSO, Prefect Cloud won't store passwords for any accounts managed by your identity provider. Members of your Prefect Cloud organization will instead log in to the organization and authenticate using your identity provider.

Once your SSO integration has been set up, non-admins will be required to authenticate through the SSO provider when accessing organization resources.

See the Prefect Cloud plans to learn more about options for supporting more users and workspaces, service accounts, and SSO.

","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/sso/#configuring-sso","title":"Configuring SSO","text":"

Within your organization, select the SSO page to enable SSO for users.

If you haven't enabled SSO for a domain yet, enter the email domains for which you want to configure SSO in Prefect Cloud. Select Save to accept the domains.

Under Enabled Domains, select the domains from the Domains list, then select Generate Link. This step creates a link you can use to configure SSO with your identity provider.

Using the provided link, navigate to the Identity Provider Configuration dashboard and select your identity provider to continue configuration. If your provider isn't listed, you can continue with the SAML or OpenID Connect choices instead.

Once you complete SSO configuration, your users will be required to authenticate via your identity provider when accessing organization resources, giving you full control over application access.

","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/sso/#directory-sync","title":"Directory sync","text":"

Directory sync automatically provisions and de-provisions users for your organization.

Provisioned users are given basic \u201cMember\u201d roles and will have access to any resources that role entails.

When a user is unassigned from the Prefect Cloud application in your identity provider, they will automatically lose access to Prefect Cloud resources, allowing your IT team to control access to Prefect Cloud without ever signing into the app.

","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"community/","title":"Community","text":"

There are many ways to get involved with the Prefect community.

","tags":["community","Slack","Discourse"],"boost":2},{"location":"concepts/","title":"Explore Prefect concepts","text":"","tags":["concepts","features","overview"],"boost":2},{"location":"concepts/#develop","title":"Develop","text":"Keyword Description Flows A Prefect workflow, modeled as a Python function. Tasks Discrete units of work in a Prefect workflow. Results The data returned by a flow or a task. Artifacts Formatted outputs rendered in the Prefect UI, such as markdown, tables, or links. States The status of a particular task run or flow run. Task Runners Enable you to engage specific executors for Prefect tasks, such as concurrent, parallel, or distributed execution of tasks. Runtime Context Information about the current flow or task run that you can refer to in your code. Profiles & Configuration Settings you can use to interact with Prefect Cloud and a Prefect server. Blocks Prefect primitives that enable the storage of configuration and provide a UI interface. Variables Named, mutable string values, much like environment variables.","tags":["concepts","features","overview"],"boost":2},{"location":"concepts/#deploy","title":"Deploy","text":"Keyword Description Deployments A server-side concept that encapsulates a flow, allowing it to be scheduled and triggered via API. Deployment Management A minimally opinionated set of files that describe how to prepare one or more flow deployments. Work Pools, Workers & Agents Bridge the Prefect orchestration environment with your execution environment. Storage Lets you configure how flow code for deployments is persisted and retrieved by Prefect agents. Filesystems Blocks that allow you to read and write data from paths. Infrastructure Blocks that specify infrastructure for flow runs created by the deployment. Schedules Tell the Prefect API how to create new flow runs for you automatically on a specified cadence. Logging Log a variety of useful information about your flow and task runs on the server.

Features specific to Prefect Cloud are in their own subheading.

","tags":["concepts","features","overview"],"boost":2},{"location":"concepts/artifacts/","title":"Artifacts","text":"

Artifacts are persisted outputs such as tables, files, or links. They can be published via the Prefect SDK or REST API. They are stored on Prefect Cloud or a Prefect server instance and rendered in the Prefect UI. Artifacts make it easy to track and monitor the objects that your flows produce and update over time.

Published artifacts may be associated with a particular task run, flow run, or outside a flow run context. Artifacts provide a richer way to present information relative to typical logging practices \u2014 including the ability to display tables, Markdown, and links to external data.

","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#artifacts-overview","title":"Artifacts Overview","text":"

Whether you're publishing links, markdown, or tables, artifacts provide a powerful and flexible way to showcase data within your workflow. With artifacts, you can easily manage and share information with your team, providing valuable insights and context.

Common use cases for artifacts include:

","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#creating-artifacts","title":"Creating Artifacts","text":"

Creating artifacts allows you to publish data from task and flow runs or outside of a flow run context. Currently, you can render three artifact types: links, markdown, and tables.

Artifacts render individually

Please note that every artifact created within a task will be displayed as an individual artifact in the Prefect UI. This means that each call to create_link_artifact() or create_markdown_artifact() generates a distinct artifact.

Unlike the print() command, where you can concatenate multiple calls to build up a single report, each of these calls produces its own distinct artifact within a task.

To create artifacts like reports or summaries using create_markdown_artifact(), compile your message string separately and then pass it to create_markdown_artifact() to create the complete artifact.

","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#creating-link-artifacts","title":"Creating Link Artifacts","text":"

To create a link artifact, use the create_link_artifact() function. To create multiple versions of the same artifact and/or view them on the Artifacts page of the Prefect UI, provide a key argument to the create_link_artifact() function to track an artifact's history over time. Without a key, the artifact will only be visible in the artifacts tab of the associated flow run or task run.

from prefect import flow, task\nfrom prefect.artifacts import create_link_artifact\n\n@task\ndef my_first_task():\n        create_link_artifact(\n            key=\"irregular-data\",\n            link=\"https://nyc3.digitaloceanspaces.com/my-bucket-name/highly_variable_data.csv\",\n            description=\"## Highly variable data\",\n        )\n\n@task\ndef my_second_task():\n        create_link_artifact(\n            key=\"irregular-data\",\n            link=\"https://nyc3.digitaloceanspaces.com/my-bucket-name/low_pred_data.csv\",\n            description=\"# Low prediction accuracy\",\n        )\n\n@flow\ndef my_flow():\n    my_first_task()\n    my_second_task()\n\nif __name__ == \"__main__\":\n    my_flow()\n

Tip

You can specify multiple artifacts with the same key to more easily track something very specific that you care about, such as irregularities in your data pipeline.

After running the above flows, you can find your new artifacts in the Artifacts page of the UI. You can click into the \"irregular-data\" artifact and see all versions of it, along with custom descriptions and links to the relevant data.

Here, you'll also be able to view information about your artifact such as its associated flow run or task run id, previous and future versions of the artifact (multiple artifacts can have the same key in order to show lineage), the data you've stored (in this case a markdown-rendered link), an optional markdown description, and when the artifact was created or updated.

To make the links more readable for you and your collaborators, you can pass in a link_text argument for your link artifacts:

from prefect import flow\nfrom prefect.artifacts import create_link_artifact\n\n@flow\ndef my_flow():\n    create_link_artifact(\n        key=\"my-important-link\",\n        link=\"https://www.prefect.io/\",\n        link_text=\"Prefect\",\n    )\n\nif __name__ == \"__main__\":\n    my_flow()\n
In the above example, the create_link_artifact method is used within a flow to create a link artifact with a key of my-important-link. The link parameter is used to specify the external resource to be linked to, and link_text is used to specify the text to be displayed for the link. An optional description could also be added for context.

","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#creating-markdown-artifacts","title":"Creating Markdown Artifacts","text":"

To create a markdown artifact, you can use the create_markdown_artifact() function. To create multiple versions of the same artifact and/or view them on the Artifacts page of the Prefect UI, provide a key argument to the create_markdown_artifact() function to track an artifact's history over time. Without a key, the artifact will only be visible in the artifacts tab of the associated flow run or task run.

from prefect import flow, task\nfrom prefect.artifacts import create_markdown_artifact\n\n@task\ndef markdown_task():\n    na_revenue = 500000\n    markdown_report = f\"\"\"# Sales Report\n\n## Summary\n\nIn the past quarter, our company saw a significant increase in sales, with a total revenue of $1,000,000. \nThis represents a 20% increase over the same period last year.\n\n## Sales by Region\n\n| Region        | Revenue |\n|:--------------|-------:|\n| North America | ${na_revenue:,} |\n| Europe        | $250,000 |\n| Asia          | $150,000 |\n| South America | $75,000 |\n| Africa        | $25,000 |\n\n## Top Products\n\n1. Product A - $300,000 in revenue\n2. Product B - $200,000 in revenue\n3. Product C - $150,000 in revenue\n\n## Conclusion\n\nOverall, these results are very encouraging and demonstrate the success of our sales team in increasing revenue \nacross all regions. However, we still have room for improvement and should focus on further increasing sales in \nthe coming quarter.\n\"\"\"\n    create_markdown_artifact(\n        key=\"gtm-report\",\n        markdown=markdown_report,\n        description=\"Quarterly Sales Report\",\n    )\n\n@flow()\ndef my_flow():\n    markdown_task()\n\n\nif __name__ == \"__main__\":\n    my_flow()\n

After running the above flow, you should see your \"gtm-report\" artifact in the Artifacts page of the UI. As with all artifacts, you'll be able to view the associated flow run or task run id, previous and future versions of the artifact, your rendered markdown data, and your optional markdown description.

","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#create-table-artifacts","title":"Create Table Artifacts","text":"

You can create a table artifact by calling create_table_artifact(). To create multiple versions of the same artifact and/or view them on the Artifacts page of the Prefect UI, provide a key argument to the create_table_artifact() function to track an artifact's history over time. Without a key, the artifact will only be visible in the artifacts tab of the associated flow run or task run.

Note

The create_table_artifact() function accepts a table argument, which can be provided as either a list of lists, a list of dictionaries, or a dictionary of lists.

from prefect.artifacts import create_table_artifact\n\ndef my_fn():\n    highest_churn_possibility = [\n       {'customer_id':'12345', 'name': 'John Smith', 'churn_probability': 0.85 }, \n       {'customer_id':'56789', 'name': 'Jane Jones', 'churn_probability': 0.65 } \n    ]\n\n    create_table_artifact(\n        key=\"personalized-reachout\",\n        table=highest_churn_possibility,\n        description= \"# Marvin, please reach out to these customers today!\"\n    )\n\nif __name__ == \"__main__\":\n    my_fn()\n

As you can see, you don't need to create an artifact in a flow run context. You can use it however you like and still get the benefits in the Prefect UI.

","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#managing-artifacts","title":"Managing Artifacts","text":"","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#reading-artifacts","title":"Reading Artifacts","text":"

In the Prefect UI, you can view all of the latest versions of your artifacts and click into a specific artifact to see its lineage over time. Additionally, you can inspect all versions of an artifact with a given key by running:

$ prefect artifact inspect <my-key>\n

or view all artifacts by running:

$ prefect artifact ls\n

You can also use the Prefect REST API to programmatically filter your results.

","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#deleting-artifacts","title":"Deleting Artifacts","text":"

You can delete specific artifacts directly from the CLI by key or id:

$ prefect artifact delete <my-key>\n
$ prefect artifact delete --id <my-id>\n

Alternatively, you can delete artifacts using the Prefect REST API.

","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#artifacts-api","title":"Artifacts API","text":"

Prefect provides the Prefect REST API to allow you to create, read, and delete artifacts programmatically. With the Artifacts API, you can automate the creation and management of artifacts as part of your workflow.

For example, to read the 5 most recently created markdown, table, and link artifacts, you can do the following:

import requests\n\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/abc/workspaces/xyz\"\nPREFECT_API_KEY=\"pnu_ghijk\"\ndata = {\n    \"sort\": \"CREATED_DESC\",\n    \"limit\": 5,\n    \"artifacts\": {\n        \"key\": {\n            \"exists_\": True\n        }\n    }\n}\n\nheaders = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\nendpoint = f\"{PREFECT_API_URL}/artifacts/filter\"\n\nresponse = requests.post(endpoint, headers=headers, json=data)\nassert response.status_code == 200\nfor artifact in response.json():\n    print(artifact)\n

If you don't specify a key, or don't require that a key exists, the response will also include results, which are a type of key-less artifact.

See the rest of the Prefect REST API documentation on artifacts for more information!

","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/blocks/","title":"Blocks","text":"

Blocks are a primitive within Prefect that enable the storage of configuration and provide an interface for interacting with external systems.

With blocks, you can securely store credentials for authenticating with services like AWS, GitHub, Slack, and any other system you'd like to orchestrate with Prefect.

Blocks expose methods that provide pre-built functionality for performing actions against an external system. They can be used to download data from or upload data to an S3 bucket, query data from or write data to a database, or send a message to a Slack channel.

You may configure blocks through code or via the Prefect Cloud or Prefect server UI.

You can access blocks both when configuring flow deployments and directly from within your flow code.

Prefect provides some built-in block types that you can use right out of the box. Additional blocks are available through Prefect Integrations. To use these blocks, pip install the package, then register the blocks you want to use with Prefect Cloud or a Prefect server.
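
For example, to make the credentials blocks from the prefect-aws integration available, you might install the package and register its blocks; the register command here simply mirrors the registration example shown later on this page:

$ pip install prefect-aws\n$ prefect block register --module prefect_aws.credentials\n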

Prefect Cloud and the Prefect server UI display a library of block types available for you to configure blocks that may be used by your flows.

Blocks and parameters

Blocks are useful for configuration that needs to be shared across flow runs and between flows.

For configuration that will change between flow runs, we recommend using parameters.

","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#prefect-built-in-blocks","title":"Prefect built-in blocks","text":"

Prefect provides a broad range of commonly used, built-in block types. These block types are available in Prefect Cloud and the Prefect server UI.

Block Slug Description Azure azure Store data as a file on Azure Datalake and Azure Blob Storage. Date Time date-time A block that represents a datetime. Docker Container docker-container Runs a command in a container. Docker Registry docker-registry Connects to a Docker registry. Requires a Docker Engine to be connectable. GCS gcs Store data as a file on Google Cloud Storage. GitHub github Interact with files stored on public GitHub repositories. JSON json A block that represents JSON. Kubernetes Cluster Config kubernetes-cluster-config Stores configuration for interaction with Kubernetes clusters. Kubernetes Job kubernetes-job Runs a command as a Kubernetes Job. Local File System local-file-system Store data as a file on a local file system. Microsoft Teams Webhook ms-teams-webhook Enables sending notifications via a provided Microsoft Teams webhook. Opsgenie Webhook opsgenie-webhook Enables sending notifications via a provided Opsgenie webhook. Pager Duty Webhook pager-duty-webhook Enables sending notifications via a provided PagerDuty webhook. Process process Run a command in a new process. Remote File System remote-file-system Store data as a file on a remote file system. Supports any remote file system supported by fsspec. S3 s3 Store data as a file on AWS S3. Secret secret A block that represents a secret value. The value stored in this block will be obfuscated when this block is logged or shown in the UI. Slack Webhook slack-webhook Enables sending notifications via a provided Slack webhook. SMB smb Store data as a file on a SMB share. String string A block that represents a string. Twilio SMS twilio-sms Enables sending notifications via Twilio SMS. Webhook webhook Block that enables calling webhooks.","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#blocks-in-prefect-integrations","title":"Blocks in Prefect Integrations","text":"

Blocks can also be created by anyone and shared with the community. You'll find blocks that are available for consumption in many of the published Prefect Integrations. The following table provides an overview of the blocks available from our most popular Prefect Integrations.

Integration Block Slug prefect-airbyte Airbyte Connection airbyte-connection prefect-airbyte Airbyte Server airbyte-server prefect-aws AWS Credentials aws-credentials prefect-aws ECS Task ecs-task prefect-aws MinIO Credentials minio-credentials prefect-aws S3 Bucket s3-bucket prefect-azure Azure Blob Storage Credentials azure-blob-storage-credentials prefect-azure Azure Container Instance Credentials azure-container-instance-credentials prefect-azure Azure Container Instance Job azure-container-instance-job prefect-azure Azure Cosmos DB Credentials azure-cosmos-db-credentials prefect-azure AzureML Credentials azureml-credentials prefect-bitbucket BitBucket Credentials bitbucket-credentials prefect-bitbucket BitBucket Repository bitbucket-repository prefect-census Census Credentials census-credentials prefect-census Census Sync census-sync prefect-databricks Databricks Credentials databricks-credentials prefect-dbt dbt CLI BigQuery Target Configs dbt-cli-bigquery-target-configs prefect-dbt dbt CLI Profile dbt-cli-profile prefect-dbt dbt Cloud Credentials dbt-cloud-credentials prefect-dbt dbt CLI Global Configs dbt-cli-global-configs prefect-dbt dbt CLI Postgres Target Configs dbt-cli-postgres-target-configs prefect-dbt dbt CLI Snowflake Target Configs dbt-cli-snowflake-target-configs prefect-dbt dbt CLI Target Configs dbt-cli-target-configs prefect-docker Docker Host docker-host prefect-docker Docker Registry Credentials docker-registry-credentials prefect-email Email Server Credentials email-server-credentials prefect-firebolt Firebolt Credentials firebolt-credentials prefect-firebolt Firebolt Database firebolt-database prefect-gcp BigQuery Warehouse bigquery-warehouse prefect-gcp GCP Cloud Run Job cloud-run-job prefect-gcp GCP Credentials gcp-credentials prefect-gcp GcpSecret gcpsecret prefect-gcp GCS Bucket gcs-bucket prefect-gcp Vertex AI Custom Training Job vertex-ai-custom-training-job prefect-github GitHub Credentials github-credentials prefect-github GitHub Repository github-repository prefect-gitlab GitLab Credentials gitlab-credentials prefect-gitlab GitLab Repository gitlab-repository prefect-hex Hex Credentials hex-credentials prefect-hightouch Hightouch Credentials hightouch-credentials prefect-kubernetes Kubernetes Credentials kubernetes-credentials prefect-monday Monday Credentials monday-credentials prefect-monte-carlo Monte Carlo Credentials monte-carlo-credentials prefect-openai OpenAI Completion Model openai-completion-model prefect-openai OpenAI Image Model openai-image-model prefect-openai OpenAI Credentials openai-credentials prefect-slack Slack Credentials slack-credentials prefect-slack Slack Incoming Webhook slack-incoming-webhook prefect-snowflake Snowflake Connector snowflake-connector prefect-snowflake Snowflake Credentials snowflake-credentials prefect-sqlalchemy Database Credentials database-credentials prefect-sqlalchemy SQLAlchemy Connector sqlalchemy-connector prefect-twitter Twitter Credentials twitter-credentials","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#using-existing-block-types","title":"Using existing block types","text":"

Blocks are classes that subclass the Block base class. They can be instantiated and used like normal classes.

","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#instantiating-blocks","title":"Instantiating blocks","text":"

For example, to instantiate a block that stores a JSON value, use the JSON block:

from prefect.blocks.system import JSON\n\njson_block = JSON(value={\"the_answer\": 42})\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#saving-blocks","title":"Saving blocks","text":"

If this JSON value needs to be retrieved later for use within a flow or task, we can use the .save() method on the block to store the value in a block document on the Prefect database:

json_block.save(name=\"life-the-universe-everything\")\n

Utilizing the UI

Block documents can also be created and updated via the Prefect UI.

","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#loading-blocks","title":"Loading blocks","text":"

The name given when saving the value stored in the JSON block can be used when retrieving the value during a flow or task run:

from prefect import flow\nfrom prefect.blocks.system import JSON\n\n@flow\ndef what_is_the_answer():\n    json_block = JSON.load(\"life-the-universe-everything\")\n    print(json_block.value[\"the_answer\"])\n\nwhat_is_the_answer() # 42\n

Blocks can also be loaded with a unique slug that is a combination of a block type slug and a block document name.

To load our JSON block document from before, we can run the following:

from prefect.blocks.core import Block\n\njson_block = Block.load(\"json/life-the-universe-everything\")\nprint(json_block.value[\"the_answer\"]) # 42\n

Sharing Blocks

Blocks can also be loaded by fellow Workspace Collaborators, available on Prefect Cloud.

","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#deleting-blocks","title":"Deleting blocks","text":"

You can delete a block by using the .delete() method on the block:

from prefect.blocks.core import Block\nBlock.delete(\"json/life-the-universe-everything\")\n

You can also use the CLI to delete specific blocks with a given slug or id:

prefect block delete json/life-the-universe-everything\n
prefect block delete --id <my-id>\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#creating-new-block-types","title":"Creating new block types","text":"

To create a custom block type, define a class that subclasses Block. The Block base class builds off of Pydantic's BaseModel, so custom blocks can be declared in the same manner as a Pydantic model.

Here's a block that represents a cube and holds information about the length of each edge in inches:

from prefect.blocks.core import Block\n\nclass Cube(Block):\n    edge_length_inches: float\n

You can also include methods on a block to provide useful functionality. Here's the same cube block with methods to calculate the volume and surface area of the cube:

from prefect.blocks.core import Block\n\nclass Cube(Block):\n    edge_length_inches: float\n\n    def get_volume(self):\n        return self.edge_length_inches**3\n\n    def get_surface_area(self):\n        return 6 * self.edge_length_inches**2\n

Now the Cube block can be used to store different cube configurations that can later be used in a flow:

from prefect import flow\n\nrubiks_cube = Cube(edge_length_inches=2.25)\nrubiks_cube.save(\"rubiks-cube\")\n\n@flow\ndef calculate_cube_surface_area(cube_name):\n    cube = Cube.load(cube_name)\n    print(cube.get_surface_area())\n\ncalculate_cube_surface_area(\"rubiks-cube\") # 30.375\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#secret-fields","title":"Secret fields","text":"

All block values are encrypted before being stored, but if you have values that you would not like visible in the UI or in logs, then you can use the SecretStr field type provided by Pydantic to automatically obfuscate those values. This can be useful for fields that are used to store credentials like passwords and API tokens.

Here's an example of an AWSCredentials block that uses SecretStr:

from typing import Optional\n\nfrom prefect.blocks.core import Block\nfrom pydantic import SecretStr\n\nclass AWSCredentials(Block):\n    aws_access_key_id: Optional[str] = None\n    aws_secret_access_key: Optional[SecretStr] = None\n    aws_session_token: Optional[str] = None\n    profile_name: Optional[str] = None\n    region_name: Optional[str] = None\n

Because aws_secret_access_key has the SecretStr type hint assigned to it, the value of that field will not be exposed if the object is logged:

aws_credentials_block = AWSCredentials(\n    aws_access_key_id=\"AKIAJKLJKLJKLJKLJKLJK\",\n    aws_secret_access_key=\"secret_access_key\"\n)\n\nprint(aws_credentials_block)\n# aws_access_key_id='AKIAJKLJKLJKLJKLJKLJK' aws_secret_access_key=SecretStr('**********') aws_session_token=None profile_name=None region_name=None\n

You can also use the SecretDict field type provided by Prefect. This type allows you to add a dictionary field to your block whose values are automatically obfuscated at all levels in the UI and in logs. This is useful for blocks where the typing or structure of secret fields is not known until configuration time.

Here's an example of a block that uses SecretDict:

from typing import Dict\n\nfrom prefect.blocks.core import Block\nfrom prefect.blocks.fields import SecretDict\n\n\nclass SystemConfiguration(Block):\n    system_secrets: SecretDict\n    system_variables: Dict\n\n\nsystem_configuration_block = SystemConfiguration(\n    system_secrets={\n        \"password\": \"p@ssw0rd\",\n        \"api_token\": \"token_123456789\",\n        \"private_key\": \"<private key here>\",\n    },\n    system_variables={\n        \"self_destruct_countdown_seconds\": 60,\n        \"self_destruct_countdown_stop_time\": 7,\n    },\n)\n
system_secrets will be obfuscated when system_configuration_block is displayed, but system_variables will be shown in plain-text:

print(system_configuration_block)\n# SystemConfiguration(\n#   system_secrets=SecretDict('{'password': '**********', 'api_token': '**********', 'private_key': '**********'}'), \n#   system_variables={'self_destruct_countdown_seconds': 60, 'self_destruct_countdown_stop_time': 7}\n# )\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#blocks-metadata","title":"Blocks metadata","text":"

The way that a block is displayed can be controlled by metadata fields that can be set on a block subclass.
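
For example, a minimal sketch of the Cube block from earlier with two of these metadata fields set might look like this (the display name and description values are illustrative):

from prefect.blocks.core import Block\n\nclass Cube(Block):\n    # Controls how the block type is displayed in the UI\n    _block_type_name = \"Cube\"\n    _description = \"Stores the edge length of a cube in inches.\"\n\n    edge_length_inches: float\n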

Available metadata fields include:

Property Description _block_type_name Display name of the block in the UI. Defaults to the class name. _block_type_slug Unique slug used to reference the block type in the API. Defaults to a lowercase, dash-delimited version of the block type name. _logo_url URL pointing to an image that should be displayed for the block type in the UI. Defaults to None. _description Short description of block type. Defaults to docstring, if provided. _code_example Short code snippet shown in UI for how to load/use block type. Defaults to the first example provided in the docstring of the class, if provided.","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#nested-blocks","title":"Nested blocks","text":"

Blocks are composable. This means that you can create a block that uses functionality from another block by declaring it as an attribute on the block that you're creating. It also means that configuration can be changed for each block independently, so configuration that changes on different time frames can be managed easily and shared across multiple use cases.

To illustrate, here's an expanded AWSCredentials block that includes the ability to get an authenticated session via the boto3 library:

from typing import Optional\n\nimport boto3\nfrom prefect.blocks.core import Block\nfrom pydantic import SecretStr\n\nclass AWSCredentials(Block):\n    aws_access_key_id: Optional[str] = None\n    aws_secret_access_key: Optional[SecretStr] = None\n    aws_session_token: Optional[str] = None\n    profile_name: Optional[str] = None\n    region_name: Optional[str] = None\n\n    def get_boto3_session(self):\n        return boto3.Session(\n            aws_access_key_id=self.aws_access_key_id,\n            # unwrap the SecretStr so boto3 receives a plain string\n            aws_secret_access_key=(\n                self.aws_secret_access_key.get_secret_value()\n                if self.aws_secret_access_key\n                else None\n            ),\n            aws_session_token=self.aws_session_token,\n            profile_name=self.profile_name,\n            region_name=self.region_name,\n        )\n

The AWSCredentials block can be used within an S3Bucket block to provide authentication when interacting with an S3 bucket:

import io\n\nfrom prefect.blocks.core import Block\n\nclass S3Bucket(Block):\n    bucket_name: str\n    credentials: AWSCredentials\n\n    def read(self, key: str) -> bytes:\n        s3_client = self.credentials.get_boto3_session().client(\"s3\")\n\n        stream = io.BytesIO()\n        s3_client.download_fileobj(Bucket=self.bucket_name, Key=key, Fileobj=stream)\n\n        stream.seek(0)\n        output = stream.read()\n\n        return output\n\n    def write(self, key: str, data: bytes) -> None:\n        s3_client = self.credentials.get_boto3_session().client(\"s3\")\n        stream = io.BytesIO(data)\n        s3_client.upload_fileobj(stream, Bucket=self.bucket_name, Key=key)\n

You can use this S3Bucket block with previously saved AWSCredentials block values in order to interact with the configured S3 bucket:

my_s3_bucket = S3Bucket(\n    bucket_name=\"my_s3_bucket\",\n    credentials=AWSCredentials.load(\"my_aws_credentials\")\n)\n\nmy_s3_bucket.save(\"my_s3_bucket\")\n

Saving block values like this links the two blocks: any changes to the values stored for the AWSCredentials block named my_aws_credentials will be reflected the next time the block values for the S3Bucket block named my_s3_bucket are loaded.

Alternatively, values for nested blocks can be hard coded by not saving the child block first:

my_s3_bucket = S3Bucket(\n    bucket_name=\"my_s3_bucket\",\n    credentials=AWSCredentials(\n        aws_access_key_id=\"AKIAJKLJKLJKLJKLJKLJK\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n)\n\nmy_s3_bucket.save(\"my_s3_bucket\")\n

In the above example, the values for AWSCredentials are saved with my_s3_bucket and will not be usable with any other blocks.

","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#handling-updates-to-custom-block-types","title":"Handling updates to custom Block types","text":"

Let's say that you now want to add a bucket_folder field to your custom S3Bucket block that represents the default path to read and write objects from (this field exists on our implementation).

We can add the new field to the class definition:

class S3Bucket(Block):\n    bucket_name: str\n    credentials: AWSCredentials\n    bucket_folder: str = None\n    ...\n

Then register the updated block type with either Prefect Cloud or your self-hosted Prefect server.

If you have any existing blocks of this type that were created before the update and you'd prefer to not re-create them, you can migrate them to the new version of your block type by adding the missing values:

# Bypass Pydantic validation to allow your local Block class to load the old block version\nmy_s3_bucket_block = S3Bucket.load(\"my-s3-bucket\", validate=False)\n\n# Set the new field to an appropriate value\nmy_s3_bucket_block.bucket_folder = \"my-default-bucket-path\"\n\n# Overwrite the old block values and update the expected fields on the block\nmy_s3_bucket_block.save(\"my-s3-bucket\", overwrite=True)\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#registering-blocks-for-use-in-the-prefect-ui","title":"Registering blocks for use in the Prefect UI","text":"

Blocks can be registered from a Python module available in the current virtual environment with a CLI command like this:

$ prefect block register --module prefect_aws.credentials\n

This command is useful for registering all blocks found in the credentials module within Prefect Integrations.

Or, if a block has been created in a .py file, the block can also be registered with the CLI command:

$ prefect block register --file my_block.py\n

The registered block will then be available in the Prefect UI for configuration.

","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/deployments-ux/","title":"Managing Deployments","text":"

You can manage your deployments with a prefect.yaml file that describes how to prepare one or more flow deployments. At a high level, you simply add this file to your working directory.

You can initialize your deployment configuration, which creates the prefect.yaml file, by running the CLI command prefect init in any directory or repository that stores your flow code.

Deployment configuration recipes

Prefect ships with many off-the-shelf \"recipes\" that allow you to get started with more structure within your prefect.yaml file; run prefect init to be prompted with available recipes in your installation. You can provide a recipe name in your initialization command with the --recipe flag, otherwise Prefect will attempt to guess an appropriate recipe based on the structure of your working directory (for example if you initialize within a git repository, Prefect will use the git recipe).

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#the-prefect-yaml-file","title":"The Prefect YAML file","text":"

The prefect.yaml file contains deployment configuration for deployments created from this file, default instructions for how to build and push any necessary code artifacts (such as Docker images), and default instructions for pulling a deployment in remote execution environments (e.g., cloning a GitHub repository).

Any deployment configuration can be overridden via options available on the prefect deploy CLI command when creating a deployment.

The base structure for prefect.yaml is as follows:

# generic metadata\nprefect-version: null\nname: null\n\n# preparation steps\nbuild: null\npush: null\n\n# runtime steps\npull: null\n\n# deployment configurations\ndeployments:\n- # base metadata\n  name: null\n  version: null\n  tags: []\n  description: null\n  schedule: null\n\n  # flow-specific fields\n  flow_name: null\n  entrypoint: null\n  parameters: {}\n\n  # infra-specific fields\n  work_pool:\n    name: null\n    work_queue_name: null\n    job_variables: {}\n

The metadata fields are always pre-populated for you and are currently for bookkeeping purposes only. The other sections are pre-populated based on recipe; if no recipe is provided, Prefect will attempt to guess an appropriate one based on local configuration.

You can create deployments via the CLI command prefect deploy without ever needing to alter the deployments section of your prefect.yaml file \u2014 the prefect deploy command will walk you through deployment creation via interactive prompts. However, the deployments section is useful for version-controlling your deployments and for managing multiple deployments.
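
For example, a typical invocation might look like the following (the entrypoint and deployment name are illustrative):

$ prefect deploy flows/hello.py:my_flow --name my-deployment\n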

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#deployment-actions","title":"Deployment Actions","text":"

Deployment actions defined in your prefect.yaml file control the lifecycle of the creation and execution of your deployments. The three actions available are build, push, and pull. pull is the only required deployment action \u2014 it is used to define how Prefect will pull your deployment in remote execution environments.

Each action is defined as a list of steps that are executing in sequence.

Each step has the following format:

section:\n- prefect_package.path.to.importable.step:\n    id: \"step-id\" # optional\n    requires: \"pip-installable-package-spec\" # optional\n    kwarg1: value\n    kwarg2: more-values\n

Every step can optionally provide a requires field that Prefect will use to auto-install in the event that the step cannot be found in the current environment. Each step can also specify an id for the step which is used when referencing step outputs in later steps. The additional fields map directly onto Python keyword arguments to the step function. Within a given section, steps always run in the order that they are provided within the prefect.yaml file.

Deployment Instruction Overrides

build, push, and pull sections can all be overridden on a per-deployment basis by defining build, push, and pull fields within a deployment definition in the prefect.yaml file.

The prefect deploy command will use any build, push, or pull instructions provided in a deployment's definition in the prefect.yaml file.

This capability is useful with multiple deployments that require different deployment instructions.

For more information on the mechanics of steps, see below.

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#the-build-action","title":"The Build Action","text":"

The build section of prefect.yaml is where any necessary side effects for running your deployments are built - the most common type of side effect produced here is a Docker image. If you initialize with the docker recipe, you will be prompted to provide required information, such as image name and tag:

$ prefect init --recipe docker\n>> image_name: < insert image name here >\n>> tag: < insert image tag here >\n

Use --field to avoid the interactive experience

We recommend that you only initialize a recipe when you are first creating your deployment structure, and afterwards store your configuration files within version control; however, sometimes you may need to initialize programmatically and avoid the interactive prompts. To do so, provide all required fields for your recipe using the --field flag:

$ prefect init --recipe docker \\\n--field image_name=my-repo/my-image \\\n--field tag=my-tag\n

build:\n- prefect_docker.deployments.steps.build_docker_image:\n    requires: prefect-docker>=0.3.0\n    image_name: my-repo/my-image\n    tag: my-tag\n    dockerfile: auto\n    push: true\n

Once you've confirmed that these fields are set to their desired values, this step will automatically build a Docker image with the provided name and tag and push it to the repository referenced by the image name. As the documentation notes, this step produces a few fields that can optionally be used in future steps or within prefect.yaml as template values. It is best practice to use {{ image }} within prefect.yaml (specifically the work pool's job variables section) so that you don't risk having your build step and deployment specification get out of sync with hardcoded values. For a worked example, check out the deployments tutorial.
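
For example, a deployment's work pool section might reference the image produced by a build step with id build-image like this, mirroring the templating example later on this page (the work pool name is illustrative):

work_pool:\n  name: my-docker-work-pool\n  job_variables:\n    image: \"{{ build-image.image }}\"\n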

Note

Note that in the build step example above, we relied on the prefect-docker package; in cases that deal with external services, additional packages are often required and will be auto-installed for you.

Pass output to downstream steps

Each deployment action can be composed of multiple steps. For example, if you wanted to build a Docker image tagged with the current commit hash, you could use the run_shell_script step and feed the output into the build_docker_image step:

build:\n- prefect.deployments.steps.run_shell_script:\n    id: get-commit-hash\n    script: git rev-parse --short HEAD\n    stream_output: false\n- prefect_docker.deployments.steps.build_docker_image:\n    requires: prefect-docker\n    image_name: my-image\n    image_tag: \"{{ get-commit-hash.stdout }}\"\n    dockerfile: auto\n

Note that the id field is used in the run_shell_script step so that its output can be referenced in the next step.

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#the-push-action","title":"The Push Action","text":"

The push section is most critical for situations in which code is not stored on persistent filesystems or in version control. In this scenario, code is often pushed and pulled from a Cloud storage bucket of some kind (e.g., S3, GCS, Azure Blobs, etc.). The push section allows users to specify and customize the logic for pushing this code repository to arbitrary remote locations.

For example, a user wishing to store their code in an S3 bucket and rely on default worker settings for its runtime environment could use the s3 recipe:

$ prefect init --recipe s3\n>> bucket: < insert bucket name here >\n

Inspecting our newly created prefect.yaml file we find that the push and pull sections have been templated out for us as follows:

push:\n- prefect_aws.deployments.steps.push_to_s3:\n    id: push-code\n    requires: prefect-aws>=0.3.0\n    bucket: my-bucket\n    folder: project-name\n    credentials: null\n\npull:\n- prefect_aws.deployments.steps.pull_from_s3:\n    requires: prefect-aws>=0.3.0\n    bucket: my-bucket\n    folder: \"{{ push-code.folder }}\"\n    credentials: null\n

The bucket has been populated with our provided value (which also could have been provided with the --field flag); note that the folder property of the push step is a template - the push_to_s3 step outputs both a bucket value as well as a folder value that can be used to template downstream steps. Doing this helps you keep your steps consistent across edits.

As discussed above, if you are using blocks, the credentials section can be templated with a block reference for secure and dynamic credentials access:

push:\n- prefect_aws.deployments.steps.push_to_s3:\n    requires: prefect-aws>=0.3.0\n    bucket: my-bucket\n    folder: project-name\n    credentials: \"{{ prefect.blocks.aws-credentials.dev-credentials }}\"\n

Anytime you run prefect deploy, this push section will be executed upon successful completion of your build section. For more information on the mechanics of steps, see below.

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#the-pull-action","title":"The Pull Action","text":"

The pull section is the most important section within the prefect.yaml file as it contains instructions for preparing your flows for a deployment run. These instructions will be executed each time a deployment created within this folder is run via a worker.

There are three main types of steps that typically show up in a pull section:

Use block and variable references

All block and variable references within your pull step will remain unresolved until runtime and will be pulled each time your deployment is run. This allows you to avoid storing sensitive information insecurely; it also allows you to manage certain types of configuration from the API and UI without having to rebuild your deployment every time.

Below is an example of how to use an existing GitHubCredentials block to clone a private GitHub repository:

pull:\n- prefect.deployments.steps.git_clone:\n    repository: https://github.com/org/repo.git\n    credentials: \"{{ prefect.blocks.github-credentials.my-credentials }}\"\n

Alternatively, you can specify a BitBucketCredentials or GitLabCredentials block to clone from Bitbucket or GitLab. In lieu of a credentials block, you can also provide a GitHub, GitLab, or Bitbucket token directly to the access_token field. You can use a Secret block to do this securely:

pull:\n- prefect.deployments.steps.git_clone:\n    repository: https://bitbucket.org/org/repo.git\n    access_token: \"{{ prefect.blocks.secret.bitbucket-token }}\"\n
","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#utility-steps","title":"Utility Steps","text":"

Utility steps can be used within a build, push, or pull action to assist in managing the deployment lifecycle:

Here is an example of retrieving the short Git commit hash of the current repository to use as a Docker image tag:

build:\n- prefect.deployments.steps.run_shell_script:\n    id: get-commit-hash\n    script: git rev-parse --short HEAD\n    stream_output: false\n- prefect_docker.deployments.steps.build_docker_image:\n    requires: prefect-docker>=0.3.0\n    image_name: my-image\n    image_tag: \"{{ get-commit-hash.stdout }}\"\n    dockerfile: auto\n

Below is an example of installing dependencies from a requirements.txt file after cloning:

pull:\n- prefect.deployments.steps.git_clone:\n    id: clone-step\n    repository: https://github.com/org/repo.git\n- prefect.deployments.steps.pip_install_requirements:\n    directory: \"{{ clone-step.directory }}\"\n    requirements_file: requirements.txt\n    stream_output: False\n

Below is an example that retrieves an access token from a 3rd party Key Vault and uses it in a private clone step:

pull:\n- prefect.deployments.steps.run_shell_script:\n    id: get-access-token\n    script: az keyvault secret show --name <secret name> --vault-name <secret vault> --query \"value\" --output tsv\n    stream_output: false\n- prefect.deployments.steps.git_clone:\n    repository: https://bitbucket.org/samples/deployments.git\n    branch: master\n    access_token: \"{{ get-access-token.stdout }}\"\n

You can also run custom steps by packaging them. In the example below, retrieve_secrets is a custom Python module that has been packaged into the default working directory of a Docker image (which is /opt/prefect by default). main is the function entry point, which returns an access token (e.g. return {\"access_token\": access_token}) like the preceding example, but utilizing the Azure Python SDK for retrieval.

- retrieve_secrets.main:\n    id: get-access-token\n- prefect.deployments.steps.git_clone:\n    repository: https://bitbucket.org/samples/deployments.git\n    branch: master\n    access_token: '{{ get-access-token.access_token }}'\n
","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#templating-options","title":"Templating Options","text":"

Values that you place within your prefect.yaml file can reference dynamic values in several different ways:

As an example, consider the following prefect.yaml file:

build:\n- prefect_docker.deployments.steps.build_docker_image:\n    id: build-image\n    requires: prefect-docker>=0.3.0\n    image_name: my-repo/my-image\n    tag: my-tag\n    dockerfile: auto\n    push: true\n\ndeployments:\n- # base metadata\n  name: null\n  version: \"{{ build-image.tag }}\"\n  tags:\n  - \"{{ $my_deployment_tag }}\"\n  - \"{{ prefect.variables.some_common_tag }}\"\n  description: null\n  schedule: null\n\n  # flow-specific fields\n  flow_name: null\n  entrypoint: null\n  parameters: {}\n\n  # infra-specific fields\n  work_pool:\n    name: \"my-k8s-work-pool\"\n    work_queue_name: null\n    job_variables:\n      image: \"{{ build-image.image }}\"\n      cluster_config: \"{{ prefect.blocks.kubernetes-cluster-config.my-favorite-config }}\"\n

So long as our build steps produce fields called image_name and tag, every time we deploy a new version of our deployment, these fields will be dynamically populated with the relevant values.

Docker step

The most commonly used build step is prefect_docker.deployments.steps.build_docker_image which produces both the image_name and tag fields.

For an example, check out the deployments tutorial.

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#deployment-configurations","title":"Deployment Configurations","text":"

Each prefect.yaml file can have multiple deployment configurations that control the behavior of created deployments. These deployments can be managed independently of one another, allowing you to deploy the same flow with different configurations in the same codebase.

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#working-with-multiple-deployments","title":"Working With Multiple Deployments","text":"

Prefect supports multiple deployment declarations within the prefect.yaml file. This method of declaring multiple deployments allows the configuration for all deployments to be version controlled and deployed with a single command.

New deployment declarations can be added to the prefect.yaml file by adding a new entry to the deployments list. Each deployment declaration must have a unique name field which is used to select deployment declarations when using the prefect deploy command.

For example, consider the following prefect.yaml file:

build: ...\npush: ...\npull: ...\n\ndeployments:\n- name: deployment-1\n  entrypoint: flows/hello.py:my_flow\n  parameters:\n    number: 42\n    message: Don't panic!\n  work_pool:\n    name: my-process-work-pool\n    work_queue_name: primary-queue\n\n- name: deployment-2\n  entrypoint: flows/goodbye.py:my_other_flow\n  work_pool:\n    name: my-process-work-pool\n    work_queue_name: secondary-queue\n\n- name: deployment-3\n  entrypoint: flows/hello.py:yet_another_flow\n  work_pool:\n    name: my-docker-work-pool\n    work_queue_name: tertiary-queue\n

This file has three deployment declarations, each referencing a different flow. Each deployment declaration has a unique name field and can be deployed individually by using the --name flag when deploying.

For example, to deploy deployment-1 we would run:

$ prefect deploy --name deployment-1\n

To deploy multiple deployments you can provide multiple --name flags:

$ prefect deploy --name deployment-1 --name deployment-2\n

To deploy multiple deployments with the same name, you can prefix the deployment name with its flow name:

$ prefect deploy --name my_flow/deployment-1 --name my_other_flow/deployment-1\n

To deploy all deployments you can use the --all flag:

$ prefect deploy --all\n

CLI Options When Deploying Multiple Deployments

When deploying more than one deployment with a single prefect deploy command, any additional attributes provided via the CLI will be ignored.

To provide overrides to a deployment via the CLI, you must deploy that deployment individually.

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#reusing-configuration-across-deployments","title":"Reusing Configuration Across Deployments","text":"

Because a prefect.yaml file is a standard YAML file, you can use YAML aliases to reuse configuration across deployments.

This functionality is useful when multiple deployments need to share the work pool configuration, deployment actions, or other configurations.

You can declare a YAML alias by using the &{alias_name} syntax and insert that alias elsewhere in the file with the *{alias_name} syntax. When aliasing YAML maps, you can also override specific fields of the aliased map by using the <<: *{alias_name} syntax and adding additional fields below.

We recommend adding a definitions section to your prefect.yaml file at the same level as the deployments section to store your aliases.

For example, consider the following prefect.yaml file:

build: ...\npush: ...\npull: ...\n\ndefinitions:\n  work_pools:\n    my_docker_work_pool: &my_docker_work_pool\n      name: my-docker-work-pool\n      work_queue_name: default\n      job_variables:\n        image: \"{{ build-image.image }}\"\n  schedules:\n    every_ten_minutes: &every_10_minutes\n      interval: 600\n  actions:\n    docker_build: &docker_build\n    - prefect_docker.deployments.steps.build_docker_image: &docker_build_config\n        id: build-image\n        requires: prefect-docker>=0.3.0\n        image_name: my-example-image\n        tag: dev\n        dockerfile: auto\n        push: true\n\ndeployments:\n- name: deployment-1\n  entrypoint: flows/hello.py:my_flow\n  schedule: *every_10_minutes\n  parameters:\n    number: 42\n    message: Don't panic!\n  work_pool: *my_docker_work_pool\n  build: *docker_build # Uses the full docker_build action with no overrides\n\n- name: deployment-2\n  entrypoint: flows/goodbye.py:my_other_flow\n  work_pool: *my_docker_work_pool\n  build:\n  - prefect_docker.deployments.steps.build_docker_image:\n      <<: *docker_build_config # Uses the docker_build_config alias and overrides the dockerfile field\n      dockerfile: Dockerfile.custom\n\n- name: deployment-3\n  entrypoint: flows/hello.py:yet_another_flow\n  schedule: *every_10_minutes\n  work_pool:\n    name: my-process-work-pool\n    work_queue_name: primary-queue\n

In the above example, we are using YAML aliases to reuse work pool, schedule, and build configuration across multiple deployments:

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#deployment-declaration-reference","title":"Deployment Declaration Reference","text":"","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#deployment-fields","title":"Deployment Fields","text":"

Below are fields that can be added to each deployment declaration.

Property Description name The name to give to the created deployment. Used with the prefect deploy command to create or update specific deployments. version An optional version for the deployment. tags A list of strings to assign to the deployment as tags. description An optional description for the deployment. schedule An optional schedule to assign to the deployment. Fields for this section are documented in the Schedule Fields section. triggers An optional array of triggers to assign to the deployment flow_name Deprecated: The name of a flow that has been registered in the .prefect directory. Either flow_name or entrypoint is required. entrypoint The path to the .py file containing the flow you want to deploy (relative to the root directory of your development folder) combined with the name of the flow function. Should be in the format path/to/file.py:flow_function_name. Either flow_name or entrypoint is required. parameters Optional default values to provide for the parameters of the deployed flow. Should be an object with key/value pairs. work_pool Information on where to schedule flow runs for the deployment. Fields for this section are documented in the Work Pool Fields section.","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#schedule-fields","title":"Schedule Fields","text":"

Below are fields that can be added to a deployment declaration's schedule section.

Property Description interval Number of seconds indicating the time between flow runs. Cannot be used in conjunction with cron or rrule. anchor_date Datetime string indicating the starting or \"anchor\" date to begin the schedule. If no anchor_date is supplied, the current UTC time is used. Can only be used with interval. timezone String name of a time zone, used to enforce localization behaviors like DST boundaries. See the IANA Time Zone Database for valid time zones. cron A valid cron string. Cannot be used in conjunction with interval or rrule. day_or Boolean indicating how croniter handles day and day_of_week entries. Must be used with cron. Defaults to True. rrule String representation of an RRule schedule. See the rrulestr examples for syntax. Cannot be used in conjunction with interval or cron.

For more information about schedules, see the Schedules concept doc.
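
For example, a deployment declaration's schedule section using the cron and timezone fields from the table above might look like this (the cron string and timezone are illustrative):

deployments:\n- name: deployment-1\n  entrypoint: flows/hello.py:my_flow\n  schedule:\n    cron: \"0 9 * * 1-5\"\n    timezone: \"America/Chicago\"\n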

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#work-pool-fields","title":"Work Pool Fields","text":"

Below are fields that can be added to a deployment declaration's work_pool section.

Property Description name The name of the work pool to schedule flow runs in for the deployment. work_queue_name The name of the work queue within the specified work pool to schedule flow runs in for the deployment. If not provided, the default queue for the specified work pool will be used. job_variables Values used to override the default values in the specified work pool's base job template. Maps directly to a created deployments infra_overrides attribute.","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#deployment-mechanics","title":"Deployment mechanics","text":"

Anytime you run prefect deploy, the following actions are taken in order:

Deployment Instruction Overrides

The build, push, and pull sections in deployment definitions take precedence over the corresponding sections above them in prefect.yaml.

Anytime a step is run, the following actions are taken in order:

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments/","title":"Deployments","text":"

A deployment is a server-side concept that encapsulates a flow, allowing it to be scheduled and triggered via API. The deployment stores metadata about where your flow's code is stored and how your flow should be run.

Each deployment references a single \"entrypoint\" flow (though that flow may, in turn, call any number of tasks and subflows). Any single flow, however, may be referenced by any number of deployments.

At a high level, you can think of a deployment as configuration for managing flows, whether you run them via the CLI, the UI, or the API.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#deployments-overview","title":"Deployments overview","text":"

All Prefect flow runs are tracked by the API. The API does not require prior registration of flows. With Prefect, you can call a flow locally or on a remote environment and it will be tracked.

Creating a deployment for a Prefect workflow means packaging workflow code, settings, and infrastructure configuration so that the workflow can be managed via the Prefect API and run remotely by a Prefect agent.

The following diagram provides a high-level overview of the conceptual elements involved in defining a deployment and executing a flow run based on that deployment.

%%{\n  init: {\n    'theme': 'base',\n    'themeVariables': {\n      'fontSize': '19px'\n    }\n  }\n}%%\n\nflowchart LR\n    F(\"<div style='margin: 5px 10px 5px 5px;'>Flow Code</div>\"):::yellow -.-> A(\"<div style='margin: 5px 10px 5px 5px;'>Deployment Definition</div>\"):::gold\n    subgraph Server [\"<div style='width: 150px; text-align: center; margin-top: 5px;'>Prefect API</div>\"]\n        D(\"<div style='margin: 5px 10px 5px 5px;'>Deployment</div>\"):::green\n    end\n    subgraph Remote Storage [\"<div style='width: 160px; text-align: center; margin-top: 5px;'>Remote Storage</div>\"]\n        B(\"<div style='margin: 5px 6px 5px 5px;'>Flow</div>\"):::yellow\n    end\n    subgraph Infrastructure [\"<div style='width: 150px; text-align: center; margin-top: 5px;'>Infrastructure</div>\"]\n        G(\"<div style='margin: 5px 10px 5px 5px;'>Flow Run</div>\"):::blue\n    end\n\n    A --> D\n    D --> E(\"<div style='margin: 5px 10px 5px 5px;'>Agent</div>\"):::red\n    B -.-> E\n    A -.-> B\n    E -.-> G\n\n    classDef gold fill:goldenrod,stroke:goldenrod,stroke-width:4px\n    classDef yellow fill:gold,stroke:gold,stroke-width:4px\n    classDef gray fill:lightgray,stroke:lightgray,stroke-width:4px\n    classDef blue fill:blue,stroke:blue,stroke-width:4px,color:white\n    classDef green fill:green,stroke:green,stroke-width:4px,color:white\n    classDef red fill:red,stroke:red,stroke-width:4px,color:white\n    classDef dkgray fill:darkgray,stroke:darkgray,stroke-width:4px,color:white

Your flow code and the Prefect hybrid model

In the diagram above, the dotted line indicates the path of your flow code in the lifecycle of a Prefect deployment, from creation to executing a flow run. Notice that your flow code stays within your storage and execution infrastructure and never lives on the Prefect server or database.

This is the heart of the Prefect hybrid model: there's always a boundary between your code, your private infrastructure, and the Prefect backend, such as Prefect Cloud. Even if you're using a self-hosted Prefect server, you only register the deployment metadata on the backend, allowing for a clean separation of concerns.

When creating a deployment, a user must answer two basic questions:

A deployment additionally enables you to:

With remote storage blocks, you can package not only your flow code script but also any supporting files, including your custom modules, SQL scripts and any configuration files needed in your project.

To define how your flow execution environment should be configured, you may either reference pre-configured infrastructure blocks or let Prefect create those automatically for you as anonymous blocks (this happens when you specify the infrastructure type using --infra flag during the build process).

Work queue affinity improved starting from Prefect 2.0.5

Until Prefect 2.0.4, tags were used to associate flow runs with work queues. Starting in Prefect 2.0.5, tag-based work queues are deprecated. Instead, work queue names are used to explicitly direct flow runs from deployments into queues.

Note that backward compatibility is maintained and work queues that use tag-based matching can still be created and will continue to work. However, those work queues are now considered legacy and we encourage you to use the new behavior by specifying work queues explicitly on agents and deployments.

See Agents & Work Pools for details.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#deployments-and-flows","title":"Deployments and flows","text":"

Each deployment is associated with a single flow, but any given flow can be referenced by multiple deployments.

Deployments are uniquely identified by the combination of: flow_name/deployment_name.

graph LR\n    F(\"my_flow\"):::yellow -.-> A(\"Deployment 'daily'\"):::tan --> W(\"my_flow/daily\"):::fgreen\n    F -.-> B(\"Deployment 'weekly'\"):::gold  --> X(\"my_flow/weekly\"):::green\n    F -.-> C(\"Deployment 'ad-hoc'\"):::dgold --> Y(\"my_flow/ad-hoc\"):::dgreen\n    F -.-> D(\"Deployment 'trigger-based'\"):::dgold --> Z(\"my_flow/trigger-based\"):::dgreen\n\n    classDef gold fill:goldenrod,stroke:goldenrod,stroke-width:4px,color:white\n    classDef yellow fill:gold,stroke:gold,stroke-width:4px\n    classDef dgold fill:darkgoldenrod,stroke:darkgoldenrod,stroke-width:4px,color:white\n    classDef tan fill:tan,stroke:tan,stroke-width:4px,color:white\n    classDef fgreen fill:forestgreen,stroke:forestgreen,stroke-width:4px,color:white\n    classDef green fill:green,stroke:green,stroke-width:4px,color:white\n    classDef dgreen fill:darkgreen,stroke:darkgreen,stroke-width:4px,color:white

This enables you to run a single flow with different parameters, based on multiple schedules and triggers, and in different environments. This also enables you to run different versions of the same flow for testing and production purposes.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#deployment-definition","title":"Deployment definition","text":"

A deployment definition captures the settings for creating a deployment object on the Prefect API. You can create the deployment definition by:

The minimum required information to create a deployment includes:

You may provide additional settings for the deployment. Any settings you do not explicitly specify are inferred from defaults.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#create-a-deployment-on-the-cli","title":"Create a deployment on the CLI","text":"

To create a deployment on the CLI, there are two steps:

  1. Build the deployment definition file deployment.yaml. This step includes uploading your flow to its configured remote storage location, if one is specified.
  2. Create the deployment on the API.
","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#build-the-deployment","title":"Build the deployment","text":"

To build the deployment definition file deployment.yaml, run the prefect deployment build Prefect CLI command from the folder containing your flow script and any dependencies of the script.

$ prefect deployment build [OPTIONS] PATH\n

The path to the flow is specified in the format path-to-script:flow-function-name \u2014 the path and filename of the flow script file, a colon, then the name of the entrypoint flow function.

For example:

$ prefect deployment build -n marvin -p default-agent-pool -q test flows/marvin.py:say_hi\n

When you run this command, Prefect:

Uploading files may require storage filesystem libraries

Note that the appropriate filesystem library supporting the storage location must be installed prior to building a deployment with a storage block. For example, the AWS S3 Storage block requires the s3fs library.

Ignore files or directories from a deployment

By default, Prefect uploads all files in the current folder to the configured storage location (local by default) when you build a deployment.

If you want to omit certain files or directories from your deployments, add a .prefectignore file to the root directory.

Similar to other .ignore files, the syntax supports pattern matching, so an entry of *.pyc will ensure all .pyc files are ignored by the deployment call when uploading to remote storage.
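
For instance, a minimal .prefectignore might look like the following (the entries beyond *.pyc are illustrative):

# .prefectignore\n*.pyc\n__pycache__/\n.env\n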

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#deployment-build-options","title":"Deployment build options","text":"

You may specify additional options to further customize your deployment.

Options Description PATH Path, filename, and flow name of the flow definition. (Required) --apply, -a When provided, automatically registers the resulting deployment with the API. --cron TEXT A cron string that will be used to set a CronSchedule on the deployment. For example, --cron \"*/1 * * * *\" to create flow runs from that deployment every minute. --help Display help for available commands and options. --infra-block TEXT, -ib The infrastructure block to use, in block-type/block-name format. --infra, -i The infrastructure type to use. (Default is Process) --interval INTEGER An integer specifying an interval (in seconds) that will be used to set an IntervalSchedule on the deployment. For example, --interval 60 to create flow runs from that deployment every minute. --name TEXT, -n The name of the deployment. --output TEXT, -o Optional location for the YAML manifest generated as a result of the build step. You can version-control that file, but it's not required since the CLI can generate everything you need to define a deployment. --override TEXT One or more optional infrastructure overrides provided as a dot delimited path. For example, specify an environment variable: env.env_key=env_value. For Kubernetes, specify customizations: customizations='[{\"op\": \"add\",\"path\": \"/spec/template/spec/containers/0/resources/limits\", \"value\": {\"memory\": \"8Gi\",\"cpu\": \"4000m\"}}]' (note the string format). --param An optional parameter override, values are parsed as JSON strings. For example, --param question=ultimate --param answer=42. --params An optional parameter override in a JSON string format. For example, --params=\\'{\"question\": \"ultimate\", \"answer\": 42}\\'. --path An optional path to specify a subdirectory of remote storage to upload to, or to point to a subdirectory of a locally stored flow. --pool TEXT, -p The work pool that will handle this deployment's runs. \u2502 --rrule TEXT An RRule that will be used to set an RRuleSchedule on the deployment. For example, --rrule 'FREQ=HOURLY;BYDAY=MO,TU,WE,TH,FR;BYHOUR=9,10,11,12,13,14,15,16,17' to create flow runs from that deployment every hour but only during business hours. --skip-upload When provided, skips uploading this deployment's files to remote storage. --storage-block TEXT, -sb The storage block to use, in block-type/block-name or block-type/block-name/path format. Note that the appropriate library supporting the storage filesystem must be installed. --tag TEXT, -t One or more optional tags to apply to the deployment. --version TEXT, -v An optional version for the deployment. This could be a git commit hash if you use this command from a CI/CD pipeline. --work-queue TEXT, -q The work queue that will handle this deployment's runs. It will be created if it doesn't already exist. Defaults to None. Note that if a work queue is not set, work will not be scheduled.","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#block-identifiers","title":"Block identifiers","text":"

When specifying a storage block with the -sb or --storage-block flag, you may specify the block by passing its slug. The storage block slug is formatted as block-type/block-name.

For example, s3/example-block is the slug for an S3 block named example-block.

In addition, when passing the storage block slug, you may pass just the block slug or the block slug and a path.

When specifying an infrastructure block with the -ib or --infra-block flag, you specify the block by passing its slug. The infrastructure block slug is formatted as block-type/block-name.
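
For example, the following build command is a sketch using hypothetical block names: it stores the flow code under the flows path of an S3 block named example-block and runs the deployment with a Process infrastructure block named dev-process.

prefect deployment build ./catfact.py:catfacts_flow -n catfact -sb s3/example-block/flows -ib process/dev-process\n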

Block name Block class name Block type for a slug Azure Azure azure Docker Container DockerContainer docker-container GitHub GitHub github GCS GCS gcs Kubernetes Job KubernetesJob kubernetes-job Process Process process Remote File System RemoteFileSystem remote-file-system S3 S3 s3 SMB SMB smb GitLab Repository GitLabRepository gitlab-repository

Note that the appropriate library supporting the storage filesystem must be installed prior to building a deployment with a storage block. For example, the AWS S3 Storage block requires the s3fs library. See Storage for more information.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#deploymentyaml","title":"deployment.yaml","text":"

A deployment's YAML file configures additional settings needed to create a deployment on the server.

A single flow may have multiple deployments created for it, with different schedules, tags, and so on. As a result, a single flow definition may have multiple deployment YAML files referencing it, each specifying different settings. The only requirement is that each deployment must have a unique name.

The default {flow-name}-deployment.yaml filename may be edited as needed with the --output flag to prefect deployment build.
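
For example, a sketch (reusing the Cat Facts flow shown below) that writes the manifest to a custom filename instead of the default:

prefect deployment build ./catfact.py:catfacts_flow -n catfact --output catfact-deployment.yaml\n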

###\n### A complete description of a Prefect Deployment for flow 'Cat Facts'\n###\nname: catfact\ndescription: null\nversion: c0fc95308d8137c50d2da51af138aa23\n# The work queue that will handle this deployment's runs\nwork_queue_name: test\nwork_pool_name: null\ntags: []\nparameters: {}\nschedule: null\ninfra_overrides: {}\ninfrastructure:\n  type: process\n  env: {}\n  labels: {}\n  name: null\n  command:\n  - python\n  - -m\n  - prefect.engine\n  stream_output: true\n###\n### DO NOT EDIT BELOW THIS LINE\n###\nflow_name: Cat Facts\nmanifest_path: null\nstorage: null\npath: /Users/terry/test/testflows/catfact\nentrypoint: catfact.py:catfacts_flow\nparameter_openapi_schema:\n  title: Parameters\n  type: object\n  properties:\n    url:\n      title: url\n  required:\n  - url\n  definitions: null\n

Editing deployment.yaml

Note the big DO NOT EDIT comment in your deployment's YAML: In practice, anything above this block can be freely edited before running prefect deployment apply to create the deployment on the API.

We recommend editing most of these fields from the CLI or Prefect UI for convenience.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#parameters-in-deployments","title":"Parameters in deployments","text":"

You may provide default parameter values in the deployment.yaml configuration, and these parameter values will be used for flow runs based on the deployment.

To configure default parameter values, add them to the parameters: {} line of deployment.yaml as JSON key-value pairs. The parameter list configured in deployment.yaml must match the parameters expected by the entrypoint flow function.

parameters: {\"name\": \"Marvin\", \"num\": 42, \"url\": \"https://catfact.ninja/fact\"}\n

Passing **kwargs as flow parameters

You may pass **kwargs as a deployment parameter as a \"kwargs\":{} JSON object containing the key-value pairs of any passed keyword arguments.

parameters: {\"name\": \"Marvin\", \"kwargs\":{\"cattype\":\"tabby\",\"num\": 42}\n

You can edit default parameters for deployments in the Prefect UI, and you can override default parameter values when creating ad-hoc flow runs via the Prefect UI.

To edit parameters in the Prefect UI, go to the details page for a deployment, then select Edit from the commands menu. If you change parameter values, the new values are used for all future flow runs based on the deployment.

To create an ad-hoc flow run with different parameter values, go to the details page for a deployment, select Run, then select Custom. You will be able to provide custom values for any editable deployment fields. Under Parameters, select Custom. Provide the new values, then select Save. Select Run to begin the flow run with custom values.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#create-a-deployment","title":"Create a deployment","text":"

When you've configured deployment.yaml for a deployment, you can create the deployment on the API by running the prefect deployment apply Prefect CLI command.

$ prefect deployment apply catfacts_flow-deployment.yaml\n

For example:

$ prefect deployment apply ./catfacts_flow-deployment.yaml\nSuccessfully loaded 'catfact'\nDeployment '76a9f1ac-4d8c-4a92-8869-615bec502685' successfully created.\n

prefect deployment apply accepts an optional --upload flag that, when provided, uploads this deployment's files to remote storage.
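
For example, to create the deployment and upload its files to remote storage in one step:

prefect deployment apply ./catfacts_flow-deployment.yaml --upload\n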

Once the deployment has been created, you'll see it in the Prefect UI and can inspect it using the CLI.

$ prefect deployment ls\n                               Deployments\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name                           \u2503 ID                                   \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 Cat Facts/catfact              \u2502 76a9f1ac-4d8c-4a92-8869-615bec502685 \u2502\n\u2502 leonardo_dicapriflow/hello_leo \u2502 fb4681d7-aa5a-4617-bf6f-f67e6f964984 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n

When you run a deployed flow with Prefect, the following happens:

Agents and work pools enable the Prefect orchestration engine and API to run deployments in your local execution environments. To execute deployed flow runs you need to configure at least one agent.

Scheduled flow runs

Scheduled flow runs will not be created unless the scheduler is running with either Prefect Cloud or a local Prefect server started with prefect server start.

Scheduled flow runs will not run unless an appropriate agent and work pool are configured.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#create-a-deployment-from-a-python-object","title":"Create a deployment from a Python object","text":"

You can also create deployments from Python scripts by using the prefect.deployments.Deployment class.

Create a new deployment using configuration defaults for an imported flow:

from my_project.flows import my_flow\nfrom prefect.deployments import Deployment\n\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"example-deployment\", \n    version=1, \n    work_queue_name=\"demo\",\n    work_pool_name=\"default-agent-pool\",\n)\ndeployment.apply()\n

Create a new deployment with a pre-defined storage block and an infrastructure override:

from my_project.flows import my_flow\nfrom prefect.deployments import Deployment\nfrom prefect.filesystems import S3\n\nstorage = S3.load(\"dev-bucket\") # load a pre-defined block\n\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"s3-example\",\n    version=2,\n    work_queue_name=\"aws\",\n    work_pool_name=\"default-agent-pool\",\n    storage=storage,\n    infra_overrides={\n        \"env\": {\n            \"ENV_VAR\": \"value\"\n        }\n    },\n)\n\ndeployment.apply()\n

If you have settings that you want to share from an existing deployment you can load those settings:

deployment = Deployment(\n    name=\"a-name-you-used\", \n    flow_name=\"name-of-flow\"\n)\ndeployment.load() # loads server-side settings\n

Once the existing deployment settings are loaded, you may update them as needed by changing deployment properties.
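
For example, a minimal sketch that updates a couple of properties on the loaded deployment and re-applies it:

deployment.version = 2\ndeployment.tags = [\"dev\"]\ndeployment.apply()\n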

View all of the parameters for the Deployment object in the Python API documentation.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#deployment-api-representation","title":"Deployment API representation","text":"

When you create a deployment, it is constructed from deployment definition data you provide and additional properties set by client-side utilities.

Deployment properties include:

Property Description id An auto-generated UUID ID value identifying the deployment. created A datetime timestamp indicating when the deployment was created. updated A datetime timestamp indicating when the deployment was last changed. name The name of the deployment. version The version of the deployment description A description of the deployment. flow_id The id of the flow associated with the deployment. schedule An optional schedule for the deployment. is_schedule_active Boolean indicating whether the deployment schedule is active. Default is True. infra_overrides One or more optional infrastructure overrides parameters An optional dictionary of parameters for flow runs scheduled by the deployment. tags An optional list of tags for the deployment. work_queue_name The optional work queue that will handle the deployment's run parameter_openapi_schema JSON schema for flow parameters. path The path to the deployment.yaml file entrypoint The path to a flow entry point storage_document_id Storage block configured for the deployment. infrastructure_document_id Infrastructure block configured for the deployment.

You can inspect a deployment using the CLI with the prefect deployment inspect command, referencing the deployment with <flow_name>/<deployment_name>.

$ prefect deployment inspect 'Cat Facts/catfact'\n{\n'id': '76a9f1ac-4d8c-4a92-8869-615bec502685',\n    'created': '2022-07-26T03:48:14.723328+00:00',\n    'updated': '2022-07-26T03:50:02.043238+00:00',\n    'name': 'catfact',\n    'version': '899b136ebc356d58562f48d8ddce7c19',\n    'description': None,\n    'flow_id': '2c7b36d1-0bdb-462e-bb97-f6eb9fef6fd5',\n    'schedule': None,\n    'is_schedule_active': True,\n    'infra_overrides': {},\n    'parameters': {},\n    'tags': [],\n    'work_queue_name': 'test',\n    'parameter_openapi_schema': {\n'title': 'Parameters',\n        'type': 'object',\n        'properties': {'url': {'title': 'url'}},\n        'required': ['url']\n},\n    'path': '/Users/terry/test/testflows/catfact',\n    'entrypoint': 'catfact.py:catfacts_flow',\n    'manifest_path': None,\n    'storage_document_id': None,\n    'infrastructure_document_id': 'f958db1c-b143-4709-846c-321125247e07',\n    'infrastructure': {\n'type': 'process',\n        'env': {},\n        'labels': {},\n        'name': None,\n        'command': ['python', '-m', 'prefect.engine'],\n        'stream_output': True\n    }\n}\n
","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#create-a-flow-run-from-a-deployment","title":"Create a flow run from a deployment","text":"","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#create-a-flow-run-with-a-schedule","title":"Create a flow run with a schedule","text":"

If you specify a schedule for a deployment, the deployment will execute its flow automatically on that schedule as long as a Prefect server and agent are running. Prefect Cloud creates scheduled flow runs automatically, and they will run on schedule if an agent is configured to pick up flow runs for the deployment.
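
For example, a sketch that builds and applies a deployment with a cron schedule so that a flow run is created every morning at 9 AM:

prefect deployment build ./catfact.py:catfacts_flow -n catfact --cron \"0 9 * * *\" -a\n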

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#create-a-flow-run-with-an-event-trigger","title":"Create a flow run with an event trigger","text":"

Deployment triggers are only available in Prefect Cloud

Deployments can optionally take a trigger specification, which will configure an automation to run the deployment based on the presence or absence of events, and optionally pass event data into the deployment run as parameters via jinja templating.

triggers:\n  - enabled: true\n    match:\n      prefect.resource.id: prefect.flow-run.*\n    expect:\n      - prefect.flow-run.Completed\n    match_related:\n      prefect.resource.name: prefect.flow.etl-flow\n      prefect.resource.role: flow\n    parameters:\n      param_1: \"{{ event }}\"\n

When applied, this deployment will start a flow run upon the completion of the upstream flow specified in the match_related key, with the flow run passed in as a parameter. Triggers can be configured to respond to the presence or absence of arbitrary internal or external events. The trigger system and API are detailed in Automations.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#create-a-flow-run-with-prefect-ui","title":"Create a flow run with Prefect UI","text":"

In the Prefect UI, you can click the Run button next to any deployment to execute an ad hoc flow run for that deployment.

The prefect deployment CLI command provides commands for managing and running deployments locally.
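
For example, to start an ad hoc flow run from the CLI using the deployment created earlier on this page:

prefect deployment run 'Cat Facts/catfact'\n

The full set of subcommands is shown in the table below.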

Command Description apply Create or update a deployment from a YAML file. build Generate a deployment YAML from /path/to/file.py:flow_function. delete Delete a deployment. inspect View details about a deployment. ls View all deployments or deployments for specific flows. pause-schedule Pause schedule of a given deployment. resume-schedule Resume schedule of a given deployment. run Create a flow run for the given flow and deployment. set-schedule Set schedule for a given deployment.","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#create-a-flow-run-in-a-python-script","title":"Create a flow run in a Python script","text":"

You can create a flow run from a deployment in a Python script with the run_deployment function.

from prefect.deployments import run_deployment\n\n\ndef main():\n    response = run_deployment(name=\"flow-name/deployment-name\")\n    print(response)\n\n\nif __name__ == \"__main__\":\n   main()\n

PREFECT_API_URL setting for agents

You'll need to configure agents and work pools that can create flow runs for deployments in remote environments. PREFECT_API_URL must be set for the environment in which your agent is running.

If you want the agent to communicate with Prefect Cloud from a remote execution environment such as a VM or Docker container, you must configure PREFECT_API_URL in that environment.
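
For example, a sketch of pointing a remote environment at Prefect Cloud; the account and workspace IDs are placeholders you replace with your own values:

prefect config set PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n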

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#examples","title":"Examples","text":"","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/filesystems/","title":"Filesystems","text":"

A filesystem block is an object that allows you to read and write data from paths. Prefect provides multiple built-in file system types that cover a wide range of use cases.

Additional file system types are available in Prefect Collections.

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#local-filesystem","title":"Local filesystem","text":"

The LocalFileSystem block enables interaction with the files in your current development environment.

LocalFileSystem properties include:

Property Description basepath String path to the location of files on the local filesystem. Access to files outside of the base path will not be allowed.
from prefect.filesystems import LocalFileSystem\n\nfs = LocalFileSystem(basepath=\"/foo/bar\")\n

Limited access to local file system

Be aware that LocalFileSystem access is limited to the exact path provided. This file system may not be ideal for some use cases. The execution environment for your workflows may not have the same file system as the environment you are writing and deploying your code on.

Use of this file system can limit the availability of results after a flow run has completed or prevent the code for a flow from being retrieved successfully at the start of a run.

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#remote-file-system","title":"Remote file system","text":"

The RemoteFileSystem block enables interaction with arbitrary remote file systems. Under the hood, RemoteFileSystem uses fsspec and supports any file system that fsspec supports.

RemoteFileSystem properties include:

Property Description basepath String path to the location of files on the remote filesystem. Access to files outside of the base path will not be allowed. settings Dictionary containing extra parameters required to access the remote file system.

The file system is specified using a protocol:

For example, to use it with Amazon S3:

from prefect.filesystems import RemoteFileSystem\n\nblock = RemoteFileSystem(basepath=\"s3://my-bucket/folder/\")\nblock.save(\"dev\")\n

You may need to install additional libraries to use some remote storage types.
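
For example, the S3 protocol shown above relies on the s3fs library, which you can install alongside Prefect:

pip install s3fs\n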

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#remotefilesystem-examples","title":"RemoteFileSystem examples","text":"

How can we use RemoteFileSystem to store our flow code? The following is a use case where we use MinIO as a storage backend:

from prefect.filesystems import RemoteFileSystem\n\nminio_block = RemoteFileSystem(\n    basepath=\"s3://my-bucket\",\n    settings={\n        \"key\": \"MINIO_ROOT_USER\",\n        \"secret\": \"MINIO_ROOT_PASSWORD\",\n        \"client_kwargs\": {\"endpoint_url\": \"http://localhost:9000\"},\n    },\n)\nminio_block.save(\"minio\")\n
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#azure","title":"Azure","text":"

The Azure file system block enables interaction with Azure Datalake and Azure Blob Storage. Under the hood, the Azure block uses adlfs.

Azure properties include:

Property Description bucket_path String path to the location of files on the remote filesystem. Access to files outside of the bucket path will not be allowed. azure_storage_connection_string Azure storage connection string. azure_storage_account_name Azure storage account name. azure_storage_account_key Azure storage account key. azure_storage_tenant_id Azure storage tenant ID. azure_storage_client_id Azure storage client ID. azure_storage_client_secret Azure storage client secret. azure_storage_anon Anonymous authentication, disable to use DefaultAzureCredential.

To create a block:

from prefect.filesystems import Azure\n\nblock = Azure(bucket_path=\"my-bucket/folder/\")\nblock.save(\"dev\")\n

To use it in a deployment:

prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb az/dev\n

You need to install adlfs to use it.

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#github","title":"GitHub","text":"

The GitHub filesystem block enables interaction with GitHub repositories. This block is read-only and works with both public and private repositories.

GitHub properties include:

Property Description reference An optional reference to pin to, such as a branch name or tag. repository The URL of a GitHub repository to read from, in either HTTPS or SSH format. access_token A GitHub Personal Access Token (PAT) with repo scope.

To create a block:

from prefect.filesystems import GitHub\n\nblock = GitHub(\n    repository=\"https://github.com/my-repo/\",\n    access_token=<my_access_token> # only required for private repos\n)\nblock.get_directory(\"folder-in-repo\") # specify a subfolder of repo\nblock.save(\"dev\")\n

To use it in a deployment:

prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb github/dev -a\n
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#gitlabrepository","title":"GitLabRepository","text":"

The GitLabRepository block is read-only and works with private GitLab repositories.

GitLabRepository properties include:

Property Description reference An optional reference to pin to, such as a branch name or tag. repository The URL of a GitLab repository to read from, in either HTTPS or SSH format. credentials A GitLabCredentials block with Personal Access Token (PAT) with read_repository scope.

To create a block:

from prefect_gitlab.credentials import GitLabCredentials\nfrom prefect_gitlab.repositories import GitLabRepository\n\ngitlab_creds = GitLabCredentials(token=\"YOUR_GITLAB_ACCESS_TOKEN\")\ngitlab_repo = GitLabRepository(\n    repository=\"https://gitlab.com/yourorg/yourrepo.git\",\n    reference=\"main\",\n    credentials=gitlab_creds,\n)\ngitlab_repo.save(\"dev\")\n

To use it in a deployment (and apply):

prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb gitlab-repository/dev -a\n

Note that to use this block, you need to install the prefect-gitlab collection.

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#gcs","title":"GCS","text":"

The GCS file system block enables interaction with Google Cloud Storage. Under the hood, GCS uses gcsfs.

GCS properties include:

Property Description bucket_path A GCS bucket path service_account_info The contents of a service account keyfile as a JSON string. project The project the GCS bucket resides in. If not provided, the project will be inferred from the credentials or environment.

To create a block:

from prefect.filesystems import GCS\n\nblock = GCS(bucket_path=\"my-bucket/folder/\")\nblock.save(\"dev\")\n

To use it in a deployment:

prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb gcs/dev\n

You need to install gcsfs to use it.

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#s3","title":"S3","text":"

The S3 file system block enables interaction with Amazon S3. Under the hood, S3 uses s3fs.

S3 properties include:

Property Description bucket_path An S3 bucket path aws_access_key_id AWS Access Key ID aws_secret_access_key AWS Secret Access Key

To create a block:

from prefect.filesystems import S3\n\nblock = S3(bucket_path=\"my-bucket/folder/\")\nblock.save(\"dev\")\n

To use it in a deployment:

prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb s3/dev\n

You need to install s3fs to use this block.

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#smb","title":"SMB","text":"

The SMB file system block enables interaction with SMB shared network storage. Under the hood, SMB uses smbprotocol. It is typically used to connect to Windows-based SMB shares from Linux-based Prefect flows. The SMB file system block can copy files, but it cannot create directories.

SMB properties include:

Property Description basepath String path to the location of files on the remote filesystem. Access to files outside of the base path will not be allowed. smb_host Hostname or IP address where SMB network share is located. smb_port Port for SMB network share (defaults to 445). smb_username SMB username with read/write permissions. smb_password SMB password.

To create a block:

from prefect.filesystems import SMB\n\nblock = SMB(basepath=\"my-share/folder/\")\nblock.save(\"dev\")\n

To use it in a deployment:

prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb smb/dev\n

You need to install smbprotocol to use it.

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#handling-credentials-for-cloud-object-storage-services","title":"Handling credentials for cloud object storage services","text":"

If you leverage S3, GCS, or Azure storage blocks, and you don't explicitly configure credentials on the respective storage block, those credentials will be inferred from the environment. Make sure to set those either explicitly on the block or as environment variables, configuration files, or IAM roles within both the build and runtime environment for your deployments.
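
For example, a minimal sketch of setting credentials explicitly on an S3 block rather than relying on environment inference; the key values below are placeholders:

from prefect.filesystems import S3\n\nblock = S3(\n    bucket_path=\"my-bucket/folder/\",\n    aws_access_key_id=\"YOUR_AWS_ACCESS_KEY_ID\",\n    aws_secret_access_key=\"YOUR_AWS_SECRET_ACCESS_KEY\",\n)\nblock.save(\"prod\")\n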

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#filesystem-package-dependencies","title":"Filesystem package dependencies","text":"

A Prefect installation doesn't include filesystem-specific package dependencies such as s3fs, gcsfs, or adlfs. This includes the Prefect base Docker images.

You must ensure that filesystem-specific libraries are installed in an execution environment where they will be used by flow runs.

In Dockerized deployments using the Prefect base image, you can leverage the EXTRA_PIP_PACKAGES environment variable. Those dependencies will be installed at runtime within your Docker container or Kubernetes Job before the flow starts running.

In Dockerized deployments using a custom image, you must include the filesystem-specific package dependency in your image.

Here is an example from a deployment YAML file showing how to specify the installation of s3fs into your image:

infrastructure:\n  type: docker-container\n  env:\n    EXTRA_PIP_PACKAGES: s3fs  # could be gcsfs, adlfs, etc.\n

You may specify multiple dependencies by providing a comma-delimited list.
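
For example, a sketch installing both s3fs and an additional example dependency (pandas) at container start-up:

infrastructure:\n  type: docker-container\n  env:\n    EXTRA_PIP_PACKAGES: s3fs,pandas\n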

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#saving-and-loading-file-systems","title":"Saving and loading file systems","text":"

Configuration for a file system can be saved to the Prefect API. For example:

fs = RemoteFileSystem(basepath=\"s3://my-bucket/folder/\")\nfs.write_path(\"foo\", b\"hello\")\nfs.save(\"dev-s3\")\n

This file system can be retrieved for later use with load.

fs = RemoteFileSystem.load(\"dev-s3\")\nfs.read_path(\"foo\")  # b'hello'\n
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#readable-and-writable-file-systems","title":"Readable and writable file systems","text":"

Prefect provides two abstract file system types, ReadableFileSystem and WriteableFileSystem.

A file system may implement both of these types.

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/flows/","title":"Flows","text":"

Flows are the most basic Prefect object. Flows are the only Prefect abstraction that can be interacted with, displayed, and run without needing to reference any other aspect of the Prefect engine. A flow is a container for workflow logic and allows users to interact with and reason about the state of their workflows. It is represented in Python as a single function.

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#flows-overview","title":"Flows overview","text":"

Flows are like functions. They can take inputs, perform work, and return an output. In fact, you can turn any function into a Prefect flow by adding the @flow decorator. When a function becomes a flow, its behavior changes, giving it the following advantages:

Flows also take advantage of automatic Prefect logging to capture details about flow runs such as run time, task tags, and final state.

All workflows are defined within the context of a flow. Flows can include calls to tasks as well as to other flows, which we call \"subflows\" in this context. Flows may be defined within modules and imported for use as subflows in your flow definitions.

Flows are required for deployments \u2014 every deployment points to a specific flow as the entrypoint for a flow run.

Tasks must be called from flows

All tasks must be called from within a flow. Tasks may not be called from other tasks.

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#flow-runs","title":"Flow runs","text":"

A flow run represents a single execution of the flow.

You can create a flow run by calling the flow yourself, for example, by running a Python script or by importing the flow into an interactive session and calling it.

You can also create a flow run by:

However you run the flow, the Prefect API monitors the flow run, capturing flow run state for observability.

When you run a flow that contains tasks or additional flows, Prefect will track the relationship of each child run to the parent flow run.

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#writing-flows","title":"Writing flows","text":"

For most use cases, we recommend using the @flow decorator to designate a flow:

from prefect import flow\n\n@flow\ndef my_flow():\n    return\n

Flows are uniquely identified by name. You can provide a name parameter value for the flow. If you don't provide a name, Prefect uses the flow function name.

@flow(name=\"My Flow\")\ndef my_flow():\n    return\n

Flows can call tasks to do specific work:

from prefect import flow, task\n\n@task\ndef print_hello(name):\n    print(f\"Hello {name}!\")\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    print_hello(name)\n

Flows and tasks

There's nothing stopping you from putting all of your code in a single flow function \u2014 Prefect will happily run it!

However, organizing your workflow code into smaller flow and task units lets you take advantage of Prefect features like retries, more granular visibility into runtime state, the ability to determine final state regardless of individual task state, and more.

In addition, if you put all of your workflow logic in a single flow function and any line of code fails, the entire flow will fail and must be retried from the beginning. This can be avoided by breaking up the code into multiple tasks.

Each Prefect workflow must contain one primary, entrypoint @flow function. From that flow function, you may call any number of other tasks, subflows, and even regular Python functions. You can pass parameters to your entrypoint flow function that will be used elsewhere in the workflow, and the final state of that entrypoint flow function determines the final state of your workflow.

Prefect encourages \"small tasks\" \u2014 each one should represent a single logical step of your workflow. This allows Prefect to better contain task failures.

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#flow-settings","title":"Flow settings","text":"

Flows allow a great deal of configuration by passing arguments to the decorator. Flows accept the following optional settings.

Argument Description description An optional string description for the flow. If not provided, the description will be pulled from the docstring for the decorated function. name An optional name for the flow. If not provided, the name will be inferred from the function. retries An optional number of times to retry on flow run failure. retry_delay_seconds An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero. flow_run_name An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables; this name can also be provided as a function that returns a string. task_runner An optional task runner to use for task execution within the flow when you .submit() tasks. If not provided and you .submit() tasks, the ConcurrentTaskRunner will be used. timeout_seconds An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called. validate_parameters Boolean indicating whether parameters passed to flows are validated by Pydantic. Default is True. version An optional version string for the flow. If not provided, we will attempt to create a version string as a hash of the file containing the wrapped function. If the file cannot be located, the version will be null.

For example, you can provide a name value for the flow. Here we've also used the optional description argument and specified a non-default task runner.

from prefect import flow\nfrom prefect.task_runners import SequentialTaskRunner\n\n@flow(name=\"My Flow\",\n      description=\"My flow using SequentialTaskRunner\",\n      task_runner=SequentialTaskRunner())\ndef my_flow():\n    return\n

You can also provide the description as the docstring on the flow function.

@flow(name=\"My Flow\",\n      task_runner=SequentialTaskRunner())\ndef my_flow():\n\"\"\"My flow using SequentialTaskRunner\"\"\"\n    return\n

You can distinguish runs of this flow by providing a flow_run_name; this setting accepts a string that can optionally contain templated references to the parameters of your flow. The name will be formatted using Python's standard string formatting syntax as can be seen here:

import datetime\nfrom prefect import flow\n\n@flow(flow_run_name=\"{name}-on-{date:%A}\")\ndef my_flow(name: str, date: datetime.datetime):\n    pass\n\n# creates a flow run called 'marvin-on-Thursday'\nmy_flow(name=\"marvin\", date=datetime.datetime.utcnow())\n

Additionally this setting also accepts a function that returns a string for the flow run name:

import datetime\nfrom prefect import flow\n\ndef generate_flow_run_name():\n    date = datetime.datetime.utcnow()\n\n    return f\"{date:%A}-is-a-nice-day\"\n\n@flow(flow_run_name=generate_flow_run_name)\ndef my_flow(name: str):\n    pass\n\n# creates a flow run called 'Thursday-is-a-nice-day'\nmy_flow(name=\"marvin\")\n

If you need access to information about the flow, use the prefect.runtime module. For example:

from prefect import flow\nfrom prefect.runtime import flow_run\n\ndef generate_flow_run_name():\n    flow_name = flow_run.flow_name\n\n    parameters = flow_run.parameters\n    name = parameters[\"name\"]\n    limit = parameters[\"limit\"]\n\n    return f\"{flow_name}-with-{name}-and-{limit}\"\n\n@flow(flow_run_name=generate_flow_run_name)\ndef my_flow(name: str, limit: int = 100):\n    pass\n\n# creates a flow run called 'my-flow-with-marvin-and-100'\nmy_flow(name=\"marvin\")\n

Note that validate_parameters will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type. For example, if a parameter is defined as x: int and \"5\" is passed, it will be resolved to 5. If set to False, no validation will be performed on flow parameters.
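
For example, a minimal sketch contrasting the two behaviors: with validation enabled (the default) the string \"5\" is coerced to the integer 5, while with validate_parameters=False the value is passed through unchanged.

from prefect import flow\n\n@flow\ndef validated(x: int):\n    print(type(x))  # <class 'int'> - \"5\" is coerced to 5\n\n@flow(validate_parameters=False)\ndef unvalidated(x: int):\n    print(type(x))  # <class 'str'> - no coercion is performed\n\nvalidated(x=\"5\")\nunvalidated(x=\"5\")\n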

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#separating-logic-into-tasks","title":"Separating logic into tasks","text":"

The simplest workflow is just a @flow function that does all the work of the workflow.

from prefect import flow\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    print(f\"Hello {name}!\")\n\nhello_world(\"Marvin\")\n

When you run this flow, you'll see the following output:

$ python hello.py\n15:11:23.594 | INFO    | prefect.engine - Created flow run 'benevolent-donkey' for flow 'hello-world'\n15:11:23.594 | INFO    | Flow run 'benevolent-donkey' - Using task runner 'ConcurrentTaskRunner'\nHello Marvin!\n15:11:24.447 | INFO    | Flow run 'benevolent-donkey' - Finished in state Completed()\n

A better practice is to create @task functions that do the specific work of your flow, and use your @flow function as the conductor that orchestrates the flow of your application:

from prefect import flow, task\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n    msg = f\"Hello {name}!\"\n    print(msg)\n    return msg\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    message = print_hello(name)\n\nhello_world(\"Marvin\")\n

When you run this flow, you'll see the following output, which illustrates how the work is encapsulated in a task run.

$ python hello.py\n15:15:58.673 | INFO    | prefect.engine - Created flow run 'loose-wolverine' for flow 'Hello Flow'\n15:15:58.674 | INFO    | Flow run 'loose-wolverine' - Using task runner 'ConcurrentTaskRunner'\n15:15:58.973 | INFO    | Flow run 'loose-wolverine' - Created task run 'Print Hello-84f0fe0e-0' for task 'Print Hello'\nHello Marvin!\n15:15:59.037 | INFO    | Task run 'Print Hello-84f0fe0e-0' - Finished in state Completed()\n15:15:59.568 | INFO    | Flow run 'loose-wolverine' - Finished in state Completed('All states completed.')\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#composing-flows","title":"Composing flows","text":"

A subflow run is created when a flow function is called inside the execution of another flow. The primary flow is the \"parent\" flow. The flow created within the parent is the \"child\" flow or \"subflow.\"

Subflow runs behave like normal flow runs. There is a full representation of the flow run in the backend as if it had been called separately. When a subflow starts, it will create a new task runner for tasks within the subflow. When the subflow completes, the task runner is shut down.

Subflows will block execution of the parent flow until completion. However, asynchronous subflows can be run in parallel by using AnyIO task groups or asyncio.gather.
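
For example, a minimal sketch of running two async subflows concurrently with asyncio.gather:

import asyncio\nfrom prefect import flow\n\n@flow\nasync def child_flow(n: int):\n    return n * 2\n\n@flow\nasync def parent_flow():\n    # both subflow runs are awaited concurrently\n    return await asyncio.gather(child_flow(1), child_flow(2))\n\nasyncio.run(parent_flow())\n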

Subflows differ from normal flows in that they will resolve any passed task futures into data. This allows data to be passed from the parent flow to the child easily.

The relationship between a child and parent flow is tracked by creating a special task run in the parent flow. This task run will mirror the state of the child flow run.

A task that represents a subflow will be annotated as such in its state_details via the presence of a child_flow_run_id field. A subflow can be identified via the presence of a parent_task_run_id on state_details.

You can define multiple flows within the same file. Whether running locally or via a deployment, you must indicate which flow is the entrypoint for a flow run.

Cancelling subflow runs

Inline subflow runs, i.e. those created without run_deployment, cannot be cancelled without cancelling their parent flow run. If you need to cancel a subflow run independently of its parent flow run, we recommend deploying it separately and starting it using the run_deployment method.
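
For example, a sketch of that recommendation using a hypothetical deployment name: the child flow is started through run_deployment, so its flow run can be cancelled without cancelling the parent.

from prefect import flow\nfrom prefect.deployments import run_deployment\n\n@flow\ndef parent_flow():\n    # creates a separate flow run from the child's deployment;\n    # that run can be cancelled independently of parent_flow\n    run_deployment(name=\"my-subflow/my-deployment\")\n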

from prefect import flow, task\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n    msg = f\"Hello {name}!\"\n    print(msg)\n    return msg\n\n@flow(name=\"Subflow\")\ndef my_subflow(msg):\n    print(f\"Subflow says: {msg}\")\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    message = print_hello(name)\n    my_subflow(message)\n\nhello_world(\"Marvin\")\n

You can also define flows or tasks in separate modules and import them for usage. For example, here's a simple subflow module:

from prefect import flow, task\n\n@flow(name=\"Subflow\")\ndef my_subflow(msg):\n    print(f\"Subflow says: {msg}\")\n

Here's a parent flow that imports and uses my_subflow() as a subflow:

from prefect import flow, task\nfrom subflow import my_subflow\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n    msg = f\"Hello {name}!\"\n    print(msg)\n    return msg\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    message = print_hello(name)\n    my_subflow(message)\n\nhello_world(\"Marvin\")\n

Running the hello_world() flow (in this example from the file hello.py) creates a flow run like this:

$ python hello.py\n15:19:21.651 | INFO    | prefect.engine - Created flow run 'daft-cougar' for flow 'Hello Flow'\n15:19:21.651 | INFO    | Flow run 'daft-cougar' - Using task runner 'ConcurrentTaskRunner'\n15:19:21.945 | INFO    | Flow run 'daft-cougar' - Created task run 'Print Hello-84f0fe0e-0' for task 'Print Hello'\nHello Marvin!\n15:19:22.055 | INFO    | Task run 'Print Hello-84f0fe0e-0' - Finished in state Completed()\n15:19:22.107 | INFO    | Flow run 'daft-cougar' - Created subflow run 'ninja-duck' for flow 'Subflow'\nSubflow says: Hello Marvin!\n15:19:22.794 | INFO    | Flow run 'ninja-duck' - Finished in state Completed()\n15:19:23.215 | INFO    | Flow run 'daft-cougar' - Finished in state Completed('All states completed.')\n

Subflows or tasks?

In Prefect you can call tasks or subflows to do work within your workflow, including passing results from other tasks to your subflow. So a common question we hear is:

\"When should I use a subflow instead of a task?\"

We recommend writing tasks that do a discrete, specific piece of work in your workflow: calling an API, performing a database operation, analyzing or transforming a data point. Prefect tasks are well suited to parallel or distributed execution using distributed computation frameworks such as Dask or Ray. For troubleshooting, the more granular you create your tasks, the easier it is to find and fix issues should a task fail.

Subflows enable you to group related tasks within your workflow. Here are some scenarios where you might choose to use a subflow rather than calling tasks individually:

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#parameters","title":"Parameters","text":"

Flows can be called with both positional and keyword arguments. These arguments are resolved at runtime into a dictionary of parameters mapping name to value. These parameters are stored by the Prefect orchestration engine on the flow run object.

Prefect API requires keyword arguments

When creating flow runs from the Prefect API, parameter names must be specified when overriding defaults \u2014 they cannot be positional.

Type hints provide an easy way to enforce typing on your flow parameters via pydantic. This means any pydantic model used as a type hint within a flow will be coerced automatically into the relevant object type:

from prefect import flow\nfrom pydantic import BaseModel\n\nclass Model(BaseModel):\n    a: int\n    b: float\n    c: str\n\n@flow\ndef model_validator(model: Model):\n    print(model)\n

Note that parameter values can be provided to a flow via API using a deployment. Flow run parameters sent to the API on flow calls are coerced to a serializable form. Type hints on your flow functions provide you a way of automatically coercing JSON provided values to their appropriate Python representation.

For example, to automatically convert something to a datetime:

from prefect import flow\nfrom datetime import datetime\n\n@flow\ndef what_day_is_it(date: datetime = None):\n    if date is None:\n        date = datetime.utcnow()\n    print(f\"It was {date.strftime('%A')} on {date.isoformat()}\")\n\nwhat_day_is_it(\"2021-01-01T02:00:19.180906\")\n# It was Friday on 2021-01-01T02:00:19.180906\n

Parameters are validated before a flow is run. If a flow call receives invalid parameters, a flow run is created in a Failed state. If a flow run for a deployment receives invalid parameters, it will move from a Pending state to a Failed without entering a Running state.

Flow run parameters cannot exceed 512kb in size

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#final-state-determination","title":"Final state determination","text":"

Prerequisite

Read the documentation about states before proceeding with this section.

The final state of the flow is determined by its return value. The following rules apply:

The following examples illustrate each of these cases:

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#raise-an-exception","title":"Raise an exception","text":"

If an exception is raised within the flow function, the flow is immediately marked as failed.

from prefect import flow\n\n@flow\ndef always_fails_flow():\n    raise ValueError(\"This flow immediately fails\")\n\nalways_fails_flow()\n

Running this flow produces the following result:

22:22:36.864 | INFO    | prefect.engine - Created flow run 'acrid-tuatara' for flow 'always-fails-flow'\n22:22:36.864 | INFO    | Flow run 'acrid-tuatara' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n22:22:37.060 | ERROR   | Flow run 'acrid-tuatara' - Encountered exception during execution:\nTraceback (most recent call last):...\nValueError: This flow immediately fails\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-none","title":"Return None","text":"

A flow with no return statement is determined by the state of all of its task runs.

from prefect import flow, task\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I fail successfully\")\n\n@task\ndef always_succeeds_task():\n    print(\"I'm fail safe!\")\n    return \"success\"\n\n@flow\ndef always_fails_flow():\n    always_fails_task.submit().result(raise_on_failure=False)\n    always_succeeds_task()\n\nif __name__ == \"__main__\":\n    always_fails_flow()\n

Running this flow produces the following result:

18:32:05.345 | INFO    | prefect.engine - Created flow run 'auburn-lionfish' for flow 'always-fails-flow'\n18:32:05.346 | INFO    | Flow run 'auburn-lionfish' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n18:32:05.582 | INFO    | Flow run 'auburn-lionfish' - Created task run 'always_fails_task-96e4be14-0' for task 'always_fails_task'\n18:32:05.582 | INFO    | Flow run 'auburn-lionfish' - Submitted task run 'always_fails_task-96e4be14-0' for execution.\n18:32:05.610 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Encountered exception during execution:\nTraceback (most recent call last):\n  ...\nValueError: I fail successfully\n18:32:05.638 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Finished in state Failed('Task run encountered an exception.')\n18:32:05.658 | INFO    | Flow run 'auburn-lionfish' - Created task run 'always_succeeds_task-9c27db32-0' for task 'always_succeeds_task'\n18:32:05.659 | INFO    | Flow run 'auburn-lionfish' - Executing 'always_succeeds_task-9c27db32-0' immediately...\nI'm fail safe!\n18:32:05.703 | INFO    | Task run 'always_succeeds_task-9c27db32-0' - Finished in state Completed()\n18:32:05.730 | ERROR   | Flow run 'auburn-lionfish' - Finished in state Failed('1/2 states failed.')\nTraceback (most recent call last):\n  ...\nValueError: I fail successfully\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-a-future","title":"Return a future","text":"

If a flow returns one or more futures, the final state is determined based on the underlying states.

from prefect import flow, task\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I fail successfully\")\n\n@task\ndef always_succeeds_task():\n    print(\"I'm fail safe!\")\n    return \"success\"\n\n@flow\ndef always_succeeds_flow():\n    x = always_fails_task.submit().result(raise_on_failure=False)\n    y = always_succeeds_task.submit(wait_for=[x])\n    return y\n\nif __name__ == \"__main__\":\n    always_succeeds_flow()\n

Running this flow produces the following result \u2014 it succeeds because it returns the future of the task that succeeds:

18:35:24.965 | INFO    | prefect.engine - Created flow run 'whispering-guan' for flow 'always-succeeds-flow'\n18:35:24.965 | INFO    | Flow run 'whispering-guan' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n18:35:25.204 | INFO    | Flow run 'whispering-guan' - Created task run 'always_fails_task-96e4be14-0' for task 'always_fails_task'\n18:35:25.205 | INFO    | Flow run 'whispering-guan' - Submitted task run 'always_fails_task-96e4be14-0' for execution.\n18:35:25.232 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Encountered exception during execution:\nTraceback (most recent call last):\n  ...\nValueError: I fail successfully\n18:35:25.265 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Finished in state Failed('Task run encountered an exception.')\n18:35:25.289 | INFO    | Flow run 'whispering-guan' - Created task run 'always_succeeds_task-9c27db32-0' for task 'always_succeeds_task'\n18:35:25.289 | INFO    | Flow run 'whispering-guan' - Submitted task run 'always_succeeds_task-9c27db32-0' for execution.\nI'm fail safe!\n18:35:25.335 | INFO    | Task run 'always_succeeds_task-9c27db32-0' - Finished in state Completed()\n18:35:25.362 | INFO    | Flow run 'whispering-guan' - Finished in state Completed('All states completed.')\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-multiple-states-or-futures","title":"Return multiple states or futures","text":"

If a flow returns a mix of futures and states, the final state is determined by resolving all futures to states, then determining if any of the states are not COMPLETED.

from prefect import task, flow\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I am bad task\")\n\n@task\ndef always_succeeds_task():\n    return \"foo\"\n\n@flow\ndef always_succeeds_flow():\n    return \"bar\"\n\n@flow\ndef always_fails_flow():\n    x = always_fails_task()\n    y = always_succeeds_task()\n    z = always_succeeds_flow()\n    return x, y, z\n

Running this flow produces the following result. It fails because one of the three returned futures failed. Note that the final state is Failed, but the states of each of the returned futures is included in the flow state:

20:57:51.547 | INFO    | prefect.engine - Created flow run 'impartial-gorilla' for flow 'always-fails-flow'\n20:57:51.548 | INFO    | Flow run 'impartial-gorilla' - Using task runner 'ConcurrentTaskRunner'\n20:57:51.645 | INFO    | Flow run 'impartial-gorilla' - Created task run 'always_fails_task-58ea43a6-0' for task 'always_fails_task'\n20:57:51.686 | INFO    | Flow run 'impartial-gorilla' - Created task run 'always_succeeds_task-c9014725-0' for task 'always_succeeds_task'\n20:57:51.727 | ERROR   | Task run 'always_fails_task-58ea43a6-0' - Encountered exception during execution:\nTraceback (most recent call last):...\nValueError: I am bad task\n20:57:51.787 | INFO    | Task run 'always_succeeds_task-c9014725-0' - Finished in state Completed()\n20:57:51.808 | INFO    | Flow run 'impartial-gorilla' - Created subflow run 'unbiased-firefly' for flow 'always-succeeds-flow'\n20:57:51.884 | ERROR   | Task run 'always_fails_task-58ea43a6-0' - Finished in state Failed('Task run encountered an exception.')\n20:57:52.438 | INFO    | Flow run 'unbiased-firefly' - Finished in state Completed()\n20:57:52.811 | ERROR   | Flow run 'impartial-gorilla' - Finished in state Failed('1/3 states failed.')\nFailed(message='1/3 states failed.', type=FAILED, result=(Failed(message='Task run encountered an exception.', type=FAILED, result=ValueError('I am bad task'), task_run_id=5fd4c697-7c4c-440d-8ebc-dd9c5bbf2245), Completed(message=None, type=COMPLETED, result='foo', task_run_id=df9b6256-f8ac-457c-ba69-0638ac9b9367), Completed(message=None, type=COMPLETED, result='bar', task_run_id=cfdbf4f1-dccd-4816-8d0f-128750017d0c)), flow_run_id=6d2ec094-001a-4cb0-a24e-d2051db6318d)\n

Returning multiple states

When returning multiple states, they must be contained in a set, list, or tuple. If other collection types are used, the result of the contained states will not be checked.

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-a-manual-state","title":"Return a manual state","text":"

If a flow returns a manually created state, the final state is determined based on the return value.

from prefect import task, flow\nfrom prefect.server.schemas.states import Completed, Failed\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I fail successfully\")\n\n@task\ndef always_succeeds_task():\n    print(\"I'm fail safe!\")\n    return \"success\"\n\n@flow\ndef always_succeeds_flow():\n    x = always_fails_task.submit()\n    y = always_succeeds_task.submit()\n    if y.result() == \"success\":\n        return Completed(message=\"I am happy with this result\")\n    else:\n        return Failed(message=\"How did this happen!?\")\n\nif __name__ == \"__main__\":\n    always_succeeds_flow()\n

Running this flow produces the following result.

18:37:42.844 | INFO    | prefect.engine - Created flow run 'lavender-elk' for flow 'always-succeeds-flow'\n18:37:42.845 | INFO    | Flow run 'lavender-elk' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n18:37:43.125 | INFO    | Flow run 'lavender-elk' - Created task run 'always_fails_task-96e4be14-0' for task 'always_fails_task'\n18:37:43.126 | INFO    | Flow run 'lavender-elk' - Submitted task run 'always_fails_task-96e4be14-0' for execution.\n18:37:43.162 | INFO    | Flow run 'lavender-elk' - Created task run 'always_succeeds_task-9c27db32-0' for task 'always_succeeds_task'\n18:37:43.163 | INFO    | Flow run 'lavender-elk' - Submitted task run 'always_succeeds_task-9c27db32-0' for execution.\n18:37:43.175 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Encountered exception during execution:\nTraceback (most recent call last):\n  ...\nValueError: I fail successfully\nI'm fail safe!\n18:37:43.217 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Finished in state Failed('Task run encountered an exception.')\n18:37:43.236 | INFO    | Task run 'always_succeeds_task-9c27db32-0' - Finished in state Completed()\n18:37:43.264 | INFO    | Flow run 'lavender-elk' - Finished in state Completed('I am happy with this result')\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-an-object","title":"Return an object","text":"

If the flow run returns any other object, then it is marked as completed.

from prefect import task, flow\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I fail successfully\")\n\n@flow\ndef always_succeeds_flow():\n    always_fails_task.submit()\n    return \"foo\"\n\nif __name__ == \"__main__\":\n    always_succeeds_flow()\n

Running this flow produces the following result.

21:02:45.715 | INFO    | prefect.engine - Created flow run 'sparkling-pony' for flow 'always-succeeds-flow'\n21:02:45.715 | INFO    | Flow run 'sparkling-pony' - Using task runner 'ConcurrentTaskRunner'\n21:02:45.816 | INFO    | Flow run 'sparkling-pony' - Created task run 'always_fails_task-58ea43a6-0' for task 'always_fails_task'\n21:02:45.853 | ERROR   | Task run 'always_fails_task-58ea43a6-0' - Encountered exception during execution:\nTraceback (most recent call last):...\nValueError: I fail successfully\n21:02:45.879 | ERROR   | Task run 'always_fails_task-58ea43a6-0' - Finished in state Failed('Task run encountered an exception.')\n21:02:46.593 | INFO    | Flow run 'sparkling-pony' - Finished in state Completed()\nCompleted(message=None, type=COMPLETED, result='foo', flow_run_id=7240e6f5-f0a8-4e00-9440-a7b33fb51153)\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#pause-a-flow-run","title":"Pause a flow run","text":"

Prefect enables pausing an in-progress flow run for manual approval. Prefect exposes this functionality via the pause_flow_run and resume_flow_run functions, as well as the Prefect server or Prefect Cloud UI.

Most simply, pause_flow_run can be called inside a flow. A timeout option can be supplied as well \u2014 after the specified number of seconds, the flow will fail if it hasn't been resumed.

from prefect import task, flow, pause_flow_run, resume_flow_run\n\n@task\nasync def marvin_setup():\n    return \"a raft of ducks walk into a bar...\"\n\n@task\nasync def marvin_punchline():\n    return \"it's a wonder none of them ducked!\"\n\n@flow\nasync def inspiring_joke():\n    await marvin_setup()\n    await pause_flow_run(timeout=600)  # pauses for 10 minutes\n    await marvin_punchline()\n

Calling this flow will pause after the first task and wait for resumption.

await inspiring_joke()\n> \"a raft of ducks walk into a bar...\"\n

Paused flow runs can be resumed by clicking the Resume button in the Prefect UI or calling the resume_flow_run utility via client code.

resume_flow_run(FLOW_RUN_ID)\n

The paused flow run will then finish!

> \"it's a wonder none of them ducked!\"\n

Here is an example of a flow that does not block flow execution while paused. This flow will exit after one task, and will be rescheduled upon resuming. The stored result of the first task is retrieved instead of being rerun.

from prefect import flow, pause_flow_run, task\n\n@task(persist_result=True)\ndef foo():\n    return 42\n\n@flow(persist_result=True)\ndef noblock_pausing():\n    x = foo.submit()\n    pause_flow_run(timeout=30, reschedule=True)\n    y = foo.submit()\n    z = foo(wait_for=[x])\n    alpha = foo(wait_for=[y])\n    omega = foo(wait_for=[x, y])\n

This long-running flow can be paused out of process, either by calling pause_flow_run(flow_run_id=<ID>) or selecting the Pause button in the Prefect UI or Prefect Cloud.

import asyncio\n\nfrom prefect import flow, task\n\n@task(persist_result=True)\nasync def foo():\n    return 42\n\n@flow(persist_result=True)\nasync def longrunning():\n    res = 0\n    for ii in range(20):\n        # use a non-blocking sleep so the async flow does not block the event loop\n        await asyncio.sleep(5)\n        res += (await foo())\n    return res\n

Pausing flow runs is blocking by default

By default, pausing a flow run blocks the agent \u2014 the flow is still running inside the pause_flow_run function. However, you may pause any flow run in this fashion, including non-deployment local flow runs and subflows.

Alternatively, flow runs can be paused without blocking the flow run process. This is particularly useful when running the flow via an agent and you want the agent to be able to pick up other flow runs while the paused flow run waits to be resumed.

Non-blocking pause can be accomplished by setting the reschedule flag to True. In order to use this feature, flows that pause with the reschedule flag must have:

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#cancel-a-flow-run","title":"Cancel a flow run","text":"

You may cancel a scheduled or in-progress flow run from the CLI, UI, REST API, or Python client.

When cancellation is requested, the flow run is moved to a \"Cancelling\" state. The agent monitors the state of flow runs and detects that cancellation has been requested. The agent then sends a signal to the flow run infrastructure, requesting termination of the run. If the run does not terminate after a grace period (default of 30 seconds), the infrastructure will be killed, ensuring the flow run exits.

An agent is required

Flow run cancellation requires the flow run to be submitted by an agent or worker. The agent or worker must be running to enforce the cancellation. Inline subflow runs, i.e. those created without run_deployment, cannot be cancelled without cancelling the parent flow run. If you need to cancel a subflow run independently of its parent flow run, we recommend deploying it separately and starting it with the run_deployment function, as sketched below.
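
As a rough sketch (the deployment and flow names here are hypothetical), a parent flow can start an independently cancellable child run from a deployment:

from prefect import flow\nfrom prefect.deployments import run_deployment\n\n@flow\ndef parent_flow():\n    # Starting the child from its own deployment gives it its own infrastructure,\n    # so it can be cancelled independently of this parent flow run.\n    # \"child-flow/child-deployment\" is a hypothetical deployment name.\n    child_flow_run = run_deployment(name=\"child-flow/child-deployment\")\n    return child_flow_run.id\n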

Support for cancellation is included for all core library infrastructure types:

Cancellation is robust to restarts of the agent. To enable this, we attach metadata about the created infrastructure to the flow run. Internally, this is referred to as the infrastructure_pid or infrastructure identifier. Generally, this is composed of two parts:

  1. Scope: identifying where the infrastructure is running.
  2. ID: a unique identifier for the infrastructure within the scope.

The scope is used to ensure that Prefect does not kill the wrong infrastructure. For example, agents running on multiple machines may have overlapping process IDs but should not have a matching scope.

The identifiers for the primary infrastructure types are as follows:

While the cancellation process is robust, there are a few issues that can occur:

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#cancel-via-the-cli","title":"Cancel via the CLI","text":"

From the command line in your execution environment, you can cancel a flow run by using the prefect flow-run cancel CLI command, passing the ID of the flow run.

$ prefect flow-run cancel 'a55a4804-9e3c-4042-8b59-b3b6b7618736'\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#cancel-via-the-ui","title":"Cancel via the UI","text":"

From the UI you can cancel a flow run by navigating to the flow run's detail page and clicking the Cancel button in the upper right corner.

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/infrastructure/","title":"Infrastructure","text":"

Users may specify an infrastructure block when creating a deployment. This block will be used to specify infrastructure for flow runs created by the deployment at runtime.

Infrastructure can only be used with a deployment. When you run a flow directly by calling the flow yourself, you are responsible for the environment in which the flow executes.

","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#infrastructure-overview","title":"Infrastructure overview","text":"

Prefect uses infrastructure to create the environment for a user's flow to execute.

Infrastructure is attached to a deployment and is propagated to flow runs created for that deployment. Infrastructure is deserialized by the agent and it has two jobs:

The engine acquires and calls the flow. Infrastructure doesn't know anything about how the flow is stored; it just passes a flow run ID to the engine.

Infrastructure is specific to the environments in which flows will run. Prefect currently provides the following infrastructure types:

What about tasks?

Flows and tasks can both use configuration objects to manage the environment in which code runs.

Flows use infrastructure.

Tasks use task runners. For more on how task runners work, see Task Runners.

","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#using-infrastructure","title":"Using infrastructure","text":"

You may create customized infrastructure blocks through the Prefect UI or Prefect Cloud Blocks page, or create them in code and save them to the API using the block's .save() method.

Once created, there are two distinct ways to use infrastructure in a deployment:

For example, when creating your deployment files, the supported Prefect infrastructure types are:

$ prefect deployment build ./my_flow.py:my_flow -n my-flow-deployment -t test -i docker-container -sb s3/my-bucket --override env.EXTRA_PIP_PACKAGES=s3fs\nFound flow 'my-flow'\nSuccessfully uploaded 2 files to s3://bucket-full-of-sunshine\nDeployment YAML created at '/Users/terry/test/flows/infra/deployment.yaml'.\n

In this example we specify the DockerContainer infrastructure in addition to a preconfigured AWS S3 bucket storage block.

The default deployment YAML file may be edited as needed to add an infrastructure type or infrastructure settings.

###\n### A complete description of a Prefect Deployment for flow 'my-flow'\n###\nname: my-flow-deployment\ndescription: null\nversion: e29de5d01b06d61b4e321d40f34a480c\n# The work queue that will handle this deployment's runs\nwork_queue_name: default\nwork_pool_name: default-agent-pool\ntags:\n- test\nparameters: {}\nschedule: null\nis_schedule_active: true\ninfra_overrides:\n  env.EXTRA_PIP_PACKAGES: s3fs\ninfrastructure:\n  type: docker-container\n  env: {}\n  labels: {}\n  name: null\n  command:\n  - python\n  - -m\n  - prefect.engine\n  image: prefecthq/prefect:2-latest\n  image_pull_policy: null\n  networks: []\n  network_mode: null\n  auto_remove: false\n  volumes: []\n  stream_output: true\n  memswap_limit: null\n  mem_limit: null\n  privileged: false\n  block_type_slug: docker-container\n  _block_type_slug: docker-container\n\n###\n### DO NOT EDIT BELOW THIS LINE\n###\nflow_name: my-flow\nmanifest_path: my_flow-manifest.json\nstorage:\n  bucket_path: bucket-full-of-sunshine\n  aws_access_key_id: '**********'\n  aws_secret_access_key: '**********'\n  _is_anonymous: true\n  _block_document_name: anonymous-xxxxxxxx-f1ff-4265-b55c-6353a6d65333\n  _block_document_id: xxxxxxxx-06c2-4c3c-a505-4a8db0147011\n  block_type_slug: s3\n  _block_type_slug: s3\npath: ''\nentrypoint: my_flow.py:my-flow\nparameter_openapi_schema:\n  title: Parameters\n  type: object\n  properties: {}\n  required: null\n  definitions: null\ntimestamp: '2023-02-08T23:00:14.974642+00:00'\n

Editing deployment YAML

Note the big DO NOT EDIT comment in the deployment YAML: In practice, anything above this block can be freely edited before running prefect deployment apply to create the deployment on the API.

Once the deployment exists, any flow runs that this deployment starts will use DockerContainer infrastructure.

You can also create custom infrastructure blocks, either in the Prefect UI or in code via the API, and use the settings in the block to configure your infrastructure. For example, here we specify settings for Kubernetes infrastructure in a block named k8sdev.

from prefect.infrastructure import KubernetesJob, KubernetesImagePullPolicy\n\nk8s_job = KubernetesJob(\n    namespace=\"dev\",\n    image=\"prefecthq/prefect:2.0.0-python3.9\",\n    image_pull_policy=KubernetesImagePullPolicy.IF_NOT_PRESENT,\n)\nk8s_job.save(\"k8sdev\")\n

Now we can apply the infrastructure type and settings in the block by specifying the block slug kubernetes-job/k8sdev as the infrastructure type when building a deployment:

prefect deployment build flows/k8s_example.py:k8s_flow --name k8sdev --tag k8s -sb s3/dev -ib kubernetes-job/k8sdev\n

See Deployments for more information about deployment build options.

","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#configuring-infrastructure","title":"Configuring infrastructure","text":"

Every infrastructure type has type-specific options.

","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#process","title":"Process","text":"

Process infrastructure runs a command in a new process.

Current environment variables and Prefect settings will be included in the created process. Configured environment variables will override any current environment variables.
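
For example, a minimal sketch of a Process block with custom environment variables (the block name and values here are illustrative):

from prefect.infrastructure import Process\n\nprocess_block = Process(\n    # These values override any matching variables in the current environment\n    env={\"MY_SETTING\": \"example-value\"},\n    name=\"dev-process\",\n)\nprocess_block.save(\"dev-process\")\n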

Process supports the following settings:

Attributes Description command A list of strings specifying the command to start the flow run. In most cases you should not override this. env Environment variables to set for the new process. labels Labels for the process. Labels are for metadata purposes only and cannot be attached to the process itself. name A name for the process. For display purposes only.","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#dockercontainer","title":"DockerContainer","text":"

DockerContainer infrastructure executes flow runs in a container.

Requirements for DockerContainer:

DockerContainer supports the following settings:

Attributes Description auto_remove Bool indicating whether the container will be removed on completion. If False, the container will remain after exit for inspection. command A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this. env Environment variables to set for the container. image An optional string specifying the name of a Docker image to use. Defaults to the Prefect image. If the image is stored anywhere other than a public Docker Hub registry, use a corresponding registry block, e.g. DockerRegistry or ensure otherwise that your execution layer is authenticated to pull the image from the image registry. image_pull_policy Specifies if the image should be pulled. One of 'ALWAYS', 'NEVER', 'IF_NOT_PRESENT'. image_registry A DockerRegistry block containing credentials to use if image is stored in a private image registry. labels An optional dictionary of labels, mapping name to value. name An optional name for the container. networks An optional list of strings specifying Docker networks to connect the container to. network_mode Set the network mode for the created container. Defaults to 'host' if a local API url is detected, otherwise the Docker default of 'bridge' is used. If 'networks' is set, this cannot be set. stream_output Bool indicating whether to stream output from the subprocess to local standard output. volumes An optional list of volume mount strings in the format of \"local_path:container_path\".

Prefect automatically sets a Docker image matching the Python and Prefect version you're using at deployment time. You can see all available images at Docker Hub.
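
As an illustrative sketch, a DockerContainer block pinned to a specific image can be created and saved like this (the block name and image tag are examples):

from prefect.infrastructure import DockerContainer\n\ndocker_block = DockerContainer(\n    image=\"prefecthq/prefect:2-python3.10\",  # pin an image rather than relying on the default\n    auto_remove=True,  # clean up the container after the flow run exits\n)\ndocker_block.save(\"dev-docker\")\n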

","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#kubernetesjob","title":"KubernetesJob","text":"

KubernetesJob infrastructure executes flow runs in a Kubernetes Job.

Requirements for KubernetesJob:

The Prefect CLI command prefect kubernetes manifest server automatically generates a Kubernetes manifest with default settings for Prefect deployments. By default, it simply prints out the YAML configuration for a manifest. You can pipe this output to a file of your choice and edit as necessary.

KubernetesJob supports the following settings:

Attributes Description cluster_config An optional Kubernetes cluster config to use for this job. command A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this. customizations A list of JSON 6902 patches to apply to the base Job manifest. Alternatively, a valid JSON string is allowed (handy for deployments CLI). env Environment variables to set for the container. finished_job_ttl The number of seconds to retain jobs after completion. If set, finished jobs will be cleaned up by Kubernetes after the given delay. If None (default), jobs will need to be manually removed. image String specifying the tag of a Docker image to use for the Job. image_pull_policy The Kubernetes image pull policy to use for job containers. job The base manifest for the Kubernetes Job. job_watch_timeout_seconds Number of seconds to watch for job creation before timing out (defaults to None). labels Dictionary of labels to add to the Job. name An optional name for the job. namespace String signifying the Kubernetes namespace to use. pod_watch_timeout_seconds Number of seconds to watch for pod creation before timing out (default 60). service_account_name An optional string specifying which Kubernetes service account to use. stream_output Bool indicating whether to stream output from the subprocess to local standard output.","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#kubernetesjob-overrides-and-customizations","title":"KubernetesJob overrides and customizations","text":"

When creating deployments using KubernetesJob infrastructure, the infra_overrides parameter expects a dictionary. For a KubernetesJob, the customizations parameter expects a list.

The containers spec expects a list of objects, even if there is only one. For any patches applying to the container, the path value should index into the containers list, for example: /spec/template/spec/containers/0/resources

A KubernetesJob infrastructure block defined in Python:

from prefect.infrastructure import KubernetesJob, KubernetesImagePullPolicy\n\ncustomizations = [\n    {\n        \"op\": \"add\",\n        \"path\": \"/spec/template/spec/containers/0/resources\",\n        \"value\": {\n            \"requests\": {\n                \"cpu\": \"2000m\",\n                \"memory\": \"4Gi\"\n            },\n            \"limits\": {\n                \"cpu\": \"4000m\",\n                \"memory\": \"8Gi\",\n                \"nvidia.com/gpu\": \"1\"\n            }\n        },\n    }\n]\n\nk8s_job = KubernetesJob(\n    namespace=namespace,\n    image=image_name,\n    image_pull_policy=KubernetesImagePullPolicy.ALWAYS,\n    finished_job_ttl=300,\n    job_watch_timeout_seconds=600,\n    pod_watch_timeout_seconds=600,\n    service_account_name=\"prefect-server\",\n    customizations=customizations,\n)\nk8s_job.save(\"devk8s\")\n

A Deployment with infra_overrides defined in Python:

from prefect.deployments import Deployment\nfrom prefect.infrastructure import KubernetesJob\n\ninfra_overrides = {\n    \"customizations\": [\n        {\n            \"op\": \"add\",\n            \"path\": \"/spec/template/spec/containers/0/resources\",\n            \"value\": {\n                \"requests\": {\n                    \"cpu\": \"2000m\",\n                    \"memory\": \"4Gi\"\n                },\n                \"limits\": {\n                    \"cpu\": \"4000m\",\n                    \"memory\": \"8Gi\",\n                    \"nvidia.com/gpu\": \"1\"\n                }\n            },\n        }\n    ]\n}\n\n# Load the already created K8s block\nk8sjob = KubernetesJob.load(\"devk8s\")\n\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"s3-example\",\n    version=2,\n    work_queue_name=\"aws\",\n    infrastructure=k8sjob,\n    storage=storage,\n    infra_overrides=infra_overrides,\n)\n\ndeployment.apply()\n
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#ecstask","title":"ECSTask","text":"

ECSTask infrastructure runs your flow in an ECS Task.

Requirements for ECSTask:

# example base image\nFROM prefecthq/prefect:2-python3.9\nRUN pip install s3fs prefect-aws\n

To get started using Prefect with ECS, check out the repository template dataflow-ops, which demonstrates ECS agent setup and various deployment configurations for using the ECSTask block.
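
As a rough sketch (the image name and resource values are placeholders, and this assumes the prefect-aws collection is installed), an ECSTask block can be configured and saved like this:

from prefect_aws.ecs import ECSTask\n\necs_task_block = ECSTask(\n    image=\"my-registry/my-flow-image:latest\",  # hypothetical image built from a Dockerfile like the one above\n    cpu=256,\n    memory=512,\n)\necs_task_block.save(\"dev-ecs\")\n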

Make sure to allocate enough CPU and memory to your agent, and consider adding retries

When you start a Prefect agent on AWS ECS Fargate, allocate as much CPU and memory as needed for your workloads. Your agent needs enough resources to appropriately provision infrastructure for your flow runs and to monitor their execution. Otherwise, your flow runs may get stuck in a Pending state. Alternatively, set a work-queue concurrency limit to ensure that the agent will not try to process all runs at the same time.

Some API calls to provision infrastructure may fail due to unexpected issues on the client side (for example, transient errors such as ConnectionError, HTTPClientError, or RequestTimeout), or due to server-side rate limiting from the AWS service. To mitigate those issues, we recommend adding environment variables such as AWS_MAX_ATTEMPTS (can be set to an integer value such as 10) and AWS_RETRY_MODE (can be set to a string value including standard or adaptive modes). Those environment variables must be added within the agent environment, e.g. on your ECS service running the agent, rather than on the ECSTask infrastructure block.

","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#docker-images","title":"Docker images","text":"

Every release of Prefect comes with a few built-in images. These images are all named prefecthq/prefect and their tags are used to identify differences in images.

Prefect agents rely on Docker images for executing flow runs using DockerContainer or KubernetesJob infrastructure. If you do not specify an image, we will use a Prefect image tag that matches your local Prefect and Python versions. If you are building your own image, you may find it useful to use one of the Prefect images as a base.

Choose image versions wisely

It's a good practice to use Docker images with specific Prefect versions in production.

Use care when employing images that automatically update to new versions (such as prefecthq/prefect:2-python3.9 or prefecthq/prefect:2-latest).

","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#image-tags","title":"Image tags","text":"

When a release is published, images are built for all of Prefect's supported Python versions. These images are tagged to identify the combination of Prefect and Python versions contained. Additionally, we have \"convenience\" tags which are updated with each release to facilitate automatic updates.

For example, when release 2.1.1 is published:

  1. Images with the release packaged are built for each supported Python version (3.8, 3.9, 3.10, 3.11) with both standard Python and Conda.
  2. These images are tagged with the full description, e.g. prefect:2.1.1-python3.10 and prefect:2.1.1-python3.10-conda.
  3. For users that want more specific pins, these images are also tagged with the SHA of the git commit of the release, e.g. sha-88a7ff17a3435ec33c95c0323b8f05d7b9f3f6d2-python3.10
  4. For users that want to be on the latest 2.1.x release, receiving patch updates, we update a tag without the patch version to this release, e.g. prefect:2.1-python3.10.
  5. For users that want to be on the latest 2.x.y release, receiving minor version updates, we update a tag without the minor or patch version to this release, e.g. prefect:2-python3.10.
  6. Finally, for users who want the latest 2.x.y release without specifying a Python version, we update 2-latest to the image for our highest supported Python version, which in this case would be equivalent to prefect:2.1.1-python3.10.
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#standard-python","title":"Standard Python","text":"

Standard Python images are based on the official Python slim images, e.g. python:3.10-slim.

Tag Prefect Version Python Version 2-latest most recent v2 PyPi version 3.10 2-python3.11 most recent v2 PyPi version 3.11 2-python3.10 most recent v2 PyPi version 3.10 2-python3.9 most recent v2 PyPi version 3.9 2-python3.8 most recent v2 PyPi version 3.8 2.X-python3.11 2.X 3.11 2.X-python3.10 2.X 3.10 2.X-python3.9 2.X 3.9 2.X-python3.8 2.X 3.8 sha-<hash>-python3.11 <hash> 3.11 sha-<hash>-python3.10 <hash> 3.10 sha-<hash>-python3.9 <hash> 3.9 sha-<hash>-python3.8 <hash> 3.8","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#conda-flavored-python","title":"Conda-flavored Python","text":"

Conda flavored images are based on continuumio/miniconda3. Prefect is installed into a conda environment named prefect.

Note that Conda support for Python 3.11 is not yet available, so we cannot build a Conda-flavored Python 3.11 image.

Tag Prefect Version Python Version 2-latest-conda most recent v2 PyPi version 3.10 2-python3.10-conda most recent v2 PyPi version 3.10 2-python3.9-conda most recent v2 PyPi version 3.9 2-python3.8-conda most recent v2 PyPi version 3.8 2.X-python3.10-conda 2.X 3.10 2.X-python3.9-conda 2.X 3.9 2.X-python3.8-conda 2.X 3.8 sha-<hash>-python3.10-conda <hash> 3.10 sha-<hash>-python3.9-conda <hash> 3.9 sha-<hash>-python3.8-conda <hash> 3.8","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#installing-extra-dependencies-at-runtime","title":"Installing Extra Dependencies at Runtime","text":"

If you're using the prefecthq/prefect image (or an image based on prefecthq/prefect), you can make use of the EXTRA_PIP_PACKAGES environment variable to install dependencies at runtime. If defined, pip install ${EXTRA_PIP_PACKAGES} is executed before the flow run starts.

For production deploys we recommend building a custom image (as described below). Installing dependencies during each flow run can be costly (since you're downloading from PyPI on each execution) and adds another opportunity for failure. Use of EXTRA_PIP_PACKAGES can be useful during development though, as it allows you to iterate on dependencies without building a new image each time.
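
For example, a hedged sketch of setting EXTRA_PIP_PACKAGES on a DockerContainer block during development (the block name and packages are illustrative):

from prefect.infrastructure import DockerContainer\n\ndev_container = DockerContainer(\n    image=\"prefecthq/prefect:2-latest\",\n    # Installed with pip at container start, before the flow run begins\n    env={\"EXTRA_PIP_PACKAGES\": \"s3fs pandas\"},\n)\ndev_container.save(\"dev-extra-deps\")\n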

","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#building-your-own-image","title":"Building your Own Image","text":"

If your flow relies on dependencies not found in the default prefecthq/prefect images, you'll want to build your own image. You can either base it off of one of the provided prefecthq/prefect images, or build your own from scratch.

Extending the prefecthq/prefect image

Here we provide an example Dockerfile for building an image based on prefecthq/prefect:2-latest, but with scikit-learn installed.

FROM prefecthq/prefect:2-latest\n\nRUN pip install scikit-learn\n
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#choosing-an-image-strategy","title":"Choosing an Image Strategy","text":"

The options described above have different complexity (and performance) characteristics. For choosing a strategy, we provide the following recommendations:

","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/logs/","title":"Logging","text":"

Prefect enables you to log a variety of useful information about your flow and task runs, capturing information about your workflows for purposes such as monitoring, troubleshooting, and auditing.

Prefect captures logs for your flow and task runs by default, even if you have not started a Prefect server with prefect server start.

You can view and filter logs in the Prefect UI or Prefect Cloud, or access log records via the API.

Prefect enables fine-grained customization of log levels for flows and tasks, including configuration for default levels and log message formatting.

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#logging-overview","title":"Logging overview","text":"

Whenever you run a flow, Prefect automatically logs events for flow runs and task runs, along with any custom log handlers you have configured. No configuration is needed to enable Prefect logging.

For example, say you created a simple flow in a file flow.py. If you create a local flow run with python flow.py, you'll see an example of the log messages created automatically by Prefect:

$ python flow.py\n16:45:44.534 | INFO    | prefect.engine - Created flow run 'gray-dingo' for flow 'hello-flow'\n16:45:44.534 | INFO    | Flow run 'gray-dingo' - Using task runner 'SequentialTaskRunner'\n16:45:44.598 | INFO    | Flow run 'gray-dingo' - Created task run 'hello-task-54135dc1-0' for task 'hello-task'\nHello world!\n16:45:44.650 | INFO    | Task run 'hello-task-54135dc1-0' - Finished in state \nCompleted(None)\n16:45:44.672 | INFO    | Flow run 'gray-dingo' - Finished in state \nCompleted('All states completed.')\n

You can see logs for the flow run in the Prefect UI by navigating to the Flow Runs page and selecting a specific flow run to inspect.

These log messages reflect the logging configuration for log levels and message formatters. You may customize the log levels captured and the default message format through configuration, and you can capture custom logging events by explicitly emitting log messages during flow and task runs.

Prefect supports the standard Python logging levels CRITICAL, ERROR, WARNING, INFO, and DEBUG. By default, Prefect displays INFO-level and above events. You can configure the root logging level as well as specific logging levels for flow and task runs.

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#logging-configuration","title":"Logging Configuration","text":"","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#logging-settings","title":"Logging settings","text":"

Prefect provides several settings for configuring logging level and loggers.

By default, Prefect displays INFO-level and above logging records. You may change this level to DEBUG, and DEBUG-level logs created by Prefect will be shown as well. You may need to change the log level used by loggers from other libraries to see their log records.

You can override any logging configuration by setting an environment variable or Prefect Profile setting using the syntax PREFECT_LOGGING_[PATH]_[TO]_[KEY], with [PATH]_[TO]_[KEY] corresponding to the nested address of any setting.

For example, to change the default logging levels for Prefect to DEBUG, you can set the environment variable PREFECT_LOGGING_LEVEL=\"DEBUG\".

You may also configure the \"root\" Python logger. The root logger receives logs from all loggers unless they explicitly opt out by disabling propagation. By default, the root logger is configured to output WARNING level logs to the console. As with other logging settings, you can override this from the environment or in the logging configuration file. For example, you can change the level with the variable PREFECT_LOGGING_ROOT_LEVEL.

You may adjust the log level used by specific handlers. For example, you could set PREFECT_LOGGING_HANDLERS_API_LEVEL=ERROR to have only ERROR logs reported to the Prefect API. The console handlers will still default to level INFO.

There is a logging.yml file packaged with Prefect that defines the default logging configuration.

You can customize logging configuration by creating your own version of logging.yml with custom settings, by either creating the file at the default location (~/.prefect/logging.yml) or by specifying the path to the file with PREFECT_LOGGING_SETTINGS_PATH. (If the file does not exist at the specified location, Prefect ignores the setting and uses the default configuration.)

See the Python Logging configuration documentation for more information about the configuration options and syntax used by logging.yml.

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#prefect-loggers","title":"Prefect Loggers","text":"

To access the Prefect logger, import the get_run_logger function with from prefect import get_run_logger. You can send messages to the logger in both flows and tasks.

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#logging-in-flows","title":"Logging in flows","text":"

To log from a flow, retrieve a logger instance with get_run_logger(), then call the standard Python logging methods.

from prefect import flow, get_run_logger\n\n@flow(name=\"log-example-flow\")\ndef logger_flow():\n    logger = get_run_logger()\n    logger.info(\"INFO level log message.\")\n

Prefect automatically uses the flow run logger based on the flow context. If you run the above code, Prefect captures the following as a log event.

15:35:17.304 | INFO    | Flow run 'mottled-marten' - INFO level log message.\n

The default flow run log formatter uses the flow run name for log messages.

Note

Starting in 2.7.11, if you use a logger that sends logs to the API outside of a flow or task run, a warning will be displayed instead of an error. You can silence this warning by setting PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW=ignore or have the logger raise an error by setting the value to error.

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#logging-in-tasks","title":"Logging in tasks","text":"

Logging in tasks works much as logging in flows: retrieve a logger instance with get_run_logger(), then call the standard Python logging methods.

from prefect import flow, task, get_run_logger\n\n@task(name=\"log-example-task\")\ndef logger_task():\n    logger = get_run_logger()\n    logger.info(\"INFO level log message from a task.\")\n\n@flow(name=\"log-example-flow\")\ndef logger_flow():\n    logger_task()\n

Prefect automatically uses the task run logger based on the task context. The default task run log formatter uses the task run name for log messages.

15:33:47.179 | INFO   | Task run 'logger_task-80a1ffd1-0' - INFO level log message from a task.\n

The underlying log model for task runs captures the task name, task run ID, and parent flow run ID, which are persisted to the database for reporting and may also be used in custom message formatting.

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#logging-print-statements","title":"Logging print statements","text":"

Prefect provides the log_prints option to enable the logging of print statements at the task or flow level. When log_prints=True for a given task or flow, the Python builtin print will be patched to redirect to the Prefect logger for the scope of that task or flow.

By default, tasks and subflows will inherit the log_prints setting from their parent flow, unless opted out with their own explicit log_prints setting.

from prefect import task, flow\n\n@task\ndef my_task():\n    print(\"we're logging print statements from a task\")\n\n@flow(log_prints=True)\ndef my_flow():\n    print(\"we're logging print statements from a flow\")\n    my_task()\n

Will output:

15:52:11.244 | INFO    | prefect.engine - Created flow run 'emerald-gharial' for flow 'my-flow'\n15:52:11.812 | INFO    | Flow run 'emerald-gharial' - we're logging print statements from a flow\n15:52:11.926 | INFO    | Flow run 'emerald-gharial' - Created task run 'my_task-20c6ece6-0' for task 'my_task'\n15:52:11.927 | INFO    | Flow run 'emerald-gharial' - Executing 'my_task-20c6ece6-0' immediately...\n15:52:12.217 | INFO    | Task run 'my_task-20c6ece6-0' - we're logging print statements from a task\n
from prefect import task, flow\n\n@task(log_prints=False)\ndef my_task():\n    print(\"not logging print statements in this task\")\n\n@flow(log_prints=True)\ndef my_flow():\n    print(\"we're logging print statements from a flow\")\n    my_task()\n

Using log_prints=False at the task level will output:

15:52:11.244 | INFO    | prefect.engine - Created flow run 'emerald-gharial' for flow 'my-flow'\n15:52:11.812 | INFO    | Flow run 'emerald-gharial' - we're logging print statements from a flow\n15:52:11.926 | INFO    | Flow run 'emerald-gharial' - Created task run 'my_task-20c6ece6-0' for task 'my_task'\n15:52:11.927 | INFO    | Flow run 'emerald-gharial' - Executing 'my_task-20c6ece6-0' immediately...\nnot logging print statements in this task\n
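
Subflows follow the same inheritance rule. As a small sketch, a subflow can opt out of its parent's log_prints setting:

from prefect import flow\n\n@flow(log_prints=False)\ndef quiet_subflow():\n    # Explicitly opted out, so this print is not redirected to the Prefect logger\n    print(\"this print is not logged\")\n\n@flow(log_prints=True)\ndef my_flow():\n    print(\"we're logging print statements from a flow\")\n    quiet_subflow()\n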

You can also configure this behavior globally for all Prefect flows, tasks, and subflows.

prefect config set PREFECT_LOGGING_LOG_PRINTS=True\n
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#formatters","title":"Formatters","text":"

Prefect log formatters specify the format of log messages. You can see details of message formatting for different loggers in logging.yml. For example, the default formatting for task run log records is:

\"%(asctime)s.%(msecs)03d | %(levelname)-7s | Task run %(task_run_name)r - %(message)s\"\n

The variables available to interpolate in log messages vary by logger. In addition to the run context, message string, and any keyword arguments, flow and task run loggers have access to additional variables.

The flow run logger has the following:

The task run logger has the following:

You can specify custom formatting by setting an environment variable or by modifying the formatter in a logging.yml file as described earlier. For example, to change the formatting for the flow runs formatter:

PREFECT_LOGGING_FORMATTERS_STANDARD_FLOW_RUN_FMT=\"%(asctime)s.%(msecs)03d | %(levelname)-7s | %(flow_run_id)s - %(message)s\"\n

The resulting messages, using the flow run ID instead of name, would look like this:

10:40:01.211 | INFO    | e43a5a80-417a-41c4-a39e-2ef7421ee1fc - Created task run 'othertask-1c085beb-3' for task 'othertask'\n
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#styles","title":"Styles","text":"

By default, Prefect highlights specific keywords in the console logs with a variety of colors.

Highlighting can be toggled on/off with the PREFECT_LOGGING_COLORS setting, e.g.

PREFECT_LOGGING_COLORS=False\n

You can change what gets highlighted and also adjust the colors by updating the styles in a logging.yml file. The following lists the specific keys built into the PrefectConsoleHighlighter.

URLs:

Log levels:

State types:

Flow (run) names:

Task (run) names:

You can also build your own handler with a custom highlighter. For example, to additionally highlight emails:

  1. Copy and paste the following into my_package_or_module.py (rename as needed) in the same directory as the flow run script, or ideally as part of a Python package so it's available in site-packages and can be accessed anywhere within your environment.
import logging\nfrom typing import Dict, Union\n\nfrom rich.highlighter import Highlighter\n\nfrom prefect.logging.handlers import PrefectConsoleHandler\nfrom prefect.logging.highlighters import PrefectConsoleHighlighter\n\nclass CustomConsoleHighlighter(PrefectConsoleHighlighter):\n    base_style = \"log.\"\n    highlights = PrefectConsoleHighlighter.highlights + [\n        # ?P<email> is naming this expression as `email`\n        r\"(?P<email>[\\w-]+@([\\w-]+\\.)+[\\w-]+)\",\n    ]\n\nclass CustomConsoleHandler(PrefectConsoleHandler):\n    def __init__(\n        self,\n        highlighter: Highlighter = CustomConsoleHighlighter,\n        styles: Dict[str, str] = None,\n        level: Union[int, str] = logging.NOTSET,\n   ):\n        super().__init__(highlighter=highlighter, styles=styles, level=level)\n
  2. Update ~/.prefect/logging.yml to use my_package_or_module.CustomConsoleHandler and additionally reference the base_style and named expression: log.email.

        console_flow_runs:\n            level: 0\n            class: my_package_or_module.CustomConsoleHandler\n            formatter: flow_runs\n            styles:\n                log.email: magenta\n                # other styles can be appended here, e.g.\n                # log.completed_state: green\n

  3. Then on your next flow run, text that looks like an email will be highlighted. For example, my@email.com is colored in magenta here.

    from prefect import flow, get_run_logger\n\n@flow\ndef log_email_flow():\n    logger = get_run_logger()\n    logger.info(\"my@email.com\")\n\nlog_email_flow()\n

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#applying-markup-in-logs","title":"Applying markup in logs","text":"

To use Rich's markup in Prefect logs, first configure PREFECT_LOGGING_MARKUP.

PREFECT_LOGGING_MARKUP=True\n

Then, the following will highlight \"fancy\" in red.

from prefect import flow, get_run_logger\n\n@flow\ndef my_flow():\n    logger = get_run_logger()\n    logger.info(\"This is [bold red]fancy[/]\")\n\nmy_flow()\n

Inaccurate logs could result

Although this can be convenient, the downside is that, if markup is enabled, strings containing square brackets may be interpreted as markup and lead to incomplete output. For example, DROP TABLE [dbo].[SomeTable]; outputs DROP TABLE .[SomeTable];.

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#log-database-schema","title":"Log database schema","text":"

Logged events are also persisted to the Prefect database. A log record includes the following data:

Column Description id Primary key ID of the log record. created Timestamp specifying when the record was created. updated Timestamp specifying when the record was updated. name String specifying the name of the logger. level Integer representation of the logging level. flow_run_id ID of the flow run associated with the log record. If the log record is for a task run, this is the parent flow of the task. task_run_id ID of the task run associated with the log record. Null if logging a flow run event. message Log message. timestamp The client-side timestamp of this logged statement.

For more information, see Log schema in the API documentation.

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#including-logs-from-other-libraries","title":"Including logs from other libraries","text":"

By default, Prefect won't capture log statements from libraries that your flows and tasks use. You can tell Prefect to include logs from these libraries with the PREFECT_LOGGING_EXTRA_LOGGERS setting.

To use this setting, specify one or more Python library names to include, separated by commas. For example, if you want to make sure Prefect captures Dask and SciPy logging statements with your flow and task run logs:

PREFECT_LOGGING_EXTRA_LOGGERS=dask,scipy\n

You can set this setting as an environment variable or in a profile. See Settings for more details about how to use settings.

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/results/","title":"Results","text":"

Results represent the data returned by a flow or a task.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#retrieving-results","title":"Retrieving results","text":"

When calling flows or tasks, the result is returned directly:

from prefect import flow, task\n\n@task\ndef my_task():\n    return 1\n\n@flow\ndef my_flow():\n    task_result = my_task()\n    return task_result + 1\n\nresult = my_flow()\nassert result == 2\n

When working with flow and task states, the result can be retrieved with the State.result() method:

from prefect import flow, task\n\n@task\ndef my_task():\n    return 1\n\n@flow\ndef my_flow():\n    state = my_task(return_state=True)\n    return state.result() + 1\n\nstate = my_flow(return_state=True)\nassert state.result() == 2\n

When submitting tasks to a runner, the result can be retrieved with the Future.result() method:

from prefect import flow, task\n\n@task\ndef my_task():\n    return 1\n\n@flow\ndef my_flow():\n    future = my_task.submit()\n    return future.result() + 1\n\nresult = my_flow()\nassert result == 2\n
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#handling-failures","title":"Handling failures","text":"

Sometimes your flows or tasks will encounter an exception. Prefect captures all exceptions in order to report states to the orchestrator, but we do not hide them from you (unless you ask us to) as your program needs to know if an unexpected error has occurred.

When calling flows or tasks, the exceptions are raised as in normal Python:

from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    try:\n        my_task()\n    except ValueError:\n        print(\"Oh no! The task failed.\")\n\n    return True\n\nmy_flow()\n

If you would prefer to check for a failed task without using try/except, you may ask Prefect to return the state:

from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    state = my_task(return_state=True)\n\n    if state.is_failed():\n        print(\"Oh no! The task failed. Falling back to '1'.\")\n        result = 1\n    else:\n        result = state.result()\n\n    return result + 1\n\nresult = my_flow()\nassert result == 2\n

If you retrieve the result from a failed state, the exception will be raised. For this reason, it's often best to check if the state is failed first.

from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    state = my_task(return_state=True)\n\n    try:\n        result = state.result()\n    except ValueError:\n        print(\"Oh no! The state raised the error!\")\n\n    return True\n\nmy_flow()\n

When retrieving the result from a state, you can ask Prefect not to raise exceptions:

from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    state = my_task(return_state=True)\n\n    maybe_result = state.result(raise_on_failure=False)\n    if isinstance(maybe_result, ValueError):\n        print(\"Oh no! The task failed. Falling back to '1'.\")\n        result = 1\n    else:\n        result = maybe_result\n\n    return result + 1\n\nresult = my_flow()\nassert result == 2\n

When submitting tasks to a runner, Future.result() works the same as State.result():

from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    future = my_task.submit()\n\n    try:\n        future.result()\n    except ValueError:\n        print(\"Ah! Futures will raise the failure as well.\")\n\n    # You can ask it not to raise the exception too\n    maybe_result = future.result(raise_on_failure=False)\n    print(f\"Got {type(maybe_result)}\")\n\n    return True\n\nmy_flow()\n
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#working-with-async-results","title":"Working with async results","text":"

When calling flows or tasks, the result is returned directly:

import asyncio\nfrom prefect import flow, task\n\n@task\nasync def my_task():\n    return 1\n\n@flow\nasync def my_flow():\n    task_result = await my_task()\n    return task_result + 1\n\nresult = asyncio.run(my_flow())\nassert result == 2\n

When working with flow and task states, the result can be retrieved with the State.result() method:

import asyncio\nfrom prefect import flow, task\n\n@task\nasync def my_task():\n    return 1\n\n@flow\nasync def my_flow():\n    state = await my_task(return_state=True)\n    result = await state.result(fetch=True)\n    return result + 1\n\nasync def main():\n    state = await my_flow(return_state=True)\n    assert await state.result(fetch=True) == 2\n\nasyncio.run(main())\n

Resolving results

Prefect 2.6.0 added automatic retrieval of persisted results. Prior to this version, State.result() did not require an await. For backwards compatibility, when used from an asynchronous context, State.result() returns a raw result type.

You may opt-in to the new behavior by passing fetch=True as shown in the example above. If you would like this behavior to be used automatically, you may enable the PREFECT_ASYNC_FETCH_STATE_RESULT setting. If you do not opt-in to this behavior, you will see a warning.

You may also opt-out by setting fetch=False. This will silence the warning, but you will need to retrieve your result manually from the result type.

When submitting tasks to a runner, the result can be retrieved with the Future.result() method:

import asyncio\nfrom prefect import flow, task\n\n@task\nasync def my_task():\n    return 1\n\n@flow\nasync def my_flow():\n    future = await my_task.submit()\n    result = await future.result()\n    return result + 1\n\nresult = asyncio.run(my_flow())\nassert result == 2\n
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#persisting-results","title":"Persisting results","text":"

The Prefect API does not store your results except in special cases. Instead, the result is persisted to a storage location in your infrastructure and Prefect stores a reference to the result.

The following Prefect features require results to be persisted:

If results are not persisted, these features may not be usable.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#configuring-persistence-of-results","title":"Configuring persistence of results","text":"

Persistence of results requires a serializer and a storage location. Prefect sets defaults for these, and you should not need to adjust them until you want to customize behavior. You can configure results on the flow and task decorators with the following options:

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#toggling-persistence","title":"Toggling persistence","text":"

Persistence of the result of a task or flow can be configured with the persist_result option. The persist_result option defaults to a null value, which will automatically enable persistence if it is needed for a Prefect feature used by the flow or task. Otherwise, persistence is disabled by default.

For example, the following flow has retries enabled. Flow retries require that all task results are persisted, so the task's result will be persisted:

from prefect import flow, task\n\n@task\ndef my_task():\n    return \"hello world!\"\n\n@flow(retries=2)\ndef my_flow():\n    # This task does not have persistence toggled off and it is needed for the flow feature,\n    # so Prefect will persist its result at runtime\n    my_task()\n

Flow retries do not require the flow's result to be persisted, so it will not be.

In this next example, one task has caching enabled. Task caching requires that the given task's result is persisted:

from prefect import flow, task\nfrom datetime import timedelta\n\n@task(cache_key_fn=lambda: \"always\", cache_expiration=timedelta(seconds=20))\ndef my_task():\n    # This task uses caching so its result will be persisted by default\n    return \"hello world!\"\n\n\n@task\ndef my_other_task():\n    ...\n\n@flow\ndef my_flow():\n    # This task uses a feature that requires result persistence\n    my_task()\n\n    # This task does not use a feature that requires result persistence and the\n    # flow does not use any features that require task result persistence so its\n    # result will not be persisted by default\n    my_other_task()\n

Persistence of results can be manually toggled on or off:

from prefect import flow, task\n\n@flow(persist_result=True)\ndef my_flow():\n    # This flow will persist its result even if not necessary for a feature.\n    ...\n\n@task(persist_result=False)\ndef my_task():\n    # This task will never persist its result.\n    # If persistence needed for a feature, an error will be raised.\n    ...\n

Toggling persistence manually will always override any behavior that Prefect would infer.

You may also change Prefect's default persistence behavior with the PREFECT_RESULTS_PERSIST_BY_DEFAULT setting. To persist results by default, even if they are not needed for a feature change the value to a truthy value:

$ prefect config set PREFECT_RESULTS_PERSIST_BY_DEFAULT=true\n

Task and flows with persist_result=False will not persist their results even if PREFECT_RESULTS_PERSIST_BY_DEFAULT is true.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-storage-location","title":"Result storage location","text":"

The result storage location can be configured with the result_storage option. The result_storage option defaults to a null value, which infers storage from the context. Generally, this means that tasks will use the result storage configured on the flow unless otherwise specified. If there is no context to load the storage from and results must be persisted, results will be stored in the path specified by the PREFECT_LOCAL_STORAGE_PATH setting (defaults to ~/.prefect/storage).

from prefect import flow, task\nfrom prefect.filesystems import LocalFileSystem, S3\n\n@flow(persist_result=True)\ndef my_flow():\n    my_task()  # This task will use the flow's result storage\n\n@task(persist_result=True)\ndef my_task():\n    ...\n\nmy_flow()  # The flow has no result storage configured and no parent, the local file system will be used.\n\n\n# Reconfigure the flow to use a different storage type\nnew_flow = my_flow.with_options(result_storage=S3(bucket_path=\"my-bucket\"))\n\nnew_flow()  # The flow and task within it will use S3 for result storage.\n

You can configure this to use a specific storage using one of the following:

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-storage-key","title":"Result storage key","text":"

The path of the result file in the result storage can be configured with the result_storage_key. The result_storage_key option defaults to a null value, which generates a unique identifier for each result.

from prefect import flow, task\nfrom prefect.filesystems import LocalFileSystem, S3\n\n@flow(result_storage=S3(bucket_path=\"my-bucket\"))\ndef my_flow():\n    my_task()\n\n@task(persist_result=True, result_storage_key=\"my_task.json\")\ndef my_task():\n    ...\n\nmy_flow()  # The task's result will be persisted to 's3://my-bucket/my_task.json'\n

Result storage keys are formatted with access to all of the modules in prefect.runtime and the run's parameters. In the following example, we will run a flow with three runs of the same task. Each task run will write its result to a unique file based on the name parameter.

from prefect import flow, task\n\n@flow()\ndef my_flow():\n    hello_world()\n    hello_world(name=\"foo\")\n    hello_world(name=\"bar\")\n\n@task(persist_result=True, result_storage_key=\"hello-{parameters[name]}.json\")\ndef hello_world(name: str = \"world\"):\n    return f\"hello {name}\"\n\nmy_flow()\n

After running the flow, we can see three persisted result files in our storage directory:

$ ls ~/.prefect/storage | grep \"hello-\"\nhello-bar.json\nhello-foo.json\nhello-world.json\n

In the next example, we include metadata about the flow run from the prefect.runtime.flow_run module:

from prefect import flow, task\n\n@flow\ndef my_flow():\n    hello_world()\n\n@task(persist_result=True, result_storage_key=\"{flow_run.flow_name}_{flow_run.name}_hello.json\")\ndef hello_world(name: str = \"world\"):\n    return f\"hello {name}\"\n\nmy_flow()\n

After running this flow, we can see a result file templated with the name of the flow and the flow run:

\u276f ls ~/.prefect/storage | grep \"my-flow\"    \nmy-flow_industrious-trout_hello.json\n

If a result exists at a given storage key in the storage location, it will be overwritten.

Result storage keys can only be configured on tasks at this time.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-serializer","title":"Result serializer","text":"

The result serializer can be configured with the result_serializer option. The result_serializer option defaults to a null value, which infers the serializer from the context. Generally, this means that tasks will use the result serializer configured on the flow unless otherwise specified. If there is no context to load the serializer from, the serializer defined by PREFECT_RESULTS_DEFAULT_SERIALIZER will be used. This setting defaults to Prefect's pickle serializer.

You may configure the result serializer using:
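
For example, a minimal sketch of the result_serializer keyword, which accepts either a string shorthand for a built-in serializer or a serializer instance:

from prefect import flow, task\nfrom prefect.serializers import JSONSerializer\n\n# string shorthand for a built-in serializer\n@flow(result_serializer=\"json\")\ndef my_flow():\n    my_task()\n\n# or a full serializer instance\n@task(persist_result=True, result_serializer=JSONSerializer())\ndef my_task():\n    return {\"answer\": 42}\n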

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#compressing-results","title":"Compressing results","text":"

Prefect provides a CompressedSerializer which can be used to wrap other serializers to provide compression over the bytes they generate. The compressed serializer uses lzma compression by default. We test other compression schemes provided in the Python standard library such as bz2 and zlib, but you should be able to use any compression library that provides compress and decompress methods.

You may configure compression of results using:

Note that the \"compressed/<serializer-type>\" shortcut will only work for serializers provided by Prefect. If you are using custom serializers, you must pass a full instance.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#storage-of-results-in-prefect","title":"Storage of results in Prefect","text":"

The Prefect API does not store your results in most cases for the following reasons:

There are a few cases where Prefect will store your results directly in the database. This is an optimization to reduce the overhead of reading and writing to result storage.

The following data types will be stored by the API without persistence to storage:

If persist_result is set to False, these values will never be stored.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#tracking-results","title":"Tracking results","text":"

The Prefect API tracks metadata about your results. The value of your result is only stored in specific cases. Result metadata can be seen in the UI on the \"Results\" page for flows.

Prefect tracks the following result metadata: the data type and, if persisted, the storage location.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#caching-of-results-in-memory","title":"Caching of results in memory","text":"

When running your workflows, Prefect will keep the results of all tasks and flows in memory so they can be passed downstream. In some cases, it is desirable to override this behavior. For example, if you are returning a large amount of data from a task, it can be costly to keep it in memory for the entire duration of the flow run.

Flows and tasks both include an option to drop the result from memory with cache_result_in_memory:

@flow(cache_result_in_memory=False)\ndef foo():\n    return \"pretend this is large data\"\n\n@task(cache_result_in_memory=False)\ndef bar():\n    return \"pretend this is biiiig data\"\n

When cache_result_in_memory is disabled, the result of your flow or task will be persisted by default. The result will then be pulled from storage when needed.

@flow\ndef foo():\n    result = bar()\n    state = bar(return_state=True)\n\n    # The result will be retrieved from storage here\n    state.result()\n\n    future = bar.submit()\n    # The result will be retrieved from storage here\n    future.result()\n\n@task(cache_result_in_memory=False)\ndef bar():\n    # This result will be persisted\n    return \"pretend this is biiiig data\"\n

If both cache_result_in_memory and persistence are disabled, your results will not be available downstream.

@task(persist_result=False, cache_result_in_memory=False)\ndef bar():\n    return \"pretend this is biiiig data\"\n\n@flow\ndef foo():\n    # Raises an error\n    result = bar()\n\n    # This is okay\n    state = bar(return_state=True)\n\n    # Raises an error\n    state.result()\n\n    # This is okay\n    future = bar.submit()\n\n    # Raises an error\n    future.result()\n
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-storage-types","title":"Result storage types","text":"

Result storage is responsible for reading and writing serialized data to an external location. At this time, any file system block can be used for result storage.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-serializer-types","title":"Result serializer types","text":"

A result serializer is responsible for converting your Python object to and from bytes. This is necessary to store the object outside of Python and retrieve it later.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#pickle-serializer","title":"Pickle serializer","text":"

Pickle is a standard Python protocol for encoding arbitrary Python objects. We supply a custom pickle serializer at prefect.serializers.PickleSerializer. Prefect's pickle serializer uses the cloudpickle project by default to support more object types. Alternative pickle libraries can be specified:

from prefect.serializers import PickleSerializer\n\nPickleSerializer(picklelib=\"custompickle\")\n

Benefits of the pickle serializer:

Drawbacks of the pickle serializer:

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#json-serializer","title":"JSON serializer","text":"

We supply a custom JSON serializer at prefect.serializers.JSONSerializer. Prefect's JSON serializer uses custom hooks by default to support more object types. Specifically, we add support for all types supported by Pydantic.

By default, we use the standard Python json library. Alternative JSON libraries can be specified:

from prefect.serializers import JSONSerializer\n\nJSONSerializer(jsonlib=\"orjson\")\n

Benefits of the JSON serializer:

Drawbacks of the JSON serializer:

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-types","title":"Result types","text":"

Prefect uses internal result types to capture information about the result attached to a state. The following types are used:

All result types include a get() method that can be called to return the value of the result. This is done behind the scenes when the result() method is used on states or futures.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#unpersisted-results","title":"Unpersisted results","text":"

Unpersisted results are used to represent results that have not been and will not be persisted beyond the current flow run. The value associated with the result is stored in memory, but will not be available later. Result metadata is attached to this object for storage in the API and representation in the UI.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#literal-results","title":"Literal results","text":"

Literal results are used to represent results stored in the Prefect database. The values contained by these results must always be JSON serializable.

Example:

result = LiteralResult(value=None)\nresult.json()\n# {\"type\": \"result\", \"value\": \"null\"}\n

Literal results reduce the overhead required to persist simple results.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#persisted-results","title":"Persisted results","text":"

The persisted result type contains all of the information needed to retrieve the result from storage. This includes:

Persisted result types also contain metadata for inspection without retrieving the result:

The get() method on result references retrieves the data from storage, deserializes it, and returns the original object. The get() operation will cache the resolved object to reduce the overhead of subsequent calls.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#persisted-result-blob","title":"Persisted result blob","text":"

When results are persisted to storage, they are always written as a JSON document. The schema for this is described by the PersistedResultBlob type. The document contains:

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/runtime-context/","title":"Runtime context","text":"

Prefect tracks information about the current flow or task run with a run context. The run context can be thought of as a global variable that allows the Prefect engine to determine relationships between your runs, e.g. which flow your task is called from.

The run context itself contains many internal objects used by Prefect to manage execution of your run and is only available in specific situations. For this reason, we expose a simple interface that only includes the items you care about and dynamically retrieves additional information when necessary. We call this the \"runtime context\" as it contains information that can only be accessed when a run is happening.

Mock values via environment variable

Oftentimes, you may want to mock certain values for testing purposes. For example, you may manually set an ID or a scheduled start time to ensure your code is functioning properly. Starting in version 2.10.3, you can mock values in runtime via environment variable using the schema PREFECT__RUNTIME__{SUBMODULE}__{KEY_NAME}=value:

$ export PREFECT__RUNTIME__TASK_RUN__FAKE_KEY='foo'\n$ python -c 'from prefect.runtime import task_run; print(task_run.fake_key)' # \"foo\"\n
If the environment variable mocks an existing runtime attribute, the value is cast to the same type. This works for runtime attributes of basic types (bool, int, float and str) and pendulum.DateTime. For complex types like list or dict, we suggest mocking them using monkeypatch or a similar tool.

","tags":["flows","subflows","tasks","deployments"],"boost":2},{"location":"concepts/runtime-context/#accessing-runtime-information","title":"Accessing runtime information","text":"

The prefect.runtime module is the home for all runtime context access. Each major runtime concept has its own submodule:

For example:

from prefect import flow, task\nfrom prefect import runtime\n\n@flow(log_prints=True)\ndef my_flow(x):\n    print(\"My name is\", runtime.flow_run.name)\n    print(\"I belong to deployment\", runtime.deployment.name)\n    my_task(2)\n\n@task\ndef my_task(y):\n    print(\"My name is\", runtime.task_run.name)\n    print(\"Flow run parameters:\", runtime.flow_run.parameters)\n\nmy_flow(1)\n

Outputs:

$ python my_runtime_info.py\n10:08:02.948 | INFO    | prefect.engine - Created flow run 'solid-gibbon' for flow 'my-flow'\n10:08:03.555 | INFO    | Flow run 'solid-gibbon' - My name is solid-gibbon\n10:08:03.558 | INFO    | Flow run 'solid-gibbon' - I belong to deployment None\n10:08:03.703 | INFO    | Flow run 'solid-gibbon' - Created task run 'my_task-0' for task 'my_task'\n10:08:03.704 | INFO    | Flow run 'solid-gibbon' - Executing 'my_task-0' immediately...\n10:08:04.006 | INFO    | Task run 'my_task-0' - My name is my_task-0\n10:08:04.007 | INFO    | Task run 'my_task-0' - Flow run parameters: {'x': 1}\n10:08:04.105 | INFO    | Task run 'my_task-0' - Finished in state Completed()\n10:08:04.968 | INFO    | Flow run 'solid-gibbon' - Finished in state Completed('All states completed.')\n

Above, we demonstrate access to information about the current flow run, task run, and deployment. When run without a deployment (via python my_runtime_info.py), you should see \"I belong to deployment None\" logged. When information is not available, the runtime will always return an empty value. Because this flow was run without a deployment, there is no deployment data. If this flow were deployed and executed by an agent or worker, we'd see the name of the deployment instead.

See the runtime API reference for a full list of available attributes.

","tags":["flows","subflows","tasks","deployments"],"boost":2},{"location":"concepts/runtime-context/#accessing-the-run-context-directly","title":"Accessing the run context directly","text":"

The current run context can be accessed with prefect.context.get_run_context(). This will raise an exception if no run context is available, meaning you are not in a flow or task run. If a task run context is available, it will be returned even if a flow run context is available.
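
For example, a minimal sketch of calling get_run_context from a flow and from a task (the printed attributes are illustrative):

from prefect import flow, task\nfrom prefect.context import get_run_context\n\n@task\ndef my_task():\n    # inside a task run, the task run context is returned\n    ctx = get_run_context()\n    print(ctx.task_run.name)\n\n@flow\ndef my_flow():\n    # inside the flow (and outside any task), the flow run context is returned\n    ctx = get_run_context()\n    print(ctx.flow_run.name)\n    my_task()\n\nmy_flow()\n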

Alternatively, you can access the flow run or task run context explicitly. This will, for example, allow you to access the flow run context from a task run. Note that we do not send the flow run context to distributed task workers because the context is costly to serialize and deserialize.

from prefect.context import FlowRunContext, TaskRunContext\n\nflow_run_ctx = FlowRunContext.get()\ntask_run_ctx = TaskRunContext.get()\n

Unlike get_run_context, this will not raise an error if the context is not available. Instead, it will return None.

","tags":["flows","subflows","tasks","deployments"],"boost":2},{"location":"concepts/schedules/","title":"Schedules","text":"

Schedules tell the Prefect API how to create new flow runs for you automatically on a specified cadence.

You can add a schedule to any flow deployment. The Prefect Scheduler service periodically reviews every deployment and creates new flow runs according to the schedule configured for the deployment.

There are four recommended ways to create a schedule for a deployment:

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules-through-the-ui","title":"Creating schedules through the UI","text":"

You can add, modify, and view schedules by selecting Edit under the three dot menu next to a Deployment in the Deployments tab of the Prefect UI.

To create a schedule from the UI, select Add.

Then select Interval or Cron to create a schedule.

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#schedule-types","title":"Schedule types","text":"

Prefect supports several types of schedules that cover a wide range of use cases and offer a large degree of customization:

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules-through-the-cli-with-the-deployment-build-command","title":"Creating schedules through the CLI with the deployment build command","text":"

When you build a deployment from the CLI you can use cron, interval, or rrule flags to set a schedule.

prefect deployment build demo.py:pipeline -n etl --cron \"0 0 * * *\"\n

This schedule will create flow runs for this deployment every day at midnight.

interval and rrule are the other two command line schedule flags.
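
For example, a sketch of the equivalent commands; the interval value is given in seconds and the RRule string is illustrative:

prefect deployment build demo.py:pipeline -n etl --interval 600\nprefect deployment build demo.py:pipeline -n etl --rrule 'FREQ=WEEKLY;BYDAY=MO,WE,FR'\n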

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules-through-a-python-deployment-file","title":"Creating schedules through a Python deployment file","text":"

Here's how you create the equivalent schedule in a Python deployment file, with a timezone specified.

from prefect.deployments import Deployment\nfrom prefect.server.schemas.schedules import CronSchedule\n\n# 'pipeline' is the flow defined in demo.py from the CLI example above\ncron_demo = Deployment.build_from_flow(\n    pipeline,\n    \"etl\",\n    schedule=(CronSchedule(cron=\"0 0 * * *\", timezone=\"America/Chicago\"))\n)\n

IntervalSchedule and RRuleSchedule are the other two Python class schedule options.
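
As a sketch, these classes are constructed the same way and passed as the schedule argument to Deployment.build_from_flow shown above:

from datetime import timedelta\nfrom prefect.server.schemas.schedules import IntervalSchedule, RRuleSchedule\n\n# run every 10 minutes\ninterval_schedule = IntervalSchedule(interval=timedelta(minutes=10), timezone=\"America/Chicago\")\n\n# run on Monday, Wednesday, and Friday\nrrule_schedule = RRuleSchedule(rrule=\"FREQ=WEEKLY;BYDAY=MO,WE,FR\")\n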

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules-through-a-deployment-yaml-files-schedule-section","title":"Creating schedules through a deployment YAML file's schedule section","text":"

Alternatively, you can edit the schedule section of the deployment YAML file.

schedule:\n  cron: 0 0 * * *\n  timezone: America/Chicago\n

Let's discuss the three schedule types in more detail.

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#cron","title":"Cron","text":"

A schedule may be specified with a cron pattern. Users may also provide a timezone to enforce DST behaviors.

Cron uses croniter to specify datetime iteration with a cron-like format.

Cron properties include:

| Property | Description |
| --- | --- |
| cron | A valid cron string. (Required) |
| day_or | Boolean indicating how croniter handles day and day_of_week entries. Default is True. |
| timezone | String name of a time zone. (See the IANA Time Zone Database for valid time zones.) |

The day_or property defaults to True, matching cron, which connects those values using OR. If False, the values are connected using AND. This behaves like fcron and enables you to, for example, define a job that executes each 2nd Friday of a month by setting the days of month and the weekday.

Supported croniter features

While Prefect supports most features of croniter for creating cron-like schedules, we do not currently support \"R\" random or \"H\" hashed keyword expressions or the schedule jittering possible with those expressions.

Daylight saving time considerations

If the timezone is a DST-observing one, then the schedule will adjust itself appropriately.

The cron rules for DST are based on schedule times, not intervals. This means that an hourly cron schedule fires on every new schedule hour, not every elapsed hour. For example, when clocks are set back, this results in a two-hour pause as the schedule will fire the first time 1am is reached and the first time 2am is reached, 120 minutes later.

Longer schedules, such as one that fires at 9am every morning, will adjust for DST automatically.

See the Python CronSchedule class docs for more information.

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#interval","title":"Interval","text":"

An Interval schedule creates new flow runs on a regular interval measured in seconds. Intervals are computed from an optional anchor_date. For example, here's how you can create a schedule for every 10 minutes in the deployment YAML file.

schedule:\n  interval: 600\n  timezone: America/Chicago\n

Interval properties include:

| Property | Description |
| --- | --- |
| interval | datetime.timedelta indicating the time between flow runs. (Required) |
| anchor_date | datetime.datetime indicating the starting or \"anchor\" date to begin the schedule. If no anchor_date is supplied, the current UTC time is used. |
| timezone | String name of a time zone, used to enforce localization behaviors like DST boundaries. (See the IANA Time Zone Database for valid time zones.) |

Note that the anchor_date does not indicate a \"start time\" for the schedule, but rather a fixed point in time from which to compute intervals. If the anchor date is in the future, then schedule dates are computed by subtracting the interval from it. Note that in the example below, we import the Pendulum Python package for easy datetime manipulation. Pendulum isn\u2019t required, but it\u2019s a useful tool for specifying dates.
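
A minimal sketch of an anchored interval schedule; the anchor date below is illustrative:

from datetime import timedelta\n\nimport pendulum\nfrom prefect.server.schemas.schedules import IntervalSchedule\n\nschedule = IntervalSchedule(\n    interval=timedelta(hours=1),\n    anchor_date=pendulum.datetime(2023, 1, 1, 0, 0, tz=\"America/Chicago\"),\n)\n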

Daylight saving time considerations

If the schedule's anchor_date or timezone are provided with a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals.

For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that appear to both be scheduled for 1am local time, even though they are an hour apart in UTC time.

For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone.

See the Python IntervalSchedule class docs for more information.

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#rrule","title":"RRule","text":"

An RRule schedule supports iCal recurrence rules (RRules), which provide convenient syntax for creating repetitive schedules. Schedules can repeat on a frequency from yearly down to every minute.

RRule uses the dateutil rrule module to specify iCal recurrence rules.

RRules are appropriate for any kind of calendar-date manipulation, including simple repetition, irregular intervals, exclusions, week day or day-of-month adjustments, and more. RRules can represent complex logic like:

RRule properties include:

| Property | Description |
| --- | --- |
| rrule | String representation of an RRule schedule. See the rrulestr examples for syntax. |
| timezone | String name of a time zone. See the IANA Time Zone Database for valid time zones. |

You may find it useful to use an RRule string generator such as the iCalendar.org RRule Tool to help create valid RRules.

For example, the following RRule schedule creates flow runs on Monday, Wednesday, and Friday until July 30, 2024.

schedule:\nrrule: 'FREQ=WEEKLY;BYDAY=MO,WE,FR;UNTIL=20240730T040000Z'\n

Max RRule length

Note that the maximum supported character length of an rrulestr is 6,500 characters.

Daylight saving time considerations

Note that as a calendar-oriented standard, RRules are sensitive to the initial timezone provided. A 9am daily schedule with a DST-aware start date will maintain a local 9am time through DST boundaries. A 9am daily schedule with a UTC start date will maintain a 9am UTC time.

See the Python RRuleSchedule class docs for more information.

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#the-scheduler-service","title":"The Scheduler service","text":"

The Scheduler service is started automatically when prefect server start is run and it is a built-in service of Prefect Cloud.

By default, the Scheduler service visits deployments on a 60-second loop, though recently-modified deployments will be visited more frequently. The Scheduler evaluates each deployment's schedule and creates new runs appropriately. For typical deployments, it will create the next three runs, though more runs will be scheduled if the next 3 would all start in the next hour.

More specifically, the Scheduler tries to create the smallest number of runs that satisfy the following constraints, in order:

These behaviors can all be adjusted through the relevant settings that can be viewed with the terminal command prefect config view --show-defaults:

PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE='100'\nPREFECT_API_SERVICES_SCHEDULER_ENABLED='True'\nPREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE='500'\nPREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS='60.0'\nPREFECT_API_SERVICES_SCHEDULER_MIN_RUNS='3'\nPREFECT_API_SERVICES_SCHEDULER_MAX_RUNS='100'\nPREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME='1:00:00'\nPREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME='100 days, 0:00:00'\n

See the Settings docs for more information on altering your settings.

These settings mean that if a deployment has an hourly schedule, the default settings will create runs for the next 4 days (or 100 hours). If it has a weekly schedule, the default settings will maintain the next 14 runs (up to 100 days in the future).

The Scheduler does not affect execution

The Prefect Scheduler service only creates new flow runs and places them in Scheduled states. It is not involved in flow or task execution.

If you change a schedule, previously scheduled flow runs that have not started are removed, and new scheduled flow runs are created to reflect the new schedule.

To remove all scheduled runs for a flow deployment, update the deployment YAML with no schedule value. Alternatively, remove the schedule via the UI.

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/settings/","title":"Profiles & Configuration","text":"

Prefect's settings are documented and type-validated. By modifying the default settings, you can customize various aspects of the system.

Settings can be viewed from the CLI or the UI.

You can override the setting for a profile with environment variables.

Prefect profiles are groups of settings that you can persist on your machine. When you change profiles, all of the settings configured in the profile are applied. You can apply profiles to individual commands or set a profile for your environment.

","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#commonly-configured-settings","title":"Commonly configured settings","text":"

This section describes some commonly configured settings for Prefect installations. See Configuring settings for details on setting and unsetting configuration values.

","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#prefect_api_url","title":"PREFECT_API_URL","text":"

The PREFECT_API_URL value specifies the API endpoint of your Prefect Cloud workspace or a Prefect server instance.

For example, using a local Prefect server instance.

PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n

Using Prefect Cloud:

PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n

View your Account ID and Workspace ID in your browser URL when logged into Prefect Cloud. For example: https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here.

PREFECT_API_URL setting for agents

When using workers, agents, and work pools that can create flow runs for deployments in remote environments, PREFECT_API_URL must be set for the environment in which your worker or agent is running.

If you want the worker or agent to communicate with Prefect Cloud or a Prefect server instance from a remote execution environment such as a VM or Docker container, you must configure PREFECT_API_URL in that environment.

Running the Prefect UI behind a reverse proxy

When using a reverse proxy (such as Nginx or Traefik) to proxy traffic to a locally-hosted Prefect UI instance, the Prefect server also needs to be configured to know how to connect to the API. The PREFECT_UI_API_URL should be set to the external proxy URL (e.g. if your external URL is https://prefect-server.example.com/ then set PREFECT_UI_API_URL=https://prefect-server.example.com/api for the Prefect server process). You can also accomplish this by setting PREFECT_API_URL to the API URL, as this setting is used as a fallback if PREFECT_UI_API_URL is not set.
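
For example, setting the value from the CLI, using the illustrative hostname above:

prefect config set PREFECT_UI_API_URL=\"https://prefect-server.example.com/api\"\n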

","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#prefect_api_key","title":"PREFECT_API_KEY","text":"

The PREFECT_API_KEY value specifies the API key used to authenticate with your Prefect Cloud workspace.

PREFECT_API_KEY=\"[API-KEY]\"\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#prefect_home","title":"PREFECT_HOME","text":"

The PREFECT_HOME value specifies the local Prefect directory for configuration files, profiles, and the location of the default Prefect SQLite database.

PREFECT_HOME='~/.prefect'\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#prefect_local_storage_path","title":"PREFECT_LOCAL_STORAGE_PATH","text":"

The PREFECT_LOCAL_STORAGE_PATH value specifies the default location of local storage for flow runs.

PREFECT_LOCAL_STORAGE_PATH='${PREFECT_HOME}/storage'\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#database-settings","title":"Database settings","text":"

Prefect provides several self-hosting database configuration settings you can read about here.

","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#logging-settings","title":"Logging settings","text":"

Prefect provides several logging configuration settings that you can read about in the logging docs.

","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#configuring-settings","title":"Configuring settings","text":"

The prefect config CLI commands enable you to view, set, and unset settings.

Command Description set Change the value for a setting. unset Restore the default value for a setting. view Display the current settings.","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#viewing-settings-from-the-cli","title":"Viewing settings from the CLI","text":"

The prefect config view command will display settings that override default values.

$ prefect config view\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='DEBUG'\n

You can show the sources of values with --show-sources:

$ prefect config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='DEBUG' (from env)\n

You can also include default values with --show-defaults:

$ prefect config view --show-defaults\nPREFECT_PROFILE='default'\nPREFECT_AGENT_PREFETCH_SECONDS='10' (from defaults)\nPREFECT_AGENT_QUERY_INTERVAL='5.0' (from defaults)\nPREFECT_API_KEY='None' (from defaults)\nPREFECT_API_REQUEST_TIMEOUT='30.0' (from defaults)\nPREFECT_API_URL='None' (from defaults)\n...\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#setting-and-clearing-values","title":"Setting and clearing values","text":"

The prefect config set command lets you change the value of a default setting.

A commonly used example is setting the PREFECT_API_URL, which you may need to change when interacting with different Prefect server instances or Prefect Cloud.

# use a local Prefect server\nprefect config set PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n\n# use Prefect Cloud\nprefect config set PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n

If you want to configure a setting to use its default value, use the prefect config unset command.

prefect config unset PREFECT_API_URL\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#overriding-defaults-with-environment-variables","title":"Overriding defaults with environment variables","text":"

All settings have keys that match the environment variable that can be used to override them.

For example, configuring the home directory:

# environment variable\nexport PREFECT_HOME=\"/path/to/home\"\n
# python\nimport prefect.settings\nprefect.settings.PREFECT_HOME.value()  # PosixPath('/path/to/home')\n

Configuring the server's port:

# environment variable\nexport PREFECT_SERVER_API_PORT=4242\n
# python\nprefect.settings.PREFECT_SERVER_API_PORT.value()  # 4242\n

","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#configuration-profiles","title":"Configuration profiles","text":"

Prefect allows you to persist settings instead of setting an environment variable each time you open a new shell. Settings are persisted to profiles, which allow you to move between groups of settings quickly.

The prefect profile CLI commands enable you to create, review, and manage profiles.

Command Description create Create a new profile. delete Delete the given profile. inspect Display settings from a given profile; defaults to active. ls List profile names. rename Change the name of a profile. use Switch the active profile.

If you configured settings for a profile, prefect profile inspect displays those settings:

$ prefect profile inspect\nPREFECT_PROFILE = \"default\"\nPREFECT_API_KEY = \"pnu_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\"\nPREFECT_API_URL = \"http://127.0.0.1:4200/api\"\n

You can pass the name of a profile to view its settings:

$ prefect profile create test\n$ prefect profile inspect test\nPREFECT_PROFILE=\"test\"\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#creating-and-removing-profiles","title":"Creating and removing profiles","text":"

Create a new profile with no settings:

$ prefect profile create test\nCreated profile 'test' at /Users/terry/.prefect/profiles.toml.\n

Create a new profile foo with settings cloned from an existing default profile:

$ prefect profile create foo --from default\nCreated profile 'foo' matching 'default' at /Users/terry/.prefect/profiles.toml.\n

Rename a profile:

$ prefect profile rename temp test\nRenamed profile 'temp' to 'test'.\n

Remove a profile:

$ prefect profile delete test\nRemoved profile 'test'.\n

Removing the default profile resets it:

$ prefect profile delete default\nReset profile 'default'.\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#change-values-in-profiles","title":"Change values in profiles","text":"

Set a value in the current profile:

$ prefect config set VAR=X\nSet variable 'VAR' to 'X'\nUpdated profile 'default'\n

Set multiple values in the current profile:

$ prefect config set VAR2=Y VAR3=Z\nSet variable 'VAR2' to 'Y'\nSet variable 'VAR3' to 'Z'\nUpdated profile 'default'\n

You can set a value in another profile by passing the --profile NAME option to a CLI command:

$ prefect --profile \"foo\" config set VAR=Y\nSet variable 'VAR' to 'Y'\nUpdated profile 'foo'\n

Unset values in the current profile to restore the defaults:

$ prefect config unset VAR2 VAR3\nUnset variable 'VAR2'\nUnset variable 'VAR3'\nUpdated profile 'default'\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#inspecting-profiles","title":"Inspecting profiles","text":"

See a list of available profiles:

$ prefect profile ls\n* default\ncloud\ntest\nlocal\n

View all settings for a profile:

$ prefect profile inspect cloud\nPREFECT_API_URL='https://api.prefect.cloud/api/accounts/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx\nx/workspaces/xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'\nPREFECT_API_KEY='xxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'          
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#using-profiles","title":"Using profiles","text":"

The profile default is used by default. There are several methods to switch to another profile.

The recommended method is to use the prefect profile use command with the name of the profile:

$ prefect profile use foo\nProfile 'foo' now active.\n

Alternatively, you may set the environment variable PREFECT_PROFILE to the name of the profile:

$ export PREFECT_PROFILE=foo\n

Or, specify the profile in the CLI command for one-time usage:

$ prefect --profile \"foo\" ...\n

Note that this option must come before the subcommand. For example, to list flow runs using the profile foo:

$ prefect --profile \"foo\" flow-run ls\n

You may use the -p flag as well:

$ prefect -p \"foo\" flow-run ls\n

You may also create an 'alias' to automatically use your profile:

$ alias prefect-foo=\"prefect --profile 'foo' \"\n# uses our profile!\n$ prefect-foo config view  
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#conflicts-with-environment-variables","title":"Conflicts with environment variables","text":"

If setting the profile from the CLI with --profile, environment variables that conflict with settings in the profile will be ignored.

In all other cases, environment variables will take precedence over the value in the profile.

For example, a value set in a profile will be used by default:

$ prefect config set PREFECT_LOGGING_LEVEL=\"ERROR\"\n$ prefect config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='ERROR' (from profile)\n

But, setting an environment variable will override the profile setting:

$ export PREFECT_LOGGING_LEVEL=\"DEBUG\"\n$ prefect config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='DEBUG' (from env)\n

Unless the profile is explicitly requested when using the CLI:

$ prefect --profile default config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='ERROR' (from profile)\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#profile-files","title":"Profile files","text":"

Profiles are persisted to the PREFECT_PROFILES_PATH, which can be changed with an environment variable.

By default, it is stored in your PREFECT_HOME directory:

$ prefect config view --show-defaults | grep PROFILES_PATH\nPREFECT_PROFILES_PATH='~/.prefect/profiles.toml'\n

The TOML format is used to store profile data.

","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/states/","title":"States","text":"","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#overview","title":"Overview","text":"

States are rich objects that contain information about the status of a particular task run or flow run. While you don't need to know the details of the states to use Prefect, you can give your workflows superpowers by taking advantage of them.

At any moment, you can learn anything you need to know about a task or flow by examining its current state or the history of its states. For example, a state could tell you:

By manipulating a relatively small number of task states, Prefect flows can harness the complexity that emerges in workflows.

Only runs have states

Though we often refer to the \"state\" of a flow or a task, what we really mean is the state of a flow run or a task run. Flows and tasks are templates that describe what a system does; only when we run the system does it also take on a state. So while we might refer to a task as \"running\" or being \"successful\", we really mean that a specific instance of the task is in that state.

","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#state-types","title":"State Types","text":"

States have names and types. State types are canonical, with specific orchestration rules that apply to transitions into and out of each state type. A state's name is often, but not always, synonymous with its type. For example, a task run that is running for the first time has a state with the name Running and the type RUNNING. However, if the task retries, that same task run will have the name Retrying and the type RUNNING. Each time the task run transitions into the RUNNING state, the same orchestration rules are applied.

There are terminal state types from which there are no orchestrated transitions to any other state type.

The full complement of states and state types includes:

| Name | Type | Terminal? | Description |
| --- | --- | --- | --- |
| Scheduled | SCHEDULED | No | The run will begin at a particular time in the future. |
| Late | SCHEDULED | No | The run's scheduled start time has passed, but it has not transitioned to PENDING (5 seconds by default). |
| AwaitingRetry | SCHEDULED | No | The run did not complete successfully because of a code issue and had remaining retry attempts. |
| Pending | PENDING | No | The run has been submitted to run, but is waiting on necessary preconditions to be satisfied. |
| Running | RUNNING | No | The run code is currently executing. |
| Retrying | RUNNING | No | The run code is currently executing after previously failing to complete successfully. |
| Paused | PAUSED | No | The run code has stopped executing until it receives manual approval to proceed. |
| Cancelling | CANCELLING | No | The infrastructure on which the code was running is being cleaned up. |
| Cancelled | CANCELLED | Yes | The run did not complete because a user determined that it should not. |
| Completed | COMPLETED | Yes | The run completed successfully. |
| Failed | FAILED | Yes | The run did not complete because of a code issue and had no remaining retry attempts. |
| Crashed | CRASHED | Yes | The run did not complete because of an infrastructure issue. |
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#returned-values","title":"Returned values","text":"

When calling a task or a flow, there are three types of returned values:

Returning data is the default behavior any time you call your_task().

Returning Prefect State occurs anytime you call your task or flow with the argument return_state=True.

Returning PrefectFuture is achieved by calling your_task.submit().

","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#return-data","title":"Return Data","text":"

By default, running a task will return data:

from prefect import flow, task\n\n@task\ndef add_one(x):\n    return x + 1\n\n@flow\ndef my_flow():\n    result = add_one(1) # return int\n

The same rule applies for a subflow:

@flow\ndef subflow():\n    return 42\n\n@flow\ndef my_flow():\n    result = subflow() # return data\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#return-prefect-state","title":"Return Prefect State","text":"

To return a State instead, add return_state=True as a parameter of your task call.

@flow\ndef my_flow():\n    state = add_one(1, return_state=True) # return State\n

To get data from a State, call .result().

@flow\ndef my_flow():\n    state = add_one(1, return_state=True) # return State\n    result = state.result() # return int\n

The same rule applies for a subflow:

@flow\ndef subflow():\n    return 42\n\n@flow\ndef my_flow():\n    state = subflow(return_state=True) # return State\n    result = state.result() # return int\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#return-a-prefectfuture","title":"Return a PrefectFuture","text":"

To get a PrefectFuture, add .submit() to your task call.

@flow\ndef my_flow():\n    future = add_one.submit(1) # return PrefectFuture\n

To get data from a PrefectFuture, call .result().

@flow\ndef my_flow():\n    future = add_one.submit(1) # return PrefectFuture\n    result = future.result() # return data\n

To get a State from a PrefectFuture, call .wait().

@flow\ndef my_flow():\n    future = add_one.submit(1) # return PrefectFuture\n    state = future.wait() # return State\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#final-state-determination","title":"Final state determination","text":"

The final state of a flow is determined by its return value. The following rules apply:

See the Final state determination section of the Flows documentation for further details and examples.
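
As a rough illustration of these rules, a flow that returns task run states is marked Failed when any of the returned states failed:

from prefect import flow, task\n\n@task\ndef always_fails():\n    raise ValueError(\"oh no\")\n\n@task\ndef always_succeeds():\n    return \"ok\"\n\n@flow\ndef my_flow():\n    # returning the task states lets them determine the flow's final state;\n    # this flow run finishes as Failed because one returned state is Failed\n    return [\n        always_fails(return_state=True),\n        always_succeeds(return_state=True),\n    ]\n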

","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#state-change-hooks","title":"State Change Hooks","text":"

State change hooks execute code in response to changes in flow or task run states, enabling you to define actions for specific state transitions in a workflow.

","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#a-simple-example","title":"A simple example","text":"
from prefect import flow\n\ndef my_success_hook(flow, flow_run, state):\n    print(\"Flow run succeeded!\")\n\n@flow(on_completion=[my_success_hook])\ndef my_flow():\n    return 42\n\nmy_flow()\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#create-and-use-hooks","title":"Create and use hooks","text":"","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#available-state-change-hooks","title":"Available state change hooks","text":"Type Flow Task Description on_completion \u2713 \u2713 Executes when a flow or task run enters a Completed state. on_failure \u2713 \u2713 Executes when a flow or task run enters a Failed state. on_cancellation \u2713 - Executes when a flow run enters a Cancelling state. on_crashed \u2713 - Executes when a flow run enters a Crashed state.","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#create-flow-run-state-change-hooks","title":"Create flow run state change hooks","text":"
def my_flow_hook(flow: Flow, flow_run: FlowRun, state: State):\n    \"\"\"This is the required signature for a flow run state\n    change hook. This hook can only be passed into flows.\n    \"\"\"\n\n# pass hook as a list of callables\n@flow(on_completion=[my_flow_hook])\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#create-task-run-state-change-hooks","title":"Create task run state change hooks","text":"
def my_task_hook(task: Task, task_run: TaskRun, state: State):\n    \"\"\"This is the required signature for a task run state change\n    hook. This hook can only be passed into tasks.\n    \"\"\"\n\n# pass hook as a list of callables\n@task(on_failure=[my_task_hook])\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#use-multiple-state-change-hooks","title":"Use multiple state change hooks","text":"

State change hooks are versatile, allowing you to specify multiple state change hooks for the same state transition, or to use the same state change hook for different transitions:

def my_success_hook(task, task_run, state):\n    print(\"Task run succeeded!\")\n\ndef my_failure_hook(task, task_run, state):\n    print(\"Task run failed!\")\n\ndef my_succeed_or_fail_hook(task, task_run, state):\n    print(\"If the task run succeeds or fails, this hook runs.\")\n\n@task(\n    on_completion=[my_success_hook, my_succeed_or_fail_hook],\n    on_failure=[my_failure_hook, my_succeed_or_fail_hook]\n)\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#more-examples-of-state-change-hooks","title":"More examples of state change hooks","text":"","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/storage/","title":"Storage","text":"

Storage lets you configure how flow code for deployments is persisted and retrieved by Prefect agents. Anytime you build a deployment, a storage block is used to upload the entire directory containing your workflow code (along with supporting files) to its configured location. This helps ensure portability of your relative imports, configuration files, and more. Note that your environment dependencies (for example, external Python packages) still need to be managed separately.

If no storage is explicitly configured, Prefect will use LocalFileSystem storage by default. Local storage works fine for many local flow run scenarios, especially when testing and getting started. However, due to the inherent lack of portability, many use cases are better served by using remote storage such as S3 or Google Cloud Storage.

Prefect supports creating multiple storage configurations and switching between storage as needed.

Storage uses blocks

Blocks are the Prefect technology underlying storage, and they enable you to do so much more.

In addition to creating storage blocks via the Prefect CLI, you can now create storage blocks and other kinds of block configuration objects via the Prefect UI and Prefect Cloud.

","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":2},{"location":"concepts/storage/#configuring-storage-for-a-deployment","title":"Configuring storage for a deployment","text":"

When building a deployment for a workflow, you have two options for configuring workflow storage:

","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":2},{"location":"concepts/storage/#using-the-default","title":"Using the default","text":"

Anytime you call prefect deployment build without providing the --storage-block flag, a default LocalFileSystem block will be used. Note that this block will always use your present working directory as its basepath (which is usually desirable). You can see the block's settings by inspecting the deployment.yaml file that Prefect creates after calling prefect deployment build.

While you generally can't run a deployment stored on a local file system on other machines, any agent running on the same machine will be able to successfully run your deployment.

","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":2},{"location":"concepts/storage/#supported-storage-blocks","title":"Supported storage blocks","text":"

Current options for deployment storage blocks include:

| Storage | Description | Required Library |
| --- | --- | --- |
| Local File System | Store code in a run's local file system. | |
| Remote File System | Store code in any filesystem supported by fsspec. | |
| AWS S3 Storage | Store code in an AWS S3 bucket. | s3fs |
| Azure Storage | Store code in Azure Datalake and Azure Blob Storage. | adlfs |
| GitHub Storage | Store code in a GitHub repository. | |
| Google Cloud Storage | Store code in a Google Cloud Platform (GCP) Cloud Storage bucket. | gcsfs |
| SMB | Store code in SMB shared network storage. | smbprotocol |
| GitLab Repository | Store code in a GitLab repository. | prefect-gitlab |
| Bitbucket Repository | Store code in a Bitbucket repository. | prefect-bitbucket |

Accessing files may require storage filesystem libraries

Note that the appropriate filesystem library supporting the storage location must be installed prior to building a deployment with a storage block or accessing the storage location from flow scripts.

For example, the AWS S3 Storage block requires the s3fs library.

See Filesystem package dependencies for more information about configuring filesystem libraries in your execution environment.

","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":2},{"location":"concepts/storage/#configuring-a-block","title":"Configuring a block","text":"

You can create these blocks either via the UI or via Python.

You can create, edit, and manage storage blocks in the Prefect UI and Prefect Cloud. On a Prefect server, blocks are created in the server's database. On Prefect Cloud, blocks are created on a workspace.

To create a new block, select the + button. Prefect displays a library of block types you can configure to create blocks to be used by your flows.

Select Add + to configure a new storage block based on a specific block type. Prefect displays a Create page that enables specifying storage settings.

You can also create blocks using the Prefect Python API:

from prefect.filesystems import S3\n\nblock = S3(bucket_path=\"my-bucket/a-sub-directory\", \n           aws_access_key_id=\"foo\", \n           aws_secret_access_key=\"bar\"\n)\nblock.save(\"example-block\")\n

This block configuration is now available to be used by anyone with appropriate access to your Prefect API. We can use this block to build a deployment by passing its slug to the prefect deployment build command. The storage block slug is formatted as block-type/block-name. In this case, s3/example-block for an AWS S3 Bucket block named example-block. See block identifiers for details.

prefect deployment build ./flows/my_flow.py:my_flow --name \"Example Deployment\" --storage-block s3/example-block\n

This command will upload the contents of your flow's directory to the designated storage location, then the full deployment specification will be persisted to a newly created deployment.yaml file. For more information, see Deployments.

","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":2},{"location":"concepts/task-runners/","title":"Task runners","text":"

Task runners enable you to engage specific executors for Prefect tasks, such as for concurrent, parallel, or distributed execution of tasks.

Task runners are not required for task execution. If you call a task function directly, the task executes as a regular Python function, without a task runner, and produces whatever result is returned by the function.

","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#task-runner-overview","title":"Task runner overview","text":"

Calling a task function from within a flow, using the default task settings, executes the function sequentially. Execution of the task function blocks execution of the flow until the task completes. This means, by default, calling multiple tasks in a flow causes them to run in order.

However, that's not the only way to run tasks!

You can use the .submit() method on a task function to submit the task to a task runner. Using a task runner enables you to control whether tasks run sequentially, concurrently, or if you want to take advantage of a parallel or distributed execution library such as Dask or Ray.

Using the .submit() method to submit a task also causes the task run to return a PrefectFuture, a Prefect object that contains both any data returned by the task function and a State, a Prefect object indicating the state of the task run.

Prefect currently provides the following built-in task runners:

In addition, the following Prefect-developed task runners for parallel or distributed task execution may be installed as Prefect Integrations.

Concurrency versus parallelism

The words \"concurrency\" and \"parallelism\" may sound the same, but they mean different things in computing.

Concurrency refers to a system that can do more than one thing simultaneously, but not at the exact same time. It may be more accurate to think of concurrent execution as non-blocking: within the restrictions of resources available in the execution environment and data dependencies between tasks, execution of one task does not block execution of other tasks in a flow.

Parallelism refers to a system that can do more than one thing at the exact same time. Again, within the restrictions of resources available, parallel execution can run tasks at the same time, such as for operations mapped across a dataset.

","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-a-task-runner","title":"Using a task runner","text":"

You do not need to specify a task runner for a flow unless your tasks require a specific type of execution.

To configure your flow to use a specific task runner, import a task runner and assign it as an argument for the flow when the flow is defined.

Remember to call .submit() when using a task runner

Make sure you use .submit() to run your task with a task runner. Calling the task directly, without .submit(), from within a flow will run the task sequentially instead of using a specified task runner.

For example, you can use ConcurrentTaskRunner to allow tasks to switch when they would block.

from prefect import flow, task\nfrom prefect.task_runners import ConcurrentTaskRunner\nimport time\n\n@task\ndef stop_at_floor(floor):\n    print(f\"elevator moving to floor {floor}\")\n    time.sleep(floor)\n    print(f\"elevator stops on floor {floor}\")\n\n@flow(task_runner=ConcurrentTaskRunner())\ndef elevator():\n    for floor in range(10, 0, -1):\n        stop_at_floor.submit(floor)\n

If you specify an uninitialized task runner class, a task runner instance of that type is created with the default settings. You can also pass additional configuration parameters for task runners that accept parameters, such as DaskTaskRunner and RayTaskRunner.

Default task runner

If you don't specify a task runner for a flow and you call a task with .submit() within the flow, Prefect uses the default ConcurrentTaskRunner.

","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#running-tasks-sequentially","title":"Running tasks sequentially","text":"

Sometimes, it's useful to force tasks to run sequentially to make it easier to reason about the behavior of your program. Switching to the SequentialTaskRunner will force submitted tasks to run sequentially rather than concurrently.

Synchronous and asynchronous tasks

The SequentialTaskRunner works with both synchronous and asynchronous task functions. Asynchronous tasks are Python functions defined using async def rather than def.

The following example demonstrates using the SequentialTaskRunner to ensure that tasks run sequentially. In the example, the flow glass_tower runs the task stop_at_floor for floors one through 38, in that order.

from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\nimport random\n\n@task\ndef stop_at_floor(floor):\n    situation = random.choice([\"on fire\",\"clear\"])\n    print(f\"elevator stops on {floor} which is {situation}\")\n\n@flow(task_runner=SequentialTaskRunner(),\n      name=\"towering-infernflow\",\n      )\ndef glass_tower():\n    for floor in range(1, 39):\n        stop_at_floor.submit(floor)\n\nglass_tower()\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-multiple-task-runners","title":"Using multiple task runners","text":"

Each flow can only have a single task runner, but sometimes you may want a subset of your tasks to run using a specific task runner. In this case, you can create subflows for tasks that need to use a different task runner.

For example, you can have a flow (in the example below called sequential_flow) that runs its tasks locally using the SequentialTaskRunner. If you have some tasks that can run more efficiently in parallel on a Dask cluster, you could create a subflow (such as dask_subflow) to run those tasks using the DaskTaskRunner.

from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef hello_local():\n    print(\"Hello!\")\n\n@task\ndef hello_dask():\n    print(\"Hello from Dask!\")\n\n@flow(task_runner=SequentialTaskRunner())\ndef sequential_flow():\n    hello_local.submit()\n    dask_subflow()\n    hello_local.submit()\n\n@flow(task_runner=DaskTaskRunner())\ndef dask_subflow():\n    hello_dask.submit()\n\nif __name__ == \"__main__\":\n    sequential_flow()\n

Guarding main

Note that you should guard the main function by using if __name__ == \"__main__\" to avoid issues with parallel processing.

This script outputs the following logs demonstrating the use of the Dask task runner:

20:14:29.785 | INFO    | prefect.engine - Created flow run 'ivory-caiman' for flow 'sequential-flow'\n20:14:29.785 | INFO    | Flow run 'ivory-caiman' - Starting 'SequentialTaskRunner'; submitted tasks will be run sequentially...\n20:14:29.880 | INFO    | Flow run 'ivory-caiman' - Created task run 'hello_local-7633879f-0' for task 'hello_local'\n20:14:29.881 | INFO    | Flow run 'ivory-caiman' - Executing 'hello_local-7633879f-0' immediately...\nHello!\n20:14:29.904 | INFO    | Task run 'hello_local-7633879f-0' - Finished in state Completed()\n20:14:29.952 | INFO    | Flow run 'ivory-caiman' - Created subflow run 'nimble-sparrow' for flow 'dask-subflow'\n20:14:29.953 | INFO    | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n20:14:31.862 | INFO    | prefect.task_runner.dask - The Dask dashboard is available at http://127.0.0.1:8787/status\n20:14:31.901 | INFO    | Flow run 'nimble-sparrow' - Created task run 'hello_dask-2b96d711-0' for task 'hello_dask'\n20:14:32.370 | INFO    | Flow run 'nimble-sparrow' - Submitted task run 'hello_dask-2b96d711-0' for execution.\nHello from Dask!\n20:14:33.358 | INFO    | Flow run 'nimble-sparrow' - Finished in state Completed('All states completed.')\n20:14:33.368 | INFO    | Flow run 'ivory-caiman' - Created task run 'hello_local-7633879f-1' for task 'hello_local'\n20:14:33.368 | INFO    | Flow run 'ivory-caiman' - Executing 'hello_local-7633879f-1' immediately...\nHello!\n20:14:33.386 | INFO    | Task run 'hello_local-7633879f-1' - Finished in state Completed()\n20:14:33.399 | INFO    | Flow run 'ivory-caiman' - Finished in state Completed('All states completed.')\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-results-from-submitted-tasks","title":"Using results from submitted tasks","text":"

When you use .submit() to submit a task to a task runner, the task runner creates a PrefectFuture for access to the state and result of the task.

A PrefectFuture is an object that provides access to a computation happening in a task runner \u2014 even if that computation is happening on a remote system.

In the following example, we save the return value of calling .submit() on the task say_hello to the variable future, and then we print the type of the variable:

from prefect import flow, task\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@flow\ndef hello_world():\n    future = say_hello.submit(\"Marvin\")\n    print(f\"variable 'future' is type {type(future)}\")\n\nhello_world()\n

When you run this code, you'll see that the variable future is a PrefectFuture:

variable 'future' is type <class 'prefect.futures.PrefectFuture'>\n

When you pass a future into a task, Prefect waits for the \"upstream\" task \u2014 the one that the future references \u2014 to reach a final state before starting the downstream task.

This means that the downstream task won't receive the PrefectFuture you passed as an argument. Instead, the downstream task will receive the value that the upstream task returned.

Take a look at how this works in the following example:

from prefect import flow, task\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@task\ndef print_result(result):\n    print(type(result))\n    print(result)\n\n@flow(name=\"hello-flow\")\ndef hello_world():\n    future = say_hello.submit(\"Marvin\")\n    print_result.submit(future)\n\nhello_world()\n
<class 'str'>\nHello Marvin!\n

Futures have a few useful methods. For example, you can get the return value of the task run with .result():

from prefect import flow, task\n\n@task\ndef my_task():\n    return 42\n\n@flow\ndef my_flow():\n    future = my_task.submit()\n    result = future.result()\n    print(result)\n\nmy_flow()\n

The .result() method will wait for the task to complete before returning the result to the caller. If the task run fails, .result() will raise the task run's exception. You may disable this behavior with the raise_on_failure option:

from prefect import flow, task\n\n@task\ndef my_task():\n    return \"I'm a task!\"\n\n\n@flow\ndef my_flow():\n    future = my_task.submit()\n    result = future.result(raise_on_failure=False)\n    if future.get_state().is_failed():\n        # `result` is an exception! handle accordingly\n        ...\n    else:\n        # `result` is the expected return value of our task\n        ...\n

You can retrieve the current state of the task run associated with the PrefectFuture using .get_state():

@flow\ndef my_flow():\n    future = my_task.submit()\n    state = future.get_state()\n

You can also wait for a task to complete by using the .wait() method:

@flow\ndef my_flow():\n    future = my_task.submit()\n    final_state = future.wait()\n

You can include a timeout in the wait call to perform logic if the task has not finished in a given amount of time:

@flow\ndef my_flow():\n    future = my_task.submit()\n    final_state = future.wait(1)  # Wait one second max\n    if final_state:\n        # Take action if the task is done\n        result = final_state.result()\n    else:\n        ... # Task action if the task is still running\n

You may also use the wait_for=[] parameter when calling a task, specifying upstream task dependencies. This enables you to control task execution order for tasks that do not share data dependencies.

@task\ndef task_a():\n    pass\n\n@task\ndef task_b():\n    pass\n\n@task\ndef task_c():\n    pass\n\n@task\ndef task_d():\n    pass\n\n@flow\ndef my_flow():\n    a = task_a.submit()\n    b = task_b.submit()\n    # Wait for task_a and task_b to complete\n    c = task_c.submit(wait_for=[a, b])\n    # task_d will wait for task_c to complete\n    # Note: If waiting for one task it must still be in a list.\n    d = task_d(wait_for=[c])\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#when-to-use-result-in-flows","title":"When to use .result() in flows","text":"

The simplest pattern for writing a flow is either only using tasks or only using pure Python functions. When you need to mix the two, use .result().

Using only tasks:

from prefect import flow, task\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@task\ndef say_nice_to_meet_you(hello_greeting):\n    return f\"{hello_greeting} Nice to meet you :)\"\n\n@flow\ndef hello_world():\n    hello = say_hello.submit(\"Marvin\")\n    nice_to_meet_you = say_nice_to_meet_you.submit(hello)\n\nhello_world()\n

Using only Python functions:

from prefect import flow, task\n\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\ndef say_nice_to_meet_you(hello_greeting):\n    return f\"{hello_greeting} Nice to meet you :)\"\n\n@flow\ndef hello_world():\n    # because this is just a Python function, calls will not be tracked\n    hello = say_hello(\"Marvin\") \n    nice_to_meet_you = say_nice_to_meet_you(hello)\n\nhello_world()\n

Mixing tasks and Python functions:

from prefect import flow, task\n\ndef say_hello_extra_nicely_to_marvin(hello): # not a task or flow!\n    if hello == \"Hello Marvin!\":\n        return \"HI MARVIN!\"\n    return hello\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@task\ndef say_nice_to_meet_you(hello_greeting):\n    return f\"{hello_greeting} Nice to meet you :)\"\n\n@flow\ndef hello_world():\n    # run a task and get the result\n    hello = say_hello.submit(\"Marvin\").result()\n\n    # not calling a task or flow\n    special_greeting = say_hello_extra_nicely_to_marvin(hello)\n\n    # pass our modified greeting back into a task\n    nice_to_meet_you = say_nice_to_meet_you.submit(special_greeting)\n\n    print(nice_to_meet_you.result())\n\nhello_world()\n

Note that .result() also limits Prefect's ability to track task dependencies. In the \"mixed\" example above, Prefect will not be aware that say_hello is upstream of nice_to_meet_you.
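
If preserving that dependency matters, one hedged workaround, sketched below with the tasks from the mixed example above, is to keep the PrefectFuture and pass it via wait_for so Prefect still records say_hello as upstream:

@flow\ndef hello_world_tracked():\n    hello_future = say_hello.submit(\"Marvin\")\n\n    # resolve the value for the plain Python helper\n    special_greeting = say_hello_extra_nicely_to_marvin(hello_future.result())\n\n    # wait_for records say_hello as upstream even though we pass a plain string\n    nice_to_meet_you = say_nice_to_meet_you.submit(\n        special_greeting, wait_for=[hello_future]\n    )\n    print(nice_to_meet_you.result())\n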

Calling .result() is blocking

When calling .result(), be mindful that your flow function will have to wait until the task run completes before continuing.

from prefect import flow, task\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@task\ndef do_important_stuff():\n    print(\"Doing lots of important stuff!\")\n\n@flow\ndef hello_world():\n    # blocks until `say_hello` has finished\n    result = say_hello.submit(\"Marvin\").result() \n    do_important_stuff.submit()\n\nhello_world()\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#running-tasks-on-dask","title":"Running tasks on Dask","text":"

The DaskTaskRunner is a parallel task runner that submits tasks to the dask.distributed scheduler. By default, a temporary Dask cluster is created for the duration of the flow run. If you already have a Dask cluster running, either local or cloud hosted, you can provide the connection URL via the address kwarg.

To configure your flow to use the DaskTaskRunner:

  1. Make sure the prefect-dask collection is installed: pip install prefect-dask.
  2. In your flow code, import DaskTaskRunner from prefect_dask.task_runners.
  3. Assign it as the task runner when the flow is defined using the task_runner=DaskTaskRunner argument.

For example, this flow uses the DaskTaskRunner configured to access an existing Dask cluster at http://my-dask-cluster.

from prefect import flow\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@flow(task_runner=DaskTaskRunner(address=\"http://my-dask-cluster\"))\ndef my_flow():\n    ...\n

DaskTaskRunner accepts the following optional parameters:

address: Address of a currently running Dask scheduler.
cluster_class: The cluster class to use when creating a temporary Dask cluster. It can be either the full class name (for example, \"distributed.LocalCluster\"), or the class itself.
cluster_kwargs: Additional kwargs to pass to the cluster_class when creating a temporary Dask cluster.
adapt_kwargs: Additional kwargs to pass to cluster.adapt when creating a temporary Dask cluster. Note that adaptive scaling is only enabled if adapt_kwargs are provided.
client_kwargs: Additional kwargs to use when creating a dask.distributed.Client.

Multiprocessing safety

Note that, because the DaskTaskRunner uses multiprocessing, calls to flows in scripts must be guarded with if __name__ == \"__main__\": or you will encounter warnings and errors.

If you don't provide the address of a Dask scheduler, Prefect creates a temporary local cluster automatically. The number of workers used is based on the number of cores on your machine. The default provides a mix of processes and threads that should work well for most workloads. If you want to specify this explicitly, you can pass values for n_workers or threads_per_worker to cluster_kwargs.

# Use 4 worker processes, each with 2 threads\nDaskTaskRunner(\n    cluster_kwargs={\"n_workers\": 4, \"threads_per_worker\": 2}\n)\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-a-temporary-cluster","title":"Using a temporary cluster","text":"

The DaskTaskRunner is capable of creating a temporary cluster using any of Dask's cluster-manager options. This can be useful when you want each flow run to have its own Dask cluster, allowing for per-flow adaptive scaling.

To configure, you need to provide a cluster_class. This can be either the full import path of the cluster class (for example, \"dask_cloudprovider.aws.FargateCluster\") or the class itself.

You can also configure cluster_kwargs, which takes a dictionary of keyword arguments to pass to cluster_class when starting the flow run.

For example, to configure a flow to use a temporary dask_cloudprovider.aws.FargateCluster with 4 workers running with an image named my-prefect-image:

DaskTaskRunner(\n    cluster_class=\"dask_cloudprovider.aws.FargateCluster\",\n    cluster_kwargs={\"n_workers\": 4, \"image\": \"my-prefect-image\"},\n)\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#connecting-to-an-existing-cluster","title":"Connecting to an existing cluster","text":"

Multiple Prefect flow runs can all use the same existing Dask cluster. You might manage a single long-running Dask cluster (maybe using the Dask Helm Chart) and configure flows to connect to it during execution. This has a few downsides when compared to using a temporary cluster (as described above): for example, all flows must share one set of worker dependencies, and you give up per-flow adaptive scaling.

That said, you may prefer managing a single long-running cluster.

To configure a DaskTaskRunner to connect to an existing cluster, pass in the address of the scheduler to the address argument:

# Connect to an existing cluster running at a specified address\nDaskTaskRunner(address=\"tcp://...\")\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#adaptive-scaling","title":"Adaptive scaling","text":"

One nice feature of using a DaskTaskRunner is the ability to scale adaptively to the workload. Instead of specifying n_workers as a fixed number, this lets you specify a minimum and maximum number of workers to use, and the Dask cluster will scale up and down as needed.

To do this, you can pass adapt_kwargs to DaskTaskRunner. This takes the following fields: minimum, the minimum number of workers to keep in the cluster, and maximum, the maximum number of workers the cluster can scale up to.

For example, here we configure a flow to run on a FargateCluster scaling up to at most 10 workers.

DaskTaskRunner(\n    cluster_class=\"dask_cloudprovider.aws.FargateCluster\",\n    adapt_kwargs={\"maximum\": 10}\n)\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#dask-annotations","title":"Dask annotations","text":"

Dask annotations can be used to further control the behavior of tasks.

For example, we can set the priority of tasks in the Dask scheduler:

import dask\nfrom prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef show(x):\n    print(x)\n\n\n@flow(task_runner=DaskTaskRunner())\ndef my_flow():\n    with dask.annotate(priority=-10):\n        future = show.submit(1)  # low priority task\n\n    with dask.annotate(priority=10):\n        future = show.submit(2)  # high priority task\n

Another common use case is resource annotations:

import dask\nfrom prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef show(x):\n    print(x)\n\n# Create a `LocalCluster` with some resource annotations\n# Annotations are abstract in dask and not inferred from your system.\n# Here, we claim that our system has 1 GPU and 1 process available per worker\n@flow(\n    task_runner=DaskTaskRunner(\n        cluster_kwargs={\"n_workers\": 1, \"resources\": {\"GPU\": 1, \"process\": 1}}\n    )\n)\ndef my_flow():\n    with dask.annotate(resources={'GPU': 1}):\n        future = show.submit(0)  # this task requires 1 GPU resource on a worker\n\n    with dask.annotate(resources={'process': 1}):\n        # These tasks each require 1 process on a worker; because we've\n        # specified that our cluster has 1 process per worker and 1 worker,\n        # these tasks will run sequentially\n        future = show.submit(1)\n        future = show.submit(2)\n        future = show.submit(3)\n\n\nif __name__ == \"__main__\":\n    my_flow()\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#running-tasks-on-ray","title":"Running tasks on Ray","text":"

The RayTaskRunner \u2014 installed separately as a Prefect Collection \u2014 is a parallel task runner that submits tasks to Ray. By default, a temporary Ray instance is created for the duration of the flow run. If you already have a Ray instance running, you can provide the connection URL via an address argument.

Remote storage and Ray tasks

We recommend configuring remote storage for task execution with the RayTaskRunner. This ensures tasks executing in Ray have access to task result storage, particularly when accessing a Ray instance outside of your execution environment.

To configure your flow to use the RayTaskRunner:

  1. Make sure the prefect-ray collection is installed: pip install prefect-ray.
  2. In your flow code, import RayTaskRunner from prefect_ray.task_runners.
  3. Assign it as the task runner when the flow is defined using the task_runner=RayTaskRunner argument.

For example, this flow uses the RayTaskRunner configured to access an existing Ray instance at ray://192.0.2.255:8786.

from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@flow(task_runner=RayTaskRunner(address=\"ray://192.0.2.255:8786\"))\ndef my_flow():\n    ... \n
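
Following the remote storage recommendation above, a hedged sketch might also attach a pre-configured result storage block to the flow; the s3/my-result-storage slug is hypothetical and assumes you have already created such a block:

from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@flow(\n    task_runner=RayTaskRunner(address=\"ray://192.0.2.255:8786\"),\n    persist_result=True,\n    result_storage=\"s3/my-result-storage\",  # hypothetical, pre-created S3 block\n)\ndef my_flow():\n    ...\n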

RayTaskRunner accepts the following optional parameters:

address: Address of a currently running Ray instance, starting with the ray:// URI.
init_kwargs: Additional kwargs to use when calling ray.init.

Note that Ray Client uses the ray:// URI to indicate the address of a Ray instance. If you don't provide the address of a Ray instance, Prefect creates a temporary instance automatically.
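
If you omit the address, a sketch like the following runs against a temporary local Ray instance; the init_kwargs are forwarded to ray.init, and the num_cpus value shown is just an illustration:

from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n# No address: Prefect starts a temporary Ray instance for the flow run\n@flow(task_runner=RayTaskRunner(init_kwargs={\"num_cpus\": 4}))\ndef my_flow():\n    ...\n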

Ray environment limitations

While we're excited about adding support for parallel task execution via Ray to Prefect, there are some inherent limitations with Ray you should be aware of:

Ray's support for Python 3.11 is experimental.

Ray does not support non-x86/64 architectures, such as ARM/M1 processors, when installed from pip alone, so Ray will be skipped during installation of Prefect on those platforms. It is possible to manually install the blocking component with conda. See the Ray documentation for instructions.

See the Ray installation documentation for further compatibility information.

","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/tasks/","title":"Tasks","text":"

A task is a function that represents a discrete unit of work in a Prefect workflow. Tasks are not required \u2014 you may define Prefect workflows that consist only of flows, using regular Python statements and functions. Tasks enable you to encapsulate elements of your workflow logic in observable units that can be reused across flows and subflows.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#tasks-overview","title":"Tasks overview","text":"

Tasks are functions: they can take inputs, perform work, and return an output. A Prefect task can do almost anything a Python function can do.

Tasks are special because they receive metadata about upstream dependencies and the state of those dependencies before they run, even if they don't receive any explicit data inputs from them. This gives you the opportunity to, for example, have a task wait on the completion of another task before executing.

Tasks also take advantage of automatic Prefect logging to capture details about task runs such as runtime, tags, and final state.

You can define your tasks within the same file as your flow definition, or you can define tasks within modules and import them for use in your flow definitions. All tasks must be called from within a flow. Tasks may not be called from other tasks.

Calling a task from a flow

Use the @task decorator to designate a function as a task. Calling the task from within a flow function creates a new task run:

from prefect import flow, task\n\n@task\ndef my_task():\n    print(\"Hello, I'm a task\")\n\n@flow\ndef my_flow():\n    my_task()\n

Tasks are uniquely identified by a task key, which is a hash composed of the task name, the fully-qualified name of the function, and any tags. If the task does not have a name specified, the name is derived from the task function.

How big should a task be?

Prefect encourages \"small tasks\" \u2014 each one should represent a single logical step of your workflow. This allows Prefect to better contain task failures.

To be clear, there's nothing stopping you from putting all of your code in a single task \u2014 Prefect will happily run it! However, if any line of code fails, the entire task will fail and must be retried from the beginning. This can be avoided by splitting the code into multiple dependent tasks.

Calling a task's function from another task

Prefect does not allow triggering task runs from other tasks. If you want to call your task's function directly, you can use task.fn().

from prefect import flow, task\n\n@task\ndef my_first_task(msg):\n    print(f\"Hello, {msg}\")\n\n@task\ndef my_second_task(msg):\n    my_first_task.fn(msg)\n\n@flow\ndef my_flow():\n    my_second_task(\"Trillian\")\n

Note that in the example above you are only calling the task's function without actually generating a task run. Prefect won't track task execution in your Prefect backend if you call the task function this way. You also won't be able to use features such as retries with this function call.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#task-arguments","title":"Task arguments","text":"

Tasks allow for customization through optional arguments:

name: An optional name for the task. If not provided, the name will be inferred from the function name.
description: An optional string description for the task. If not provided, the description will be pulled from the docstring for the decorated function.
tags: An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a prefect.tags context at task runtime.
cache_key_fn: An optional callable that, given the task run context and call parameters, generates a string key. If the key matches a previous completed state, that state result will be restored instead of running the task again.
cache_expiration: An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire.
retries: An optional number of times to retry on task run failure.
retry_delay_seconds: An optional number of seconds to wait before retrying the task after failure. This is only applicable if retries is nonzero.
log_prints: An optional boolean indicating whether to log print statements.
persist_result: An optional boolean indicating whether to persist the result of the task run to storage.

See all possible parameters in the Python SDK API docs.

For example, you can provide a name value for the task. Here we've used the optional description argument as well.

@task(name=\"hello-task\", \ndescription=\"This task says hello.\")\ndef my_task():\n    print(\"Hello, I'm a task\")\n

You can distinguish runs of this task by providing a task_run_name; this setting accepts a string that can optionally contain templated references to the keyword arguments of your task. The name will be formatted using Python's standard string formatting syntax as can be seen here:

import datetime\nfrom prefect import flow, task\n\n@task(name=\"My Example Task\", \n      description=\"An example task for a tutorial.\",\n      task_run_name=\"hello-{name}-on-{date:%A}\")\ndef my_task(name, date):\n    pass\n\n@flow\ndef my_flow():\n    # creates a run with a name like \"hello-marvin-on-Thursday\"\n    my_task(name=\"marvin\", date=datetime.datetime.utcnow())\n

Additionally this setting also accepts a function that returns a string to be used for the task run name:

import datetime\nfrom prefect import flow, task\n\ndef generate_task_name():\n    date = datetime.datetime.utcnow()\n    return f\"{date:%A}-is-a-lovely-day\"\n\n@task(name=\"My Example Task\",\n      description=\"An example task for a tutorial.\",\n      task_run_name=generate_task_name)\ndef my_task(name):\n    pass\n\n@flow\ndef my_flow():\n    # creates a run with a name like \"Thursday-is-a-lovely-day\"\n    my_task(name=\"marvin\")\n

If you need access to information about the task, use the prefect.runtime module. For example:

from prefect import flow\nfrom prefect.runtime import flow_run, task_run\n\ndef generate_task_name():\n    flow_name = flow_run.flow_name\n    task_name = task_run.task_name\n\n    parameters = task_run.parameters\n    name = parameters[\"name\"]\n    limit = parameters[\"limit\"]\n\n    return f\"{flow_name}-{task_name}-with-{name}-and-{limit}\"\n\n@task(name=\"my-example-task\",\n      description=\"An example task for a tutorial.\",\n      task_run_name=generate_task_name)\ndef my_task(name: str, limit: int = 100):\n    pass\n\n@flow\ndef my_flow(name: str):\n    # creates a run with a name like \"my-flow-my-example-task-with-marvin-and-100\"\n    my_task(name=\"marvin\")\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#tags","title":"Tags","text":"

Tags are optional string labels that enable you to identify and group tasks other than by name or flow. Tags are useful for filtering task runs in the UI and via the API, and for setting concurrency limits on task runs.

Tags may be specified as a keyword argument on the task decorator.

@task(name=\"hello-task\", tags=[\"test\"])\ndef my_task():\n    print(\"Hello, I'm a task\")\n

You can also provide tags with a tags context manager, specifying tags when the task is called rather than in its definition.

from prefect import flow, task\nfrom prefect import tags\n\n@task\ndef my_task():\n    print(\"Hello, I'm a task\")\n\n@flow\ndef my_flow():\n    with tags(\"test\"):\n        my_task()\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#retries","title":"Retries","text":"

Prefect can automatically retry tasks on failure. In Prefect, a task fails if its Python function raises an exception.

To enable retries, pass retries and retry_delay_seconds parameters to your task. If the task fails, Prefect will retry it up to retries times, waiting retry_delay_seconds seconds between each attempt. If the task also fails on the final retry, Prefect marks the task run as failed.

Retries don't create new task runs

A new task run is not created when a task is retried. A new state is added to the state history of the original task run.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#a-real-world-example-making-an-api-request","title":"A real-world example: making an API request","text":"

Consider the real-world problem of making an API request. In this example, we'll use the httpx library to make an HTTP request.

import httpx\n\nfrom prefect import flow, task\n@task(retries=2, retry_delay_seconds=5)\ndef get_data_task(\n    url: str = \"https://api.brittle-service.com/endpoint\"\n) -> dict:\n    response = httpx.get(url)\n\n    # If the response status code is anything but a 2xx, httpx will raise\n    # an exception. This task doesn't handle the exception, so Prefect will\n    # catch the exception and will consider the task run failed.\n    response.raise_for_status()\n\n    return response.json()\n\n\n@flow\ndef get_data_flow():\n    get_data_task()\n

In this task, if the HTTP request to the brittle API receives any status code other than a 2xx (200, 201, etc.), Prefect will retry the task a maximum of two times, waiting five seconds in between retries.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#custom-retry-behavior","title":"Custom retry behavior","text":"

The retry_delay_seconds option accepts a list of delays for more custom retry behavior. The following task will wait for successively increasing intervals of 1, 10, and 100 seconds, respectively, before the next attempt starts:

from prefect import task\n\n@task(retries=3, retry_delay_seconds=[1, 10, 100])\ndef some_task_with_manual_backoff_retries():\n   ...\n

Additionally, you can pass a callable that accepts the number of retries as an argument and returns a list. Prefect includes an exponential_backoff utility that will automatically generate a list of retry delays that correspond to an exponential backoff retry strategy. The following task will wait for 10, 20, and then 40 seconds before each retry.

from prefect import task\nfrom prefect.tasks import exponential_backoff\n\n@task(retries=3, retry_delay_seconds=exponential_backoff(backoff_factor=10))\ndef some_task_with_exponential_backoff_retries():\n   ...\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#advanced-topic-adding-jitter","title":"Advanced topic: adding \"jitter\"","text":"

While using exponential backoff, you may also want to add jitter to the delay times. Jitter is a random amount of time added to retry periods that helps prevent \"thundering herd\" scenarios, which is when many tasks all retry at the exact same time, potentially overwhelming systems.

The retry_jitter_factor option can be used to add variance to the base delay. For example, a retry delay of 10 seconds with a retry_jitter_factor of 0.5 will be allowed to delay up to 15 seconds. Large values of retry_jitter_factor provide more protection against \"thundering herds,\" while keeping the average retry delay time constant. For example, the following task adds jitter to its exponential backoff so the retry delays will vary up to a maximum delay time of 20, 40, and 80 seconds respectively.

from prefect import task\nfrom prefect.tasks import exponential_backoff\n\n@task(\n    retries=3,\n    retry_delay_seconds=exponential_backoff(backoff_factor=10),\n    retry_jitter_factor=1,\n)\ndef some_task_with_exponential_backoff_retries():\n   ...\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#configuring-retry-behavior-globally-with-settings","title":"Configuring retry behavior globally with settings","text":"

You can also set retries and retry delays by using the following global settings. These settings will not override the retries or retry_delay_seconds that are set in the flow or task decorator.

prefect config set PREFECT_FLOW_DEFAULT_RETRIES=2\nprefect config set PREFECT_TASK_DEFAULT_RETRIES=2\nprefect config set PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS=\"[1, 10, 100]\"\nprefect config set PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS=\"[1, 10, 100]\"\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#caching","title":"Caching","text":"

Caching refers to the ability of a task run to reflect a finished state without actually running the code that defines the task. This allows you to efficiently reuse results of tasks that may be expensive to run with every flow run, or reuse cached results if the inputs to a task have not changed.

To determine whether a task run should retrieve a cached state, we use \"cache keys\". A cache key is a string value that indicates if one run should be considered identical to another. When a task run with a cache key finishes, we attach that cache key to the state. When each task run starts, Prefect checks for states with a matching cache key. If a state with an identical key is found, Prefect will use the cached state instead of running the task again.

To enable caching, specify a cache_key_fn \u2014 a function that returns a cache key \u2014 on your task. You may optionally provide a cache_expiration timedelta indicating when the cache expires. If you do not specify a cache_expiration, the cache key does not expire.

You can define a task that is cached based on its inputs by using the Prefect task_input_hash. This is a task cache key implementation that hashes all inputs to the task using a JSON or cloudpickle serializer. If the task inputs do not change, the cached results are used rather than running the task until the cache expires.

Note that, if any arguments are not JSON serializable, the pickle serializer is used as a fallback. If cloudpickle fails, task_input_hash returns a null key indicating that a cache key could not be generated for the given inputs.

In this example, until the cache_expiration time ends, as long as the input to hello_task() remains the same when it is called, the cached return value is returned. In this situation the task is not rerun. However, if the input argument value changes, hello_task() runs using the new input.

from datetime import timedelta\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))\ndef hello_task(name_input):\n    # Doing some work\n    print(\"Saying hello\")\n    return \"hello \" + name_input\n\n@flow\ndef hello_flow(name_input):\n    hello_task(name_input)\n

Alternatively, you can provide your own function or other callable that returns a string cache key. A generic cache_key_fn is a function that accepts two positional arguments: the run context, which contains task run metadata, and a dictionary of input values to the task.

Note that the cache_key_fn is not defined as a @task.

Task cache keys

By default, a task cache key is limited to 2000 characters, specified by the PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH setting.

from prefect import task, flow\n\ndef static_cache_key(context, parameters):\n    # return a constant\n    return \"static cache key\"\n\n@task(cache_key_fn=static_cache_key)\ndef cached_task():\n    print('running an expensive operation')\n    return 42\n\n@flow\ndef test_caching():\n    cached_task()\n    cached_task()\n    cached_task()\n

In this case, there's no expiration for the cache key, and no logic to change the cache key, so cached_task() only runs once.

>>> test_caching()\nrunning an expensive operation\n>>> test_caching()\n>>> test_caching()\n

When each task run requested to enter a Running state, it provided its cache key computed from the cache_key_fn. The Prefect backend identified that there was a COMPLETED state associated with this key and instructed the run to immediately enter the same COMPLETED state, including the same return values.

A real-world example might include the flow run ID from the context in the cache key so only repeated calls in the same flow run are cached.

from prefect import task\nfrom prefect.tasks import task_input_hash\n\ndef cache_within_flow_run(context, parameters):\n    return f\"{context.task_run.flow_run_id}-{task_input_hash(context, parameters)}\"\n\n@task(cache_key_fn=cache_within_flow_run)\ndef cached_task():\n    print('running an expensive operation')\n    return 42\n

Task results, retries, and caching

Task results are cached in memory during a flow run and persisted to the location specified by the PREFECT_LOCAL_STORAGE_PATH setting. As a result, task caching between flow runs is currently limited to flow runs with access to that local storage path.
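
For example, a sketch of pointing local result storage at a shared directory (the path shown is purely illustrative):

prefect config set PREFECT_LOCAL_STORAGE_PATH=\"/shared/prefect-results\"\n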

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#refreshing-the-cache","title":"Refreshing the cache","text":"

Sometimes, you want a task to update the data associated with its cache key instead of using the cache. This is a cache \"refresh\".

The refresh_cache option can be used to enable this behavior for a specific task:

import random\n\nfrom prefect import task\n\n\ndef static_cache_key(context, parameters):\n    # return a constant\n    return \"static cache key\"\n\n\n@task(cache_key_fn=static_cache_key, refresh_cache=True)\ndef caching_task():\n    return random.random()\n

When this task runs, it will always update the cache key instead of using the cached value. This is particularly useful when you have a flow that is responsible for updating the cache.

If you want to refresh the cache for all tasks, you can use the PREFECT_TASKS_REFRESH_CACHE setting. Setting PREFECT_TASKS_REFRESH_CACHE=true will change the default behavior of all tasks to refresh. This is particularly useful if you want to rerun a flow without cached results.
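
For example, to make every task in subsequent runs recompute and rewrite its cache entry:

prefect config set PREFECT_TASKS_REFRESH_CACHE=true\n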

If you have tasks that should not refresh when this setting is enabled, you may explicitly set refresh_cache to False. These tasks will never refresh the cache \u2014 if a cache key exists it will be read, not updated. Note that, if a cache key does not exist yet, these tasks can still write to the cache.

@task(cache_key_fn=static_cache_key, refresh_cache=False)\ndef caching_task():\n    return random.random()\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#timeouts","title":"Timeouts","text":"

Task timeouts are used to prevent unintentional long-running tasks. When the duration of execution for a task exceeds the duration specified in the timeout, a timeout exception will be raised and the task will be marked as failed. In the UI, the task will be visibly designated as TimedOut. From the perspective of the flow, the timed-out task will be treated like any other failed task.

Timeout durations are specified using the timeout_seconds keyword argument.

from prefect import task, get_run_logger\nimport time\n\n@task(timeout_seconds=1)\ndef show_timeouts():\n    logger = get_run_logger()\n    logger.info(\"I will execute\")\n    time.sleep(5)\n    logger.info(\"I will not execute\")\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#task-results","title":"Task results","text":"

Depending on how you call tasks, they can return different types of results and optionally engage the use of a task runner.

Any task can return: data, such as an int, str, dict, or list, when the task is called directly; a PrefectFuture, when the task is called with .submit(); or a Prefect State, when the task is called with the return_state=True argument.
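
A small sketch of these call styles:

from prefect import flow, task\n\n@task\ndef add_one(x):\n    return x + 1\n\n@flow\ndef call_styles():\n    data = add_one(1)                      # plain data: 2\n    future = add_one.submit(2)             # PrefectFuture\n    state = add_one(3, return_state=True)  # Prefect State\n    print(data, future.result(), state.result())\n\ncall_styles()\n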

To run your task with a task runner, you must call the task with .submit().

See state returned values for examples.

Task runners are optional

If you just need the result from a task, you can simply call the task from your flow. For most workflows, the default behavior of calling a task directly and receiving a result is all you'll need.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#wait-for","title":"Wait for","text":"

To create a dependency between two tasks that do not exchange data, but one needs to wait for the other to finish, use the special wait_for keyword argument:

from prefect import flow, task\n\n@task\ndef task_1():\n    pass\n\n@task\ndef task_2():\n    pass\n\n@flow\ndef my_flow():\n    x = task_1()\n\n    # task 2 will wait for task_1 to complete\n    y = task_2(wait_for=[x])\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#map","title":"Map","text":"

Prefect provides a .map() implementation that automatically creates a task run for each element of its input data. Mapped tasks represent the computations of many individual child tasks.

The simplest Prefect map takes a task and applies it to each element of its inputs.

from prefect import flow, task\n\n@task\ndef print_nums(nums):\n    for n in nums:\n        print(n)\n\n@task\ndef square_num(num):\n    return num**2\n\n@flow\ndef map_flow(nums):\n    print_nums(nums)\n    squared_nums = square_num.map(nums) \n    print_nums(squared_nums)\n\nmap_flow([1,2,3,5,8,13])\n

Prefect also supports unmapped arguments, allowing you to pass static values that don't get mapped over.

from prefect import flow, task\n\n@task\ndef add_together(x, y):\n    return x + y\n\n@flow\ndef sum_it(numbers, static_value):\n    futures = add_together.map(numbers, static_value)\n    return futures\n\nsum_it([1, 2, 3], 5)\n

If your static argument is an iterable, you'll need to wrap it with unmapped to tell Prefect that it should be treated as a static value.

from prefect import flow, task, unmapped\n\n@task\ndef sum_plus(x, static_iterable):\n    return x + sum(static_iterable)\n\n@flow\ndef sum_it(numbers, static_iterable):\n    futures = sum_plus.map(numbers, static_iterable)\n    return futures\n\nsum_it([4, 5, 6], unmapped([1, 2, 3]))\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#async-tasks","title":"Async tasks","text":"

Prefect also supports asynchronous task and flow definitions by default. All of the standard rules of async apply:

import asyncio\n\nfrom prefect import task, flow\n\n@task\nasync def print_values(values):\n    for value in values:\n        await asyncio.sleep(1) # yield\n        print(value, end=\" \")\n\n@flow\nasync def async_flow():\n    await print_values([1, 2])  # runs immediately\n    coros = [print_values(\"abcd\"), print_values(\"6789\")]\n\n    # asynchronously gather the tasks\n    await asyncio.gather(*coros)\n\nasyncio.run(async_flow())\n

Note that if you are not using asyncio.gather, calling .submit() is required for asynchronous execution on the ConcurrentTaskRunner.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#task-run-concurrency-limits","title":"Task run concurrency limits","text":"

There are situations in which you want to actively prevent too many tasks from running simultaneously. For example, if many tasks across multiple flows are designed to interact with a database that only allows 10 connections, you want to make sure that no more than 10 tasks that connect to this database are running at any given time.

Prefect has built-in functionality for achieving this: task concurrency limits.

Task concurrency limits use task tags. You can specify an optional concurrency limit as the maximum number of concurrent task runs in a Running state for tasks with a given tag. The specified concurrency limit applies to any task to which the tag is applied.
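
For example, a task that opens a database connection might carry a shared, hypothetical database tag, which you can then cap with the CLI command covered later in this section:

from prefect import task\n\n# Every task carrying this tag counts against the same limit.\n# Cap it with: prefect concurrency-limit create database 10\n@task(tags=[\"database\"])\ndef query_table(name: str):\n    ...\n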

If a task has multiple tags, it will run only if all tags have available concurrency.

Tags without explicit limits are considered to have unlimited concurrency.

0 concurrency limit aborts task runs

Currently, if the concurrency limit is set to 0 for a tag, any attempt to run a task with that tag will be aborted instead of delayed.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#execution-behavior","title":"Execution behavior","text":"

Task tag limits are checked whenever a task run attempts to enter a Running state.

If there are no concurrency slots available for any one of your task's tags, the transition to a Running state will be delayed and the client is instructed to try entering a Running state again in 30 seconds.

Concurrency limits in subflows

Using concurrency limits on task runs in subflows can cause deadlocks. As a best practice, configure your tags and concurrency limits to avoid setting limits on task runs in subflows.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#configuring-concurrency-limits","title":"Configuring concurrency limits","text":"

Flow run concurrency limits are set at a work pool and/or work queue level

While task run concurrency limits are configured via tags (as shown below), flow run concurrency limits are configured via work pools and/or work queues.

You can set concurrency limits on as few or as many tags as you wish. You can set limits through the Prefect CLI or through the Prefect API using the Python client, as shown below.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#cli","title":"CLI","text":"

You can create, list, and remove concurrency limits by using Prefect CLI concurrency-limit commands.

$ prefect concurrency-limit [command] [arguments]\n
create: Create a concurrency limit by specifying a tag and limit.
delete: Delete the concurrency limit set on the specified tag.
inspect: View details about a concurrency limit set on the specified tag.
ls: View all defined concurrency limits.

For example, to set a concurrency limit of 10 on the 'small_instance' tag:

$ prefect concurrency-limit create small_instance 10\n

To delete the concurrency limit on the 'small_instance' tag:

$ prefect concurrency-limit delete small_instance\n

To view details about the concurrency limit on the 'small_instance' tag:

$ prefect concurrency-limit inspect small_instance\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#python-client","title":"Python client","text":"

To update your tag concurrency limits programmatically, use PrefectClient.orchestration.create_concurrency_limit.

create_concurrency_limit takes two arguments: tag specifies the task tag on which you're setting a limit, and concurrency_limit specifies the maximum number of concurrent task runs for that tag.

For example, to set a concurrency limit of 10 on the 'small_instance' tag:

from prefect import get_client\n\nasync with get_client() as client:\n    # set a concurrency limit of 10 on the 'small_instance' tag\n    limit_id = await client.create_concurrency_limit(\n        tag=\"small_instance\", \n        concurrency_limit=10\n        )\n

To remove all concurrency limits on a tag, use PrefectClient.delete_concurrency_limit_by_tag, passing the tag:

async with get_client() as client:\n    # remove a concurrency limit on the 'small_instance' tag\n    await client.delete_concurrency_limit_by_tag(tag=\"small_instance\")\n

If you wish to query for the currently set limit on a tag, use PrefectClient.read_concurrency_limit_by_tag, passing the tag:

To see all of your limits across all of your tags, use PrefectClient.read_concurrency_limits.

async with get_client() as client:\n    # query the concurrency limit on the 'small_instance' tag\n    limit = await client.read_concurrency_limit_by_tag(tag=\"small_instance\")\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/variables/","title":"Variables","text":"

Variables enable you to store and reuse non-sensitive bits of data, such as configuration information. Variables are named, mutable string values, much like environment variables. Variables are scoped to a Prefect server instance or a single workspace in Prefect Cloud.

Variables can be created or modified at any time, but are intended for values with infrequent writes and frequent reads. Variable values may be cached for quicker retrieval.

While variable values are most commonly loaded during flow runtime, they can be loaded in other contexts at any time, so they can be used to pass configuration information into Prefect configuration files, such as deployment steps.

Variables are not Encrypted

Using variables to store sensitive information, such as credentials, is not recommended. Instead, use Secret blocks to store and access sensitive information.

","tags":["variables","blocks"],"boost":2},{"location":"concepts/variables/#managing-variables","title":"Managing variables","text":"

You can create, read, edit, and delete variables via the Prefect UI, API, and CLI. Names must adhere to traditional variable naming conventions: they may contain only lowercase alphanumeric characters and underscores, with no spaces, and are limited to 255 characters.

Values must be strings of no more than 5000 characters.

Optionally, you can add tags to the variable.

","tags":["variables","blocks"],"boost":2},{"location":"concepts/variables/#via-the-prefect-ui","title":"Via the Prefect UI","text":"

You can see all the variables in your Prefect server instance or Prefect Cloud workspace on the Variables page of the Prefect UI. Both the name and value of all variables are visible to anyone with access to the server or workspace.

To create a new variable, select the + button next to the header of the Variables page. Enter the name and value of the variable.

","tags":["variables","blocks"],"boost":2},{"location":"concepts/variables/#via-the-rest-api","title":"Via the REST API","text":"

Variables can be created and deleted via the REST API. You can also set and get variables via the API with either the variable name or ID. See the REST reference for more information.

","tags":["variables","blocks"],"boost":2},{"location":"concepts/variables/#via-the-cli","title":"Via the CLI","text":"

You can list, inspect, and delete variables via the command line interface with the prefect variable ls, prefect variable inspect <name>, and prefect variable delete <name> commands, respectively.
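
For example, assuming a variable named the_answer already exists (as in the Python example below):

prefect variable ls\nprefect variable inspect the_answer\nprefect variable delete the_answer\n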

","tags":["variables","blocks"],"boost":2},{"location":"concepts/variables/#accessing-variables","title":"Accessing variables","text":"

In addition to the UI and API, variables can be referenced in code and in certain Prefect configuration files.

","tags":["variables","blocks"],"boost":2},{"location":"concepts/variables/#in-python-code","title":"In Python code","text":"

You can access any variable from the Python SDK using the variables.get() method. If you attempt to reference a variable that does not exist, the method will return None.

from prefect import variables\n\n# from a synchronous context\nanswer = variables.get('the_answer')\nprint(answer)\n# 42\n\n# from an asynchronous context\nanswer = await variables.get('the_answer')\nprint(answer)\n# 42\n\n# without a default value\nanswer = variables.get('not_the_answer')\nprint(answer)\n# None\n\n# with a default value\nanswer = variables.get('not_the_answer', default='42')\nprint(answer)\n# 42\n
","tags":["variables","blocks"],"boost":2},{"location":"concepts/variables/#in-project-steps","title":"In Project steps","text":"

In .yaml files, variables are denoted by quotes and double curly brackets, like so: \"{{ prefect.variables.my_variable }}\". You can use variables to templatize deployment steps by referencing them in the prefect.yaml file used to create deployments. For example, you could pass a variable in to specify a branch for a git repo in a deployment pull step:

pull:\n- prefect.deployments.steps.git_clone:\n    repository: https://github.com/PrefectHQ/hello-projects.git\n    branch: \"{{ prefect.variables.deployment_branch }}\"\n

The deployment_branch variable will be evaluated at runtime for the deployed flow, allowing changes to be made to variables used in a pull action without updating a deployment directly.

","tags":["variables","blocks"],"boost":2},{"location":"concepts/work-pools/","title":"Work Pools, Workers, and Agents","text":"

Work pools and workers bridge the Prefect orchestration environment with your execution environment. When a deployment creates a flow run, it is submitted to a specific work pool for scheduling. A worker running in the execution environment can poll its respective work pool for new runs to execute, or the work pool can submit flow runs to serverless infrastructure directly, depending on your configuration.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#work-pool-overview","title":"Work pool overview","text":"

Work pools organize work for execution. Work pools have types corresponding to the infrastructure that will execute the flow code, as well as the delivery method of work to that environment. Pull work pools require agents or workers to poll the work pool for flow runs to execute. Push work pools can submit runs directly to serverless infrastructure providers like Cloud Run, Azure Container Instances, and AWS ECS without the need for an agent or worker.

Work pools are like pub/sub topics

It's helpful to think of work pools as a way to coordinate (potentially many) deployments with (potentially many) agents through a known channel: the pool itself. This is similar to how \"topics\" are used to connect producers and consumers in a pub/sub or message-based system. By switching a deployment's work pool, users can quickly change the agent that will execute their runs, making it easy to promote runs through environments or even debug locally.

In addition, users can control aspects of work pool behavior, like how many runs the pool allows to be run concurrently or pausing delivery entirely. These options can be modified at any time, and any workers requesting work for a specific pool will only see matching flow runs.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#work-pool-configuration","title":"Work pool configuration","text":"

You can configure work pools by using:

To manage work pools in the UI, click the Work Pools icon. This displays a list of currently configured work pools.

You can pause a work pool from this page by using the toggle.

Select the + button to create a new work pool. You'll be able to specify the details for work served by this work pool.

To configure a work pool via the Prefect CLI, use the prefect work-pool create command:

prefect work-pool create [OPTIONS] NAME\n

NAME is a required, unique name for the work pool.

Optional configuration parameters you can specify to filter work on the pool include:

Option | Description
--paused | If provided, the work pool will be created in a paused state.
--type | The type of infrastructure that can execute runs from this work pool. [default: prefect-agent]

For example, to create a work pool called test-pool, you would run this command:

$ prefect work-pool create test-pool\n\nCreated work pool with properties:\n    name - 'test-pool'\n    id - a51adf8c-58bb-4949-abe6-1b87af46eabd\n    concurrency limit - None\n\nStart an agent to pick up flows from the work pool:\n    prefect agent start -p 'test-pool'\n\nInspect the work pool:\n    prefect work-pool inspect 'test-pool'\n

On success, the command returns the details of the newly created work pool.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#base-job-template","title":"Base job template","text":"

Each work pool has a base job template that allows the customization of the behavior of the worker executing flow runs from the work pool.

The base job template acts as a contract defining the configuration passed to the worker for each flow run and the options available to deployment creators to customize worker behavior per deployment.

A base job template comprises a job_configuration section and a variables section.

The variables section defines the fields available to be customized per deployment. The variables section follows the OpenAPI specification, which allows work pool creators to place limits on provided values (type, minimum, maximum, etc.).

The job configuration section defines how values provided for fields in the variables section should be translated into the configuration given to a worker when executing a flow run.

The values in the job_configuration can use placeholders to reference values provided in the variables section. Placeholders are declared using double curly braces, e.g., {{ variable_name }}. job_configuration values can also be hard-coded if the value should not be customizable.
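
For illustration, below is a simplified sketch of what a base job template might look like; the exact fields depend on the worker type, and the names used here are only examples.

{\n  \"job_configuration\": {\n    \"env\": \"{{ env }}\",\n    \"working_dir\": \"{{ working_dir }}\",\n    \"stream_output\": \"{{ stream_output }}\"\n  },\n  \"variables\": {\n    \"type\": \"object\",\n    \"properties\": {\n      \"env\": {\n        \"type\": \"object\",\n        \"title\": \"Environment Variables\",\n        \"default\": {}\n      },\n      \"working_dir\": {\n        \"type\": \"string\",\n        \"title\": \"Working Directory\"\n      },\n      \"stream_output\": {\n        \"type\": \"boolean\",\n        \"title\": \"Stream Output\",\n        \"default\": true\n      }\n    }\n  }\n}\n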

Each worker type is configured with a default base job template, making it easy to start with a work pool. The default base template defines fields that can be edited on a per-deployment basis or for the entire work pool via the Prefect API and UI.

For example, if we create a process work pool named 'above-ground' via the CLI:

$ prefect work-pool create --type process above-ground\n

We see these configuration options available in the Prefect UI:

For a process work pool with the default base job template, we can set environment variables for spawned processes, set the working directory in which to execute flows, and control whether flow run output is streamed to workers' standard output. You can also see an example of a JSON-formatted base job template on the 'Advanced' tab.

You can override each of these attributes on a per-deployment basis. When deploying a flow, you can specify these overrides in the work_pool.job_variables section of a deployment.yaml.

If we wanted to turn off streaming output for a specific deployment, we could add the following to our deployment.yaml:

work_pool:\n  name: above-ground\n  job_variables:\n    stream_output: false\n

Advanced Customization of the Base Job Template

For advanced use cases, users can create work pools with fully customizable job templates. This customization is available when creating or editing a work pool on the 'Advanced' tab within the UI.

Advanced customization is useful any time the underlying infrastructure supports a high degree of customization. In these scenarios, a work pool job template allows you to expose a minimal and easy-to-digest set of options to deployment authors. Additionally, these options are the only customizable aspects of deployment infrastructure, which can be useful for restricting functionality in secure environments. For example, the kubernetes worker type allows users to specify a custom job template that can be used to configure the manifest that workers use to create jobs for flow execution.

For more information and advanced configuration examples, see the Kubernetes Worker documentation.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#viewing-work-pools","title":"Viewing work pools","text":"

At any time, users can see and edit configured work pools in the Prefect UI.

To view work pools with the Prefect CLI, you can:

prefect work-pool ls lists all configured work pools for the server.

$ prefect work-pool ls\nprefect work-pool ls\n                               Work pools\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name       \u2503    Type        \u2503                                   ID \u2503 Concurrency Limit \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 barbeque   \u2502 prefect-agent  \u2502 72c0a101-b3e2-4448-b5f8-a8c5184abd17 \u2502 None              \u2502\n\u2502 k8s-pool   \u2502  prefect-agent \u2502 7b6e3523-d35b-4882-84a7-7a107325bb3f \u2502 None              \u2502\n\u2502 test-pool  \u2502  prefect-agent \u2502 a51adf8c-58bb-4949-abe6-1b87af46eabd \u2502 None              \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n                       (**) denotes a paused pool\n

prefect work-pool inspect provides all configuration metadata for a specific work pool, referenced by name.

$ prefect work-pool inspect 'test-pool'\nWorkpool(\n    id='a51adf8c-58bb-4949-abe6-1b87af46eabd',\n    created='2 minutes ago',\n    updated='2 minutes ago',\n    name='test-pool',\n    filter=None,\n)\n

prefect work-pool preview displays scheduled flow runs for a specific work pool, referenced by name, for the upcoming hour. The optional --hours flag lets you specify the number of hours to look ahead.

$ prefect work-pool preview 'test-pool' --hours 12 \n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Scheduled Star\u2026 \u2503 Run ID                     \u2503 Name         \u2503 Deployment ID               \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 2022-02-26 06:\u2026 \u2502 741483d4-dc90-4913-b88d-0\u2026 \u2502 messy-petrel \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 05:\u2026 \u2502 14e23a19-a51b-4833-9322-5\u2026 \u2502 unselfish-g\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 04:\u2026 \u2502 deb44d4d-5fa2-4f70-a370-e\u2026 \u2502 solid-ostri\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 03:\u2026 \u2502 07374b5c-121f-4c8d-9105-b\u2026 \u2502 sophisticat\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 02:\u2026 \u2502 545bc975-b694-4ece-9def-8\u2026 \u2502 gorgeous-mo\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 01:\u2026 \u2502 704f2d67-9dfa-4fb8-9784-4\u2026 \u2502 sassy-hedge\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 00:\u2026 \u2502 691312f0-d142-4218-b617-a\u2026 \u2502 sincere-moo\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-25 23:\u2026 \u2502 7cb3ff96-606b-4d8c-8a33-4\u2026 \u2502 curious-cat\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-25 22:\u2026 \u2502 3ea559fe-cb34-43b0-8090-1\u2026 \u2502 primitive-f\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-25 21:\u2026 \u2502 96212e80-426d-4bf4-9c49-e\u2026 \u2502 phenomenal-\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n                                   (**) denotes a late run\n
","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#pausing-and-deleting-work-pools","title":"Pausing and deleting work pools","text":"

A work pool can be paused at any time to stop the delivery of work to agents. Agents will not receive any work when polling a paused pool.

To pause a work pool through the Prefect CLI, use the prefect work-pool pause command:

$ prefect work-pool pause 'test-pool'\nPaused work pool 'test-pool'\n

To resume a work pool through the Prefect CLI, use the prefect work-pool resume command with the work pool name.

To delete a work pool through the Prefect CLI, use the prefect work-pool delete command with the work pool name.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#managing-concurrency","title":"Managing concurrency","text":"

Each work pool can optionally restrict concurrent runs of matching flows.

For example, a work pool with a concurrency limit of 5 will only release new work if fewer than 5 matching runs are currently in a Running or Pending state. If 3 runs are Running or Pending, polling the pool for work will only result in 2 new runs, even if there are many more available, to ensure that the concurrency limit is not exceeded.

When using the prefect work-pool Prefect CLI command to configure a work pool, the set-concurrency-limit and clear-concurrency-limit subcommands set and clear a pool's concurrency limit, as sketched below.
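
A brief sketch of managing a concurrency limit from the CLI; the pool name and limit value here are illustrative.

# Allow at most 5 concurrent runs from this pool\nprefect work-pool set-concurrency-limit 'test-pool' 5\n\n# Remove the concurrency limit\nprefect work-pool clear-concurrency-limit 'test-pool'\n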

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#work-queues","title":"Work queues","text":"

Advanced topic

Work queues do not require manual creation or configuration, because Prefect will automatically create them whenever needed. Managing work queues offers advanced control over how runs are executed.

Each work pool has a \"default\" queue that all work will be sent to by default. Additional queues can be added to a work pool. Work queues enable greater control over work delivery through fine grained priority and concurrency. Each work queue has a priority indicated by a unique positive integer. Lower numbers take greater priority in the allocation of work. Accordingly, new queues can be added without changing the rank of the higher-priority queues (e.g. no matter how many queues you add, the queue with priority 1 will always be the highest priority).

Work queues can also have their own concurrency limits. Note that each queue is also subject to the global work pool concurrency limit, which cannot be exceeded.

Together work queue priority and concurrency enable precise control over work. For example, a pool may have three queues: A \"low\" queue with priority 10 and no concurrency limit, a \"high\" queue with priority 5 and a concurrency limit of 3, and a \"critical\" queue with priority 1 and a concurrency limit of 1. This arrangement would enable a pattern in which there are two levels of priority, \"high\" and \"low\" for regularly scheduled flow runs, with the remaining \"critical\" queue for unplanned, urgent work, such as a backfill.

Priority is evaluated to determine the order in which flow runs are submitted for execution. If all flow runs can be executed with no limitation from concurrency or otherwise, priority still determines the order of submission, but it has no impact on execution. If not all flow runs can be executed, usually as a result of concurrency limits, priority determines which queues receive precedence to submit runs for execution.

Priority for flow run submission proceeds from the highest priority to the lowest priority. In the preceding example, all work from the \"critical\" queue (priority 1) will be submitted, before any work is submitted from \"high\" (priority 5). Once all work has been submitted from priority queue \"critical\", work from the \"high\" queue will begin submission.

If new flow runs are received on the \"critical\" queue while flow runs are still scheduled on the \"high\" and \"low\" queues, flow run submission goes back to ensuring that all scheduled work is satisfied first from the highest priority queue, until it is empty, in waterfall fashion.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#local-debugging","title":"Local debugging","text":"

As long as your deployment's infrastructure block supports it, you can use work pools to temporarily send runs to an agent running on your local machine for debugging. To do so, run prefect agent start -p my-local-machine and update the deployment's work pool to my-local-machine.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#worker-overview","title":"Worker overview","text":"

Workers are lightweight polling services that retrieve scheduled runs from a work pool and execute them.

Workers are similar to agents, but offer greater control over infrastructure configuration and the ability to route work to specific types of execution environments.

Workers each have a type corresponding to the execution environment to which they will submit flow runs. Workers are only able to join work pools that match their type. As a result, when deployments are assigned to a work pool, you know in which execution environment scheduled flow runs for that deployment will run.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#worker-types","title":"Worker types","text":"

Below is a list of available worker types. Note that most worker types will require installation of an additional package.

Worker Type | Description | Required Package
process | Executes flow runs in subprocesses | (none)
kubernetes | Executes flow runs as Kubernetes jobs | prefect-kubernetes
docker | Executes flow runs within Docker containers | prefect-docker
ecs | Executes flow runs as ECS tasks | prefect-aws
cloud-run | Executes flow runs as Google Cloud Run jobs | prefect-gcp
azure-container-instance | Executes flow runs in ACI containers | prefect-azure

If you don\u2019t see a worker type that meets your needs, consider developing a new worker type!

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#worker-options","title":"Worker options","text":"

Workers poll for work from one or more queues within a work pool. If the worker references a work queue that doesn't exist, it will be created automatically. The worker CLI can infer the worker type from the work pool. Alternatively, you can specify the worker type explicitly. If you supply the worker type to the worker CLI, a work pool will be created automatically if it doesn't exist (using default job settings).

Configuration parameters you can specify when starting a worker include:

Option | Description
--name, -n | The name to give to the started worker. If not provided, a unique name will be generated.
--pool, -p | The work pool the started worker should poll.
--work-queue, -q | One or more work queue names for the worker to pull from. If not provided, the worker will pull from all work queues in the work pool.
--type, -t | The type of worker to start. If not provided, the worker type will be inferred from the work pool.
--prefetch-seconds | The amount of time before a flow run's scheduled start time to begin submission. Default is the value of PREFECT_WORKER_PREFETCH_SECONDS.
--run-once | Only run worker polling once. By default, the worker runs forever.
--limit, -l | The maximum number of flow runs to start simultaneously.
--with-healthcheck | Start a healthcheck server for the worker.
--install-policy | Install policy to use workers from Prefect integration packages.

You must start a worker within an environment that can access or create the infrastructure needed to execute flow runs. The worker will deploy flow runs to the infrastructure corresponding to the worker type. For example, if you start a worker with type kubernetes, the worker will deploy flow runs to a Kubernetes cluster.

Prefect must be installed in execution environments

Prefect must be installed in any environment (virtual environment, Docker container, etc.) where you intend to run the worker or execute a flow run.

PREFECT_API_URL and PREFECT_API_KEY settings for workers

PREFECT_API_URL must be set for the environment in which your worker is running. You must also have a user or service account with the Worker role, which can be configured by setting the PREFECT_API_KEY.
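
For example, these settings are commonly configured with the Prefect CLI; the URL and key below are placeholders for your own values.

prefect config set PREFECT_API_URL='https://api.prefect.cloud/api/accounts/<ACCOUNT-ID>/workspaces/<WORKSPACE-ID>'\nprefect config set PREFECT_API_KEY='<YOUR-API-KEY>'\n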

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#starting-a-worker","title":"Starting a worker","text":"

Use the prefect worker start CLI command to start a worker. You must pass at least the work pool name. If the work pool does not exist, it will be created if the --type flag is used.

$ prefect worker start -p [work pool name]\n

For example:

prefect worker start -p \"my-pool\"\nDiscovered worker type 'process' for work pool 'my-pool'.\nWorker 'ProcessWorker 65716280-96f8-420b-9300-7e94417f2673' started!\n

In this case, Prefect automatically discovered the worker type from the work pool. To create a work pool and start a worker in one command, use the --type flag:

prefect worker start -p \"my-pool\" --type \"process\"\nWorker 'ProcessWorker d24f3768-62a9-4141-9480-a056b9539a25' started!\n06:57:53.289 | INFO    | prefect.worker.process.processworker d24f3768-62a9-4141-9480-a056b9539a25 - Worker pool 'my-pool' created.\n

In addition, workers can limit the number of flow runs they will start simultaneously with the --limit flag. For example, to limit a worker to five concurrent flow runs:

prefect worker start --pool \"my-pool\" --limit 5\n
","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#configuring-prefetch","title":"Configuring prefetch","text":"

By default, the worker begins submitting flow runs a short time (10 seconds) before they are scheduled to run. This behavior allows time for the infrastructure to be created so that the flow run can start on time.

In some cases, infrastructure will take longer than 10 seconds to start the flow run. The prefetch can be increased using the --prefetch-seconds option or the PREFECT_WORKER_PREFETCH_SECONDS setting.

If this value is more than the amount of time it takes for the infrastructure to start, the flow run will wait until its scheduled start time.
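
For example, to give slow-starting infrastructure more lead time, you might start the worker with a larger prefetch window; the 60-second value here is illustrative.

prefect worker start --pool 'my-pool' --prefetch-seconds 60\n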

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#polling-for-work","title":"Polling for work","text":"

Workers poll for work every 15 seconds by default. This interval is configurable in your profile settings with the PREFECT_WORKER_QUERY_SECONDS setting.
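
For instance, to poll every 30 seconds instead (the value is illustrative), you could update the setting in your active profile:

prefect config set PREFECT_WORKER_QUERY_SECONDS=30\n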

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#install-policy","title":"Install policy","text":"

The Prefect CLI can install the required package for Prefect-maintained worker types automatically. You can configure this behavior with the --install-policy option. The following are valid install policies:

Install Policy | Description
always | Always install the required package. Will update the required package to the most recent version if already installed.
if-not-present | Install the required package if it is not already installed.
never | Never install the required package.
prompt | Prompt the user to choose whether to install the required package. This is the default install policy. If prefect worker start is run non-interactively, the prompt install policy will behave the same as never.
","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#agent-overview","title":"Agent overview","text":"

Agent processes are lightweight polling services that get scheduled work from a work pool and deploy the corresponding flow runs.

Agents poll for work every 15 seconds by default. This interval is configurable in your profile settings with the PREFECT_AGENT_QUERY_INTERVAL setting.

It is possible for multiple agent processes to be started for a single work pool. Each agent process sends a unique ID to the server to help disambiguate themselves and let users know how many agents are active.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#agent-options","title":"Agent options","text":"

Agents are configured to pull work from one or more work pool queues. If the agent references a work queue that doesn't exist, it will be created automatically.

Configuration parameters you can specify when starting an agent include:

Option | Description
--api | The API URL for the Prefect server. Default is the value of PREFECT_API_URL.
--hide-welcome | Do not display the startup ASCII art for the agent process.
--limit | Maximum number of flow runs to start simultaneously. [default: None]
--match, -m | Dynamically matches work queue names with the specified prefix for the agent to pull from, for example dev- will match all work queues with a name that starts with dev-. [default: None]
--pool, -p | A work pool name for the agent to pull from. [default: None]
--prefetch-seconds | The amount of time before a flow run's scheduled start time to begin submission. Default is the value of PREFECT_AGENT_PREFETCH_SECONDS.
--run-once | Only run agent polling once. By default, the agent runs forever. [default: no-run-once]
--work-queue, -q | One or more work queue names for the agent to pull from. [default: None]

You must start an agent within an environment that can access or create the infrastructure needed to execute flow runs. Your agent will deploy flow runs to the infrastructure specified by the deployment.

Prefect must be installed in execution environments

Prefect must be installed in any environment in which you intend to run the agent or execute a flow run.

PREFECT_API_URL and PREFECT_API_KEY settings for agents

PREFECT_API_URL must be set for the environment in which your agent is running or specified when starting the agent with the --api flag. You must also have a user or service account with the Worker role, which can be configured by setting the PREFECT_API_KEY.

If you want an agent to communicate with Prefect Cloud or a Prefect server from a remote execution environment such as a VM or Docker container, you must configure PREFECT_API_URL in that environment.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#starting-an-agent","title":"Starting an agent","text":"

Use the prefect agent start CLI command to start an agent. You must pass at least one work pool name or match string that the agent will poll for work. If the work pool does not exist, it will be created.

$ prefect agent start -p [work pool name]\n

For example:

$ prefect agent start -p \"my-pool\"\nStarting agent with ephemeral API...\n\u00a0 ___ ___ ___ ___ ___ ___ _____ \u00a0 \u00a0 _ \u00a0 ___ ___ _\u00a0 _ _____\n\u00a0| _ \\ _ \\ __| __| __/ __|_ \u00a0 _| \u00a0 /_\\ / __| __| \\| |_ \u00a0 _|\n|\u00a0 _/ \u00a0 / _|| _|| _| (__\u00a0 | |\u00a0 \u00a0 / _ \\ (_ | _|| .` | | |\n|_| |_|_\\___|_| |___\\___| |_| \u00a0 /_/ \\_\\___|___|_|\\_| |_|\n\nAgent started! Looking for work from work pool 'my-pool'...\n

In this case, Prefect automatically created the my-pool work pool, since it did not already exist.

By default, the agent polls the API specified by the PREFECT_API_URL environment variable. To configure the agent to poll from a different server location, use the --api flag, specifying the URL of the server.

In addition, agents can match multiple queues in a work pool by providing a --match string instead of specifying all of the queues. The agent will poll every queue with a name that starts with the given string. New queues matching this prefix will be found by the agent without needing to restart it.

For example:

$ prefect agent start --match \"foo-\"\n

This example will poll every work queue that starts with \"foo-\".

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#configuring-prefetch_1","title":"Configuring prefetch","text":"

By default, the agent begins submission of flow runs a short time (10 seconds) before they are scheduled to run. This allows time for the infrastructure to be created, so the flow run can start on time. In some cases, infrastructure will take longer than this to actually start the flow run. In these cases, the prefetch can be increased using the --prefetch-seconds option or the PREFECT_AGENT_PREFETCH_SECONDS setting. Submission can begin an arbitrary amount of time before the flow run is scheduled to start. If this value is larger than the amount of time it takes for the infrastructure to start, the flow run will wait until its scheduled start time. This allows flow runs to start exactly on time.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#additional-resources","title":"Additional resources","text":"","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"contributing/common-mistakes/","title":"Troubleshooting","text":"

This section provides tips for troubleshooting and resolving common development mistakes and error situations.

","tags":["contributing","development","troubleshooting","errors"],"boost":2},{"location":"contributing/common-mistakes/#api-tests-return-an-unexpected-307-redirected","title":"API tests return an unexpected 307 Redirected","text":"

Summary: Requests require a trailing / in the request URL.

If you write a test that does not include a trailing / when making a request to a specific endpoint:

async def test_example(client):\n    response = await client.post(\"/my_route\")\n    assert response.status_code == 201\n

You'll see a failure like:

E       assert 307 == 201\nE        +  where 307 = <Response [307 Temporary Redirect]>.status_code\n

To resolve this, include the trailing /:

async def test_example(client):\n    response = await client.post(\"/my_route/\")\n    assert response.status_code == 201\n

Note: requests to nested URLs may exhibit the opposite behavior and require no trailing slash:

async def test_nested_example(client):\n    response = await client.post(\"/my_route/filter/\")\n    assert response.status_code == 307\n\n    response = await client.post(\"/my_route/filter\")\n    assert response.status_code == 200\n

Reference: \"HTTPX disabled redirect following by default\" in 0.22.0.

","tags":["contributing","development","troubleshooting","errors"],"boost":2},{"location":"contributing/common-mistakes/#pytestpytestunraisableexceptionwarning-or-resourcewarning","title":"pytest.PytestUnraisableExceptionWarning or ResourceWarning","text":"

As you're working with one of the FlowRunner implementations, you may get a surprising error like:

E               pytest.PytestUnraisableExceptionWarning: Exception ignored in: <ssl.SSLSocket fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0>\nE\nE               Traceback (most recent call last):\nE                 File \".../pytest_asyncio/plugin.py\", line 306, in setup\nE                   res = await func(**_add_kwargs(func, kwargs, event_loop, request))\nE               ResourceWarning: unclosed <ssl.SSLSocket fd=10, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 60605), raddr=('127.0.0.1', 6443)>\n\n.../_pytest/unraisableexception.py:78: PytestUnraisableExceptionWarning\n

This is saying that your test suite (or the prefect library code) opened a connection to something (like a Docker daemon or a Kubernetes cluster) and didn't close it.

It may help to re-run the specific test with PYTHONTRACEMALLOC=25 pytest ... so that Python can display more of the stack trace where the connection was opened.

","tags":["contributing","development","troubleshooting","errors"],"boost":2},{"location":"contributing/overview/","title":"Contributing","text":"

Thanks for considering contributing to Prefect!

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#setting-up-a-development-environment","title":"Setting up a development environment","text":"

First, you'll need to download the source code and install an editable version of the Python package:

# Clone the repository\ngit clone https://github.com/PrefectHQ/prefect.git\ncd prefect\n\n# We recommend using a virtual environment\npython -m venv .venv\nsource .venv/bin/activate\n\n# Install the package with development dependencies\npip install -e \".[dev]\"\n\n# Setup pre-commit hooks for required formatting\npre-commit install\n

If you don't want to install the pre-commit hooks, you can manually install the formatting dependencies with:

pip install $(./scripts/precommit-versions.py)\n

You'll need to run black and ruff before a contribution can be accepted.

After installation, you can run the test suite with pytest:

# Run all the tests\npytest tests\n\n# Run a subset of tests\npytest tests/test_flows.py\n

Building the Prefect UI

If you intend to run a local Prefect server during development, you must first build the UI. See UI development for instructions.

Windows support is under development

Support for Prefect on Windows is a work in progress.

Right now, we're focused on your ability to develop and run flows and tasks on Windows, along with running the API server, orchestration engine, and UI.

Currently, we cannot guarantee that the tooling for developing Prefect itself in a Windows environment is fully functional.

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#prefect-code-of-conduct","title":"Prefect Code of Conduct","text":"","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#our-pledge","title":"Our Pledge","text":"

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#our-standards","title":"Our Standards","text":"

Examples of behavior that contributes to creating a positive environment include:

Examples of unacceptable behavior by participants include:

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#our-responsibilities","title":"Our Responsibilities","text":"

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#scope","title":"Scope","text":"

This Code of Conduct applies within all project spaces, and it also applies when an individual is representing the project or its community in public spaces. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#enforcement","title":"Enforcement","text":"

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting Chris White at chris@prefect.io. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#attribution","title":"Attribution","text":"

This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#developer-tooling","title":"Developer tooling","text":"

The Prefect CLI provides several helpful commands to aid development.

Start all services with hot-reloading on code changes (requires UI dependencies to be installed):

prefect dev start\n

Start a Prefect API that reloads on code changes:

prefect dev api\n

Start a Prefect agent that reloads on code changes:

prefect dev agent\n
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#ui-development","title":"UI development","text":"

Developing the Prefect UI requires that npm is installed.

Start a development UI that reloads on code changes:

prefect dev ui\n

Build the static UI (the UI served by prefect server start):

prefect dev build-ui\n
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#docs-development","title":"Docs Development","text":"

Prefect uses mkdocs for the docs website and the mkdocs-material theme. While we use mkdocs-material-insiders for production, builds can still happen without the extra plugins. Deploy previews are available on pull requests, so you'll be able to browse the final look of your changes before merging.

To build the docs:

mkdocs build\n

To serve the docs locally at http://127.0.0.1:8000/:

mkdocs serve\n

For additional mkdocs help and options:

mkdocs --help\n

We use the mkdocs-material theme. To add additional JavaScript or CSS to the docs, please see the theme documentation here.

Internal developers can install the production theme by running:

pip install -e git+https://github.com/PrefectHQ/mkdocs-material-insiders.git#egg=mkdocs-material\nmkdocs build # or mkdocs build --config-file mkdocs.insiders.yml if needed\n
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#kubernetes-development","title":"Kubernetes development","text":"

Generate a manifest to deploy a development API to a local Kubernetes cluster:

prefect dev kubernetes-manifest\n

To access the Prefect UI running in a Kubernetes cluster, use the kubectl port-forward command to forward a port on your local machine to an open port within the cluster. For example:

kubectl port-forward deployment/prefect-dev 4200:4200\n

This forwards port 4200 on the local loopback interface (localhost) to the Prefect server deployment.

To tell the local prefect command how to communicate with the Prefect API running in Kubernetes, set the PREFECT_API_URL environment variable:

export PREFECT_API_URL=http://localhost:4200/api\n

Since you previously configured port forwarding for the localhost port to the Kubernetes environment, you\u2019ll be able to interact with the Prefect API running in Kubernetes when using local Prefect CLI commands.

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#adding-database-migrations","title":"Adding Database Migrations","text":"

To make changes to a table, first update the SQLAlchemy model in src/prefect/server/database/orm_models.py. For example, if you wanted to add a new column to the flow_run table, you would add a new column to the FlowRun model:

# src/prefect/server/database/orm_models.py\n\n@declarative_mixin\nclass ORMFlowRun(ORMRun):\n\"\"\"SQLAlchemy model of a flow run.\"\"\"\n    ...\n    new_column = Column(String, nullable=True) # <-- add this line\n

Next, you will need to generate new migration files. You must generate a new migration file for each database type. Migrations will be generated for whatever database type PREFECT_API_DATABASE_CONNECTION_URL is set to. See here for how to set the database connection URL for each database type.

To generate a new migration file, run the following command:

prefect server database revision --autogenerate -m \"<migration name>\"\n

Try to make your migration name brief but descriptive. For example:

The --autogenerate flag will automatically generate a migration file based on the changes to the models.

Always inspect the output of --autogenerate

--autogenerate will generate a migration file based on the changes to the models. However, it is not perfect. Be sure to check the file to make sure it only includes the changes you want to make. Additionally, you may need to remove extra statements that were included and not related to your change.

The new migration can be found in the src/prefect/server/database/migrations/versions/ directory. Each database type has its own subdirectory. For example, the SQLite migrations are stored in src/prefect/server/database/migrations/versions/sqlite/.

After you have inspected the migration file, you can apply the migration to your database by running the following command:

prefect server database upgrade -y\n

Once you have successfully created and applied migrations for all database types, make sure to update MIGRATION-NOTES.md to document your additions.

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/style/","title":"Code style and practices","text":"

Generally, we follow the Google Python Style Guide. This document covers sections where we differ or where additional clarification is necessary.

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#imports","title":"Imports","text":"

A brief collection of rules and guidelines for how imports should be handled in this repository.

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#imports-in-__init__-files","title":"Imports in __init__ files","text":"

Leave __init__ files empty unless exposing an interface. If you must expose objects to present a simpler API, please follow these rules.

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#exposing-objects-from-submodules","title":"Exposing objects from submodules","text":"

If importing objects from submodules, the __init__ file should use a relative import. This is required for type checkers to understand the exposed interface.

# Correct\nfrom .flows import flow\n
# Wrong\nfrom prefect.flows import flow\n
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#exposing-submodules","title":"Exposing submodules","text":"

Generally, submodules should not be imported in the __init__ file. Submodules should only be exposed when the module is designed to be imported and used as a namespaced object.

For example, we do this for our schema and model modules because it is important to know if you are working with an API schema or database model, both of which may have similar names.

import prefect.server.schemas as schemas\n\n# The full module is accessible now\nschemas.core.FlowRun\n

If exposing a submodule, use a relative import as you would when exposing an object.

# Correct\nfrom . import flows\n
# Wrong\nimport prefect.flows\n
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#importing-to-run-side-effects","title":"Importing to run side-effects","text":"

Another use case for importing submodules is to perform global side-effects that occur when they are imported.

Often, global side-effects on import are a dangerous pattern. Avoid them if feasible.

We currently have a couple of acceptable use cases for this:

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#imports-in-modules","title":"Imports in modules","text":"","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#importing-other-modules","title":"Importing other modules","text":"

The from syntax should be reserved for importing objects from modules. Modules should not be imported using the from syntax.

# Correct\nimport prefect.server.schemas  # use with the full name\nimport prefect.server.schemas as schemas  # use the shorter name\n
# Wrong\nfrom prefect.server import schemas\n

Unless in an __init__.py file, relative imports should not be used.

# Correct\nfrom prefect.utilities.foo import bar\n
# Wrong\nfrom .utilities.foo import bar\n

Imports dependent on file location should never be used without explicit indication it is relative. This avoids confusion about the source of a module.

# Correct\nfrom . import test\n
# Wrong\nimport test\n
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#resolving-circular-dependencies","title":"Resolving circular dependencies","text":"

Sometimes, we must defer an import and perform it within a function to avoid a circular dependency.

## This function in `settings.py` requires a method from the global `context` but the context\n## uses settings\ndef from_context():\n    from prefect.context import get_profile_context\n\n    ...\n

Attempt to avoid circular dependencies. This often reveals overentanglement in the design.

When performing deferred imports, they should all be placed at the top of the function.

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#with-type-annotations","title":"With type annotations","text":"

If you are just using the imported object for a type signature, you should use the TYPE_CHECKING flag.

# Correct\nfrom typing import TYPE_CHECKING\n\nif TYPE_CHECKING:\n    from prefect.server.schemas.states import State\n\ndef foo(state: \"State\"):\n    pass\n

Note that usage of the type within the module will need quotes e.g. \"State\" since it is not available at runtime.

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#importing-optional-requirements","title":"Importing optional requirements","text":"

We do not have a best practice for this yet. See the kubernetes, docker, and distributed implementations for now.

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#delaying-expensive-imports","title":"Delaying expensive imports","text":"

Sometimes, imports are slow. We'd like to keep the prefect module import times fast. In these cases, we can lazily import the slow module by deferring import to the relevant function body. For modules that are consumed by many functions, the pattern used for optional requirements may be used instead.
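
As a minimal sketch (the module and function names are illustrative), a slow import can be deferred into the body of the function that actually needs it:

def plot_flow_run_durations(durations):\n    # Deferred import: the cost of importing matplotlib is only paid when\n    # this function is called, keeping top-level prefect imports fast\n    import matplotlib.pyplot as plt\n\n    fig, ax = plt.subplots()\n    ax.bar(range(len(durations)), durations)\n    return fig\n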

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#command-line-interface-cli-output-messages","title":"Command line interface (CLI) output messages","text":"

Upon executing a command that creates an object, the output message should offer:
- A short description of what the command just did.
- A bullet point list, rehashing user inputs, if possible.
- Next steps, like the next command to run, if applicable.
- Other relevant, pre-formatted commands that can be copied and pasted, if applicable.
- A new line before the first line and after the last line.

Output Example:

$ prefect work-queue create testing\n\nCreated work queue with properties:\n    name - 'abcde'\n    uuid - 940f9828-c820-4148-9526-ea8107082bda\n    tags - None\n    deployment_ids - None\n\nStart an agent to pick up flows from the created work queue:\n    prefect agent start -q 'abcde'\n\nInspect the created work queue:\n    prefect work-queue inspect 'abcde'\n

Additionally:

Placeholder Example:

Create a work queue with tags:\n    prefect work-queue create '<WORK QUEUE NAME>' -t '<OPTIONAL TAG 1>' -t '<OPTIONAL TAG 2>'\n

Dedent Example:

from textwrap import dedent\n...\noutput_msg = dedent(\n    f\"\"\"\n    Created work queue with properties:\n        name - {name!r}\n        uuid - {result}\n        tags - {tags or None}\n        deployment_ids - {deployment_ids or None}\n\n    Start an agent to pick up flows from the created work queue:\n        prefect agent start -q {name!r}\n\n    Inspect the created work queue:\n        prefect work-queue inspect {name!r}\n    \"\"\"\n)\n

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#api-versioning","title":"API Versioning","text":"

The Prefect client can be run separately from the Prefect orchestration server and communicate entirely via an API. Among other things, the Prefect client includes anything that runs task or flow code (e.g., agents and the Python client) or any consumer of Prefect metadata (e.g., the Prefect UI and CLI). The Prefect server stores this metadata and serves it via the REST API.

Sometimes, we make breaking changes to the API (for good reasons). In order to check that a Prefect client is compatible with the API it's making requests to, every API call the client makes includes a three-component API_VERSION header with major, minor, and patch versions.

For example, a request with the X-PREFECT-API-VERSION=3.2.1 header has a major version of 3, minor version 2, and patch version 1.
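
Purely for illustration, attaching such a header to a request might look like the snippet below; the version value and server URL are placeholders, and the Prefect client normally sets this header for you.

import httpx\n\n# Illustrative request with an explicit API version header\nheaders = {'X-PREFECT-API-VERSION': '3.2.1'}\nresponse = httpx.get('http://127.0.0.1:4200/api/health', headers=headers)\nprint(response.status_code)\n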

This version header can be changed by modifying the API_VERSION constant in prefect.server.api.server.

When making a breaking change to the API, we should consider if the change might be backwards compatible for clients, meaning that the previous version of the client would still be able to make calls against the updated version of the server code. This might happen if the changes are purely additive, such as adding a non-critical API route. In these cases, we should make sure to bump the patch version.

In almost all other cases, we should bump the minor version, which denotes a non-backwards-compatible API change. We have reserved major version changes to denote changes that are significant in some way, such as a major release milestone, even when they would otherwise be backwards compatible.

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/versioning/","title":"Versioning","text":"","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#understanding-version-numbers","title":"Understanding version numbers","text":"

Versions are composed of three parts: MAJOR.MINOR.PATCH. For example, the version 2.5.0 has a major version of 2, a minor version of 5, and a patch version of 0.

Occasionally, we will add a suffix to the version, such as rc, a, or b. These indicate pre-release versions that users can opt into installing to test functionality before it is ready for release.
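
For example, to try a pre-release you could pin the exact version when installing; the version shown here is hypothetical.

pip install -U \"prefect==2.11.0rc1\"\n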

Each release will increase one of the version numbers. If we increase a number other than the patch version, the versions to the right of it will be reset to zero. For example, a minor release following 2.5.3 would be versioned 2.6.0.

","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#prefects-versioning-scheme","title":"Prefect's versioning scheme","text":"

Prefect will increase the major version when significant and widespread changes are made to the core product. It is very unlikely that the major version will change without extensive warning.

Prefect will increase the minor version when:

Prefect will increase the patch version when:

","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#breaking-changes-and-deprecation","title":"Breaking changes and deprecation","text":"

A breaking change means that your code will need to change to use a new version of Prefect. We strive to avoid breaking changes in all releases.

At times, Prefect will deprecate a feature. This means that a feature has been marked for removal in the future. When you use it, you may see warnings that it will be removed. A feature is deprecated when it will no longer be maintained. Frequently, a deprecated feature will have a new and improved alternative. Deprecated features will be retained for at least 3 minor version increases or 6 months, whichever is longer. We may retain deprecated features longer than this time period.

Prefect will sometimes include changes to behavior to fix a bug. These changes are not categorized as breaking changes.

","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#client-compatibility-with-prefect","title":"Client compatibility with Prefect","text":"

When running a Prefect server, you are in charge of ensuring the version is compatible with those of the clients that are using the server. Prefect aims to maintain backwards compatibility with old clients for each server release. In contrast, sometimes new clients cannot be used with an old server. The new client may expect the server to support functionality that it does not yet include. For this reason, we recommend that all clients are the same version as the server or older.

For example, a client on 2.1.0 can be used with a server on 2.5.0. A client on 2.5.0 cannot be used with a server on 2.1.0.

","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#client-compatibility-with-cloud","title":"Client compatibility with Cloud","text":"

Prefect Cloud targets compatibility with all versions of Prefect clients. If you encounter a compatibility issue, please file a bug report.

","tags":["versioning","semver"],"boost":2},{"location":"getting-started/installation/","title":"Installation","text":"

Prefect requires Python 3.8 or newer.

We recommend installing Prefect using a Python virtual environment manager such as pipenv, conda, or virtualenv/venv.

Windows and Linux requirements

See Windows installation notes and Linux installation notes for details on additional installation requirements and considerations.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#install-prefect","title":"Install Prefect","text":"

The following sections describe how to install Prefect in your development or execution environment.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#installing-the-latest-version","title":"Installing the latest version","text":"

Prefect is published as a Python package. To install the latest release or upgrade an existing Prefect install, run the following command in your terminal:

pip install -U prefect\n

To install a specific version, specify the version number like this:

pip install -U \"prefect==2.10.4\"\n

See available release versions in the Prefect Release Notes.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#installing-the-bleeding-edge","title":"Installing the bleeding edge","text":"

If you'd like to test with the most up-to-date code, you can install directly off the main branch on GitHub:

pip install -U git+https://github.com/PrefectHQ/prefect\n

The main branch may not be stable

Please be aware that this method installs unreleased code and may not be stable.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#installing-for-development","title":"Installing for development","text":"

If you'd like to install a version of Prefect for development:

  1. Clone the Prefect repository.
  2. Install an editable version of the Python package with pip install -e.
  3. Install pre-commit hooks.
$ git clone https://github.com/PrefectHQ/prefect.git\n$ cd prefect\n$ pip install -e \".[dev]\"\n$ pre-commit install\n

See our Contributing guide for more details about standards and practices for contributing to Prefect.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#checking-your-installation","title":"Checking your installation","text":"

To confirm that Prefect was installed correctly, run the command prefect version to print the version and environment details to your console.

$ prefect version\n\nVersion:             2.10.21\nAPI version:         0.8.4\nPython version:      3.10.12\nGit commit:          da816542\nBuilt:               Thu, Jul 13, 2023 2:05 PM\nOS/Arch:             darwin/arm64\nProfile:              local\nServer type:         ephemeral\nServer:\n  Database:          sqlite\n  SQLite version:    3.42.0\n
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#windows-installation-notes","title":"Windows installation notes","text":"

You can install and run Prefect via Windows PowerShell, the Windows Command Prompt, or conda. After installation, you may need to manually add the Python local packages Scripts folder to your Path environment variable.

The Scripts folder path looks something like this (the username and Python version may be different on your system):

C:\\Users\\MyUserNameHere\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\Scripts\n

Watch the pip install output messages for the Scripts folder path on your system.

If you're using Windows Subsystem for Linux (WSL), see Linux installation notes.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#linux-installation-notes","title":"Linux installation notes","text":"

Linux is a popular operating system for running Prefect. You can use Prefect Cloud as your API server, or host your own Prefect server backed by PostgreSQL.

For development, you can use SQLite 3.24 or newer as your database. Note that the SQLite version bundled with certain Linux releases can be problematic; compatible releases include Ubuntu 22.04 LTS and Ubuntu 20.04 LTS.

Alternatively, you can install SQLite on Red Hat Enterprise Linux (RHEL) or use the conda virtual environment manager and configure a compatible SQLite version.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#using-a-self-signed-ssl-certificate","title":"Using a self-signed SSL certificate","text":"

If you're using a self-signed SSL certificate, you need to configure your environment to trust the certificate. You can add the certificate to your system bundle and point your tools to use that bundle by configuring the SSL_CERT_FILE environment variable.

If the certificate is not part of your system bundle, you can set the PREFECT_API_TLS_INSECURE_SKIP_VERIFY setting to True to disable certificate verification altogether.

Note: Disabling certificate validation is insecure and only suggested as an option for testing!

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#proxies","title":"Proxies","text":"

Prefect supports communicating via proxies through environment variables. Simply set HTTPS_PROXY and SSL_CERT_FILE in your environment, and the underlying network libraries will route Prefect\u2019s requests appropriately. Read more about using Prefect Cloud with proxies here.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#external-requirements","title":"External requirements","text":"","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#sqlite","title":"SQLite","text":"

You can use Prefect Cloud as your API server, or host your own Prefect server backed by PostgreSQL.

By default, a local Prefect server instance uses SQLite as the backing database. SQLite is not packaged with the Prefect installation. Most systems will already have SQLite installed, because it is typically bundled as a part of Python. Prefect requires SQLite version 3.24.0 or later.

You can check your SQLite version by executing the following command in a terminal:

$ sqlite3 --version\n
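
You can also check the SQLite version that your Python installation is linked against, which is the library a local Prefect server will use:

import sqlite3\n\n# Version of the SQLite library bundled with your Python installation\nprint(sqlite3.sqlite_version)  # must be 3.24.0 or later for Prefect\n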

Or use the Prefect CLI command prefect version, which prints version and environment details to your console, including the server database and version. For example:

$ prefect version\nVersion:             2.10.21\nAPI version:         0.8.4\nPython version:      3.10.12\nGit commit:          a46cbebb\nBuilt:               Sat, Jul 15, 2023 7:59 AM\nOS/Arch:             darwin/arm64\nProfile:              default\nServer type:         cloud\n
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#install-sqlite-on-rhel","title":"Install SQLite on RHEL","text":"

The following steps are needed to install an appropriate version of SQLite on Red Hat Enterprise Linux (RHEL). Note that some RHEL instances have no C compiler, so you may need to check for and install gcc first:

yum install gcc\n

Download and extract the tarball for SQLite.

wget https://www.sqlite.org/2022/sqlite-autoconf-3390200.tar.gz\ntar -xzf sqlite-autoconf-3390200.tar.gz\n

Move to the extracted SQLite directory, then build and install SQLite.

cd sqlite-autoconf-3390200/\n./configure\nmake\nmake install\n

Add LD_LIBRARY_PATH to your profile.

echo 'export LD_LIBRARY_PATH=\"/usr/local/lib\"' >> /etc/profile\n

Restart your shell to register these changes.

Now you can install Prefect using pip.

pip3 install prefect\n
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#using-prefect-in-an-environment-with-http-proxies","title":"Using Prefect in an environment with HTTP proxies","text":"

If you are using Prefect Cloud or hosting your own Prefect server instance, the Prefect library will connect to the API via any proxies you have listed in the HTTP_PROXY, HTTPS_PROXY, or ALL_PROXY environment variables. You may also use the NO_PROXY environment variable to specify which hosts should not be sent through the proxy.
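
These variables are normally exported in your shell, but as a minimal sketch they can also be set from Python before Prefect makes any API calls; the proxy address below is a placeholder:

import os\n\n# Route Prefect API traffic through a corporate proxy; values are placeholders\nos.environ[\"HTTPS_PROXY\"] = \"http://proxy.example.com:8080\"\nos.environ[\"NO_PROXY\"] = \"localhost,127.0.0.1\"\n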

For more information about these environment variables, see the cURL documentation.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#next-steps","title":"Next steps","text":"

Now that you have Prefect installed and your environment configured, you may want to check out the Tutorial to get more familiar with Prefect.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"guides/","title":"Guides","text":"

Need help?

Get your questions answered with a Prefect product advocate by Booking A Rubber Duck!

","tags":["guides","how to"],"boost":2},{"location":"guides/dask-ray-task-runners/","title":"Dask and Ray task runners","text":"

Task runners provide an execution environment for tasks. In a flow decorator, you can specify a task runner to run the tasks called in that flow.

The default task runner is the ConcurrentTaskRunner.

Use .submit to run your tasks asynchronously

To run tasks asynchronously use the .submit method when you call them. If you call a task as you would normally in Python code it will run synchronously, even if you are calling the task within a flow that uses the ConcurrentTaskRunner, DaskTaskRunner, or RayTaskRunner.

Many real-world data workflows benefit from true parallel, distributed task execution. For these use cases, the following Prefect-developed task runners for parallel task execution may be installed as Prefect Integrations.

These task runners can spin up a local Dask cluster or Ray instance on the fly, or let you connect with a Dask or Ray environment you've set up separately. Then you can take advantage of massively parallel computing environments.

Use Dask or Ray in your flows to choose the execution environment that fits your particular needs.

To show you how they work, let's start small.

Remote storage

We recommend configuring remote file storage for task execution with DaskTaskRunner or RayTaskRunner. This ensures tasks executing in Dask or Ray have access to task result storage, particularly when accessing a Dask or Ray instance outside of your execution environment.
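
For example, here is a minimal sketch of pointing task results at remote storage. It assumes an S3 filesystem block already exists under the hypothetical name my-results; any remote filesystem block can be used the same way:

from prefect import flow, task\nfrom prefect.filesystems import S3\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task(persist_result=True)\ndef add_one(x):\n    return x + 1\n\n# \"my-results\" is a placeholder name for an S3 filesystem block you have already created\n@flow(task_runner=DaskTaskRunner(), result_storage=S3.load(\"my-results\"))\ndef pipeline():\n    future = add_one.submit(41)\n    return future.result()\n\nif __name__ == \"__main__\":\n    pipeline()\n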

","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/dask-ray-task-runners/#configuring-a-task-runner","title":"Configuring a task runner","text":"

You may have seen this briefly in a previous tutorial, but let's look a bit more closely at how you can configure a specific task runner for a flow.

Let's start with the SequentialTaskRunner. This task runner runs all tasks synchronously and may be useful when used as a debugging tool in conjunction with async code.

Let's start with this simple flow. We import the SequentialTaskRunner, specify a task_runner on the flow, and call the tasks with .submit().

from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n@flow(task_runner=SequentialTaskRunner())\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n\ngreetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n

Save this as sequential_flow.py and run it in a terminal. You'll see output similar to the following:

$ python sequential_flow.py\n16:51:17.967 | INFO    | prefect.engine - Created flow run 'humongous-mink' for flow 'greetings'\n16:51:17.967 | INFO    | Flow run 'humongous-mink' - Starting 'SequentialTaskRunner'; submitted tasks will be run sequentially...\n16:51:18.038 | INFO    | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-0' for task 'say_hello'\n16:51:18.038 | INFO    | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-0' immediately...\nhello arthur\n16:51:18.060 | INFO    | Task run 'say_hello-811087cd-0' - Finished in state Completed()\n16:51:18.107 | INFO    | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-0' for task 'say_goodbye'\n16:51:18.107 | INFO    | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-0' immediately...\ngoodbye arthur\n16:51:18.123 | INFO    | Task run 'say_goodbye-261e56a8-0' - Finished in state Completed()\n16:51:18.134 | INFO    | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-1' for task 'say_hello'\n16:51:18.134 | INFO    | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-1' immediately...\nhello trillian\n16:51:18.150 | INFO    | Task run 'say_hello-811087cd-1' - Finished in state Completed()\n16:51:18.159 | INFO    | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-1' for task 'say_goodbye'\n16:51:18.159 | INFO    | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-1' immediately...\ngoodbye trillian\n16:51:18.181 | INFO    | Task run 'say_goodbye-261e56a8-1' - Finished in state Completed()\n16:51:18.190 | INFO    | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-2' for task 'say_hello'\n16:51:18.190 | INFO    | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-2' immediately...\nhello ford\n16:51:18.210 | INFO    | Task run 'say_hello-811087cd-2' - Finished in state Completed()\n16:51:18.219 | INFO    | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-2' for task 'say_goodbye'\n16:51:18.219 | INFO    | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-2' immediately...\ngoodbye ford\n16:51:18.237 | INFO    | Task run 'say_goodbye-261e56a8-2' - Finished in state Completed()\n16:51:18.246 | INFO    | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-3' for task 'say_hello'\n16:51:18.246 | INFO    | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-3' immediately...\nhello marvin\n16:51:18.264 | INFO    | Task run 'say_hello-811087cd-3' - Finished in state Completed()\n16:51:18.273 | INFO    | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-3' for task 'say_goodbye'\n16:51:18.273 | INFO    | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-3' immediately...\ngoodbye marvin\n16:51:18.290 | INFO    | Task run 'say_goodbye-261e56a8-3' - Finished in state Completed()\n16:51:18.321 | INFO    | Flow run 'humongous-mink' - Finished in state Completed('All states completed.')\n

If we take out the log messages and just look at the printed output of the tasks, you see they're executed in sequential order:

$ python sequential_flow.py\nhello arthur\ngoodbye arthur\nhello trillian\ngoodbye trillian\nhello ford\ngoodbye ford\nhello marvin\ngoodbye marvin\n
","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/dask-ray-task-runners/#running-parallel-tasks-with-dask","title":"Running parallel tasks with Dask","text":"

You could argue that this simple flow gains nothing from parallel execution, but let's roll with it so you can see just how simple it is to take advantage of the DaskTaskRunner.

To configure your flow to use the DaskTaskRunner:

  1. Make sure the prefect-dask collection is installed by running pip install prefect-dask.
  2. In your flow code, import DaskTaskRunner from prefect_dask.task_runners.
  3. Assign it as the task runner when the flow is defined using the task_runner=DaskTaskRunner argument.
  4. Use the .submit method when calling functions.

This is the same flow as above, with a few minor changes to use DaskTaskRunner where we previously configured SequentialTaskRunner. Install prefect-dask, make these changes, then save the updated code as dask_flow.py.

from prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n@flow(task_runner=DaskTaskRunner())\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n

Note that, because you're using DaskTaskRunner in a script, you must use if __name__ == \"__main__\": or you'll see warnings and errors.

Now run dask_flow.py. If you get a warning about accepting incoming network connections, that's okay - everything is local in this example.

$ python dask_flow.py\n16:54:18.465 | INFO    | prefect.engine - Created flow run 'radical-finch' for flow 'greetings'\n16:54:18.465 | INFO    | Flow run 'radical-finch' - Starting 'DaskTaskRunner'; submitted tasks will be run concurrently...\n16:54:18.465 | INFO    | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n16:54:19.811 | INFO    | prefect.task_runner.dask - The Dask dashboard is available at http://127.0.0.1:8787/status\n16:54:19.881 | INFO    | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-0' for task 'say_hello'\n16:54:20.364 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-0' for execution.\n16:54:20.379 | INFO    | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-0' for task 'say_goodbye'\n16:54:20.386 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-0' for execution.\n16:54:20.397 | INFO    | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-1' for task 'say_hello'\n16:54:20.401 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-1' for execution.\n16:54:20.417 | INFO    | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-1' for task 'say_goodbye'\n16:54:20.423 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-1' for execution.\n16:54:20.443 | INFO    | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-2' for task 'say_hello'\n16:54:20.449 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-2' for execution.\n16:54:20.462 | INFO    | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-2' for task 'say_goodbye'\n16:54:20.474 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-2' for execution.\n16:54:20.500 | INFO    | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-3' for task 'say_hello'\n16:54:20.511 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-3' for execution.\n16:54:20.544 | INFO    | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-3' for task 'say_goodbye'\n16:54:20.555 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-3' for execution.\nhello arthur\ngoodbye ford\ngoodbye arthur\nhello ford\ngoodbye marvin\ngoodbye trillian\nhello trillian\nhello marvin\n

DaskTaskRunner automatically creates a local Dask cluster, then starts executing all of the tasks in parallel. The results do not return in the same order as the sequential code above.

Notice what happens if you do not use the submit method when calling tasks:

from prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n\n@flow(task_runner=DaskTaskRunner())\ndef greetings(names):\n    for name in names:\n        say_hello(name)\n        say_goodbye(name)\n\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n
$ python dask_flow.py\n\n16:57:34.534 | INFO    | prefect.engine - Created flow run 'papaya-honeybee' for flow 'greetings'\n16:57:34.534 | INFO    | Flow run 'papaya-honeybee' - Starting 'DaskTaskRunner'; submitted tasks will be run concurrently...\n16:57:34.535 | INFO    | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n16:57:35.715 | INFO    | prefect.task_runner.dask - The Dask dashboard is available at http://127.0.0.1:8787/status\n16:57:35.787 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-0' for task 'say_hello'\n16:57:35.788 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-0' immediately...\nhello arthur\n16:57:35.810 | INFO    | Task run 'say_hello-811087cd-0' - Finished in state Completed()\n16:57:35.820 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-0' for task 'say_goodbye'\n16:57:35.820 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-0' immediately...\ngoodbye arthur\n16:57:35.840 | INFO    | Task run 'say_goodbye-261e56a8-0' - Finished in state Completed()\n16:57:35.849 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-1' for task 'say_hello'\n16:57:35.849 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-1' immediately...\nhello trillian\n16:57:35.869 | INFO    | Task run 'say_hello-811087cd-1' - Finished in state Completed()\n16:57:35.878 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-1' for task 'say_goodbye'\n16:57:35.878 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-1' immediately...\ngoodbye trillian\n16:57:35.894 | INFO    | Task run 'say_goodbye-261e56a8-1' - Finished in state Completed()\n16:57:35.907 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-2' for task 'say_hello'\n16:57:35.907 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-2' immediately...\nhello ford\n16:57:35.924 | INFO    | Task run 'say_hello-811087cd-2' - Finished in state Completed()\n16:57:35.933 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-2' for task 'say_goodbye'\n16:57:35.933 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-2' immediately...\ngoodbye ford\n16:57:35.951 | INFO    | Task run 'say_goodbye-261e56a8-2' - Finished in state Completed()\n16:57:35.959 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-3' for task 'say_hello'\n16:57:35.959 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-3' immediately...\nhello marvin\n16:57:35.976 | INFO    | Task run 'say_hello-811087cd-3' - Finished in state Completed()\n16:57:35.985 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-3' for task 'say_goodbye'\n16:57:35.985 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-3' immediately...\ngoodbye marvin\n16:57:36.004 | INFO    | Task run 'say_goodbye-261e56a8-3' - Finished in state Completed()\n16:57:36.289 | INFO    | Flow run 'papaya-honeybee' - Finished in state Completed('All states completed.')\n

The tasks are not submitted to the DaskTaskRunner and are run sequentially.

","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/dask-ray-task-runners/#running-parallel-tasks-with-ray","title":"Running parallel tasks with Ray","text":"

To demonstrate the ability to flexibly apply the task runner appropriate for your workflow, use the same flow as above, with a few minor changes to use the RayTaskRunner where we previously configured DaskTaskRunner.

To configure your flow to use the RayTaskRunner:

  1. Make sure the prefect-ray collection is installed by running pip install prefect-ray.
  2. In your flow code, import RayTaskRunner from prefect_ray.task_runners.
  3. Assign it as the task runner when the flow is defined using the task_runner=RayTaskRunner argument.

Ray environment limitations

While we're excited about bringing parallel task execution via Ray to Prefect, there are some inherent limitations with Ray you should be aware of:

See the Ray installation documentation for further compatibility information.

Save this code in ray_flow.py.

from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n@flow(task_runner=RayTaskRunner())\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n

Now run ray_flow.py. RayTaskRunner automatically creates a local Ray instance, then immediately starts executing all of the tasks in parallel. If you have an existing Ray instance, you can provide the address as a parameter to run tasks in the instance. See Running tasks on Ray for details.
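
As a minimal sketch, connecting to an existing Ray cluster instead of a temporary local instance looks like this; the address is a placeholder for your Ray head node:

from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n# The address below is a placeholder; point it at your Ray head node\n@flow(task_runner=RayTaskRunner(address=\"ray://<head-node-ip>:10001\"))\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n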

","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/dask-ray-task-runners/#using-multiple-task-runners","title":"Using multiple task runners","text":"

Many workflows include a variety of tasks, and not all of them benefit from parallel execution. You'll most likely want to use the Dask or Ray task runners and spin up their respective resources only for those tasks that need them.

Because task runners are specified on flows, you can assign different task runners to tasks by using subflows to organize those tasks.

This example uses the same tasks as the previous examples, but on the parent flow greetings() we use the default ConcurrentTaskRunner. Then we call a ray_greetings() subflow that uses the RayTaskRunner to execute the same tasks in a Ray instance.

from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n@flow(task_runner=RayTaskRunner())\ndef ray_greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n\n@flow()\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n    ray_greetings(names)\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n

If you save this as ray_subflow.py and run it, you'll see that the flow greetings runs as you'd expect for a concurrent flow, then flow ray-greetings spins up a Ray instance to run the tasks again.

","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/migration-guide/","title":"Migrating from Prefect 1 to Prefect 2","text":"

This guide is designed to help you migrate your workflows from Prefect 1 to Prefect 2.

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-stayed-the-same","title":"What stayed the same","text":"

Prefect 2 still:

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-changed","title":"What changed","text":"

Prefect 2 requires modifications to your existing tasks, flows, and deployment patterns. We've organized this section into the following categories:

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#simplified-patterns","title":"Simplified patterns","text":"

Since Prefect 2 allows running native Python code within the flow function, some abstractions are no longer necessary:

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#conceptual-and-syntax-changes","title":"Conceptual and syntax changes","text":"

The changes listed below require you to modify your workflow code. The following table shows how Prefect 1 concepts have been implemented in Prefect 2. The last column contains references to additional resources that provide more details and examples.

| Concept | Prefect 1 | Prefect 2 | Reference links |
| --- | --- | --- | --- |
| Flow definition. | with Flow(\"flow_name\") as flow: | @flow(name=\"flow_name\") | How can I define a flow? |
| Flow executor that determines how to execute your task runs. | Executor such as LocalExecutor. | Task runner such as ConcurrentTaskRunner. | What is the default TaskRunner (executor)? |
| Configuration that determines how and where to execute your flow runs. | Run configuration such as flow.run_config = DockerRun(). | Create an infrastructure block such as a Docker Container and specify it as the infrastructure when creating a deployment. | How can I run my flow in a Docker container? |
| Assignment of schedules and default parameter values. | Schedules are attached to the flow object and default parameter values are defined within the Parameter tasks. | Schedules and default parameters are assigned to a flow\u2019s Deployment, rather than to a Flow object. | How can I attach a schedule to a flow? |
| Retries | @task(max_retries=2, retry_delay=timedelta(seconds=5)) | @task(retries=2, retry_delay_seconds=5) | How can I specify the retry behavior for a specific task? |
| Logger syntax. | Logger is retrieved from prefect.context and can only be used within tasks. | In Prefect 2, you can log not only from tasks, but also within flows. To get the logger object, use: prefect.get_run_logger(). | How can I add logs to my flow? |
| The syntax and contents of Prefect context. | Context is a thread-safe way of accessing variables related to the flow run and task run. The syntax to retrieve it: prefect.context. | Context is still available, but its content is much richer, allowing you to retrieve even more information about your flow runs and task runs. The syntax to retrieve it: prefect.context.get_run_context(). | How to access Prefect context values? |
| Task library. | Included in the main Prefect Core repository. | Separated into individual repositories per system, cloud provider, or technology. | How to migrate Prefect 1 tasks to Prefect 2 integrations. |
","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-changed-in-dataflow-orchestration","title":"What changed in dataflow orchestration?","text":"

Let\u2019s look at the differences in how Prefect 2 transitions your flow and task runs between various execution states.

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-changed-in-flow-deployment-patterns","title":"What changed in flow deployment patterns?","text":"

To deploy your Prefect 1 flows, you had to send flow metadata to the backend in a step called registration. Prefect 2 no longer requires flow pre-registration. Instead, you create a Deployment that specifies the entry point to your flow code and optionally specifies:

The API is now implemented as a REST API rather than GraphQL. This page illustrates how you can interact with the API.

In Prefect 1, the logical grouping of flows was based on projects. Prefect 2 provides a much more flexible way of organizing your flows, tasks, and deployments through customizable filters and\u00a0tags. This page provides more details on how to assign tags to various Prefect 2 objects.

The role of agents has changed:

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#new-features-introduced-in-prefect-2","title":"New features introduced in Prefect 2","text":"

The following new components and capabilities are enabled by Prefect 2.

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#orchestration-behind-the-api","title":"Orchestration behind the API","text":"

Apart from new features, Prefect 2 simplifies many usage patterns and provides a much more seamless onboarding experience.

Every time you run a flow, whether it is tracked by the API server or ad-hoc through a Python script, it is on the same UI page for easier debugging and observability.

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#code-as-workflows","title":"Code as workflows","text":"

With Prefect 2, your functions\u00a0are\u00a0your flows and tasks. Prefect 2 automatically detects your flows and tasks without the need to define a rigid DAG structure. While use of tasks is encouraged to provide you the maximum visibility into your workflows, they are no longer required. You can add a single @flow decorator to your main function to transform any Python script into a Prefect workflow.

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#incremental-adoption","title":"Incremental adoption","text":"

The built-in SQLite database automatically tracks all your locally executed flow runs. As soon as you start a Prefect server and open the Prefect UI in your browser (or authenticate your CLI with your Prefect Cloud workspace), you can see all your locally executed flow runs in the UI. You don't even need to start an agent.

Then, when you want to move toward scheduled, repeatable workflows, you can build a deployment and send it to the server by running a CLI command or a Python script.

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#fewer-ambiguities","title":"Fewer ambiguities","text":"

Prefect 2 eliminates ambiguities in many ways. For example, there is no more confusion between Prefect Core and Prefect Server \u2014 Prefect 2 unifies those into a single open source product. This product is also much easier to deploy with no requirement for Docker or docker-compose.

If you want to switch your backend to use Prefect Cloud for an easier production-level managed experience, Prefect profiles let you quickly connect to your workspace.

In Prefect 1, there are several confusing ways you could implement caching. Prefect 2 resolves those ambiguities by providing a single cache_key_fn function paired with cache_expiration, allowing you to define arbitrary caching mechanisms \u2014 no more confusion about whether you need to use cache_for, cache_validator, or file-based caching using targets.
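
For example, a minimal sketch of the Prefect 2 caching pattern using the built-in task_input_hash helper:

from datetime import timedelta\n\nimport httpx\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\n\n# Cache results keyed on the task's inputs and reuse them for up to one hour\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))\ndef fetch_repo(url: str):\n    return httpx.get(url).json()\n\n@flow\ndef cached_flow():\n    # Repeated calls with the same URL within the hour reuse the cached result\n    return fetch_repo(\"https://api.github.com/repos/PrefectHQ/prefect\")\n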

For more details on how to configure caching, check out the following resources:

A similarly confusing concept in Prefect 1 was distinguishing between the functional and imperative APIs. This distinction caused ambiguities with respect to how to define state dependencies between tasks. Prefect 1 users were often unsure whether they should use the functional upstream_tasks keyword argument or the imperative methods such as task.set_upstream(), task.set_downstream(), or flow.set_dependencies(). In Prefect 2, there is only the functional API.
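
In Prefect 2, an explicit ordering dependency between tasks that share no data can be expressed with the wait_for keyword, for example:

from prefect import flow, task\n\n@task\ndef extract():\n    return [1, 2, 3]\n\n@task\ndef cleanup():\n    print(\"cleaning up\")\n\n@flow\ndef pipeline():\n    data = extract.submit()\n    # run cleanup only after extract finishes, even though it takes no data from it\n    cleanup.submit(wait_for=[data])\n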

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#next-steps","title":"Next steps","text":"

We know migrations can be tough. We encourage you to take it step-by-step and experiment with the new features.

To make the migration process easier for you:

Happy Engineering!

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/push-work-pools/","title":"Push Work to Serverless Computing Infrastructure","text":"

Push work pools are a special type of work pool that allows Prefect Cloud to submit flow runs for execution to serverless computing infrastructure without running a worker. Push work pools currently support execution in GCP Cloud Run Jobs, Azure Container Instances, and AWS ECS Tasks.

In this guide you will:

","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/push-work-pools/#setup","title":"Setup","text":"Google Cloud RunAWS ECSAzure Container Instance

To push work to Cloud Run, a GCP service account and an API Key are required.

Create a service account by navigating to the service accounts page and clicking Create. Name and describe your service account, and click continue to configure permissions.

The service account must have two roles at a minimum, Cloud Run Developer, and Service Account User.

Once the Service account is created, navigate to its Keys page to add an API key. Create a JSON type key, download it, and store it somewhere safe for use in the next section.

To push work to ECS, AWS credentials are required.

Create a user and attach the AmazonECS_FullAccess permissions.

From that user's page create credentials and store them somewhere safe for use in the next section.

To push work to Azure, an Azure subscription, a resource group, and an app registration with a client secret are required.

","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/push-work-pools/#create-subscription-and-resource-worker","title":"Create Subscription and Resource Worker","text":"
  1. In the Azure portal, create a subscription.
  2. Create a resource group within your subscription.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/push-work-pools/#create-app-registration","title":"Create App Registration","text":"
  1. In the Azure portal, create an app registration.
  2. In the app registration, create a client secret. Copy the value and store it somewhere safe.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/push-work-pools/#add-app-registration-to-subscription","title":"Add App Registration to Subscription","text":"
  1. Navigate to the resource group you created earlier.
  2. Click on \"Access control (IAM)\" and then \"Role assignments\".
  3. Search for the app registration and select it. Give it a role that has sufficient privileges to create, run, and delete ACI container groups.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/push-work-pools/#work-pool-configuration","title":"Work Pool Configuration","text":"

Our push work pool will store information about what type of infrastructure we're running on, what default values to provide to compute jobs, and other important execution environment parameters. Because our push work pool needs to integrate securely with your serverless infrastructure, we need to start by storing our credentials in Prefect Cloud, which we'll do by making a block.

","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/push-work-pools/#creating-a-credentials-block","title":"Creating a Credentials Block","text":"Google Cloud RunAWS ECSAzure Container Instance

Navigate to the blocks page, click create new block, and select GCP Credentials for the type.

For use in a push work pool, this block must have the contents of the JSON key stored in the Service Account Info field.

Provide any other optional information and create your block.

Navigate to the blocks page, click create new block, and select AWS Credentials for the type.

For use in a push work pool, this block must have the region and cluster name filled out, in addition to the access key and access key secret.

Provide any other optional information and create your block.

Navigate to the blocks page, click create new block, and select Azure Container Instance Credentials for the type.

Locate the client ID and tenant ID on your app registration and use the client secret you saved earlier.

Provide any other optional information and create your block.

","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/push-work-pools/#create-push-work-pool","title":"Create Push Work Pool","text":"

Now navigate to work pools and click create to start configuring your push work pool by selecting a push option in the infrastructure type step.

Google Cloud RunAWS ECSAzure Container Instance

Each step has several optional fields that are detailed in the work pools documentation. For our purposes, select the block you created under the GCP Credentials field. This will allow Prefect Cloud to securely interact with your GCP project.

Each step has several optional fields that are detailed in the work pools documentation. For our purposes, select the block you created under the AWS Credentials field. This will allow Prefect Cloud to securely interact with your ECS cluster.

Fill in the subscription ID and resource group name from the resource group you created. Add the Azure Container Instance Credentials block you created in the step above.

Create your pool and you are ready to deploy flows to your Push work pool.

","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/push-work-pools/#deployment","title":"Deployment","text":"

Deployment details are described in the deployments concept section. Your deployment needs to be configured to send flow runs to our push work pool. For example, if you create a deployment through the interactive command line experience, choose the work pool you just created. If you are deploying an existing prefect.yaml file, the deployment would contain:

  work_pool:\n    name: my-push-pool\n

Deploying your flow to the my-push-pool work pool will ensure that runs that are ready for execution will be submitted immediately, without the need for a worker to poll for them.
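
If you prefer to register the deployment from Python instead of the CLI or prefect.yaml, a sketch along these lines targets the push pool; the flow import, deployment name, and pool name are placeholders:

from prefect.deployments import Deployment\n\nfrom my_flows import my_flow  # placeholder import for your flow\n\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"my-push-deployment\",\n    work_pool_name=\"my-push-pool\",  # the push work pool created above\n)\ndeployment.apply()\n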

","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/push-work-pools/#putting-it-all-together","title":"Putting it all Together","text":"

With your deployment created, navigate to its detail page and create a new flow run. You'll see the flow start running without ever having to poll the work pool, because Prefect Cloud securely connected to your serverless infrastructure, created a job, ran the job, and began reporting on its execution.

","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/state-change-hooks/","title":"State Change Hooks","text":"

You've read about how state change hooks execute code in response to changes in flow or task run states, enabling you to define actions for specific state transitions in a workflow. Now let's see some real-world use cases!

","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/state-change-hooks/#example-use-cases","title":"Example use cases","text":"","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/state-change-hooks/#send-a-notification-when-a-flow-run-fails","title":"Send a notification when a flow run fails","text":"

State change hooks enable you to customize messages sent when tasks transition between states, such as sending notifications containing sensitive information when tasks enter a Failed state. Let's run a client-side hook upon a flow run entering a Failed state.

from prefect import flow\nfrom prefect.blocks.core import Block\nfrom prefect.settings import PREFECT_API_URL\n\ndef notify_slack(flow, flow_run, state):\n    slack_webhook_block = Block.load(\n        \"slack-webhook/my-slack-webhook\"\n    )\n\n    slack_webhook_block.notify(\n        (\n            f\"Your job {flow_run.name} entered {state.name} \"\n            f\"with message:\\n\\n\"\n            f\"See <https://{PREFECT_API_URL.value()}/flow-runs/\"\n            f\"flow-run/{flow_run.id}|the flow run in the UI>\\n\\n\"\n            f\"Tags: {flow_run.tags}\\n\\n\"\n            f\"Scheduled start: {flow_run.expected_start_time}\"\n        )\n    )\n\n@flow(on_failure=[notify_slack], retries=1)\ndef failing_flow():\n    raise ValueError(\"oops!\")\n\nif __name__ == \"__main__\":\n    failing_flow()\n

Note that because we've configured retries here, the on_failure hook will not run until all retries have completed, when the flow run finally enters a Failed state.

","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/state-change-hooks/#delete-a-cloud-run-job-when-a-flow-crashes","title":"Delete a Cloud Run job when a flow crashes","text":"

State change hooks can aid in managing infrastructure cleanup in scenarios where tasks spin up individual infrastructure resources independently of Prefect. When a flow run crashes, tasks may exit abruptly, resulting in the potential omission of cleanup logic within the tasks. State change hooks can be used to ensure infrastructure is properly cleaned up even when a flow run enters a Crashed state.

Let's create a hook that deletes a Cloud Run job if the flow run crashes.

import os\nfrom prefect import flow, task\nfrom prefect.blocks.system import String\nfrom prefect.client import get_client\nimport prefect.runtime\n\nasync def delete_cloud_run_job(flow, flow_run, state):\n    \"\"\"Flow run state change hook that deletes a Cloud Run Job if\n    the flow run crashes.\"\"\"\n\n    # retrieve Cloud Run job name\n    cloud_run_job_name = await String.load(\n        name=\"crashing-flow-cloud-run-job\"\n    )\n\n    # delete Cloud Run job\n    delete_job_command = (\n        \"yes | gcloud beta run jobs delete \"\n        f\"{cloud_run_job_name.value} --region us-central1\"\n    )\n    os.system(delete_job_command)\n\n    # clean up the Cloud Run job string block as well\n    async with get_client() as client:\n        block_document = await client.read_block_document_by_name(\n            \"crashing-flow-cloud-run-job\", block_type_slug=\"string\"\n        )\n        await client.delete_block_document(block_document.id)\n\n@task\ndef my_task_that_crashes():\n    raise SystemExit(\"Crashing on purpose!\")\n\n@flow(on_crashed=[delete_cloud_run_job])\ndef crashing_flow():\n    \"\"\"Save the flow run name (i.e. Cloud Run job name) as a\n    String block. It then executes a task that ends up crashing.\"\"\"\n    flow_run_name = prefect.runtime.flow_run.name\n    cloud_run_job_name = String(value=flow_run_name)\n    cloud_run_job_name.save(\n        name=\"crashing-flow-cloud-run-job\", overwrite=True\n    )\n\n    my_task_that_crashes()\n\nif __name__ == \"__main__\":\n    crashing_flow()\n
","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/testing/","title":"Testing","text":"

Once you have some awesome flows, you probably want to test them!

","tags":["testing","unit testing","development"],"boost":2},{"location":"guides/testing/#unit-testing-flows","title":"Unit testing flows","text":"

Prefect provides a simple context manager for unit tests that allows you to run flows and tasks against a temporary local SQLite database.

from prefect import flow\nfrom prefect.testing.utilities import prefect_test_harness\n\n@flow\ndef my_favorite_flow():\n    return 42\n\ndef test_my_favorite_flow():\n  with prefect_test_harness():\n      # run the flow against a temporary testing database\n      assert my_favorite_flow() == 42 \n

For more extensive testing, you can leverage prefect_test_harness as a fixture in your unit testing framework. For example, when using pytest:

from prefect import flow\nimport pytest\nfrom prefect.testing.utilities import prefect_test_harness\n\n@pytest.fixture(autouse=True, scope=\"session\")\ndef prefect_test_fixture():\n    with prefect_test_harness():\n        yield\n\n@flow\ndef my_favorite_flow():\n    return 42\n\ndef test_my_favorite_flow():\n    assert my_favorite_flow() == 42\n

Note

In this example, the fixture is scoped to run once for the entire test session. In most cases, you will not need a clean database for each test and just want to isolate your test runs to a test database. Creating a new test database per test creates significant overhead, so we recommend scoping the fixture to the session. If you need to isolate some tests fully, you can use the test harness again to create a fresh database.

","tags":["testing","unit testing","development"],"boost":2},{"location":"guides/testing/#unit-testing-tasks","title":"Unit testing tasks","text":"

To test an individual task, you can access the original function using .fn:

from prefect import flow, task\n\n@task\ndef my_favorite_task():\n    return 42\n\n@flow\ndef my_favorite_flow():\n    val = my_favorite_task()\n    return val\n\ndef test_my_favorite_task():\n    assert my_favorite_task.fn() == 42\n
","tags":["testing","unit testing","development"],"boost":2},{"location":"guides/troubleshooting/","title":"Troubleshooting","text":"

Don't Panic! If you experience an error with Prefect, there are many paths to understanding and resolving it. The first troubleshooting step is confirming that you are running the latest version of Prefect. If you are not, be sure to upgrade to the latest version, since the issue may have already been fixed. Beyond that, there are several categories of errors:

","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#upgrade","title":"Upgrade","text":"

Prefect is constantly evolving, adding new features and fixing bugs. Chances are that a patch has already been identified and released. Search existing issues for similar reports and check out the Release Notes. Upgrade to the newest version with the following command:

pip install --upgrade prefect\n

Different components may use different versions of Prefect:

Integration Versions

Keep in mind that integrations are versioned and released independently of the core Prefect library. They should be upgraded simultaneously with the core library, using the same method.

","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#logs","title":"Logs","text":"

In many cases, there will be an informative stack trace in Prefect's logs. Read it carefully, locate the source of the error, and try to identify the cause.

There are two types of logs:

If your flow and task logs are empty, there may have been an infrastructure issue that prevented your flow from starting. Check your agent logs for more details.

If there is no clear indication of what went wrong, try updating the logging level from the default INFO level to the DEBUG level. Settings such as the logging level are propagated from the agent or worker environment to the flow run environment and can be set via environment variables or the prefect config set CLI:

# Using the CLI\nprefect config set PREFECT_LOGGING_LEVEL=DEBUG\n\n# Using environment variables\nexport PREFECT_LOGGING_LEVEL=DEBUG\n

The DEBUG logging level produces a high volume of logs so consider setting it back to INFO once any issues are resolved.

","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#cloud","title":"Cloud","text":"

When using Prefect Cloud, there are the additional concerns of authentication and authorization. The Prefect API authenticates users and service accounts - collectively known as actors - with API keys. Missing, incorrect, or expired API keys will result in a 401 response with detail Invalid authentication credentials. Use the following command to check your authentication, replacing $PREFECT_API_KEY with your API key:

curl -s -H \"Authorization: Bearer $PREFECT_API_KEY\" \"https://api.prefect.cloud/api/me/\"\n

Users vs Service Accounts

Service accounts - sometimes referred to as bots - represent non-human actors that interact with Prefect such as agents, workers, and CI/CD systems. Each human that interacts with Prefect should be represented as a user. User API keys start with pnu_ and service account API keys start with pnb_.

Supposing the response succeeds, let's check our authorization. Actors can be members of workspaces. An actor attempting an action in a workspace they are not a member of will result in a 404 response. Use the following command to check your actor's workspace memberships:

curl -s -H \"Authorization: Bearer $PREFECT_API_KEY\" \"https://api.prefect.cloud/api/me/workspaces\"\n

Formatting JSON

Python comes with a helpful tool for formatting JSON. Append the following to the end of the command above to make the output more readable: | python -m json.tool

Make sure your actor is a member of the workspace you are working in. Within a workspace, an actor has a role which grants them certain permissions. Insufficient permissions will result in an error. For example, starting an agent or worker with the Viewer role will result in errors.

","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#execution","title":"Execution","text":"

Prefect flows can be executed locally by the user, or remotely by an agent or worker. Local execution generally means that you - the user - run your flow directly with a command like python flow.py. Remote execution generally means that an agent or worker runs your flow via a deployment, optionally on different infrastructure.

With remote execution, the creation of your flow run happens separately from its execution. Flow runs are assigned to a work pool and a work queue. In order for flow runs to execute, an agent or worker must be subscribed to the work pool and work queue, otherwise the flow runs will go from Scheduled to Late. Ensure that your work pool and work queue have a subscribed agent or worker.

Local and remote execution can also differ in their treatment of relative imports. If switching from local to remote execution results in local import errors, try replicating the behavior by executing the flow locally with the -m flag (i.e. python -m flow instead of python flow.py). Read more about -m here.

","tags":["troubleshooting","guides","how to"]},{"location":"guides/deployment/aci/","title":"Run an Agent with Azure Container Instances","text":"

Microsoft Azure Container Instances (ACI) provides a convenient and simple service for quickly spinning up a Docker container that can host a Prefect Agent and execute flow runs.

","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#prerequisites","title":"Prerequisites","text":"

To follow this quickstart, you'll need the following:

","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#create-a-resource-group","title":"Create a resource group","text":"

Like most Azure resources, ACI applications must live in a resource group. If you don\u2019t already have a resource group you\u2019d like to use, create a new one by running the az group create command. For example, the following command creates a resource group called prefect-agents in the eastus region:

az group create --name prefect-agents --location eastus\n

Feel free to change the group name or location to match your use case. You can also run az account list-locations -o table to see all available resource group locations for your account.

","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#create-the-container-instance","title":"Create the container instance","text":"

Prefect provides pre-configured Docker images you can use to quickly stand up a container instance. These Docker images include Python and Prefect. For example, the image prefecthq/prefect:2-python3.10 includes the latest release version of Prefect and Python 3.10.

To create the container instance, use the az container create command. This example shows the syntax, but you'll need to provide the correct values for [ACCOUNT-ID],[WORKSPACE-ID], [API-KEY], and any dependencies you need to pip install on the instance. These options are discussed below.

az container create \\\n--resource-group prefect-agents \\\n--name prefect-agent-example \\\n--image prefecthq/prefect:2-python3.10 \\\n--secure-environment-variables PREFECT_API_URL='https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]' PREFECT_API_KEY='[API-KEY]' \\\n--command-line \"/bin/bash -c 'pip install adlfs s3fs requests pandas; prefect agent start -p default-agent-pool -q test'\"\n

Once the container instance is running, go to Prefect Cloud and select the Work Pools page. Select default-agent-pool, then select the Queues tab to see the work queues configured on this work pool. When the agent has started, the test work queue displays a \"Healthy\" status. This work queue and agent are ready to execute deployments configured to run on the test queue.

Agents and queues

The agent running in this container instance can now pick up and execute flow runs for any deployment configured to use the test queue on the default-agent-pool work pool.

","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#container-create-options","title":"Container create options","text":"

Let's break down the details of the az container create command used here.

The az container create command creates a new ACI container.

--resource-group prefect-agents tells Azure which resource group the new container is created in. Here, the example uses the prefect-agents resource group created earlier.

--name prefect-agent-example determines the container name you will see in the Azure Portal. You can set any name you\u2019d like here to suit your use case, but container instance names must be unique in your resource group.

--image prefecthq/prefect:2-python3.10 tells ACI which Docker image to run. The script above pulls a public Prefect image from Docker Hub. You can also build custom images and push them to a public container registry so ACI can access them. Or you can push your image to a private Azure Container Registry and use it to create a container instance.

--secure-environment-variables sets environment variables that are only visible from inside the container. They do not show up when viewing the container\u2019s metadata. You'll populate these environment variables with a few pieces of information to configure the execution environment of the container instance so it can communicate with your Prefect Cloud workspace:

--command-line lets you override the container\u2019s normal entry point and run a command instead. The script above uses this section to install the adlfs pip package so it can read flow code from Azure Blob Storage, along with s3fs, pandas, and requests. It then runs the Prefect agent, in this case using the default work pool and a test work queue. If you want to use a different work pool or queue, make sure to change these values appropriately.

","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#create-a-deployment","title":"Create a deployment","text":"

Following the example of the Flow deployments tutorial, let's create a deployment that can be executed by the agent on this container instance.

In an environment where you have installed Prefect, create a new folder called health_test, and within it create a new file called health_flow.py containing the following code.

import prefect\nfrom prefect import task, flow\nfrom prefect import get_run_logger\n\n\n@task\ndef say_hi():\n    logger = get_run_logger()\n    logger.info(\"Hello from the Health Check Flow! \ud83d\udc4b\")\n\n\n@task\ndef log_platform_info():\n    import platform\n    import sys\n    from prefect.server.api.server import SERVER_API_VERSION\n\n    logger = get_run_logger()\n    logger.info(\"Host's network name = %s\", platform.node())\n    logger.info(\"Python version = %s\", platform.python_version())\n    logger.info(\"Platform information (instance type) = %s \", platform.platform())\n    logger.info(\"OS/Arch = %s/%s\", sys.platform, platform.machine())\n    logger.info(\"Prefect Version = %s \ud83d\ude80\", prefect.__version__)\n    logger.info(\"Prefect API Version = %s\", SERVER_API_VERSION)\n\n\n@flow(name=\"Health Check Flow\")\ndef health_check_flow():\n    hi = say_hi()\n    log_platform_info(wait_for=[hi])\n

Now create a deployment for this flow script, making sure that it's configured to use the test queue on the default-agent-pool work pool.

prefect deployment build --infra process --storage-block azure/flowsville/health_test --name health-test --pool default-agent-pool --work-queue test --apply health_flow.py:health_check_flow\n

Once created, any flow runs for this deployment will be picked up by the agent running on this container instance.

Infrastructure and storage

This Prefect deployment example was built using the Process infrastructure type and Azure Blob Storage.

You might wonder why your deployment needs process infrastructure rather than DockerContainer infrastructure when you are deploying a Docker image to ACI.

A Prefect deployment\u2019s infrastructure type describes how you want Prefect agents to run flows for the deployment. With DockerContainer infrastructure, the agent will try to use Docker to spin up a new container for each flow run. Since you\u2019ll be starting your own container on ACI, you don\u2019t need Prefect to do it for you. Specifying process infrastructure on the deployment tells Prefect you want the agent to run flows by starting a process in your ACI container.

You can use any storage type as long as you've configured a block for it before creating the deployment.
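
If you haven't already configured a storage block, here is a minimal sketch of creating the Azure storage block referenced by the deployment command above via code. The block name flowsville matches that command; the container path and connection string are placeholder values you must supply:

from prefect.filesystems import Azure\n\n# Placeholder container path and connection string -- replace both with your own values.\nazure_block = Azure(\n    bucket_path=\"my-container/health_test\",\n    azure_storage_connection_string=\"<YOUR-CONNECTION-STRING>\",\n)\nazure_block.save(\"flowsville\")\n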

","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#cleaning-up","title":"Cleaning up","text":"

Note that ACI instances may incur usage charges while running, but must be running for the agent to pick up and execute flow runs.

To stop a container, use the az container stop command:

az container stop --resource-group prefect-agents --name prefect-agent-example\n

To delete a container, use the az container delete command:

az container delete --resource-group prefect-agents --name prefect-agent-example\n
","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/","title":"Developing a New Worker Type","text":"

Advanced Topic

This tutorial is for users who want to extend the Prefect framework; completing it successfully requires deep knowledge of Prefect concepts. For standard use cases, we recommend using one of the available workers instead.

Prefect workers are responsible for setting up execution infrastructure and starting flow runs on that infrastructure.

A list of available workers can be found in the Work Pools, Workers & Agents documentation. What if you want to execute your flow runs on infrastructure that doesn't have an available worker type? This tutorial will walk you through creating a custom worker that can run your flows on your chosen infrastructure.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-configuration","title":"Worker Configuration","text":"

When setting up an execution environment for a flow run, a worker receives configuration for the infrastructure it is designed to work with. Examples of configuration values include memory allocation, CPU allocation, credentials, image name, etc. The worker then uses this configuration to create the execution environment and start the flow run.

How are the configuration values populated?

The work pool that a worker polls for flow runs has a base job template associated with it. The template is the contract for how configuration values populate for each flow run.

The keys in the job_configuration section of this base job template match the worker's configuration class attributes. The values in the job_configuration section of the base job template are used to populate the attributes of the worker's configuration class.

The work pool creator gets to decide how they want to populate the values in the job_configuration section of the base job template. The values can be hard-coded, templated using placeholders, or a mix of these two approaches. Because you, as the worker developer, don't know how the work pool creator will populate the values, you should set sensible defaults for your configuration class attributes as a matter of best practice.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#implementing-a-basejobconfiguration-subclass","title":"Implementing a BaseJobConfiguration Subclass","text":"

A worker developer defines their worker's configuration to function with a class extending BaseJobConfiguration.

BaseJobConfiguration has attributes that are common to all workers:

Attribute Description name The name to assign to the created execution environment. env Environment variables to set in the created execution environment. labels The labels assigned to the created execution environment for metadata purposes. command The command to use when starting a flow run.

Prefect sets values for each attribute before giving the configuration to the worker. If you want to customize the values of these attributes, use the prepare_for_flow_run method.

Here's an example prepare_for_flow_run method that adds a label to the execution environment:

def prepare_for_flow_run(\n    self, flow_run, deployment = None, flow = None,\n):  \n    super().prepare_for_flow_run(flow_run, deployment, flow)  \n    self.labels.append(\"my-custom-label\")\n

A worker configuration class is a Pydantic model, so you can add additional attributes to your configuration class as Pydantic fields. For example, if you want to allow memory and CPU requests for your worker, you can do so like this:

from pydantic import Field\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: int = Field(\n            default=1024,\n            description=\"Memory allocation for the execution environment.\"\n        )\n    cpu: int = Field(\n            default=500, \n            description=\"CPU allocation for the execution environment.\"\n        )\n

This configuration class will populate the job_configuration section of the resulting base job template.

For this example, the base job template would look like this:

job_configuration:\nname: \"{{ name }}\"\nenv: \"{{ env }}\"\nlabels: \"{{ labels }}\"\ncommand: \"{{ command }}\"\nmemory: \"{{ memory }}\"\ncpu: \"{{ cpu }}\"\nvariables:\ntype: object\nproperties:\nname:\ntitle: Name\ndescription: Name given to infrastructure created by a worker.\ntype: string\nenv:\ntitle: Environment Variables\ndescription: Environment variables to set when starting a flow run.\ntype: object\nadditionalProperties:\ntype: string\nlabels:\ntitle: Labels\ndescription: Labels applied to infrastructure created by a worker.\ntype: object\nadditionalProperties:\ntype: string\ncommand:\ntitle: Command\ndescription: The command to use when starting a flow run. In most cases,\nthis should be left blank and the command will be automatically generated\nby the worker.\ntype: string\nmemory:\ntitle: Memory\ndescription: Memory allocation for the execution environment.\ntype: integer\ndefault: 1024\ncpu:\ntitle: CPU\ndescription: CPU allocation for the execution environment.\ntype: integer\ndefault: 500\n

This base job template defines what values can be provided by deployment creators on a per-deployment basis and how those provided values will be translated into the configuration values that the worker will use to create the execution environment.

Notice that each attribute for the class was added in the job_configuration section with placeholders whose name matches the attribute name. The variables section was also populated with the OpenAPI schema for each attribute. If a configuration class is used without explicitly declaring any template variables, the template variables will be inferred from the configuration class attributes.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#customizing-configuration-attribute-templates","title":"Customizing Configuration Attribute Templates","text":"

You can customize the template for each attribute for situations where the configuration values should use more sophisticated templating. For example, if you want to add units for the memory attribute, you can do so like this:

from pydantic import Field\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: str = Field(\n            default=\"1024Mi\",\n            description=\"Memory allocation for the execution environment.\",\n            template=\"{{ memory_request }}Mi\"\n        )\n    cpu: str = Field(\n            default=\"500m\", \n            description=\"CPU allocation for the execution environment.\",\n            template=\"{{ cpu_request }}m\"\n        )\n

Notice that we changed the type of each attribute to str to accommodate the units, and we added a new template attribute to each attribute. The template attribute is used to populate the job_configuration section of the resulting base job template.

For this example, the job_configuration section of the resulting base job template would look like this:

job_configuration:\nname: \"{{ name }}\"\nenv: \"{{ env }}\"\nlabels: \"{{ labels }}\"\ncommand: \"{{ command }}\"\nmemory: \"{{ memory_request }}Mi\"\ncpu: \"{{ cpu_request }}m\"\n

Note that to use custom templates, you will need to declare the template variables used in the template because the names of those variables can no longer be inferred from the configuration class attributes. We will cover how to declare the default variable schema in the Worker Template Variables section.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#rules-for-template-variable-interpolation","title":"Rules for Template Variable Interpolation","text":"

When defining a job configuration model, it's useful to understand how template variables are interpolated into the job configuration. The templating engine follows a few simple rules:

  1. If a template variable is the only value for a key in the job_configuration section, the key's value will be replaced with the value of the template variable.
  2. If a template variable is part of a string (i.e., there is text before or after the template variable), the value of the template variable will be interpolated into the string.
  3. If a template variable is the only value for a key in the job_configuration section and no value is provided for the template variable, the key will be removed from the job_configuration section.

These rules allow worker developers and work pool maintainers to define template variables that can be complex types like dictionaries and lists. These rules also mean that worker developers should give reasonable default values to job configuration fields whenever possible because values are not guaranteed to be provided if template variables are unset.
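
As a purely illustrative sketch of how these rules play out (this does not use Prefect internals; the keys and values are hypothetical), consider a job_configuration with one key that is a lone template variable and one key that embeds a template variable in a string:

# Hypothetical job_configuration section before interpolation:\ntemplate = {\n    \"env\": \"{{ env }}\",          # a lone template variable (rules 1 and 3 apply)\n    \"memory\": \"{{ memory }}Mi\",  # a template variable inside a string (rule 2 applies)\n}\n\n# Values provided for the template variables -- note that \"env\" is left unset:\nvalues = {\"memory\": 2048}\n\n# The interpolated configuration would then be:\nresult = {\n    # \"env\" is removed entirely because no value was provided (rule 3)\n    \"memory\": \"2048Mi\",  # the value is interpolated into the string (rule 2)\n}\n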

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#template-variable-usage-strategies","title":"Template Variable Usage Strategies","text":"

Template variables define the interface that deployment creators interact with to configure the execution environments of their deployments. The complexity of this interface can be controlled via the template variables that are defined for a base job template. This control allows work pool maintainers to find a point along the spectrum of flexibility and simplicity appropriate for their organization.

There are two patterns that are represented in current worker implementations:

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#pass-through","title":"Pass-Through","text":"

In the pass-through pattern, template variables are passed through to the job configuration with little change. This pattern exposes complete control to deployment creators but also requires them to understand the details of the execution environment.

This pattern is useful when the execution environment is simple, and the deployment creators are expected to have high technical knowledge.

The Docker worker is an example of a worker that uses this pattern.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#infrastructure-as-code-templating","title":"Infrastructure as Code Templating","text":"

Depending on the infrastructure they interact with, workers can sometimes employ a declarative infrastructure syntax (i.e., infrastructure as code) to create execution environments (e.g., a Kubernetes manifest or an ECS task definition).

In the IaC pattern, it's often useful to use template variables to template portions of the declarative syntax which then can be used to generate the declarative syntax into a final form.

This approach allows work pool creators to provide a simpler interface to deployment creators while also controlling which portions of infrastructure are configurable by deployment creators.

The Kubernetes worker is an example of a worker that uses this pattern.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#configuring-credentials","title":"Configuring Credentials","text":"

When executing flow runs within cloud services, workers will often need credentials to authenticate with those services. For example, a worker that executes flow runs in AWS Fargate will need AWS credentials. As a worker developer, you can use blocks to accept credentials configuration from the user.

For example, if you want to allow the user to configure AWS credentials, you can do so like this:

from pydantic import Field\nfrom prefect.workers.base import BaseJobConfiguration\nfrom prefect_aws import AwsCredentials\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    aws_credentials: AwsCredentials = Field(\n        default=None,\n        description=\"AWS credentials to use when creating AWS resources.\"\n    )\n

Users can create and assign a block to the aws_credentials attribute in the UI and the worker will use these credentials when interacting with AWS resources.
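
As a minimal sketch of how those credentials might then be used (assuming prefect-aws is installed; the block name my-credentials-block is an example), an authenticated boto3 session can be built from the block. Inside a worker's run method you would typically read the block from configuration.aws_credentials rather than loading it by name:

from prefect_aws import AwsCredentials\n\n# Load a block a user created in the UI and build an authenticated boto3 session from it.\ncredentials = AwsCredentials.load(\"my-credentials-block\")\nsession = credentials.get_boto3_session()\necs_client = session.client(\"ecs\")  # any boto3 client can be created from the session\n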

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-template-variables","title":"Worker Template Variables","text":"

Providing template variables for a base job template defines the fields that deployment creators can override per deployment. The work pool creator ultimately defines the template variables for a base job template, but the worker developer is able to define default template variables for the worker to make it easier to use.

Default template variables for a worker are defined by implementing the BaseVariables class. Like the BaseJobConfiguration class, the BaseVariables class has attributes that are common to all workers:

Attribute Description name The name to assign to the created execution environment. env Environment variables to set in the created execution environment. labels The labels assigned to the created execution environment for metadata purposes. command The command to use when starting a flow run.

Additional attributes can be added to the BaseVariables class to define additional template variables. For example, if you want to allow memory and CPU requests for your worker, you can do so like this:

from pydantic import Field\nfrom prefect.workers.base import BaseVariables\n\nclass MyWorkerTemplateVariables(BaseVariables):\n    memory_request: int = Field(\n            default=1024,\n            description=\"Memory allocation for the execution environment.\"\n        )\n    cpu_request: int = Field(\n            default=500, \n            description=\"CPU allocation for the execution environment.\"\n        )\n

When MyWorkerTemplateVariables is used in conjunction with MyWorkerConfiguration from the Customizing Configuration Attribute Templates section, the resulting base job template will look like this:

job_configuration:\nname: \"{{ name }}\"\nenv: \"{{ env }}\"\nlabels: \"{{ labels }}\"\ncommand: \"{{ command }}\"\nmemory: \"{{ memory_request }}Mi\"\ncpu: \"{{ cpu_request }}m\"\nvariables:\ntype: object\nproperties:\nname:\ntitle: Name\ndescription: Name given to infrastructure created by a worker.\ntype: string\nenv:\ntitle: Environment Variables\ndescription: Environment variables to set when starting a flow run.\ntype: object\nadditionalProperties:\ntype: string\nlabels:\ntitle: Labels\ndescription: Labels applied to infrastructure created by a worker.\ntype: object\nadditionalProperties:\ntype: string\ncommand:\ntitle: Command\ndescription: The command to use when starting a flow run. In most cases,\nthis should be left blank and the command will be automatically generated\nby the worker.\ntype: string\nmemory_request:\ntitle: Memory Request\ndescription: Memory allocation for the execution environment.\ntype: integer\ndefault: 1024\ncpu_request:\ntitle: CPU Request\ndescription: CPU allocation for the execution environment.\ntype: integer\ndefault: 500\n

Note that template variable classes are never used directly. Instead, they are used to generate a schema that is used to populate the variables section of a base job template and validate the template variables provided by the user.

We don't recommend using template variable classes within your worker implementation for validation purposes because the work pool creator ultimately defines the template variables. The configuration class should handle any necessary run-time validation.
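
A minimal sketch of such run-time validation on the configuration class, using a standard Pydantic validator (the 512 MiB floor is an arbitrary value chosen for illustration):

from pydantic import Field, validator\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: int = Field(\n        default=1024,\n        description=\"Memory allocation for the execution environment.\"\n    )\n\n    @validator(\"memory\")\n    def validate_memory(cls, value):\n        # Reject configurations the infrastructure could not actually run.\n        if value < 512:\n            raise ValueError(\"memory must be at least 512 MiB\")\n        return value\n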

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-implementation","title":"Worker Implementation","text":"

Workers set up execution environments using provided configuration. Workers also observe the execution environment as the flow run executes and report any crashes to the Prefect API.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#attributes","title":"Attributes","text":"

To implement a worker, you must implement the BaseWorker class and provide it with the following attributes:

Attribute Description Required type The type of the worker. Yes job_configuration The configuration class for the worker. Yes job_configuration_variables The template variables class for the worker. No _documentation_url Link to documentation for the worker. No _logo_url Link to a logo for the worker. No _description A description of the worker. No","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#methods","title":"Methods","text":"","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#run","title":"run","text":"

In addition to the attributes above, you must also implement a run method. The run method is called for each flow run the worker receives for execution from the work pool.

The run method has the following signature:

 async def run(\n        self, flow_run: FlowRun, configuration: BaseJobConfiguration, task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> BaseWorkerResult:\n        ...\n

The run method is passed: the flow run to execute, the execution environment configuration for the flow run, and a task status object that allows the worker to track whether the flow run was submitted successfully.

The run method must also return a BaseWorkerResult object. The BaseWorkerResult object returned contains information about the flow run execution. For the most part, you can implement the BaseWorkerResult with no modifications like so:

from prefect.workers.base import BaseWorkerResult\n\nclass MyWorkerResult(BaseWorkerResult):\n    \"\"\"Result returned by the MyWorker.\"\"\"\n

If you would like to return more information about a flow run, then additional attributes can be added to the BaseWorkerResult class.
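
For example, here is a sketch that surfaces extra metadata about the execution (the field names are illustrative, not part of the Prefect API):

from prefect.workers.base import BaseWorkerResult\n\nclass MyWorkerResult(BaseWorkerResult):\n    \"\"\"Result returned by MyWorker, extended with extra metadata.\"\"\"\n    # Hypothetical extra fields a worker could populate to aid debugging.\n    job_url: str = \"\"\n    region: str = \"\"\n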

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#kill_infrastructure","title":"kill_infrastructure","text":"

Workers must implement a kill_infrastructure method to support flow run cancellation. The kill_infrastructure method is called when a flow run is canceled and is passed an identifier for the infrastructure to tear down and the execution environment configuration for the flow run.

The infrastructure_pid passed to the kill_infrastructure method is the same identifier used to mark a flow run execution as started in the run method. The infrastructure_pid must be a string, but it can take on any format you choose.

The infrastructure_pid should contain enough information to uniquely identify the infrastructure created for a flow run when used with the job_configuration passed to the kill_infrastructure method. Examples of useful information include: the cluster name, the hostname, the process ID, the container ID, etc.
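
One common approach is to pack the identifying pieces into a single delimited string. Here is a minimal sketch; the cluster-name:job-id format is an assumption for illustration, not a Prefect requirement:

def _build_infrastructure_pid(cluster_name: str, job_id: str) -> str:\n    # Combine everything needed to locate the job again later.\n    return f\"{cluster_name}:{job_id}\"\n\ndef _parse_infrastructure_pid(infrastructure_pid: str) -> tuple[str, str]:\n    # Invert _build_infrastructure_pid inside kill_infrastructure.\n    cluster_name, job_id = infrastructure_pid.split(\":\", maxsplit=1)\n    return cluster_name, job_id\n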

If a worker cannot tear down infrastructure created for a flow run, the kill_infrastructure method should raise an InfrastructureNotFound or InfrastructureNotAvailable exception.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-implementation-example","title":"Worker Implementation Example","text":"

Below is an example of a worker implementation. This example is not intended to be a complete implementation but to illustrate the aforementioned concepts.

from typing import Optional\n\nimport anyio.abc\nfrom pydantic import Field\n\nfrom prefect.client.schemas import FlowRun\nfrom prefect.workers.base import BaseWorker, BaseWorkerResult, BaseJobConfiguration, BaseVariables\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: str = Field(\n            default=\"1024Mi\",\n            description=\"Memory allocation for the execution environment.\",\n            template=\"{{ memory_request }}Mi\"\n        )\n    cpu: str = Field(\n            default=\"500m\", \n            description=\"CPU allocation for the execution environment.\",\n            template=\"{{ cpu_request }}m\"\n        )\n\nclass MyWorkerTemplateVariables(BaseVariables):\n    memory_request: int = Field(\n            default=1024,\n            description=\"Memory allocation for the execution environment.\"\n        )\n    cpu_request: int = Field(\n            default=500, \n            description=\"CPU allocation for the execution environment.\"\n        )\n\nclass MyWorkerResult(BaseWorkerResult):\n    \"\"\"Result returned by the MyWorker.\"\"\"\n\nclass MyWorker(BaseWorker):\n    type = \"my-worker\"\n    job_configuration = MyWorkerConfiguration\n    job_configuration_variables = MyWorkerTemplateVariables\n    _documentation_url = \"https://example.com/docs\"\n    _logo_url = \"https://example.com/logo\"\n    _description = \"My worker description.\"\n\n    async def run(\n        self, flow_run: FlowRun, configuration: BaseJobConfiguration, task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> BaseWorkerResult:\n        # Create the execution environment and start execution\n        job = await self._create_and_start_job(configuration)\n\n        if task_status:\n            # Use a unique ID to mark the run as started. This ID is later used to tear down infrastructure\n            # if the flow run is cancelled.\n            task_status.started(job.id)\n\n        # Monitor the execution\n        job_status = await self._watch_job(job, configuration)\n\n        exit_code = job_status.exit_code if job_status else -1 # Get result of execution for reporting\n        return MyWorkerResult(\n            status_code=exit_code,\n            identifier=job.id,\n        )\n\n    async def kill_infrastructure(self, infrastructure_pid: str, configuration: BaseJobConfiguration) -> None:\n        # Tear down the execution environment\n        await self._kill_job(infrastructure_pid, configuration)\n

Most of the execution logic is omitted from the example above, but it shows that the typical order of operations in the run method is:

  1. Create the execution environment and start the flow run execution
  2. Mark the flow run as started via the passed task_status object
  3. Monitor the execution
  4. Get the execution's final status from the infrastructure and return a BaseWorkerResult object

To see other examples of worker implementations, see the ProcessWorker and KubernetesWorker implementations.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#integrating-with-the-prefect-cli","title":"Integrating with the Prefect CLI","text":"

Workers can be started via the Prefect CLI by providing the --type option to the prefect worker start CLI command. To make your worker type available via the CLI, it must be available at import time.

If your worker is in a package, you can add an entry point to your setup file in the following format:

entry_points={\n    \"prefect.collections\": [\n        \"my_package_name = my_worker_module\",\n    ]\n},\n

Prefect will discover this entry point and load the specified worker module, making the worker type available via the CLI.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/docker/","title":"Running flows with Docker","text":"

In the Deployments tutorial, we looked at creating configuration that enables flow runs to be created via the API, with flow code uploaded to a remotely accessible location.

In this guide, we will \"bake\" your code directly into a Docker image. Baking the code into the image simplifies storage and lets you deploy flow code without a storage block. Then, we'll configure the deployment so flow runs are executed in a Docker container based on that image. We'll run Docker locally, but you can extend this guide to run it on remote machines.

In this guide we'll:

","tags":["Docker","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/docker/#prerequisites","title":"Prerequisites","text":"

To run a deployed flow in a Docker container, you'll need the following:

Docker Desktop works fine for local testing if you don't already have Docker Engine configured in your environment.

Run a Prefect server

This guide assumes you're already running a Prefect server with prefect server start, as described in the Deployments tutorial.

If you shut down the server, you can start it again by opening another terminal session and starting the Prefect server with the prefect server start CLI command.

","tags":["Docker","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/docker/#storing-prefect-flow-code-in-a-docker-image","title":"Storing Prefect Flow Code in a Docker Image","text":"

First let's create a directory to work from, docker-tutorial.

In this directory, create a sub-directory named flows and place your flow script from the Deployments tutorial in it. In this example, the flow file is named docker-tutorial-flow.py.

The next file to add to the docker-tutorial directory is a requirements.txt. In this file, include all dependencies required by your docker-tutorial-flow.py script.
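
If you don't have the tutorial flow handy, a minimal placeholder flow works for following along. The sketch below needs nothing beyond Prefect itself (already included in the base image), so your requirements.txt only needs to list whatever your real flow requires:

# flows/docker-tutorial-flow.py\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef docker_tutorial_flow(name: str = \"world\"):\n    \"\"\"A tiny placeholder flow for the Docker guide.\"\"\"\n    print(f\"Hello, {name}!\")\n\n\nif __name__ == \"__main__\":\n    docker_tutorial_flow()\n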

Next, create a Dockerfile that builds a Docker image containing your flow code. This Dockerfile should look similar to the following:

FROM prefecthq/prefect:2-python3.9\nCOPY requirements.txt .\nRUN pip install -r requirements.txt --trusted-host pypi.python.org --no-cache-dir\nADD flows /opt/prefect/flows\n

Finally, we build this image by running:

docker build -t docker-tutorial-image .\n

This builds a Docker image in your local Docker image store with your Prefect flow code stored in the image.

","tags":["Docker","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/docker/#prefectyaml-file","title":"Prefect.yaml File","text":"

Now that you have created a Docker image that stores your Prefect flow code, you're going to make use of Prefect's deployment recipes. In your terminal, run:

prefect init --recipe docker\n

You will see a prompt to input values for the image name and tag; let's use:

image_name: docker-tutorial-image\ntag: latest\n

This will create a prefect.yaml file for us populated with some fields. By default, it will look like this:

# Welcome to your prefect.yaml file! You can use this file for storing and managing\n# configuration for deploying your flows. We recommend committing this file to source\n# control along with your flow code.\n\n# Generic metadata about this project\nname: docker-tutorial\nprefect-version: 2.10.16\n\n# build section allows you to manage and build docker images\nbuild:\n- prefect_docker.deployments.steps.build_docker_image:\nid: build_image\nrequires: prefect-docker>=0.3.0\nimage_name: docker-tutorial-image\ntag: latest\ndockerfile: auto\npush: true\n\n# push section allows you to manage if and how this project is uploaded to remote locations\npush: null\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect.deployments.steps.set_working_directory:\ndirectory: /opt/prefect/docker-tutorial\n\n# the deployments section allows you to provide configuration for deploying flows\ndeployments:\n- name: null\nversion: null\ntags: []\ndescription: null\nschedule: {}\nflow_name: null\nentrypoint: null\nparameters: {}\nwork_pool:\nname: null\nwork_queue_name: null\njob_variables:\nimage: '{{ build_image.image }}'\n

Once you have made any updates you would like to this prefect.yaml file, in your terminal run:

prefect deploy\n

The Prefect deployment wizard will walk you through the deployment experience to deploy your flow in either Prefect Cloud or your local Prefect Server. Once the flow is deployed, you can even save the configuration to your prefect.yaml file for faster deployment in the future.

","tags":["Docker","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/docker/#cleaning-up","title":"Cleaning up","text":"

When you're finished, just close the Prefect UI tab in your browser, and close the terminal sessions running the Prefect server and agent.

","tags":["Docker","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/helm-worker/","title":"Using Prefect Helm Chart for Prefect Workers","text":"

This guide will walk you through the process of deploying Prefect workers using the Prefect Helm Chart. We will also cover how to set up a Kubernetes secret for the API key and mount it as an environment variable.

","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/helm-worker/#prerequisites","title":"Prerequisites","text":"

Before you begin, ensure you have the following installed and configured on your machine:

  1. Helm: https://helm.sh/docs/intro/install/
  2. Kubernetes CLI (kubectl): https://kubernetes.io/docs/tasks/tools/install-kubectl/
  3. Access to a Kubernetes cluster (e.g., Minikube, GKE, AKS, EKS)
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/helm-worker/#step-1-add-prefect-helm-repository","title":"Step 1: Add Prefect Helm Repository","text":"

First, add the Prefect Helm repository to your Helm client:

helm repo add prefect https://prefecthq.github.io/prefect-helm\nhelm repo update\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/helm-worker/#step-2-create-a-namespace-for-prefect-worker","title":"Step 2: Create a Namespace for Prefect Worker","text":"

Create a new namespace in your Kubernetes cluster to deploy the Prefect worker:

kubectl create namespace prefect\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/helm-worker/#step-3-create-a-kubernetes-secret-for-the-api-key","title":"Step 3: Create a Kubernetes Secret for the API Key","text":"

Create a file named api-key.yaml with the following contents:

apiVersion: v1\nkind: Secret\nmetadata:\nname: prefect-api-key\nnamespace: prefect\ntype: Opaque\ndata:\nkey:  <base64-encoded-api-key>\n

Replace <base64-encoded-api-key> with your Prefect Cloud API key encoded in base64. The Helm chart looks for a secret with this name and schema by default; this can be overridden in the values.yaml.

You can use the following command to generate the base64-encoded value:

echo -n \"your-prefect-cloud-api-key\" | base64\n

Apply the api-key.yaml file to create the Kubernetes secret:

kubectl apply -f api-key.yaml\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/helm-worker/#step-4-configure-prefect-worker-values","title":"Step 4: Configure Prefect Worker Values","text":"

Create a values.yaml file to customize the Prefect worker configuration. Add the following contents to the file:

worker:\ncloudApiConfig:\naccountId: <target account ID>\nworkspaceId: <target workspace ID>\nconfig:\nworkPool: <target work pool name>\n

These settings will ensure that the worker connects to the proper account, workspace, and work pool.

View your Account ID and Workspace ID in your browser URL when logged into Prefect Cloud. For example: https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here.

","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/helm-worker/#step-5-install-prefect-worker-using-helm","title":"Step 5: Install Prefect Worker Using Helm","text":"

Now you can install the Prefect worker using the Helm chart with your custom values.yaml file:

helm install prefect-worker prefect/prefect-worker \\\n--namespace=prefect \\\n-f values.yaml\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/helm-worker/#step-6-verify-deployment","title":"Step 6: Verify Deployment","text":"

Check the status of your Prefect worker deployment:

kubectl get pods -n prefect\n

You should see the Prefect worker pod running.

","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/helm-worker/#conclusion","title":"Conclusion","text":"

You have now successfully deployed a Prefect worker using the Prefect Helm chart and configured the API key as a Kubernetes secret. The worker will now be able to communicate with Prefect Cloud and execute your Prefect flows.

","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/storage-guide/","title":"Where to Store Your Flow Code","text":"

When a flow runs, the execution environment needs access to its code. Flow code is not stored in a Prefect server database instance or Prefect Cloud. When deploying a flow, you have several flow code storage options.

","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#option-1-local-storage","title":"Option 1: Local storage","text":"

Local flow code storage is often used with a Local Subprocess work pool for initial experimentation.

To create a deployment with local storage and a Local Subprocess work pool, do the following:

  1. Run prefect deploy from the root of the directory containing your flow code.
  2. Select that you want to create a new deployment, select the flow code entrypoint, and name your deployment.
  3. Select a process work pool.

You are then shown the location that your flow code will be fetched from when a flow is run. For example:

Your Prefect workers will attempt to load your flow from: \n/my-path/my-flow-file.py. To see more options for managing your flow's code, run:\n\n    $ prefect project recipes ls\n

When deploying a flow to production, you most likely want code to run with infrastructure-specific configuration. The flow code storage options shown below are recommended for production deployments.

","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#option-2-git-based-storage","title":"Option 2: Git-based storage","text":"

Git-based version control platforms are popular locations for code storage. They provide redundancy, version control, and easier collaboration.

GitHub is the most popular cloud-based repository hosting provider. GitLab and Bitbucket are other popular options. Prefect supports each of these platforms.

","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#creating-a-deployment-with-git-based-storage","title":"Creating a deployment with git-based storage","text":"

Run prefect deploy from within a git repository and create a new deployment. You will see a series of prompts. Select that you want to create a new deployment, select the flow code entrypoint, and name your deployment.

Prefect detects that you are in a git repository and asks if you want to store your flow code in a git repository. Select \"y\" and you will be prompted to confirm the URL of your git repository and the branch name, as in the example below:

? Your Prefect workers will need access to this flow's code in order to run it. \nWould you like your workers to pull your flow code from its remote repository when running this flow? [y/n] (y): \n? Is https://github.com/my_username/my_repo.git the correct URL to pull your flow code from? [y/n] (y): \n? Is main the correct branch to pull your flow code from? [y/n] (y): \n? Is this a private repository? [y/n]: y\n

In this example, the git repository is hosted on GitHub. If you are using Bitbucket or GitLab, the URL will match your provider. If the repository is public, enter \"n\" and you are on your way.

If the repository is private, you can enter a token to access your private repository. This token will be saved in an encrypted Prefect Secret block.

? Please enter a token that can be used to access your private repository. This token will be saved as a Secret block via the Prefect API: \"123_abc_this_is_my_token\"\n

Verify that you have a new Secret block in your active workspace named in the format \"deployment-my-deployment-my-flow-name-repo-token\".

Creating access tokens differs for each provider.

GitHubBitbucketGitLab

We recommend using HTTPS with fine-grained Personal Access Tokens so that you can limit access by repository. See the GitHub docs for Personal Access Tokens (PATs).

Under Your Profile->Developer Settings->Personal access tokens->Fine-grained token choose Generate New Token and fill in the required fields. Under Repository access choose Only select repositories and grant the token permissions for Contents.

We recommend using HTTPS with Repository, Project, or Workspace Access Tokens.

You can create a Repository Access Token with Scopes->Repositories->Read.

We recommend using HTTPS with Project Access Tokens.

In your repository in the GitLab UI, select Settings->Repository->Project Access Tokens and check read_repository under Select scopes.

If you want to configure a Secret block ahead of time for use when deploying a prefect.yaml file, create the block via code or the Prefect UI and reference it like this:

pull:\n- prefect.deployments.steps.git_clone:\nrepository: https://bitbucket.org/org/my-private-repo.git\naccess_token: \"{{ prefect.blocks.secret.my-block-name }}\"\n

Alternatively, you can create a Credentials block ahead of time and reference it in the prefect.yaml pull step.

GitHubBitbucketGitLab
  1. Install the Prefect-Github library with pip install -U prefect-github
  2. Register the blocks in that library to make them available on the server with prefect block register -m prefect_github.
  3. Create a GitHub Credentials block via code or the Prefect UI and reference it as shown above.
  4. In addition to the block name, most users will need to fill in the GitHub Username and GitHub Personal Access Token fields.
pull:\n- prefect.deployments.steps.git_clone:\nrepository: https://github.com/discdiver/my-private-repo.git\ncredentials: \"{{ prefect.blocks.github-credentials.my-block-name }}\"\n
  1. Install the relevant library with pip install -U prefect-bitbucket
  2. Register the blocks in that library with prefect block register -m prefect_bitbucket
  3. Create a Bitbucket credentials block via code or the Prefect UI and reference it as shown above.
  4. In addition to the block name, most users will need to fill in the Bitbucket Username and Bitbucket Personal Access Token fields.
pull:\n- prefect.deployments.steps.git_clone:\nrepository: https://bitbucket.org/org/my-private-repo.git\ncredentials: \"{{ prefect.blocks.bitbucket-credentials.my-block-name }}\"\n
  1. Install the relevant library with pip install -U prefect-gitlab
  2. Register the blocks in that library with prefect block register -m prefect_gitlab
  3. Create a GitLab credentials block via code or the Prefect UI and reference it as shown above.
  4. In addition to the block name, most users will need to fill in the GitLab Username and GitLab Personal Access Token fields.
pull:\n- prefect.deployments.steps.git_clone:\nrepository: https://gitlab.com/org/my-private-repo.git\ncredentials: \"{{ prefect.blocks.gitlab-credentials.my-block-name }}\"\n

Push your code

When you make a change to your code, Prefect does not push your code to your git-based version control platform. You need to push your code manually or as part of your CI/CD pipeline. This design decision is intentional, to avoid confusion about the git history and push process.

","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#option-3-docker-based-storage","title":"Option 3: Docker-based storage","text":"

Another popular way to store your flow code is to include it in a Docker image. The following work pools use Docker containers, so the flow code can be directly baked into the image:

CI/CD may not require push or pull steps

You don't need push or pull steps in the prefect.yaml file if using CI/CD to build a Docker image outside of Prefect. Instead, the work pool can reference the image directly.

","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#option-4-cloud-provider-storage","title":"Option 4: Cloud-provider storage","text":"

You can store your code in an AWS S3 bucket, Azure Blob Storage container, or GCP GCS bucket and specify the destination directly in the push and pull steps of your prefect.yaml file.

To create a templated prefect.yaml file run prefect init and select the recipe for the applicable cloud-provider storage. Below are the recipe options and the relevant portions of the prefect.yaml file.

AWS S3 bucketAzure Blob Storage containerGCP GCS bucket

Choose s3 as the recipe and enter the bucket name when prompted.

# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_aws.deployments.steps.push_to_s3:\nid: push_code\nrequires: prefect-aws>=0.3.4\nbucket: my-bucket\nfolder: my-folder\ncredentials: \"{{ prefect.blocks.aws-credentials.my-credentials-block }}\" # if private\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect_aws.deployments.steps.pull_from_s3:\nid: pull_code\nrequires: prefect-aws>=0.3.4\nbucket: '{{ push_code.bucket }}'\nfolder: '{{ push_code.folder }}'\ncredentials: \"{{ prefect.blocks.aws-credentials.my-credentials-block }}\" # if private \n

If the bucket requires authentication to access it, you can do the following:

  1. Install the Prefect-AWS library with pip install -U prefect-aws
  2. Register the blocks in Prefect-AWS with prefect block register -m prefect_aws
  3. Create a user with a role with read and write permissions to access the bucket. If using the UI, create an access key pair with IAM->Users->Security credentials->Access keys->Create access key. Choose Use case->Other and then copy the Access key and Secret access key values.
  4. Create an AWS Credentials block via code or the Prefect UI. In addition to the block name, most users will fill in the AWS Access Key ID and AWS Access Key Secret fields.
  5. Reference the block as shown in the push and pull steps above.

Choose azure as the recipe and enter the container name when prompted.

# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_azure.deployments.steps.push_to_azure_blob_storage:\nid: push_code\nrequires: prefect-azure>=0.2.8\ncontainer: my-prefect-azure-container\nfolder: my-folder\ncredentials: \"{{ prefect.blocks.azure-blob-storage-credentials.my-credentials-block }}\" # if private\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect_azure.deployments.steps.pull_from_azure_blob_storage:\nid: pull_code\nrequires: prefect-azure>=0.2.8\ncontainer: '{{ push_code.container }}'\nfolder: '{{ push_code.folder }}'\ncredentials: \"{{ prefect.blocks.azure-blob-storage-credentials.my-credentials-block }}\" # if private\n

If the blob requires authentication to access it, you can do the following:

  1. Install the Prefect-Azure library with pip install -U prefect-azure
  2. Register the blocks in Prefect-Azure with prefect block register -m prefect_azure
  3. Create an access key for a role with sufficient (read and write) permissions to access the blob. A connection string that will contain all needed information can be created in the UI under Storage Account->Access keys.
  4. Create an Azure Blob Storage Credentials block via code or the Prefect UI. Enter a name for the block and paste the connection string into the Connection String field.
  5. Reference the block as shown in the push and pull steps above.

Choose gcs as the recipe and enter the bucket name when prompted.

# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_gcp.deployment.steps.push_to_gcs:\nid: push_code\nrequires: prefect-gcp>=0.4.3\nbucket: my-bucket\nfolder: my-folder\ncredentials: \"{{ prefect.blocks.gcp-credentials.my-credentials-block }}\" # if private \n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect_gcp.deployment.steps.pull_from_gcs:\nid: pull_code\nrequires: prefect-gcp>=0.4.3\nbucket: '{{ push_code.bucket }}'\nfolder: '{{ push_code.folder }}'\ncredentials: \"{{ prefect.blocks.gcp-credentials.my-credentials-block }}\" # if private \n

If the bucket requires authentication to access it, you can do the following:

  1. Install the Prefect-GCP library with pip install -U prefect-gcp
  2. Register the blocks in Prefect-GCP with prefect block register -m prefect_gcp
  3. Create a service account in GCP for a role with read and write permissions to access the bucket contents. If using the GCP console, go to IAM & Admin->Service accounts->Create service account. After choosing a role with the required permissions, see your service account and click on the three dot menu in the Actions column. Select Manage Keys->ADD KEY->Create new key->JSON. Download the JSON file.
  4. Create a GCP Credentials block via code or the Prefect UI. Enter a name for the block and paste the entire contents of the JSON key file into the Service Account Info field.
  5. Reference the block as shown in the push and pull steps above.

Another option for authentication is for the worker to have access to the storage location at runtime via SSH keys.

Alternatively, you can inject environment variables into your deployment like this example that uses an environment variable named CUSTOM_FOLDER:

 push:\n- prefect_gcp.deployment.steps.push_to_gcs:\nid: push_code\nrequires: prefect-gcp>=0.4.3\nbucket: my-bucket\nfolder: '{{ $CUSTOM_FOLDER }}'\n
","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#including-and-excluding-files-from-storage","title":"Including and excluding files from storage","text":"

By default, Prefect uploads all files in the current folder to the configured storage location when you create a deployment.

When using a git repository, Docker image, or cloud-provider storage location, you may want to exclude certain files or directories.

  - If you are familiar with git you are likely familiar with the .gitignore file.
  - If you are familiar with Docker you are likely familiar with the .dockerignore file.
  - For cloud-provider storage the .prefectignore file serves the same purpose and follows a similar syntax as those files. So an entry of *.pyc will exclude all .pyc files from upload.

","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#other-code-storage-creation-methods","title":"Other code storage creation methods","text":"

In earlier versions of Prefect storage blocks were the recommended way to store flow code. Storage blocks are still supported, but not recommended.

As shown above, repositories can be referenced directly through interactive prompts with prefect deploy or in a prefect.yaml. When authentication is needed, Secret or Credential blocks can be referenced, and in some cases created automatically through interactive deployment creation prompts.

","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#next-steps","title":"Next steps","text":"

You've seen options for where to store your flow code.

We recommend using Docker-based storage or git-based storage for your production deployments.

Check out more guides to reach your goals with Prefect.

","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"host/","title":"Hosting a Prefect server","text":"

After you install Prefect, you have a Python SDK client that can communicate with Prefect Cloud, the platform hosted by Prefect. You also have an API server backed by a database, and a UI.

In this section you'll learn how to host your own Prefect server.

Spin up a local Prefect server UI with the prefect server start CLI command in the terminal:

$ prefect server start\n

Open the URL for the Prefect server UI (http://127.0.0.1:4200 by default) in a browser.

Shut down the Prefect server with ctrl + c in the terminal.","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#differences-between-a-prefect-server-and-prefect-cloud","title":"Differences between a Prefect server and Prefect Cloud","text":"

The self-hosted Prefect server and Prefect Cloud share a common set of features. Prefect Cloud also includes the following features:

You can read more about Prefect Cloud in the Cloud section.

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#configuring-a-prefect-server","title":"Configuring a Prefect Server","text":"

Go to your terminal session and run this command to set the API URL to point to a Prefect server instance:

$ prefect config set PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n

PREFECT_API_URL required when running Prefect inside a container

You must set the API server address to use Prefect within a container, such as a Docker container.

You can save the API server address in a Prefect profile. Whenever that profile is active, the API endpoint will be at that address.
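
For example, you could keep the server address in a dedicated profile (the profile name here is illustrative):

prefect profile create my-server\nprefect profile use my-server\nprefect config set PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n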

See Profiles & Configuration for more information on profiles and configurable Prefect settings.

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#prefect-database","title":"Prefect Database","text":"

The Prefect database persists data to track the state of your flow runs and related Prefect concepts, including:

Currently Prefect supports the following databases:

  - SQLite (the default)
  - PostgreSQL

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#using-the-database","title":"Using the database","text":"

A local SQLite database is the default database and is configured upon Prefect installation. The database is located at ~/.prefect/prefect.db by default.

To reset your database, run the CLI command:

prefect server database reset -y\n

This command will clear all data and reapply the schema.

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#database-settings","title":"Database settings","text":"

Prefect provides several settings for configuring the database. Here are the default settings:

PREFECT_API_DATABASE_CONNECTION_URL='sqlite+aiosqlite:///${PREFECT_HOME}/prefect.db'\nPREFECT_API_DATABASE_ECHO='False'\nPREFECT_API_DATABASE_MIGRATE_ON_START='True'\nPREFECT_API_DATABASE_PASSWORD='None'\n

You can save a setting to your active Prefect profile with prefect config set.
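
For example, to turn on SQL statement logging using one of the settings listed above:

prefect config set PREFECT_API_DATABASE_ECHO='True'\n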

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#configuring-a-postgresql-database","title":"Configuring a PostgreSQL database","text":"

To connect Prefect to a PostgreSQL database, set the following connection URL setting:

prefect config set PREFECT_API_DATABASE_CONNECTION_URL=\"postgresql+asyncpg://postgres:yourTopSecretPassword@localhost:5432/prefect\"\n

The above connection string assumes that:

  - you have a postgres user whose password is yourTopSecretPassword
  - your PostgreSQL server is running on localhost and listening on the default port 5432
  - a database named prefect exists on that server

To quickly start a PostgreSQL instance to use as your Prefect database, run the following command to start a Docker container running PostgreSQL:

docker run -d --name prefect-postgres -v prefectdb:/var/lib/postgresql/data -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=yourTopSecretPassword -e POSTGRES_DB=prefect postgres:latest\n

The above command:

  - starts a Docker container named prefect-postgres from the latest postgres image
  - mounts a Docker volume named prefectdb so the database files persist across container restarts
  - exposes PostgreSQL on port 5432
  - creates a postgres user with the password yourTopSecretPassword and a database named prefect

You can inspect your profile to be sure that the environment variable has been set properly:

prefect config view --show-sources\n

Start the Prefect server, and from now on it will use your PostgreSQL database instance:

prefect server start\n
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#in-memory-database","title":"In-memory database","text":"

One of the benefits of SQLite is in-memory database support.

To use an in-memory SQLite database, set the following environment variable:

prefect config set PREFECT_API_DATABASE_CONNECTION_URL=\"sqlite+aiosqlite:///file::memory:?cache=shared&uri=true&check_same_thread=false\"\n

Use the in-memory database for testing only

The in-memory database is only supported by Prefect for testing purposes and is not compatible with multiprocessing.

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#database-versions","title":"Database versions","text":"

The following database versions are required for use with Prefect:

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#migrations","title":"Migrations","text":"

Prefect uses Alembic to manage database migrations. Alembic is a database migration tool for usage with the SQLAlchemy Database Toolkit for Python. Alembic provides a framework for generating and applying schema changes to a database.

To apply migrations to your database you can run the following commands:

To upgrade:

prefect server database upgrade -y\n

To downgrade:

prefect server database downgrade -y\n

You can use the -r flag to specify a particular migration version to upgrade or downgrade to. For example, to downgrade to the previous migration version, you can run:

prefect server database downgrade -y -r -1\n

or to downgrade to a specific revision:

prefect server database downgrade -y -r d20618ce678e\n

See the contributing docs for information on how to create new database migrations.

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#notifications","title":"Notifications","text":"

When you use Prefect Cloud you gain access to a hosted platform with Workspace & User controls, Events, and Automations. Prefect Cloud has an option for automation notifications. The more limited Notifications option is provided for the self-hosted Prefect server.

Notifications enable you to set up alerts that are sent when a flow enters any state you specify. When your flow and task runs change state, Prefect notes the state change and checks whether the new state matches any notification policies. If it does, a new notification is queued.

Prefect supports sending notifications via:

Notifications in Prefect Cloud

Prefect Cloud uses the robust Automations interface to enable notifications related to flow run state changes and work queue health.

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#configure-notifications","title":"Configure notifications","text":"

To configure a notification in a Prefect server, go to the Notifications page and select Create Notification or the + button.

Notifications are structured just as you would describe them to someone. You can choose:

For email notifications (supported on Prefect Cloud only), the configuration requires email addresses to which the message is sent.

For Slack notifications, the configuration requires webhook credentials for your Slack and the channel to which the message is sent.

For example, a notification that sends a Slack message when any flow with a daily-etl tag fails will read:

If a run of any flow with daily-etl tag enters a failed state, send a notification to my-slack-webhook

When the conditions of the notification are triggered, you\u2019ll receive a message:

The fuzzy-leopard run of the daily-etl flow entered a failed state at 22-06-27 16:21:37 EST.

On the Notifications page you can pause, edit, or delete any configured notification.

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"integrations/","title":"Integrations","text":"

Prefect integrations are organized into collections of pre-built tasks, flows, blocks and more that are installable as PyPI packages.

Airbyte

Maintained by Prefect

Alert

Maintained by Khuyen Tran

AWS

Maintained by Prefect

Azure

Maintained by Prefect

Bitbucket

Maintained by Prefect

Census

Maintained by Prefect

CubeJS

Maintained by Alessandro Lollo

Dask

Maintained by Prefect

Databricks

Maintained by Prefect

dbt

Maintained by Prefect

Docker

Maintained by Prefect

Earthdata

Maintained by Giorgio Basile

Email

Maintained by Prefect

Firebolt

Maintained by Prefect

Fivetran

Maintained by Fivetran

Fugue

Maintained by The Fugue Development Team

GCP

Maintained by Prefect

GitHub

Maintained by Prefect

GitLab

Maintained by Prefect

Google Sheets

Maintained by Stefano Cascavilla

Great Expectations

Maintained by Prefect

HashiCorp Vault

Maintained by Pavel Chekin

Hex

Maintained by Prefect

Hightouch

Maintained by Prefect

Jupyter

Maintained by Prefect

Kubernetes

Maintained by Prefect

KV

Maintained by Michael Adkins

MetricFlow

Maintained by Alessandro Lollo

Monday

Maintained by Prefect

MonteCarlo

Maintained by Prefect

OpenAI

Maintained by Prefect

OpenMetadata

Maintained by Prefect

Ray

Maintained by Prefect

Shell

Maintained by Prefect

Sifflet

Maintained by Sifflet and Alessandro Lollo

Slack

Maintained by Prefect

Snowflake

Maintained by Prefect

Soda Core

Maintained by Soda and Alessandro Lollo

Spark on Kubernetes

Maintained by Manoj Babu Katragadda

SQLAlchemy

Maintained by Prefect

Stitch

Maintained by Alessandro Lollo

Transform

Maintained by Alessandro Lollo

Twitter

Maintained by Prefect

","tags":["tasks","flows","blocks","collections","task library","integrations","Airbyte","Alert","AWS","Azure","Bitbucket","Census","CubeJS","Dask","Databricks","dbt","Docker","Earthdata","Email","Firebolt","Fivetran","Fugue","GCP","GitHub","GitLab","Google Sheets","Great Expectations","HashiCorp Vault","Hex","Hightouch","Jupyter","Kubernetes","KV","MetricFlow","Monday","MonteCarlo","OpenAI","OpenMetadata","Ray","Shell","Sifflet","Slack","Snowflake","Soda Core","Spark on Kubernetes","SQLAlchemy","Stitch","Transform","Twitter"],"boost":2},{"location":"integrations/contribute/","title":"Contribute","text":"

We welcome contributors! You can help contribute blocks and integrations by following these steps.

","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#contributing-blocks","title":"Contributing Blocks","text":"

Building your own custom block is simple!

  1. Subclass from Block.
  2. Add a description alongside an Attributes and Example section in the docstring.
  3. Set a _logo_url to point to a relevant image.
  4. Create the pydantic.Fields of the block with a type annotation, default or default_factory, and a short description about the field.
  5. Define the methods of the block.

For example, this is how the Secret block is implemented:

from pydantic import Field, SecretStr\nfrom prefect.blocks.core import Block\n\nclass Secret(Block):\n    \"\"\"\n    A block that represents a secret value. The value stored in this block will be obfuscated when\n    this block is logged or shown in the UI.\n\n    Attributes:\n        value: A string value that should be kept secret.\n\n    Example:\n        ```python\n        from prefect.blocks.system import Secret\n        secret_block = Secret.load(\"BLOCK_NAME\")\n\n        # Access the stored secret\n        secret_block.get()\n        ```\n    \"\"\"\n\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5uUmyGBjRejYuGTWbTxz6E/3003e1829293718b3a5d2e909643a331/image8.png?h=250\"\n\n    value: SecretStr = Field(\n        default=..., description=\"A string value that should be kept secret.\"\n    )  # ... indicates it's a required field\n\n    def get(self):\n        return self.value.get_secret_value()\n

To view the block in Prefect Cloud or the Prefect server UI, register it.
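
For example, if the block subclass above were saved in a file, you could register it from that file with the CLI (the file name is a placeholder):

prefect block register --file my_secret_block.py\n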

","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#contributing-integrations","title":"Contributing Integrations","text":"

Anyone can create and share a Prefect Integration, and we encourage anyone interested in creating an integration to do so!

","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#generate-a-project","title":"Generate a project","text":"

To help you get started with your integration, we've created a template that gives you the tools you need to create and publish your integration.

Use the Prefect Integration template to get started creating an integration with a bootstrapped project!

","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#list-a-project-in-the-integrations-catalog","title":"List a project in the Integrations Catalog","text":"

To list your integration in the Prefect Integrations Catalog, submit a PR to the Prefect repository adding a file to the docs/integrations/catalog directory with details about your integration. Please use TEMPLATE.yaml in that folder as a guide.

","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#contribute-fixes-or-enhancements-to-integrations","title":"Contribute fixes or enhancements to Integrations","text":"

If you'd like to help fix an issue or add a feature to any of our Integrations, please propose changes through a pull request from a fork of the repository.

  1. Fork the repository
  2. Clone the forked repository
  3. Install the repository and its dependencies:
    pip install -e \".[dev]\"\n
  4. Make desired changes
  5. Add tests
  6. Insert an entry to the Integration's CHANGELOG.md
  7. Install pre-commit to perform quality checks prior to commit:
    pre-commit install\n
  8. git commit, git push, and create a pull request
","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/usage/","title":"Using Integrations","text":"","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#installing-an-integration","title":"Installing an Integration","text":"

Install the Integration via pip.

For example, to use prefect-aws:

pip install prefect-aws\n
","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#registering-blocks-from-an-integration","title":"Registering Blocks from an Integration","text":"

Once the Prefect Integration is installed, register the blocks within the integration to view them in the Prefect Cloud UI:

For example, to register the blocks available in prefect-aws:

prefect block register -m prefect_aws\n

Updating blocks from an integration

If you install an updated Prefect integration that adds fields to a block type, you will need to re-register that block type.

Loading a block in code

To use the load method on a Block, you must already have a block document saved either through code or through the Prefect UI.
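
For example, a minimal sketch of saving a block document in code so that it can be loaded later; the credential values and block name are placeholders:

from prefect_aws import AwsCredentials\n\n# Save a block document once; it can then be loaded by name elsewhere\nAwsCredentials(\n    aws_access_key_id=\"...\",  # placeholder\n    aws_secret_access_key=\"...\",  # placeholder\n).save(\"MY_BLOCK_NAME\")\n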

Learn more about Blocks here!

","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#using-tasks-and-flows-from-an-integration","title":"Using Tasks and Flows from an Integration","text":"

Integrations also contain pre-built tasks and flows that can be imported and called within your code.

As an example, to read a secret from AWS Secrets Manager with the read_secret task:

from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.secrets_manager import read_secret\n\n@flow\ndef connect_to_database():\n    aws_credentials = AwsCredentials.load(\"MY_BLOCK_NAME\")\n    secret_value = read_secret(\n        secret_name=\"db_password\",\n        aws_credentials=aws_credentials\n    )\n\n    # Use secret_value to connect to a database\n
","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#customizing-tasks-and-flows-from-an-integration","title":"Customizing Tasks and Flows from an Integration","text":"

To customize the settings of a task or flow pre-configured in a collection, use with_options:

from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion\n\ncustom_run_dbt_cloud_job = trigger_dbt_cloud_job_run_and_wait_for_completion.with_options(\n    name=\"Run My DBT Cloud Job\",\n    retries=2,\n    retry_delay_seconds=10\n)\n\n@flow\ndef run_dbt_job_flow():\n    run_result = custom_run_dbt_cloud_job(\n        dbt_cloud_credentials=DbtCloudCredentials.load(\"my-dbt-cloud-credentials\"),\n        job_id=1\n    )\n\nrun_dbt_job_flow()\n
","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#recipes-and-tutorials","title":"Recipes and Tutorials","text":"

To learn more about how to use Integrations, check out Prefect recipes on GitHub. These recipes provide examples of how Integrations can be used in various scenarios.

","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"recipes/recipes/","title":"Prefect Recipes","text":"

Prefect recipes are common, extensible examples for setting up Prefect in your execution environment with ready-made ingredients such as Dockerfiles, Terraform files, and GitHub Actions.

Recipes are useful when you are looking for tutorials on how to deploy an agent, use event-driven flows, set up unit testing, and more.

The following are Prefect recipes specific to Prefect 2. You can find a full repository of recipes at https://github.com/PrefectHQ/prefect-recipes and additional recipes at Prefect Discourse.

","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#recipe-catalog","title":"Recipe catalog","text":"Agent on Azure with Kubernetes

Configure Prefect on Azure with Kubernetes, running a Prefect agent to execute deployment flow runs.

Maintained by Prefect

This recipe uses:

Agent on ECS Fargate with AWS CLI

Run a Prefect 2 agent on ECS Fargate using the AWS CLI.

Maintained by Prefect

This recipe uses:

Agent on ECS Fargate with Terraform

Run a Prefect 2 agent on ECS Fargate using Terraform.

Maintained by Prefect

This recipe uses:

Agent on an Azure VM

Set up an Azure VM and run a Prefect agent.

Maintained by Prefect

This recipe uses:

Flow Deployment with GitHub Actions

Deploy a Prefect flow with storage and infrastructure blocks, update and push Docker image to container registry.

Maintained by Prefect

This recipe uses:

Flow Deployment with GitHub Storage and Docker Infrastructure

Create a deployment with GitHub as a storage and Docker Container as an infrastructure

Maintained by Prefect

This recipe uses:

Prefect server on an AKS Cluster

Deploy a Prefect server to an Azure Kubernetes Service (AKS) Cluster with Azure Blob Storage.

Maintained by Prefect

This recipe uses:

Serverless Prefect with AWS Chalice

Execute Prefect flows in an AWS Lambda function managed by Chalice.

Maintained by Prefect

This recipe uses:

Serverless Workflows with ECSTask Blocks

Deploy a Prefect agent to AWS ECS Fargate using GitHub Actions and ECSTask infrastructure blocks.

Maintained by Prefect

This recipe uses:

","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#contributing-recipes","title":"Contributing recipes","text":"

We're always looking for new recipe contributions! See the Prefect Recipes repository for details on how you can add your Prefect recipe, share best practices with fellow Prefect users, and earn some swag.

Prefect recipes provide a vital cookbook where users can find helpful code examples and, when appropriate, common steps for specific Prefect use cases.

We love recipes from anyone who has example code that another Prefect user can benefit from (e.g. a Prefect flow that loads data into Snowflake).

Have a blog post, Discourse article, or tutorial you\u2019d like to share as a recipe? All submissions are welcome. Clone the prefect-recipes repo, create a branch, add a link to your recipe to the README, and submit a PR. Have more questions? Read on.

","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-is-a-recipe","title":"What is a recipe?","text":"

A Prefect recipe is like a cookbook recipe: it tells you what you need \u2014 the ingredients \u2014 and some basic steps, but assumes you can put the pieces together. Think of the Hello Fresh meal experience, but for dataflows.

A tutorial, on the other hand, is Julia Child holding your hand through the entire cooking process: explaining each ingredient and procedure, demonstrating best practices, pointing out potential problems, and generally making sure you can\u2019t stray from the happy path to a delicious meal.

We love Julia, and we love tutorials. But we don\u2019t expect that a Prefect recipe should handhold users through every step and possible contingency of a solution. A recipe can start from an expectation of more expertise and problem-solving ability on the part of the reader.

To see an example of a high quality recipe, check out Serverless with AWS Chalice. This recipe includes all of the elements we like to see.

","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#steps-to-add-your-recipe","title":"Steps to add your recipe","text":"

Here\u2019s our guide to creating a recipe:

# Clone the repository\ngit clone git@github.com:PrefectHQ/prefect-recipes.git\ncd prefect-recipes\n\n# Create and checkout a new branch\ngit checkout -b new_recipe_branch_name\n
  1. Add your recipe. Your code may simply be a copy/paste of a single Python file or an entire folder. Unsure of where to add your file or folder? Just add it under the flows-advanced/ folder. A Prefect Recipes maintainer will help you find the best place for your recipe. Just want to direct others to a project you made, whether it be a repo or a blog post? Simply link to it in the Prefect Recipes README!
  2. (Optional) Write a README.
  3. Include a dependencies file, if applicable.
  4. Push your code and make a PR to the repository.

That\u2019s it!

","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-makes-a-good-recipe","title":"What makes a good recipe?","text":"

Every recipe is useful, as other Prefect users can adapt the recipe to their needs. Particularly good ones help a Prefect user bake a great dataflow solution! Take a look at the prefect-recipes repo to see some examples.

","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-are-the-common-ingredients-of-a-good-recipe","title":"What are the common ingredients of a good recipe?","text":"","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-are-some-tips-for-a-good-recipe-readme","title":"What are some tips for a good recipe README?","text":"

A thoughtful README can take a recipe from good to great. Here are some best practices that we\u2019ve found make for a great recipe README:

","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#next-steps","title":"Next steps","text":"

We hope you\u2019ll feel comfortable sharing your Prefect solutions as recipes in the prefect-recipes repo. Collaboration and knowledge sharing are defining attributes of our Prefect Community!

Have questions about sharing or using recipes? Reach out on our active Prefect Slack Community!

Happy engineering!

","tags":["recipes","best practices","examples"],"boost":2},{"location":"tutorial/","title":"Tutorial Overview","text":"

This tutorial provides a step-by-step walk-through of Prefect core concepts and instructions on how to use them.

By the end of this tutorial you will have:

  1. Created a Flow
  2. Added Tasks to It
  3. Created a Work Pool
  4. Deployed a Worker
  5. Deployed the Flow
  6. Run the Flow on the Worker

If you're looking for examples of more advanced operations (like deploying on Kubernetes), check out Prefect's guides.

","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/#prerequisites","title":"Prerequisites","text":"
  1. Before you start, make sure you have Python installed, then install Prefect: pip install -U prefect

    1. See the install guide for more detailed instructions.
  2. Create a GitHub repository for your tutorial; let's call it prefect-tutorial.

  3. This tutorial requires a Prefect server instance, so sign up for a forever-free Prefect Cloud account or, alternatively, self-host a Prefect server.

","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/#what-is-prefect","title":"What is Prefect?","text":"

Prefect orchestrates workflows \u2014 it simplifies the creation, scheduling, and monitoring of complex data pipelines. With Prefect, you define workflows as Python code and let it handle the rest.

Prefect also provides error handling, retry mechanisms, and a user-friendly dashboard for monitoring. It's the easiest way to transform any Python function into a unit of work that can be observed and orchestrated.

Just bring your Python code, sprinkle in a few decorators, and go!

","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/#reference-material","title":"Reference Material","text":"

If you've never used Prefect before, check out the core concepts docs. Many of these concepts will be introduced in the tutorial, however, the concept docs provide a more comprehensive overview of each concept.

","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/deployments/","title":"Deploying Flows","text":"","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#why-deploy","title":"Why Deploy?","text":"

One of the most common reasons to use a tool like Prefect is scheduling. You want your flows running on production infrastructure in a consistent and predictable way. Up to this point, we\u2019ve demonstrated running Prefect flows as scripts, but this means you have been the one triggering flow runs. In order to schedule flow runs or trigger them based on events or certain conditions, you\u2019ll need to deploy them.

A deployed flow can be interacted with in additional ways:

","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#what-is-a-deployment","title":"What is a Deployment?","text":"

Deploying a flow is the act of specifying where, when, and how it will run. This information is encapsulated and sent to Prefect as a Deployment which contains the crucial metadata needed for orchestration. Deployments elevate workflows from functions that you call manually to API-managed entities.

Attributes of a deployment include (but are not limited to):

Before you build your first deployment, it\u2019s helpful to understand how Prefect configures flow run infrastructure. This means setting up a work pool and a worker to enable your flows to run on your infrastructure.

","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#why-work-pools-and-workers","title":"Why Work Pools and Workers?","text":"

Running Prefect flows locally is great for testing and development purposes. But for production settings, Prefect work pools and workers allow you to run flows in the environments best suited to their execution.

Workers and work pools bridge the Prefect orchestration layer with the infrastructure the flows are actually executed on.

Work pools prioritize flow runs and respond to polling from workers. Workers are lightweight processes that run in the environment where flow runs are executed.

You can create and configure work pools within the Prefect UI.

Work Pools:

graph TD\n\n    subgraph your_infra[\"Your Execution Environment\"]\n        worker[\"Worker\"]\n                subgraph flow_run_infra[Flow Run Infra]\n                    flow_run((\"Flow Run\"))\n                end\n\n    end\n\n    subgraph api[\"Prefect API\"]\n                Deployment --> |assigned to| work_pool\n        work_pool([\"Work Pool\"])\n    end\n\n    worker --> |polls| work_pool\n    worker --> |creates| flow_run_infra

Security Note:

Prefect provides execution through the hybrid model, which allows you to deploy workflows that run in the environments best suited to their execution while allowing you to keep your code and data completely private. There is no ingress required. For more information see here.

Now that we\u2019ve reviewed the concepts of a Work Pool and Worker, let\u2019s create them so that you can deploy your tutorial flow, and execute it later using the Prefect API.

","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#setting-up-the-worker-and-work-pool","title":"Setting up the Worker and Work Pool","text":"

For this tutorial you will create a Process type work pool via the CLI.

The Process work pool type specifies that all work sent to this work pool will run as a subprocess on the same infrastructure on which the worker is running.

Other work pool types

There are work pool types for all major managed code execution platforms, such as Kubernetes services and serverless computing environments like AWS ECS, Azure Container Instances, and GCP Cloud Run.

These are expanded upon in the Guides section.

In your terminal run the following command to set up a Process type work pool.

prefect work-pool create --type process my-process-pool\n

Let\u2019s confirm that the work pool was successfully created by running the following command in the same terminal. You should see your new my-process-pool in the output list.

prefect work-pool ls\n

Finally, let\u2019s double check that you can see this work pool in the Prefect Cloud UI. Navigate to the Work Pool tab and verify that you see my-process-pool listed.

                                             Work Pools                                             \n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name                  \u2503 Type          \u2503                                   ID \u2503 Concurrency Limit \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 my-process-pool       \u2502 process       \u2502 33ce63a5-cc80-4f43-9092-11122ccea60b \u2502 None              \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n

When you click into my-process-pool, select the \"Work Queues\" tab. You should see a red status icon listed for the default work queue, signifying that this queue is not yet ready to submit work. Work queues are an advanced feature. You can learn more about them in the work queue documentation.

To get the work queue healthy and ready to submit flow runs, you need to start a worker.

Workers are lightweight polling processes that kick off scheduled flow runs on a certain type of infrastructure (like Process). To start a worker on your laptop, open a new terminal and confirm that your virtual environment has prefect installed.

Run the following command in this new terminal to start the worker:

prefect worker start --pool my-process-pool\n

You should see the worker start - it's now polling the Prefect API to request any scheduled flow runs it should pick up and then submit for execution. You\u2019ll see your new worker listed in the UI under the worker tab of the Work Pool page with a recent last polled date. You should also be able to see a Healthy status indicator in the default work queue under the work queue tab.

You need to keep this terminal session active for the worker to continue to pick up jobs. Since you are running this worker locally, the worker will terminate if you close the terminal. Therefore, in a production setting this worker should run as a daemonized or managed process. See next steps for more information on this.
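
For example, on a Linux host one way to run the worker as a managed process is a systemd unit along these lines; the paths, user, and pool name are illustrative:

# /etc/systemd/system/prefect-worker.service (illustrative)\n[Unit]\nDescription=Prefect worker\nAfter=network.target\n\n[Service]\nUser=prefect\nExecStart=/usr/local/bin/prefect worker start --pool my-process-pool\nRestart=always\n\n[Install]\nWantedBy=multi-user.target\n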

Now that we\u2019ve set up your work pool and worker, we have what we need to kick off and execute flow runs of deployed flows. Let\u2019s deploy your tutorial flow to my-process-pool.

","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#create-a-deployment","title":"Create a Deployment","text":"","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#from-our-previous-steps-we-now-have","title":"From our previous steps, we now have:","text":"
  1. A flow
  2. A work pool
  3. A worker
  4. An understanding of why Prefect Deployments are useful

Now it\u2019s time to put it all together.

In your terminal (not the terminal in which the worker is running), let\u2019s run the following command from the root of your repo to begin deploying your flow:

prefect deploy\n

Specifying an entrypoint

In non-interactive settings (like CI/CD), you can specify the entrypoint of your flow directly in the CLI.

For example, if my_flow is defined in my_flow.py, provide deployment details with flags prefect deploy my_flow.py:my_flow -n my-deployment -p my-pool.

When running prefect deploy interactively, the CLI will discover all flows in your working directory. Select the flow you want to deploy, and the deployment wizard will walk you through the rest of the deployment creation process:

  1. Deployment name: Choose a name, like my-deployment
  2. Would you like to schedule when this flow runs? (y/n): Type n for now, you can set up a schedule later
  3. Which work pool would you like to deploy this flow to? (use arrow keys): Select the work pool you just created, my-process-pool
  4. When asked if you would like your workers to pull your flow code from its remote repository, select yes if you\u2019ve been following along and defining your flow code script from within a GitHub repository:
    • y (Recommended): Prefect will automatically register your GitHub repo as the remote location of this flow\u2019s code. This means a worker started on any machine (for example, on your laptop, on your teammate\u2019s laptop, or in a cloud VM) will be able to execute this deployed flow.
    • n: If you would like to continue this tutorial without using GitHub, that\u2019s OK. Prefect will always look first to see if the flow code exists locally before referring to remote flow code storage, so your local my-process-pool should have all it needs to complete the execution of this deployed flow.

Common Pitfalls

As you continue to use Prefect, you'll likely author many different flows and deployments of them. Check out the next section to learn about defining deployments in a prefect.yaml file.

Did you know?

A Prefect flow can have more than one deployment. This can be useful if you want your flow to run in different execution environments or have multiple different schedules.

","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#next-steps","title":"Next Steps","text":"

And more!

","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/flows/","title":"Flows","text":"","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#what-is-a-flow","title":"What is a Flow?","text":"

Flows are like functions. They can take inputs, perform work, and return an output. In fact, you can turn any function into a Prefect flow by adding the @flow decorator. When a function becomes a flow, its behavior changes, giving it the following advantages:

","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#run-your-first-flow","title":"Run Your First flow:","text":"

The simplest way to get started with Prefect is to import and annotate your Python function with the @flow decorator. The script below fetches statistics about the main Prefect repository. Let's turn it into a Prefect flow:

import httpx\nfrom prefect import flow\n\n\n@flow\ndef get_repo_info():\n    url = \"https://api.github.com/repos/PrefectHQ/prefect\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info()\n

Running this flow from your terminal will result in some interesting output:

12:47:42.792 | INFO    | prefect.engine - Created flow run 'ludicrous-warthog' for flow 'get-repo-info'\nPrefectHQ/prefect repository statistics \ud83e\udd13:\nStars \ud83c\udf20 : 12146\nForks \ud83c\udf74 : 1245\n12:47:45.008 | INFO    | Flow run 'ludicrous-warthog' - Finished in state Completed()\n
","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#parameters","title":"Parameters","text":"

As with any Python function, you can pass arguments to a flow. The positional and keyword arguments defined on your flow function are called parameters. Prefect will automatically perform type conversion by using any provided type hints. Let's make the repository a parameter:

import httpx\nfrom prefect import flow\n\n\n@flow\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info()\n
","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#logging","title":"Logging","text":"

Prefect enables you to log a variety of useful information in the UI about your flow and task runs, capturing information about your workflows for purposes such as monitoring, troubleshooting, and auditing. Let's add some logging to our flow:

import httpx\nfrom prefect import flow, get_run_logger\n\n\n@flow\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    logger = get_run_logger()\n    logger.info(f\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n    logger.info(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    logger.info(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info()\n

Now the output looks more consistent:

12:47:42.792 | INFO    | prefect.engine - Created flow run 'ludicrous-warthog' for flow 'get-repo-info'\nPrefectHQ/prefect repository statistics \ud83e\udd13:\n12:47:43.016 | INFO    | Flow run 'ludicrous-warthog' - Stars \ud83c\udf20 : 12146\n12:47:43.042 | INFO    | Flow run 'ludicrous-warthog' - Forks \ud83c\udf74 : 1245\n12:47:45.008 | INFO    | Flow run 'ludicrous-warthog' - Finished in state Completed()\n

Prefect can also capture print statements as info logs by specifying log_prints=True in your flow decorator (e.g. @flow(log_prints=True)).
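
For example, the earlier version of the flow that used print could log those statements without get_run_logger by changing only the decorator:

@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    ...  # print() calls inside the flow body are captured as INFO logs\n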

","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#retries","title":"Retries","text":"

So far our script works, but in the future, the GitHub API may be temporarily unavailable or rate limited. Retries help make our script more resilient. Let's add retry functionality to our example above:

import httpx\nfrom prefect import flow, get_run_logger\n\n\n@flow(retries=3, retry_delay_seconds=5)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    logger = get_run_logger()\n    logger.info(f\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n    logger.info(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    logger.info(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info()\n

","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#next-tasks","title":"Next: Tasks","text":"

As you have seen, adding a flow decorator converts our Python function to a resilient and observable workflow. In the next section, you'll supercharge our flow by using tasks to break down the workflow's complexity and make it more performant.

","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/tasks/","title":"Tasks","text":"","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#what-is-a-task","title":"What is a task?","text":"

A task is a Python function decorated with a @task decorator. Tasks are atomic pieces of work that are executed independently within a flow. Tasks, and the dependencies between them, are displayed in the flow run graph, enabling you to break down a complex flow into something you can observe and understand. When a function becomes a task, it can be executed concurrently and its return value can be cached.

Flows and tasks share some common features:

Network calls (such as our GET requests to the GitHub API) are particularly useful as tasks because they take advantage of task features such as retries, caching, and concurrency.

Tasks must be called from flows

All tasks must be called from within a flow. Tasks may not call other tasks directly.

When to use tasks

Not all functions in a flow need be tasks. Use them only when their features are useful.

Let's take our flow from before and move the request into a task:

import httpx\nfrom prefect import flow, task, get_run_logger\n\n\n@task\ndef get_url(url: str, params: dict = None):\n    response = httpx.get(url, params=params)\n    response.raise_for_status()\n    return response.json()\n\n\n@flow(retries=3, retry_delay_seconds=5)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    repo = get_url(f\"https://api.github.com/repos/{repo_name}\")\n    logger = get_run_logger()\n    logger.info(f\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n    logger.info(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    logger.info(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info()\n

Running the flow in your terminal will result in something like this:

09:55:55.412 | INFO    | prefect.engine - Created flow run 'great-ammonite' for flow 'get-repo-info'\n09:55:55.499 | INFO    | Flow run 'great-ammonite' - Created task run 'get_url-0' for task 'get_url'\n09:55:55.500 | INFO    | Flow run 'great-ammonite' - Executing 'get_url-0' immediately...\n09:55:55.825 | INFO    | Task run 'get_url-0' - Finished in state Completed()\n09:55:55.827 | INFO    | Flow run 'great-ammonite' - PrefectHQ/prefect repository statistics \ud83e\udd13:\n09:55:55.827 | INFO    | Flow run 'great-ammonite' - Stars \ud83c\udf20 : 12157\n09:55:55.827 | INFO    | Flow run 'great-ammonite' - Forks \ud83c\udf74 : 1251\n09:55:55.849 | INFO    | Flow run 'great-ammonite' - Finished in state Completed('All states completed.')\n
","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#caching","title":"Caching","text":"

Tasks support the ability to cache their return value. This allows you to efficiently reuse results of tasks that may be expensive to reproduce with every flow run, or reuse cached results if the inputs to a task have not changed.

To enable caching, specify a cache_key_fn \u2014 a function that returns a cache key \u2014 on your task. You may optionally provide a cache_expiration timedelta indicating when the cache expires. You can define a task that is cached based on its inputs by using the Prefect task_input_hash. Let's add caching to our get_url task:

import httpx\nfrom datetime import timedelta\nfrom prefect import flow, task, get_run_logger\nfrom prefect.tasks import task_input_hash\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))\ndef get_url(url: str, params: dict = None):\n    response = httpx.get(url, params=params)\n    response.raise_for_status()\n    return response.json()\n

Task results and caching

Task results are cached in memory during a flow run and persisted to your home directory by default. Prefect Cloud only stores the cache key, not the data itself.

","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#concurrency","title":"Concurrency","text":"

Tasks enable concurrency, allowing you to execute multiple tasks asynchronously. This concurrency can greatly enhance the efficiency and performance of your workflows. Let's expand our script to calculate the average open issues per user. This will require making more requests:

import httpx\nfrom datetime import timedelta\nfrom prefect import flow, task, get_run_logger\nfrom prefect.tasks import task_input_hash\n\n\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))\ndef get_url(url: str, params: dict = None):\n    response = httpx.get(url, params=params)\n    response.raise_for_status()\n    return response.json()\n\n\ndef get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n    issues = []\n    pages = range(1, -(open_issues_count // -per_page) + 1)\n    for page in pages:\n        issues.append(\n            get_url(\n                f\"https://api.github.com/repos/{repo_name}/issues\",\n                params={\"page\": page, \"per_page\": per_page, \"state\": \"open\"},\n            )\n        )\n    return [i for p in issues for i in p]\n\n\n@flow(retries=3, retry_delay_seconds=5)\ndef get_repo_info(\n    repo_name: str = \"PrefectHQ/prefect\"\n):\n    repo = get_url(f\"https://api.github.com/repos/{repo_name}\")\n    issues = get_open_issues(repo_name, repo[\"open_issues_count\"])\n    issues_per_user = len(issues) / len(set([i[\"user\"][\"id\"] for i in issues]))\n    logger = get_run_logger()\n    logger.info(f\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n    logger.info(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    logger.info(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n    logger.info(f\"Average open issues per user \ud83d\udc8c : {issues_per_user:.2f}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info()\n

Now we're fetching the data we need, but the requests are happening sequentially. Tasks expose a submit method which changes the execution from sequential to concurrent. In our specific example, we also need to use the result method since we are unpacking a list of return values:

def get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n    issues = []\n    pages = range(1, -(open_issues_count // -per_page) + 1)\n    for page in pages:\n        issues.append(\n            get_url.submit(\n                f\"https://api.github.com/repos/{repo_name}/issues\",\n                params={\"page\": page, \"per_page\": per_page, \"state\": \"open\"},\n            )\n        )\n    return [i for p in issues for i in p.result()]\n

The logs show that each task is running concurrently:

12:45:28.241 | INFO    | prefect.engine - Created flow run 'intrepid-coua' for flow 'get-repo-info'\n12:45:28.311 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-0' for task 'get_url'\n12:45:28.312 | INFO    | Flow run 'intrepid-coua' - Executing 'get_url-0' immediately...\n12:45:28.543 | INFO    | Task run 'get_url-0' - Finished in state Completed()\n12:45:28.583 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-1' for task 'get_url'\n12:45:28.584 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-1' for execution.\n12:45:28.594 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-2' for task 'get_url'\n12:45:28.594 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-2' for execution.\n12:45:28.609 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-4' for task 'get_url'\n12:45:28.610 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-4' for execution.\n12:45:28.624 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-5' for task 'get_url'\n12:45:28.625 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-5' for execution.\n12:45:28.640 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-6' for task 'get_url'\n12:45:28.641 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-6' for execution.\n12:45:28.708 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-3' for task 'get_url'\n12:45:28.708 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-3' for execution.\n12:45:29.096 | INFO    | Task run 'get_url-6' - Finished in state Completed()\n12:45:29.565 | INFO    | Task run 'get_url-2' - Finished in state Completed()\n12:45:29.721 | INFO    | Task run 'get_url-5' - Finished in state Completed()\n12:45:29.749 | INFO    | Task run 'get_url-4' - Finished in state Completed()\n12:45:29.801 | INFO    | Task run 'get_url-3' - Finished in state Completed()\n12:45:29.817 | INFO    | Task run 'get_url-1' - Finished in state Completed()\n12:45:29.820 | INFO    | Flow run 'intrepid-coua' - PrefectHQ/prefect repository statistics \ud83e\udd13:\n12:45:29.820 | INFO    | Flow run 'intrepid-coua' - Stars \ud83c\udf20 : 12159\n12:45:29.821 | INFO    | Flow run 'intrepid-coua' - Forks \ud83c\udf74 : 1251\nAverage open issues per user \ud83d\udc8c : 2.27\n12:45:29.838 | INFO    | Flow run 'intrepid-coua' - Finished in state Completed('All states completed.')\n
","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#subflows","title":"Subflows","text":"

Not only can you call tasks within a flow, but you can also call other flows! Child flows are called subflows and allow you to efficiently manage, track, and version common multi-task logic.

Subflows are a great way to organize your workflows and offer more visibility within the UI.

Let's add a flow decorator to our get_open_issues function:

@flow\ndef get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n    issues = []\n    pages = range(1, -(open_issues_count // -per_page) + 1)\n    for page in pages:\n        issues.append(\n            get_url.submit(\n                f\"https://api.github.com/repos/{repo_name}/issues\",\n                params={\"page\": page, \"per_page\": per_page, \"state\": \"open\"},\n            )\n        )\n    return [i for p in issues for i in p.result()]\n

Whenever we run the parent flow, a new run will also be generated for the subflow call within it. Not only is this run tracked as a subflow run of the main flow, but you can also inspect it independently in the UI!

","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#next-deployments","title":"Next: Deployments","text":"

We now have a flow with tasks, subflows, retries, logging, caching, and concurrent execution. In the next section, we'll see how we can deploy this flow in order to run it on a schedule and/or external infrastructure.

","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"],"fields":{"title":{"boost":1000.0},"text":{"boost":1.0},"tags":{"boost":1000000.0}}},"docs":[{"location":"","title":"Welcome to Prefect","text":"

Prefect is a workflow orchestration tool empowering developers to build, observe, and react to data pipelines.

It's the easiest way to transform any Python function into a unit of work that can be observed and orchestrated. Just bring your Python code, sprinkle in a few decorators, and go!

With Prefect you gain:

Prefect UI","tags":["getting started","quick start","overview"],"boost":2},{"location":"#new-to-prefect","title":"New to Prefect?","text":"

Start with the tutorial and then check out the concepts for more in-depth information. For deeper dives on specific use cases, explore our guides for common use-cases.

Concepts Tutorial Guides

Alternatively, read on for a quickstart of Prefect.

","tags":["getting started","quick start","overview"],"boost":2},{"location":"#prefect-quickstart","title":"Prefect Quickstart","text":"","tags":["getting started","quick start","overview"],"boost":2},{"location":"#step-1-install-prefect","title":"Step 1: Install Prefect","text":"
pip install -U prefect\n

See the install guide for more detailed installation instructions.

","tags":["getting started","quick start","overview"],"boost":2},{"location":"#step-2-connect-to-prefects-api","title":"Step 2: Connect to Prefect's API","text":"

Sign up for a forever free Prefect Cloud account or, alternatively, host your own server with a subset of Prefect Cloud's features.

  1. If using Prefect Cloud, sign in with an existing account or create a new account at https://app.prefect.cloud/.
  2. If setting up a new account, create a workspace for your account.
  3. Use the prefect cloud login Prefect CLI command to log into Prefect Cloud from your environment.

    prefect cloud login\n

","tags":["getting started","quick start","overview"],"boost":2},{"location":"#step-3-create-a-flow","title":"Step 3: Create a flow","text":"

The fastest way to get started with Prefect is to add a @flow decorator to any Python function.

Rules of Thumb

Here is an example flow named Repo Info that contains two tasks:

# my_flow.py\nimport httpx\nfrom prefect import flow, task\n\n@task(retries=2)\ndef get_repo_info(repo_owner: str, repo_name: str):\n\"\"\" Get info about a repo - will retry twice after failing \"\"\"\n    url = f\"https://api.github.com/repos/{repo_owner}/{repo_name}\"\n    api_response = httpx.get(url)\n    api_response.raise_for_status()\n    repo_info = api_response.json()\n    return repo_info\n\n\n@task\ndef get_contributors(repo_info: dict):\n    contributors_url = repo_info[\"contributors_url\"]\n    response = httpx.get(contributors_url)\n    response.raise_for_status()\n    contributors = response.json()\n    return contributors\n\n@flow(name=\"Repo Info\", log_prints=True)  \ndef repo_info(\n    repo_owner: str = \"PrefectHQ\", repo_name: str = \"prefect\"\n):\n    # call our `get_repo_info` task\n    repo_info = get_repo_info(repo_owner, repo_name)\n    print(f\"Stars \ud83c\udf20 : {repo_info['stargazers_count']}\")\n\n    # call our `get_contributors` task, \n    # passing in the upstream result\n    contributors = get_contributors(repo_info)\n    print(\n        f\"Number of contributors \ud83d\udc77: {len(contributors)}\"\n    )\n\n\nif __name__ == \"__main__\":\n    # Call a flow function for a local flow run!\n    repo_info()\n

","tags":["getting started","quick start","overview"],"boost":2},{"location":"#step-4-run-your-flow-locally","title":"Step 4: Run your flow locally","text":"

Call any function that you've decorated with a @flow decorator to see a local instance of a flow run.

python my_flow.py\n
18:21:40.235 | INFO    | prefect.engine - Created flow run 'fine-gorilla' for flow 'Repo Info'\n18:21:40.237 | INFO    | Flow run 'fine-gorilla' - View at https://app.prefect.cloud/account/0ff44498-d380-4d7b-bd68-9b52da03823f/workspace/c859e5b6-1539-4c77-81e0-444c2ddcaafe/flow-runs/flow-run/c5a85193-69ea-4577-aa0e-e11adbf1f659\n18:21:40.837 | INFO    | Flow run 'fine-gorilla' - Created task run 'get_repo_info-0' for task 'get_repo_info'\n18:21:40.838 | INFO    | Flow run 'fine-gorilla' - Executing 'get_repo_info-0' immediately...\n18:21:41.468 | INFO    | Task run 'get_repo_info-0' - Finished in state Completed()\n18:21:41.477 | INFO    | Flow run 'fine-gorilla' - Stars \ud83c\udf20 : 12340\n18:21:41.606 | INFO    | Flow run 'fine-gorilla' - Created task run 'get_contributors-0' for task 'get_contributors'\n18:21:41.607 | INFO    | Flow run 'fine-gorilla' - Executing 'get_contributors-0' immediately...\n18:21:42.225 | INFO    | Task run 'get_contributors-0' - Finished in state Completed()\n18:21:42.232 | INFO    | Flow run 'fine-gorilla' - Number of contributors \ud83d\udc77: 30\n18:21:42.383 | INFO    | Flow run 'fine-gorilla' - Finished in state Completed('All states completed.')\n

You'll find a link directing you to the flow run page conveniently positioned at the top of your flow logs.

Local flow run execution is great for development and testing, but to schedule flow runs or trigger them based on events, you\u2019ll need to deploy your flows.

","tags":["getting started","quick start","overview"],"boost":2},{"location":"#step-5-deploy-the-flow","title":"Step 5: Deploy the flow","text":"

Deploying your flows informs the Prefect API of where, how, and when to run your flows.

Always run prefect deploy commands from the root level of your repo!

When you run the deploy command, Prefect will automatically detect any flows defined in your repository. Select the one you wish to deploy. Then, follow the \ud83e\uddd9 wizard to name your deployment, add an optional schedule, create a work pool, optionally configure remote flow code storage, and more!

prefect deploy\n

It's recommended to save the configuration for the deployment.

Saving the configuration for your deployment will result in a prefect.yaml file populated with your first deployment. You can use this YAML file to edit and define multiple deployments for this repo.

","tags":["getting started","quick start","overview"],"boost":2},{"location":"#step-5-start-a-worker-and-run-the-deployed-flow","title":"Step 5: Start a worker and run the deployed flow","text":"

Start a worker to manage local flow execution. Each worker polls its assigned work pool.

In a new terminal, run:

prefect worker start --pool '<work-pool-name>'\n

Now that your worker is running, you are ready to kick off deployed flow runs from the UI or by running:

prefect deployment run '<flow-name>/<deployment-name>'\n

Check out this flow run's logs from the Flow Runs page in the UI or from the worker logs. Congrats on your first successfully deployed flow run! \ud83c\udf89

You've seen:

","tags":["getting started","quick start","overview"],"boost":2},{"location":"#next-steps","title":"Next Steps","text":"

To learn more, try our tutorial and guides, or go deeper with concepts.

Need help?

Get your questions answered with a Prefect product advocate by Booking A Rubber Duck!

","tags":["getting started","quick start","overview"],"boost":2},{"location":"#community","title":"Community","text":"

Changing from 'Orion'

With the 2.8.1 release, we removed references to \"Orion\" and replaced them with more explicit, conventional nomenclature throughout the codebase. These changes clarify the function of various components, commands, and variables. See the Release Notes for details.

Looking for Prefect 1 Core and Server?

Prefect 2 is used by thousands of teams in production. It is highly recommended that you use Prefect 2 for all new projects.

If you are looking for the Prefect 1 Core and Server documentation, it is still available at http://docs-v1.prefect.io/.

","tags":["getting started","quick start","overview"],"boost":2},{"location":"faq/","title":"Frequently Asked Questions","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#prefect","title":"Prefect","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#how-is-prefect-licensed","title":"How is Prefect licensed?","text":"

Prefect is licensed under the Apache 2.0 License, an OSI approved open-source license. If you have any questions about licensing, please contact us.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#is-the-prefect-v2-cloud-url-different-than-the-prefect-v1-cloud-url","title":"Is the Prefect v2 Cloud URL different than the Prefect v1 Cloud URL?","text":"

Yes. Prefect Cloud for v2 is at app.prefect.cloud/ while Prefect Cloud for v1 is at cloud.prefect.io.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#the-prefect-orchestration-engine","title":"The Prefect Orchestration Engine","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#why-was-the-prefect-orchestration-engine-created","title":"Why was the Prefect orchestration engine created?","text":"

The Prefect orchestration engine has three major objectives:

As Prefect has matured, so has the modern data stack. The on-demand, dynamic, highly scalable workflows that used to exist principally in the domain of data science and analytics are now prevalent throughout all of data engineering. Few companies have workflows that don\u2019t deal with streaming data, uncertain timing, runtime logic, complex dependencies, versioning, or custom scheduling.

This means that the current generation of workflow managers is built around the wrong abstraction: the directed acyclic graph (DAG). DAGs are an increasingly arcane, constrained way of representing the dynamic, heterogeneous range of modern data and computation patterns.

Furthermore, as workflows have become more complex, it has become even more important to focus on the developer experience of building, testing, and monitoring them. Faced with an explosion of available tools, it is more important than ever for development teams to seek orchestration tools that will be compatible with any code, tools, or services they may require in the future.

And finally, this additional complexity means that providing clear and consistent insight into the behavior of the orchestration engine and any decisions it makes is critically important.

The Prefect orchestration engine represents a unified solution to these three problems.

The Prefect orchestration engine is capable of governing any code through a well-defined series of state transitions designed to maximize the user's understanding of what happened during execution. It's popular to describe \"workflows as code\" or \"orchestration as code,\" but the Prefect engine represents \"code as workflows\": rather than ask users to change how they work to meet the requirements of the orchestrator, we've defined an orchestrator that adapts to how our users work.

To achieve this, we've leveraged the familiar tools of native Python: first class functions, type annotations, and async support. Users are free to implement as much \u2014 or as little \u2014 of the Prefect engine as is useful for their objectives.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#if-im-using-prefect-cloud-2-do-i-still-need-to-run-a-prefect-server-locally","title":"If I\u2019m using Prefect Cloud 2, do I still need to run a Prefect server locally?","text":"

No, Prefect Cloud hosts an instance of the Prefect API for you. In fact, each workspace in Prefect Cloud corresponds directly to a single instance of the Prefect orchestration engine. See the Prefect Cloud Overview for more information.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#features","title":"Features","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#does-prefect-support-mapping","title":"Does Prefect support mapping?","text":"

Yes! For more information, see the Task.map API reference

@flow\ndef my_flow():\n\n    # map over a constant\n    for i in range(10):\n        my_mapped_task(i)\n\n    # map over a task's output\n    l = list_task()\n    for i in l.wait().result():\n        my_mapped_task_2(i)\n

Note that when tasks are called on constant values, they cannot detect their upstream edges automatically. In this example, my_mapped_task_2 does not know that it is downstream from list_task(). Prefect will have convenience functions for detecting these associations, and Prefect's .map() operator will automatically track them.
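
As a minimal sketch of that pattern, reusing the task names from the snippet above, mapping over an upstream task's output lets Prefect record the dependency for every mapped task run:

from prefect import flow, task\n\n@task\ndef list_task():\n    return [1, 2, 3]\n\n@task\ndef my_mapped_task_2(x: int) -> int:\n    return x * 10\n\n@flow\ndef my_mapped_flow():\n    items = list_task()                    # upstream task result\n    futures = my_mapped_task_2.map(items)  # one task run per item, edge tracked\n    return [f.result() for f in futures]\n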

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-i-enforce-ordering-between-tasks-that-dont-share-data","title":"Can I enforce ordering between tasks that don't share data?","text":"

Yes! For more information, see the Tasks section.
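
For example, here is a minimal sketch (task names are illustrative) that uses the wait_for argument to enforce ordering between tasks that do not exchange data:

from prefect import flow, task\n\n@task\ndef create_table():\n    print(\"creating table\")\n\n@task\ndef load_rows():\n    print(\"loading rows\")\n\n@flow\ndef etl():\n    table = create_table.submit()\n    # load_rows consumes no output from create_table, but wait_for makes it\n    # run only after create_table completes\n    load_rows.submit(wait_for=[table])\n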

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#does-prefect-support-proxies","title":"Does Prefect support proxies?","text":"

Yes!

Prefect supports communicating via proxies through the use of environment variables. You can read more about this in the Installation documentation and the article Using Prefect Cloud with proxies.
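
As a minimal sketch (the proxy URL is a placeholder), the standard proxy environment variables can be set before the process makes any Prefect API calls:

import os\n\n# placeholder proxy address; Prefect's HTTP client honors these standard variables\nos.environ[\"HTTPS_PROXY\"] = \"http://proxy.example.com:3128\"\nos.environ[\"HTTP_PROXY\"] = \"http://proxy.example.com:3128\"\n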

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-i-run-prefect-flows-on-linux","title":"Can I run Prefect flows on Linux?","text":"

Yes!

See the Installation documentation and Linux installation notes for details on getting started with Prefect on Linux.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-i-run-prefect-flows-on-windows","title":"Can I run Prefect flows on Windows?","text":"

Yes!

See the Installation documentation and Windows installation notes for details on getting started with Prefect on Windows.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#what-external-requirements-does-prefect-have","title":"What external requirements does Prefect have?","text":"

Prefect does not have any additional requirements besides those installed by pip install -U prefect. The entire system, including the UI and services, can be run in a single process via prefect server start and does not require Docker.

Prefect Cloud users do not need to worry about the Prefect database. Prefect Cloud uses PostgreSQL on GCP behind the scenes. To use PostgreSQL with a self-hosted Prefect server, users must provide the connection string for a running database via the PREFECT_API_DATABASE_CONNECTION_URL environment variable.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#what-databases-does-prefect-support","title":"What databases does Prefect support?","text":"

A self-hosted Prefect server can work with SQLite and PostgreSQL. New Prefect installs default to a SQLite database hosted at ~/.prefect/prefect.db on Mac or Linux machines. SQLite and PostgreSQL are not installed by Prefect.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#how-do-i-choose-between-sqlite-and-postgres","title":"How do I choose between SQLite and Postgres?","text":"

SQLite generally works well for getting started and exploring Prefect. We have tested it with up to hundreds of thousands of task runs. Many users may be able to stay on SQLite for some time. However, for production uses, Prefect Cloud or self-hosted PostgreSQL is highly recommended. Under write-heavy workloads, SQLite performance can begin to suffer. Users running many flows with high degrees of parallelism or concurrency should use PostgreSQL.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#relationship-with-other-prefect-products","title":"Relationship with other Prefect products","text":"","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-a-flow-written-with-prefect-1-be-orchestrated-with-prefect-2-and-vice-versa","title":"Can a flow written with Prefect 1 be orchestrated with Prefect 2 and vice versa?","text":"

No. Flows written with the Prefect 1 client must be rewritten with the Prefect 2 client. For most flows, this should take just a few minutes. See our migration guide and our Upgrade to Prefect 2 post for more information.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"faq/#can-a-use-prefect-1-and-prefect-2-at-the-same-time-on-my-local-machine","title":"Can a use Prefect 1 and Prefect 2 at the same time on my local machine?","text":"

Yes. Just use different virtual environments.

","tags":["FAQ","frequently asked questions","questions","license","databases"]},{"location":"api-ref/","title":"API References","text":"

Prefect provides several APIs.

","tags":["API","Prefect API","Prefect Cloud","REST API","development","orchestration"]},{"location":"api-ref/rest-api-reference/","title":"Prefect server REST API reference","text":"","tags":["REST API","Prefect server"]},{"location":"api-ref/rest-api-reference/#the-prefect-server-rest-api","title":"The Prefect server REST API","text":"

The REST API documentation for a locally hosted open-source Prefect server is available below.

","tags":["REST API","Prefect server"]},{"location":"api-ref/prefect/agent/","title":"prefect.agent","text":"","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent","title":"prefect.agent","text":"

The agent is responsible for checking for flow runs that are ready to run and starting their execution.

","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.OrionAgent","title":"OrionAgent","text":"

Bases: PrefectAgent

Deprecated. Use PrefectAgent instead.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/agent.py
@deprecated_callable(start_date=\"Feb 2023\", help=\"Use `PrefectAgent` instead.\")\nclass OrionAgent(PrefectAgent):\n\"\"\"\n    Deprecated. Use `PrefectAgent` instead.\n    \"\"\"\n
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent","title":"PrefectAgent","text":"Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/agent.py
class PrefectAgent:\n    @experimental_parameter(\n        \"work_pool_name\", group=\"work_pools\", when=lambda y: y is not None\n    )\n    def __init__(\n        self,\n        work_queues: List[str] = None,\n        work_queue_prefix: Union[str, List[str]] = None,\n        work_pool_name: str = None,\n        prefetch_seconds: int = None,\n        default_infrastructure: Infrastructure = None,\n        default_infrastructure_document_id: UUID = None,\n        limit: Optional[int] = None,\n    ) -> None:\n        if default_infrastructure and default_infrastructure_document_id:\n            raise ValueError(\n                \"Provide only one of 'default_infrastructure' and\"\n                \" 'default_infrastructure_document_id'.\"\n            )\n\n        self.work_queues: Set[str] = set(work_queues) if work_queues else set()\n        self.work_pool_name = work_pool_name\n        self.prefetch_seconds = prefetch_seconds\n        self.submitting_flow_run_ids = set()\n        self.cancelling_flow_run_ids = set()\n        self.scheduled_task_scopes = set()\n        self.started = False\n        self.logger = get_logger(\"agent\")\n        self.task_group: Optional[anyio.abc.TaskGroup] = None\n        self.limit: Optional[int] = limit\n        self.limiter: Optional[anyio.CapacityLimiter] = None\n        self.client: Optional[PrefectClient] = None\n\n        if isinstance(work_queue_prefix, str):\n            work_queue_prefix = [work_queue_prefix]\n        self.work_queue_prefix = work_queue_prefix\n\n        self._work_queue_cache_expiration: pendulum.DateTime = None\n        self._work_queue_cache: List[WorkQueue] = []\n\n        if default_infrastructure:\n            self.default_infrastructure_document_id = (\n                default_infrastructure._block_document_id\n            )\n            self.default_infrastructure = default_infrastructure\n        elif default_infrastructure_document_id:\n            self.default_infrastructure_document_id = default_infrastructure_document_id\n            self.default_infrastructure = None\n        else:\n            self.default_infrastructure = Process()\n            self.default_infrastructure_document_id = None\n\n    async def update_matched_agent_work_queues(self):\n        if self.work_queue_prefix:\n            if self.work_pool_name:\n                matched_queues = await self.client.read_work_queues(\n                    work_pool_name=self.work_pool_name,\n                    work_queue_filter=WorkQueueFilter(\n                        name=WorkQueueFilterName(startswith_=self.work_queue_prefix)\n                    ),\n                )\n            else:\n                matched_queues = await self.client.match_work_queues(\n                    self.work_queue_prefix\n                )\n            matched_queues = set(q.name for q in matched_queues)\n            if matched_queues != self.work_queues:\n                new_queues = matched_queues - self.work_queues\n                removed_queues = self.work_queues - matched_queues\n                if new_queues:\n                    self.logger.info(\n                        f\"Matched new work queues: {', '.join(new_queues)}\"\n                    )\n                if removed_queues:\n                    self.logger.info(\n                        f\"Work queues no longer matched: {', '.join(removed_queues)}\"\n                    )\n            self.work_queues = matched_queues\n\n    async def get_work_queues(self) -> AsyncIterator[WorkQueue]:\n\"\"\"\n        Loads the 
work queue objects corresponding to the agent's target work\n        queues. If any of them don't exist, they are created.\n        \"\"\"\n\n        # if the queue cache has not expired, yield queues from the cache\n        now = pendulum.now(\"UTC\")\n        if (self._work_queue_cache_expiration or now) > now:\n            for queue in self._work_queue_cache:\n                yield queue\n            return\n\n        # otherwise clear the cache, set the expiration for 30 seconds, and\n        # reload the work queues\n        self._work_queue_cache.clear()\n        self._work_queue_cache_expiration = now.add(seconds=30)\n\n        await self.update_matched_agent_work_queues()\n\n        for name in self.work_queues:\n            try:\n                work_queue = await self.client.read_work_queue_by_name(\n                    work_pool_name=self.work_pool_name, name=name\n                )\n            except ObjectNotFound:\n                # if the work queue wasn't found, create it\n                if not self.work_queue_prefix:\n                    # do not attempt to create work queues if the agent is polling for\n                    # queues using a regex\n                    try:\n                        work_queue = await self.client.create_work_queue(\n                            work_pool_name=self.work_pool_name, name=name\n                        )\n                        if self.work_pool_name:\n                            self.logger.info(\n                                f\"Created work queue {name!r} in work pool\"\n                                f\" {self.work_pool_name!r}.\"\n                            )\n                        else:\n                            self.logger.info(f\"Created work queue '{name}'.\")\n\n                    # if creating it raises an exception, it was probably just\n                    # created by some other agent; rather than entering a re-read\n                    # loop with new error handling, we log the exception and\n                    # continue.\n                    except Exception:\n                        self.logger.exception(f\"Failed to create work queue {name!r}.\")\n                        continue\n\n            self._work_queue_cache.append(work_queue)\n            yield work_queue\n\n    async def get_and_submit_flow_runs(self) -> List[FlowRun]:\n\"\"\"\n        The principle method on agents. Queries for scheduled flow runs and submits\n        them for execution in parallel.\n        \"\"\"\n        if not self.started:\n            raise RuntimeError(\n                \"Agent is not started. 
Use `async with PrefectAgent()...`\"\n            )\n\n        self.logger.debug(\"Checking for scheduled flow runs...\")\n\n        before = pendulum.now(\"utc\").add(\n            seconds=self.prefetch_seconds or PREFECT_AGENT_PREFETCH_SECONDS.value()\n        )\n\n        submittable_runs: List[FlowRun] = []\n\n        if self.work_pool_name:\n            responses = await self.client.get_scheduled_flow_runs_for_work_pool(\n                work_pool_name=self.work_pool_name,\n                work_queue_names=[wq.name async for wq in self.get_work_queues()],\n                scheduled_before=before,\n            )\n            submittable_runs.extend([response.flow_run for response in responses])\n\n        else:\n            # load runs from each work queue\n            async for work_queue in self.get_work_queues():\n                # print a nice message if the work queue is paused\n                if work_queue.is_paused:\n                    self.logger.info(\n                        f\"Work queue {work_queue.name!r} ({work_queue.id}) is paused.\"\n                    )\n\n                else:\n                    try:\n                        queue_runs = await self.client.get_runs_in_work_queue(\n                            id=work_queue.id, limit=10, scheduled_before=before\n                        )\n                        submittable_runs.extend(queue_runs)\n                    except ObjectNotFound:\n                        self.logger.error(\n                            f\"Work queue {work_queue.name!r} ({work_queue.id}) not\"\n                            \" found.\"\n                        )\n                    except Exception as exc:\n                        self.logger.exception(exc)\n\n            submittable_runs.sort(key=lambda run: run.next_scheduled_start_time)\n\n        for flow_run in submittable_runs:\n            # don't resubmit a run\n            if flow_run.id in self.submitting_flow_run_ids:\n                continue\n\n            try:\n                if self.limiter:\n                    self.limiter.acquire_on_behalf_of_nowait(flow_run.id)\n            except anyio.WouldBlock:\n                self.logger.info(\n                    f\"Flow run limit reached; {self.limiter.borrowed_tokens} flow runs\"\n                    \" in progress.\"\n                )\n                break\n            else:\n                self.logger.info(f\"Submitting flow run '{flow_run.id}'\")\n                self.submitting_flow_run_ids.add(flow_run.id)\n                self.task_group.start_soon(\n                    self.submit_run,\n                    flow_run,\n                )\n\n        return list(\n            filter(lambda run: run.id in self.submitting_flow_run_ids, submittable_runs)\n        )\n\n    async def check_for_cancelled_flow_runs(self):\n        if not self.started:\n            raise RuntimeError(\n                \"Agent is not started. 
Use `async with PrefectAgent()...`\"\n            )\n\n        self.logger.debug(\"Checking for cancelled flow runs...\")\n\n        work_queue_filter = (\n            WorkQueueFilter(name=WorkQueueFilterName(any_=list(self.work_queues)))\n            if self.work_queues\n            else None\n        )\n\n        work_pool_filter = (\n            WorkPoolFilter(name=WorkPoolFilterName(any_=[self.work_pool_name]))\n            if self.work_pool_name\n            else WorkPoolFilter(name=WorkPoolFilterName(any_=[\"default-agent-pool\"]))\n        )\n        named_cancelling_flow_runs = await self.client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLED]),\n                    name=FlowRunFilterStateName(any_=[\"Cancelling\"]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(not_any_=list(self.cancelling_flow_run_ids)),\n            ),\n            work_pool_filter=work_pool_filter,\n            work_queue_filter=work_queue_filter,\n        )\n\n        typed_cancelling_flow_runs = await self.client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLING]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(not_any_=list(self.cancelling_flow_run_ids)),\n            ),\n            work_pool_filter=work_pool_filter,\n            work_queue_filter=work_queue_filter,\n        )\n\n        cancelling_flow_runs = named_cancelling_flow_runs + typed_cancelling_flow_runs\n\n        if cancelling_flow_runs:\n            self.logger.info(\n                f\"Found {len(cancelling_flow_runs)} flow runs awaiting cancellation.\"\n            )\n\n        for flow_run in cancelling_flow_runs:\n            self.cancelling_flow_run_ids.add(flow_run.id)\n            self.task_group.start_soon(self.cancel_run, flow_run)\n\n        return cancelling_flow_runs\n\n    async def cancel_run(self, flow_run: FlowRun) -> None:\n\"\"\"\n        Cancel a flow run by killing its infrastructure\n        \"\"\"\n        if not flow_run.infrastructure_pid:\n            self.logger.error(\n                f\"Flow run '{flow_run.id}' does not have an infrastructure pid\"\n                \" attached. Cancellation cannot be guaranteed.\"\n            )\n            await self._mark_flow_run_as_cancelled(\n                flow_run,\n                state_updates={\n                    \"message\": (\n                        \"This flow run is missing infrastructure tracking information\"\n                        \" and cancellation cannot be guaranteed.\"\n                    )\n                },\n            )\n            return\n\n        try:\n            infrastructure = await self.get_infrastructure(flow_run)\n        except Exception:\n            self.logger.exception(\n                f\"Failed to get infrastructure for flow run '{flow_run.id}'. 
\"\n                \"Flow run cannot be cancelled.\"\n            )\n            # Note: We leave this flow run in the cancelling set because it cannot be\n            #       cancelled and this will prevent additional attempts.\n            return\n\n        if not hasattr(infrastructure, \"kill\"):\n            self.logger.error(\n                f\"Flow run '{flow_run.id}' infrastructure {infrastructure.type!r} \"\n                \"does not support killing created infrastructure. \"\n                \"Cancellation cannot be guaranteed.\"\n            )\n            return\n\n        self.logger.info(\n            f\"Killing {infrastructure.type} {flow_run.infrastructure_pid} for flow run \"\n            f\"'{flow_run.id}'...\"\n        )\n        try:\n            await infrastructure.kill(flow_run.infrastructure_pid)\n        except InfrastructureNotFound as exc:\n            self.logger.warning(f\"{exc} Marking flow run as cancelled.\")\n            await self._mark_flow_run_as_cancelled(flow_run)\n        except InfrastructureNotAvailable as exc:\n            self.logger.warning(f\"{exc} Flow run cannot be cancelled by this agent.\")\n        except Exception:\n            self.logger.exception(\n                \"Encountered exception while killing infrastructure for flow run \"\n                f\"'{flow_run.id}'. Flow run may not be cancelled.\"\n            )\n            # We will try again on generic exceptions\n            self.cancelling_flow_run_ids.remove(flow_run.id)\n            return\n        else:\n            await self._mark_flow_run_as_cancelled(flow_run)\n            self.logger.info(f\"Cancelled flow run '{flow_run.id}'!\")\n\n    async def _mark_flow_run_as_cancelled(\n        self, flow_run: FlowRun, state_updates: Optional[dict] = None\n    ) -> None:\n        state_updates = state_updates or {}\n        state_updates.setdefault(\"name\", \"Cancelled\")\n        state_updates.setdefault(\"type\", StateType.CANCELLED)\n        state = flow_run.state.copy(update=state_updates)\n\n        await self.client.set_flow_run_state(flow_run.id, state, force=True)\n\n        # Do not remove the flow run from the cancelling set immediately because\n        # the API caches responses for the `read_flow_runs` and we do not want to\n        # duplicate cancellations.\n        await self._schedule_task(\n            60 * 10, self.cancelling_flow_run_ids.remove, flow_run.id\n        )\n\n    async def get_infrastructure(self, flow_run: FlowRun) -> Infrastructure:\n        deployment = await self.client.read_deployment(flow_run.deployment_id)\n\n        flow = await self.client.read_flow(deployment.flow_id)\n\n        # overrides only apply when configuring known infra blocks\n        if not deployment.infrastructure_document_id:\n            if self.default_infrastructure:\n                infra_block = self.default_infrastructure\n            else:\n                infra_document = await self.client.read_block_document(\n                    self.default_infrastructure_document_id\n                )\n                infra_block = Block._from_block_document(infra_document)\n\n            # Add flow run metadata to the infrastructure\n            prepared_infrastructure = infra_block.prepare_for_flow_run(\n                flow_run, deployment=deployment, flow=flow\n            )\n            return prepared_infrastructure\n\n        ## get infra\n        infra_document = await self.client.read_block_document(\n            deployment.infrastructure_document_id\n        )\n\n     
   # this piece of logic applies any overrides that may have been set on the\n        # deployment; overrides are defined as dot.delimited paths on possibly nested\n        # attributes of the infrastructure block\n        doc_dict = infra_document.dict()\n        infra_dict = doc_dict.get(\"data\", {})\n        for override, value in (deployment.infra_overrides or {}).items():\n            nested_fields = override.split(\".\")\n            data = infra_dict\n            for field in nested_fields[:-1]:\n                data = data[field]\n\n            # once we reach the end, set the value\n            data[nested_fields[-1]] = value\n\n        # reconstruct the infra block\n        doc_dict[\"data\"] = infra_dict\n        infra_document = BlockDocument(**doc_dict)\n        infrastructure_block = Block._from_block_document(infra_document)\n\n        # TODO: Here the agent may update the infrastructure with agent-level settings\n\n        # Add flow run metadata to the infrastructure\n        prepared_infrastructure = infrastructure_block.prepare_for_flow_run(\n            flow_run, deployment=deployment, flow=flow\n        )\n\n        return prepared_infrastructure\n\n    async def submit_run(self, flow_run: FlowRun) -> None:\n\"\"\"\n        Submit a flow run to the infrastructure\n        \"\"\"\n        ready_to_submit = await self._propose_pending_state(flow_run)\n\n        if ready_to_submit:\n            try:\n                infrastructure = await self.get_infrastructure(flow_run)\n            except Exception as exc:\n                self.logger.exception(\n                    f\"Failed to get infrastructure for flow run '{flow_run.id}'.\"\n                )\n                await self._propose_failed_state(flow_run, exc)\n                if self.limiter:\n                    self.limiter.release_on_behalf_of(flow_run.id)\n            else:\n                # Wait for submission to be completed. Note that the submission function\n                # may continue to run in the background after this exits.\n                readiness_result = await self.task_group.start(\n                    self._submit_run_and_capture_errors, flow_run, infrastructure\n                )\n\n                if readiness_result and not isinstance(readiness_result, Exception):\n                    try:\n                        await self.client.update_flow_run(\n                            flow_run_id=flow_run.id,\n                            infrastructure_pid=str(readiness_result),\n                        )\n                    except Exception:\n                        self.logger.exception(\n                            \"An error occured while setting the `infrastructure_pid`\"\n                            f\" on flow run {flow_run.id!r}. 
The flow run will not be\"\n                            \" cancellable.\"\n                        )\n\n                self.logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n\n        else:\n            # If the run is not ready to submit, release the concurrency slot\n            if self.limiter:\n                self.limiter.release_on_behalf_of(flow_run.id)\n\n        self.submitting_flow_run_ids.remove(flow_run.id)\n\n    async def _submit_run_and_capture_errors(\n        self,\n        flow_run: FlowRun,\n        infrastructure: Infrastructure,\n        task_status: anyio.abc.TaskStatus = None,\n    ) -> Union[InfrastructureResult, Exception]:\n        # Note: There is not a clear way to determine if task_status.started() has been\n        #       called without peeking at the internal `_future`. Ideally we could just\n        #       check if the flow run id has been removed from `submitting_flow_run_ids`\n        #       but it is not so simple to guarantee that this coroutine yields back\n        #       to `submit_run` to execute that line when exceptions are raised during\n        #       submission.\n        try:\n            result = await infrastructure.run(task_status=task_status)\n        except Exception as exc:\n            if not task_status._future.done():\n                # This flow run was being submitted and did not start successfully\n                self.logger.exception(\n                    f\"Failed to submit flow run '{flow_run.id}' to infrastructure.\"\n                )\n                # Mark the task as started to prevent agent crash\n                task_status.started(exc)\n                await self._propose_crashed_state(\n                    flow_run, \"Flow run could not be submitted to infrastructure\"\n                )\n            else:\n                self.logger.exception(\n                    f\"An error occured while monitoring flow run '{flow_run.id}'. \"\n                    \"The flow run will not be marked as failed, but an issue may have \"\n                    \"occurred.\"\n                )\n            return exc\n        finally:\n            if self.limiter:\n                self.limiter.release_on_behalf_of(flow_run.id)\n\n        if not task_status._future.done():\n            self.logger.error(\n                f\"Infrastructure returned without reporting flow run '{flow_run.id}' \"\n                \"as started or raising an error. This behavior is not expected and \"\n                \"generally indicates improper implementation of infrastructure. The \"\n                \"flow run will not be marked as failed, but an issue may have occurred.\"\n            )\n            # Mark the task as started to prevent agent crash\n            task_status.started()\n\n        if result.status_code != 0:\n            await self._propose_crashed_state(\n                flow_run,\n                (\n                    \"Flow run infrastructure exited with non-zero status code\"\n                    f\" {result.status_code}.\"\n                ),\n            )\n\n        return result\n\n    async def _propose_pending_state(self, flow_run: FlowRun) -> bool:\n        state = flow_run.state\n        try:\n            state = await propose_state(self.client, Pending(), flow_run_id=flow_run.id)\n        except Abort as exc:\n            self.logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}'. 
\"\n                    f\"Server sent an abort signal: {exc}\"\n                ),\n            )\n            return False\n        except Exception:\n            self.logger.error(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n                exc_info=True,\n            )\n            return False\n\n        if not state.is_pending():\n            self.logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}': \"\n                    f\"Server returned a non-pending state {state.type.value!r}\"\n                ),\n            )\n            return False\n\n        return True\n\n    async def _propose_failed_state(self, flow_run: FlowRun, exc: Exception) -> None:\n        try:\n            await propose_state(\n                self.client,\n                await exception_to_failed_state(message=\"Submission failed.\", exc=exc),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # We've already failed, no need to note the abort but we don't want it to\n            # raise in the agent process\n            pass\n        except Exception:\n            self.logger.error(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n                exc_info=True,\n            )\n\n    async def _propose_crashed_state(self, flow_run: FlowRun, message: str) -> None:\n        try:\n            state = await propose_state(\n                self.client,\n                Crashed(message=message),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # Flow run already marked as failed\n            pass\n        except Exception:\n            self.logger.exception(f\"Failed to update state of flow run '{flow_run.id}'\")\n        else:\n            if state.is_crashed():\n                self.logger.info(\n                    f\"Reported flow run '{flow_run.id}' as crashed: {message}\"\n                )\n\n    async def _schedule_task(self, __in_seconds: int, fn, *args, **kwargs):\n\"\"\"\n        Schedule a background task to start after some time.\n\n        These tasks will be run immediately when the agent exits instead of waiting.\n\n        The function may be async or sync. 
Async functions will be awaited.\n        \"\"\"\n\n        async def wrapper(task_status):\n            # If we are shutting down, do not sleep; otherwise sleep until the scheduled\n            # time or shutdown\n            if self.started:\n                with anyio.CancelScope() as scope:\n                    self.scheduled_task_scopes.add(scope)\n                    task_status.started()\n                    await anyio.sleep(__in_seconds)\n\n                self.scheduled_task_scopes.remove(scope)\n            else:\n                task_status.started()\n\n            result = fn(*args, **kwargs)\n            if inspect.iscoroutine(result):\n                await result\n\n        await self.task_group.start(wrapper)\n\n    # Context management ---------------------------------------------------------------\n\n    async def start(self):\n        self.started = True\n        self.task_group = anyio.create_task_group()\n        self.limiter = (\n            anyio.CapacityLimiter(self.limit) if self.limit is not None else None\n        )\n        self.client = get_client()\n        await self.client.__aenter__()\n        await self.task_group.__aenter__()\n\n    async def shutdown(self, *exc_info):\n        self.started = False\n        # We must cancel scheduled task scopes before closing the task group\n        for scope in self.scheduled_task_scopes:\n            scope.cancel()\n        await self.task_group.__aexit__(*exc_info)\n        await self.client.__aexit__(*exc_info)\n        self.task_group = None\n        self.client = None\n        self.submitting_flow_run_ids.clear()\n        self.cancelling_flow_run_ids.clear()\n        self.scheduled_task_scopes.clear()\n        self._work_queue_cache_expiration = None\n        self._work_queue_cache = []\n\n    async def __aenter__(self):\n        await self.start()\n        return self\n\n    async def __aexit__(self, *exc_info):\n        await self.shutdown(*exc_info)\n
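
A minimal usage sketch based on the class shown above, assuming a work queue named default already exists; in practice the prefect agent start CLI runs this polling loop for you:

import anyio\nfrom prefect.agent import PrefectAgent\n\nasync def poll_once():\n    # entering the agent opens its client and task group; exiting shuts them down\n    async with PrefectAgent(work_queues=[\"default\"]) as agent:\n        submitted = await agent.get_and_submit_flow_runs()\n        print(f\"submitted {len(submitted)} flow run(s)\")\n\nanyio.run(poll_once)\n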
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.cancel_run","title":"cancel_run async","text":"

Cancel a flow run by killing its infrastructure

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/agent.py
async def cancel_run(self, flow_run: FlowRun) -> None:\n\"\"\"\n    Cancel a flow run by killing its infrastructure\n    \"\"\"\n    if not flow_run.infrastructure_pid:\n        self.logger.error(\n            f\"Flow run '{flow_run.id}' does not have an infrastructure pid\"\n            \" attached. Cancellation cannot be guaranteed.\"\n        )\n        await self._mark_flow_run_as_cancelled(\n            flow_run,\n            state_updates={\n                \"message\": (\n                    \"This flow run is missing infrastructure tracking information\"\n                    \" and cancellation cannot be guaranteed.\"\n                )\n            },\n        )\n        return\n\n    try:\n        infrastructure = await self.get_infrastructure(flow_run)\n    except Exception:\n        self.logger.exception(\n            f\"Failed to get infrastructure for flow run '{flow_run.id}'. \"\n            \"Flow run cannot be cancelled.\"\n        )\n        # Note: We leave this flow run in the cancelling set because it cannot be\n        #       cancelled and this will prevent additional attempts.\n        return\n\n    if not hasattr(infrastructure, \"kill\"):\n        self.logger.error(\n            f\"Flow run '{flow_run.id}' infrastructure {infrastructure.type!r} \"\n            \"does not support killing created infrastructure. \"\n            \"Cancellation cannot be guaranteed.\"\n        )\n        return\n\n    self.logger.info(\n        f\"Killing {infrastructure.type} {flow_run.infrastructure_pid} for flow run \"\n        f\"'{flow_run.id}'...\"\n    )\n    try:\n        await infrastructure.kill(flow_run.infrastructure_pid)\n    except InfrastructureNotFound as exc:\n        self.logger.warning(f\"{exc} Marking flow run as cancelled.\")\n        await self._mark_flow_run_as_cancelled(flow_run)\n    except InfrastructureNotAvailable as exc:\n        self.logger.warning(f\"{exc} Flow run cannot be cancelled by this agent.\")\n    except Exception:\n        self.logger.exception(\n            \"Encountered exception while killing infrastructure for flow run \"\n            f\"'{flow_run.id}'. Flow run may not be cancelled.\"\n        )\n        # We will try again on generic exceptions\n        self.cancelling_flow_run_ids.remove(flow_run.id)\n        return\n    else:\n        await self._mark_flow_run_as_cancelled(flow_run)\n        self.logger.info(f\"Cancelled flow run '{flow_run.id}'!\")\n
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.get_and_submit_flow_runs","title":"get_and_submit_flow_runs async","text":"

The principal method on agents. Queries for scheduled flow runs and submits them for execution in parallel.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/agent.py
async def get_and_submit_flow_runs(self) -> List[FlowRun]:\n\"\"\"\n    The principle method on agents. Queries for scheduled flow runs and submits\n    them for execution in parallel.\n    \"\"\"\n    if not self.started:\n        raise RuntimeError(\n            \"Agent is not started. Use `async with PrefectAgent()...`\"\n        )\n\n    self.logger.debug(\"Checking for scheduled flow runs...\")\n\n    before = pendulum.now(\"utc\").add(\n        seconds=self.prefetch_seconds or PREFECT_AGENT_PREFETCH_SECONDS.value()\n    )\n\n    submittable_runs: List[FlowRun] = []\n\n    if self.work_pool_name:\n        responses = await self.client.get_scheduled_flow_runs_for_work_pool(\n            work_pool_name=self.work_pool_name,\n            work_queue_names=[wq.name async for wq in self.get_work_queues()],\n            scheduled_before=before,\n        )\n        submittable_runs.extend([response.flow_run for response in responses])\n\n    else:\n        # load runs from each work queue\n        async for work_queue in self.get_work_queues():\n            # print a nice message if the work queue is paused\n            if work_queue.is_paused:\n                self.logger.info(\n                    f\"Work queue {work_queue.name!r} ({work_queue.id}) is paused.\"\n                )\n\n            else:\n                try:\n                    queue_runs = await self.client.get_runs_in_work_queue(\n                        id=work_queue.id, limit=10, scheduled_before=before\n                    )\n                    submittable_runs.extend(queue_runs)\n                except ObjectNotFound:\n                    self.logger.error(\n                        f\"Work queue {work_queue.name!r} ({work_queue.id}) not\"\n                        \" found.\"\n                    )\n                except Exception as exc:\n                    self.logger.exception(exc)\n\n        submittable_runs.sort(key=lambda run: run.next_scheduled_start_time)\n\n    for flow_run in submittable_runs:\n        # don't resubmit a run\n        if flow_run.id in self.submitting_flow_run_ids:\n            continue\n\n        try:\n            if self.limiter:\n                self.limiter.acquire_on_behalf_of_nowait(flow_run.id)\n        except anyio.WouldBlock:\n            self.logger.info(\n                f\"Flow run limit reached; {self.limiter.borrowed_tokens} flow runs\"\n                \" in progress.\"\n            )\n            break\n        else:\n            self.logger.info(f\"Submitting flow run '{flow_run.id}'\")\n            self.submitting_flow_run_ids.add(flow_run.id)\n            self.task_group.start_soon(\n                self.submit_run,\n                flow_run,\n            )\n\n    return list(\n        filter(lambda run: run.id in self.submitting_flow_run_ids, submittable_runs)\n    )\n
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.get_work_queues","title":"get_work_queues async","text":"

Loads the work queue objects corresponding to the agent's target work queues. If any of them don't exist, they are created.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/agent.py
async def get_work_queues(self) -> AsyncIterator[WorkQueue]:\n\"\"\"\n    Loads the work queue objects corresponding to the agent's target work\n    queues. If any of them don't exist, they are created.\n    \"\"\"\n\n    # if the queue cache has not expired, yield queues from the cache\n    now = pendulum.now(\"UTC\")\n    if (self._work_queue_cache_expiration or now) > now:\n        for queue in self._work_queue_cache:\n            yield queue\n        return\n\n    # otherwise clear the cache, set the expiration for 30 seconds, and\n    # reload the work queues\n    self._work_queue_cache.clear()\n    self._work_queue_cache_expiration = now.add(seconds=30)\n\n    await self.update_matched_agent_work_queues()\n\n    for name in self.work_queues:\n        try:\n            work_queue = await self.client.read_work_queue_by_name(\n                work_pool_name=self.work_pool_name, name=name\n            )\n        except ObjectNotFound:\n            # if the work queue wasn't found, create it\n            if not self.work_queue_prefix:\n                # do not attempt to create work queues if the agent is polling for\n                # queues using a regex\n                try:\n                    work_queue = await self.client.create_work_queue(\n                        work_pool_name=self.work_pool_name, name=name\n                    )\n                    if self.work_pool_name:\n                        self.logger.info(\n                            f\"Created work queue {name!r} in work pool\"\n                            f\" {self.work_pool_name!r}.\"\n                        )\n                    else:\n                        self.logger.info(f\"Created work queue '{name}'.\")\n\n                # if creating it raises an exception, it was probably just\n                # created by some other agent; rather than entering a re-read\n                # loop with new error handling, we log the exception and\n                # continue.\n                except Exception:\n                    self.logger.exception(f\"Failed to create work queue {name!r}.\")\n                    continue\n\n        self._work_queue_cache.append(work_queue)\n        yield work_queue\n
","tags":["Python API","agents"]},{"location":"api-ref/prefect/agent/#prefect.agent.PrefectAgent.submit_run","title":"submit_run async","text":"

Submit a flow run to the infrastructure

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/agent.py
async def submit_run(self, flow_run: FlowRun) -> None:\n\"\"\"\n    Submit a flow run to the infrastructure\n    \"\"\"\n    ready_to_submit = await self._propose_pending_state(flow_run)\n\n    if ready_to_submit:\n        try:\n            infrastructure = await self.get_infrastructure(flow_run)\n        except Exception as exc:\n            self.logger.exception(\n                f\"Failed to get infrastructure for flow run '{flow_run.id}'.\"\n            )\n            await self._propose_failed_state(flow_run, exc)\n            if self.limiter:\n                self.limiter.release_on_behalf_of(flow_run.id)\n        else:\n            # Wait for submission to be completed. Note that the submission function\n            # may continue to run in the background after this exits.\n            readiness_result = await self.task_group.start(\n                self._submit_run_and_capture_errors, flow_run, infrastructure\n            )\n\n            if readiness_result and not isinstance(readiness_result, Exception):\n                try:\n                    await self.client.update_flow_run(\n                        flow_run_id=flow_run.id,\n                        infrastructure_pid=str(readiness_result),\n                    )\n                except Exception:\n                    self.logger.exception(\n                        \"An error occured while setting the `infrastructure_pid`\"\n                        f\" on flow run {flow_run.id!r}. The flow run will not be\"\n                        \" cancellable.\"\n                    )\n\n            self.logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n\n    else:\n        # If the run is not ready to submit, release the concurrency slot\n        if self.limiter:\n            self.limiter.release_on_behalf_of(flow_run.id)\n\n    self.submitting_flow_run_ids.remove(flow_run.id)\n
","tags":["Python API","agents"]},{"location":"api-ref/prefect/artifacts/","title":"prefect.artifacts","text":"","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts","title":"prefect.artifacts","text":"

Interface for creating and reading artifacts.

","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.create_link_artifact","title":"create_link_artifact async","text":"

Create a link artifact.

Parameters:

link (str, required): The link to create.
link_text (Optional[str], default None): The link text.
key (Optional[str], default None): A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes.
description (Optional[str], default None): A user-specified description of the artifact.

Returns:

UUID: The link artifact ID.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/artifacts.py
@sync_compatible\nasync def create_link_artifact(\n    link: str,\n    link_text: Optional[str] = None,\n    key: Optional[str] = None,\n    description: Optional[str] = None,\n) -> UUID:\n\"\"\"\n    Create a link artifact.\n\n    Arguments:\n        link: The link to create.\n        link_text: The link text.\n        key: A user-provided string identifier.\n          Required for the artifact to show in the Artifacts page in the UI.\n          The key must only contain lowercase letters, numbers, and dashes.\n        description: A user-specified description of the artifact.\n\n\n    Returns:\n        The table artifact ID.\n    \"\"\"\n    formatted_link = f\"[{link_text}]({link})\" if link_text else f\"[{link}]({link})\"\n    artifact = await _create_artifact(\n        key=key,\n        type=\"markdown\",\n        description=description,\n        data=formatted_link,\n    )\n\n    return artifact.id\n
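
A short usage sketch (the key, URL, and flow name are illustrative); because the function is sync-compatible, it can be called directly inside a synchronous flow:

from prefect import flow\nfrom prefect.artifacts import create_link_artifact\n\n@flow\ndef report_flow():\n    create_link_artifact(\n        key=\"nightly-report\",  # lowercase letters, numbers, and dashes only\n        link=\"https://example.com/reports/latest\",\n        link_text=\"Latest nightly report\",\n        description=\"Link to the generated report\",\n    )\n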
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.create_markdown_artifact","title":"create_markdown_artifact async","text":"

Create a markdown artifact.

Parameters:

markdown (str, required): The markdown to create.
key (Optional[str], default None): A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes.
description (Optional[str], default None): A user-specified description of the artifact.

Returns:

UUID: The markdown artifact ID.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/artifacts.py
@sync_compatible\nasync def create_markdown_artifact(\n    markdown: str,\n    key: Optional[str] = None,\n    description: Optional[str] = None,\n) -> UUID:\n\"\"\"\n    Create a markdown artifact.\n\n    Arguments:\n        markdown: The markdown to create.\n        key: A user-provided string identifier.\n          Required for the artifact to show in the Artifacts page in the UI.\n          The key must only contain lowercase letters, numbers, and dashes.\n        description: A user-specified description of the artifact.\n\n    Returns:\n        The table artifact ID.\n    \"\"\"\n    artifact = await _create_artifact(\n        key=key,\n        type=\"markdown\",\n        description=description,\n        data=markdown,\n    )\n\n    return artifact.id\n
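A short sketch of publishing a markdown artifact from a flow; the markdown content and key are illustrative.

```python
from prefect import flow
from prefect.artifacts import create_markdown_artifact


@flow
def summarize():
    # Hypothetical summary content
    create_markdown_artifact(
        markdown="# Data quality\n\n- rows processed: 1200\n- rows rejected: 3",
        key="data-quality-summary",
        description="Markdown summary of the latest run",
    )


if __name__ == "__main__":
    summarize()
```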
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/artifacts/#prefect.artifacts.create_table_artifact","title":"create_table_artifact async","text":"

Create a table artifact.

Parameters:

- `table` (`Union[Dict[str, List[Any]], List[Dict[str, Any]], List[List[Any]]]`, required): The table to create.
- `key` (`Optional[str]`, default `None`): A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes.
- `description` (`Optional[str]`, default `None`): A user-specified description of the artifact.

Returns:

- `UUID`: The table artifact ID.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/artifacts.py
@sync_compatible\nasync def create_table_artifact(\n    table: Union[Dict[str, List[Any]], List[Dict[str, Any]], List[List[Any]]],\n    key: Optional[str] = None,\n    description: Optional[str] = None,\n) -> UUID:\n\"\"\"\n    Create a table artifact.\n\n    Arguments:\n        table: The table to create.\n        key: A user-provided string identifier.\n          Required for the artifact to show in the Artifacts page in the UI.\n          The key must only contain lowercase letters, numbers, and dashes.\n        description: A user-specified description of the artifact.\n\n    Returns:\n        The table artifact ID.\n    \"\"\"\n\n    def _sanitize_nan_values(item):\n\"\"\"\n        Sanitize NaN values in a given item. The item can be a dict, list or float.\n        \"\"\"\n\n        if isinstance(item, list):\n            return [_sanitize_nan_values(sub_item) for sub_item in item]\n\n        elif isinstance(item, dict):\n            return {k: _sanitize_nan_values(v) for k, v in item.items()}\n\n        elif isinstance(item, float) and math.isnan(item):\n            return None\n\n        else:\n            return item\n\n    sanitized_table = _sanitize_nan_values(table)\n\n    if isinstance(table, dict) and all(isinstance(v, list) for v in table.values()):\n        pass\n    elif isinstance(table, list) and all(isinstance(v, (list, dict)) for v in table):\n        pass\n    else:\n        raise TypeError(INVALID_TABLE_TYPE_ERROR)\n\n    formatted_table = json.dumps(sanitized_table)\n\n    artifact = await _create_artifact(\n        key=key,\n        type=\"table\",\n        description=description,\n        data=formatted_table,\n    )\n\n    return artifact.id\n
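A minimal sketch using one of the accepted table shapes (a list of row dictionaries); the data and key are illustrative.

```python
from prefect import flow
from prefect.artifacts import create_table_artifact


@flow
def publish_metrics():
    # A list of row dicts is one of the accepted table shapes
    rows = [
        {"customer": "acme", "orders": 12},
        {"customer": "globex", "orders": 7},
    ]
    create_table_artifact(
        table=rows,
        key="orders-by-customer",
        description="Order counts per customer",
    )


if __name__ == "__main__":
    publish_metrics()
```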
","tags":["Python API","artifacts"]},{"location":"api-ref/prefect/context/","title":"prefect.context","text":"","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context","title":"prefect.context","text":"

Async and thread safe models for passing runtime context data.

These contexts should never be directly mutated by the user.

For more user-accessible information about the current run, see prefect.runtime.

","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.ContextModel","title":"ContextModel","text":"

Bases: BaseModel

A base model for context data that forbids mutation and extra data while providing a context manager

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
class ContextModel(BaseModel):\n\"\"\"\n    A base model for context data that forbids mutation and extra data while providing\n    a context manager\n    \"\"\"\n\n    # The context variable for storing data must be defined by the child class\n    __var__: ContextVar\n    _token: Token = PrivateAttr(None)\n\n    class Config:\n        allow_mutation = False\n        arbitrary_types_allowed = True\n        extra = \"forbid\"\n\n    def __enter__(self):\n        if self._token is not None:\n            raise RuntimeError(\n                \"Context already entered. Context enter calls cannot be nested.\"\n            )\n        self._token = self.__var__.set(self)\n        return self\n\n    def __exit__(self, *_):\n        if not self._token:\n            raise RuntimeError(\n                \"Asymmetric use of context. Context exit called without an enter.\"\n            )\n        self.__var__.reset(self._token)\n        self._token = None\n\n    @classmethod\n    def get(cls: Type[T]) -> Optional[T]:\n        return cls.__var__.get(None)\n\n    def copy(self, **kwargs):\n\"\"\"\n        Duplicate the context model, optionally choosing which fields to include, exclude, or change.\n\n        Attributes:\n            include: Fields to include in new model.\n            exclude: Fields to exclude from new model, as with values this takes precedence over include.\n            update: Values to change/add in the new model. Note: the data is not validated before creating\n                the new model - you should trust this data.\n            deep: Set to `True` to make a deep copy of the model.\n\n        Returns:\n            A new model instance.\n        \"\"\"\n        # Remove the token on copy to avoid re-entrance errors\n        new = super().copy(**kwargs)\n        new._token = None\n        return new\n
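A rough sketch of how subclasses of `ContextModel` are used; the `MyContext` class and its field are hypothetical. A child class supplies its own `__var__` context variable, is entered as a context manager, and can be retrieved anywhere below that point with `.get()`.

```python
from contextvars import ContextVar

from prefect.context import ContextModel


class MyContext(ContextModel):
    # Hypothetical subclass; each child class must define its own ContextVar
    name: str = "default"
    __var__ = ContextVar("my_context")


with MyContext(name="example"):
    assert MyContext.get().name == "example"

# Outside the block no context is active, so get() returns None
assert MyContext.get() is None
```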
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.ContextModel.copy","title":"copy","text":"

Duplicate the context model, optionally choosing which fields to include, exclude, or change.

Attributes:

- `include`: Fields to include in new model.
- `exclude`: Fields to exclude from new model, as with values this takes precedence over include.
- `update`: Values to change/add in the new model. Note: the data is not validated before creating the new model - you should trust this data.
- `deep`: Set to `True` to make a deep copy of the model.

Returns:

- A new model instance.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
def copy(self, **kwargs):\n\"\"\"\n    Duplicate the context model, optionally choosing which fields to include, exclude, or change.\n\n    Attributes:\n        include: Fields to include in new model.\n        exclude: Fields to exclude from new model, as with values this takes precedence over include.\n        update: Values to change/add in the new model. Note: the data is not validated before creating\n            the new model - you should trust this data.\n        deep: Set to `True` to make a deep copy of the model.\n\n    Returns:\n        A new model instance.\n    \"\"\"\n    # Remove the token on copy to avoid re-entrance errors\n    new = super().copy(**kwargs)\n    new._token = None\n    return new\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.FlowRunContext","title":"FlowRunContext","text":"

Bases: RunContext

The context for a flow run. Data in this context is only available from within a flow run function.

Attributes:

- `flow` (`Flow`): The flow instance associated with the run
- `flow_run` (`FlowRun`): The API metadata for the flow run
- `task_runner` (`BaseTaskRunner`): The task runner instance being used for the flow run
- `task_run_futures` (`List[PrefectFuture]`): A list of futures for task runs submitted within this flow run
- `task_run_states` (`List[State]`): A list of states for task runs created within this flow run
- `task_run_results` (`Dict[int, State]`): A mapping of result ids to task run states for this flow run
- `flow_run_states` (`List[State]`): A list of states for flow runs created within this flow run
- `sync_portal` (`Optional[anyio.abc.BlockingPortal]`): A blocking portal for sync task/flow runs in an async flow
- `timeout_scope` (`Optional[anyio.abc.CancelScope]`): The cancellation scope for flow level timeouts

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
class FlowRunContext(RunContext):\n\"\"\"\n    The context for a flow run. Data in this context is only available from within a\n    flow run function.\n\n    Attributes:\n        flow: The flow instance associated with the run\n        flow_run: The API metadata for the flow run\n        task_runner: The task runner instance being used for the flow run\n        task_run_futures: A list of futures for task runs submitted within this flow run\n        task_run_states: A list of states for task runs created within this flow run\n        task_run_results: A mapping of result ids to task run states for this flow run\n        flow_run_states: A list of states for flow runs created within this flow run\n        sync_portal: A blocking portal for sync task/flow runs in an async flow\n        timeout_scope: The cancellation scope for flow level timeouts\n    \"\"\"\n\n    flow: \"Flow\"\n    flow_run: FlowRun\n    task_runner: BaseTaskRunner\n    log_prints: bool = False\n    parameters: Dict[str, Any]\n\n    # Result handling\n    result_factory: ResultFactory\n\n    # Counter for task calls allowing unique\n    task_run_dynamic_keys: Dict[str, int] = Field(default_factory=dict)\n\n    # Counter for flow pauses\n    observed_flow_pauses: Dict[str, int] = Field(default_factory=dict)\n\n    # Tracking for objects created by this flow run\n    task_run_futures: List[PrefectFuture] = Field(default_factory=list)\n    task_run_states: List[State] = Field(default_factory=list)\n    task_run_results: Dict[int, State] = Field(default_factory=dict)\n    flow_run_states: List[State] = Field(default_factory=list)\n\n    # The synchronous portal is only created for async flows for creating engine calls\n    # from synchronous task and subflow calls\n    sync_portal: Optional[anyio.abc.BlockingPortal] = None\n    timeout_scope: Optional[anyio.abc.CancelScope] = None\n\n    # Task group that can be used for background tasks during the flow run\n    background_tasks: anyio.abc.TaskGroup\n\n    # Events worker to emit events to Prefect Cloud\n    events: Optional[EventsWorker] = None\n\n    __var__ = ContextVar(\"flow_run\")\n
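A small sketch of reading the flow run context from inside a flow. It assumes a flow run is active; outside of one, `FlowRunContext.get()` returns `None`.

```python
from prefect import flow
from prefect.context import FlowRunContext


@flow
def inspect_context():
    ctx = FlowRunContext.get()
    print(f"flow name: {ctx.flow.name}")
    print(f"flow run id: {ctx.flow_run.id}")
    print(f"task runner: {type(ctx.task_runner).__name__}")


if __name__ == "__main__":
    inspect_context()
```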
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.PrefectObjectRegistry","title":"PrefectObjectRegistry","text":"

Bases: ContextModel

A context that acts as a registry for all Prefect objects that are registered during load and execution.

Attributes:

- `start_time` (`DateTimeTZ`): The time the object registry was created.
- `block_code_execution` (`bool`): If set, flow calls will be ignored.
- `capture_failures` (`bool`): If set, failures during `__init__` will be silenced and tracked.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
class PrefectObjectRegistry(ContextModel):\n\"\"\"\n    A context that acts as a registry for all Prefect objects that are\n    registered during load and execution.\n\n    Attributes:\n        start_time: The time the object registry was created.\n        block_code_execution: If set, flow calls will be ignored.\n        capture_failures: If set, failures during __init__ will be silenced and tracked.\n    \"\"\"\n\n    start_time: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n\n    _instance_registry: Dict[Type[T], List[T]] = PrivateAttr(\n        default_factory=lambda: defaultdict(list)\n    )\n\n    # Failures will be a tuple of (exception, instance, args, kwargs)\n    _instance_init_failures: Dict[Type[T], List[Tuple[Exception, T, Tuple, Dict]]] = (\n        PrivateAttr(default_factory=lambda: defaultdict(list))\n    )\n\n    block_code_execution: bool = False\n    capture_failures: bool = False\n\n    __var__ = ContextVar(\"object_registry\")\n\n    def get_instances(self, type_: Type[T]) -> List[T]:\n        instances = []\n        for registered_type, type_instances in self._instance_registry.items():\n            if type_ in registered_type.mro():\n                instances.extend(type_instances)\n        return instances\n\n    def get_instance_failures(\n        self, type_: Type[T]\n    ) -> List[Tuple[Exception, T, Tuple, Dict]]:\n        failures = []\n        for type__ in type_.mro():\n            failures.extend(self._instance_init_failures[type__])\n        return failures\n\n    def register_instance(self, object):\n        # TODO: Consider using a 'Set' to avoid duplicate entries\n        self._instance_registry[type(object)].append(object)\n\n    def register_init_failure(\n        self, exc: Exception, object: Any, init_args: Tuple, init_kwargs: Dict\n    ):\n        self._instance_init_failures[type(object)].append(\n            (exc, object, init_args, init_kwargs)\n        )\n\n    @classmethod\n    def register_instances(cls, type_: Type):\n\"\"\"\n        Decorator for a class that adds registration to the `PrefectObjectRegistry`\n        on initialization of instances.\n        \"\"\"\n        __init__ = type_.__init__\n\n        def __register_init__(__self__, *args, **kwargs):\n            registry = cls.get()\n            try:\n                __init__(__self__, *args, **kwargs)\n            except Exception as exc:\n                if not registry or not registry.capture_failures:\n                    raise\n                else:\n                    registry.register_init_failure(exc, __self__, args, kwargs)\n            else:\n                if registry:\n                    registry.register_instance(__self__)\n\n        type_.__init__ = __register_init__\n        return type_\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.PrefectObjectRegistry.register_instances","title":"register_instances classmethod","text":"

Decorator for a class that adds registration to the PrefectObjectRegistry on initialization of instances.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
@classmethod\ndef register_instances(cls, type_: Type):\n\"\"\"\n    Decorator for a class that adds registration to the `PrefectObjectRegistry`\n    on initialization of instances.\n    \"\"\"\n    __init__ = type_.__init__\n\n    def __register_init__(__self__, *args, **kwargs):\n        registry = cls.get()\n        try:\n            __init__(__self__, *args, **kwargs)\n        except Exception as exc:\n            if not registry or not registry.capture_failures:\n                raise\n            else:\n                registry.register_init_failure(exc, __self__, args, kwargs)\n        else:\n            if registry:\n                registry.register_instance(__self__)\n\n    type_.__init__ = __register_init__\n    return type_\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.RunContext","title":"RunContext","text":"

Bases: ContextModel

The base context for a flow or task run. Data in this context will always be available when get_run_context is called.

Attributes:

- `start_time` (`DateTimeTZ`): The time the run context was entered
- `client` (`PrefectClient`): The Prefect client instance being used for API communication

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
class RunContext(ContextModel):\n\"\"\"\n    The base context for a flow or task run. Data in this context will always be\n    available when `get_run_context` is called.\n\n    Attributes:\n        start_time: The time the run context was entered\n        client: The Prefect client instance being used for API communication\n    \"\"\"\n\n    start_time: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n    client: PrefectClient\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.SettingsContext","title":"SettingsContext","text":"

Bases: ContextModel

The context for Prefect settings.

This allows for safe concurrent access and modification of settings.

Attributes:

- `profile` (`Profile`): The profile that is in use.
- `settings` (`Settings`): The complete settings model.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
class SettingsContext(ContextModel):\n\"\"\"\n    The context for a Prefect settings.\n\n    This allows for safe concurrent access and modification of settings.\n\n    Attributes:\n        profile: The profile that is in use.\n        settings: The complete settings model.\n    \"\"\"\n\n    profile: Profile\n    settings: Settings\n\n    __var__ = ContextVar(\"settings\")\n\n    def __hash__(self) -> int:\n        return hash(self.settings)\n\n    def __enter__(self):\n\"\"\"\n        Upon entrance, we ensure the home directory for the profile exists.\n        \"\"\"\n        return_value = super().__enter__()\n\n        try:\n            prefect_home = Path(self.settings.value_of(PREFECT_HOME))\n            prefect_home.mkdir(mode=0o0700, exist_ok=True)\n        except OSError:\n            warnings.warn(\n                (\n                    \"Failed to create the Prefect home directory at \"\n                    f\"{self.settings.value_of(PREFECT_HOME)}\"\n                ),\n                stacklevel=2,\n            )\n\n        return return_value\n\n    @classmethod\n    def get(cls) -> \"SettingsContext\":\n        # Return the global context instead of `None` if no context exists\n        return super().get() or GLOBAL_SETTINGS_CONTEXT\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.TagsContext","title":"TagsContext","text":"

Bases: ContextModel

The context for prefect.tags management.

Attributes:

- `current_tags` (`Set[str]`): A set of current tags in the context

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
class TagsContext(ContextModel):\n\"\"\"\n    The context for `prefect.tags` management.\n\n    Attributes:\n        current_tags: A set of current tags in the context\n    \"\"\"\n\n    current_tags: Set[str] = Field(default_factory=set)\n\n    @classmethod\n    def get(cls) -> \"TagsContext\":\n        # Return an empty `TagsContext` instead of `None` if no context exists\n        return cls.__var__.get(TagsContext())\n\n    __var__ = ContextVar(\"tags\")\n
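A short sketch showing how the active tag set can be read through the class itself; the tag values are illustrative.

```python
from prefect import tags
from prefect.context import TagsContext

with tags("etl", "daily"):
    # The context manager above pushed a TagsContext with both tags
    print(TagsContext.get().current_tags)  # {"etl", "daily"}

# With no tags context active, an empty TagsContext is returned
print(TagsContext.get().current_tags)  # set()
```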
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.TaskRunContext","title":"TaskRunContext","text":"

Bases: RunContext

The context for a task run. Data in this context is only available from within a task run function.

Attributes:

- `task` (`Task`): The task instance associated with the task run
- `task_run` (`TaskRun`): The API metadata for this task run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
class TaskRunContext(RunContext):\n\"\"\"\n    The context for a task run. Data in this context is only available from within a\n    task run function.\n\n    Attributes:\n        task: The task instance associated with the task run\n        task_run: The API metadata for this task run\n    \"\"\"\n\n    task: \"Task\"\n    task_run: TaskRun\n    log_prints: bool = False\n    parameters: Dict[str, Any]\n\n    # Result handling\n    result_factory: ResultFactory\n\n    __var__ = ContextVar(\"task_run\")\n
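A minimal sketch of reading the task run context from inside a task; outside of a task run, `TaskRunContext.get()` returns `None`.

```python
from prefect import flow, task
from prefect.context import TaskRunContext


@task
def whoami():
    ctx = TaskRunContext.get()
    print(f"task name: {ctx.task.name}")
    print(f"task run id: {ctx.task_run.id}")


@flow
def my_flow():
    whoami()


if __name__ == "__main__":
    my_flow()
```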
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.get_run_context","title":"get_run_context","text":"

Get the current run context from within a task or flow function.

Returns:

- `Union[FlowRunContext, TaskRunContext]`: A `FlowRunContext` or `TaskRunContext` depending on the function type.

Raises:

- `RuntimeError`: If called outside of a flow or task run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
def get_run_context() -> Union[FlowRunContext, TaskRunContext]:\n\"\"\"\n    Get the current run context from within a task or flow function.\n\n    Returns:\n        A `FlowRunContext` or `TaskRunContext` depending on the function type.\n\n    Raises\n        RuntimeError: If called outside of a flow or task run.\n    \"\"\"\n    task_run_ctx = TaskRunContext.get()\n    if task_run_ctx:\n        return task_run_ctx\n\n    flow_run_ctx = FlowRunContext.get()\n    if flow_run_ctx:\n        return flow_run_ctx\n\n    raise MissingContextError(\n        \"No run context available. You are not in a flow or task run context.\"\n    )\n
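A sketch of calling `get_run_context` in both settings; it returns whichever run context is innermost.

```python
from prefect import flow, task
from prefect.context import FlowRunContext, TaskRunContext, get_run_context


@task
def in_task():
    # Inside a task, the task run context takes precedence
    assert isinstance(get_run_context(), TaskRunContext)


@flow
def in_flow():
    # Inside the flow body (outside any task), the flow run context is returned
    assert isinstance(get_run_context(), FlowRunContext)
    in_task()


if __name__ == "__main__":
    in_flow()
```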
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.get_settings_context","title":"get_settings_context","text":"

Get the current settings context which contains profile information and the settings that are being used.

Generally, the settings that are being used are a combination of values from the profile and environment. See prefect.context.use_profile for more details.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
def get_settings_context() -> SettingsContext:\n\"\"\"\n    Get the current settings context which contains profile information and the\n    settings that are being used.\n\n    Generally, the settings that are being used are a combination of values from the\n    profile and environment. See `prefect.context.use_profile` for more details.\n    \"\"\"\n    settings_ctx = SettingsContext.get()\n\n    if not settings_ctx:\n        raise MissingContextError(\"No settings context found.\")\n\n    return settings_ctx\n
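A small sketch of inspecting the active profile and one of its resolved settings; `PREFECT_API_URL` is just an example setting.

```python
from prefect.context import get_settings_context
from prefect.settings import PREFECT_API_URL

ctx = get_settings_context()
print(f"active profile: {ctx.profile.name}")
print(f"api url: {ctx.settings.value_of(PREFECT_API_URL)}")
```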
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.registry_from_script","title":"registry_from_script","text":"

Return a fresh registry with instances populated from execution of a script.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
def registry_from_script(\n    path: str,\n    block_code_execution: bool = True,\n    capture_failures: bool = True,\n) -> PrefectObjectRegistry:\n\"\"\"\n    Return a fresh registry with instances populated from execution of a script.\n    \"\"\"\n    with PrefectObjectRegistry(\n        block_code_execution=block_code_execution,\n        capture_failures=capture_failures,\n    ) as registry:\n        load_script_as_module(path)\n\n    return registry\n
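A sketch of collecting the flows defined in a script without running them; the script path is hypothetical.

```python
from prefect.context import registry_from_script
from prefect.flows import Flow

# Load the script in a registry context; flow calls in the script are blocked
registry = registry_from_script("path/to/my_flows.py")

for loaded_flow in registry.get_instances(Flow):
    print(loaded_flow.name)
```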
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.root_settings_context","title":"root_settings_context","text":"

Return the settings context that will exist as the root context for the module.

The profile to use is determined with the following precedence:

- Command line via 'prefect --profile <name>'
- Environment variable via 'PREFECT_PROFILE'
- Profiles file via the 'active' key

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py

def root_settings_context():\n\"\"\"\n    Return the settings context that will exist as the root context for the module.\n\n    The profile to use is determined with the following precedence\n    - Command line via 'prefect --profile <name>'\n    - Environment variable via 'PREFECT_PROFILE'\n    - Profiles file via the 'active' key\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    active_name = profiles.active_name\n    profile_source = \"in the profiles file\"\n\n    if \"PREFECT_PROFILE\" in os.environ:\n        active_name = os.environ[\"PREFECT_PROFILE\"]\n        profile_source = \"by environment variable\"\n\n    if (\n        sys.argv[0].endswith(\"/prefect\")\n        and len(sys.argv) >= 3\n        and sys.argv[1] == \"--profile\"\n    ):\n        active_name = sys.argv[2]\n        profile_source = \"by command line argument\"\n\n    if active_name not in profiles.names:\n        print(\n            (\n                f\"WARNING: Active profile {active_name!r} set {profile_source} not \"\n                \"found. The default profile will be used instead. \"\n            ),\n            file=sys.stderr,\n        )\n        active_name = \"default\"\n\n    with use_profile(\n        profiles[active_name],\n        # Override environment variables if the profile was set by the CLI\n        override_environment_variables=profile_source == \"by command line argument\",\n    ) as settings_context:\n        return settings_context\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.tags","title":"tags","text":"

Context manager to add tags to flow and task run calls.

Tags are always combined with any existing tags.

Yields:

- `Set[str]`: The current set of tags

Examples:

>>> from prefect import tags, task, flow\n>>> @task\n>>> def my_task():\n>>>     pass\n

Run a task with tags

>>> @flow\n>>> def my_flow():\n>>>     with tags(\"a\", \"b\"):\n>>>         my_task()  # has tags: a, b\n

Run a flow with tags

>>> @flow\n>>> def my_flow():\n>>>     pass\n>>> with tags(\"a\", \"b\"):\n>>>     my_flow()  # has tags: a, b\n

Run a task with nested tag contexts

>>> @flow\n>>> def my_flow():\n>>>     with tags(\"a\", \"b\"):\n>>>         with tags(\"c\", \"d\"):\n>>>             my_task()  # has tags: a, b, c, d\n>>>         my_task()  # has tags: a, b\n

Inspect the current tags

>>> @flow\n>>> def my_flow():\n>>>     with tags(\"c\", \"d\"):\n>>>         with tags(\"e\", \"f\") as current_tags:\n>>>              print(current_tags)\n>>> with tags(\"a\", \"b\"):\n>>>     my_flow()\n{\"a\", \"b\", \"c\", \"d\", \"e\", \"f\"}\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
@contextmanager\ndef tags(*new_tags: str) -> Set[str]:\n\"\"\"\n    Context manager to add tags to flow and task run calls.\n\n    Tags are always combined with any existing tags.\n\n    Yields:\n        The current set of tags\n\n    Examples:\n        >>> from prefect import tags, task, flow\n        >>> @task\n        >>> def my_task():\n        >>>     pass\n\n        Run a task with tags\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     with tags(\"a\", \"b\"):\n        >>>         my_task()  # has tags: a, b\n\n        Run a flow with tags\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     pass\n        >>> with tags(\"a\", \"b\"):\n        >>>     my_flow()  # has tags: a, b\n\n        Run a task with nested tag contexts\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     with tags(\"a\", \"b\"):\n        >>>         with tags(\"c\", \"d\"):\n        >>>             my_task()  # has tags: a, b, c, d\n        >>>         my_task()  # has tags: a, b\n\n        Inspect the current tags\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     with tags(\"c\", \"d\"):\n        >>>         with tags(\"e\", \"f\") as current_tags:\n        >>>              print(current_tags)\n        >>> with tags(\"a\", \"b\"):\n        >>>     my_flow()\n        {\"a\", \"b\", \"c\", \"d\", \"e\", \"f\"}\n    \"\"\"\n    current_tags = TagsContext.get().current_tags\n    new_tags = current_tags.union(new_tags)\n    with TagsContext(current_tags=new_tags):\n        yield new_tags\n
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/context/#prefect.context.use_profile","title":"use_profile","text":"

Switch to a profile for the duration of this context.

Profile contexts are confined to an async context in a single thread.

Parameters:

- `profile` (`Union[Profile, str]`, required): The name of the profile to load or an instance of a Profile.
- `override_environment_variables` (`bool`, default `False`): If set, variables in the profile will take precedence over current environment variables. By default, environment variables will override profile settings.
- `include_current_context` (`bool`, default `True`): If set, the new settings will be constructed with the current settings context as a base. If not set, the settings will be loaded from the environment and defaults.

Yields:

- The created `SettingsContext` object

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/context.py
@contextmanager\ndef use_profile(\n    profile: Union[Profile, str],\n    override_environment_variables: bool = False,\n    include_current_context: bool = True,\n):\n\"\"\"\n    Switch to a profile for the duration of this context.\n\n    Profile contexts are confined to an async context in a single thread.\n\n    Args:\n        profile: The name of the profile to load or an instance of a Profile.\n        override_environment_variable: If set, variables in the profile will take\n            precedence over current environment variables. By default, environment\n            variables will override profile settings.\n        include_current_context: If set, the new settings will be constructed\n            with the current settings context as a base. If not set, the use_base settings\n            will be loaded from the environment and defaults.\n\n    Yields:\n        The created `SettingsContext` object\n    \"\"\"\n    if isinstance(profile, str):\n        profiles = prefect.settings.load_profiles()\n        profile = profiles[profile]\n\n    if not isinstance(profile, Profile):\n        raise TypeError(\n            f\"Unexpected type {type(profile).__name__!r} for `profile`. \"\n            \"Expected 'str' or 'Profile'.\"\n        )\n\n    # Create a copy of the profiles settings as we will mutate it\n    profile_settings = profile.settings.copy()\n\n    existing_context = SettingsContext.get()\n    if existing_context and include_current_context:\n        settings = existing_context.settings\n    else:\n        settings = prefect.settings.get_settings_from_env()\n\n    if not override_environment_variables:\n        for key in os.environ:\n            if key in prefect.settings.SETTING_VARIABLES:\n                profile_settings.pop(prefect.settings.SETTING_VARIABLES[key], None)\n\n    new_settings = settings.copy_with_update(updates=profile_settings)\n\n    with SettingsContext(profile=profile, settings=new_settings) as ctx:\n        yield ctx\n
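A minimal sketch of temporarily switching profiles; the profile name "default" is used for illustration, and `PREFECT_API_URL` is just an example of a setting you might read inside the block.

```python
import prefect.settings
from prefect.context import use_profile

with use_profile("default") as ctx:
    # Inside the block, settings from the "default" profile are active
    print(f"profile: {ctx.profile.name}")
    print(f"api url: {prefect.settings.PREFECT_API_URL.value()}")
```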
","tags":["Python API","flow run context","task run context","context"]},{"location":"api-ref/prefect/engine/","title":"prefect.engine","text":"","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine","title":"prefect.engine","text":"

Client-side execution and orchestration of flows and tasks.

","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine--engine-process-overview","title":"Engine process overview","text":"","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine--flows","title":"Flows","text":"","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine--tasks","title":"Tasks","text":"","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.begin_flow_run","title":"begin_flow_run async","text":"

Begins execution of a flow run; blocks until completion of the flow run

Note that the flow_run contains a parameters attribute which is the serialized parameters sent to the backend while the parameters argument here should be the deserialized and validated dictionary of python objects.

Returns:

- `State`: The final state of the run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
async def begin_flow_run(\n    flow: Flow,\n    flow_run: FlowRun,\n    parameters: Dict[str, Any],\n    client: PrefectClient,\n    user_thread: threading.Thread,\n) -> State:\n\"\"\"\n    Begins execution of a flow run; blocks until completion of the flow run\n\n    - Starts a task runner\n    - Determines the result storage block to use\n    - Orchestrates the flow run (runs the user-function and generates tasks)\n    - Waits for tasks to complete / shutsdown the task runner\n    - Sets a terminal state for the flow run\n\n    Note that the `flow_run` contains a `parameters` attribute which is the serialized\n    parameters sent to the backend while the `parameters` argument here should be the\n    deserialized and validated dictionary of python objects.\n\n    Returns:\n        The final state of the run\n    \"\"\"\n    logger = flow_run_logger(flow_run, flow)\n\n    log_prints = should_log_prints(flow)\n    flow_run_context = PartialModel(FlowRunContext, log_prints=log_prints)\n\n    async with AsyncExitStack() as stack:\n        await stack.enter_async_context(\n            report_flow_run_crashes(flow_run=flow_run, client=client, flow=flow)\n        )\n\n        # Create a task group for background tasks\n        flow_run_context.background_tasks = await stack.enter_async_context(\n            anyio.create_task_group()\n        )\n\n        # If the flow is async, we need to provide a portal so sync tasks can run\n        flow_run_context.sync_portal = (\n            stack.enter_context(start_blocking_portal()) if flow.isasync else None\n        )\n\n        task_runner = flow.task_runner.duplicate()\n        if task_runner is NotImplemented:\n            # Backwards compatibility; will not support concurrent flow runs\n            task_runner = flow.task_runner\n            logger.warning(\n                f\"Task runner {type(task_runner).__name__!r} does not implement the\"\n                \" `duplicate` method and will fail if used for concurrent execution of\"\n                \" the same flow.\"\n            )\n\n        logger.debug(\n            f\"Starting {type(flow.task_runner).__name__!r}; submitted tasks \"\n            f\"will be run {CONCURRENCY_MESSAGES[flow.task_runner.concurrency_type]}...\"\n        )\n\n        flow_run_context.task_runner = await stack.enter_async_context(\n            task_runner.start()\n        )\n\n        flow_run_context.result_factory = await ResultFactory.from_flow(\n            flow, client=client\n        )\n\n        if log_prints:\n            stack.enter_context(patch_print())\n\n        terminal_or_paused_state = await orchestrate_flow_run(\n            flow,\n            flow_run=flow_run,\n            parameters=parameters,\n            wait_for=None,\n            client=client,\n            partial_flow_run_context=flow_run_context,\n            # Orchestration needs to be interruptible if it has a timeout\n            interruptible=flow.timeout_seconds is not None,\n            user_thread=user_thread,\n        )\n\n    if terminal_or_paused_state.is_paused():\n        timeout = terminal_or_paused_state.state_details.pause_timeout\n        msg = \"Currently paused and suspending execution.\"\n        if timeout:\n            msg += f\" Resume before {timeout.to_rfc3339_string()} to finish execution.\"\n        logger.log(level=logging.INFO, msg=msg)\n        await APILogHandler.aflush()\n\n        return terminal_or_paused_state\n    else:\n        terminal_state = terminal_or_paused_state\n\n    # If debugging, use the more 
complete `repr` than the usual `str` description\n    display_state = repr(terminal_state) if PREFECT_DEBUG_MODE else str(terminal_state)\n\n    logger.log(\n        level=logging.INFO if terminal_state.is_completed() else logging.ERROR,\n        msg=f\"Finished in state {display_state}\",\n    )\n\n    # When a \"root\" flow run finishes, flush logs so we do not have to rely on handling\n    # during interpreter shutdown\n    await APILogHandler.aflush()\n\n    return terminal_state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.begin_task_map","title":"begin_task_map async","text":"

Async entrypoint for task mapping

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
async def begin_task_map(\n    task: Task,\n    flow_run_context: FlowRunContext,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n    task_runner: Optional[BaseTaskRunner],\n) -> List[Union[PrefectFuture, Awaitable[PrefectFuture]]]:\n\"\"\"Async entrypoint for task mapping\"\"\"\n    # We need to resolve some futures to map over their data, collect the upstream\n    # links beforehand to retain relationship tracking.\n    task_inputs = {\n        k: await collect_task_run_inputs(v, max_depth=0) for k, v in parameters.items()\n    }\n\n    # Resolve the top-level parameters in order to get mappable data of a known length.\n    # Nested parameters will be resolved in each mapped child where their relationships\n    # will also be tracked.\n    parameters = await resolve_inputs(parameters, max_depth=1)\n\n    # Ensure that any parameters in kwargs are expanded before this check\n    parameters = explode_variadic_parameter(task.fn, parameters)\n\n    iterable_parameters = {}\n    static_parameters = {}\n    annotated_parameters = {}\n    for key, val in parameters.items():\n        if isinstance(val, (allow_failure, quote)):\n            # Unwrap annotated parameters to determine if they are iterable\n            annotated_parameters[key] = val\n            val = val.unwrap()\n\n        if isinstance(val, unmapped):\n            static_parameters[key] = val.value\n        elif isiterable(val):\n            iterable_parameters[key] = list(val)\n        else:\n            static_parameters[key] = val\n\n    if not len(iterable_parameters):\n        raise MappingMissingIterable(\n            \"No iterable parameters were received. Parameters for map must \"\n            f\"include at least one iterable. Parameters: {parameters}\"\n        )\n\n    iterable_parameter_lengths = {\n        key: len(val) for key, val in iterable_parameters.items()\n    }\n    lengths = set(iterable_parameter_lengths.values())\n    if len(lengths) > 1:\n        raise MappingLengthMismatch(\n            \"Received iterable parameters with different lengths. Parameters for map\"\n            f\" must all be the same length. 
Got lengths: {iterable_parameter_lengths}\"\n        )\n\n    map_length = list(lengths)[0]\n\n    task_runs = []\n    for i in range(map_length):\n        call_parameters = {key: value[i] for key, value in iterable_parameters.items()}\n        call_parameters.update({key: value for key, value in static_parameters.items()})\n\n        # Add default values for parameters; these are skipped earlier since they should\n        # not be mapped over\n        for key, value in get_parameter_defaults(task.fn).items():\n            call_parameters.setdefault(key, value)\n\n        # Re-apply annotations to each key again\n        for key, annotation in annotated_parameters.items():\n            call_parameters[key] = annotation.rewrap(call_parameters[key])\n\n        # Collapse any previously exploded kwargs\n        call_parameters = collapse_variadic_parameters(task.fn, call_parameters)\n\n        task_runs.append(\n            partial(\n                get_task_call_return_value,\n                task=task,\n                flow_run_context=flow_run_context,\n                parameters=call_parameters,\n                wait_for=wait_for,\n                return_type=return_type,\n                task_runner=task_runner,\n                extra_task_inputs=task_inputs,\n            )\n        )\n\n    # Maintain the order of the task runs when using the sequential task runner\n    runner = task_runner if task_runner else flow_run_context.task_runner\n    if runner.concurrency_type == TaskConcurrencyType.SEQUENTIAL:\n        return [await task_run() for task_run in task_runs]\n\n    return await gather(*task_runs)\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.begin_task_run","title":"begin_task_run async","text":"

Entrypoint for task run execution.

This function is intended for submission to the task runner.

This method may be called from a worker so we ensure the settings context has been entered. For example, with a runner that is executing tasks in the same event loop, we will likely not enter the context again because the current context already matches:

main thread:
--> Flow called with settings A
--> begin_task_run executes same event loop
--> Profile A matches and is not entered again

However, with execution on a remote environment, we are going to need to ensure the settings for the task run are respected by entering the context:

main thread:
--> Flow called with settings A
--> begin_task_run is scheduled on a remote worker, settings A is serialized

remote worker:
--> Remote worker imports Prefect (may not occur)
--> Global settings is loaded with default settings
--> begin_task_run executes on a different event loop than the flow
--> Current settings is not set or does not match, settings A is entered

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
async def begin_task_run(\n    task: Task,\n    task_run: TaskRun,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    result_factory: ResultFactory,\n    log_prints: bool,\n    settings: prefect.context.SettingsContext,\n):\n\"\"\"\n    Entrypoint for task run execution.\n\n    This function is intended for submission to the task runner.\n\n    This method may be called from a worker so we ensure the settings context has been\n    entered. For example, with a runner that is executing tasks in the same event loop,\n    we will likely not enter the context again because the current context already\n    matches:\n\n    main thread:\n    --> Flow called with settings A\n    --> `begin_task_run` executes same event loop\n    --> Profile A matches and is not entered again\n\n    However, with execution on a remote environment, we are going to need to ensure the\n    settings for the task run are respected by entering the context:\n\n    main thread:\n    --> Flow called with settings A\n    --> `begin_task_run` is scheduled on a remote worker, settings A is serialized\n    remote worker:\n    --> Remote worker imports Prefect (may not occur)\n    --> Global settings is loaded with default settings\n    --> `begin_task_run` executes on a different event loop than the flow\n    --> Current settings is not set or does not match, settings A is entered\n    \"\"\"\n    maybe_flow_run_context = prefect.context.FlowRunContext.get()\n\n    async with AsyncExitStack() as stack:\n        # The settings context may be null on a remote worker so we use the safe `.get`\n        # method and compare it to the settings required for this task run\n        if prefect.context.SettingsContext.get() != settings:\n            stack.enter_context(settings)\n            setup_logging()\n\n        if maybe_flow_run_context:\n            # Accessible if on a worker that is running in the same thread as the flow\n            client = maybe_flow_run_context.client\n            # Only run the task in an interruptible thread if it in the same thread as\n            # the flow _and_ the flow run has a timeout attached. 
If the task is on a\n            # worker, the flow run timeout will not be raised in the worker process.\n            interruptible = maybe_flow_run_context.timeout_scope is not None\n        else:\n            # Otherwise, retrieve a new client\n            client = await stack.enter_async_context(get_client())\n            interruptible = False\n            await stack.enter_async_context(anyio.create_task_group())\n\n        await stack.enter_async_context(report_task_run_crashes(task_run, client))\n\n        # TODO: Use the background tasks group to manage logging for this task\n\n        if log_prints:\n            stack.enter_context(patch_print())\n\n        await check_api_reachable(\n            client, f\"Cannot orchestrate task run '{task_run.id}'\"\n        )\n        try:\n            state = await orchestrate_task_run(\n                task=task,\n                task_run=task_run,\n                parameters=parameters,\n                wait_for=wait_for,\n                result_factory=result_factory,\n                log_prints=log_prints,\n                interruptible=interruptible,\n                client=client,\n            )\n\n            if not maybe_flow_run_context:\n                # When a a task run finishes on a remote worker flush logs to prevent\n                # loss if the process exits\n                await APILogHandler.aflush()\n\n        except Abort as abort:\n            # Task run probably already completed, fetch its state\n            task_run = await client.read_task_run(task_run.id)\n\n            if task_run.state.is_final():\n                task_run_logger(task_run).info(\n                    f\"Task run '{task_run.id}' already finished.\"\n                )\n            else:\n                # TODO: This is a concerning case; we should determine when this occurs\n                #       1. This can occur when the flow run is not in a running state\n                task_run_logger(task_run).warning(\n                    f\"Task run '{task_run.id}' received abort during orchestration: \"\n                    f\"{abort} Task run is in {task_run.state.type.value} state.\"\n                )\n            state = task_run.state\n\n        except Pause:\n            task_run_logger(task_run).info(\n                \"Task run encountered a pause signal during orchestration.\"\n            )\n            state = Paused()\n\n        return state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.collect_task_run_inputs","title":"collect_task_run_inputs async","text":"

This function recurses through an expression to generate a set of any discernable task run inputs it finds in the data structure. It produces a set of all inputs found.

Examples:

>>> task_inputs = {\n>>>    k: await collect_task_run_inputs(v) for k, v in parameters.items()\n>>> }\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
async def collect_task_run_inputs(expr: Any, max_depth: int = -1) -> Set[TaskRunInput]:\n\"\"\"\n    This function recurses through an expression to generate a set of any discernable\n    task run inputs it finds in the data structure. It produces a set of all inputs\n    found.\n\n    Examples:\n        >>> task_inputs = {\n        >>>    k: await collect_task_run_inputs(v) for k, v in parameters.items()\n        >>> }\n    \"\"\"\n    # TODO: This function needs to be updated to detect parameters and constants\n\n    inputs = set()\n    futures = set()\n\n    def add_futures_and_states_to_inputs(obj):\n        if isinstance(obj, PrefectFuture):\n            # We need to wait for futures to be submitted before we can get the task\n            # run id but we want to do so asynchronously\n            futures.add(obj)\n        elif is_state(obj):\n            if obj.state_details.task_run_id:\n                inputs.add(TaskRunResult(id=obj.state_details.task_run_id))\n        else:\n            state = get_state_for_result(obj)\n            if state and state.state_details.task_run_id:\n                inputs.add(TaskRunResult(id=state.state_details.task_run_id))\n\n    visit_collection(\n        expr,\n        visit_fn=add_futures_and_states_to_inputs,\n        return_data=False,\n        max_depth=max_depth,\n    )\n\n    await asyncio.gather(*[future._wait_for_submission() for future in futures])\n    for future in futures:\n        inputs.add(TaskRunResult(id=future.task_run.id))\n\n    return inputs\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.create_and_begin_subflow_run","title":"create_and_begin_subflow_run async","text":"

Async entrypoint for flow calls within a flow run

Subflows differ from parent flows in that they:

- Resolve futures in passed parameters into values
- Create a dummy task for representation in the parent flow
- Retrieve default result storage from the parent flow rather than the server

Returns:

- `Any`: The final state of the run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
@inject_client\nasync def create_and_begin_subflow_run(\n    flow: Flow,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n    client: PrefectClient,\n    user_thread: threading.Thread,\n) -> Any:\n\"\"\"\n    Async entrypoint for flows calls within a flow run\n\n    Subflows differ from parent flows in that they\n    - Resolve futures in passed parameters into values\n    - Create a dummy task for representation in the parent flow\n    - Retrieve default result storage from the parent flow rather than the server\n\n    Returns:\n        The final state of the run\n    \"\"\"\n    parent_flow_run_context = FlowRunContext.get()\n    parent_logger = get_run_logger(parent_flow_run_context)\n    log_prints = should_log_prints(flow)\n    terminal_state = None\n\n    parent_logger.debug(f\"Resolving inputs to {flow.name!r}\")\n    task_inputs = {k: await collect_task_run_inputs(v) for k, v in parameters.items()}\n\n    if wait_for:\n        task_inputs[\"wait_for\"] = await collect_task_run_inputs(wait_for)\n\n    rerunning = parent_flow_run_context.flow_run.run_count > 1\n\n    # Generate a task in the parent flow run to represent the result of the subflow run\n    dummy_task = Task(name=flow.name, fn=flow.fn, version=flow.version)\n    parent_task_run = await client.create_task_run(\n        task=dummy_task,\n        flow_run_id=parent_flow_run_context.flow_run.id,\n        dynamic_key=_dynamic_key_for_task_run(parent_flow_run_context, dummy_task),\n        task_inputs=task_inputs,\n        state=Pending(),\n    )\n\n    # Resolve any task futures in the input\n    parameters = await resolve_inputs(parameters)\n\n    if parent_task_run.state.is_final() and not (\n        rerunning and not parent_task_run.state.is_completed()\n    ):\n        # Retrieve the most recent flow run from the database\n        flow_runs = await client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                parent_task_run_id={\"any_\": [parent_task_run.id]}\n            ),\n            sort=FlowRunSort.EXPECTED_START_TIME_ASC,\n        )\n        flow_run = flow_runs[-1]\n\n        # Set up variables required downstream\n        terminal_state = flow_run.state\n        logger = flow_run_logger(flow_run, flow)\n\n    else:\n        flow_run = await client.create_flow_run(\n            flow,\n            parameters=flow.serialize_parameters(parameters),\n            parent_task_run_id=parent_task_run.id,\n            state=parent_task_run.state if not rerunning else Pending(),\n            tags=TagsContext.get().current_tags,\n        )\n\n        parent_logger.info(\n            f\"Created subflow run {flow_run.name!r} for flow {flow.name!r}\"\n        )\n\n        logger = flow_run_logger(flow_run, flow)\n        ui_url = PREFECT_UI_URL.value()\n        if ui_url:\n            logger.info(\n                f\"View at {ui_url}/flow-runs/flow-run/{flow_run.id}\",\n                extra={\"send_to_api\": False},\n            )\n\n        result_factory = await ResultFactory.from_flow(\n            flow, client=parent_flow_run_context.client\n        )\n\n        if flow.should_validate_parameters:\n            try:\n                parameters = flow.validate_parameters(parameters)\n            except Exception:\n                message = \"Validation of flow parameters failed with error:\"\n                logger.exception(message)\n                terminal_state = await propose_state(\n                    client,\n                
    state=await exception_to_failed_state(\n                        message=message, result_factory=result_factory\n                    ),\n                    flow_run_id=flow_run.id,\n                )\n\n        if terminal_state is None or not terminal_state.is_final():\n            async with AsyncExitStack() as stack:\n                await stack.enter_async_context(\n                    report_flow_run_crashes(flow_run=flow_run, client=client, flow=flow)\n                )\n\n                task_runner = flow.task_runner.duplicate()\n                if task_runner is NotImplemented:\n                    # Backwards compatibility; will not support concurrent flow runs\n                    task_runner = flow.task_runner\n                    logger.warning(\n                        f\"Task runner {type(task_runner).__name__!r} does not implement\"\n                        \" the `duplicate` method and will fail if used for concurrent\"\n                        \" execution of the same flow.\"\n                    )\n\n                await stack.enter_async_context(task_runner.start())\n\n                if log_prints:\n                    stack.enter_context(patch_print())\n\n                terminal_state = await orchestrate_flow_run(\n                    flow,\n                    flow_run=flow_run,\n                    parameters=parameters,\n                    wait_for=wait_for,\n                    # If the parent flow run has a timeout, then this one needs to be\n                    # interruptible as well\n                    interruptible=parent_flow_run_context.timeout_scope is not None,\n                    client=client,\n                    partial_flow_run_context=PartialModel(\n                        FlowRunContext,\n                        sync_portal=parent_flow_run_context.sync_portal,\n                        task_runner=task_runner,\n                        background_tasks=parent_flow_run_context.background_tasks,\n                        result_factory=result_factory,\n                        log_prints=log_prints,\n                    ),\n                    user_thread=user_thread,\n                )\n\n    # Display the full state (including the result) if debugging\n    display_state = repr(terminal_state) if PREFECT_DEBUG_MODE else str(terminal_state)\n    logger.log(\n        level=logging.INFO if terminal_state.is_completed() else logging.ERROR,\n        msg=f\"Finished in state {display_state}\",\n    )\n\n    # Track the subflow state so the parent flow can use it to determine its final state\n    parent_flow_run_context.flow_run_states.append(terminal_state)\n\n    if return_type == \"state\":\n        return terminal_state\n    elif return_type == \"result\":\n        return await terminal_state.result(fetch=True)\n    else:\n        raise ValueError(f\"Invalid return type for flow engine {return_type!r}.\")\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.create_then_begin_flow_run","title":"create_then_begin_flow_run async","text":"

Async entrypoint for flow calls

Creates the flow run in the backend, then enters the main flow run engine.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
@inject_client\nasync def create_then_begin_flow_run(\n    flow: Flow,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n    client: PrefectClient,\n    user_thread: threading.Thread,\n) -> Any:\n\"\"\"\n    Async entrypoint for flow calls\n\n    Creates the flow run in the backend, then enters the main flow run engine.\n    \"\"\"\n    # TODO: Returns a `State` depending on `return_type` and we can add an overload to\n    #       the function signature to clarify this eventually.\n\n    await check_api_reachable(client, \"Cannot create flow run\")\n\n    state = Pending()\n    if flow.should_validate_parameters:\n        try:\n            parameters = flow.validate_parameters(parameters)\n        except Exception:\n            state = await exception_to_failed_state(\n                message=\"Validation of flow parameters failed with error:\"\n            )\n\n    flow_run = await client.create_flow_run(\n        flow,\n        # Send serialized parameters to the backend\n        parameters=flow.serialize_parameters(parameters),\n        state=state,\n        tags=TagsContext.get().current_tags,\n    )\n\n    engine_logger.info(f\"Created flow run {flow_run.name!r} for flow {flow.name!r}\")\n\n    logger = flow_run_logger(flow_run, flow)\n\n    ui_url = PREFECT_UI_URL.value()\n    if ui_url:\n        logger.info(\n            f\"View at {ui_url}/flow-runs/flow-run/{flow_run.id}\",\n            extra={\"send_to_api\": False},\n        )\n\n    if state.is_failed():\n        logger.error(state.message)\n        engine_logger.info(\n            f\"Flow run {flow_run.name!r} received invalid parameters and is marked as\"\n            \" failed.\"\n        )\n    else:\n        state = await begin_flow_run(\n            flow=flow,\n            flow_run=flow_run,\n            parameters=parameters,\n            client=client,\n            user_thread=user_thread,\n        )\n\n    if return_type == \"state\":\n        return state\n    elif return_type == \"result\":\n        return await state.result(fetch=True)\n    else:\n        raise ValueError(f\"Invalid return type for flow engine {return_type!r}.\")\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.enter_flow_run_engine_from_flow_call","title":"enter_flow_run_engine_from_flow_call","text":"

Sync entrypoint for flow calls.

This function does the heavy lifting of ensuring we can get into an async context for flow run execution with minimal overhead.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
def enter_flow_run_engine_from_flow_call(\n    flow: Flow,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n) -> Union[State, Awaitable[State]]:\n\"\"\"\n    Sync entrypoint for flow calls.\n\n    This function does the heavy lifting of ensuring we can get into an async context\n    for flow run execution with minimal overhead.\n    \"\"\"\n    setup_logging()\n\n    registry = PrefectObjectRegistry.get()\n    if registry and registry.block_code_execution:\n        engine_logger.warning(\n            f\"Script loading is in progress, flow {flow.name!r} will not be executed.\"\n            \" Consider updating the script to only call the flow if executed\"\n            f' directly:\\n\\n\\tif __name__ == \"__main__\":\\n\\t\\t{flow.fn.__name__}()'\n        )\n        return None\n\n    if TaskRunContext.get():\n        raise RuntimeError(\n            \"Flows cannot be run from within tasks. Did you mean to call this \"\n            \"flow in a flow?\"\n        )\n\n    parent_flow_run_context = FlowRunContext.get()\n    is_subflow_run = parent_flow_run_context is not None\n\n    if wait_for is not None and not is_subflow_run:\n        raise ValueError(\"Only flows run as subflows can wait for dependencies.\")\n\n    begin_run = create_call(\n        create_and_begin_subflow_run if is_subflow_run else create_then_begin_flow_run,\n        flow=flow,\n        parameters=parameters,\n        wait_for=wait_for,\n        return_type=return_type,\n        client=parent_flow_run_context.client if is_subflow_run else None,\n        user_thread=threading.current_thread(),\n    )\n\n    # On completion of root flows, wait for the global thread to ensure that\n    # any work there is complete\n    done_callbacks = (\n        [create_call(wait_for_global_loop_exit)] if not is_subflow_run else None\n    )\n\n    # WARNING: You must define any context managers here to pass to our concurrency\n    # api instead of entering them in here in the engine entrypoint. Otherwise, async\n    # flows will not use the context as this function _exits_ to return an awaitable to\n    # the user. Generally, you should enter contexts _within_ the async `begin_run`\n    # instead but if you need to enter a context from the main thread you'll need to do\n    # it here.\n    contexts = [capture_sigterm()]\n\n    if flow.isasync and (\n        not is_subflow_run or (is_subflow_run and parent_flow_run_context.flow.isasync)\n    ):\n        # return a coro for the user to await if the flow is async\n        # unless it is an async subflow called in a sync flow\n        retval = from_async.wait_for_call_in_loop_thread(\n            begin_run,\n            done_callbacks=done_callbacks,\n            contexts=contexts,\n        )\n\n    else:\n        retval = from_sync.wait_for_call_in_loop_thread(\n            begin_run,\n            done_callbacks=done_callbacks,\n            contexts=contexts,\n        )\n\n    return retval\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.enter_flow_run_engine_from_subprocess","title":"enter_flow_run_engine_from_subprocess","text":"

Sync entrypoint for flow runs that have been submitted for execution by an agent

Differs from enter_flow_run_engine_from_flow_call in that we have a flow run id but not a flow object. The flow must be retrieved before execution can begin. Additionally, this assumes that the caller is always in a context without an event loop as this should be called from a fresh process.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
def enter_flow_run_engine_from_subprocess(flow_run_id: UUID) -> State:\n\"\"\"\n    Sync entrypoint for flow runs that have been submitted for execution by an agent\n\n    Differs from `enter_flow_run_engine_from_flow_call` in that we have a flow run id\n    but not a flow object. The flow must be retrieved before execution can begin.\n    Additionally, this assumes that the caller is always in a context without an event\n    loop as this should be called from a fresh process.\n    \"\"\"\n\n    # Ensure collections are imported and have the opportunity to register types before\n    # loading the user code from the deployment\n    prefect.plugins.load_prefect_collections()\n\n    setup_logging()\n\n    state = from_sync.wait_for_call_in_loop_thread(\n        create_call(\n            retrieve_flow_then_begin_flow_run,\n            flow_run_id,\n            user_thread=threading.current_thread(),\n        ),\n        contexts=[capture_sigterm()],\n    )\n\n    APILogHandler.flush()\n    return state\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.enter_task_run_engine","title":"enter_task_run_engine","text":"

Sync entrypoint for task calls

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
def enter_task_run_engine(\n    task: Task,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    return_type: EngineReturnType,\n    task_runner: Optional[BaseTaskRunner],\n    mapped: bool,\n) -> Union[PrefectFuture, Awaitable[PrefectFuture]]:\n\"\"\"\n    Sync entrypoint for task calls\n    \"\"\"\n\n    flow_run_context = FlowRunContext.get()\n    if not flow_run_context:\n        raise RuntimeError(\n            \"Tasks cannot be run outside of a flow. To call the underlying task\"\n            \" function outside of a flow use `task.fn()`.\"\n        )\n\n    if TaskRunContext.get():\n        raise RuntimeError(\n            \"Tasks cannot be run from within tasks. Did you mean to call this \"\n            \"task in a flow?\"\n        )\n\n    if flow_run_context.timeout_scope and flow_run_context.timeout_scope.cancel_called:\n        raise TimeoutError(\"Flow run timed out\")\n\n    begin_run = create_call(\n        begin_task_map if mapped else get_task_call_return_value,\n        task=task,\n        flow_run_context=flow_run_context,\n        parameters=parameters,\n        wait_for=wait_for,\n        return_type=return_type,\n        task_runner=task_runner,\n    )\n\n    if task.isasync and flow_run_context.flow.isasync:\n        # return a coro for the user to await if an async task in an async flow\n        return from_async.wait_for_call_in_loop_thread(begin_run)\n    else:\n        return from_sync.wait_for_call_in_loop_thread(begin_run)\n
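As the runtime errors in the source above indicate, tasks can only be run from inside a flow; outside one, the wrapped function can be called directly. A minimal sketch (names are illustrative):

from prefect import task

@task
def add(x: int, y: int) -> int:
    return x + y

# Inside a flow, add(1, 2) goes through the task run engine.
# Outside of a flow, call the underlying function directly:
print(add.fn(1, 2))  # -> 3, no task run is created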
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.get_state_for_result","title":"get_state_for_result","text":"

Get the state related to a result object.

link_state_to_result must have been called first.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
def get_state_for_result(obj: Any) -> Optional[State]:\n\"\"\"\n    Get the state related to a result object.\n\n    `link_state_to_result` must have been called first.\n    \"\"\"\n    flow_run_context = FlowRunContext.get()\n    if flow_run_context:\n        return flow_run_context.task_run_results.get(id(obj))\n
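A hedged sketch of this lookup inside a flow run, assuming the task's return value is still the same in-memory object that was linked to its state (flow and task names are illustrative):

from prefect import flow, task
from prefect.engine import get_state_for_result

@task
def make_items():
    return ["a", "b", "c"]

@flow
def my_flow():
    items = make_items()
    # Look up the state that produced this object, if a link was recorded
    state = get_state_for_result(items)
    print(state)

my_flow()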
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.link_state_to_result","title":"link_state_to_result","text":"

Caches a link between a state and a result and its components using the id of the components to map to the state. The cache is persisted to the current flow run context since task relationships are limited to within a flow run.

This allows dependency tracking to occur when results are passed around. Note: Because id is used, we cannot cache links between singleton objects.

We only cache the relationship between components 1-layer deep.

Example

Given the result [1, [\"a\",\"b\"], (\"c\",)], the following elements will be mapped to the state: - [1, [\"a\",\"b\"], (\"c\",)] - [\"a\",\"b\"] - (\"c\",)

Note: the int 1 will not be mapped to the state because it is a singleton.

Other Notes: We do not hash the result because: - If changes are made to the object in the flow between task calls, we can still track that they are related. - Hashing can be expensive. - Not all objects are hashable.

We do not set an attribute, e.g. __prefect_state__, on the result because: - Mutating user's objects is dangerous. - Unrelated equality comparisons can break unexpectedly. - The field can be preserved on copy. - We cannot set this attribute on Python built-ins.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
def link_state_to_result(state: State, result: Any) -> None:\n\"\"\"\n    Caches a link between a state and a result and its components using\n    the `id` of the components to map to the state. The cache is persisted to the\n    current flow run context since task relationships are limited to within a flow run.\n\n    This allows dependency tracking to occur when results are passed around.\n    Note: Because `id` is used, we cannot cache links between singleton objects.\n\n    We only cache the relationship between components 1-layer deep.\n    Example:\n        Given the result [1, [\"a\",\"b\"], (\"c\",)], the following elements will be\n        mapped to the state:\n        - [1, [\"a\",\"b\"], (\"c\",)]\n        - [\"a\",\"b\"]\n        - (\"c\",)\n\n        Note: the int `1` will not be mapped to the state because it is a singleton.\n\n    Other Notes:\n    We do not hash the result because:\n    - If changes are made to the object in the flow between task calls, we can still\n      track that they are related.\n    - Hashing can be expensive.\n    - Not all objects are hashable.\n\n    We do not set an attribute, e.g. `__prefect_state__`, on the result because:\n\n    - Mutating user's objects is dangerous.\n    - Unrelated equality comparisons can break unexpectedly.\n    - The field can be preserved on copy.\n    - We cannot set this attribute on Python built-ins.\n    \"\"\"\n\n    flow_run_context = FlowRunContext.get()\n\n    def link_if_trackable(obj: Any) -> None:\n\"\"\"Track connection between a task run result and its associated state if it has a unique ID.\n\n        We cannot track booleans, Ellipsis, None, NotImplemented, or the integers from -5 to 256\n        because they are singletons.\n\n        This function will mutate the State if the object is an untrackable type by setting the value\n        for `State.state_details.untrackable_result` to `True`.\n\n        \"\"\"\n        if (type(obj) in UNTRACKABLE_TYPES) or (\n            isinstance(obj, int) and (-5 <= obj <= 256)\n        ):\n            state.state_details.untrackable_result = True\n            return\n        flow_run_context.task_run_results[id(obj)] = state\n\n    if flow_run_context:\n        visit_collection(expr=result, visit_fn=link_if_trackable, max_depth=1)\n
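The id-based, one-layer-deep mapping described above can be illustrated with a stand-in cache; this sketch mimics the behavior rather than calling the internal function:

state = "COMPLETED"               # stands in for a prefect State object
result = [1, ["a", "b"], ("c",)]

cache = {}                        # stands in for FlowRunContext.task_run_results

def link_if_trackable(obj):
    # Singletons such as small ints are skipped because their `id` is shared
    if isinstance(obj, int) and -5 <= obj <= 256:
        return
    cache[id(obj)] = state

# Link the top-level result and its direct components (one layer deep)
link_if_trackable(result)
for component in result:
    link_if_trackable(component)

assert id(result) in cache        # the list itself is tracked
assert id(result[1]) in cache     # ["a", "b"] is tracked
assert id(1) not in cache         # the singleton int 1 is not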
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.orchestrate_flow_run","title":"orchestrate_flow_run async","text":"

Executes a flow run.

Note on flow timeouts

Since async flows are run directly in the main event loop, timeout behavior will match that described by anyio. If the flow is awaiting something, it will immediately return; otherwise, the next time it awaits it will exit. Sync flows are run in a worker thread, which cannot be interrupted. The worker thread will exit at the next task call. The worker thread also has access to the status of the cancellation scope at FlowRunContext.timeout_scope.cancel_called which allows it to raise a TimeoutError to respect the timeout.

Returns:

Type Description State

The final state of the run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
async def orchestrate_flow_run(\n    flow: Flow,\n    flow_run: FlowRun,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    interruptible: bool,\n    client: PrefectClient,\n    partial_flow_run_context: PartialModel[FlowRunContext],\n    user_thread: threading.Thread,\n) -> State:\n\"\"\"\n    Executes a flow run.\n\n    Note on flow timeouts:\n        Since async flows are run directly in the main event loop, timeout behavior will\n        match that described by anyio. If the flow is awaiting something, it will\n        immediately return; otherwise, the next time it awaits it will exit. Sync flows\n        are being task runner in a worker thread, which cannot be interrupted. The worker\n        thread will exit at the next task call. The worker thread also has access to the\n        status of the cancellation scope at `FlowRunContext.timeout_scope.cancel_called`\n        which allows it to raise a `TimeoutError` to respect the timeout.\n\n    Returns:\n        The final state of the run\n    \"\"\"\n\n    logger = flow_run_logger(flow_run, flow)\n\n    flow_run_context = None\n    parent_flow_run_context = FlowRunContext.get()\n\n    try:\n        # Resolve futures in any non-data dependencies to ensure they are ready\n        if wait_for is not None:\n            await resolve_inputs({\"wait_for\": wait_for}, return_data=False)\n    except UpstreamTaskError as upstream_exc:\n        return await propose_state(\n            client,\n            Pending(name=\"NotReady\", message=str(upstream_exc)),\n            flow_run_id=flow_run.id,\n            # if orchestrating a run already in a pending state, force orchestration to\n            # update the state name\n            force=flow_run.state.is_pending(),\n        )\n\n    state = await propose_state(client, Running(), flow_run_id=flow_run.id)\n\n    # flag to ensure we only update the flow run name once\n    run_name_set = False\n\n    while state.is_running():\n        waited_for_task_runs = False\n\n        # Update the flow run to the latest data\n        flow_run = await client.read_flow_run(flow_run.id)\n        try:\n            with partial_flow_run_context.finalize(\n                flow=flow,\n                flow_run=flow_run,\n                client=client,\n                parameters=parameters,\n            ) as flow_run_context:\n                # update flow run name\n                if not run_name_set and flow.flow_run_name:\n                    flow_run_name = _resolve_custom_flow_run_name(\n                        flow=flow, parameters=parameters\n                    )\n\n                    await client.update_flow_run(\n                        flow_run_id=flow_run.id, name=flow_run_name\n                    )\n                    logger.extra[\"flow_run_name\"] = flow_run_name\n                    logger.debug(\n                        f\"Renamed flow run {flow_run.name!r} to {flow_run_name!r}\"\n                    )\n                    flow_run.name = flow_run_name\n                    run_name_set = True\n\n                args, kwargs = parameters_to_args_kwargs(flow.fn, parameters)\n                logger.debug(\n                    f\"Executing flow {flow.name!r} for flow run {flow_run.name!r}...\"\n                )\n\n                if PREFECT_DEBUG_MODE:\n                    logger.debug(f\"Executing {call_repr(flow.fn, *args, **kwargs)}\")\n                else:\n                    logger.debug(\n                        \"Beginning execution...\", 
extra={\"state_message\": True}\n                    )\n\n                flow_call = create_call(flow.fn, *args, **kwargs)\n\n                # This check for a parent call is needed for cases where the engine\n                # was entered directly during testing\n                parent_call = get_current_call()\n\n                if parent_call and (\n                    not parent_flow_run_context\n                    or (\n                        parent_flow_run_context\n                        and parent_flow_run_context.flow.isasync == flow.isasync\n                    )\n                ):\n                    from_async.call_soon_in_waiting_thread(\n                        flow_call, thread=user_thread, timeout=flow.timeout_seconds\n                    )\n                else:\n                    from_async.call_soon_in_new_thread(\n                        flow_call, timeout=flow.timeout_seconds\n                    )\n\n                result = await flow_call.aresult()\n\n                waited_for_task_runs = await wait_for_task_runs_and_report_crashes(\n                    flow_run_context.task_run_futures, client=client\n                )\n        except PausedRun as exc:\n            # could get raised either via utility or by returning Paused from a task run\n            # if a task run pauses, we set its state as the flow's state\n            # to preserve reschedule and timeout behavior\n            paused_flow_run = await client.read_flow_run(flow_run.id)\n            if paused_flow_run.state.is_running():\n                state = await propose_state(\n                    client,\n                    state=exc.state,\n                    flow_run_id=flow_run.id,\n                )\n\n                return state\n            paused_flow_run_state = paused_flow_run.state\n            return paused_flow_run_state\n        except CancelledError as exc:\n            if not flow_call.timedout():\n                # If the flow call was not cancelled by us; this is a crash\n                raise\n            # Construct a new exception as `TimeoutError`\n            original = exc\n            exc = TimeoutError()\n            exc.__cause__ = original\n            logger.exception(\"Encountered exception during execution:\")\n            terminal_state = await exception_to_failed_state(\n                exc,\n                message=f\"Flow run exceeded timeout of {flow.timeout_seconds} seconds\",\n                result_factory=flow_run_context.result_factory,\n                name=\"TimedOut\",\n            )\n        except Exception:\n            # Generic exception in user code\n            logger.exception(\"Encountered exception during execution:\")\n            terminal_state = await exception_to_failed_state(\n                message=\"Flow run encountered an exception.\",\n                result_factory=flow_run_context.result_factory,\n            )\n        else:\n            if result is None:\n                # All tasks and subflows are reference tasks if there is no return value\n                # If there are no tasks, use `None` instead of an empty iterable\n                result = (\n                    flow_run_context.task_run_futures\n                    + flow_run_context.task_run_states\n                    + flow_run_context.flow_run_states\n                ) or None\n\n            terminal_state = await return_value_to_state(\n                await resolve_futures_to_states(result),\n                result_factory=flow_run_context.result_factory,\n   
         )\n\n        if not waited_for_task_runs:\n            # An exception occured that prevented us from waiting for task runs to\n            # complete. Ensure that we wait for them before proposing a final state\n            # for the flow run.\n            await wait_for_task_runs_and_report_crashes(\n                flow_run_context.task_run_futures, client=client\n            )\n\n        # Before setting the flow run state, store state.data using\n        # block storage and send the resulting data document to the Prefect API instead.\n        # This prevents the pickled return value of flow runs\n        # from being sent to the Prefect API and stored in the Prefect database.\n        # state.data is left as is, otherwise we would have to load\n        # the data from block storage again after storing.\n        state = await propose_state(\n            client,\n            state=terminal_state,\n            flow_run_id=flow_run.id,\n        )\n\n        await _run_flow_hooks(flow=flow, flow_run=flow_run, state=state)\n\n        if state.type != terminal_state.type and PREFECT_DEBUG_MODE:\n            logger.debug(\n                (\n                    f\"Received new state {state} when proposing final state\"\n                    f\" {terminal_state}\"\n                ),\n                extra={\"send_to_api\": False},\n            )\n\n        if not state.is_final() and not state.is_paused():\n            logger.info(\n                (\n                    f\"Received non-final state {state.name!r} when proposing final\"\n                    f\" state {terminal_state.name!r} and will attempt to run again...\"\n                ),\n                extra={\"send_to_api\": False},\n            )\n            # Attempt to enter a running state again\n            state = await propose_state(client, Running(), flow_run_id=flow_run.id)\n\n    return state\n
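Because a sync flow runs in a worker thread that cannot be interrupted, a long-running sync flow can cooperate with its timeout by checking the cancellation scope mentioned above. A hedged sketch, assuming Prefect's public context API:

from prefect import flow
from prefect.context import FlowRunContext

@flow(timeout_seconds=10)
def long_running():
    for _ in range(1_000):
        ctx = FlowRunContext.get()
        if ctx and ctx.timeout_scope and ctx.timeout_scope.cancel_called:
            # Respect the flow-level timeout from inside the worker thread
            raise TimeoutError("Flow run exceeded its timeout")
        ...  # one unit of work (placeholder)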
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.orchestrate_task_run","title":"orchestrate_task_run async","text":"

Execute a task run

This function should be submitted to a task runner. We must construct the context here instead of receiving it already populated since we may be in a new environment.

Proposes a RUNNING state, then - if accepted, the task's user function will be run - if rejected, the received state will be returned

When the user function is run, the result will be used to determine a final state - if an exception is encountered, it is trapped and stored in a FAILED state - otherwise, return_value_to_state is used to determine the state

If the final state is COMPLETED, we generate a cache key as specified by the task

The final state is then proposed - if accepted, this is the final state and will be returned - if rejected and a new final state is provided, it will be returned - if rejected and a non-final state is provided, we will attempt to enter a RUNNING state again

Returns:

Type Description State

The final state of the run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
async def orchestrate_task_run(\n    task: Task,\n    task_run: TaskRun,\n    parameters: Dict[str, Any],\n    wait_for: Optional[Iterable[PrefectFuture]],\n    result_factory: ResultFactory,\n    log_prints: bool,\n    interruptible: bool,\n    client: PrefectClient,\n) -> State:\n\"\"\"\n    Execute a task run\n\n    This function should be submitted to an task runner. We must construct the context\n    here instead of receiving it already populated since we may be in a new environment.\n\n    Proposes a RUNNING state, then\n    - if accepted, the task user function will be run\n    - if rejected, the received state will be returned\n\n    When the user function is run, the result will be used to determine a final state\n    - if an exception is encountered, it is trapped and stored in a FAILED state\n    - otherwise, `return_value_to_state` is used to determine the state\n\n    If the final state is COMPLETED, we generate a cache key as specified by the task\n\n    The final state is then proposed\n    - if accepted, this is the final state and will be returned\n    - if rejected and a new final state is provided, it will be returned\n    - if rejected and a non-final state is provided, we will attempt to enter a RUNNING\n        state again\n\n    Returns:\n        The final state of the run\n    \"\"\"\n    flow_run = await client.read_flow_run(task_run.flow_run_id)\n    logger = task_run_logger(task_run, task=task, flow_run=flow_run)\n\n    partial_task_run_context = PartialModel(\n        TaskRunContext,\n        task_run=task_run,\n        task=task,\n        client=client,\n        result_factory=result_factory,\n        log_prints=log_prints,\n    )\n\n    try:\n        # Resolve futures in parameters into data\n        resolved_parameters = await resolve_inputs(parameters)\n        # Resolve futures in any non-data dependencies to ensure they are ready\n        await resolve_inputs({\"wait_for\": wait_for}, return_data=False)\n    except UpstreamTaskError as upstream_exc:\n        return await propose_state(\n            client,\n            Pending(name=\"NotReady\", message=str(upstream_exc)),\n            task_run_id=task_run.id,\n            # if orchestrating a run already in a pending state, force orchestration to\n            # update the state name\n            force=task_run.state.is_pending(),\n        )\n\n    # Generate the cache key to attach to proposed states\n    # The cache key uses a TaskRunContext that does not include a `timeout_context``\n    cache_key = (\n        task.cache_key_fn(\n            partial_task_run_context.finalize(parameters=resolved_parameters),\n            resolved_parameters,\n        )\n        if task.cache_key_fn\n        else None\n    )\n\n    task_run_context = partial_task_run_context.finalize(parameters=resolved_parameters)\n\n    # Ignore the cached results for a cache key, default = false\n    # Setting on task level overrules the Prefect setting (env var)\n    refresh_cache = (\n        task.refresh_cache\n        if task.refresh_cache is not None\n        else PREFECT_TASKS_REFRESH_CACHE.value()\n    )\n\n    # Emit an event to capture that the task run was in the `PENDING` state.\n    last_event = _emit_task_run_state_change_event(\n        task_run=task_run, initial_state=None, validated_state=task_run.state\n    )\n    last_state = task_run.state\n\n    # Transition from `PENDING` -> `RUNNING`\n    state = await propose_state(\n        client,\n        Running(\n            state_details=StateDetails(cache_key=cache_key, 
refresh_cache=refresh_cache)\n        ),\n        task_run_id=task_run.id,\n    )\n\n    # Emit an event to capture the result of proposing a `RUNNING` state.\n    last_event = _emit_task_run_state_change_event(\n        task_run=task_run,\n        initial_state=last_state,\n        validated_state=state,\n        follows=last_event,\n    )\n    last_state = state\n\n    # flag to ensure we only update the task run name once\n    run_name_set = False\n\n    # Only run the task if we enter a `RUNNING` state\n    while state.is_running():\n        # Retrieve the latest metadata for the task run context\n        task_run = await client.read_task_run(task_run.id)\n\n        with task_run_context.copy(\n            update={\"task_run\": task_run, \"start_time\": pendulum.now(\"UTC\")}\n        ):\n            try:\n                args, kwargs = parameters_to_args_kwargs(task.fn, resolved_parameters)\n                # update task run name\n                if not run_name_set and task.task_run_name:\n                    task_run_name = _resolve_custom_task_run_name(\n                        task=task, parameters=resolved_parameters\n                    )\n                    await client.set_task_run_name(\n                        task_run_id=task_run.id, name=task_run_name\n                    )\n                    logger.extra[\"task_run_name\"] = task_run_name\n                    logger.debug(\n                        f\"Renamed task run {task_run.name!r} to {task_run_name!r}\"\n                    )\n                    task_run.name = task_run_name\n                    run_name_set = True\n\n                if PREFECT_DEBUG_MODE.value():\n                    logger.debug(f\"Executing {call_repr(task.fn, *args, **kwargs)}\")\n                else:\n                    logger.debug(\n                        \"Beginning execution...\", extra={\"state_message\": True}\n                    )\n\n                call = from_async.call_soon_in_new_thread(\n                    create_call(task.fn, *args, **kwargs), timeout=task.timeout_seconds\n                )\n                result = await call.aresult()\n\n            except (CancelledError, asyncio.CancelledError) as exc:\n                if not call.timedout():\n                    # If the task call was not cancelled by us; this is a crash\n                    raise\n                # Construct a new exception as `TimeoutError`\n                original = exc\n                exc = TimeoutError()\n                exc.__cause__ = original\n                logger.exception(\"Encountered exception during execution:\")\n                terminal_state = await exception_to_failed_state(\n                    exc,\n                    message=(\n                        f\"Task run exceeded timeout of {task.timeout_seconds} seconds\"\n                    ),\n                    result_factory=task_run_context.result_factory,\n                    name=\"TimedOut\",\n                )\n            except Exception as exc:\n                logger.exception(\"Encountered exception during execution:\")\n                terminal_state = await exception_to_failed_state(\n                    exc,\n                    message=\"Task run encountered an exception\",\n                    result_factory=task_run_context.result_factory,\n                )\n            else:\n                terminal_state = await return_value_to_state(\n                    result,\n                    result_factory=task_run_context.result_factory,\n                )\n\n        
        # for COMPLETED tasks, add the cache key and expiration\n                if terminal_state.is_completed():\n                    terminal_state.state_details.cache_expiration = (\n                        (pendulum.now(\"utc\") + task.cache_expiration)\n                        if task.cache_expiration\n                        else None\n                    )\n                    terminal_state.state_details.cache_key = cache_key\n\n            state = await propose_state(client, terminal_state, task_run_id=task_run.id)\n            last_event = _emit_task_run_state_change_event(\n                task_run=task_run,\n                initial_state=last_state,\n                validated_state=state,\n                follows=last_event,\n            )\n            last_state = state\n\n            await _run_task_hooks(\n                task=task,\n                task_run=task_run,\n                state=state,\n            )\n\n            if state.type != terminal_state.type and PREFECT_DEBUG_MODE:\n                logger.debug(\n                    (\n                        f\"Received new state {state} when proposing final state\"\n                        f\" {terminal_state}\"\n                    ),\n                    extra={\"send_to_api\": False},\n                )\n\n            if not state.is_final() and not state.is_paused():\n                logger.info(\n                    (\n                        f\"Received non-final state {state.name!r} when proposing final\"\n                        f\" state {terminal_state.name!r} and will attempt to run\"\n                        \" again...\"\n                    ),\n                    extra={\"send_to_api\": False},\n                )\n                # Attempt to enter a running state again\n                state = await propose_state(client, Running(), task_run_id=task_run.id)\n                last_event = _emit_task_run_state_change_event(\n                    task_run=task_run,\n                    initial_state=last_state,\n                    validated_state=state,\n                    follows=last_event,\n                )\n                last_state = state\n\n    # If debugging, use the more complete `repr` than the usual `str` description\n    display_state = repr(state) if PREFECT_DEBUG_MODE else str(state)\n\n    logger.log(\n        level=logging.INFO if state.is_completed() else logging.ERROR,\n        msg=f\"Finished in state {display_state}\",\n    )\n\n    return state\n
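The cache key and expiration handling described here is normally configured through the public task decorator rather than by calling the engine directly. A hedged sketch:

from datetime import timedelta
from prefect import flow, task
from prefect.tasks import task_input_hash

@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))
def expensive(x: int) -> int:
    return x * 2

@flow
def my_flow():
    expensive(2)  # computed; the COMPLETED state records the cache key and expiration
    expensive(2)  # completes from cache while the key has not expired

my_flow()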
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.pause_flow_run","title":"pause_flow_run async","text":"

Pauses the current flow run by stopping execution until resumed.

When called within a flow run, execution will block and no downstream tasks will run until the flow is resumed. Task runs that have already started will continue running. A timeout parameter can be passed that will fail the flow run if it has not been resumed within the specified time.

Parameters:

Name Type Description Default flow_run_id UUID

a flow run id. If supplied, this function will attempt to pause the specified flow run outside of the flow run process. When paused, the flow run will continue execution until the NEXT task is orchestrated, at which point the flow will exit. Any tasks that have already started will run until completion. When resumed, the flow run will be rescheduled to finish execution. In order to pause a flow run in this way, the flow needs to have an associated deployment and results need to be configured with the persist_results option.

None timeout int

the number of seconds to wait for the flow to be resumed before failing. Defaults to 5 minutes (300 seconds). If the pause timeout exceeds any configured flow-level timeout, the flow might fail even after resuming.

300 poll_interval int

The number of seconds between checking whether the flow has been resumed. Defaults to 10 seconds.

10 reschedule bool

Flag that will reschedule the flow run if resumed. Instead of blocking execution, the flow will gracefully exit (with no result returned). To use this flag, a flow needs to have an associated deployment and results need to be configured with the persist_results option.

False key str

An optional key to prevent calling pauses more than once. This defaults to the number of pauses observed by the flow so far, and prevents pauses that use the \"reschedule\" option from running the same pause twice. A custom key can be supplied for custom pausing behavior.

None Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
@sync_compatible\nasync def pause_flow_run(\n    flow_run_id: UUID = None,\n    timeout: int = 300,\n    poll_interval: int = 10,\n    reschedule: bool = False,\n    key: str = None,\n):\n\"\"\"\n    Pauses the current flow run by stopping execution until resumed.\n\n    When called within a flow run, execution will block and no downstream tasks will\n    run until the flow is resumed. Task runs that have already started will continue\n    running. A timeout parameter can be passed that will fail the flow run if it has not\n    been resumed within the specified time.\n\n    Args:\n        flow_run_id: a flow run id. If supplied, this function will attempt to pause\n            the specified flow run outside of the flow run process. When paused, the\n            flow run will continue execution until the NEXT task is orchestrated, at\n            which point the flow will exit. Any tasks that have already started will\n            run until completion. When resumed, the flow run will be rescheduled to\n            finish execution. In order pause a flow run in this way, the flow needs to\n            have an associated deployment and results need to be configured with the\n            `persist_results` option.\n        timeout: the number of seconds to wait for the flow to be resumed before\n            failing. Defaults to 5 minutes (300 seconds). If the pause timeout exceeds\n            any configured flow-level timeout, the flow might fail even after resuming.\n        poll_interval: The number of seconds between checking whether the flow has been\n            resumed. Defaults to 10 seconds.\n        reschedule: Flag that will reschedule the flow run if resumed. Instead of\n            blocking execution, the flow will gracefully exit (with no result returned)\n            instead. To use this flag, a flow needs to have an associated deployment and\n            results need to be configured with the `persist_results` option.\n        key: An optional key to prevent calling pauses more than once. This defaults to\n            the number of pauses observed by the flow so far, and prevents pauses that\n            use the \"reschedule\" option from running the same pause twice. A custom key\n            can be supplied for custom pausing behavior.\n    \"\"\"\n    if flow_run_id:\n        return await _out_of_process_pause(\n            flow_run_id=flow_run_id,\n            timeout=timeout,\n            reschedule=reschedule,\n            key=key,\n        )\n    else:\n        return await _in_process_pause(\n            timeout=timeout, poll_interval=poll_interval, reschedule=reschedule, key=key\n        )\n
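A hedged example of an in-process pause: the flow blocks at the call until it is resumed (for example from the UI or with resume_flow_run) or the timeout elapses; task runs that already started keep running:

from prefect import flow, task
from prefect.engine import pause_flow_run

@task
def prepare():
    return "prepared"

@flow
def needs_approval():
    data = prepare()
    pause_flow_run(timeout=600, poll_interval=30)  # blocks here until resumed
    print(f"Resumed, continuing with {data!r}")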
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.propose_state","title":"propose_state async","text":"

Propose a new state for a flow run or task run, invoking Prefect orchestration logic.

If the proposed state is accepted, the provided state will be augmented with details and returned.

If the proposed state is rejected, a new state returned by the Prefect API will be returned.

If the proposed state results in a WAIT instruction from the Prefect API, the function will sleep and attempt to propose the state again.

If the proposed state results in an ABORT instruction from the Prefect API, an error will be raised.

Parameters:

Name Type Description Default state State

a new state for the task or flow run

required task_run_id UUID

an optional task run id, used when proposing task run states

None flow_run_id UUID

an optional flow run id, used when proposing flow run states

None

Returns:

Type Description State

a State model representation of the flow or task run state

Raises:

Type Description ValueError

if neither task_run_id nor flow_run_id is provided

prefect.exceptions.Abort

if an ABORT instruction is received from the Prefect API

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
async def propose_state(\n    client: PrefectClient,\n    state: State,\n    force: bool = False,\n    task_run_id: UUID = None,\n    flow_run_id: UUID = None,\n) -> State:\n\"\"\"\n    Propose a new state for a flow run or task run, invoking Prefect orchestration logic.\n\n    If the proposed state is accepted, the provided `state` will be augmented with\n     details and returned.\n\n    If the proposed state is rejected, a new state returned by the Prefect API will be\n    returned.\n\n    If the proposed state results in a WAIT instruction from the Prefect API, the\n    function will sleep and attempt to propose the state again.\n\n    If the proposed state results in an ABORT instruction from the Prefect API, an\n    error will be raised.\n\n    Args:\n        state: a new state for the task or flow run\n        task_run_id: an optional task run id, used when proposing task run states\n        flow_run_id: an optional flow run id, used when proposing flow run states\n\n    Returns:\n        a [State model][prefect.client.schemas.objects.State] representation of the\n            flow or task run state\n\n    Raises:\n        ValueError: if neither task_run_id or flow_run_id is provided\n        prefect.exceptions.Abort: if an ABORT instruction is received from\n            the Prefect API\n    \"\"\"\n\n    # Determine if working with a task run or flow run\n    if not task_run_id and not flow_run_id:\n        raise ValueError(\"You must provide either a `task_run_id` or `flow_run_id`\")\n\n    # Handle task and sub-flow tracing\n    if state.is_final():\n        if isinstance(state.data, BaseResult) and state.data.has_cached_object():\n            # Avoid fetching the result unless it is cached, otherwise we defeat\n            # the purpose of disabling `cache_result_in_memory`\n            result = await state.result(raise_on_failure=False, fetch=True)\n        else:\n            result = state.data\n\n        link_state_to_result(state, result)\n\n    # Handle repeated WAITs in a loop instead of recursively, to avoid\n    # reaching max recursion depth in extreme cases.\n    async def set_state_and_handle_waits(set_state_func) -> OrchestrationResult:\n        response = await set_state_func()\n        while response.status == SetStateStatus.WAIT:\n            engine_logger.debug(\n                f\"Received wait instruction for {response.details.delay_seconds}s: \"\n                f\"{response.details.reason}\"\n            )\n            await anyio.sleep(response.details.delay_seconds)\n            response = await set_state_func()\n        return response\n\n    # Attempt to set the state\n    if task_run_id:\n        set_state = partial(client.set_task_run_state, task_run_id, state, force=force)\n        response = await set_state_and_handle_waits(set_state)\n    elif flow_run_id:\n        set_state = partial(client.set_flow_run_state, flow_run_id, state, force=force)\n        response = await set_state_and_handle_waits(set_state)\n    else:\n        raise ValueError(\n            \"Neither flow run id or task run id were provided. 
At least one must \"\n            \"be given.\"\n        )\n\n    # Parse the response to return the new state\n    if response.status == SetStateStatus.ACCEPT:\n        # Update the state with the details if provided\n        state.id = response.state.id\n        state.timestamp = response.state.timestamp\n        if response.state.state_details:\n            state.state_details = response.state.state_details\n        return state\n\n    elif response.status == SetStateStatus.ABORT:\n        raise prefect.exceptions.Abort(response.details.reason)\n\n    elif response.status == SetStateStatus.REJECT:\n        if response.state.is_paused():\n            raise Pause(response.details.reason)\n        return response.state\n\n    else:\n        raise ValueError(\n            f\"Received unexpected `SetStateStatus` from server: {response.status!r}\"\n        )\n
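A hedged sketch of proposing a state directly through the orchestration client; flow_run_id is assumed to reference an existing run:

from prefect.client.orchestration import get_client
from prefect.engine import propose_state
from prefect.states import Completed

async def mark_completed(flow_run_id):
    async with get_client() as client:
        # The API may accept, reject, delay (WAIT), or abort this proposal
        return await propose_state(client, Completed(), flow_run_id=flow_run_id)

# e.g. asyncio.run(mark_completed(flow_run_id)) from synchronous code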
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.report_flow_run_crashes","title":"report_flow_run_crashes async","text":"

Detect flow run crashes during this context and update the run to a proper final state.

This context must reraise the exception to properly exit the run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
@asynccontextmanager\nasync def report_flow_run_crashes(flow_run: FlowRun, client: PrefectClient, flow: Flow):\n\"\"\"\n    Detect flow run crashes during this context and update the run to a proper final\n    state.\n\n    This context _must_ reraise the exception to properly exit the run.\n    \"\"\"\n\n    try:\n        yield\n    except (Abort, Pause):\n        # Do not capture internal signals as crashes\n        raise\n    except BaseException as exc:\n        state = await exception_to_crashed_state(exc)\n        logger = flow_run_logger(flow_run)\n        with anyio.CancelScope(shield=True):\n            logger.error(f\"Crash detected! {state.message}\")\n            logger.debug(\"Crash details:\", exc_info=exc)\n            flow_run_state = await propose_state(client, state, flow_run_id=flow_run.id)\n            engine_logger.debug(\n                f\"Reported crashed flow run {flow_run.name!r} successfully!\"\n            )\n\n            # Only `on_crashed` and `on_cancellation` flow run state change hooks can be called here.\n            # We call the hooks after the state change proposal to `CRASHED` is validated\n            # or rejected (if it is in a `CANCELLING` state).\n            await _run_flow_hooks(\n                flow=flow,\n                flow_run=flow_run,\n                state=flow_run_state,\n            )\n\n        # Reraise the exception\n        raise\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.report_task_run_crashes","title":"report_task_run_crashes async","text":"

Detect task run crashes during this context and update the run to a proper final state.

This context must reraise the exception to properly exit the run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
@asynccontextmanager\nasync def report_task_run_crashes(task_run: TaskRun, client: PrefectClient):\n\"\"\"\n    Detect task run crashes during this context and update the run to a proper final\n    state.\n\n    This context _must_ reraise the exception to properly exit the run.\n    \"\"\"\n    try:\n        yield\n    except (Abort, Pause):\n        # Do not capture internal signals as crashes\n        raise\n    except BaseException as exc:\n        state = await exception_to_crashed_state(exc)\n        logger = task_run_logger(task_run)\n        with anyio.CancelScope(shield=True):\n            logger.error(f\"Crash detected! {state.message}\")\n            logger.debug(\"Crash details:\", exc_info=exc)\n            await client.set_task_run_state(\n                state=state,\n                task_run_id=task_run.id,\n                force=True,\n            )\n            engine_logger.debug(\n                f\"Reported crashed task run {task_run.name!r} successfully!\"\n            )\n\n        # Reraise the exception\n        raise\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.resolve_inputs","title":"resolve_inputs async","text":"

Resolve any Quote, PrefectFuture, or State types nested in parameters into data.

Returns:

Type Description Dict[str, Any]

A copy of the parameters with resolved data

Raises:

Type Description UpstreamTaskError

If any of the upstream states are not COMPLETED

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
async def resolve_inputs(\n    parameters: Dict[str, Any], return_data: bool = True, max_depth: int = -1\n) -> Dict[str, Any]:\n\"\"\"\n    Resolve any `Quote`, `PrefectFuture`, or `State` types nested in parameters into\n    data.\n\n    Returns:\n        A copy of the parameters with resolved data\n\n    Raises:\n        UpstreamTaskError: If any of the upstream states are not `COMPLETED`\n    \"\"\"\n\n    futures = set()\n    states = set()\n    result_by_state = {}\n\n    if not parameters:\n        return {}\n\n    def collect_futures_and_states(expr, context):\n        # Expressions inside quotes should not be traversed\n        if isinstance(context.get(\"annotation\"), quote):\n            raise StopVisiting()\n\n        if isinstance(expr, PrefectFuture):\n            futures.add(expr)\n        if is_state(expr):\n            states.add(expr)\n\n        return expr\n\n    visit_collection(\n        parameters,\n        visit_fn=collect_futures_and_states,\n        return_data=False,\n        max_depth=max_depth,\n        context={},\n    )\n\n    # Wait for all futures so we do not block when we retrieve the state in `resolve_input`\n    states.update(await asyncio.gather(*[future._wait() for future in futures]))\n\n    # Only retrieve the result if requested as it may be expensive\n    if return_data:\n        finished_states = [state for state in states if state.is_final()]\n\n        state_results = await asyncio.gather(\n            *[\n                state.result(raise_on_failure=False, fetch=True)\n                for state in finished_states\n            ]\n        )\n\n        for state, result in zip(finished_states, state_results):\n            result_by_state[state] = result\n\n    def resolve_input(expr, context):\n        state = None\n\n        # Expressions inside quotes should not be modified\n        if isinstance(context.get(\"annotation\"), quote):\n            raise StopVisiting()\n\n        if isinstance(expr, PrefectFuture):\n            state = expr._final_state\n        elif is_state(expr):\n            state = expr\n        else:\n            return expr\n\n        # Do not allow uncompleted upstreams except failures when `allow_failure` has\n        # been used\n        if not state.is_completed() and not (\n            # TODO: Note that the contextual annotation here is only at the current level\n            #       if `allow_failure` is used then another annotation is used, this will\n            #       incorrectly evaulate to false \u2014 to resolve this, we must track all\n            #       annotations wrapping the current expression but this is not yet\n            #       implemented.\n            isinstance(context.get(\"annotation\"), allow_failure)\n            and state.is_failed()\n        ):\n            raise UpstreamTaskError(\n                f\"Upstream task run '{state.state_details.task_run_id}' did not reach a\"\n                \" 'COMPLETED' state.\"\n            )\n\n        return result_by_state.get(state)\n\n    resolved_parameters = {}\n    for parameter, value in parameters.items():\n        try:\n            resolved_parameters[parameter] = visit_collection(\n                value,\n                visit_fn=resolve_input,\n                return_data=return_data,\n                # we're manually going 1 layer deeper here\n                max_depth=max_depth - 1,\n                remove_annotations=True,\n                context={},\n            )\n        except UpstreamTaskError:\n            raise\n        except 
Exception as exc:\n            raise PrefectException(\n                f\"Failed to resolve inputs in parameter {parameter!r}. If your\"\n                \" parameter type is not supported, consider using the `quote`\"\n                \" annotation to skip resolution of inputs.\"\n            ) from exc\n\n    return resolved_parameters\n
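A hedged sketch resolving a parameters dictionary that mixes a plain value with a final State; the state's attached data replaces it in the output:

import asyncio

from prefect.engine import resolve_inputs
from prefect.states import Completed

async def demo():
    params = {"x": 1, "y": Completed(data=42)}
    resolved = await resolve_inputs(params)
    print(resolved)  # expected: {'x': 1, 'y': 42}

asyncio.run(demo())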
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.resume_flow_run","title":"resume_flow_run async","text":"

Resumes a paused flow.

Parameters:

Name Type Description Default flow_run_id

the flow_run_id to resume

required Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
@sync_compatible\nasync def resume_flow_run(flow_run_id):\n\"\"\"\n    Resumes a paused flow.\n\n    Args:\n        flow_run_id: the flow_run_id to resume\n    \"\"\"\n    client = get_client()\n    flow_run = await client.read_flow_run(flow_run_id)\n\n    if not flow_run.state.is_paused():\n        raise NotPausedError(\"Cannot resume a run that isn't paused!\")\n\n    response = await client.resume_flow_run(flow_run_id)\n\n    if response.status == SetStateStatus.REJECT:\n        if response.state.type == StateType.FAILED:\n            raise FlowPauseTimeout(\"Flow run can no longer be resumed.\")\n        else:\n            raise RuntimeError(f\"Cannot resume this run: {response.details.reason}\")\n
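A hedged sketch of resuming a paused run from another process, given its id (the id below is a placeholder):

from prefect.engine import resume_flow_run

paused_flow_run_id = "..."  # placeholder: id of a run currently in a Paused state

# sync-compatible: call directly from a script, or await it from async code
resume_flow_run(paused_flow_run_id)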
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/engine/#prefect.engine.retrieve_flow_then_begin_flow_run","title":"retrieve_flow_then_begin_flow_run async","text":"

Async entrypoint for flow runs that have been submitted for execution by an agent

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/engine.py
@inject_client\nasync def retrieve_flow_then_begin_flow_run(\n    flow_run_id: UUID,\n    client: PrefectClient,\n    user_thread: threading.Thread,\n) -> State:\n\"\"\"\n    Async entrypoint for flow runs that have been submitted for execution by an agent\n\n    - Retrieves the deployment information\n    - Loads the flow object using deployment information\n    - Updates the flow run version\n    \"\"\"\n    flow_run = await client.read_flow_run(flow_run_id)\n    try:\n        flow = await load_flow_from_flow_run(flow_run, client=client)\n    except Exception:\n        message = \"Flow could not be retrieved from deployment.\"\n        flow_run_logger(flow_run).exception(message)\n        state = await exception_to_failed_state(message=message)\n        await client.set_flow_run_state(\n            state=state, flow_run_id=flow_run_id, force=True\n        )\n        return state\n\n    # Update the flow run policy defaults to match settings on the flow\n    # Note: Mutating the flow run object prevents us from performing another read\n    #       operation if these properties are used by the client downstream\n    if flow_run.empirical_policy.retry_delay is None:\n        flow_run.empirical_policy.retry_delay = flow.retry_delay_seconds\n\n    if flow_run.empirical_policy.retries is None:\n        flow_run.empirical_policy.retries = flow.retries\n\n    await client.update_flow_run(\n        flow_run_id=flow_run_id,\n        flow_version=flow.version,\n        empirical_policy=flow_run.empirical_policy,\n    )\n\n    if flow.should_validate_parameters:\n        failed_state = None\n        try:\n            parameters = flow.validate_parameters(flow_run.parameters)\n        except Exception:\n            message = \"Validation of flow parameters failed with error: \"\n            flow_run_logger(flow_run).exception(message)\n            failed_state = await exception_to_failed_state(message=message)\n\n        if failed_state is not None:\n            await propose_state(\n                client,\n                state=failed_state,\n                flow_run_id=flow_run_id,\n            )\n            return failed_state\n    else:\n        parameters = flow_run.parameters\n\n    # Ensure default values are populated\n    parameters = {**get_parameter_defaults(flow.fn), **parameters}\n\n    return await begin_flow_run(\n        flow=flow,\n        flow_run=flow_run,\n        parameters=parameters,\n        client=client,\n        user_thread=user_thread,\n    )\n
","tags":["Python API","flow runs","orchestration","engine","context"]},{"location":"api-ref/prefect/exceptions/","title":"prefect.exceptions","text":"","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions","title":"prefect.exceptions","text":"

Prefect-specific exceptions.

","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.Abort","title":"Abort","text":"

Bases: PrefectSignal

Raised when the API sends an 'ABORT' instruction during state proposal.

Indicates that the run should exit immediately.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class Abort(PrefectSignal):\n\"\"\"\n    Raised when the API sends an 'ABORT' instruction during state proposal.\n\n    Indicates that the run should exit immediately.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.BlockMissingCapabilities","title":"BlockMissingCapabilities","text":"

Bases: PrefectException

Raised when a block does not have required capabilities for a given operation.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class BlockMissingCapabilities(PrefectException):\n\"\"\"\n    Raised when a block does not have required capabilities for a given operation.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.CancelledRun","title":"CancelledRun","text":"

Bases: PrefectException

Raised when the result from a cancelled run is retrieved and an exception is not attached.

This occurs when a string is attached to the state instead of an exception or if the state's data is null.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class CancelledRun(PrefectException):\n\"\"\"\n    Raised when the result from a cancelled run is retrieved and an exception\n    is not attached.\n\n    This occurs when a string is attached to the state instead of an exception\n    or if the state's data is null.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.CrashedRun","title":"CrashedRun","text":"

Bases: PrefectException

Raised when the result from a crashed run is retrieved.

This occurs when a string is attached to the state instead of an exception or if the state's data is null.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class CrashedRun(PrefectException):\n\"\"\"\n    Raised when the result from a crashed run is retrieved.\n\n    This occurs when a string is attached to the state instead of an exception or if\n    the state's data is null.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ExternalSignal","title":"ExternalSignal","text":"

Bases: BaseException

Base type for external signal-like exceptions that should never be caught by users.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class ExternalSignal(BaseException):\n\"\"\"\n    Base type for external signal-like exceptions that should never be caught by users.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FailedRun","title":"FailedRun","text":"

Bases: PrefectException

Raised when the result from a failed run is retrieved and an exception is not attached.

This occurs when a string is attached to the state instead of an exception or if the state's data is null.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class FailedRun(PrefectException):\n\"\"\"\n    Raised when the result from a failed run is retrieved and an exception is not\n    attached.\n\n    This occurs when a string is attached to the state instead of an exception or if\n    the state's data is null.\n    \"\"\"\n
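For illustration, a minimal sketch of where this surfaces; the example flow and the use of return_state=True are assumptions for this sketch, not part of the exception itself:

from prefect import flow\nfrom prefect.exceptions import FailedRun\n\n@flow\ndef unreliable():\n    raise ValueError(\"boom\")\n\n# Capture the final state instead of raising immediately\nstate = unreliable(return_state=True)\n\ntry:\n    # result() re-raises the wrapped exception, or FailedRun if only a message string is attached\n    state.result()\nexcept (FailedRun, ValueError) as exc:\n    print(f\"Run failed: {exc}\")\n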
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FlowPauseTimeout","title":"FlowPauseTimeout","text":"

Bases: PrefectException

Raised when a flow pause times out

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class FlowPauseTimeout(PrefectException):\n\"\"\"Raised when a flow pause times out\"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.FlowScriptError","title":"FlowScriptError","text":"

Bases: PrefectException

Raised when a script errors during evaluation while attempting to load a flow.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class FlowScriptError(PrefectException):\n\"\"\"\n    Raised when a script errors during evaluation while attempting to load a flow.\n    \"\"\"\n\n    def __init__(\n        self,\n        user_exc: Exception,\n        script_path: str,\n    ) -> None:\n        message = f\"Flow script at {script_path!r} encountered an exception\"\n        super().__init__(message)\n\n        self.user_exc = user_exc\n\n    def rich_user_traceback(self, **kwargs):\n        trace = Traceback.extract(\n            type(self.user_exc),\n            self.user_exc,\n            self.user_exc.__traceback__.tb_next.tb_next.tb_next.tb_next,\n        )\n        return Traceback(trace, **kwargs)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InfrastructureError","title":"InfrastructureError","text":"

Bases: PrefectException

A base class for exceptions related to infrastructure blocks

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class InfrastructureError(PrefectException):\n\"\"\"\n    A base class for exceptions related to infrastructure blocks\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InfrastructureNotAvailable","title":"InfrastructureNotAvailable","text":"

Bases: PrefectException

Raised when infrastructure is not accessible from the current machine. For example, if a process was spawned on another machine, it cannot be managed.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class InfrastructureNotAvailable(PrefectException):\n\"\"\"\n    Raised when infrastructure is not accessable from the current machine. For example,\n    if a process was spawned on another machine it cannot be managed.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InfrastructureNotFound","title":"InfrastructureNotFound","text":"

Bases: PrefectException

Raised when infrastructure is missing, likely because it has exited or been deleted.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class InfrastructureNotFound(PrefectException):\n\"\"\"\n    Raised when infrastructure is missing, likely because it has exited or been\n    deleted.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InvalidNameError","title":"InvalidNameError","text":"

Bases: PrefectException, ValueError

Raised when a name contains characters that are not permitted.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class InvalidNameError(PrefectException, ValueError):\n\"\"\"\n    Raised when a name contains characters that are not permitted.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.InvalidRepositoryURLError","title":"InvalidRepositoryURLError","text":"

Bases: PrefectException

Raised when an incorrect URL is provided to a GitHub filesystem block.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class InvalidRepositoryURLError(PrefectException):\n\"\"\"Raised when an incorrect URL is provided to a GitHub filesystem block.\"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MappingLengthMismatch","title":"MappingLengthMismatch","text":"

Bases: PrefectException

Raised when attempting to call Task.map with arguments of different lengths.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class MappingLengthMismatch(PrefectException):\n\"\"\"\n    Raised when attempting to call Task.map with arguments of different lengths.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MappingMissingIterable","title":"MappingMissingIterable","text":"

Bases: PrefectException

Raised when attempting to call Task.map with all static arguments

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class MappingMissingIterable(PrefectException):\n\"\"\"\n    Raised when attempting to call Task.map with all static arguments\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingContextError","title":"MissingContextError","text":"

Bases: PrefectException, RuntimeError

Raised when a method is called that requires a task or flow run context to be active but one cannot be found.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class MissingContextError(PrefectException, RuntimeError):\n\"\"\"\n    Raised when a method is called that requires a task or flow run context to be\n    active but one cannot be found.\n    \"\"\"\n
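As a rough sketch of when this is raised, assuming the common case of requesting a run logger outside of any flow or task run:

from prefect import get_run_logger\nfrom prefect.exceptions import MissingContextError\n\ntry:\n    # No flow or task run is active here, so no run context can be found\n    get_run_logger()\nexcept MissingContextError as exc:\n    print(f\"No run context available: {exc}\")\n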
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingFlowError","title":"MissingFlowError","text":"

Bases: PrefectException

Raised when a given flow name is not found in the expected script.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class MissingFlowError(PrefectException):\n\"\"\"\n    Raised when a given flow name is not found in the expected script.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingProfileError","title":"MissingProfileError","text":"

Bases: PrefectException, ValueError

Raised when a profile name does not exist.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class MissingProfileError(PrefectException, ValueError):\n\"\"\"\n    Raised when a profile name does not exist.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.MissingResult","title":"MissingResult","text":"

Bases: PrefectException

Raised when a result is missing from a state; often when result persistence is disabled and the state is retrieved from the API.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class MissingResult(PrefectException):\n\"\"\"\n    Raised when a result is missing from a state; often when result persistence is\n    disabled and the state is retrieved from the API.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.NotPausedError","title":"NotPausedError","text":"

Bases: PrefectException

Raised when attempting to unpause a run that isn't paused.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class NotPausedError(PrefectException):\n\"\"\"Raised when attempting to unpause a run that isn't paused.\"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ObjectAlreadyExists","title":"ObjectAlreadyExists","text":"

Bases: PrefectException

Raised when the client receives a 409 (conflict) from the API.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class ObjectAlreadyExists(PrefectException):\n\"\"\"\n    Raised when the client receives a 409 (conflict) from the API.\n    \"\"\"\n\n    def __init__(self, http_exc: Exception, *args, **kwargs):\n        self.http_exc = http_exc\n        super().__init__(*args, **kwargs)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ObjectNotFound","title":"ObjectNotFound","text":"

Bases: PrefectException

Raised when the client receives a 404 (not found) from the API.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class ObjectNotFound(PrefectException):\n\"\"\"\n    Raised when the client receives a 404 (not found) from the API.\n    \"\"\"\n\n    def __init__(self, http_exc: Exception, *args, **kwargs):\n        self.http_exc = http_exc\n        super().__init__(*args, **kwargs)\n
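A minimal handling sketch; the deployment name is a placeholder and the exact client methods that surface ObjectNotFound can vary:

import asyncio\nfrom prefect import get_client\nfrom prefect.exceptions import ObjectNotFound\n\nasync def main():\n    async with get_client() as client:\n        try:\n            await client.read_deployment_by_name(\"my-flow/my-deployment\")\n        except ObjectNotFound:\n            print(\"Deployment does not exist\")\n\nasyncio.run(main())\n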
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ParameterBindError","title":"ParameterBindError","text":"

Bases: TypeError, PrefectException

Raised when args and kwargs cannot be converted to parameters.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class ParameterBindError(TypeError, PrefectException):\n\"\"\"\n    Raised when args and kwargs cannot be converted to parameters.\n    \"\"\"\n\n    def __init__(self, msg: str):\n        super().__init__(msg)\n\n    @classmethod\n    def from_bind_failure(\n        cls, fn: Callable, exc: TypeError, call_args: List, call_kwargs: Dict\n    ) -> Self:\n        fn_signature = str(inspect.signature(fn)).strip(\"()\")\n\n        base = f\"Error binding parameters for function '{fn.__name__}': {exc}\"\n        signature = f\"Function '{fn.__name__}' has signature '{fn_signature}'\"\n        received = f\"received args: {call_args} and kwargs: {list(call_kwargs.keys())}\"\n        msg = f\"{base}.\\n{signature} but {received}.\"\n        return cls(msg)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ParameterTypeError","title":"ParameterTypeError","text":"

Bases: PrefectException

Raised when a parameter does not pass Pydantic type validation.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class ParameterTypeError(PrefectException):\n\"\"\"\n    Raised when a parameter does not pass Pydantic type validation.\n    \"\"\"\n\n    def __init__(self, msg: str):\n        super().__init__(msg)\n\n    @classmethod\n    def from_validation_error(cls, exc: pydantic.ValidationError) -> Self:\n        bad_params = [f'{err[\"loc\"][0]}: {err[\"msg\"]}' for err in exc.errors()]\n        msg = \"Flow run received invalid parameters:\\n - \" + \"\\n - \".join(bad_params)\n        return cls(msg)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.Pause","title":"Pause","text":"

Bases: PrefectSignal

Raised when a flow run is PAUSED and needs to exit for resubmission.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class Pause(PrefectSignal):\n\"\"\"\n    Raised when a flow run is PAUSED and needs to exit for resubmission.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PausedRun","title":"PausedRun","text":"

Bases: PrefectException

Raised when the result from a paused run is retrieved.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class PausedRun(PrefectException):\n\"\"\"\n    Raised when the result from a paused run is retrieved.\n    \"\"\"\n\n    def __init__(self, *args, state=None, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.state = state\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectException","title":"PrefectException","text":"

Bases: Exception

Base exception type for Prefect errors.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class PrefectException(Exception):\n\"\"\"\n    Base exception type for Prefect errors.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectHTTPStatusError","title":"PrefectHTTPStatusError","text":"

Bases: HTTPStatusError

Raised when the client receives a Response that contains an HTTPStatusError.

Used to include API error details in the error messages that the client provides to users.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class PrefectHTTPStatusError(HTTPStatusError):\n\"\"\"\n    Raised when client receives a `Response` that contains an HTTPStatusError.\n\n    Used to include API error details in the error messages that the client provides users.\n    \"\"\"\n\n    @classmethod\n    def from_httpx_error(cls: Type[Self], httpx_error: HTTPStatusError) -> Self:\n\"\"\"\n        Generate a `PrefectHTTPStatusError` from an `httpx.HTTPStatusError`.\n        \"\"\"\n        try:\n            details = httpx_error.response.json()\n        except Exception:\n            details = None\n\n        error_message, *more_info = str(httpx_error).split(\"\\n\")\n\n        if details:\n            message_components = [error_message, f\"Response: {details}\", *more_info]\n        else:\n            message_components = [error_message, *more_info]\n\n        new_message = \"\\n\".join(message_components)\n\n        return cls(\n            new_message, request=httpx_error.request, response=httpx_error.response\n        )\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectHTTPStatusError.from_httpx_error","title":"from_httpx_error classmethod","text":"

Generate a PrefectHTTPStatusError from an httpx.HTTPStatusError.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
@classmethod\ndef from_httpx_error(cls: Type[Self], httpx_error: HTTPStatusError) -> Self:\n\"\"\"\n    Generate a `PrefectHTTPStatusError` from an `httpx.HTTPStatusError`.\n    \"\"\"\n    try:\n        details = httpx_error.response.json()\n    except Exception:\n        details = None\n\n    error_message, *more_info = str(httpx_error).split(\"\\n\")\n\n    if details:\n        message_components = [error_message, f\"Response: {details}\", *more_info]\n    else:\n        message_components = [error_message, *more_info]\n\n    new_message = \"\\n\".join(message_components)\n\n    return cls(\n        new_message, request=httpx_error.request, response=httpx_error.response\n    )\n
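For example, a small sketch (the URL is a placeholder) of converting an httpx error so the API's response details are folded into the message:

import httpx\nfrom prefect.exceptions import PrefectHTTPStatusError\n\ntry:\n    response = httpx.get(\"https://api.example.com/missing\")\n    response.raise_for_status()\nexcept httpx.HTTPStatusError as exc:\n    # Re-raise with the response body included in the error message\n    raise PrefectHTTPStatusError.from_httpx_error(exc) from exc\n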
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.PrefectSignal","title":"PrefectSignal","text":"

Bases: BaseException

Base type for signal-like exceptions that should never be caught by users.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class PrefectSignal(BaseException):\n\"\"\"\n    Base type for signal-like exceptions that should never be caught by users.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ProtectedBlockError","title":"ProtectedBlockError","text":"

Bases: PrefectException

Raised when an operation is prevented due to block protection.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class ProtectedBlockError(PrefectException):\n\"\"\"\n    Raised when an operation is prevented due to block protection.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ReservedArgumentError","title":"ReservedArgumentError","text":"

Bases: PrefectException, TypeError

Raised when a function used with Prefect has an argument with a name that is reserved for a Prefect feature

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class ReservedArgumentError(PrefectException, TypeError):\n\"\"\"\n    Raised when a function used with Prefect has an argument with a name that is\n    reserved for a Prefect feature\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.ScriptError","title":"ScriptError","text":"

Bases: PrefectException

Raised when a script errors during evaluation while attempting to load data

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class ScriptError(PrefectException):\n\"\"\"\n    Raised when a script errors during evaluation while attempting to load data\n    \"\"\"\n\n    def __init__(\n        self,\n        user_exc: Exception,\n        path: str,\n    ) -> None:\n        message = f\"Script at {str(path)!r} encountered an exception: {user_exc!r}\"\n        super().__init__(message)\n        self.user_exc = user_exc\n\n        # Strip script run information from the traceback\n        self.user_exc.__traceback__ = _trim_traceback(\n            self.user_exc.__traceback__,\n            remove_modules=[prefect.utilities.importtools],\n        )\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.SignatureMismatchError","title":"SignatureMismatchError","text":"

Bases: PrefectException, TypeError

Raised when parameters passed to a function do not match its signature.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class SignatureMismatchError(PrefectException, TypeError):\n\"\"\"Raised when parameters passed to a function do not match its signature.\"\"\"\n\n    def __init__(self, msg: str):\n        super().__init__(msg)\n\n    @classmethod\n    def from_bad_params(cls, expected_params: List[str], provided_params: List[str]):\n        msg = (\n            f\"Function expects parameters {expected_params} but was provided with\"\n            f\" parameters {provided_params}\"\n        )\n        return cls(msg)\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.TerminationSignal","title":"TerminationSignal","text":"

Bases: ExternalSignal

Raised when a flow run receives a termination signal.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class TerminationSignal(ExternalSignal):\n\"\"\"\n    Raised when a flow run receives a termination signal.\n    \"\"\"\n\n    def __init__(self, signal: int):\n        self.signal = signal\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.UnfinishedRun","title":"UnfinishedRun","text":"

Bases: PrefectException

Raised when the result from a run that is not finished is retrieved.

For example, if a run is in a SCHEDULED, PENDING, CANCELLING, or RUNNING state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class UnfinishedRun(PrefectException):\n\"\"\"\n    Raised when the result from a run that is not finished is retrieved.\n\n    For example, if a run is in a SCHEDULED, PENDING, CANCELLING, or RUNNING state.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.UnspecifiedFlowError","title":"UnspecifiedFlowError","text":"

Bases: PrefectException

Raised when multiple flows are found in the expected script and no name is given.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class UnspecifiedFlowError(PrefectException):\n\"\"\"\n    Raised when multiple flows are found in the expected script and no name is given.\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.UpstreamTaskError","title":"UpstreamTaskError","text":"

Bases: PrefectException

Raised when a task relies on the result of another task but that task is not 'COMPLETE'

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
class UpstreamTaskError(PrefectException):\n\"\"\"\n    Raised when a task relies on the result of another task but that task is not\n    'COMPLETE'\n    \"\"\"\n
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/exceptions/#prefect.exceptions.exception_traceback","title":"exception_traceback","text":"

Convert an exception to a printable string with a traceback

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/exceptions.py
def exception_traceback(exc: Exception) -> str:\n\"\"\"\n    Convert an exception to a printable string with a traceback\n    \"\"\"\n    tb = traceback.TracebackException.from_exception(exc)\n    return \"\".join(list(tb.format()))\n
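A small usage sketch that formats a caught exception as a string for logging:

from prefect.exceptions import exception_traceback\n\ntry:\n    1 / 0\nexcept Exception as exc:\n    # The full traceback rendered as a single printable string\n    print(exception_traceback(exc))\n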
","tags":["Python API","exceptions","error handling","errors"]},{"location":"api-ref/prefect/filesystems/","title":"prefect.filesystems","text":"","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems","title":"prefect.filesystems","text":"","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.Azure","title":"Azure","text":"

Bases: WritableFileSystem, WritableDeploymentStorage

Store data as a file on Azure Datalake and Azure Blob Storage.

Example

Load stored Azure config:

from prefect.filesystems import Azure\n\naz_block = Azure.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
class Azure(WritableFileSystem, WritableDeploymentStorage):\n\"\"\"\n    Store data as a file on Azure Datalake and Azure Blob Storage.\n\n    Example:\n        Load stored Azure config:\n        ```python\n        from prefect.filesystems import Azure\n\n        az_block = Azure.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Azure\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/6AiQ6HRIft8TspZH7AfyZg/39fd82bdbb186db85560f688746c8cdd/azure.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#azure\"\n\n    bucket_path: str = Field(\n        default=...,\n        description=\"An Azure storage bucket path.\",\n        example=\"my-bucket/a-directory-within\",\n    )\n    azure_storage_connection_string: Optional[SecretStr] = Field(\n        default=None,\n        title=\"Azure storage connection string\",\n        description=(\n            \"Equivalent to the AZURE_STORAGE_CONNECTION_STRING environment variable.\"\n        ),\n    )\n    azure_storage_account_name: Optional[SecretStr] = Field(\n        default=None,\n        title=\"Azure storage account name\",\n        description=(\n            \"Equivalent to the AZURE_STORAGE_ACCOUNT_NAME environment variable.\"\n        ),\n    )\n    azure_storage_account_key: Optional[SecretStr] = Field(\n        default=None,\n        title=\"Azure storage account key\",\n        description=\"Equivalent to the AZURE_STORAGE_ACCOUNT_KEY environment variable.\",\n    )\n    azure_storage_tenant_id: Optional[SecretStr] = Field(\n        None,\n        title=\"Azure storage tenant ID\",\n        description=\"Equivalent to the AZURE_TENANT_ID environment variable.\",\n    )\n    azure_storage_client_id: Optional[SecretStr] = Field(\n        None,\n        title=\"Azure storage client ID\",\n        description=\"Equivalent to the AZURE_CLIENT_ID environment variable.\",\n    )\n    azure_storage_client_secret: Optional[SecretStr] = Field(\n        None,\n        title=\"Azure storage client secret\",\n        description=\"Equivalent to the AZURE_CLIENT_SECRET environment variable.\",\n    )\n    azure_storage_anon: bool = Field(\n        default=True,\n        title=\"Azure storage anonymous connection\",\n        description=(\n            \"Set the 'anon' flag for ADLFS. 
This should be False for systems that\"\n            \" require ADLFS to use DefaultAzureCredentials.\"\n        ),\n    )\n\n    _remote_file_system: RemoteFileSystem = None\n\n    @property\n    def basepath(self) -> str:\n        return f\"az://{self.bucket_path}\"\n\n    @property\n    def filesystem(self) -> RemoteFileSystem:\n        settings = {}\n        if self.azure_storage_connection_string:\n            settings[\"connection_string\"] = (\n                self.azure_storage_connection_string.get_secret_value()\n            )\n        if self.azure_storage_account_name:\n            settings[\"account_name\"] = (\n                self.azure_storage_account_name.get_secret_value()\n            )\n        if self.azure_storage_account_key:\n            settings[\"account_key\"] = self.azure_storage_account_key.get_secret_value()\n        if self.azure_storage_tenant_id:\n            settings[\"tenant_id\"] = self.azure_storage_tenant_id.get_secret_value()\n        if self.azure_storage_client_id:\n            settings[\"client_id\"] = self.azure_storage_client_id.get_secret_value()\n        if self.azure_storage_client_secret:\n            settings[\"client_secret\"] = (\n                self.azure_storage_client_secret.get_secret_value()\n            )\n        settings[\"anon\"] = self.azure_storage_anon\n        self._remote_file_system = RemoteFileSystem(\n            basepath=f\"az://{self.bucket_path}\", settings=settings\n        )\n        return self._remote_file_system\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> bytes:\n\"\"\"\n        Downloads a directory from a given remote path to a local direcotry.\n\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        return await self.filesystem.get_directory(\n            from_path=from_path, local_path=local_path\n        )\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n\"\"\"\n        Uploads a directory from a given local path to a remote directory.\n\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        return await self.filesystem.put_directory(\n            local_path=local_path, to_path=to_path, ignore_file=ignore_file\n        )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        return await self.filesystem.read_path(path)\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        return await self.filesystem.write_path(path=path, content=content)\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.Azure.get_directory","title":"get_directory async","text":"

Downloads a directory from a given remote path to a local directory.

Defaults to downloading the entire contents of the block's basepath to the current working directory.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n\"\"\"\n    Downloads a directory from a given remote path to a local direcotry.\n\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    return await self.filesystem.get_directory(\n        from_path=from_path, local_path=local_path\n    )\n
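A usage sketch; the block name and paths are placeholders, and the subdirectory is resolved relative to the block's bucket_path:

from prefect.filesystems import Azure\n\naz_block = Azure.load(\"BLOCK_NAME\")\n\n# Download the \"my-deployment\" folder under the bucket_path into ./my-deployment\naz_block.get_directory(from_path=\"my-deployment\", local_path=\"./my-deployment\")\n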
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.Azure.put_directory","title":"put_directory async","text":"

Uploads a directory from a given local path to a remote directory.

Defaults to uploading the entire contents of the current working directory to the block's basepath.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n\"\"\"\n    Uploads a directory from a given local path to a remote directory.\n\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    return await self.filesystem.put_directory(\n        local_path=local_path, to_path=to_path, ignore_file=ignore_file\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GCS","title":"GCS","text":"

Bases: WritableFileSystem, WritableDeploymentStorage

Store data as a file on Google Cloud Storage.

Example

Load stored GCS config:

from prefect.filesystems import GCS\n\ngcs_block = GCS.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
class GCS(WritableFileSystem, WritableDeploymentStorage):\n\"\"\"\n    Store data as a file on Google Cloud Storage.\n\n    Example:\n        Load stored GCS config:\n        ```python\n        from prefect.filesystems import GCS\n\n        gcs_block = GCS.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/4CD4wwbiIKPkZDt4U3TEuW/c112fe85653da054b6d5334ef662bec4/gcp.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#gcs\"\n\n    bucket_path: str = Field(\n        default=...,\n        description=\"A GCS bucket path.\",\n        example=\"my-bucket/a-directory-within\",\n    )\n    service_account_info: Optional[SecretStr] = Field(\n        default=None,\n        description=\"The contents of a service account keyfile as a JSON string.\",\n    )\n    project: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The project the GCS bucket resides in. If not provided, the project will\"\n            \" be inferred from the credentials or environment.\"\n        ),\n    )\n\n    @property\n    def basepath(self) -> str:\n        return f\"gcs://{self.bucket_path}\"\n\n    @property\n    def filesystem(self) -> RemoteFileSystem:\n        settings = {}\n        if self.service_account_info:\n            try:\n                settings[\"token\"] = json.loads(\n                    self.service_account_info.get_secret_value()\n                )\n            except json.JSONDecodeError:\n                raise ValueError(\n                    \"Unable to load provided service_account_info. Please make sure\"\n                    \" that the provided value is a valid JSON string.\"\n                )\n        remote_file_system = RemoteFileSystem(\n            basepath=f\"gcs://{self.bucket_path}\", settings=settings\n        )\n        return remote_file_system\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> bytes:\n\"\"\"\n        Downloads a directory from a given remote path to a local directory.\n\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        return await self.filesystem.get_directory(\n            from_path=from_path, local_path=local_path\n        )\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n\"\"\"\n        Uploads a directory from a given local path to a remote directory.\n\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        return await self.filesystem.put_directory(\n            local_path=local_path, to_path=to_path, ignore_file=ignore_file\n        )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        return await self.filesystem.read_path(path)\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        return await self.filesystem.write_path(path=path, content=content)\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GCS.get_directory","title":"get_directory async","text":"

Downloads a directory from a given remote path to a local directory.

Defaults to downloading the entire contents of the block's basepath to the current working directory.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n\"\"\"\n    Downloads a directory from a given remote path to a local directory.\n\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    return await self.filesystem.get_directory(\n        from_path=from_path, local_path=local_path\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GCS.put_directory","title":"put_directory async","text":"

Uploads a directory from a given local path to a remote directory.

Defaults to uploading the entire contents of the current working directory to the block's basepath.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n\"\"\"\n    Uploads a directory from a given local path to a remote directory.\n\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    return await self.filesystem.put_directory(\n        local_path=local_path, to_path=to_path, ignore_file=ignore_file\n    )\n
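A usage sketch with placeholder names that uploads the current working directory while honoring a .prefectignore file:

from prefect.filesystems import GCS\n\ngcs_block = GCS.load(\"BLOCK_NAME\")\n\n# Upload the current working directory to the block's bucket_path,\n# skipping anything matched by .prefectignore\ngcs_block.put_directory(ignore_file=\".prefectignore\")\n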
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GitHub","title":"GitHub","text":"

Bases: ReadableDeploymentStorage

Interact with files stored on GitHub repositories.
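A hedged configuration sketch, since no inline example is shown for this block; the repository URL, reference, and block name are placeholders:

from prefect.filesystems import GitHub\n\ngithub_block = GitHub(\n    repository=\"https://github.com/PrefectHQ/prefect.git\",\n    reference=\"main\",  # optional branch or tag to pin to\n)\ngithub_block.save(\"BLOCK_NAME\", overwrite=True)\n\n# Later, load the stored configuration by name\ngithub_block = GitHub.load(\"BLOCK_NAME\")\n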

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
class GitHub(ReadableDeploymentStorage):\n\"\"\"\n    Interact with files stored on GitHub repositories.\n    \"\"\"\n\n    _block_type_name = \"GitHub\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/187oCWsD18m5yooahq1vU0/ace41e99ab6dc40c53e5584365a33821/github.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#github\"\n\n    repository: str = Field(\n        default=...,\n        description=(\n            \"The URL of a GitHub repository to read from, in either HTTPS or SSH\"\n            \" format.\"\n        ),\n    )\n    reference: Optional[str] = Field(\n        default=None,\n        description=\"An optional reference to pin to; can be a branch name or tag.\",\n    )\n    access_token: Optional[SecretStr] = Field(\n        name=\"Personal Access Token\",\n        default=None,\n        description=(\n            \"A GitHub Personal Access Token (PAT) with repo scope.\"\n            \" To use a fine-grained PAT, provide '{username}:{PAT}' as the value.\"\n        ),\n    )\n    include_git_objects: bool = Field(\n        default=True,\n        description=(\n            \"Whether to include git objects when copying the repo contents to a\"\n            \" directory.\"\n        ),\n    )\n\n    @validator(\"access_token\")\n    def _ensure_credentials_go_with_https(cls, v: str, values: dict) -> str:\n\"\"\"Ensure that credentials are not provided with 'SSH' formatted GitHub URLs.\n\n        Note: validates `access_token` specifically so that it only fires when\n        private repositories are used.\n        \"\"\"\n        if v is not None:\n            if urllib.parse.urlparse(values[\"repository\"]).scheme != \"https\":\n                raise InvalidRepositoryURLError(\n                    \"Crendentials can only be used with GitHub repositories \"\n                    \"using the 'HTTPS' format. You must either remove the \"\n                    \"credential if you wish to use the 'SSH' format and are not \"\n                    \"using a private repository, or you must change the repository \"\n                    \"URL to the 'HTTPS' format. 
\"\n                )\n\n        return v\n\n    def _create_repo_url(self) -> str:\n\"\"\"Format the URL provided to the `git clone` command.\n\n        For private repos: https://<oauth-key>@github.com/<username>/<repo>.git\n        All other repos should be the same as `self.repository`.\n        \"\"\"\n        url_components = urllib.parse.urlparse(self.repository)\n        if url_components.scheme == \"https\" and self.access_token is not None:\n            updated_components = url_components._replace(\n                netloc=f\"{self.access_token.get_secret_value()}@{url_components.netloc}\"\n            )\n            full_url = urllib.parse.urlunparse(updated_components)\n        else:\n            full_url = self.repository\n\n        return full_url\n\n    @staticmethod\n    def _get_paths(\n        dst_dir: Union[str, None], src_dir: str, sub_directory: str\n    ) -> Tuple[str, str]:\n\"\"\"Returns the fully formed paths for GitHubRepository contents in the form\n        (content_source, content_destination).\n        \"\"\"\n        if dst_dir is None:\n            content_destination = Path(\".\").absolute()\n        else:\n            content_destination = Path(dst_dir)\n\n        content_source = Path(src_dir)\n\n        if sub_directory:\n            content_destination = content_destination.joinpath(sub_directory)\n            content_source = content_source.joinpath(sub_directory)\n\n        return str(content_source), str(content_destination)\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> None:\n\"\"\"\n        Clones a GitHub project specified in `from_path` to the provided `local_path`;\n        defaults to cloning the repository reference configured on the Block to the\n        present working directory.\n\n        Args:\n            from_path: If provided, interpreted as a subdirectory of the underlying\n                repository that will be copied to the provided local path.\n            local_path: A local path to clone to; defaults to present working directory.\n        \"\"\"\n        # CONSTRUCT COMMAND\n        cmd = [\"git\", \"clone\", self._create_repo_url()]\n        if self.reference:\n            cmd += [\"-b\", self.reference]\n\n        # Limit git history\n        cmd += [\"--depth\", \"1\"]\n\n        # Clone to a temporary directory and move the subdirectory over\n        with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n            cmd.append(tmp_dir)\n\n            err_stream = io.StringIO()\n            out_stream = io.StringIO()\n            process = await run_process(cmd, stream_output=(out_stream, err_stream))\n            if process.returncode != 0:\n                err_stream.seek(0)\n                raise OSError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n            content_source, content_destination = self._get_paths(\n                dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n            )\n\n            ignore_func = None\n            if not self.include_git_objects:\n                ignore_func = ignore_patterns(\".git\")\n\n            copytree(\n                src=content_source,\n                dst=content_destination,\n                dirs_exist_ok=True,\n                ignore=ignore_func,\n            )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.GitHub.get_directory","title":"get_directory async","text":"

Clones a GitHub project specified in from_path to the provided local_path; defaults to cloning the repository reference configured on the Block to the present working directory.

Parameters:

from_path (Optional[str], default None): If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path.

local_path (Optional[str], default None): A local path to clone to; defaults to present working directory.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n\"\"\"\n    Clones a GitHub project specified in `from_path` to the provided `local_path`;\n    defaults to cloning the repository reference configured on the Block to the\n    present working directory.\n\n    Args:\n        from_path: If provided, interpreted as a subdirectory of the underlying\n            repository that will be copied to the provided local path.\n        local_path: A local path to clone to; defaults to present working directory.\n    \"\"\"\n    # CONSTRUCT COMMAND\n    cmd = [\"git\", \"clone\", self._create_repo_url()]\n    if self.reference:\n        cmd += [\"-b\", self.reference]\n\n    # Limit git history\n    cmd += [\"--depth\", \"1\"]\n\n    # Clone to a temporary directory and move the subdirectory over\n    with TemporaryDirectory(suffix=\"prefect\") as tmp_dir:\n        cmd.append(tmp_dir)\n\n        err_stream = io.StringIO()\n        out_stream = io.StringIO()\n        process = await run_process(cmd, stream_output=(out_stream, err_stream))\n        if process.returncode != 0:\n            err_stream.seek(0)\n            raise OSError(f\"Failed to pull from remote:\\n {err_stream.read()}\")\n\n        content_source, content_destination = self._get_paths(\n            dst_dir=local_path, src_dir=tmp_dir, sub_directory=from_path\n        )\n\n        ignore_func = None\n        if not self.include_git_objects:\n            ignore_func = ignore_patterns(\".git\")\n\n        copytree(\n            src=content_source,\n            dst=content_destination,\n            dirs_exist_ok=True,\n            ignore=ignore_func,\n        )\n
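For example, a small sketch (the subdirectory name is a placeholder) that copies only one folder of the pinned reference into the current working directory:

from prefect.filesystems import GitHub\n\ngithub_block = GitHub.load(\"BLOCK_NAME\")\n\n# Shallow-clone the repository and copy only the \"flows\" subdirectory\n# into ./flows under the current working directory\ngithub_block.get_directory(from_path=\"flows\")\n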
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.LocalFileSystem","title":"LocalFileSystem","text":"

Bases: WritableFileSystem, WritableDeploymentStorage

Store data as a file on a local file system.

Example

Load stored local file system config:

from prefect.filesystems import LocalFileSystem\n\nlocal_file_system_block = LocalFileSystem.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
class LocalFileSystem(WritableFileSystem, WritableDeploymentStorage):\n\"\"\"\n    Store data as a file on a local file system.\n\n    Example:\n        Load stored local file system config:\n        ```python\n        from prefect.filesystems import LocalFileSystem\n\n        local_file_system_block = LocalFileSystem.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Local File System\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/EVKjxM7fNyi4NGUSkeTEE/95c958c5dd5a56c59ea5033e919c1a63/image1.png?h=250\"\n    _documentation_url = (\n        \"https://docs.prefect.io/concepts/filesystems/#local-filesystem\"\n    )\n\n    basepath: Optional[str] = Field(\n        default=None, description=\"Default local path for this block to write to.\"\n    )\n\n    @validator(\"basepath\", pre=True)\n    def cast_pathlib(cls, value):\n        if isinstance(value, Path):\n            return str(value)\n        return value\n\n    def _resolve_path(self, path: str) -> Path:\n        # Only resolve the base path at runtime, default to the current directory\n        basepath = (\n            Path(self.basepath).expanduser().resolve()\n            if self.basepath\n            else Path(\".\").resolve()\n        )\n\n        # Determine the path to access relative to the base path, ensuring that paths\n        # outside of the base path are off limits\n        if path is None:\n            return basepath\n\n        path: Path = Path(path).expanduser()\n\n        if not path.is_absolute():\n            path = basepath / path\n        else:\n            path = path.resolve()\n            if basepath not in path.parents and (basepath != path):\n                raise ValueError(\n                    f\"Provided path {path} is outside of the base path {basepath}.\"\n                )\n\n        return path\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: str = None, local_path: str = None\n    ) -> None:\n\"\"\"\n        Copies a directory from one place to another on the local filesystem.\n\n        Defaults to copying the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        if not from_path:\n            from_path = Path(self.basepath).expanduser().resolve()\n        else:\n            from_path = self._resolve_path(from_path)\n\n        if not local_path:\n            local_path = Path(\".\").resolve()\n        else:\n            local_path = Path(local_path).resolve()\n\n        if from_path == local_path:\n            # If the paths are the same there is no need to copy\n            # and we avoid shutil.copytree raising an error\n            return\n\n        # .prefectignore exists in the original location, not the current location which\n        # is most likely temporary\n        if (from_path / Path(\".prefectignore\")).exists():\n            ignore_func = await self._get_ignore_func(\n                local_path=from_path, ignore_file=from_path / Path(\".prefectignore\")\n            )\n        else:\n            ignore_func = None\n\n        copytree(from_path, local_path, dirs_exist_ok=True, ignore=ignore_func)\n\n    async def _get_ignore_func(self, local_path: str, ignore_file: str):\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n        included_files = filter_files(root=local_path, ignore_patterns=ignore_patterns)\n\n        def ignore_func(directory, files):\n            relative_path = Path(directory).relative_to(local_path)\n\n           
 files_to_ignore = [\n                f for f in files if str(relative_path / f) not in included_files\n            ]\n            return files_to_ignore\n\n        return ignore_func\n\n    @sync_compatible\n    async def put_directory(\n        self, local_path: str = None, to_path: str = None, ignore_file: str = None\n    ) -> None:\n\"\"\"\n        Copies a directory from one place to another on the local filesystem.\n\n        Defaults to copying the entire contents of the current working directory to the block's basepath.\n        An `ignore_file` path may be provided that can include gitignore style expressions for filepaths to ignore.\n        \"\"\"\n        destination_path = self._resolve_path(to_path)\n\n        if not local_path:\n            local_path = Path(\".\").absolute()\n\n        if ignore_file:\n            ignore_func = await self._get_ignore_func(\n                local_path=local_path, ignore_file=ignore_file\n            )\n        else:\n            ignore_func = None\n\n        if local_path == destination_path:\n            pass\n        else:\n            copytree(\n                src=local_path,\n                dst=destination_path,\n                ignore=ignore_func,\n                dirs_exist_ok=True,\n            )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        path: Path = self._resolve_path(path)\n\n        # Check if the path exists\n        if not path.exists():\n            raise ValueError(f\"Path {path} does not exist.\")\n\n        # Validate that its a file\n        if not path.is_file():\n            raise ValueError(f\"Path {path} is not a file.\")\n\n        async with await anyio.open_file(str(path), mode=\"rb\") as f:\n            content = await f.read()\n\n        return content\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        path: Path = self._resolve_path(path)\n\n        # Construct the path if it does not exist\n        path.parent.mkdir(exist_ok=True, parents=True)\n\n        # Check if the file already exists\n        if path.exists() and not path.is_file():\n            raise ValueError(f\"Path {path} already exists and is not a file.\")\n\n        async with await anyio.open_file(path, mode=\"wb\") as f:\n            await f.write(content)\n        # Leave path stringify to the OS\n        return str(path)\n
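A quick round-trip sketch with placeholder paths showing write_path and read_path against a configured basepath:

from prefect.filesystems import LocalFileSystem\n\nfs = LocalFileSystem(basepath=\"/tmp/prefect-data\")\n\n# Paths are resolved relative to the basepath; parent directories are created as needed\nfs.write_path(\"results/output.txt\", b\"hello from prefect\")\ncontent = fs.read_path(\"results/output.txt\")\nprint(content)  # b'hello from prefect'\n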
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.LocalFileSystem.get_directory","title":"get_directory async","text":"

Copies a directory from one place to another on the local filesystem.

Defaults to copying the entire contents of the block's basepath to the current working directory.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: str = None, local_path: str = None\n) -> None:\n\"\"\"\n    Copies a directory from one place to another on the local filesystem.\n\n    Defaults to copying the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    if not from_path:\n        from_path = Path(self.basepath).expanduser().resolve()\n    else:\n        from_path = self._resolve_path(from_path)\n\n    if not local_path:\n        local_path = Path(\".\").resolve()\n    else:\n        local_path = Path(local_path).resolve()\n\n    if from_path == local_path:\n        # If the paths are the same there is no need to copy\n        # and we avoid shutil.copytree raising an error\n        return\n\n    # .prefectignore exists in the original location, not the current location which\n    # is most likely temporary\n    if (from_path / Path(\".prefectignore\")).exists():\n        ignore_func = await self._get_ignore_func(\n            local_path=from_path, ignore_file=from_path / Path(\".prefectignore\")\n        )\n    else:\n        ignore_func = None\n\n    copytree(from_path, local_path, dirs_exist_ok=True, ignore=ignore_func)\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.LocalFileSystem.put_directory","title":"put_directory async","text":"

Copies a directory from one place to another on the local filesystem.

Defaults to copying the entire contents of the current working directory to the block's basepath. An ignore_file path may be provided that can include gitignore style expressions for filepaths to ignore.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n    self, local_path: str = None, to_path: str = None, ignore_file: str = None\n) -> None:\n\"\"\"\n    Copies a directory from one place to another on the local filesystem.\n\n    Defaults to copying the entire contents of the current working directory to the block's basepath.\n    An `ignore_file` path may be provided that can include gitignore style expressions for filepaths to ignore.\n    \"\"\"\n    destination_path = self._resolve_path(to_path)\n\n    if not local_path:\n        local_path = Path(\".\").absolute()\n\n    if ignore_file:\n        ignore_func = await self._get_ignore_func(\n            local_path=local_path, ignore_file=ignore_file\n        )\n    else:\n        ignore_func = None\n\n    if local_path == destination_path:\n        pass\n    else:\n        copytree(\n            src=local_path,\n            dst=destination_path,\n            ignore=ignore_func,\n            dirs_exist_ok=True,\n        )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.RemoteFileSystem","title":"RemoteFileSystem","text":"

Bases: WritableFileSystem, WritableDeploymentStorage

Store data as a file on a remote file system.

Supports any remote file system supported by fsspec. The file system is specified using a protocol. For example, \"s3://my-bucket/my-folder/\" will use S3.

Example

Load stored remote file system config:

from prefect.filesystems import RemoteFileSystem\n\nremote_file_system_block = RemoteFileSystem.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
class RemoteFileSystem(WritableFileSystem, WritableDeploymentStorage):\n\"\"\"\n    Store data as a file on a remote file system.\n\n    Supports any remote file system supported by `fsspec`. The file system is specified\n    using a protocol. For example, \"s3://my-bucket/my-folder/\" will use S3.\n\n    Example:\n        Load stored remote file system config:\n        ```python\n        from prefect.filesystems import RemoteFileSystem\n\n        remote_file_system_block = RemoteFileSystem.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Remote File System\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/4CxjycqILlT9S9YchI7o1q/ee62e2089dfceb19072245c62f0c69d2/image12.png?h=250\"\n    _documentation_url = (\n        \"https://docs.prefect.io/concepts/filesystems/#remote-file-system\"\n    )\n\n    basepath: str = Field(\n        default=...,\n        description=\"Default path for this block to write to.\",\n        example=\"s3://my-bucket/my-folder/\",\n    )\n    settings: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Additional settings to pass through to fsspec.\",\n    )\n\n    # Cache for the configured fsspec file system used for access\n    _filesystem: fsspec.AbstractFileSystem = None\n\n    @validator(\"basepath\")\n    def check_basepath(cls, value):\n        scheme, netloc, _, _, _ = urllib.parse.urlsplit(value)\n\n        if not scheme:\n            raise ValueError(f\"Base path must start with a scheme. Got {value!r}.\")\n\n        if not netloc:\n            raise ValueError(\n                f\"Base path must include a location after the scheme. Got {value!r}.\"\n            )\n\n        if scheme == \"file\":\n            raise ValueError(\n                \"Base path scheme cannot be 'file'. 
Use `LocalFileSystem` instead for\"\n                \" local file access.\"\n            )\n\n        return value\n\n    def _resolve_path(self, path: str) -> str:\n        base_scheme, base_netloc, base_urlpath, _, _ = urllib.parse.urlsplit(\n            self.basepath\n        )\n        scheme, netloc, urlpath, _, _ = urllib.parse.urlsplit(path)\n\n        # Confirm that absolute paths are valid\n        if scheme:\n            if scheme != base_scheme:\n                raise ValueError(\n                    f\"Path {path!r} with scheme {scheme!r} must use the same scheme as\"\n                    f\" the base path {base_scheme!r}.\"\n                )\n\n        if netloc:\n            if (netloc != base_netloc) or not urlpath.startswith(base_urlpath):\n                raise ValueError(\n                    f\"Path {path!r} is outside of the base path {self.basepath!r}.\"\n                )\n\n        return f\"{self.basepath.rstrip('/')}/{urlpath.lstrip('/')}\"\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> None:\n\"\"\"\n        Downloads a directory from a given remote path to a local direcotry.\n\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        if from_path is None:\n            from_path = str(self.basepath)\n        else:\n            from_path = self._resolve_path(from_path)\n\n        if local_path is None:\n            local_path = Path(\".\").absolute()\n\n        # validate that from_path has a trailing slash for proper fsspec behavior across versions\n        if not from_path.endswith(\"/\"):\n            from_path += \"/\"\n\n        return self.filesystem.get(from_path, local_path, recursive=True)\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n        overwrite: bool = True,\n    ) -> int:\n\"\"\"\n        Uploads a directory from a given local path to a remote direcotry.\n\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        if to_path is None:\n            to_path = str(self.basepath)\n        else:\n            to_path = self._resolve_path(to_path)\n\n        if local_path is None:\n            local_path = \".\"\n\n        included_files = None\n        if ignore_file:\n            with open(ignore_file, \"r\") as f:\n                ignore_patterns = f.readlines()\n\n            included_files = filter_files(\n                local_path, ignore_patterns, include_dirs=True\n            )\n\n        counter = 0\n        for f in Path(local_path).rglob(\"*\"):\n            relative_path = f.relative_to(local_path)\n            if included_files and str(relative_path) not in included_files:\n                continue\n\n            if to_path.endswith(\"/\"):\n                fpath = to_path + relative_path.as_posix()\n            else:\n                fpath = to_path + \"/\" + relative_path.as_posix()\n\n            if f.is_dir():\n                pass\n            else:\n                f = f.as_posix()\n                if overwrite:\n                    self.filesystem.put_file(f, fpath, overwrite=True)\n                else:\n                    self.filesystem.put_file(f, fpath)\n\n                counter += 1\n\n        return counter\n\n    
@sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        path = self._resolve_path(path)\n\n        with self.filesystem.open(path, \"rb\") as file:\n            content = await run_sync_in_worker_thread(file.read)\n\n        return content\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        path = self._resolve_path(path)\n        dirpath = path[: path.rindex(\"/\")]\n\n        self.filesystem.makedirs(dirpath, exist_ok=True)\n\n        with self.filesystem.open(path, \"wb\") as file:\n            await run_sync_in_worker_thread(file.write, content)\n        return path\n\n    @property\n    def filesystem(self) -> fsspec.AbstractFileSystem:\n        if not self._filesystem:\n            scheme, _, _, _, _ = urllib.parse.urlsplit(self.basepath)\n\n            try:\n                self._filesystem = fsspec.filesystem(scheme, **self.settings)\n            except ImportError as exc:\n                # The path is a remote file system that uses a lib that is not installed\n                raise RuntimeError(\n                    f\"File system created with scheme {scheme!r} from base path \"\n                    f\"{self.basepath!r} could not be created. \"\n                    \"You are likely missing a Python module required to use the given \"\n                    \"storage protocol.\"\n                ) from exc\n\n        return self._filesystem\n
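For illustration, a minimal sketch of configuring and using a `RemoteFileSystem` block backed by S3 follows; the bucket, block name, and the `s3fs` dependency are assumptions for the example, not part of the reference above.

```python
# Sketch: using RemoteFileSystem with an S3-compatible backend.
# Assumes `s3fs` is installed; "my-bucket" and "my-remote-fs" are placeholder names.
from prefect.filesystems import RemoteFileSystem

block = RemoteFileSystem(
    basepath="s3://my-bucket/my-folder/",
    # Extra settings are passed straight through to fsspec.filesystem()
    settings={"key": "...", "secret": "..."},
)
block.save("my-remote-fs", overwrite=True)  # persist the block so it can be loaded later

# Methods decorated with @sync_compatible may be called synchronously here
block.write_path("results/output.txt", b"hello")
print(block.read_path("results/output.txt"))  # b"hello"
```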
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.RemoteFileSystem.get_directory","title":"get_directory async","text":"

Downloads a directory from a given remote path to a local directory.

Defaults to downloading the entire contents of the block's basepath to the current working directory.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> None:\n\"\"\"\n    Downloads a directory from a given remote path to a local direcotry.\n\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    if from_path is None:\n        from_path = str(self.basepath)\n    else:\n        from_path = self._resolve_path(from_path)\n\n    if local_path is None:\n        local_path = Path(\".\").absolute()\n\n    # validate that from_path has a trailing slash for proper fsspec behavior across versions\n    if not from_path.endswith(\"/\"):\n        from_path += \"/\"\n\n    return self.filesystem.get(from_path, local_path, recursive=True)\n
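As a usage sketch (the block name and paths below are hypothetical), downloading a remote folder into the working directory might look like this:

```python
# Sketch: pull a remote directory down locally; names are placeholders.
from prefect.filesystems import RemoteFileSystem

block = RemoteFileSystem.load("my-remote-fs")  # assumes this block was saved previously

# With no arguments, everything under the block's basepath is copied
# into the current working directory.
block.get_directory()

# Or copy a subfolder (resolved against the basepath) into a specific local path.
block.get_directory(from_path="data/raw", local_path="./raw")
```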
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.RemoteFileSystem.put_directory","title":"put_directory async","text":"

Uploads a directory from a given local path to a remote directory.

Defaults to uploading the entire contents of the current working directory to the block's basepath.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n    overwrite: bool = True,\n) -> int:\n\"\"\"\n    Uploads a directory from a given local path to a remote direcotry.\n\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    if to_path is None:\n        to_path = str(self.basepath)\n    else:\n        to_path = self._resolve_path(to_path)\n\n    if local_path is None:\n        local_path = \".\"\n\n    included_files = None\n    if ignore_file:\n        with open(ignore_file, \"r\") as f:\n            ignore_patterns = f.readlines()\n\n        included_files = filter_files(\n            local_path, ignore_patterns, include_dirs=True\n        )\n\n    counter = 0\n    for f in Path(local_path).rglob(\"*\"):\n        relative_path = f.relative_to(local_path)\n        if included_files and str(relative_path) not in included_files:\n            continue\n\n        if to_path.endswith(\"/\"):\n            fpath = to_path + relative_path.as_posix()\n        else:\n            fpath = to_path + \"/\" + relative_path.as_posix()\n\n        if f.is_dir():\n            pass\n        else:\n            f = f.as_posix()\n            if overwrite:\n                self.filesystem.put_file(f, fpath, overwrite=True)\n            else:\n                self.filesystem.put_file(f, fpath)\n\n            counter += 1\n\n    return counter\n
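Conversely, a sketch of uploading a project directory while honouring an ignore file; the `.prefectignore` filename is only an example of the gitignore-style file this method expects:

```python
# Sketch: upload the current working directory, skipping ignored files.
from prefect.filesystems import RemoteFileSystem

block = RemoteFileSystem.load("my-remote-fs")  # placeholder block name

# Each line of the ignore file is treated as a gitignore-style pattern.
n_uploaded = block.put_directory(
    local_path=".",
    to_path="projects/my-flow",    # resolved against the block's basepath
    ignore_file=".prefectignore",  # assumed to exist alongside the project
)
print(f"Uploaded {n_uploaded} files")
```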
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.S3","title":"S3","text":"

Bases: WritableFileSystem, WritableDeploymentStorage

Store data as a file on AWS S3.

Example

Load stored S3 config:

from prefect.filesystems import S3\n\ns3_block = S3.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
class S3(WritableFileSystem, WritableDeploymentStorage):\n\"\"\"\n    Store data as a file on AWS S3.\n\n    Example:\n        Load stored S3 config:\n        ```python\n        from prefect.filesystems import S3\n\n        s3_block = S3.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"S3\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/1jbV4lceHOjGgunX15lUwT/db88e184d727f721575aeb054a37e277/aws.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#s3\"\n\n    bucket_path: str = Field(\n        default=...,\n        description=\"An S3 bucket path.\",\n        example=\"my-bucket/a-directory-within\",\n    )\n    aws_access_key_id: Optional[SecretStr] = Field(\n        default=None,\n        title=\"AWS Access Key ID\",\n        description=\"Equivalent to the AWS_ACCESS_KEY_ID environment variable.\",\n        example=\"AKIAIOSFODNN7EXAMPLE\",\n    )\n    aws_secret_access_key: Optional[SecretStr] = Field(\n        default=None,\n        title=\"AWS Secret Access Key\",\n        description=\"Equivalent to the AWS_SECRET_ACCESS_KEY environment variable.\",\n        example=\"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\",\n    )\n\n    _remote_file_system: RemoteFileSystem = None\n\n    @property\n    def basepath(self) -> str:\n        return f\"s3://{self.bucket_path}\"\n\n    @property\n    def filesystem(self) -> RemoteFileSystem:\n        settings = {}\n        if self.aws_access_key_id:\n            settings[\"key\"] = self.aws_access_key_id.get_secret_value()\n        if self.aws_secret_access_key:\n            settings[\"secret\"] = self.aws_secret_access_key.get_secret_value()\n        self._remote_file_system = RemoteFileSystem(\n            basepath=f\"s3://{self.bucket_path}\", settings=settings\n        )\n        return self._remote_file_system\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> bytes:\n\"\"\"\n        Downloads a directory from a given remote path to a local directory.\n\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        return await self.filesystem.get_directory(\n            from_path=from_path, local_path=local_path\n        )\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n\"\"\"\n        Uploads a directory from a given local path to a remote directory.\n\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        return await self.filesystem.put_directory(\n            local_path=local_path, to_path=to_path, ignore_file=ignore_file\n        )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        return await self.filesystem.read_path(path)\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        return await self.filesystem.write_path(path=path, content=content)\n
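A minimal sketch of configuring the S3 block with explicit credentials and saving it for reuse; the bucket path, block name, and credential values are placeholders:

```python
# Sketch: create and persist an S3 filesystem block; all values are placeholders.
from prefect.filesystems import S3

s3_block = S3(
    bucket_path="my-bucket/a-directory-within",
    aws_access_key_id="AKIA...",   # if omitted, s3fs typically falls back to ambient AWS credentials
    aws_secret_access_key="...",
)
s3_block.save("my-s3-storage", overwrite=True)

# Later, load the block and use it like any other filesystem block.
s3_block = S3.load("my-s3-storage")
s3_block.put_directory(local_path=".", to_path="flows/my-project")
```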
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.S3.get_directory","title":"get_directory async","text":"

Downloads a directory from a given remote path to a local directory.

Defaults to downloading the entire contents of the block's basepath to the current working directory.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n\"\"\"\n    Downloads a directory from a given remote path to a local directory.\n\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    return await self.filesystem.get_directory(\n        from_path=from_path, local_path=local_path\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.S3.put_directory","title":"put_directory async","text":"

Uploads a directory from a given local path to a remote directory.

Defaults to uploading the entire contents of the current working directory to the block's basepath.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n\"\"\"\n    Uploads a directory from a given local path to a remote directory.\n\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    return await self.filesystem.put_directory(\n        local_path=local_path, to_path=to_path, ignore_file=ignore_file\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.SMB","title":"SMB","text":"

Bases: WritableFileSystem, WritableDeploymentStorage

Store data as a file on an SMB share.

Example

Load stored SMB config:

from prefect.filesystems import SMB\nsmb_block = SMB.load(\"BLOCK_NAME\")\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
class SMB(WritableFileSystem, WritableDeploymentStorage):\n\"\"\"\n    Store data as a file on a SMB share.\n\n    Example:\n\n        Load stored SMB config:\n\n        ```python\n        from prefect.filesystems import SMB\n        smb_block = SMB.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"SMB\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/6J444m3vW6ukgBOCinSxLk/025f5562d3c165feb7a5df599578a6a8/samba_2010_logo_transparent_151x27.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/concepts/filesystems/#smb\"\n\n    share_path: str = Field(\n        default=...,\n        description=\"SMB target (requires <SHARE>, followed by <PATH>).\",\n        example=\"/SHARE/dir/subdir\",\n    )\n    smb_username: Optional[SecretStr] = Field(\n        default=None,\n        title=\"SMB Username\",\n        description=\"Username with access to the target SMB SHARE.\",\n    )\n    smb_password: Optional[SecretStr] = Field(\n        default=None, title=\"SMB Password\", description=\"Password for SMB access.\"\n    )\n    smb_host: str = Field(\n        default=..., tile=\"SMB server/hostname\", description=\"SMB server/hostname.\"\n    )\n    smb_port: Optional[int] = Field(\n        default=None, title=\"SMB port\", description=\"SMB port (default: 445).\"\n    )\n\n    _remote_file_system: RemoteFileSystem = None\n\n    @property\n    def basepath(self) -> str:\n        return f\"smb://{self.smb_host.rstrip('/')}/{self.share_path.lstrip('/')}\"\n\n    @property\n    def filesystem(self) -> RemoteFileSystem:\n        settings = {}\n        if self.smb_username:\n            settings[\"username\"] = self.smb_username.get_secret_value()\n        if self.smb_password:\n            settings[\"password\"] = self.smb_password.get_secret_value()\n        if self.smb_host:\n            settings[\"host\"] = self.smb_host\n        if self.smb_port:\n            settings[\"port\"] = self.smb_port\n        self._remote_file_system = RemoteFileSystem(\n            basepath=f\"smb://{self.smb_host.rstrip('/')}/{self.share_path.lstrip('/')}\",\n            settings=settings,\n        )\n        return self._remote_file_system\n\n    @sync_compatible\n    async def get_directory(\n        self, from_path: Optional[str] = None, local_path: Optional[str] = None\n    ) -> bytes:\n\"\"\"\n        Downloads a directory from a given remote path to a local directory.\n        Defaults to downloading the entire contents of the block's basepath to the current working directory.\n        \"\"\"\n        return await self.filesystem.get_directory(\n            from_path=from_path, local_path=local_path\n        )\n\n    @sync_compatible\n    async def put_directory(\n        self,\n        local_path: Optional[str] = None,\n        to_path: Optional[str] = None,\n        ignore_file: Optional[str] = None,\n    ) -> int:\n\"\"\"\n        Uploads a directory from a given local path to a remote directory.\n        Defaults to uploading the entire contents of the current working directory to the block's basepath.\n        \"\"\"\n        return await self.filesystem.put_directory(\n            local_path=local_path,\n            to_path=to_path,\n            ignore_file=ignore_file,\n            overwrite=False,\n        )\n\n    @sync_compatible\n    async def read_path(self, path: str) -> bytes:\n        return await self.filesystem.read_path(path)\n\n    @sync_compatible\n    async def write_path(self, path: str, content: bytes) -> str:\n        return await 
self.filesystem.write_path(path=path, content=content)\n
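A comparable sketch for the SMB block; the host, share path, and credentials below are placeholders, and the fsspec SMB driver (backed by `smbprotocol`) is assumed to be installed:

```python
# Sketch: configure an SMB filesystem block; all values are placeholders.
from prefect.filesystems import SMB

smb_block = SMB(
    share_path="/SHARE/dir/subdir",  # <SHARE> followed by the path within it
    smb_host="fileserver.local",
    smb_username="svc-prefect",
    smb_password="********",
    smb_port=445,                    # optional; 445 is the usual default
)
smb_block.save("my-smb-share", overwrite=True)

# Reads and writes go through the same WritableFileSystem interface.
smb_block.write_path("reports/latest.txt", b"ok")
```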
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.SMB.get_directory","title":"get_directory async","text":"

Downloads a directory from a given remote path to a local directory. Defaults to downloading the entire contents of the block's basepath to the current working directory.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def get_directory(\n    self, from_path: Optional[str] = None, local_path: Optional[str] = None\n) -> bytes:\n\"\"\"\n    Downloads a directory from a given remote path to a local directory.\n    Defaults to downloading the entire contents of the block's basepath to the current working directory.\n    \"\"\"\n    return await self.filesystem.get_directory(\n        from_path=from_path, local_path=local_path\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/filesystems/#prefect.filesystems.SMB.put_directory","title":"put_directory async","text":"

Uploads a directory from a given local path to a remote directory. Defaults to uploading the entire contents of the current working directory to the block's basepath.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/filesystems.py
@sync_compatible\nasync def put_directory(\n    self,\n    local_path: Optional[str] = None,\n    to_path: Optional[str] = None,\n    ignore_file: Optional[str] = None,\n) -> int:\n\"\"\"\n    Uploads a directory from a given local path to a remote directory.\n    Defaults to uploading the entire contents of the current working directory to the block's basepath.\n    \"\"\"\n    return await self.filesystem.put_directory(\n        local_path=local_path,\n        to_path=to_path,\n        ignore_file=ignore_file,\n        overwrite=False,\n    )\n
","tags":["Python API","filesystems","LocalFileSystem","RemoteFileSystem"]},{"location":"api-ref/prefect/flows/","title":"prefect.flows","text":"","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows","title":"prefect.flows","text":"

Module containing the base workflow class and decorator - for most use cases, using the @flow decorator is preferred.

","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow","title":"Flow","text":"

Bases: Generic[P, R]

A Prefect workflow definition.

Note

We recommend using the @flow decorator for most use-cases.

Wraps a function with an entrypoint to the Prefect engine. To preserve the input and output types, we use the generic type variables P and R for \"Parameters\" and \"Returns\" respectively.

Parameters:

Name Type Description Default fn Callable[P, R]

The function defining the workflow.

required name Optional[str]

An optional name for the flow; if not provided, the name will be inferred from the given function.

None version Optional[str]

An optional version string for the flow; if not provided, we will attempt to create a version string as a hash of the file containing the wrapped function; if the file cannot be located, the version will be null.

None flow_run_name Optional[Union[Callable[[], str], str]]

An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.

None task_runner Union[Type[BaseTaskRunner], BaseTaskRunner]

An optional task runner to use for task execution within the flow; if not provided, a ConcurrentTaskRunner will be used.

ConcurrentTaskRunner description str

An optional string description for the flow; if not provided, the description will be pulled from the docstring for the decorated function.

None timeout_seconds Union[int, float]

An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called.

None validate_parameters bool

By default, parameters passed to flows are validated by Pydantic. This will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type; for example, if a parameter is defined as x: int and \"5\" is passed, it will be resolved to 5. If set to False, no validation will be performed on flow parameters.

True retries Optional[int]

An optional number of times to retry on flow run failure.

None retry_delay_seconds Optional[Union[int, float]]

An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero.

None persist_result Optional[bool]

An optional toggle indicating whether the result of this flow should be persisted to result storage. Defaults to None, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.

None result_storage Optional[ResultStorage]

An optional block to use to persist the result of this flow. This value will be used as the default for any tasks in this flow. If not provided, the local file system will be used unless called as a subflow, at which point the default will be loaded from the parent flow.

None result_serializer Optional[ResultSerializer]

An optional serializer to use to serialize the result of this flow for persistence. This value will be used as the default for any tasks in this flow. If not provided, the value of PREFECT_RESULTS_DEFAULT_SERIALIZER will be used unless called as a subflow, at which point the default will be loaded from the parent flow.

None on_failure Optional[List[Callable[[FlowSchema, FlowRun, State], None]]]

An optional list of callables to run when the flow enters a failed state.

None on_completion Optional[List[Callable[[FlowSchema, FlowRun, State], None]]]

An optional list of callables to run when the flow enters a completed state.

None on_cancellation Optional[List[Callable[[FlowSchema, FlowRun, State], None]]]

An optional list of callables to run when the flow enters a cancelling state.

None on_crashed Optional[List[Callable[[FlowSchema, FlowRun, State], None]]]

An optional list of callables to run when the flow enters a crashed state.

None Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
@PrefectObjectRegistry.register_instances\nclass Flow(Generic[P, R]):\n\"\"\"\n    A Prefect workflow definition.\n\n    !!! note\n        We recommend using the [`@flow` decorator][prefect.flows.flow] for most use-cases.\n\n    Wraps a function with an entrypoint to the Prefect engine. To preserve the input\n    and output types, we use the generic type variables `P` and `R` for \"Parameters\" and\n    \"Returns\" respectively.\n\n    Args:\n        fn: The function defining the workflow.\n        name: An optional name for the flow; if not provided, the name will be inferred\n            from the given function.\n        version: An optional version string for the flow; if not provided, we will\n            attempt to create a version string as a hash of the file containing the\n            wrapped function; if the file cannot be located, the version will be null.\n        flow_run_name: An optional name to distinguish runs of this flow; this name can\n            be provided as a string template with the flow's parameters as variables,\n            or a function that returns a string.\n        task_runner: An optional task runner to use for task execution within the flow;\n            if not provided, a `ConcurrentTaskRunner` will be used.\n        description: An optional string description for the flow; if not provided, the\n            description will be pulled from the docstring for the decorated function.\n        timeout_seconds: An optional number of seconds indicating a maximum runtime for\n            the flow. If the flow exceeds this runtime, it will be marked as failed.\n            Flow execution may continue until the next task is called.\n        validate_parameters: By default, parameters passed to flows are validated by\n            Pydantic. This will check that input values conform to the annotated types\n            on the function. Where possible, values will be coerced into the correct\n            type; for example, if a parameter is defined as `x: int` and \"5\" is passed,\n            it will be resolved to `5`. If set to `False`, no validation will be\n            performed on flow parameters.\n        retries: An optional number of times to retry on flow run failure.\n        retry_delay_seconds: An optional number of seconds to wait before retrying the\n            flow after failure. This is only applicable if `retries` is nonzero.\n        persist_result: An optional toggle indicating whether the result of this flow\n            should be persisted to result storage. Defaults to `None`, which indicates\n            that Prefect should choose whether the result should be persisted depending on\n            the features being used.\n        result_storage: An optional block to use to perist the result of this flow.\n            This value will be used as the default for any tasks in this flow.\n            If not provided, the local file system will be used unless called as\n            a subflow, at which point the default will be loaded from the parent flow.\n        result_serializer: An optional serializer to use to serialize the result of this\n            flow for persistence. This value will be used as the default for any tasks\n            in this flow. 
If not provided, the value of `PREFECT_RESULTS_DEFAULT_SERIALIZER`\n            will be used unless called as a subflow, at which point the default will be\n            loaded from the parent flow.\n        on_failure: An optional list of callables to run when the flow enters a failed state.\n        on_completion: An optional list of callables to run when the flow enters a completed state.\n        on_cancellation: An optional list of callables to run when the flow enters a cancelling state.\n        on_crashed: An optional list of callables to run when the flow enters a crashed state.\n    \"\"\"\n\n    # NOTE: These parameters (types, defaults, and docstrings) should be duplicated\n    #       exactly in the @flow decorator\n    def __init__(\n        self,\n        fn: Callable[P, R],\n        name: Optional[str] = None,\n        version: Optional[str] = None,\n        flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n        retries: Optional[int] = None,\n        retry_delay_seconds: Optional[Union[int, float]] = None,\n        task_runner: Union[Type[BaseTaskRunner], BaseTaskRunner] = ConcurrentTaskRunner,\n        description: str = None,\n        timeout_seconds: Union[int, float] = None,\n        validate_parameters: bool = True,\n        persist_result: Optional[bool] = None,\n        result_storage: Optional[ResultStorage] = None,\n        result_serializer: Optional[ResultSerializer] = None,\n        cache_result_in_memory: bool = True,\n        log_prints: Optional[bool] = None,\n        on_completion: Optional[\n            List[Callable[[FlowSchema, FlowRun, State], None]]\n        ] = None,\n        on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n        on_cancellation: Optional[\n            List[Callable[[FlowSchema, FlowRun, State], None]]\n        ] = None,\n        on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    ):\n        # Validate if hook passed is list and contains callables\n        hook_categories = [on_completion, on_failure, on_cancellation, on_crashed]\n        hook_names = [\"on_completion\", \"on_failure\", \"on_cancellation\", \"on_crashed\"]\n        for hooks, hook_name in zip(hook_categories, hook_names):\n            if hooks is not None:\n                if not hooks:\n                    raise ValueError(f\"Empty list passed for '{hook_name}'\")\n                try:\n                    hooks = list(hooks)\n                except TypeError:\n                    raise TypeError(\n                        f\"Expected iterable for '{hook_name}'; got\"\n                        f\" {type(hooks).__name__} instead. Please provide a list of\"\n                        f\" hooks to '{hook_name}':\\n\\n\"\n                        f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n                        \" my_flow():\\n\\tpass\"\n                    )\n\n                for hook in hooks:\n                    if not callable(hook):\n                        raise TypeError(\n                            f\"Expected callables in '{hook_name}'; got\"\n                            f\" {type(hook).__name__} instead. 
Please provide a list of\"\n                            f\" hooks to '{hook_name}':\\n\\n\"\n                            f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n                            \" my_flow():\\n\\tpass\"\n                        )\n\n        if not callable(fn):\n            raise TypeError(\"'fn' must be callable\")\n\n        # Validate name if given\n        if name:\n            raise_on_name_with_banned_characters(name)\n\n        self.name = name or fn.__name__.replace(\"_\", \"-\")\n\n        if flow_run_name is not None:\n            if not isinstance(flow_run_name, str) and not callable(flow_run_name):\n                raise TypeError(\n                    \"Expected string or callable for 'flow_run_name'; got\"\n                    f\" {type(flow_run_name).__name__} instead.\"\n                )\n        self.flow_run_name = flow_run_name\n\n        task_runner = task_runner or ConcurrentTaskRunner()\n        self.task_runner = (\n            task_runner() if isinstance(task_runner, type) else task_runner\n        )\n\n        self.log_prints = log_prints\n\n        self.description = description or inspect.getdoc(fn)\n        update_wrapper(self, fn)\n        self.fn = fn\n        self.isasync = is_async_fn(self.fn)\n\n        raise_for_reserved_arguments(self.fn, [\"return_state\", \"wait_for\"])\n\n        # Version defaults to a hash of the function's file\n        flow_file = inspect.getsourcefile(self.fn)\n        if not version:\n            try:\n                version = file_hash(flow_file)\n            except (FileNotFoundError, TypeError, OSError):\n                pass  # `getsourcefile` can return null values and \"<stdin>\" for objects in repls\n        self.version = version\n\n        self.timeout_seconds = float(timeout_seconds) if timeout_seconds else None\n\n        # FlowRunPolicy settings\n        # TODO: We can instantiate a `FlowRunPolicy` and add Pydantic bound checks to\n        #       validate that the user passes positive numbers here\n        self.retries = (\n            retries if retries is not None else PREFECT_FLOW_DEFAULT_RETRIES.value()\n        )\n\n        self.retry_delay_seconds = (\n            retry_delay_seconds\n            if retry_delay_seconds is not None\n            else PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS.value()\n        )\n\n        self.parameters = parameter_schema(self.fn)\n        self.should_validate_parameters = validate_parameters\n\n        if self.should_validate_parameters:\n            # Try to create the validated function now so that incompatibility can be\n            # raised at declaration time rather than at runtime\n            # We cannot, however, store the validated function on the flow because it\n            # is not picklable in some environments\n            try:\n                ValidatedFunction(self.fn, config=None)\n            except pydantic.ConfigError as exc:\n                raise ValueError(\n                    \"Flow function is not compatible with `validate_parameters`. 
\"\n                    \"Disable validation or change the argument names.\"\n                ) from exc\n\n        self.persist_result = persist_result\n        self.result_storage = result_storage\n        self.result_serializer = result_serializer\n        self.cache_result_in_memory = cache_result_in_memory\n\n        # Check for collision in the registry\n        registry = PrefectObjectRegistry.get()\n\n        if registry and any(\n            other\n            for other in registry.get_instances(Flow)\n            if other.name == self.name and id(other.fn) != id(self.fn)\n        ):\n            file = inspect.getsourcefile(self.fn)\n            line_number = inspect.getsourcelines(self.fn)[1]\n            warnings.warn(\n                f\"A flow named {self.name!r} and defined at '{file}:{line_number}' \"\n                \"conflicts with another flow. Consider specifying a unique `name` \"\n                \"parameter in the flow definition:\\n\\n \"\n                \"`@flow(name='my_unique_name', ...)`\"\n            )\n        self.on_completion = on_completion\n        self.on_failure = on_failure\n        self.on_cancellation = on_cancellation\n        self.on_crashed = on_crashed\n\n    def with_options(\n        self,\n        *,\n        name: str = None,\n        version: str = None,\n        retries: int = 0,\n        retry_delay_seconds: Union[int, float] = 0,\n        description: str = None,\n        flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n        task_runner: Union[Type[BaseTaskRunner], BaseTaskRunner] = None,\n        timeout_seconds: Union[int, float] = None,\n        validate_parameters: bool = None,\n        persist_result: Optional[bool] = NotSet,\n        result_storage: Optional[ResultStorage] = NotSet,\n        result_serializer: Optional[ResultSerializer] = NotSet,\n        cache_result_in_memory: bool = None,\n        log_prints: Optional[bool] = NotSet,\n        on_completion: Optional[\n            List[Callable[[FlowSchema, FlowRun, State], None]]\n        ] = None,\n        on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n        on_cancellation: Optional[\n            List[Callable[[FlowSchema, FlowRun, State], None]]\n        ] = None,\n        on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    ):\n\"\"\"\n        Create a new flow from the current object, updating provided options.\n\n        Args:\n            name: A new name for the flow.\n            version: A new version for the flow.\n            description: A new description for the flow.\n            flow_run_name: An optional name to distinguish runs of this flow; this name\n                can be provided as a string template with the flow's parameters as variables,\n                or a function that returns a string.\n            task_runner: A new task runner for the flow.\n            timeout_seconds: A new number of seconds to fail the flow after if still\n                running.\n            validate_parameters: A new value indicating if flow calls should validate\n                given parameters.\n            retries: A new number of times to retry on flow run failure.\n            retry_delay_seconds: A new number of seconds to wait before retrying the\n                flow after failure. 
This is only applicable if `retries` is nonzero.\n            persist_result: A new option for enabling or disabling result persistence.\n            result_storage: A new storage type to use for results.\n            result_serializer: A new serializer to use for results.\n            cache_result_in_memory: A new value indicating if the flow's result should\n                be cached in memory.\n            on_failure: A new list of callables to run when the flow enters a failed state.\n            on_completion: A new list of callables to run when the flow enters a completed state.\n            on_cancellation: A new list of callables to run when the flow enters a cancelling state.\n            on_crashed: A new list of callables to run when the flow enters a crashed state.\n\n        Returns:\n            A new `Flow` instance.\n\n        Examples:\n\n            Create a new flow from an existing flow and update the name:\n\n            >>> @flow(name=\"My flow\")\n            >>> def my_flow():\n            >>>     return 1\n            >>>\n            >>> new_flow = my_flow.with_options(name=\"My new flow\")\n\n            Create a new flow from an existing flow, update the task runner, and call\n            it without an intermediate variable:\n\n            >>> from prefect.task_runners import SequentialTaskRunner\n            >>>\n            >>> @flow\n            >>> def my_flow(x, y):\n            >>>     return x + y\n            >>>\n            >>> state = my_flow.with_options(task_runner=SequentialTaskRunner)(1, 3)\n            >>> assert state.result() == 4\n\n        \"\"\"\n        return Flow(\n            fn=self.fn,\n            name=name or self.name,\n            description=description or self.description,\n            flow_run_name=flow_run_name,\n            version=version or self.version,\n            task_runner=task_runner or self.task_runner,\n            retries=retries or self.retries,\n            retry_delay_seconds=retry_delay_seconds or self.retry_delay_seconds,\n            timeout_seconds=(\n                timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n            ),\n            validate_parameters=(\n                validate_parameters\n                if validate_parameters is not None\n                else self.should_validate_parameters\n            ),\n            persist_result=(\n                persist_result if persist_result is not NotSet else self.persist_result\n            ),\n            result_storage=(\n                result_storage if result_storage is not NotSet else self.result_storage\n            ),\n            result_serializer=(\n                result_serializer\n                if result_serializer is not NotSet\n                else self.result_serializer\n            ),\n            cache_result_in_memory=(\n                cache_result_in_memory\n                if cache_result_in_memory is not None\n                else self.cache_result_in_memory\n            ),\n            log_prints=log_prints if log_prints is not NotSet else self.log_prints,\n            on_completion=on_completion or self.on_completion,\n            on_failure=on_failure or self.on_failure,\n            on_cancellation=on_cancellation or self.on_cancellation,\n            on_crashed=on_crashed or self.on_crashed,\n        )\n\n    def validate_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n\"\"\"\n        Validate parameters for compatibility with the flow by attempting to cast the inputs to the\n   
     associated types specified by the function's type annotations.\n\n        Returns:\n            A new dict of parameters that have been cast to the appropriate types\n\n        Raises:\n            ParameterTypeError: if the provided parameters are not valid\n        \"\"\"\n        validated_fn = ValidatedFunction(self.fn, config=None)\n        args, kwargs = parameters_to_args_kwargs(self.fn, parameters)\n\n        try:\n            model = validated_fn.init_model_instance(*args, **kwargs)\n        except pydantic.ValidationError as exc:\n            # We capture the pydantic exception and raise our own because the pydantic\n            # exception is not picklable when using a cythonized pydantic installation\n            raise ParameterTypeError.from_validation_error(exc) from None\n\n        # Get the updated parameter dict with cast values from the model\n        cast_parameters = {\n            k: v\n            for k, v in model._iter()\n            if k in model.__fields_set__ or model.__fields__[k].default_factory\n        }\n        return cast_parameters\n\n    def serialize_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n\"\"\"\n        Convert parameters to a serializable form.\n\n        Uses FastAPI's `jsonable_encoder` to convert to JSON compatible objects without\n        converting everything directly to a string. This maintains basic types like\n        integers during API roundtrips.\n        \"\"\"\n        serialized_parameters = {}\n        for key, value in parameters.items():\n            try:\n                serialized_parameters[key] = jsonable_encoder(value)\n            except (TypeError, ValueError):\n                logger.debug(\n                    f\"Parameter {key!r} for flow {self.name!r} is of unserializable \"\n                    f\"type {type(value).__name__!r} and will not be stored \"\n                    \"in the backend.\"\n                )\n                serialized_parameters[key] = f\"<{type(value).__name__}>\"\n        return serialized_parameters\n\n    @overload\n    def __call__(self: \"Flow[P, NoReturn]\", *args: P.args, **kwargs: P.kwargs) -> None:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def __call__(\n        self: \"Flow[P, Coroutine[Any, Any, T]]\", *args: P.args, **kwargs: P.kwargs\n    ) -> Awaitable[T]:\n        ...\n\n    @overload\n    def __call__(\n        self: \"Flow[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> T:\n        ...\n\n    @overload\n    def __call__(\n        self: \"Flow[P, T]\",\n        *args: P.args,\n        return_state: Literal[True],\n        **kwargs: P.kwargs,\n    ) -> State[T]:\n        ...\n\n    def __call__(\n        self,\n        *args: \"P.args\",\n        return_state: bool = False,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: \"P.kwargs\",\n    ):\n\"\"\"\n        Run the flow and return its result.\n\n\n        Flow parameter values must be serializable by Pydantic.\n\n        If writing an async flow, this call must be awaited.\n\n        This will create a new flow run in the API.\n\n        Args:\n            *args: Arguments to run the flow with.\n            return_state: Return a Prefect State containing the result of the\n                flow run.\n            wait_for: Upstream task futures to wait for before starting the flow if called as a subflow\n    
        **kwargs: Keyword arguments to run the flow with.\n\n        Returns:\n            If `return_state` is False, returns the result of the flow run.\n            If `return_state` is True, returns the result of the flow run\n                wrapped in a Prefect State which provides error handling.\n\n        Examples:\n\n            Define a flow\n\n            >>> @flow\n            >>> def my_flow(name):\n            >>>     print(f\"hello {name}\")\n            >>>     return f\"goodbye {name}\"\n\n            Run a flow\n\n            >>> my_flow(\"marvin\")\n            hello marvin\n            \"goodbye marvin\"\n\n            Run a flow with additional tags\n\n            >>> from prefect import tags\n            >>> with tags(\"db\", \"blue\"):\n            >>>     my_flow(\"foo\")\n        \"\"\"\n        from prefect.engine import enter_flow_run_engine_from_flow_call\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n\n        return_type = \"state\" if return_state else \"result\"\n\n        return enter_flow_run_engine_from_flow_call(\n            self,\n            parameters,\n            wait_for=wait_for,\n            return_type=return_type,\n        )\n\n    @overload\n    def _run(self: \"Flow[P, NoReturn]\", *args: P.args, **kwargs: P.kwargs) -> State[T]:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def _run(\n        self: \"Flow[P, Coroutine[Any, Any, T]]\", *args: P.args, **kwargs: P.kwargs\n    ) -> Awaitable[T]:\n        ...\n\n    @overload\n    def _run(self: \"Flow[P, T]\", *args: P.args, **kwargs: P.kwargs) -> State[T]:\n        ...\n\n    def _run(\n        self,\n        *args: \"P.args\",\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: \"P.kwargs\",\n    ):\n\"\"\"\n        Run the flow and return its final state.\n\n        Examples:\n\n            Run a flow and get the returned result\n\n            >>> state = my_flow._run(\"marvin\")\n            >>> state.result()\n           \"goodbye marvin\"\n        \"\"\"\n        from prefect.engine import enter_flow_run_engine_from_flow_call\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n\n        return enter_flow_run_engine_from_flow_call(\n            self,\n            parameters,\n            wait_for=wait_for,\n            return_type=\"state\",\n        )\n
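The state-change hooks listed above (`on_completion`, `on_failure`, `on_cancellation`, `on_crashed`) each receive the flow, the flow run, and the final state. A small sketch of wiring one up through the decorator; the notification logic is a placeholder:

```python
# Sketch: attach a failure hook to a flow; the print stands in for a real notification.
from prefect import flow

def notify_on_failure(flow, flow_run, state):
    # Each hook receives the Flow object, the FlowRun, and the terminal State.
    print(f"Flow run {flow_run.name!r} failed: {state.message}")

@flow(on_failure=[notify_on_failure], retries=2, retry_delay_seconds=5)
def fragile_flow():
    raise RuntimeError("boom")
```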
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.serialize_parameters","title":"serialize_parameters","text":"

Convert parameters to a serializable form.

Uses FastAPI's jsonable_encoder to convert to JSON compatible objects without converting everything directly to a string. This maintains basic types like integers during API roundtrips.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
def serialize_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n\"\"\"\n    Convert parameters to a serializable form.\n\n    Uses FastAPI's `jsonable_encoder` to convert to JSON compatible objects without\n    converting everything directly to a string. This maintains basic types like\n    integers during API roundtrips.\n    \"\"\"\n    serialized_parameters = {}\n    for key, value in parameters.items():\n        try:\n            serialized_parameters[key] = jsonable_encoder(value)\n        except (TypeError, ValueError):\n            logger.debug(\n                f\"Parameter {key!r} for flow {self.name!r} is of unserializable \"\n                f\"type {type(value).__name__!r} and will not be stored \"\n                \"in the backend.\"\n            )\n            serialized_parameters[key] = f\"<{type(value).__name__}>\"\n    return serialized_parameters\n
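As a rough sketch of the behaviour above, JSON-encodable values survive the round trip while anything `jsonable_encoder` cannot handle is replaced by a type placeholder; the lock object below is just an example of an unserializable value:

```python
# Sketch: how parameter serialization degrades gracefully for odd values.
import threading
from datetime import date
from prefect import flow

@flow
def my_flow(n: int, when: date, lock=None):
    ...

serialized = my_flow.serialize_parameters(
    {"n": 5, "when": date(2023, 1, 1), "lock": threading.Lock()}
)
# Illustrative outcome: ints stay ints, dates become ISO strings,
# and the lock is stored as a placeholder such as "<lock>".
print(serialized)
```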
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.validate_parameters","title":"validate_parameters","text":"

Validate parameters for compatibility with the flow by attempting to cast the inputs to the associated types specified by the function's type annotations.

Returns:

Type Description Dict[str, Any]

A new dict of parameters that have been cast to the appropriate types

Raises:

Type Description ParameterTypeError

if the provided parameters are not valid

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
def validate_parameters(self, parameters: Dict[str, Any]) -> Dict[str, Any]:\n\"\"\"\n    Validate parameters for compatibility with the flow by attempting to cast the inputs to the\n    associated types specified by the function's type annotations.\n\n    Returns:\n        A new dict of parameters that have been cast to the appropriate types\n\n    Raises:\n        ParameterTypeError: if the provided parameters are not valid\n    \"\"\"\n    validated_fn = ValidatedFunction(self.fn, config=None)\n    args, kwargs = parameters_to_args_kwargs(self.fn, parameters)\n\n    try:\n        model = validated_fn.init_model_instance(*args, **kwargs)\n    except pydantic.ValidationError as exc:\n        # We capture the pydantic exception and raise our own because the pydantic\n        # exception is not picklable when using a cythonized pydantic installation\n        raise ParameterTypeError.from_validation_error(exc) from None\n\n    # Get the updated parameter dict with cast values from the model\n    cast_parameters = {\n        k: v\n        for k, v in model._iter()\n        if k in model.__fields_set__ or model.__fields__[k].default_factory\n    }\n    return cast_parameters\n
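A short sketch of the coercion behaviour: values that can be cast to the annotated types are converted, and incompatible values raise `ParameterTypeError`:

```python
# Sketch: Pydantic-backed parameter validation and coercion.
from prefect import flow
from prefect.exceptions import ParameterTypeError

@flow
def add(x: int, y: int):
    return x + y

print(add.validate_parameters({"x": "5", "y": 2}))  # {'x': 5, 'y': 2}; "5" is coerced to an int

try:
    add.validate_parameters({"x": "not a number", "y": 2})
except ParameterTypeError as exc:
    print(f"Rejected: {exc}")
```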
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.Flow.with_options","title":"with_options","text":"

Create a new flow from the current object, updating provided options.

Parameters:

Name Type Description Default name str

A new name for the flow.

None version str

A new version for the flow.

None description str

A new description for the flow.

None flow_run_name Optional[Union[Callable[[], str], str]]

An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.

None task_runner Union[Type[BaseTaskRunner], BaseTaskRunner]

A new task runner for the flow.

None timeout_seconds Union[int, float]

A new number of seconds after which to fail the flow if it is still running.

None validate_parameters bool

A new value indicating if flow calls should validate given parameters.

None retries int

A new number of times to retry on flow run failure.

0 retry_delay_seconds Union[int, float]

A new number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero.

0 persist_result Optional[bool]

A new option for enabling or disabling result persistence.

NotSet result_storage Optional[ResultStorage]

A new storage type to use for results.

NotSet result_serializer Optional[ResultSerializer]

A new serializer to use for results.

NotSet cache_result_in_memory bool

A new value indicating if the flow's result should be cached in memory.

None on_failure Optional[List[Callable[[FlowSchema, FlowRun, State], None]]]

A new list of callables to run when the flow enters a failed state.

None on_completion Optional[List[Callable[[FlowSchema, FlowRun, State], None]]]

A new list of callables to run when the flow enters a completed state.

None on_cancellation Optional[List[Callable[[FlowSchema, FlowRun, State], None]]]

A new list of callables to run when the flow enters a cancelling state.

None on_crashed Optional[List[Callable[[FlowSchema, FlowRun, State], None]]]

A new list of callables to run when the flow enters a crashed state.

None

Returns:

Type Description

A new Flow instance.

Examples:

Create a new flow from an existing flow and update the name:

>>> @flow(name=\"My flow\")\n>>> def my_flow():\n>>>     return 1\n>>>\n>>> new_flow = my_flow.with_options(name=\"My new flow\")\n

Create a new flow from an existing flow, update the task runner, and call it without an intermediate variable:

>>> from prefect.task_runners import SequentialTaskRunner\n>>>\n>>> @flow\n>>> def my_flow(x, y):\n>>>     return x + y\n>>>\n>>> state = my_flow.with_options(task_runner=SequentialTaskRunner)(1, 3)\n>>> assert state.result() == 4\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
def with_options(\n    self,\n    *,\n    name: str = None,\n    version: str = None,\n    retries: int = 0,\n    retry_delay_seconds: Union[int, float] = 0,\n    description: str = None,\n    flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n    task_runner: Union[Type[BaseTaskRunner], BaseTaskRunner] = None,\n    timeout_seconds: Union[int, float] = None,\n    validate_parameters: bool = None,\n    persist_result: Optional[bool] = NotSet,\n    result_storage: Optional[ResultStorage] = NotSet,\n    result_serializer: Optional[ResultSerializer] = NotSet,\n    cache_result_in_memory: bool = None,\n    log_prints: Optional[bool] = NotSet,\n    on_completion: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], None]]\n    ] = None,\n    on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    on_cancellation: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], None]]\n    ] = None,\n    on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n):\n\"\"\"\n    Create a new flow from the current object, updating provided options.\n\n    Args:\n        name: A new name for the flow.\n        version: A new version for the flow.\n        description: A new description for the flow.\n        flow_run_name: An optional name to distinguish runs of this flow; this name\n            can be provided as a string template with the flow's parameters as variables,\n            or a function that returns a string.\n        task_runner: A new task runner for the flow.\n        timeout_seconds: A new number of seconds to fail the flow after if still\n            running.\n        validate_parameters: A new value indicating if flow calls should validate\n            given parameters.\n        retries: A new number of times to retry on flow run failure.\n        retry_delay_seconds: A new number of seconds to wait before retrying the\n            flow after failure. 
This is only applicable if `retries` is nonzero.\n        persist_result: A new option for enabling or disabling result persistence.\n        result_storage: A new storage type to use for results.\n        result_serializer: A new serializer to use for results.\n        cache_result_in_memory: A new value indicating if the flow's result should\n            be cached in memory.\n        on_failure: A new list of callables to run when the flow enters a failed state.\n        on_completion: A new list of callables to run when the flow enters a completed state.\n        on_cancellation: A new list of callables to run when the flow enters a cancelling state.\n        on_crashed: A new list of callables to run when the flow enters a crashed state.\n\n    Returns:\n        A new `Flow` instance.\n\n    Examples:\n\n        Create a new flow from an existing flow and update the name:\n\n        >>> @flow(name=\"My flow\")\n        >>> def my_flow():\n        >>>     return 1\n        >>>\n        >>> new_flow = my_flow.with_options(name=\"My new flow\")\n\n        Create a new flow from an existing flow, update the task runner, and call\n        it without an intermediate variable:\n\n        >>> from prefect.task_runners import SequentialTaskRunner\n        >>>\n        >>> @flow\n        >>> def my_flow(x, y):\n        >>>     return x + y\n        >>>\n        >>> state = my_flow.with_options(task_runner=SequentialTaskRunner)(1, 3)\n        >>> assert state.result() == 4\n\n    \"\"\"\n    return Flow(\n        fn=self.fn,\n        name=name or self.name,\n        description=description or self.description,\n        flow_run_name=flow_run_name,\n        version=version or self.version,\n        task_runner=task_runner or self.task_runner,\n        retries=retries or self.retries,\n        retry_delay_seconds=retry_delay_seconds or self.retry_delay_seconds,\n        timeout_seconds=(\n            timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n        ),\n        validate_parameters=(\n            validate_parameters\n            if validate_parameters is not None\n            else self.should_validate_parameters\n        ),\n        persist_result=(\n            persist_result if persist_result is not NotSet else self.persist_result\n        ),\n        result_storage=(\n            result_storage if result_storage is not NotSet else self.result_storage\n        ),\n        result_serializer=(\n            result_serializer\n            if result_serializer is not NotSet\n            else self.result_serializer\n        ),\n        cache_result_in_memory=(\n            cache_result_in_memory\n            if cache_result_in_memory is not None\n            else self.cache_result_in_memory\n        ),\n        log_prints=log_prints if log_prints is not NotSet else self.log_prints,\n        on_completion=on_completion or self.on_completion,\n        on_failure=on_failure or self.on_failure,\n        on_cancellation=on_cancellation or self.on_cancellation,\n        on_crashed=on_crashed or self.on_crashed,\n    )\n
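Beyond the examples above, the same pattern can override run-policy options; a sketch of deriving a more resilient copy of an existing flow (names are illustrative):

```python
# Sketch: derive a variant of an existing flow with different retry behaviour.
from prefect import flow

@flow
def etl():
    ...

# The original flow object is untouched; with_options returns a new Flow.
resilient_etl = etl.with_options(
    name="etl-with-retries",
    retries=3,
    retry_delay_seconds=30,
    log_prints=True,
)
```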
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.flow","title":"flow","text":"

Decorator to designate a function as a Prefect workflow.

This decorator may be used for asynchronous or synchronous functions.

Flow parameters must be serializable by Pydantic.

Parameters:

Name Type Description Default name Optional[str]

An optional name for the flow; if not provided, the name will be inferred from the given function.

None version Optional[str]

An optional version string for the flow; if not provided, we will attempt to create a version string as a hash of the file containing the wrapped function; if the file cannot be located, the version will be null.

None flow_run_name Optional[Union[Callable[[], str], str]]

An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.

None task_runner BaseTaskRunner

An optional task runner to use for task execution within the flow; if not provided, a ConcurrentTaskRunner will be instantiated.

ConcurrentTaskRunner description str

An optional string description for the flow; if not provided, the description will be pulled from the docstring for the decorated function.

None timeout_seconds Union[int, float]

An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called.

None validate_parameters bool

By default, parameters passed to flows are validated by Pydantic. This will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type; for example, if a parameter is defined as x: int and \"5\" is passed, it will be resolved to 5. If set to False, no validation will be performed on flow parameters.

True retries int

An optional number of times to retry on flow run failure.

None retry_delay_seconds Union[int, float]

An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero.

None persist_result Optional[bool]

An optional toggle indicating whether the result of this flow should be persisted to result storage. Defaults to None, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.

None result_storage Optional[ResultStorage]

An optional block to use to persist the result of this flow. This value will be used as the default for any tasks in this flow. If not provided, the local file system will be used unless called as a subflow, at which point the default will be loaded from the parent flow.

None result_serializer Optional[ResultSerializer]

An optional serializer to use to serialize the result of this flow for persistence. This value will be used as the default for any tasks in this flow. If not provided, the value of PREFECT_RESULTS_DEFAULT_SERIALIZER will be used unless called as a subflow, at which point the default will be loaded from the parent flow.

None log_prints Optional[bool]

If set, print statements in the flow will be redirected to the Prefect logger for the flow run. Defaults to None, which indicates that the value from the parent flow should be used. If this is a parent flow, the default is pulled from the PREFECT_LOGGING_LOG_PRINTS setting.

None on_completion Optional[List[Callable[[FlowSchema, FlowRun, State], None]]]

An optional list of functions to call when the flow run is completed. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run.

None on_failure Optional[List[Callable[[FlowSchema, FlowRun, State], None]]]

An optional list of functions to call when the flow run fails. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run.

None on_cancellation Optional[List[Callable[[FlowSchema, FlowRun, State], None]]]

An optional list of functions to call when the flow run is cancelled. These functions will be passed the flow, flow run, and final state.

None on_crashed Optional[List[Callable[[FlowSchema, FlowRun, State], None]]]

An optional list of functions to call when the flow run crashes. Each function should accept three arguments: the flow, the flow run, and the final state of the flow run.

None

Returns:

Type Description

A callable Flow object which, when called, will run the flow and return its final state.

Examples:

Define a simple flow

>>> from prefect import flow\n>>> @flow\n>>> def add(x, y):\n>>>     return x + y\n

Define an async flow

>>> @flow\n>>> async def add(x, y):\n>>>     return x + y\n

Define a flow with a version and description

>>> @flow(version=\"first-flow\", description=\"This flow is empty!\")\n>>> def my_flow():\n>>>     pass\n

Define a flow with a custom name

>>> @flow(name=\"The Ultimate Flow\")\n>>> def my_flow():\n>>>     pass\n

Define a flow that submits its tasks to dask

>>> from prefect_dask.task_runners import DaskTaskRunner\n>>>\n>>> @flow(task_runner=DaskTaskRunner)\n>>> def my_flow():\n>>>     pass\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
def flow(\n    __fn=None,\n    *,\n    name: Optional[str] = None,\n    version: Optional[str] = None,\n    flow_run_name: Optional[Union[Callable[[], str], str]] = None,\n    retries: int = None,\n    retry_delay_seconds: Union[int, float] = None,\n    task_runner: BaseTaskRunner = ConcurrentTaskRunner,\n    description: str = None,\n    timeout_seconds: Union[int, float] = None,\n    validate_parameters: bool = True,\n    persist_result: Optional[bool] = None,\n    result_storage: Optional[ResultStorage] = None,\n    result_serializer: Optional[ResultSerializer] = None,\n    cache_result_in_memory: bool = True,\n    log_prints: Optional[bool] = None,\n    on_completion: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    on_failure: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n    on_cancellation: Optional[\n        List[Callable[[FlowSchema, FlowRun, State], None]]\n    ] = None,\n    on_crashed: Optional[List[Callable[[FlowSchema, FlowRun, State], None]]] = None,\n):\n\"\"\"\n    Decorator to designate a function as a Prefect workflow.\n\n    This decorator may be used for asynchronous or synchronous functions.\n\n    Flow parameters must be serializable by Pydantic.\n\n    Args:\n        name: An optional name for the flow; if not provided, the name will be inferred\n            from the given function.\n        version: An optional version string for the flow; if not provided, we will\n            attempt to create a version string as a hash of the file containing the\n            wrapped function; if the file cannot be located, the version will be null.\n        flow_run_name: An optional name to distinguish runs of this flow; this name can\n            be provided as a string template with the flow's parameters as variables,\n            or a function that returns a string.\n        task_runner: An optional task runner to use for task execution within the flow; if\n            not provided, a `ConcurrentTaskRunner` will be instantiated.\n        description: An optional string description for the flow; if not provided, the\n            description will be pulled from the docstring for the decorated function.\n        timeout_seconds: An optional number of seconds indicating a maximum runtime for\n            the flow. If the flow exceeds this runtime, it will be marked as failed.\n            Flow execution may continue until the next task is called.\n        validate_parameters: By default, parameters passed to flows are validated by\n            Pydantic. This will check that input values conform to the annotated types\n            on the function. Where possible, values will be coerced into the correct\n            type; for example, if a parameter is defined as `x: int` and \"5\" is passed,\n            it will be resolved to `5`. If set to `False`, no validation will be\n            performed on flow parameters.\n        retries: An optional number of times to retry on flow run failure.\n        retry_delay_seconds: An optional number of seconds to wait before retrying the\n            flow after failure. This is only applicable if `retries` is nonzero.\n        persist_result: An optional toggle indicating whether the result of this flow\n            should be persisted to result storage. 
Defaults to `None`, which indicates\n            that Prefect should choose whether the result should be persisted depending on\n            the features being used.\n        result_storage: An optional block to use to perist the result of this flow.\n            This value will be used as the default for any tasks in this flow.\n            If not provided, the local file system will be used unless called as\n            a subflow, at which point the default will be loaded from the parent flow.\n        result_serializer: An optional serializer to use to serialize the result of this\n            flow for persistence. This value will be used as the default for any tasks\n            in this flow. If not provided, the value of `PREFECT_RESULTS_DEFAULT_SERIALIZER`\n            will be used unless called as a subflow, at which point the default will be\n            loaded from the parent flow.\n        log_prints: If set, `print` statements in the flow will be redirected to the\n            Prefect logger for the flow run. Defaults to `None`, which indicates that\n            the value from the parent flow should be used. If this is a parent flow,\n            the default is pulled from the `PREFECT_LOGGING_LOG_PRINTS` setting.\n        on_completion: An optional list of functions to call when the flow run is\n            completed. Each function should accept three arguments: the flow, the flow\n            run, and the final state of the flow run.\n        on_failure: An optional list of functions to call when the flow run fails. Each\n            function should accept three arguments: the flow, the flow run, and the\n            final state of the flow run.\n        on_cancellation: An optional list of functions to call when the flow run is\n            cancelled. These functions will be passed the flow, flow run, and final state.\n        on_crashed: An optional list of functions to call when the flow run crashes. 
Each\n            function should accept three arguments: the flow, the flow run, and the\n            final state of the flow run.\n\n    Returns:\n        A callable `Flow` object which, when called, will run the flow and return its\n        final state.\n\n    Examples:\n        Define a simple flow\n\n        >>> from prefect import flow\n        >>> @flow\n        >>> def add(x, y):\n        >>>     return x + y\n\n        Define an async flow\n\n        >>> @flow\n        >>> async def add(x, y):\n        >>>     return x + y\n\n        Define a flow with a version and description\n\n        >>> @flow(version=\"first-flow\", description=\"This flow is empty!\")\n        >>> def my_flow():\n        >>>     pass\n\n        Define a flow with a custom name\n\n        >>> @flow(name=\"The Ultimate Flow\")\n        >>> def my_flow():\n        >>>     pass\n\n        Define a flow that submits its tasks to dask\n\n        >>> from prefect_dask.task_runners import DaskTaskRunner\n        >>>\n        >>> @flow(task_runner=DaskTaskRunner)\n        >>> def my_flow():\n        >>>     pass\n    \"\"\"\n    if __fn:\n        return cast(\n            Flow[P, R],\n            Flow(\n                fn=__fn,\n                name=name,\n                version=version,\n                flow_run_name=flow_run_name,\n                task_runner=task_runner,\n                description=description,\n                timeout_seconds=timeout_seconds,\n                validate_parameters=validate_parameters,\n                retries=retries,\n                retry_delay_seconds=retry_delay_seconds,\n                persist_result=persist_result,\n                result_storage=result_storage,\n                result_serializer=result_serializer,\n                cache_result_in_memory=cache_result_in_memory,\n                log_prints=log_prints,\n                on_completion=on_completion,\n                on_failure=on_failure,\n                on_cancellation=on_cancellation,\n                on_crashed=on_crashed,\n            ),\n        )\n    else:\n        return cast(\n            Callable[[Callable[P, R]], Flow[P, R]],\n            partial(\n                flow,\n                name=name,\n                version=version,\n                flow_run_name=flow_run_name,\n                task_runner=task_runner,\n                description=description,\n                timeout_seconds=timeout_seconds,\n                validate_parameters=validate_parameters,\n                retries=retries,\n                retry_delay_seconds=retry_delay_seconds,\n                persist_result=persist_result,\n                result_storage=result_storage,\n                result_serializer=result_serializer,\n                cache_result_in_memory=cache_result_in_memory,\n                log_prints=log_prints,\n                on_completion=on_completion,\n                on_failure=on_failure,\n                on_cancellation=on_cancellation,\n                on_crashed=on_crashed,\n            ),\n        )\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flow_from_entrypoint","title":"load_flow_from_entrypoint","text":"

Extract a flow object from a script at an entrypoint by running all of the code in the file.

Parameters:

Name Type Description Default entrypoint str

a string in the format <path_to_script>:<flow_func_name>

required

Returns:

Type Description Flow

The flow object from the script

Raises:

Type Description FlowScriptError

If an exception is encountered while running the script

MissingFlowError

If the flow function specified in the entrypoint does not exist
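
For illustration, a minimal sketch of loading a flow by entrypoint; the script path and function name below are assumptions, not real files:

>>> from prefect.flows import load_flow_from_entrypoint\n>>> # \"flows/etl.py\" is assumed to define a function decorated with @flow named etl\n>>> my_flow = load_flow_from_entrypoint(\"flows/etl.py:etl\")\n>>> my_flow.name\n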

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
def load_flow_from_entrypoint(entrypoint: str) -> Flow:\n\"\"\"\n    Extract a flow object from a script at an entrypoint by running all of the code in the file.\n\n    Args:\n        entrypoint: a string in the format `<path_to_script>:<flow_func_name>`\n\n    Returns:\n        The flow object from the script\n\n    Raises:\n        FlowScriptError: If an exception is encountered while running the script\n        MissingFlowError: If the flow function specified in the entrypoint does not exist\n    \"\"\"\n    with PrefectObjectRegistry(\n        block_code_execution=True,\n        capture_failures=True,\n    ):\n        path, func_name = entrypoint.split(\":\")\n        try:\n            flow = import_object(entrypoint)\n        except AttributeError as exc:\n            raise MissingFlowError(\n                f\"Flow function with name {func_name!r} not found in {path!r}. \"\n            ) from exc\n\n        if not isinstance(flow, Flow):\n            raise MissingFlowError(\n                f\"Function with name {func_name!r} is not a flow. Make sure that it is \"\n                \"decorated with '@flow'.\"\n            )\n\n        return flow\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flow_from_script","title":"load_flow_from_script","text":"

Extract a flow object from a script by running all of the code in the file.

If the script has multiple flows in it, a flow name must be provided to specify the flow to return.

Parameters:

Name Type Description Default path str

A path to a Python script containing flows

required flow_name str

An optional flow name to look for in the script

None

Returns:

Type Description Flow

The flow object from the script

Raises:

Type Description FlowScriptError

If an exception is encountered while running the script

MissingFlowError

If no flows exist in the iterable

MissingFlowError

If a flow name is provided and that flow does not exist

UnspecifiedFlowError

If multiple flows exist but no flow name was provided
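
As an illustrative sketch, assuming a script at \"flows/etl.py\" that defines more than one flow, one of which is named \"etl\":

>>> from prefect.flows import load_flow_from_script\n>>> # flow_name selects one flow when the script defines several\n>>> my_flow = load_flow_from_script(\"flows/etl.py\", flow_name=\"etl\")\n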

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
def load_flow_from_script(path: str, flow_name: str = None) -> Flow:\n\"\"\"\n    Extract a flow object from a script by running all of the code in the file.\n\n    If the script has multiple flows in it, a flow name must be provided to specify\n    the flow to return.\n\n    Args:\n        path: A path to a Python script containing flows\n        flow_name: An optional flow name to look for in the script\n\n    Returns:\n        The flow object from the script\n\n    Raises:\n        FlowScriptError: If an exception is encountered while running the script\n        MissingFlowError: If no flows exist in the iterable\n        MissingFlowError: If a flow name is provided and that flow does not exist\n        UnspecifiedFlowError: If multiple flows exist but no flow name was provided\n    \"\"\"\n    return select_flow(\n        load_flows_from_script(path),\n        flow_name=flow_name,\n        from_message=f\"in script '{path}'\",\n    )\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flow_from_text","title":"load_flow_from_text","text":"

Load a flow from a text script.

The script will be written to a temporary local file path so errors can refer to line numbers and contextual tracebacks can be provided.
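
A minimal sketch; here script_text is an assumed variable holding the source of a script that defines a single flow named \"hello\":

>>> from prefect.flows import load_flow_from_text\n>>> # script_text is assumed to contain Python source with an @flow-decorated function\n>>> my_flow = load_flow_from_text(script_text, flow_name=\"hello\")\n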

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
def load_flow_from_text(script_contents: AnyStr, flow_name: str):\n\"\"\"\n    Load a flow from a text script.\n\n    The script will be written to a temporary local file path so errors can refer\n    to line numbers and contextual tracebacks can be provided.\n    \"\"\"\n    with NamedTemporaryFile(\n        mode=\"wt\" if isinstance(script_contents, str) else \"wb\",\n        prefix=f\"flow-script-{flow_name}\",\n        suffix=\".py\",\n        delete=False,\n    ) as tmpfile:\n        tmpfile.write(script_contents)\n        tmpfile.flush()\n    try:\n        flow = load_flow_from_script(tmpfile.name, flow_name=flow_name)\n    finally:\n        # windows compat\n        tmpfile.close()\n        os.remove(tmpfile.name)\n    return flow\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.load_flows_from_script","title":"load_flows_from_script","text":"

Load all flow objects from the given python script. All of the code in the file will be executed.

Returns:

Type Description List[Flow]

A list of flows

Raises:

Type Description FlowScriptError

If an exception is encountered while running the script
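
A minimal sketch of collecting every flow defined in a script (the path is illustrative):

>>> from prefect.flows import load_flows_from_script\n>>> flows = load_flows_from_script(\"flows/etl.py\")\n>>> [f.name for f in flows]  # names of all flows found in the file\n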

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
def load_flows_from_script(path: str) -> List[Flow]:\n\"\"\"\n    Load all flow objects from the given python script. All of the code in the file\n    will be executed.\n\n    Returns:\n        A list of flows\n\n    Raises:\n        FlowScriptError: If an exception is encountered while running the script\n    \"\"\"\n    return registry_from_script(path).get_instances(Flow)\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/flows/#prefect.flows.select_flow","title":"select_flow","text":"

Select the only flow in an iterable or a flow specified by name.

Returns: A single flow object

Raises:

Type Description MissingFlowError

If no flows exist in the iterable

MissingFlowError

If a flow name is provided and that flow does not exist

UnspecifiedFlowError

If multiple flows exist but no flow name was provided
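
A minimal sketch; my_flows is assumed to be an iterable of Flow objects, for example the result of load_flows_from_script:

>>> from prefect.flows import select_flow\n>>> # raises UnspecifiedFlowError if my_flows holds several flows and no name is given\n>>> chosen = select_flow(my_flows, flow_name=\"etl\")\n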

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/flows.py
def select_flow(\n    flows: Iterable[Flow], flow_name: str = None, from_message: str = None\n) -> Flow:\n\"\"\"\n    Select the only flow in an iterable or a flow specified by name.\n\n    Returns\n        A single flow object\n\n    Raises:\n        MissingFlowError: If no flows exist in the iterable\n        MissingFlowError: If a flow name is provided and that flow does not exist\n        UnspecifiedFlowError: If multiple flows exist but no flow name was provided\n    \"\"\"\n    # Convert to flows by name\n    flows = {f.name: f for f in flows}\n\n    # Add a leading space if given, otherwise use an empty string\n    from_message = (\" \" + from_message) if from_message else \"\"\n    if not flows:\n        raise MissingFlowError(f\"No flows found{from_message}.\")\n\n    elif flow_name and flow_name not in flows:\n        raise MissingFlowError(\n            f\"Flow {flow_name!r} not found{from_message}. \"\n            f\"Found the following flows: {listrepr(flows.keys())}. \"\n            \"Check to make sure that your flow function is decorated with `@flow`.\"\n        )\n\n    elif not flow_name and len(flows) > 1:\n        raise UnspecifiedFlowError(\n            (\n                f\"Found {len(flows)} flows{from_message}:\"\n                f\" {listrepr(sorted(flows.keys()))}. Specify a flow name to select a\"\n                \" flow.\"\n            ),\n        )\n\n    if flow_name:\n        return flows[flow_name]\n    else:\n        return list(flows.values())[0]\n
","tags":["Python API","flows","parameters"]},{"location":"api-ref/prefect/futures/","title":"prefect.futures","text":"","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures","title":"prefect.futures","text":"

Futures represent the execution of a task and allow retrieval of the task run's state.

This module contains the definition for futures as well as utilities for resolving futures in nested data structures.

","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture","title":"PrefectFuture","text":"

Bases: Generic[R, A]

Represents the result of a computation happening in a task runner.

When tasks are called, they are submitted to a task runner which creates a future for access to the state and result of the task.

Examples:

Define a task that returns a string

>>> from prefect import flow, task\n>>> @task\n>>> def my_task() -> str:\n>>>     return \"hello\"\n

Calls of this task in a flow will return a future

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()  # PrefectFuture[str, Sync] includes result type\n>>>     future.task_run.id  # UUID for the task run\n

Wait for the task to complete

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     final_state = future.wait()\n

Wait N seconds for the task to complete

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     final_state = future.wait(0.1)\n>>>     if final_state:\n>>>         ... # Task done\n>>>     else:\n>>>         ... # Task not done yet\n

Wait for a task to complete and retrieve its result

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     result = future.result()\n>>>     assert result == \"hello\"\n

Wait N seconds for a task to complete and retrieve its result

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     result = future.result(timeout=5)\n>>>     assert result == \"hello\"\n

Retrieve the state of a task without waiting for completion

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     state = future.get_state()\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/futures.py
class PrefectFuture(Generic[R, A]):\n\"\"\"\n    Represents the result of a computation happening in a task runner.\n\n    When tasks are called, they are submitted to a task runner which creates a future\n    for access to the state and result of the task.\n\n    Examples:\n        Define a task that returns a string\n\n        >>> from prefect import flow, task\n        >>> @task\n        >>> def my_task() -> str:\n        >>>     return \"hello\"\n\n        Calls of this task in a flow will return a future\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()  # PrefectFuture[str, Sync] includes result type\n        >>>     future.task_run.id  # UUID for the task run\n\n        Wait for the task to complete\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     final_state = future.wait()\n\n        Wait N sconds for the task to complete\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     final_state = future.wait(0.1)\n        >>>     if final_state:\n        >>>         ... # Task done\n        >>>     else:\n        >>>         ... # Task not done yet\n\n        Wait for a task to complete and retrieve its result\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     result = future.result()\n        >>>     assert result == \"hello\"\n\n        Wait N seconds for a task to complete and retrieve its result\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     result = future.result(timeout=5)\n        >>>     assert result == \"hello\"\n\n        Retrieve the state of a task without waiting for completion\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     future = my_task.submit()\n        >>>     state = future.get_state()\n    \"\"\"\n\n    def __init__(\n        self,\n        name: str,\n        key: UUID,\n        task_runner: \"BaseTaskRunner\",\n        asynchronous: A = True,\n        _final_state: State[R] = None,  # Exposed for testing\n    ) -> None:\n        self.key = key\n        self.name = name\n        self.asynchronous = asynchronous\n        self.task_run = None\n        self._final_state = _final_state\n        self._exception: Optional[Exception] = None\n        self._task_runner = task_runner\n        self._submitted = anyio.Event()\n\n        self._loop = asyncio.get_running_loop()\n\n    @overload\n    def wait(\n        self: \"PrefectFuture[R, Async]\", timeout: None = None\n    ) -> Awaitable[State[R]]:\n        ...\n\n    @overload\n    def wait(self: \"PrefectFuture[R, Sync]\", timeout: None = None) -> State[R]:\n        ...\n\n    @overload\n    def wait(\n        self: \"PrefectFuture[R, Async]\", timeout: float\n    ) -> Awaitable[Optional[State[R]]]:\n        ...\n\n    @overload\n    def wait(self: \"PrefectFuture[R, Sync]\", timeout: float) -> Optional[State[R]]:\n        ...\n\n    def wait(self, timeout=None):\n\"\"\"\n        Wait for the run to finish and return the final state\n\n        If the timeout is reached before the run reaches a final state,\n        `None` is returned.\n        \"\"\"\n        wait = create_call(self._wait, timeout=timeout)\n        if self.asynchronous:\n            return from_async.call_soon_in_loop_thread(wait).aresult()\n        else:\n            # type checking cannot handle the overloaded timeout passing\n            return 
from_sync.call_soon_in_loop_thread(wait).result()  # type: ignore\n\n    @overload\n    async def _wait(self, timeout: None = None) -> State[R]:\n        ...\n\n    @overload\n    async def _wait(self, timeout: float) -> Optional[State[R]]:\n        ...\n\n    async def _wait(self, timeout=None):\n\"\"\"\n        Async implementation for `wait`\n        \"\"\"\n        await self._wait_for_submission()\n\n        if self._final_state:\n            return self._final_state\n\n        self._final_state = await self._task_runner.wait(self.key, timeout)\n        return self._final_state\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Sync]\",\n        timeout: float = None,\n        raise_on_failure: bool = True,\n    ) -> R:\n        ...\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Sync]\",\n        timeout: float = None,\n        raise_on_failure: bool = False,\n    ) -> Union[R, Exception]:\n        ...\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Async]\",\n        timeout: float = None,\n        raise_on_failure: bool = True,\n    ) -> Awaitable[R]:\n        ...\n\n    @overload\n    def result(\n        self: \"PrefectFuture[R, Async]\",\n        timeout: float = None,\n        raise_on_failure: bool = False,\n    ) -> Awaitable[Union[R, Exception]]:\n        ...\n\n    def result(self, timeout: float = None, raise_on_failure: bool = True):\n\"\"\"\n        Wait for the run to finish and return the final state.\n\n        If the timeout is reached before the run reaches a final state, a `TimeoutError`\n        will be raised.\n\n        If `raise_on_failure` is `True` and the task run failed, the task run's\n        exception will be raised.\n        \"\"\"\n        result = create_call(\n            self._result, timeout=timeout, raise_on_failure=raise_on_failure\n        )\n        if self.asynchronous:\n            return from_async.call_soon_in_loop_thread(result).aresult()\n        else:\n            return from_sync.call_soon_in_loop_thread(result).result()\n\n    async def _result(self, timeout: float = None, raise_on_failure: bool = True):\n\"\"\"\n        Async implementation of `result`\n        \"\"\"\n        final_state = await self._wait(timeout=timeout)\n        if not final_state:\n            raise TimeoutError(\"Call timed out before task finished.\")\n        return await final_state.result(raise_on_failure=raise_on_failure, fetch=True)\n\n    @overload\n    def get_state(\n        self: \"PrefectFuture[R, Async]\", client: PrefectClient = None\n    ) -> Awaitable[State[R]]:\n        ...\n\n    @overload\n    def get_state(\n        self: \"PrefectFuture[R, Sync]\", client: PrefectClient = None\n    ) -> State[R]:\n        ...\n\n    def get_state(self, client: PrefectClient = None):\n\"\"\"\n        Get the current state of the task run.\n        \"\"\"\n        if self.asynchronous:\n            return cast(Awaitable[State[R]], self._get_state(client=client))\n        else:\n            return cast(State[R], sync(self._get_state, client=client))\n\n    @inject_client\n    async def _get_state(self, client: PrefectClient = None) -> State[R]:\n        assert client is not None  # always injected\n\n        # We must wait for the task run id to be populated\n        await self._wait_for_submission()\n\n        task_run = await client.read_task_run(self.task_run.id)\n\n        if not task_run:\n            raise RuntimeError(\"Future has no associated task run in the server.\")\n\n        # Update the task 
run reference\n        self.task_run = task_run\n        return task_run.state\n\n    async def _wait_for_submission(self):\n        await run_coroutine_in_loop_from_async(self._loop, self._submitted.wait())\n\n    def __hash__(self) -> int:\n        return hash(self.key)\n\n    def __repr__(self) -> str:\n        return f\"PrefectFuture({self.name!r})\"\n\n    def __bool__(self) -> bool:\n        warnings.warn(\n            (\n                \"A 'PrefectFuture' from a task call was cast to a boolean; \"\n                \"did you mean to check the result of the task instead? \"\n                \"e.g. `if my_task().result(): ...`\"\n            ),\n            stacklevel=2,\n        )\n        return True\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture.get_state","title":"get_state","text":"

Get the current state of the task run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/futures.py
def get_state(self, client: PrefectClient = None):\n\"\"\"\n    Get the current state of the task run.\n    \"\"\"\n    if self.asynchronous:\n        return cast(Awaitable[State[R]], self._get_state(client=client))\n    else:\n        return cast(State[R], sync(self._get_state, client=client))\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture.result","title":"result","text":"

Wait for the run to finish and return the final state.

If the timeout is reached before the run reaches a final state, a TimeoutError will be raised.

If raise_on_failure is True and the task run failed, the task run's exception will be raised.
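
As a sketch of the raise_on_failure toggle, building on the my_task example above:

>>> @flow\n>>> def my_flow():\n>>>     future = my_task.submit()\n>>>     outcome = future.result(raise_on_failure=False)  # returns the exception instead of raising on failure\n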

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/futures.py
def result(self, timeout: float = None, raise_on_failure: bool = True):\n\"\"\"\n    Wait for the run to finish and return the final state.\n\n    If the timeout is reached before the run reaches a final state, a `TimeoutError`\n    will be raised.\n\n    If `raise_on_failure` is `True` and the task run failed, the task run's\n    exception will be raised.\n    \"\"\"\n    result = create_call(\n        self._result, timeout=timeout, raise_on_failure=raise_on_failure\n    )\n    if self.asynchronous:\n        return from_async.call_soon_in_loop_thread(result).aresult()\n    else:\n        return from_sync.call_soon_in_loop_thread(result).result()\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.PrefectFuture.wait","title":"wait","text":"

Wait for the run to finish and return the final state

If the timeout is reached before the run reaches a final state, None is returned.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/futures.py
def wait(self, timeout=None):\n\"\"\"\n    Wait for the run to finish and return the final state\n\n    If the timeout is reached before the run reaches a final state,\n    `None` is returned.\n    \"\"\"\n    wait = create_call(self._wait, timeout=timeout)\n    if self.asynchronous:\n        return from_async.call_soon_in_loop_thread(wait).aresult()\n    else:\n        # type checking cannot handle the overloaded timeout passing\n        return from_sync.call_soon_in_loop_thread(wait).result()  # type: ignore\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.call_repr","title":"call_repr","text":"

Generate a repr for a function call as \"fn_name(arg_value, kwarg_name=kwarg_value)\"
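
A minimal illustrative sketch:

>>> from prefect.futures import call_repr\n>>> def greet(name, punctuation=\"!\"):\n>>>     return f\"hello {name}{punctuation}\"\n>>> call_repr(greet, \"world\", punctuation=\"?\")  # \"greet('world', punctuation='?')\"\n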

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/futures.py
def call_repr(__fn: Callable, *args: Any, **kwargs: Any) -> str:\n\"\"\"\n    Generate a repr for a function call as \"fn_name(arg_value, kwarg_name=kwarg_value)\"\n    \"\"\"\n\n    name = __fn.__name__\n\n    # TODO: If this computation is concerningly expensive, we can iterate checking the\n    #       length at each arg or avoid calling `repr` on args with large amounts of\n    #       data\n    call_args = \", \".join(\n        [repr(arg) for arg in args]\n        + [f\"{key}={repr(val)}\" for key, val in kwargs.items()]\n    )\n\n    # Enforce a maximum length\n    if len(call_args) > 100:\n        call_args = call_args[:100] + \"...\"\n\n    return f\"{name}({call_args})\"\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.resolve_futures_to_data","title":"resolve_futures_to_data async","text":"

Given a Python built-in collection, recursively find PrefectFutures and build a new collection with the same structure with futures resolved to their results. Resolving futures to their results may wait for execution to complete and require communication with the API.

Unsupported object types will be returned without modification.
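
An illustrative sketch inside an async flow, reusing the my_task example from above; the function is a coroutine, so it is awaited:

>>> from prefect.futures import resolve_futures_to_data\n>>> @flow\n>>> async def my_flow():\n>>>     futures = {\"greeting\": my_task.submit()}\n>>>     data = await resolve_futures_to_data(futures)\n>>>     assert data == {\"greeting\": \"hello\"}\n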

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/futures.py
async def resolve_futures_to_data(\n    expr: Union[PrefectFuture[R, Any], Any],\n    raise_on_failure: bool = True,\n) -> Union[R, Any]:\n\"\"\"\n    Given a Python built-in collection, recursively find `PrefectFutures` and build a\n    new collection with the same structure with futures resolved to their results.\n    Resolving futures to their results may wait for execution to complete and require\n    communication with the API.\n\n    Unsupported object types will be returned without modification.\n    \"\"\"\n    futures: Set[PrefectFuture] = set()\n\n    maybe_expr = visit_collection(\n        expr,\n        visit_fn=partial(_collect_futures, futures),\n        return_data=False,\n        context={},\n    )\n    if maybe_expr is not None:\n        expr = maybe_expr\n\n    # Get results\n    results = await asyncio.gather(\n        *[\n            # We must wait for the future in the thread it was created in\n            from_async.call_soon_in_loop_thread(\n                create_call(future._result, raise_on_failure=raise_on_failure)\n            ).aresult()\n            for future in futures\n        ]\n    )\n\n    results_by_future = dict(zip(futures, results))\n\n    def replace_futures_with_results(expr, context):\n        # Expressions inside quotes should not be modified\n        if isinstance(context.get(\"annotation\"), quote):\n            raise StopVisiting()\n\n        if isinstance(expr, PrefectFuture):\n            return results_by_future[expr]\n        else:\n            return expr\n\n    return visit_collection(\n        expr,\n        visit_fn=replace_futures_with_results,\n        return_data=True,\n        context={},\n    )\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/futures/#prefect.futures.resolve_futures_to_states","title":"resolve_futures_to_states async","text":"

Given a Python built-in collection, recursively find PrefectFutures and build a new collection with the same structure with futures resolved to their final states. Resolving futures to their final states may wait for execution to complete.

Unsupported object types will be returned without modification.
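
A similar sketch that resolves to final State objects rather than results:

>>> from prefect.futures import resolve_futures_to_states\n>>> @flow\n>>> async def my_flow():\n>>>     states = await resolve_futures_to_states([my_task.submit()])\n>>>     states[0].is_completed()\n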

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/futures.py
async def resolve_futures_to_states(\n    expr: Union[PrefectFuture[R, Any], Any]\n) -> Union[State[R], Any]:\n\"\"\"\n    Given a Python built-in collection, recursively find `PrefectFutures` and build a\n    new collection with the same structure with futures resolved to their final states.\n    Resolving futures to their final states may wait for execution to complete.\n\n    Unsupported object types will be returned without modification.\n    \"\"\"\n    futures: Set[PrefectFuture] = set()\n\n    visit_collection(\n        expr,\n        visit_fn=partial(_collect_futures, futures),\n        return_data=False,\n        context={},\n    )\n\n    # Get final states for each future\n    states = await asyncio.gather(\n        *[\n            # We must wait for the future in the thread it was created in\n            from_async.call_soon_in_loop_thread(create_call(future._wait)).aresult()\n            for future in futures\n        ]\n    )\n\n    states_by_future = dict(zip(futures, states))\n\n    def replace_futures_with_states(expr, context):\n        # Expressions inside quotes should not be modified\n        if isinstance(context.get(\"annotation\"), quote):\n            raise StopVisiting()\n\n        if isinstance(expr, PrefectFuture):\n            return states_by_future[expr]\n        else:\n            return expr\n\n    return visit_collection(\n        expr,\n        visit_fn=replace_futures_with_states,\n        return_data=True,\n        context={},\n    )\n
","tags":["Python API","tasks","futures","states"]},{"location":"api-ref/prefect/infrastructure/","title":"prefect.infrastructure","text":"","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure","title":"prefect.infrastructure","text":"","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainer","title":"DockerContainer","text":"

Bases: Infrastructure

Runs a command in a container.

Requires a Docker Engine to be connectable. Docker settings will be retrieved from the environment.

Click here to see a tutorial.

Attributes:

Name Type Description auto_remove bool

If set, the container will be removed on completion. Otherwise, the container will remain after exit for inspection.

command bool

A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.

env bool

Environment variables to set for the container.

image str

An optional string specifying the tag of a Docker image to use. Defaults to the Prefect image.

image_pull_policy Optional[ImagePullPolicy]

Specifies if the image should be pulled. One of 'ALWAYS', 'NEVER', 'IF_NOT_PRESENT'.

image_registry Optional[DockerRegistry]

A DockerRegistry block containing credentials to use if image is stored in a private image registry.

labels Optional[DockerRegistry]

An optional dictionary of labels, mapping name to value.

name Optional[DockerRegistry]

An optional name for the container.

network_mode Optional[str]

Set the network mode for the created container. Defaults to 'host' if a local API URL is detected; otherwise the Docker default of 'bridge' is used. If 'networks' is set, this cannot be set.

networks List[str]

An optional list of strings specifying Docker networks to connect the container to.

stream_output bool

If set, stream output from the container to local standard output.

volumes List[str]

An optional list of volume mount strings in the format of \"local_path:container_path\".

memswap_limit Union[int, str]

Total memory (memory + swap), -1 to disable swap. Should only be set if mem_limit is also set. If mem_limit is set, this defaults to allowing the container to use as much swap as memory. For example, if mem_limit is 300m and memswap_limit is not set, the container can use 600m in total of memory and swap.

mem_limit Union[float, str]

Memory limit of the created container. Accepts float values to enforce a limit in bytes or a string with a unit e.g. 100000b, 1000k, 128m, 1g. If a string is given without a unit, bytes are assumed.

privileged bool

Give extended privileges to this container.

","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainer--connecting-to-a-locally-hosted-prefect-api","title":"Connecting to a locally hosted Prefect API","text":"

If using a local API URL on Linux, we will update the network mode default to 'host' to enable connectivity. If another OS is used, or an alternative network mode is set, we will replace 'localhost' in the API URL with 'host.docker.internal'. Generally, this will enable connectivity, but the API URL can be provided as an environment variable to override inference in more complex use-cases.

Note, if using 'host.docker.internal' in the API URL on Linux, the API must be bound to 0.0.0.0 or the Docker IP address to allow connectivity. On macOS, this is not necessary and the API is connectable while bound to localhost.
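
A minimal configuration sketch; the image tag and volume mount below are assumptions, not defaults:

>>> from prefect.infrastructure import DockerContainer\n>>> container = DockerContainer(\n>>>     image=\"prefecthq/prefect:2-latest\",  # assumed tag; omit to use the default Prefect image\n>>>     volumes=[\"/tmp/data:/opt/data\"],\n>>>     stream_output=True,\n>>> )\n>>> print(container.preview())  # JSON of the container settings; nothing is started\n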

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/container.py
class DockerContainer(Infrastructure):\n\"\"\"\n    Runs a command in a container.\n\n    Requires a Docker Engine to be connectable. Docker settings will be retrieved from\n    the environment.\n\n    Click [here](https://docs.prefect.io/guides/deployment/docker) to see a tutorial.\n\n    Attributes:\n        auto_remove: If set, the container will be removed on completion. Otherwise,\n            the container will remain after exit for inspection.\n        command: A list of strings specifying the command to run in the container to\n            start the flow run. In most cases you should not override this.\n        env: Environment variables to set for the container.\n        image: An optional string specifying the tag of a Docker image to use.\n            Defaults to the Prefect image.\n        image_pull_policy: Specifies if the image should be pulled. One of 'ALWAYS',\n            'NEVER', 'IF_NOT_PRESENT'.\n        image_registry: A `DockerRegistry` block containing credentials to use if `image` is stored in a private\n            image registry.\n        labels: An optional dictionary of labels, mapping name to value.\n        name: An optional name for the container.\n        network_mode: Set the network mode for the created container. Defaults to 'host'\n            if a local API url is detected, otherwise the Docker default of 'bridge' is\n            used. If 'networks' is set, this cannot be set.\n        networks: An optional list of strings specifying Docker networks to connect the\n            container to.\n        stream_output: If set, stream output from the container to local standard output.\n        volumes: An optional list of volume mount strings in the format of\n            \"local_path:container_path\".\n        memswap_limit: Total memory (memory + swap), -1 to disable swap. Should only be\n            set if `mem_limit` is also set. If `mem_limit` is set, this defaults to\n            allowing the container to use as much swap as memory. For example, if\n            `mem_limit` is 300m and `memswap_limit` is not set, the container can use\n            600m in total of memory and swap.\n        mem_limit: Memory limit of the created container. Accepts float values to enforce\n            a limit in bytes or a string with a unit e.g. 100000b, 1000k, 128m, 1g.\n            If a string is given without a unit, bytes are assumed.\n        privileged: Give extended privileges to this container.\n\n    ## Connecting to a locally hosted Prefect API\n\n    If using a local API URL on Linux, we will update the network mode default to 'host'\n    to enable connectivity. If using another OS or an alternative network mode is used,\n    we will replace 'localhost' in the API URL with 'host.docker.internal'. Generally,\n    this will enable connectivity, but the API URL can be provided as an environment\n    variable to override inference in more complex use-cases.\n\n    Note, if using 'host.docker.internal' in the API URL on Linux, the API must be bound\n    to 0.0.0.0 or the Docker IP address to allow connectivity. On macOS, this is not\n    necessary and the API is connectable while bound to localhost.\n    \"\"\"\n\n    type: Literal[\"docker-container\"] = Field(\n        default=\"docker-container\", description=\"The type of infrastructure.\"\n    )\n    image: str = Field(\n        default_factory=get_prefect_image_name,\n        description=\"Tag of a Docker image to use. 
Defaults to the Prefect image.\",\n    )\n    image_pull_policy: Optional[ImagePullPolicy] = Field(\n        default=None, description=\"Specifies if the image should be pulled.\"\n    )\n    image_registry: Optional[DockerRegistry] = None\n    networks: List[str] = Field(\n        default_factory=list,\n        description=(\n            \"A list of strings specifying Docker networks to connect the container to.\"\n        ),\n    )\n    network_mode: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The network mode for the created container (e.g. host, bridge). If\"\n            \" 'networks' is set, this cannot be set.\"\n        ),\n    )\n    auto_remove: bool = Field(\n        default=False,\n        description=\"If set, the container will be removed on completion.\",\n    )\n    volumes: List[str] = Field(\n        default_factory=list,\n        description=(\n            \"A list of volume mount strings in the format of\"\n            ' \"local_path:container_path\".'\n        ),\n    )\n    stream_output: bool = Field(\n        default=True,\n        description=(\n            \"If set, the output will be streamed from the container to local standard\"\n            \" output.\"\n        ),\n    )\n    memswap_limit: Union[int, str] = Field(\n        default=None,\n        description=(\n            \"Total memory (memory + swap), -1 to disable swap. Should only be \"\n            \"set if `mem_limit` is also set. If `mem_limit` is set, this defaults to\"\n            \"allowing the container to use as much swap as memory. For example, if \"\n            \"`mem_limit` is 300m and `memswap_limit` is not set, the container can use \"\n            \"600m in total of memory and swap.\"\n        ),\n    )\n    mem_limit: Union[float, str] = Field(\n        default=None,\n        description=(\n            \"Memory limit of the created container. Accepts float values to enforce \"\n            \"a limit in bytes or a string with a unit e.g. 100000b, 1000k, 128m, 1g. \"\n            \"If a string is given without a unit, bytes are assumed.\"\n        ),\n    )\n    privileged: bool = Field(\n        default=False,\n        description=\"Give extended privileges to this container.\",\n    )\n\n    _block_type_name = \"Docker Container\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/2IfXXfMq66mrzJBDFFCHTp/6d8f320d9e4fc4393f045673d61ab612/Moby-logo.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainer\"\n\n    @validator(\"labels\")\n    def convert_labels_to_docker_format(cls, labels: Dict[str, str]):\n        labels = labels or {}\n        new_labels = {}\n        for name, value in labels.items():\n            if \"/\" in name:\n                namespace, key = name.split(\"/\", maxsplit=1)\n                new_namespace = \".\".join(reversed(namespace.split(\".\")))\n                new_labels[f\"{new_namespace}.{key}\"] = value\n            else:\n                new_labels[name] = value\n        return new_labels\n\n    @validator(\"volumes\")\n    def check_volume_format(cls, volumes):\n        for volume in volumes:\n            if \":\" not in volume:\n                raise ValueError(\n                    \"Invalid volume specification. 
\"\n                    f\"Expected format 'path:container_path', but got {volume!r}\"\n                )\n\n        return volumes\n\n    @sync_compatible\n    async def run(\n        self,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> Optional[bool]:\n        if not self.command:\n            raise ValueError(\"Docker container cannot be run with empty command.\")\n\n        # The `docker` library uses requests instead of an async http library so it must\n        # be run in a thread to avoid blocking the event loop.\n        container = await run_sync_in_worker_thread(self._create_and_start_container)\n        container_pid = self._get_infrastructure_pid(container_id=container.id)\n\n        # Mark as started and return the infrastructure id\n        if task_status:\n            task_status.started(container_pid)\n\n        # Monitor the container\n        container = await run_sync_in_worker_thread(\n            self._watch_container_safe, container\n        )\n\n        exit_code = container.attrs[\"State\"].get(\"ExitCode\")\n        return DockerContainerResult(\n            status_code=exit_code if exit_code is not None else -1,\n            identifier=container_pid,\n        )\n\n    async def kill(self, infrastructure_pid: str, grace_seconds: int = 30):\n        docker_client = self._get_client()\n        base_url, container_id = self._parse_infrastructure_pid(infrastructure_pid)\n\n        if docker_client.api.base_url != base_url:\n            raise InfrastructureNotAvailable(\n                \"\".join(\n                    [\n                        (\n                            f\"Unable to stop container {container_id!r}: the current\"\n                            \" Docker API \"\n                        ),\n                        (\n                            f\"URL {docker_client.api.base_url!r} does not match the\"\n                            \" expected \"\n                        ),\n                        f\"API base URL {base_url}.\",\n                    ]\n                )\n            )\n        try:\n            container = docker_client.containers.get(container_id=container_id)\n        except docker.errors.NotFound:\n            raise InfrastructureNotFound(\n                f\"Unable to stop container {container_id!r}: The container was not\"\n                \" found.\"\n            )\n\n        try:\n            container.stop(timeout=grace_seconds)\n        except Exception:\n            raise\n\n    def preview(self):\n        # TODO: build and document a more sophisticated preview\n        docker_client = self._get_client()\n        try:\n            return json.dumps(self._build_container_settings(docker_client))\n        finally:\n            docker_client.close()\n\n    def _get_infrastructure_pid(self, container_id: str) -> str:\n\"\"\"Generates a Docker infrastructure_pid string in the form of\n        `<docker_host_base_url>:<container_id>`.\n        \"\"\"\n        docker_client = self._get_client()\n        base_url = docker_client.api.base_url\n        docker_client.close()\n        return f\"{base_url}:{container_id}\"\n\n    def _parse_infrastructure_pid(self, infrastructure_pid: str) -> Tuple[str, str]:\n\"\"\"Splits a Docker infrastructure_pid into its component parts\"\"\"\n\n        # base_url can contain `:` so we only want the last item of the split\n        base_url, container_id = infrastructure_pid.rsplit(\":\", 1)\n        return base_url, str(container_id)\n\n    def _build_container_settings(\n        
self,\n        docker_client: \"DockerClient\",\n    ) -> Dict:\n        network_mode = self._get_network_mode()\n        return dict(\n            image=self.image,\n            network=self.networks[0] if self.networks else None,\n            network_mode=network_mode,\n            command=self.command,\n            environment=self._get_environment_variables(network_mode),\n            auto_remove=self.auto_remove,\n            labels={**CONTAINER_LABELS, **self.labels},\n            extra_hosts=self._get_extra_hosts(docker_client),\n            name=self._get_container_name(),\n            volumes=self.volumes,\n            mem_limit=self.mem_limit,\n            memswap_limit=self.memswap_limit,\n            privileged=self.privileged,\n        )\n\n    def _create_and_start_container(self) -> \"Container\":\n        if self.image_registry:\n            # If an image registry block was supplied, load an authenticated Docker\n            # client from the block. Otherwise, use an unauthenticated client to\n            # pull images from public registries.\n            docker_client = self.image_registry.get_docker_client()\n        else:\n            docker_client = self._get_client()\n        container_settings = self._build_container_settings(docker_client)\n\n        if self._should_pull_image(docker_client):\n            self.logger.info(f\"Pulling image {self.image!r}...\")\n            self._pull_image(docker_client)\n\n        container = self._create_container(docker_client, **container_settings)\n\n        # Add additional networks after the container is created; only one network can\n        # be attached at creation time\n        if len(self.networks) > 1:\n            for network_name in self.networks[1:]:\n                network = docker_client.networks.get(network_name)\n                network.connect(container)\n\n        # Start the container\n        container.start()\n\n        docker_client.close()\n\n        return container\n\n    def _get_image_and_tag(self) -> Tuple[str, Optional[str]]:\n        return parse_image_tag(self.image)\n\n    def _determine_image_pull_policy(self) -> ImagePullPolicy:\n\"\"\"\n        Determine the appropriate image pull policy.\n\n        1. If they specified an image pull policy, use that.\n\n        2. If they did not specify an image pull policy and gave us\n           the \"latest\" tag, use ImagePullPolicy.always.\n\n        3. If they did not specify an image pull policy and did not\n           specify a tag, use ImagePullPolicy.always.\n\n        4. 
If they did not specify an image pull policy and gave us\n           a tag other than \"latest\", use ImagePullPolicy.if_not_present.\n\n        This logic matches the behavior of Kubernetes.\n        See:https://kubernetes.io/docs/concepts/containers/images/#imagepullpolicy-defaulting\n        \"\"\"\n        if not self.image_pull_policy:\n            _, tag = self._get_image_and_tag()\n            if tag == \"latest\" or not tag:\n                return ImagePullPolicy.ALWAYS\n            return ImagePullPolicy.IF_NOT_PRESENT\n        return self.image_pull_policy\n\n    def _get_network_mode(self) -> Optional[str]:\n        # User's value takes precedence; this may collide with the incompatible options\n        # mentioned below.\n        if self.network_mode:\n            if sys.platform != \"linux\" and self.network_mode == \"host\":\n                warnings.warn(\n                    f\"{self.network_mode!r} network mode is not supported on platform \"\n                    f\"{sys.platform!r} and may not work as intended.\"\n                )\n            return self.network_mode\n\n        # Network mode is not compatible with networks or ports (we do not support ports\n        # yet though)\n        if self.networks:\n            return None\n\n        # Check for a local API connection\n        api_url = self.env.get(\"PREFECT_API_URL\", PREFECT_API_URL.value())\n\n        if api_url:\n            try:\n                _, netloc, _, _, _, _ = urllib.parse.urlparse(api_url)\n            except Exception as exc:\n                warnings.warn(\n                    f\"Failed to parse host from API URL {api_url!r} with exception: \"\n                    f\"{exc}\\nThe network mode will not be inferred.\"\n                )\n                return None\n\n            host = netloc.split(\":\")[0]\n\n            # If using a locally hosted API, use a host network on linux\n            if sys.platform == \"linux\" and (host == \"127.0.0.1\" or host == \"localhost\"):\n                return \"host\"\n\n        # Default to unset\n        return None\n\n    def _should_pull_image(self, docker_client: \"DockerClient\") -> bool:\n\"\"\"\n        Decide whether we need to pull the Docker image.\n        \"\"\"\n        image_pull_policy = self._determine_image_pull_policy()\n\n        if image_pull_policy is ImagePullPolicy.ALWAYS:\n            return True\n        elif image_pull_policy is ImagePullPolicy.NEVER:\n            return False\n        elif image_pull_policy is ImagePullPolicy.IF_NOT_PRESENT:\n            try:\n                # NOTE: images.get() wants the tag included with the image\n                # name, while images.pull() wants them split.\n                docker_client.images.get(self.image)\n            except docker.errors.ImageNotFound:\n                self.logger.debug(f\"Could not find Docker image locally: {self.image}\")\n                return True\n        return False\n\n    def _pull_image(self, docker_client: \"DockerClient\"):\n\"\"\"\n        Pull the image we're going to use to create the container.\n        \"\"\"\n        image, tag = self._get_image_and_tag()\n\n        return docker_client.images.pull(image, tag)\n\n    def _create_container(self, docker_client: \"DockerClient\", **kwargs) -> \"Container\":\n\"\"\"\n        Create a docker container with retries on name conflicts.\n\n        If the container already exists with the given name, an incremented index is\n        added.\n        \"\"\"\n        # Create the container with retries on name 
conflicts (with an incremented idx)\n        index = 0\n        container = None\n        name = original_name = kwargs.pop(\"name\")\n\n        while not container:\n            from docker.errors import APIError\n\n            try:\n                display_name = repr(name) if name else \"with auto-generated name\"\n                self.logger.info(f\"Creating Docker container {display_name}...\")\n                container = docker_client.containers.create(name=name, **kwargs)\n            except APIError as exc:\n                if \"Conflict\" in str(exc) and \"container name\" in str(exc):\n                    self.logger.info(\n                        f\"Docker container name {display_name} already exists; \"\n                        \"retrying...\"\n                    )\n                    index += 1\n                    name = f\"{original_name}-{index}\"\n                else:\n                    raise\n\n        self.logger.info(\n            f\"Docker container {container.name!r} has status {container.status!r}\"\n        )\n        return container\n\n    def _watch_container_safe(self, container: \"Container\") -> \"Container\":\n        # Monitor the container capturing the latest snapshot while capturing\n        # not found errors\n        docker_client = self._get_client()\n\n        try:\n            for latest_container in self._watch_container(docker_client, container.id):\n                container = latest_container\n        except docker.errors.NotFound:\n            # The container was removed during watching\n            self.logger.warning(\n                f\"Docker container {container.name} was removed before we could wait \"\n                \"for its completion.\"\n            )\n        finally:\n            docker_client.close()\n\n        return container\n\n    def _watch_container(\n        self, docker_client: \"DockerClient\", container_id: str\n    ) -> Generator[None, None, \"Container\"]:\n        container: \"Container\" = docker_client.containers.get(container_id)\n\n        status = container.status\n        self.logger.info(\n            f\"Docker container {container.name!r} has status {container.status!r}\"\n        )\n        yield container\n\n        if self.stream_output:\n            try:\n                for log in container.logs(stream=True):\n                    log: bytes\n                    print(log.decode().rstrip())\n            except docker.errors.APIError as exc:\n                if \"marked for removal\" in str(exc):\n                    self.logger.warning(\n                        f\"Docker container {container.name} was marked for removal\"\n                        \" before logs could be retrieved. Output will not be\"\n                        \" streamed. 
\"\n                    )\n                else:\n                    self.logger.exception(\n                        \"An unexpected Docker API error occured while streaming output \"\n                        f\"from container {container.name}.\"\n                    )\n\n            container.reload()\n            if container.status != status:\n                self.logger.info(\n                    f\"Docker container {container.name!r} has status\"\n                    f\" {container.status!r}\"\n                )\n            yield container\n\n        container.wait()\n        self.logger.info(\n            f\"Docker container {container.name!r} has status {container.status!r}\"\n        )\n        yield container\n\n    def _get_client(self):\n        try:\n            with warnings.catch_warnings():\n                # Silence warnings due to use of deprecated methods within dockerpy\n                # See https://github.com/docker/docker-py/pull/2931\n                warnings.filterwarnings(\n                    \"ignore\",\n                    message=\"distutils Version classes are deprecated.*\",\n                    category=DeprecationWarning,\n                )\n\n                docker_client = docker.from_env()\n\n        except docker.errors.DockerException as exc:\n            raise RuntimeError(\"Could not connect to Docker.\") from exc\n\n        return docker_client\n\n    def _get_container_name(self) -> Optional[str]:\n\"\"\"\n        Generates a container name to match the configured name, ensuring it is Docker\n        compatible.\n        \"\"\"\n        # Must match `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+` in the end\n        if not self.name:\n            return None\n\n        return (\n            slugify(\n                self.name,\n                lowercase=False,\n                # Docker does not limit length but URL limits apply eventually so\n                # limit the length for safety\n                max_length=250,\n                # Docker allows these characters for container names\n                regex_pattern=r\"[^a-zA-Z0-9_.-]+\",\n            ).lstrip(\n                # Docker does not allow leading underscore, dash, or period\n                \"_-.\"\n            )\n            # Docker does not allow 0 character names so cast to null if the name is\n            # empty after slufification\n            or None\n        )\n\n    def _get_extra_hosts(self, docker_client) -> Dict[str, str]:\n\"\"\"\n        A host.docker.internal -> host-gateway mapping is necessary for communicating\n        with the API on Linux machines. Docker Desktop on macOS will automatically\n        already have this mapping.\n        \"\"\"\n        if sys.platform == \"linux\" and (\n            # Do not warn if the user has specified a host manually that does not use\n            # a local address\n            \"PREFECT_API_URL\" not in self.env\n            or re.search(\n                \".*(localhost)|(127.0.0.1)|(host.docker.internal).*\",\n                self.env[\"PREFECT_API_URL\"],\n            )\n        ):\n            user_version = packaging.version.parse(\n                format_outlier_version_name(docker_client.version()[\"Version\"])\n            )\n            required_version = packaging.version.parse(\"20.10.0\")\n\n            if user_version < required_version:\n                warnings.warn(\n                    \"`host.docker.internal` could not be automatically resolved to\"\n                    \" your local ip address. 
This feature is not supported on Docker\"\n                    f\" Engine v{user_version}, upgrade to v{required_version}+ if you\"\n                    \" encounter issues.\"\n                )\n                return {}\n            else:\n                # Compatibility for linux -- https://github.com/docker/cli/issues/2290\n                # Only supported by Docker v20.10.0+ which is our minimum recommend version\n                return {\"host.docker.internal\": \"host-gateway\"}\n\n    def _get_environment_variables(self, network_mode):\n        # If the API URL has been set by the base environment rather than the by the\n        # user, update the value to ensure connectivity when using a bridge network by\n        # updating local connections to use the docker internal host unless the\n        # network mode is \"host\" where localhost is available already.\n        env = {**self._base_environment(), **self.env}\n\n        if (\n            \"PREFECT_API_URL\" in env\n            and \"PREFECT_API_URL\" not in self.env\n            and network_mode != \"host\"\n        ):\n            env[\"PREFECT_API_URL\"] = (\n                env[\"PREFECT_API_URL\"]\n                .replace(\"localhost\", \"host.docker.internal\")\n                .replace(\"127.0.0.1\", \"host.docker.internal\")\n            )\n\n        # Drop null values allowing users to \"unset\" variables\n        return {key: value for key, value in env.items() if value is not None}\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.DockerContainerResult","title":"DockerContainerResult","text":"

Bases: InfrastructureResult

Contains information about a completed Docker container

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/container.py
class DockerContainerResult(InfrastructureResult):\n\"\"\"Contains information about a completed Docker container\"\"\"\n
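A rough usage sketch (assuming Docker is running locally and that DockerContainer is importable from prefect.infrastructure, as elsewhere in this reference): running a DockerContainer block returns a DockerContainerResult whose identifier and status_code can be inspected.
from prefect.infrastructure import DockerContainer\n\n# Hypothetical example; requires a local Docker daemon\nresult = DockerContainer(command=[\"echo\", \"hello\"]).run()  # returns a DockerContainerResult\nprint(result.identifier, result.status_code)\n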
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Infrastructure","title":"Infrastructure","text":"

Bases: Block, abc.ABC

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/base.py
class Infrastructure(Block, abc.ABC):\n    _block_schema_capabilities = [\"run-infrastructure\"]\n\n    type: str\n\n    env: Dict[str, Optional[str]] = pydantic.Field(\n        default_factory=dict,\n        title=\"Environment\",\n        description=\"Environment variables to set in the configured infrastructure.\",\n    )\n    labels: Dict[str, str] = pydantic.Field(\n        default_factory=dict,\n        description=\"Labels applied to the infrastructure for metadata purposes.\",\n    )\n    name: Optional[str] = pydantic.Field(\n        default=None,\n        description=\"Name applied to the infrastructure for identification.\",\n    )\n    command: Optional[List[str]] = pydantic.Field(\n        default=None,\n        description=\"The command to run in the infrastructure.\",\n    )\n\n    @abc.abstractmethod\n    async def run(\n        self,\n        task_status: anyio.abc.TaskStatus = None,\n    ) -> InfrastructureResult:\n\"\"\"\n        Run the infrastructure.\n\n        If provided a `task_status`, the status will be reported as started when the\n        infrastructure is successfully created. The status return value will be an\n        identifier for the infrastructure.\n\n        The call will then monitor the created infrastructure, returning a result at\n        the end containing a status code indicating if the infrastructure exited cleanly\n        or encountered an error.\n        \"\"\"\n        # Note: implementations should include `sync_compatible`\n\n    @abc.abstractmethod\n    def preview(self) -> str:\n\"\"\"\n        View a preview of the infrastructure that would be run.\n        \"\"\"\n\n    @property\n    def logger(self):\n        return get_logger(f\"prefect.infrastructure.{self.type}\")\n\n    @classmethod\n    def _base_environment(cls) -> Dict[str, str]:\n\"\"\"\n        Environment variables that should be passed to all created infrastructure.\n\n        These values should be overridable with the `env` field.\n        \"\"\"\n        return get_current_settings().to_environment_variables(exclude_unset=True)\n\n    def prepare_for_flow_run(\n        self: Self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"Deployment\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ) -> Self:\n\"\"\"\n        Return an infrastructure block that is prepared to execute a flow run.\n        \"\"\"\n        if deployment is not None:\n            deployment_labels = self._base_deployment_labels(deployment)\n        else:\n            deployment_labels = {}\n\n        if flow is not None:\n            flow_labels = self._base_flow_labels(flow)\n        else:\n            flow_labels = {}\n\n        return self.copy(\n            update={\n                \"env\": {**self._base_flow_run_environment(flow_run), **self.env},\n                \"labels\": {\n                    **self._base_flow_run_labels(flow_run),\n                    **deployment_labels,\n                    **flow_labels,\n                    **self.labels,\n                },\n                \"name\": self.name or flow_run.name,\n                \"command\": self.command or self._base_flow_run_command(),\n            }\n        )\n\n    @staticmethod\n    def _base_flow_run_command() -> List[str]:\n\"\"\"\n        Generate a command for a flow run job.\n        \"\"\"\n        return [\"python\", \"-m\", \"prefect.engine\"]\n\n    @staticmethod\n    def _base_flow_run_labels(flow_run: \"FlowRun\") -> Dict[str, str]:\n\"\"\"\n        Generate a dictionary of labels for a flow 
run job.\n        \"\"\"\n        return {\n            \"prefect.io/flow-run-id\": str(flow_run.id),\n            \"prefect.io/flow-run-name\": flow_run.name,\n            \"prefect.io/version\": prefect.__version__,\n        }\n\n    @staticmethod\n    def _base_flow_run_environment(flow_run: \"FlowRun\") -> Dict[str, str]:\n\"\"\"\n        Generate a dictionary of environment variables for a flow run job.\n        \"\"\"\n        environment = {}\n        environment[\"PREFECT__FLOW_RUN_ID\"] = flow_run.id.hex\n        return environment\n\n    @staticmethod\n    def _base_deployment_labels(deployment: \"Deployment\") -> Dict[str, str]:\n        labels = {\n            \"prefect.io/deployment-name\": deployment.name,\n        }\n        if deployment.updated is not None:\n            labels[\"prefect.io/deployment-updated\"] = deployment.updated.in_timezone(\n                \"utc\"\n            ).to_iso8601_string()\n        return labels\n\n    @staticmethod\n    def _base_flow_labels(flow: \"Flow\") -> Dict[str, str]:\n        return {\n            \"prefect.io/flow-name\": flow.name,\n        }\n
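A minimal sketch of a custom subclass, assuming the hypothetical block type name echo-process and that InfrastructureResult is importable from the same base module; a production implementation would also wrap run with sync_compatible, as the note in the source above indicates.
import anyio\nfrom prefect.infrastructure.base import Infrastructure, InfrastructureResult\n\nclass EchoInfrastructure(Infrastructure):\n    \"\"\"Hypothetical infrastructure that only logs the command it would run.\"\"\"\n    type: str = \"echo-process\"\n\n    async def run(self, task_status: anyio.abc.TaskStatus = None) -> InfrastructureResult:\n        if task_status:\n            # Report an identifier once the infrastructure is \"created\"\n            task_status.started(\"echo-process\")\n        self.logger.info(f\"Would run command: {self.command}\")\n        return InfrastructureResult(identifier=\"echo-process\", status_code=0)\n\n    def preview(self) -> str:\n        return f\"echo-process running {self.command}\"\n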
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.base.Infrastructure.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

Return an infrastructure block that is prepared to execute a flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/base.py
def prepare_for_flow_run(\n    self: Self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"Deployment\"] = None,\n    flow: Optional[\"Flow\"] = None,\n) -> Self:\n\"\"\"\n    Return an infrastructure block that is prepared to execute a flow run.\n    \"\"\"\n    if deployment is not None:\n        deployment_labels = self._base_deployment_labels(deployment)\n    else:\n        deployment_labels = {}\n\n    if flow is not None:\n        flow_labels = self._base_flow_labels(flow)\n    else:\n        flow_labels = {}\n\n    return self.copy(\n        update={\n            \"env\": {**self._base_flow_run_environment(flow_run), **self.env},\n            \"labels\": {\n                **self._base_flow_run_labels(flow_run),\n                **deployment_labels,\n                **flow_labels,\n                **self.labels,\n            },\n            \"name\": self.name or flow_run.name,\n            \"command\": self.command or self._base_flow_run_command(),\n        }\n    )\n
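An illustrative sketch, assuming flow_run is a FlowRun already retrieved (for example via the Prefect client) and using the Process block as a stand-in for any concrete infrastructure; the returned copy merges the flow run's environment, labels, name, and default command with the block's own values.
from prefect.infrastructure import Process\n\n# flow_run is assumed to be an existing FlowRun object\nprepared = Process(env={\"MY_VAR\": \"1\"}).prepare_for_flow_run(flow_run)\nprint(prepared.name, prepared.command, prepared.labels)\n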
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.base.Infrastructure.preview","title":"preview abstractmethod","text":"

View a preview of the infrastructure that would be run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/base.py
@abc.abstractmethod\ndef preview(self) -> str:\n\"\"\"\n    View a preview of the infrastructure that would be run.\n    \"\"\"\n
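For instance, a hedged example using the concrete KubernetesJob subclass documented below: preview renders the infrastructure that would be created without creating anything.
from prefect.infrastructure import KubernetesJob\n\n# Prints the generated Job manifest as YAML\nprint(KubernetesJob(image=\"prefecthq/prefect:2-latest\").preview())\n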
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.base.Infrastructure.run","title":"run abstractmethod async","text":"

Run the infrastructure.

If provided a task_status, the status will be reported as started when the infrastructure is successfully created. The status return value will be an identifier for the infrastructure.

The call will then monitor the created infrastructure, returning a result at the end containing a status code indicating if the infrastructure exited cleanly or encountered an error.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/base.py
@abc.abstractmethod\nasync def run(\n    self,\n    task_status: anyio.abc.TaskStatus = None,\n) -> InfrastructureResult:\n\"\"\"\n    Run the infrastructure.\n\n    If provided a `task_status`, the status will be reported as started when the\n    infrastructure is successfully created. The status return value will be an\n    identifier for the infrastructure.\n\n    The call will then monitor the created infrastructure, returning a result at\n    the end containing a status code indicating if the infrastructure exited cleanly\n    or encountered an error.\n    \"\"\"\n
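A quick sketch of calling run on a concrete subclass (here the Process block, assuming it is available in prefect.infrastructure); because implementations are sync_compatible, the call can be awaited or made directly from synchronous code.
from prefect.infrastructure import Process\n\nresult = Process(command=[\"echo\", \"hello\"]).run()\nassert result.status_code == 0\n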
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesClusterConfig","title":"KubernetesClusterConfig","text":"

Bases: Block

Stores configuration for interaction with Kubernetes clusters.

See from_file for creation.

Attributes:

Name Type Description config Dict

The entire loaded YAML contents of a kubectl config file

context_name str

The name of the kubectl context to use

Example

Load a saved Kubernetes cluster config:

from prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/kubernetes.py
class KubernetesClusterConfig(Block):\n\"\"\"\n    Stores configuration for interaction with Kubernetes clusters.\n\n    See `from_file` for creation.\n\n    Attributes:\n        config: The entire loaded YAML contents of a kubectl config file\n        context_name: The name of the kubectl context to use\n\n    Example:\n        Load a saved Kubernetes cluster config:\n        ```python\n        from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n        cluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Kubernetes Cluster Config\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/1zrSeY8DZ1MJZs2BAyyyGk/20445025358491b8b72600b8f996125b/Kubernetes_logo_without_workmark.svg.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig\"\n\n    config: Dict = Field(\n        default=..., description=\"The entire contents of a kubectl config file.\"\n    )\n    context_name: str = Field(\n        default=..., description=\"The name of the kubectl context to use.\"\n    )\n\n    @validator(\"config\", pre=True)\n    def parse_yaml_config(cls, value):\n        if isinstance(value, str):\n            return yaml.safe_load(value)\n        return value\n\n    @classmethod\n    def from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n\"\"\"\n        Create a cluster config from the a Kubernetes config file.\n\n        By default, the current context in the default Kubernetes config file will be\n        used.\n\n        An alternative file or context may be specified.\n\n        The entire config file will be loaded and stored.\n        \"\"\"\n        kube_config = kubernetes.config.kube_config\n\n        path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n        path = path.expanduser().resolve()\n\n        # Determine the context\n        existing_contexts, current_context = kube_config.list_kube_config_contexts(\n            config_file=str(path)\n        )\n        context_names = {ctx[\"name\"] for ctx in existing_contexts}\n        if context_name:\n            if context_name not in context_names:\n                raise ValueError(\n                    f\"Context {context_name!r} not found. \"\n                    f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n                )\n        else:\n            context_name = current_context[\"name\"]\n\n        # Load the entire config file\n        config_file_contents = path.read_text()\n        config_dict = yaml.safe_load(config_file_contents)\n\n        return cls(config=config_dict, context_name=context_name)\n\n    def get_api_client(self) -> \"ApiClient\":\n\"\"\"\n        Returns a Kubernetes API client for this cluster config.\n        \"\"\"\n        return kubernetes.config.kube_config.new_client_from_config_dict(\n            config_dict=self.config, context=self.context_name\n        )\n\n    def configure_client(self) -> None:\n\"\"\"\n        Activates this cluster configuration by loading the configuration into the\n        Kubernetes Python client. After calling this, Kubernetes API clients can use\n        this config's context.\n        \"\"\"\n        kubernetes.config.kube_config.load_kube_config_from_dict(\n            config_dict=self.config, context=self.context_name\n        )\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.blocks.kubernetes.KubernetesClusterConfig.configure_client","title":"configure_client","text":"

Activates this cluster configuration by loading the configuration into the Kubernetes Python client. After calling this, Kubernetes API clients can use this config's context.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/kubernetes.py
def configure_client(self) -> None:\n\"\"\"\n    Activates this cluster configuration by loading the configuration into the\n    Kubernetes Python client. After calling this, Kubernetes API clients can use\n    this config's context.\n    \"\"\"\n    kubernetes.config.kube_config.load_kube_config_from_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
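A brief sketch of how this might be used; the block name is reused from the example above, and the kubernetes import is only needed for the calls made after configuration.
import kubernetes\nfrom prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\ncluster_config_block.configure_client()\n# Subsequent kubernetes.client calls now use this config's context\npods = kubernetes.client.CoreV1Api().list_namespaced_pod(\"default\")\n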
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.blocks.kubernetes.KubernetesClusterConfig.from_file","title":"from_file classmethod","text":"

Create a cluster config from a Kubernetes config file.

By default, the current context in the default Kubernetes config file will be used.

An alternative file or context may be specified.

The entire config file will be loaded and stored.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/kubernetes.py
@classmethod\ndef from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n\"\"\"\n    Create a cluster config from the a Kubernetes config file.\n\n    By default, the current context in the default Kubernetes config file will be\n    used.\n\n    An alternative file or context may be specified.\n\n    The entire config file will be loaded and stored.\n    \"\"\"\n    kube_config = kubernetes.config.kube_config\n\n    path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n    path = path.expanduser().resolve()\n\n    # Determine the context\n    existing_contexts, current_context = kube_config.list_kube_config_contexts(\n        config_file=str(path)\n    )\n    context_names = {ctx[\"name\"] for ctx in existing_contexts}\n    if context_name:\n        if context_name not in context_names:\n            raise ValueError(\n                f\"Context {context_name!r} not found. \"\n                f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n            )\n    else:\n        context_name = current_context[\"name\"]\n\n    # Load the entire config file\n    config_file_contents = path.read_text()\n    config_dict = yaml.safe_load(config_file_contents)\n\n    return cls(config=config_dict, context_name=context_name)\n
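For example (the context name docker-desktop and block name my-cluster-config below are hypothetical), a config can be read from the default kubectl config file and saved as a block.
from prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config = KubernetesClusterConfig.from_file(context_name=\"docker-desktop\")\ncluster_config.save(\"my-cluster-config\")\n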
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.blocks.kubernetes.KubernetesClusterConfig.get_api_client","title":"get_api_client","text":"

Returns a Kubernetes API client for this cluster config.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/kubernetes.py
def get_api_client(self) -> \"ApiClient\":\n\"\"\"\n    Returns a Kubernetes API client for this cluster config.\n    \"\"\"\n    return kubernetes.config.kube_config.new_client_from_config_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
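A small usage sketch, continuing from the block loaded in the class-level example above.
import kubernetes\n\napi_client = cluster_config_block.get_api_client()\nv1 = kubernetes.client.CoreV1Api(api_client=api_client)\nprint([ns.metadata.name for ns in v1.list_namespace().items])\n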
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob","title":"KubernetesJob","text":"

Bases: Infrastructure

Runs a command as a Kubernetes Job.

For a guided tutorial, see How to use Kubernetes with Prefect. For more information, including examples for customizing the resulting manifest, see KubernetesJob infrastructure concepts.

Attributes:

Name Type Description cluster_config Optional[KubernetesClusterConfig]

An optional Kubernetes cluster config to use for this job.

command Optional[List[str]]

A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.

customizations JsonPatch

A list of JSON 6902 patches to apply to the base Job manifest.

env Dict[str, Optional[str]]

Environment variables to set for the container.

finished_job_ttl Optional[int]

The number of seconds to retain jobs after completion. If set, finished jobs will be cleaned up by Kubernetes after the given delay. If None (default), jobs will need to be manually removed.

image Optional[str]

An optional string specifying the image reference of a container image to use for the job, for example, docker.io/prefecthq/prefect:2-latest. The behavior is as described in https://kubernetes.io/docs/concepts/containers/images/#image-names. Defaults to the Prefect image.

image_pull_policy Optional[KubernetesImagePullPolicy]

The Kubernetes image pull policy to use for job containers.

job KubernetesManifest

The base manifest for the Kubernetes Job.

job_watch_timeout_seconds Optional[int]

Number of seconds to wait for the job to complete before marking it as crashed. Defaults to None, which means no timeout will be enforced.

labels Dict[str, str]

An optional dictionary of labels to add to the job.

name Optional[str]

An optional name for the job.

namespace Optional[str]

An optional string signifying the Kubernetes namespace to use.

pod_watch_timeout_seconds int

Number of seconds to watch for pod creation before timing out (default 60).

service_account_name Optional[str]

An optional string specifying which Kubernetes service account to use.

stream_output bool

If set, stream output from the job to local standard output.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/kubernetes.py
class KubernetesJob(Infrastructure):\n\"\"\"\n    Runs a command as a Kubernetes Job.\n\n    For a guided tutorial, see [How to use Kubernetes with Prefect](https://medium.com/the-prefect-blog/how-to-use-kubernetes-with-prefect-419b2e8b8cb2/).\n    For more information, including examples for customizing the resulting manifest, see [`KubernetesJob` infrastructure concepts](https://docs.prefect.io/concepts/infrastructure/#kubernetesjob).\n\n    Attributes:\n        cluster_config: An optional Kubernetes cluster config to use for this job.\n        command: A list of strings specifying the command to run in the container to\n            start the flow run. In most cases you should not override this.\n        customizations: A list of JSON 6902 patches to apply to the base Job manifest.\n        env: Environment variables to set for the container.\n        finished_job_ttl: The number of seconds to retain jobs after completion. If set, finished jobs will\n            be cleaned up by Kubernetes after the given delay. If None (default), jobs will need to be\n            manually removed.\n        image: An optional string specifying the image reference of a container image\n            to use for the job, for example, docker.io/prefecthq/prefect:2-latest. The\n            behavior is as described in https://kubernetes.io/docs/concepts/containers/images/#image-names.\n            Defaults to the Prefect image.\n        image_pull_policy: The Kubernetes image pull policy to use for job containers.\n        job: The base manifest for the Kubernetes Job.\n        job_watch_timeout_seconds: Number of seconds to wait for the job to complete\n            before marking it as crashed. Defaults to `None`, which means no timeout will be enforced.\n        labels: An optional dictionary of labels to add to the job.\n        name: An optional name for the job.\n        namespace: An optional string signifying the Kubernetes namespace to use.\n        pod_watch_timeout_seconds: Number of seconds to watch for pod creation before timing out (default 60).\n        service_account_name: An optional string specifying which Kubernetes service account to use.\n        stream_output: If set, stream output from the job to local standard output.\n    \"\"\"\n\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/1zrSeY8DZ1MJZs2BAyyyGk/20445025358491b8b72600b8f996125b/Kubernetes_logo_without_workmark.svg.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJob\"\n\n    type: Literal[\"kubernetes-job\"] = Field(\n        default=\"kubernetes-job\", description=\"The type of infrastructure.\"\n    )\n    # shortcuts for the most common user-serviceable settings\n    image: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The image reference of a container image to use for the job, for example,\"\n            \" `docker.io/prefecthq/prefect:2-latest`.The behavior is as described in\"\n            \" the Kubernetes documentation and uses the latest version of Prefect by\"\n            \" default, unless an image is already present in a provided job manifest.\"\n        ),\n    )\n    namespace: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The Kubernetes namespace to use for this job. 
Defaults to 'default' \"\n            \"unless a namespace is already present in a provided job manifest.\"\n        ),\n    )\n    service_account_name: Optional[str] = Field(\n        default=None, description=\"The Kubernetes service account to use for this job.\"\n    )\n    image_pull_policy: Optional[KubernetesImagePullPolicy] = Field(\n        default=None,\n        description=\"The Kubernetes image pull policy to use for job containers.\",\n    )\n\n    # connection to a cluster\n    cluster_config: Optional[KubernetesClusterConfig] = Field(\n        default=None, description=\"The Kubernetes cluster config to use for this job.\"\n    )\n\n    # settings allowing full customization of the Job\n    job: KubernetesManifest = Field(\n        default_factory=lambda: KubernetesJob.base_job_manifest(),\n        description=\"The base manifest for the Kubernetes Job.\",\n        title=\"Base Job Manifest\",\n    )\n    customizations: JsonPatch = Field(\n        default_factory=lambda: JsonPatch([]),\n        description=\"A list of JSON 6902 patches to apply to the base Job manifest.\",\n    )\n\n    # controls the behavior of execution\n    job_watch_timeout_seconds: Optional[int] = Field(\n        default=None,\n        description=(\n            \"Number of seconds to wait for the job to complete before marking it as\"\n            \" crashed. Defaults to `None`, which means no timeout will be enforced.\"\n        ),\n    )\n    pod_watch_timeout_seconds: int = Field(\n        default=60,\n        description=\"Number of seconds to watch for pod creation before timing out.\",\n    )\n    stream_output: bool = Field(\n        default=True,\n        description=(\n            \"If set, output will be streamed from the job to local standard output.\"\n        ),\n    )\n    finished_job_ttl: Optional[int] = Field(\n        default=None,\n        description=(\n            \"The number of seconds to retain jobs after completion. If set, finished\"\n            \" jobs will be cleaned up by Kubernetes after the given delay. 
If None\"\n            \" (default), jobs will need to be manually removed.\"\n        ),\n    )\n\n    # internal-use only right now\n    _api_dns_name: Optional[str] = None  # Replaces 'localhost' in API URL\n\n    _block_type_name = \"Kubernetes Job\"\n\n    @validator(\"job\")\n    def ensure_job_includes_all_required_components(cls, value: KubernetesManifest):\n        patch = JsonPatch.from_diff(value, cls.base_job_manifest())\n        missing_paths = sorted([op[\"path\"] for op in patch if op[\"op\"] == \"add\"])\n        if missing_paths:\n            raise ValueError(\n                \"Job is missing required attributes at the following paths: \"\n                f\"{', '.join(missing_paths)}\"\n            )\n        return value\n\n    @validator(\"job\")\n    def ensure_job_has_compatible_values(cls, value: KubernetesManifest):\n        patch = JsonPatch.from_diff(value, cls.base_job_manifest())\n        incompatible = sorted(\n            [\n                f\"{op['path']} must have value {op['value']!r}\"\n                for op in patch\n                if op[\"op\"] == \"replace\"\n            ]\n        )\n        if incompatible:\n            raise ValueError(\n                \"Job has incompatble values for the following attributes: \"\n                f\"{', '.join(incompatible)}\"\n            )\n        return value\n\n    @validator(\"customizations\", pre=True)\n    def cast_customizations_to_a_json_patch(\n        cls, value: Union[List[Dict], JsonPatch, str]\n    ) -> JsonPatch:\n        if isinstance(value, list):\n            return JsonPatch(value)\n        elif isinstance(value, str):\n            try:\n                return JsonPatch(json.loads(value))\n            except json.JSONDecodeError as exc:\n                raise ValueError(\n                    f\"Unable to parse customizations as JSON: {value}. 
Please make sure\"\n                    \" that the provided value is a valid JSON string.\"\n                ) from exc\n        return value\n\n    @root_validator\n    def default_namespace(cls, values):\n        job = values.get(\"job\")\n\n        namespace = values.get(\"namespace\")\n        job_namespace = job[\"metadata\"].get(\"namespace\") if job else None\n\n        if not namespace and not job_namespace:\n            values[\"namespace\"] = \"default\"\n\n        return values\n\n    @root_validator\n    def default_image(cls, values):\n        job = values.get(\"job\")\n        image = values.get(\"image\")\n        job_image = (\n            job[\"spec\"][\"template\"][\"spec\"][\"containers\"][0].get(\"image\")\n            if job\n            else None\n        )\n\n        if not image and not job_image:\n            values[\"image\"] = get_prefect_image_name()\n\n        return values\n\n    # Support serialization of the 'JsonPatch' type\n    class Config:\n        arbitrary_types_allowed = True\n        json_encoders = {JsonPatch: lambda p: p.patch}\n\n    def dict(self, *args, **kwargs) -> Dict:\n        d = super().dict(*args, **kwargs)\n        d[\"customizations\"] = self.customizations.patch\n        return d\n\n    @classmethod\n    def base_job_manifest(cls) -> KubernetesManifest:\n\"\"\"Produces the bare minimum allowed Job manifest\"\"\"\n        return {\n            \"apiVersion\": \"batch/v1\",\n            \"kind\": \"Job\",\n            \"metadata\": {\"labels\": {}},\n            \"spec\": {\n                \"template\": {\n                    \"spec\": {\n                        \"parallelism\": 1,\n                        \"completions\": 1,\n                        \"restartPolicy\": \"Never\",\n                        \"containers\": [\n                            {\n                                \"name\": \"prefect-job\",\n                                \"env\": [],\n                            }\n                        ],\n                    }\n                }\n            },\n        }\n\n    # Note that we're using the yaml package to load both YAML and JSON files below.\n    # This works because YAML is a strict superset of JSON:\n    #\n    #   > The YAML 1.23 specification was published in 2009. Its primary focus was\n    #   > making YAML a strict superset of JSON. 
It also removed many of the problematic\n    #   > implicit typing recommendations.\n    #\n    #   https://yaml.org/spec/1.2.2/#12-yaml-history\n\n    @classmethod\n    def job_from_file(cls, filename: str) -> KubernetesManifest:\n\"\"\"Load a Kubernetes Job manifest from a YAML or JSON file.\"\"\"\n        with open(filename, \"r\", encoding=\"utf-8\") as f:\n            return yaml.load(f, yaml.SafeLoader)\n\n    @classmethod\n    def customize_from_file(cls, filename: str) -> JsonPatch:\n\"\"\"Load an RFC 6902 JSON patch from a YAML or JSON file.\"\"\"\n        with open(filename, \"r\", encoding=\"utf-8\") as f:\n            return JsonPatch(yaml.load(f, yaml.SafeLoader))\n\n    @sync_compatible\n    async def run(\n        self,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> KubernetesJobResult:\n        if not self.command:\n            raise ValueError(\"Kubernetes job cannot be run with empty command.\")\n\n        self._configure_kubernetes_library_client()\n        manifest = self.build_job()\n        job = await run_sync_in_worker_thread(self._create_job, manifest)\n\n        pid = await run_sync_in_worker_thread(self._get_infrastructure_pid, job)\n        # Indicate that the job has started\n        if task_status is not None:\n            task_status.started(pid)\n\n        # Monitor the job until completion\n        status_code = await run_sync_in_worker_thread(\n            self._watch_job, job.metadata.name\n        )\n        return KubernetesJobResult(identifier=pid, status_code=status_code)\n\n    async def kill(self, infrastructure_pid: str, grace_seconds: int = 30):\n        self._configure_kubernetes_library_client()\n        job_cluster_uid, job_namespace, job_name = self._parse_infrastructure_pid(\n            infrastructure_pid\n        )\n\n        if not job_namespace == self.namespace:\n            raise InfrastructureNotAvailable(\n                f\"Unable to kill job {job_name!r}: The job is running in namespace \"\n                f\"{job_namespace!r} but this block is configured to use \"\n                f\"{self.namespace!r}.\"\n            )\n\n        current_cluster_uid = self._get_cluster_uid()\n        if job_cluster_uid != current_cluster_uid:\n            raise InfrastructureNotAvailable(\n                f\"Unable to kill job {job_name!r}: The job is running on another \"\n                \"cluster.\"\n            )\n\n        with self.get_batch_client() as batch_client:\n            try:\n                batch_client.delete_namespaced_job(\n                    name=job_name,\n                    namespace=job_namespace,\n                    grace_period_seconds=grace_seconds,\n                    # Foreground propagation deletes dependent objects before deleting owner objects.\n                    # This ensures that the pods are cleaned up before the job is marked as deleted.\n                    # See: https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion\n                    propagation_policy=\"Foreground\",\n                )\n            except kubernetes.client.exceptions.ApiException as exc:\n                if exc.status == 404:\n                    raise InfrastructureNotFound(\n                        f\"Unable to kill job {job_name!r}: The job was not found.\"\n                    ) from exc\n                else:\n                    raise\n\n    def preview(self):\n        return yaml.dump(self.build_job())\n\n    def build_job(self) -> KubernetesManifest:\n\"\"\"Builds 
the Kubernetes Job Manifest\"\"\"\n        job_manifest = copy.copy(self.job)\n        job_manifest = self._shortcut_customizations().apply(job_manifest)\n        job_manifest = self.customizations.apply(job_manifest)\n        return job_manifest\n\n    @contextmanager\n    def get_batch_client(self) -> Generator[\"BatchV1Api\", None, None]:\n        with kubernetes.client.ApiClient() as client:\n            try:\n                yield kubernetes.client.BatchV1Api(api_client=client)\n            finally:\n                client.rest_client.pool_manager.clear()\n\n    @contextmanager\n    def get_client(self) -> Generator[\"CoreV1Api\", None, None]:\n        with kubernetes.client.ApiClient() as client:\n            try:\n                yield kubernetes.client.CoreV1Api(api_client=client)\n            finally:\n                client.rest_client.pool_manager.clear()\n\n    def _get_infrastructure_pid(self, job: \"V1Job\") -> str:\n\"\"\"\n        Generates a Kubernetes infrastructure PID.\n\n        The PID is in the format: \"<cluster uid>:<namespace>:<job name>\".\n        \"\"\"\n        cluster_uid = self._get_cluster_uid()\n        pid = f\"{cluster_uid}:{self.namespace}:{job.metadata.name}\"\n        return pid\n\n    def _parse_infrastructure_pid(\n        self, infrastructure_pid: str\n    ) -> Tuple[str, str, str]:\n\"\"\"\n        Parse a Kubernetes infrastructure PID into its component parts.\n\n        Returns a cluster UID, namespace, and job name.\n        \"\"\"\n        cluster_uid, namespace, job_name = infrastructure_pid.split(\":\", 2)\n        return cluster_uid, namespace, job_name\n\n    def _get_cluster_uid(self) -> str:\n\"\"\"\n        Gets a unique id for the current cluster being used.\n\n        There is no real unique identifier for a cluster. However, the `kube-system`\n        namespace is immutable and has a persistence UID that we use instead.\n\n        PREFECT_KUBERNETES_CLUSTER_UID can be set in cases where the `kube-system`\n        namespace cannot be read e.g. when a cluster role cannot be created. If set,\n        this variable will be used and we will not attempt to read the `kube-system`\n        namespace.\n\n        See https://github.com/kubernetes/kubernetes/issues/44954\n        \"\"\"\n        # Default to an environment variable\n        env_cluster_uid = os.environ.get(\"PREFECT_KUBERNETES_CLUSTER_UID\")\n        if env_cluster_uid:\n            return env_cluster_uid\n\n        # Read the UID from the cluster namespace\n        with self.get_client() as client:\n            namespace = client.read_namespace(\"kube-system\")\n        cluster_uid = namespace.metadata.uid\n\n        return cluster_uid\n\n    def _configure_kubernetes_library_client(self) -> None:\n\"\"\"\n        Set the correct kubernetes client configuration.\n\n        WARNING: This action is not threadsafe and may override the configuration\n                  specified by another `KubernetesJob` instance.\n        \"\"\"\n        # TODO: Investigate returning a configured client so calls on other threads\n        #       will not invalidate the config needed here\n\n        # if a k8s cluster block is provided to the flow runner, use that\n        if self.cluster_config:\n            self.cluster_config.configure_client()\n        else:\n            # If no block specified, try to load Kubernetes configuration within a cluster. 
If that doesn't\n            # work, try to load the configuration from the local environment, allowing\n            # any further ConfigExceptions to bubble up.\n            try:\n                kubernetes.config.load_incluster_config()\n            except kubernetes.config.ConfigException:\n                kubernetes.config.load_kube_config()\n\n    def _shortcut_customizations(self) -> JsonPatch:\n\"\"\"Produces the JSON 6902 patch for the most commonly used customizations, like\n        image and namespace, which we offer as top-level parameters (with sensible\n        default values)\"\"\"\n        shortcuts = []\n\n        if self.namespace:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/metadata/namespace\",\n                    \"value\": self.namespace,\n                }\n            )\n\n        if self.image:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/template/spec/containers/0/image\",\n                    \"value\": self.image,\n                }\n            )\n\n        shortcuts += [\n            {\n                \"op\": \"add\",\n                \"path\": (\n                    f\"/metadata/labels/{self._slugify_label_key(key).replace('/', '~1', 1)}\"\n                ),\n                \"value\": self._slugify_label_value(value),\n            }\n            for key, value in self.labels.items()\n        ]\n\n        shortcuts += [\n            {\n                \"op\": \"add\",\n                \"path\": \"/spec/template/spec/containers/0/env/-\",\n                \"value\": {\"name\": key, \"value\": value},\n            }\n            for key, value in self._get_environment_variables().items()\n        ]\n\n        if self.image_pull_policy:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/template/spec/containers/0/imagePullPolicy\",\n                    \"value\": self.image_pull_policy.value,\n                }\n            )\n\n        if self.service_account_name:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/template/spec/serviceAccountName\",\n                    \"value\": self.service_account_name,\n                }\n            )\n\n        if self.finished_job_ttl is not None:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/ttlSecondsAfterFinished\",\n                    \"value\": self.finished_job_ttl,\n                }\n            )\n\n        if self.command:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/spec/template/spec/containers/0/args\",\n                    \"value\": self.command,\n                }\n            )\n\n        if self.name:\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/metadata/generateName\",\n                    \"value\": self._slugify_name(self.name) + \"-\",\n                }\n            )\n        else:\n            # Generate name is required\n            shortcuts.append(\n                {\n                    \"op\": \"add\",\n                    \"path\": \"/metadata/generateName\",\n                    \"value\": (\n                        \"prefect-job-\"\n               
         # We generate a name using a hash of the primary job settings\n                        + stable_hash(\n                            *self.command,\n                            *self.env.keys(),\n                            *[v for v in self.env.values() if v is not None],\n                        )\n                        + \"-\"\n                    ),\n                }\n            )\n\n        return JsonPatch(shortcuts)\n\n    def _get_job(self, job_id: str) -> Optional[\"V1Job\"]:\n        with self.get_batch_client() as batch_client:\n            try:\n                job = batch_client.read_namespaced_job(job_id, self.namespace)\n            except kubernetes.client.exceptions.ApiException:\n                self.logger.error(f\"Job {job_id!r} was removed.\", exc_info=True)\n                return None\n            return job\n\n    def _get_job_pod(self, job_name: str) -> \"V1Pod\":\n\"\"\"Get the first running pod for a job.\"\"\"\n\n        # Wait until we find a running pod for the job\n        # if `pod_watch_timeout_seconds` is None, no timeout will be enforced\n        watch = kubernetes.watch.Watch()\n        self.logger.debug(f\"Job {job_name!r}: Starting watch for pod start...\")\n        last_phase = None\n        with self.get_client() as client:\n            for event in watch.stream(\n                func=client.list_namespaced_pod,\n                namespace=self.namespace,\n                label_selector=f\"job-name={job_name}\",\n                timeout_seconds=self.pod_watch_timeout_seconds,\n            ):\n                phase = event[\"object\"].status.phase\n                if phase != last_phase:\n                    self.logger.info(f\"Job {job_name!r}: Pod has status {phase!r}.\")\n\n                if phase != \"Pending\":\n                    watch.stop()\n                    return event[\"object\"]\n\n                last_phase = phase\n\n        self.logger.error(f\"Job {job_name!r}: Pod never started.\")\n\n    def _watch_job(self, job_name: str) -> int:\n\"\"\"\n        Watch a job.\n\n        Return the final status code of the first container.\n        \"\"\"\n        self.logger.debug(f\"Job {job_name!r}: Monitoring job...\")\n\n        job = self._get_job(job_name)\n        if not job:\n            return -1\n\n        pod = self._get_job_pod(job_name)\n        if not pod:\n            return -1\n\n        # Calculate the deadline before streaming output\n        deadline = (\n            (time.monotonic() + self.job_watch_timeout_seconds)\n            if self.job_watch_timeout_seconds is not None\n            else None\n        )\n\n        if self.stream_output:\n            with self.get_client() as client:\n                logs = client.read_namespaced_pod_log(\n                    pod.metadata.name,\n                    self.namespace,\n                    follow=True,\n                    _preload_content=False,\n                    container=\"prefect-job\",\n                )\n                try:\n                    for log in logs.stream():\n                        print(log.decode().rstrip())\n\n                        # Check if we have passed the deadline and should stop streaming\n                        # logs\n                        remaining_time = (\n                            deadline - time.monotonic() if deadline else None\n                        )\n                        if deadline and remaining_time <= 0:\n                            break\n\n                except Exception:\n                    
self.logger.warning(\n                        (\n                            \"Error occurred while streaming logs - \"\n                            \"Job will continue to run but logs will \"\n                            \"no longer be streamed to stdout.\"\n                        ),\n                        exc_info=True,\n                    )\n\n        with self.get_batch_client() as batch_client:\n            # Check if the job is completed before beginning a watch\n            job = batch_client.read_namespaced_job(\n                name=job_name, namespace=self.namespace\n            )\n            completed = job.status.completion_time is not None\n\n            while not completed:\n                remaining_time = (\n                    math.ceil(deadline - time.monotonic()) if deadline else None\n                )\n                if deadline and remaining_time <= 0:\n                    self.logger.error(\n                        f\"Job {job_name!r}: Job did not complete within \"\n                        f\"timeout of {self.job_watch_timeout_seconds}s.\"\n                    )\n                    return -1\n\n                watch = kubernetes.watch.Watch()\n                # The kubernetes library will disable retries if the timeout kwarg is\n                # present regardless of the value so we do not pass it unless given\n                # https://github.com/kubernetes-client/python/blob/84f5fea2a3e4b161917aa597bf5e5a1d95e24f5a/kubernetes/base/watch/watch.py#LL160\n                timeout_seconds = (\n                    {\"timeout_seconds\": remaining_time} if deadline else {}\n                )\n\n                for event in watch.stream(\n                    func=batch_client.list_namespaced_job,\n                    field_selector=f\"metadata.name={job_name}\",\n                    namespace=self.namespace,\n                    **timeout_seconds,\n                ):\n                    if event[\"type\"] == \"DELETED\":\n                        self.logger.error(f\"Job {job_name!r}: Job has been deleted.\")\n                        completed = True\n                    elif event[\"object\"].status.completion_time:\n                        if not event[\"object\"].status.succeeded:\n                            # Job failed, exit while loop and return pod exit code\n                            self.logger.error(f\"Job {job_name!r}: Job failed.\")\n                        completed = True\n                    # Check if the job has reached its backoff limit\n                    # and stop watching if it has\n                    elif (\n                        event[\"object\"].spec.backoff_limit is not None\n                        and event[\"object\"].status.failed is not None\n                        and event[\"object\"].status.failed\n                        > event[\"object\"].spec.backoff_limit\n                    ):\n                        self.logger.error(\n                            f\"Job {job_name!r}: Job reached backoff limit.\"\n                        )\n                        completed = True\n                    # If the job has no backoff limit, check if it has failed\n                    # and stop watching if it has\n                    elif (\n                        not event[\"object\"].spec.backoff_limit\n                        and event[\"object\"].status.failed\n                    ):\n                        completed = True\n\n                    if completed:\n                        watch.stop()\n                        break\n\n 
       with self.get_client() as client:\n            # Get all pods for the job\n            pods = client.list_namespaced_pod(\n                namespace=self.namespace, label_selector=f\"job-name={job_name}\"\n            )\n            # Get the status for only the most recently used pod\n            pods.items.sort(\n                key=lambda pod: pod.metadata.creation_timestamp, reverse=True\n            )\n            most_recent_pod = pods.items[0] if pods.items else None\n            first_container_status = (\n                most_recent_pod.status.container_statuses[0]\n                if most_recent_pod\n                else None\n            )\n            if not first_container_status:\n                self.logger.error(f\"Job {job_name!r}: No pods found for job.\")\n                return -1\n\n        return first_container_status.state.terminated.exit_code\n\n    def _create_job(self, job_manifest: KubernetesManifest) -> \"V1Job\":\n\"\"\"\n        Given a Kubernetes Job Manifest, create the Job on the configured Kubernetes\n        cluster and return its name.\n        \"\"\"\n        with self.get_batch_client() as batch_client:\n            job = batch_client.create_namespaced_job(self.namespace, job_manifest)\n        return job\n\n    def _slugify_name(self, name: str) -> str:\n\"\"\"\n        Slugify text for use as a name.\n\n        Keeps only alphanumeric characters and dashes, and caps the length\n        of the slug at 45 chars.\n\n        The 45 character length allows room for the k8s utility\n        \"generateName\" to generate a unique name from the slug while\n        keeping the total length of a name below 63 characters, which is\n        the limit for e.g. label names that follow RFC 1123 (hostnames) and\n        RFC 1035 (domain names).\n\n        Args:\n            name: The name of the job\n\n        Returns:\n            the slugified job name\n        \"\"\"\n        slug = slugify(\n            name,\n            max_length=45,  # Leave enough space for generateName\n            regex_pattern=r\"[^a-zA-Z0-9-]+\",\n        )\n\n        # TODO: Handle the case that the name is an empty string after being\n        # slugified.\n\n        return slug\n\n    def _slugify_label_key(self, key: str) -> str:\n\"\"\"\n        Slugify text for use as a label key.\n\n        Keys are composed of an optional prefix and name, separated by a slash (/).\n\n        Keeps only alphanumeric characters, dashes, underscores, and periods.\n        Limits the length of the label prefix to 253 characters.\n        Limits the length of the label name to 63 characters.\n\n        See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set\n\n        Args:\n            key: The label key\n\n        Returns:\n            The slugified label key\n        \"\"\"\n        if \"/\" in key:\n            prefix, name = key.split(\"/\", maxsplit=1)\n        else:\n            prefix = None\n            name = key\n\n        name_slug = (\n            slugify(name, max_length=63, regex_pattern=r\"[^a-zA-Z0-9-_.]+\").strip(\n                \"_-.\"  # Must start or end with alphanumeric characters\n            )\n            or name\n        )\n        # Fallback to the original if we end up with an empty slug, this will allow\n        # Kubernetes to throw the validation error\n\n        if prefix:\n            prefix_slug = (\n                slugify(\n                    prefix,\n                    max_length=253,\n                    
regex_pattern=r\"[^a-zA-Z0-9-\\.]+\",\n                ).strip(\n                    \"_-.\"\n                )  # Must start or end with alphanumeric characters\n                or prefix\n            )\n\n            return f\"{prefix_slug}/{name_slug}\"\n\n        return name_slug\n\n    def _slugify_label_value(self, value: str) -> str:\n\"\"\"\n        Slugify text for use as a label value.\n\n        Keeps only alphanumeric characters, dashes, underscores, and periods.\n        Limits the total length of label text to below 63 characters.\n\n        See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set\n\n        Args:\n            value: The text for the label\n\n        Returns:\n            The slugified value\n        \"\"\"\n        slug = (\n            slugify(value, max_length=63, regex_pattern=r\"[^a-zA-Z0-9-_\\.]+\").strip(\n                \"_-.\"  # Must start or end with alphanumeric characters\n            )\n            or value\n        )\n        # Fallback to the original if we end up with an empty slug, this will allow\n        # Kubernetes to throw the validation error\n\n        return slug\n\n    def _get_environment_variables(self):\n        # If the API URL has been set by the base environment rather than the by the\n        # user, update the value to ensure connectivity when using a bridge network by\n        # updating local connections to use the internal host\n        env = {**self._base_environment(), **self.env}\n\n        if (\n            \"PREFECT_API_URL\" in env\n            and \"PREFECT_API_URL\" not in self.env\n            and self._api_dns_name\n        ):\n            env[\"PREFECT_API_URL\"] = (\n                env[\"PREFECT_API_URL\"]\n                .replace(\"localhost\", self._api_dns_name)\n                .replace(\"127.0.0.1\", self._api_dns_name)\n            )\n\n        # Drop null values allowing users to \"unset\" variables\n        return {key: value for key, value in env.items() if value is not None}\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.kubernetes.KubernetesJob.base_job_manifest","title":"base_job_manifest classmethod","text":"

Produces the bare minimum allowed Job manifest

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/kubernetes.py
@classmethod\ndef base_job_manifest(cls) -> KubernetesManifest:\n\"\"\"Produces the bare minimum allowed Job manifest\"\"\"\n    return {\n        \"apiVersion\": \"batch/v1\",\n        \"kind\": \"Job\",\n        \"metadata\": {\"labels\": {}},\n        \"spec\": {\n            \"template\": {\n                \"spec\": {\n                    \"parallelism\": 1,\n                    \"completions\": 1,\n                    \"restartPolicy\": \"Never\",\n                    \"containers\": [\n                        {\n                            \"name\": \"prefect-job\",\n                            \"env\": [],\n                        }\n                    ],\n                }\n            }\n        },\n    }\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.kubernetes.KubernetesJob.build_job","title":"build_job","text":"

Builds the Kubernetes Job Manifest

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/kubernetes.py
def build_job(self) -> KubernetesManifest:\n\"\"\"Builds the Kubernetes Job Manifest\"\"\"\n    job_manifest = copy.copy(self.job)\n    job_manifest = self._shortcut_customizations().apply(job_manifest)\n    job_manifest = self.customizations.apply(job_manifest)\n    return job_manifest\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.kubernetes.KubernetesJob.customize_from_file","title":"customize_from_file classmethod","text":"

Load an RFC 6902 JSON patch from a YAML or JSON file.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/kubernetes.py
@classmethod\ndef customize_from_file(cls, filename: str) -> JsonPatch:\n\"\"\"Load an RFC 6902 JSON patch from a YAML or JSON file.\"\"\"\n    with open(filename, \"r\", encoding=\"utf-8\") as f:\n        return JsonPatch(yaml.load(f, yaml.SafeLoader))\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.kubernetes.KubernetesJob.job_from_file","title":"job_from_file classmethod","text":"

Load a Kubernetes Job manifest from a YAML or JSON file.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/kubernetes.py
@classmethod\ndef job_from_file(cls, filename: str) -> KubernetesManifest:\n\"\"\"Load a Kubernetes Job manifest from a YAML or JSON file.\"\"\"\n    with open(filename, \"r\", encoding=\"utf-8\") as f:\n        return yaml.load(f, yaml.SafeLoader)\n
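
For example, a base manifest and an RFC 6902 patch can be loaded from local files and combined on a KubernetesJob before building the final manifest. This is a minimal sketch; the file names are illustrative placeholders, not files shipped with Prefect.

from prefect.infrastructure import KubernetesJob\n\n# Illustrative file names - point these at your own manifest and patch\nk8s_job = KubernetesJob(\n    job=KubernetesJob.job_from_file(\"base-job.yaml\"),\n    customizations=KubernetesJob.customize_from_file(\"patch.yaml\"),\n)\nmanifest = k8s_job.build_job()  # applies shortcut customizations, then the patch\n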
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.KubernetesJobResult","title":"KubernetesJobResult","text":"

Bases: InfrastructureResult

Contains information about the final state of a completed Kubernetes Job

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/kubernetes.py
class KubernetesJobResult(InfrastructureResult):\n\"\"\"Contains information about the final state of a completed Kubernetes Job\"\"\"\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.Process","title":"Process","text":"

Bases: Infrastructure

Run a command in a new process.

Current environment variables and Prefect settings will be included in the created process. Configured environment variables will override any current environment variables.

Attributes:

command: A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this.

env: Environment variables to set for the new process.

labels: Labels for the process. Labels are for metadata purposes only and cannot be attached to the process itself.

name: A name for the process. For display purposes only.

stream_output (bool): Whether to stream output to local stdout.

working_dir (Union[str, Path, None]): Working directory where the process should be opened. If not set, a tmp directory will be used.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/process.py
class Process(Infrastructure):\n\"\"\"\n    Run a command in a new process.\n\n    Current environment variables and Prefect settings will be included in the created\n    process. Configured environment variables will override any current environment\n    variables.\n\n    Attributes:\n        command: A list of strings specifying the command to run in the container to\n            start the flow run. In most cases you should not override this.\n        env: Environment variables to set for the new process.\n        labels: Labels for the process. Labels are for metadata purposes only and\n            cannot be attached to the process itself.\n        name: A name for the process. For display purposes only.\n        stream_output: Whether to stream output to local stdout.\n        working_dir: Working directory where the process should be opened. If not set,\n            a tmp directory will be used.\n    \"\"\"\n\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/39WQhVu4JK40rZWltGqhuC/d15be6189a0cb95949a6b43df00dcb9b/image5.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/concepts/infrastructure/#process\"\n\n    type: Literal[\"process\"] = Field(\n        default=\"process\", description=\"The type of infrastructure.\"\n    )\n    stream_output: bool = Field(\n        default=True,\n        description=(\n            \"If set, output will be streamed from the process to local standard output.\"\n        ),\n    )\n    working_dir: Union[str, Path, None] = Field(\n        default=None,\n        description=(\n            \"If set, the process will open within the specified path as the working\"\n            \" directory. Otherwise, a temporary directory will be created.\"\n        ),\n    )  # Underlying accepted types are str, bytes, PathLike[str], None\n\n    @sync_compatible\n    async def run(\n        self,\n        task_status: anyio.abc.TaskStatus = None,\n    ) -> \"ProcessResult\":\n        if not self.command:\n            raise ValueError(\"Process cannot be run with empty command.\")\n\n        _use_threaded_child_watcher()\n        display_name = f\" {self.name!r}\" if self.name else \"\"\n\n        # Open a subprocess to execute the flow run\n        self.logger.info(f\"Opening process{display_name}...\")\n        working_dir_ctx = (\n            tempfile.TemporaryDirectory(suffix=\"prefect\")\n            if not self.working_dir\n            else contextlib.nullcontext(self.working_dir)\n        )\n        with working_dir_ctx as working_dir:\n            self.logger.debug(\n                f\"Process{display_name} running command: {' '.join(self.command)} in\"\n                f\" {working_dir}\"\n            )\n\n            # We must add creationflags to a dict so it is only passed as a function\n            # parameter on Windows, because the presence of creationflags causes\n            # errors on Unix even if set to None\n            kwargs: Dict[str, object] = {}\n            if sys.platform == \"win32\":\n                kwargs[\"creationflags\"] = subprocess.CREATE_NEW_PROCESS_GROUP\n\n            process = await run_process(\n                self.command,\n                stream_output=self.stream_output,\n                task_status=task_status,\n                task_status_handler=_infrastructure_pid_from_process,\n                env=self._get_environment_variables(),\n                cwd=working_dir,\n                **kwargs,\n            )\n\n        # Use the pid for display if no name was given\n        display_name = 
display_name or f\" {process.pid}\"\n\n        if process.returncode:\n            help_message = None\n            if process.returncode == -9:\n                help_message = (\n                    \"This indicates that the process exited due to a SIGKILL signal. \"\n                    \"Typically, this is either caused by manual cancellation or \"\n                    \"high memory usage causing the operating system to \"\n                    \"terminate the process.\"\n                )\n            if process.returncode == -15:\n                help_message = (\n                    \"This indicates that the process exited due to a SIGTERM signal. \"\n                    \"Typically, this is caused by manual cancellation.\"\n                )\n            elif process.returncode == 247:\n                help_message = (\n                    \"This indicates that the process was terminated due to high \"\n                    \"memory usage.\"\n                )\n            elif (\n                sys.platform == \"win32\" and process.returncode == STATUS_CONTROL_C_EXIT\n            ):\n                help_message = (\n                    \"Process was terminated due to a Ctrl+C or Ctrl+Break signal. \"\n                    \"Typically, this is caused by manual cancellation.\"\n                )\n\n            self.logger.error(\n                f\"Process{display_name} exited with status code: {process.returncode}\"\n                + (f\"; {help_message}\" if help_message else \"\")\n            )\n        else:\n            self.logger.info(f\"Process{display_name} exited cleanly.\")\n\n        return ProcessResult(\n            status_code=process.returncode, identifier=str(process.pid)\n        )\n\n    async def kill(self, infrastructure_pid: str, grace_seconds: int = 30):\n        hostname, pid = _parse_infrastructure_pid(infrastructure_pid)\n\n        if hostname != socket.gethostname():\n            raise InfrastructureNotAvailable(\n                f\"Unable to kill process {pid!r}: The process is running on a different\"\n                f\" host {hostname!r}.\"\n            )\n\n        # In a non-windows environment first send a SIGTERM, then, after\n        # `grace_seconds` seconds have passed subsequent send SIGKILL. In\n        # Windows we use CTRL_BREAK_EVENT as SIGTERM is useless:\n        # https://bugs.python.org/issue26350\n        if sys.platform == \"win32\":\n            try:\n                os.kill(pid, signal.CTRL_BREAK_EVENT)\n            except (ProcessLookupError, WindowsError):\n                raise InfrastructureNotFound(\n                    f\"Unable to kill process {pid!r}: The process was not found.\"\n                )\n        else:\n            try:\n                os.kill(pid, signal.SIGTERM)\n            except ProcessLookupError:\n                raise InfrastructureNotFound(\n                    f\"Unable to kill process {pid!r}: The process was not found.\"\n                )\n\n            # Throttle how often we check if the process is still alive to keep\n            # from making too many system calls in a short period of time.\n            check_interval = max(grace_seconds / 10, 1)\n\n            with anyio.move_on_after(grace_seconds):\n                while True:\n                    await anyio.sleep(check_interval)\n\n                    # Detect if the process is still alive. 
If not do an early\n                    # return as the process respected the SIGTERM from above.\n                    try:\n                        os.kill(pid, 0)\n                    except ProcessLookupError:\n                        return\n\n            try:\n                os.kill(pid, signal.SIGKILL)\n            except OSError:\n                # We shouldn't ever end up here, but it's possible that the\n                # process ended right after the check above.\n                return\n\n    def preview(self):\n        environment = self._get_environment_variables(include_os_environ=False)\n        return \" \\\\\\n\".join(\n            [f\"{key}={value}\" for key, value in environment.items()]\n            + [\" \".join(self.command)]\n        )\n\n    def _get_environment_variables(self, include_os_environ: bool = True):\n        os_environ = os.environ if include_os_environ else {}\n        # The base environment must override the current environment or\n        # the Prefect settings context may not be respected\n        env = {**os_environ, **self._base_environment(), **self.env}\n\n        # Drop null values allowing users to \"unset\" variables\n        return {key: value for key, value in env.items() if value is not None}\n\n    def _base_flow_run_command(self):\n        return [sys.executable, \"-m\", \"prefect.engine\"]\n
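
As a minimal sketch (not taken verbatim from the Prefect docs), a Process can be run directly with an explicit command; run is sync-compatible, so it can be called from synchronous code:

from prefect.infrastructure import Process\n\nprocess = Process(command=[\"python\", \"-c\", \"print('hello')\"], stream_output=True)\nresult = process.run()  # blocks until the subprocess exits\nassert result.status_code == 0\n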
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/infrastructure/#prefect.infrastructure.ProcessResult","title":"ProcessResult","text":"

Bases: InfrastructureResult

Contains information about the final state of a completed process

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/infrastructure/process.py
class ProcessResult(InfrastructureResult):\n\"\"\"Contains information about the final state of a completed process\"\"\"\n
","tags":["Python API","infrastructure","Docker","Kubernetes","subprocess","process"]},{"location":"api-ref/prefect/logging/","title":"prefect.logging","text":"","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/#prefect.logging","title":"prefect.logging","text":"","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/#prefect.logging.get_logger","title":"get_logger cached","text":"

Get a prefect logger. These loggers are intended for internal use within the prefect package.

See get_run_logger for retrieving loggers for use within task or flow runs. By default, only run-related loggers are connected to the APILogHandler.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/logging/loggers.py
@lru_cache()\ndef get_logger(name: str = None) -> logging.Logger:\n\"\"\"\n    Get a `prefect` logger. These loggers are intended for internal use within the\n    `prefect` package.\n\n    See `get_run_logger` for retrieving loggers for use within task or flow runs.\n    By default, only run-related loggers are connected to the `APILogHandler`.\n    \"\"\"\n\n    parent_logger = logging.getLogger(\"prefect\")\n\n    if name:\n        # Append the name if given but allow explicit full names e.g. \"prefect.test\"\n        # should not become \"prefect.prefect.test\"\n        if not name.startswith(parent_logger.name + \".\"):\n            logger = parent_logger.getChild(name)\n        else:\n            logger = logging.getLogger(name)\n    else:\n        logger = parent_logger\n\n    return logger\n
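
For example (a minimal sketch), passing a name returns a child of the prefect logger:

from prefect.logging import get_logger\n\nlogger = get_logger(\"my_module\")\nassert logger.name == \"prefect.my_module\"\nlogger.debug(\"internal diagnostic message\")\n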
","tags":["Python API","logging"]},{"location":"api-ref/prefect/logging/#prefect.logging.get_run_logger","title":"get_run_logger","text":"

Get a Prefect logger for the current task run or flow run.

The logger will be named either prefect.task_runs or prefect.flow_runs. Contextual data about the run will be attached to the log records.

These loggers are connected to the APILogHandler by default to send log records to the API.

Parameters:

context (RunContext, default: None): A specific context may be provided as an override. By default, the context is inferred from global state and this should not be needed.

**kwargs (str, default: {}): Additional keyword arguments will be attached to the log records in addition to the run metadata.

Raises:

RuntimeError: If no context can be found.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/logging/loggers.py
def get_run_logger(\n    context: \"RunContext\" = None, **kwargs: str\n) -> Union[logging.Logger, logging.LoggerAdapter]:\n\"\"\"\n    Get a Prefect logger for the current task run or flow run.\n\n    The logger will be named either `prefect.task_runs` or `prefect.flow_runs`.\n    Contextual data about the run will be attached to the log records.\n\n    These loggers are connected to the `APILogHandler` by default to send log records to\n    the API.\n\n    Arguments:\n        context: A specific context may be provided as an override. By default, the\n            context is inferred from global state and this should not be needed.\n        **kwargs: Additional keyword arguments will be attached to the log records in\n            addition to the run metadata\n\n    Raises:\n        RuntimeError: If no context can be found\n    \"\"\"\n    # Check for existing contexts\n    task_run_context = prefect.context.TaskRunContext.get()\n    flow_run_context = prefect.context.FlowRunContext.get()\n\n    # Apply the context override\n    if context:\n        if isinstance(context, prefect.context.FlowRunContext):\n            flow_run_context = context\n        elif isinstance(context, prefect.context.TaskRunContext):\n            task_run_context = context\n        else:\n            raise TypeError(\n                f\"Received unexpected type {type(context).__name__!r} for context. \"\n                \"Expected one of 'None', 'FlowRunContext', or 'TaskRunContext'.\"\n            )\n\n    # Determine if this is a task or flow run logger\n    if task_run_context:\n        logger = task_run_logger(\n            task_run=task_run_context.task_run,\n            task=task_run_context.task,\n            flow_run=flow_run_context.flow_run if flow_run_context else None,\n            flow=flow_run_context.flow if flow_run_context else None,\n            **kwargs,\n        )\n    elif flow_run_context:\n        logger = flow_run_logger(\n            flow_run=flow_run_context.flow_run, flow=flow_run_context.flow, **kwargs\n        )\n    elif (\n        get_logger(\"prefect.flow_run\").disabled\n        and get_logger(\"prefect.task_run\").disabled\n    ):\n        logger = logging.getLogger(\"null\")\n    else:\n        raise MissingContextError(\"There is no active flow or task run context.\")\n\n    return logger\n
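
For example, inside a flow the run logger is inferred from the active flow run context (a minimal sketch):

from prefect import flow, get_run_logger\n\n@flow\ndef my_flow():\n    logger = get_run_logger()\n    logger.info(\"Hello from the flow run!\")\n\nif __name__ == \"__main__\":\n    my_flow()\n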
","tags":["Python API","logging"]},{"location":"api-ref/prefect/packaging/","title":"Packaging","text":"","tags":["Python API","packaging","deployments"]},{"location":"api-ref/prefect/packaging/#prefect.packaging","title":"prefect.packaging","text":"","tags":["Python API","packaging","deployments"]},{"location":"api-ref/prefect/serializers/","title":"prefect.serializers","text":"","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers","title":"prefect.serializers","text":"

Serializer implementations for converting objects to bytes and bytes to objects.

All serializers are based on the Serializer class and include a type string that allows them to be referenced without referencing the actual class. For example, you can often specify the JSONSerializer with the string \"json\". Some serializers support additional settings to configure serialization. These are stored on the instance so the same settings can be used to load saved objects.

All serializers must implement dumps and loads which convert objects to bytes and bytes to an object respectively.

","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedJSONSerializer","title":"CompressedJSONSerializer","text":"

Bases: CompressedSerializer

A compressed serializer preconfigured to use the json serializer.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
class CompressedJSONSerializer(CompressedSerializer):\n\"\"\"\n    A compressed serializer preconfigured to use the json serializer.\n    \"\"\"\n\n    type: Literal[\"compressed/json\"] = \"compressed/json\"\n    serializer: Serializer = pydantic.Field(default_factory=JSONSerializer)\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedPickleSerializer","title":"CompressedPickleSerializer","text":"

Bases: CompressedSerializer

A compressed serializer preconfigured to use the pickle serializer.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
class CompressedPickleSerializer(CompressedSerializer):\n\"\"\"\n    A compressed serializer preconfigured to use the pickle serializer.\n    \"\"\"\n\n    type: Literal[\"compressed/pickle\"] = \"compressed/pickle\"\n    serializer: Serializer = pydantic.Field(default_factory=PickleSerializer)\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedSerializer","title":"CompressedSerializer","text":"

Bases: Serializer

Wraps another serializer, compressing its output. Uses lzma by default. See compressionlib for using alternative libraries.

Attributes:

serializer (Serializer): The serializer to use before compression.

compressionlib (str): The import path of a compression module to use. Must have methods compress(bytes) -> bytes and decompress(bytes) -> bytes.

level (str): If not null, the level of compression to pass to compress.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
class CompressedSerializer(Serializer):\n\"\"\"\n    Wraps another serializer, compressing its output.\n    Uses `lzma` by default. See `compressionlib` for using alternative libraries.\n\n    Attributes:\n        serializer: The serializer to use before compression.\n        compressionlib: The import path of a compression module to use.\n            Must have methods `compress(bytes) -> bytes` and `decompress(bytes) -> bytes`.\n        level: If not null, the level of compression to pass to `compress`.\n    \"\"\"\n\n    type: Literal[\"compressed\"] = \"compressed\"\n\n    serializer: Serializer\n    compressionlib: str = \"lzma\"\n\n    @pydantic.validator(\"serializer\", pre=True)\n    def cast_type_names_to_serializers(cls, value):\n        if isinstance(value, str):\n            return Serializer(type=value)\n        return value\n\n    @pydantic.validator(\"compressionlib\")\n    def check_compressionlib(cls, value):\n\"\"\"\n        Check that the given pickle library is importable and has compress/decompress\n        methods.\n        \"\"\"\n        try:\n            compresser = from_qualified_name(value)\n        except (ImportError, AttributeError) as exc:\n            raise ValueError(\n                f\"Failed to import requested compression library: {value!r}.\"\n            ) from exc\n\n        if not callable(getattr(compresser, \"compress\", None)):\n            raise ValueError(\n                f\"Compression library at {value!r} does not have a 'compress' method.\"\n            )\n\n        if not callable(getattr(compresser, \"decompress\", None)):\n            raise ValueError(\n                f\"Compression library at {value!r} does not have a 'decompress' method.\"\n            )\n\n        return value\n\n    def dumps(self, obj: Any) -> bytes:\n        blob = self.serializer.dumps(obj)\n        compresser = from_qualified_name(self.compressionlib)\n        return base64.encodebytes(compresser.compress(blob))\n\n    def loads(self, blob: bytes) -> Any:\n        compresser = from_qualified_name(self.compressionlib)\n        uncompressed = compresser.decompress(base64.decodebytes(blob))\n        return self.serializer.loads(uncompressed)\n
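
As a minimal sketch, the wrapped serializer can be given as an instance or as a type string, which the validator casts to the matching serializer:

from prefect.serializers import CompressedSerializer, JSONSerializer\n\nserializer = CompressedSerializer(serializer=JSONSerializer())  # or serializer=\"json\"\nblob = serializer.dumps({\"answer\": 42})  # lzma-compressed, base64-encoded bytes\nassert serializer.loads(blob) == {\"answer\": 42}\n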
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.CompressedSerializer.check_compressionlib","title":"check_compressionlib","text":"

Check that the given pickle library is importable and has compress/decompress methods.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
@pydantic.validator(\"compressionlib\")\ndef check_compressionlib(cls, value):\n\"\"\"\n    Check that the given pickle library is importable and has compress/decompress\n    methods.\n    \"\"\"\n    try:\n        compresser = from_qualified_name(value)\n    except (ImportError, AttributeError) as exc:\n        raise ValueError(\n            f\"Failed to import requested compression library: {value!r}.\"\n        ) from exc\n\n    if not callable(getattr(compresser, \"compress\", None)):\n        raise ValueError(\n            f\"Compression library at {value!r} does not have a 'compress' method.\"\n        )\n\n    if not callable(getattr(compresser, \"decompress\", None)):\n        raise ValueError(\n            f\"Compression library at {value!r} does not have a 'decompress' method.\"\n        )\n\n    return value\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.JSONSerializer","title":"JSONSerializer","text":"

Bases: Serializer

Serializes data to JSON.

Input types must be compatible with the stdlib json library.

Wraps the json library to serialize to UTF-8 bytes instead of string types.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
class JSONSerializer(Serializer):\n\"\"\"\n    Serializes data to JSON.\n\n    Input types must be compatible with the stdlib json library.\n\n    Wraps the `json` library to serialize to UTF-8 bytes instead of string types.\n    \"\"\"\n\n    type: Literal[\"json\"] = \"json\"\n    jsonlib: str = \"json\"\n    object_encoder: Optional[str] = pydantic.Field(\n        default=\"prefect.serializers.prefect_json_object_encoder\",\n        description=(\n            \"An optional callable to use when serializing objects that are not \"\n            \"supported by the JSON encoder. By default, this is set to a callable that \"\n            \"adds support for all types supported by Pydantic.\"\n        ),\n    )\n    object_decoder: Optional[str] = pydantic.Field(\n        default=\"prefect.serializers.prefect_json_object_decoder\",\n        description=(\n            \"An optional callable to use when deserializing objects. This callable \"\n            \"is passed each dictionary encountered during JSON deserialization. \"\n            \"By default, this is set to a callable that deserializes content created \"\n            \"by our default `object_encoder`.\"\n        ),\n    )\n    dumps_kwargs: dict = pydantic.Field(default_factory=dict)\n    loads_kwargs: dict = pydantic.Field(default_factory=dict)\n\n    @pydantic.validator(\"dumps_kwargs\")\n    def dumps_kwargs_cannot_contain_default(cls, value):\n        # `default` is set by `object_encoder`. A user provided callable would make this\n        # class unserializable anyway.\n        if \"default\" in value:\n            raise ValueError(\n                \"`default` cannot be provided. Use `object_encoder` instead.\"\n            )\n        return value\n\n    @pydantic.validator(\"loads_kwargs\")\n    def loads_kwargs_cannot_contain_object_hook(cls, value):\n        # `object_hook` is set by `object_decoder`. A user provided callable would make\n        # this class unserializable anyway.\n        if \"object_hook\" in value:\n            raise ValueError(\n                \"`object_hook` cannot be provided. Use `object_decoder` instead.\"\n            )\n        return value\n\n    def dumps(self, data: Any) -> bytes:\n        json = from_qualified_name(self.jsonlib)\n        kwargs = self.dumps_kwargs.copy()\n        if self.object_encoder:\n            kwargs[\"default\"] = from_qualified_name(self.object_encoder)\n        result = json.dumps(data, **kwargs)\n        if isinstance(result, str):\n            # The standard library returns str but others may return bytes directly\n            result = result.encode()\n        return result\n\n    def loads(self, blob: bytes) -> Any:\n        json = from_qualified_name(self.jsonlib)\n        kwargs = self.loads_kwargs.copy()\n        if self.object_decoder:\n            kwargs[\"object_hook\"] = from_qualified_name(self.object_decoder)\n        return json.loads(blob.decode(), **kwargs)\n
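
A minimal usage sketch:

from prefect.serializers import JSONSerializer\n\nserializer = JSONSerializer()\nblob = serializer.dumps({\"answer\": 42})  # UTF-8 encoded bytes, not str\nassert serializer.loads(blob) == {\"answer\": 42}\n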
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.PickleSerializer","title":"PickleSerializer","text":"

Bases: Serializer

Serializes objects using the pickle protocol.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
class PickleSerializer(Serializer):\n\"\"\"\n    Serializes objects using the pickle protocol.\n\n    - Uses `cloudpickle` by default. See `picklelib` for using alternative libraries.\n    - Stores the version of the pickle library to check for compatibility during\n        deserialization.\n    - Wraps pickles in base64 for safe transmission.\n    \"\"\"\n\n    type: Literal[\"pickle\"] = \"pickle\"\n\n    picklelib: str = \"cloudpickle\"\n    picklelib_version: str = None\n\n    @pydantic.validator(\"picklelib\")\n    def check_picklelib(cls, value):\n\"\"\"\n        Check that the given pickle library is importable and has dumps/loads methods.\n        \"\"\"\n        try:\n            pickler = from_qualified_name(value)\n        except (ImportError, AttributeError) as exc:\n            raise ValueError(\n                f\"Failed to import requested pickle library: {value!r}.\"\n            ) from exc\n\n        if not callable(getattr(pickler, \"dumps\", None)):\n            raise ValueError(\n                f\"Pickle library at {value!r} does not have a 'dumps' method.\"\n            )\n\n        if not callable(getattr(pickler, \"loads\", None)):\n            raise ValueError(\n                f\"Pickle library at {value!r} does not have a 'loads' method.\"\n            )\n\n        return value\n\n    @pydantic.root_validator\n    def check_picklelib_version(cls, values):\n\"\"\"\n        Infers a default value for `picklelib_version` if null or ensures it matches\n        the version retrieved from the `pickelib`.\n        \"\"\"\n        picklelib = values.get(\"picklelib\")\n        picklelib_version = values.get(\"picklelib_version\")\n\n        if not picklelib:\n            raise ValueError(\"Unable to check version of unrecognized picklelib module\")\n\n        pickler = from_qualified_name(picklelib)\n        pickler_version = getattr(pickler, \"__version__\", None)\n\n        if not picklelib_version:\n            values[\"picklelib_version\"] = pickler_version\n        elif picklelib_version != pickler_version:\n            warnings.warn(\n                (\n                    f\"Mismatched {picklelib!r} versions. Found {pickler_version} in the\"\n                    f\" environment but {picklelib_version} was requested. This may\"\n                    \" cause the serializer to fail.\"\n                ),\n                RuntimeWarning,\n                stacklevel=3,\n            )\n\n        return values\n\n    def dumps(self, obj: Any) -> bytes:\n        pickler = from_qualified_name(self.picklelib)\n        blob = pickler.dumps(obj)\n        return base64.encodebytes(blob)\n\n    def loads(self, blob: bytes) -> Any:\n        pickler = from_qualified_name(self.picklelib)\n        return pickler.loads(base64.decodebytes(blob))\n
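
For example (a minimal sketch), a round trip produces base64-wrapped pickle bytes:

from prefect.serializers import PickleSerializer\n\nserializer = PickleSerializer()  # uses cloudpickle unless picklelib is overridden\nblob = serializer.dumps({\"answer\": 42})  # base64-encoded pickle bytes\nassert serializer.loads(blob) == {\"answer\": 42}\n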
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.PickleSerializer.check_picklelib","title":"check_picklelib","text":"

Check that the given pickle library is importable and has dumps/loads methods.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
@pydantic.validator(\"picklelib\")\ndef check_picklelib(cls, value):\n\"\"\"\n    Check that the given pickle library is importable and has dumps/loads methods.\n    \"\"\"\n    try:\n        pickler = from_qualified_name(value)\n    except (ImportError, AttributeError) as exc:\n        raise ValueError(\n            f\"Failed to import requested pickle library: {value!r}.\"\n        ) from exc\n\n    if not callable(getattr(pickler, \"dumps\", None)):\n        raise ValueError(\n            f\"Pickle library at {value!r} does not have a 'dumps' method.\"\n        )\n\n    if not callable(getattr(pickler, \"loads\", None)):\n        raise ValueError(\n            f\"Pickle library at {value!r} does not have a 'loads' method.\"\n        )\n\n    return value\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.PickleSerializer.check_picklelib_version","title":"check_picklelib_version","text":"

Infers a default value for picklelib_version if null or ensures it matches the version retrieved from the picklelib.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
@pydantic.root_validator\ndef check_picklelib_version(cls, values):\n\"\"\"\n    Infers a default value for `picklelib_version` if null or ensures it matches\n    the version retrieved from the `pickelib`.\n    \"\"\"\n    picklelib = values.get(\"picklelib\")\n    picklelib_version = values.get(\"picklelib_version\")\n\n    if not picklelib:\n        raise ValueError(\"Unable to check version of unrecognized picklelib module\")\n\n    pickler = from_qualified_name(picklelib)\n    pickler_version = getattr(pickler, \"__version__\", None)\n\n    if not picklelib_version:\n        values[\"picklelib_version\"] = pickler_version\n    elif picklelib_version != pickler_version:\n        warnings.warn(\n            (\n                f\"Mismatched {picklelib!r} versions. Found {pickler_version} in the\"\n                f\" environment but {picklelib_version} was requested. This may\"\n                \" cause the serializer to fail.\"\n            ),\n            RuntimeWarning,\n            stacklevel=3,\n        )\n\n    return values\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.Serializer","title":"Serializer","text":"

Bases: BaseModel, Generic[D], abc.ABC

A serializer that can encode objects of type 'D' into bytes.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
@add_type_dispatch\nclass Serializer(BaseModel, Generic[D], abc.ABC):\n\"\"\"\n    A serializer that can encode objects of type 'D' into bytes.\n    \"\"\"\n\n    type: str\n\n    @abc.abstractmethod\n    def dumps(self, obj: D) -> bytes:\n\"\"\"Encode the object into a blob of bytes.\"\"\"\n\n    @abc.abstractmethod\n    def loads(self, blob: bytes) -> D:\n\"\"\"Decode the blob of bytes into an object.\"\"\"\n\n    class Config:\n        extra = \"forbid\"\n
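
A hypothetical subclass (illustrative only, not part of Prefect) needs a unique type string plus dumps and loads implementations:

from typing import Literal\n\nfrom prefect.serializers import Serializer\n\n\nclass ReprSerializer(Serializer):\n    \"\"\"Hypothetical serializer that stores the repr of an object.\"\"\"\n\n    type: Literal[\"repr\"] = \"repr\"\n\n    def dumps(self, obj) -> bytes:\n        return repr(obj).encode()\n\n    def loads(self, blob: bytes):\n        return blob.decode()\n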
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.Serializer.dumps","title":"dumps abstractmethod","text":"

Encode the object into a blob of bytes.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
@abc.abstractmethod\ndef dumps(self, obj: D) -> bytes:\n\"\"\"Encode the object into a blob of bytes.\"\"\"\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.Serializer.loads","title":"loads abstractmethod","text":"

Decode the blob of bytes into an object.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
@abc.abstractmethod\ndef loads(self, blob: bytes) -> D:\n\"\"\"Decode the blob of bytes into an object.\"\"\"\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.prefect_json_object_decoder","title":"prefect_json_object_decoder","text":"

JSONDecoder.object_hook for decoding objects from JSON when previously encoded with prefect_json_object_encoder

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
def prefect_json_object_decoder(result: dict):\n\"\"\"\n    `JSONDecoder.object_hook` for decoding objects from JSON when previously encoded\n    with `prefect_json_object_encoder`\n    \"\"\"\n    if \"__class__\" in result:\n        return pydantic.parse_obj_as(\n            from_qualified_name(result[\"__class__\"]), result[\"data\"]\n        )\n    elif \"__exc_type__\" in result:\n        return from_qualified_name(result[\"__exc_type__\"])(result[\"message\"])\n    else:\n        return result\n
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/serializers/#prefect.serializers.prefect_json_object_encoder","title":"prefect_json_object_encoder","text":"

JSONEncoder.default for encoding objects into JSON with extended type support.

Raises a TypeError to fallback on other encoders on failure.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/serializers.py
def prefect_json_object_encoder(obj: Any) -> Any:\n\"\"\"\n    `JSONEncoder.default` for encoding objects into JSON with extended type support.\n\n    Raises a `TypeError` to fallback on other encoders on failure.\n    \"\"\"\n    if isinstance(obj, BaseException):\n        return {\"__exc_type__\": to_qualified_name(obj.__class__), \"message\": str(obj)}\n    else:\n        return {\n            \"__class__\": to_qualified_name(obj.__class__),\n            \"data\": pydantic_encoder(obj),\n        }\n
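
For example (a minimal sketch), the encoder and decoder can be passed straight to the stdlib json module to round-trip an exception:

import json\n\nfrom prefect.serializers import (\n    prefect_json_object_decoder,\n    prefect_json_object_encoder,\n)\n\nblob = json.dumps(ValueError(\"boom\"), default=prefect_json_object_encoder)\nrestored = json.loads(blob, object_hook=prefect_json_object_decoder)\nassert isinstance(restored, ValueError) and str(restored) == \"boom\"\n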
","tags":["Python API","serializers","JSON","pickle"]},{"location":"api-ref/prefect/settings/","title":"prefect.settings","text":"","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings","title":"prefect.settings","text":"

Prefect settings management.

Each setting is defined as a Setting type. The name of each setting is stylized in all caps, matching the environment variable that can be used to change the setting.

All settings defined in this file are used to generate a dynamic Pydantic settings class called Settings. When instantiated, this class will load settings from environment variables and pull default values from the setting definitions.

The current instance of Settings being used by the application is stored in a SettingsContext model which allows each instance of the Settings class to be accessed in an async-safe manner.

Aside from environment variables, we allow settings to be changed during the runtime of the process using profiles. Profiles contain setting overrides that the user may persist without setting environment variables. Profiles are also used internally for managing settings during task run execution, where differing settings may be used concurrently in the same process, and during testing, where we need to override settings to ensure their value is respected as intended.

The SettingsContext is set when the prefect module is imported. This context is referred to as the \"root\" settings context for clarity. Generally, this is the only settings context that will be used. When this context is entered, we will instantiate a Settings object, loading settings from environment variables and defaults, then we will load the active profile and use it to override settings. See enter_root_settings_context for details on determining the active profile.

Another SettingsContext may be entered at any time to change the settings being used by the code within the context. Generally, users should not use this. Settings management should be left to Prefect application internals.

Generally, settings should be accessed with SETTING_VARIABLE.value() which will pull the current Settings instance from the current SettingsContext and retrieve the value of the relevant setting.

Accessing a setting's value will also call the Setting.value_callback which allows settings to be dynamically modified on retrieval. This allows us to make settings dependent on the value of other settings or perform other dynamic effects.
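
For example (a minimal sketch), a setting's current value is read from the active settings context rather than directly from os.environ:

from prefect.settings import PREFECT_API_URL, PREFECT_LOGGING_LEVEL\n\n# Values reflect environment variables, the active profile, and any overrides\napi_url = PREFECT_API_URL.value()\nlog_level = PREFECT_LOGGING_LEVEL.value()\nprint(api_url, log_level)\n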

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_HOME","title":"PREFECT_HOME = Setting(Path, default=Path('~') / '.prefect', value_callback=expanduser_in_path) module-attribute","text":"

Prefect's home directory. Defaults to ~/.prefect. This directory may be created automatically when required.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXTRA_ENTRYPOINTS","title":"PREFECT_EXTRA_ENTRYPOINTS = Setting(str, default='') module-attribute","text":"

Modules for Prefect to import when Prefect is imported.

Values should be separated by commas, e.g. my_module,my_other_module. Objects within modules may be specified by a ':' partition, e.g. my_module:my_object. If a callable object is provided, it will be called with no arguments on import.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_DEBUG_MODE","title":"PREFECT_DEBUG_MODE = Setting(bool, default=False) module-attribute","text":"

If True, places the API in debug mode. This may modify behavior to facilitate debugging, including extra logs and other verbose assistance. Defaults to False.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLI_COLORS","title":"PREFECT_CLI_COLORS = Setting(bool, default=True) module-attribute","text":"

If True, use colors in CLI output. If False, output will not include color codes. Defaults to True.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLI_PROMPT","title":"PREFECT_CLI_PROMPT = Setting(Optional[bool], default=None) module-attribute","text":"

If True, use interactive prompts in CLI commands. If False, no interactive prompts will be used. If None, the value will be dynamically determined based on the presence of an interactive-enabled terminal.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLI_WRAP_LINES","title":"PREFECT_CLI_WRAP_LINES = Setting(bool, default=True) module-attribute","text":"

If True, wrap text by inserting new lines in long lines in CLI output. If False, output will not be wrapped. Defaults to True.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TEST_MODE","title":"PREFECT_TEST_MODE = Setting(bool, default=False) module-attribute","text":"

If True, places the API in test mode. This may modify behavior to facilitate testing. Defaults to False.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UNIT_TEST_MODE","title":"PREFECT_UNIT_TEST_MODE = Setting(bool, default=False) module-attribute","text":"

This variable only exists to facilitate unit testing. If True, code is executing in a unit test context. Defaults to False.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TEST_SETTING","title":"PREFECT_TEST_SETTING = Setting(Any, default=None, value_callback=only_return_value_in_test_mode) module-attribute","text":"

This variable only exists to facilitate testing of settings. If accessed when PREFECT_TEST_MODE is not set, None is returned.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_TLS_INSECURE_SKIP_VERIFY","title":"PREFECT_API_TLS_INSECURE_SKIP_VERIFY = Setting(bool, default=False) module-attribute","text":"

If True, disables SSL checking to allow insecure requests. This is recommended only during development, e.g. when using self-signed certificates.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_URL","title":"PREFECT_API_URL = Setting(str, default=None) module-attribute","text":"

If provided, the URL of a hosted Prefect API. Defaults to None.

When using Prefect Cloud, this will include an account and workspace.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_KEY","title":"PREFECT_API_KEY = Setting(str, default=None, is_secret=True) module-attribute","text":"

API key used to authenticate with the Prefect API. Defaults to None.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_ENABLE_HTTP2","title":"PREFECT_API_ENABLE_HTTP2 = Setting(bool, default=True) module-attribute","text":"

If true, enable support for HTTP/2 for communicating with an API.

If the API does not support HTTP/2, this will have no effect and connections will be made via HTTP/1.1.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLIENT_MAX_RETRIES","title":"PREFECT_CLIENT_MAX_RETRIES = Setting(int, default=5) module-attribute","text":"

The maximum number of retries to perform on failed HTTP requests.

Defaults to 5. Set to 0 to disable retries.

See PREFECT_CLIENT_RETRY_EXTRA_CODES for details on which HTTP status codes are retried.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLIENT_RETRY_JITTER_FACTOR","title":"PREFECT_CLIENT_RETRY_JITTER_FACTOR = Setting(float, default=0.2) module-attribute","text":"

A value greater than or equal to zero to control the amount of jitter added to retried client requests. Higher values introduce larger amounts of jitter.

Set to 0 to disable jitter. See clamped_poisson_interval for details on how jitter can affect retry lengths.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLIENT_RETRY_EXTRA_CODES","title":"PREFECT_CLIENT_RETRY_EXTRA_CODES = Setting(str, default='', value_callback=status_codes_as_integers_in_range) module-attribute","text":"

A comma-separated list of extra HTTP status codes to retry on. Defaults to an empty string. 429 and 503 are always retried. Please note that not all routes are idempotent and retrying may result in unexpected behavior.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLOUD_API_URL","title":"PREFECT_CLOUD_API_URL = Setting(str, default='https://api.prefect.cloud/api', value_callback=check_for_deprecated_cloud_url) module-attribute","text":"

API URL for Prefect Cloud. Used for authentication.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLOUD_URL","title":"PREFECT_CLOUD_URL = Setting(str, default=None, deprecated=True, deprecated_start_date='Dec 2022', deprecated_help='Use `PREFECT_CLOUD_API_URL` instead.') module-attribute","text":"","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_URL","title":"PREFECT_UI_URL = Setting(Optional[str], default=None, value_callback=default_ui_url) module-attribute","text":"

The URL for the UI. By default, this is inferred from the PREFECT_API_URL.

When using Prefect Cloud, this will include the account and workspace. When using an ephemeral server, this will be None.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_CLOUD_UI_URL","title":"PREFECT_CLOUD_UI_URL = Setting(str, default=None, value_callback=default_cloud_ui_url) module-attribute","text":"

The URL for the Cloud UI. By default, this is inferred from the PREFECT_CLOUD_API_URL.

PREFECT_UI_URL will be workspace specific and is usable with the open source server too.

In contrast, this value is only valid for Cloud and will not include the workspace.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_REQUEST_TIMEOUT","title":"PREFECT_API_REQUEST_TIMEOUT = Setting(float, default=30.0) module-attribute","text":"

The default timeout for requests to the API

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN","title":"PREFECT_EXPERIMENTAL_WARN = Setting(bool, default=True) module-attribute","text":"

If enabled, warn on usage of experimental features.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_PROFILES_PATH","title":"PREFECT_PROFILES_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'profiles.toml', value_callback=template_with_settings(PREFECT_HOME)) module-attribute","text":"

The path to a profiles configuration file.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RESULTS_DEFAULT_SERIALIZER","title":"PREFECT_RESULTS_DEFAULT_SERIALIZER = Setting(str, default='pickle') module-attribute","text":"

The default serializer to use when not otherwise specified.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_RESULTS_PERSIST_BY_DEFAULT","title":"PREFECT_RESULTS_PERSIST_BY_DEFAULT = Setting(bool, default=False) module-attribute","text":"

The default setting for persisting results when not otherwise specified. If enabled, flow and task results will be persisted unless they opt out.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASKS_REFRESH_CACHE","title":"PREFECT_TASKS_REFRESH_CACHE = Setting(bool, default=False) module-attribute","text":"

If True, enables a refresh of cached results: re-executing the task will refresh the cached results. Defaults to False.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_DEFAULT_RETRIES","title":"PREFECT_TASK_DEFAULT_RETRIES = Setting(int, default=0) module-attribute","text":"

This value sets the default number of retries for all tasks. This value does not overwrite individually set retry values on tasks.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_FLOW_DEFAULT_RETRIES","title":"PREFECT_FLOW_DEFAULT_RETRIES = Setting(int, default=0) module-attribute","text":"

This value sets the default number of retries for all flows. This value does not overwrite individually set retry values on a flow.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS","title":"PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS = Setting(Union[int, float], default=0) module-attribute","text":"

This value sets the default retry delay seconds for all flows. This value does not overwrite individually set retry delay seconds.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS","title":"PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS = Setting(Union[float, int, List[float]], default=0) module-attribute","text":"

This value sets the default retry delay seconds for all tasks. This value does not overwrite individually set retry delay seconds.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOCAL_STORAGE_PATH","title":"PREFECT_LOCAL_STORAGE_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'storage', value_callback=template_with_settings(PREFECT_HOME)) module-attribute","text":"

The path to a directory to store things in.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_MEMO_STORE_PATH","title":"PREFECT_MEMO_STORE_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'memo_store.toml', value_callback=template_with_settings(PREFECT_HOME)) module-attribute","text":"

The path to the memo store file.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_MEMOIZE_BLOCK_AUTO_REGISTRATION","title":"PREFECT_MEMOIZE_BLOCK_AUTO_REGISTRATION = Setting(bool, default=True) module-attribute","text":"

Controls whether or not block auto-registration on start up should be memoized. Setting to False may result in slower server start up times.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_LEVEL","title":"PREFECT_LOGGING_LEVEL = Setting(str, default='INFO', value_callback=debug_mode_log_level) module-attribute","text":"

The default logging level for Prefect loggers. Defaults to \"INFO\" during normal operation. Is forced to \"DEBUG\" during debug mode.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_INTERNAL_LEVEL","title":"PREFECT_LOGGING_INTERNAL_LEVEL = Setting(str, default='ERROR', value_callback=debug_mode_log_level) module-attribute","text":"

The default logging level for Prefect's internal machinery loggers. Defaults to \"ERROR\" during normal operation. Is forced to \"DEBUG\" during debug mode.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_SERVER_LEVEL","title":"PREFECT_LOGGING_SERVER_LEVEL = Setting(str, default='WARNING') module-attribute","text":"

The default logging level for the Prefect API server.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_SETTINGS_PATH","title":"PREFECT_LOGGING_SETTINGS_PATH = Setting(Path, default=Path('${PREFECT_HOME}') / 'logging.yml', value_callback=template_with_settings(PREFECT_HOME)) module-attribute","text":"

The path to a custom YAML logging configuration file. If no file is found, the default logging.yml is used. Defaults to a logging.yml in the Prefect home directory.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_EXTRA_LOGGERS","title":"PREFECT_LOGGING_EXTRA_LOGGERS = Setting(str, default='', value_callback=get_extra_loggers) module-attribute","text":"

Additional loggers to attach to Prefect logging at runtime. Values should be comma separated. The handlers attached to the 'prefect' logger will be added to these loggers. Additionally, if the level is not set, it will be set to the same level as the 'prefect' logger.
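
For example, Prefect's handlers could be attached to third-party loggers by name (hypothetical logger names shown):

PREFECT_LOGGING_EXTRA_LOGGERS='dask,scipy'\n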

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_LOG_PRINTS","title":"PREFECT_LOGGING_LOG_PRINTS = Setting(bool, default=False) module-attribute","text":"

If set, print statements in flows and tasks will be redirected to the Prefect logger for the given run. This setting can be overridden by individual tasks and flows.
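
For example, an individual flow could opt in regardless of this global default (a minimal sketch):

from prefect import flow\n\n@flow(log_prints=True)\ndef my_flow():\n    print('routed to the Prefect logger for this run')\n\nif __name__ == '__main__':\n    my_flow()\n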

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_ENABLED","title":"PREFECT_LOGGING_TO_API_ENABLED = Setting(bool, default=True) module-attribute","text":"

Toggles sending logs to the API. If False, logs sent to the API log handler will not be sent to the API.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_BATCH_INTERVAL","title":"PREFECT_LOGGING_TO_API_BATCH_INTERVAL = Setting(float, default=2.0) module-attribute","text":"

The number of seconds between batched writes of logs to the API.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_BATCH_SIZE","title":"PREFECT_LOGGING_TO_API_BATCH_SIZE = Setting(int, default=4000000) module-attribute","text":"

The maximum size in bytes for a batch of logs.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_MAX_LOG_SIZE","title":"PREFECT_LOGGING_TO_API_MAX_LOG_SIZE = Setting(int, default=1000000) module-attribute","text":"

The maximum size in bytes for a single log.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW","title":"PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW = Setting(Literal['warn', 'error', 'ignore'], default='warn') module-attribute","text":"

Controls the behavior when loggers attempt to send logs to the API handler from outside of a flow.

All logs sent to the API must be associated with a flow run. The API log handler can only be used outside of a flow by manually providing a flow run identifier. Logs that are not associated with a flow run will not be sent to the API. This setting can be used to determine if a warning or error is displayed when the identifier is missing.

The following options are available: warn (log a warning message), error (raise an error), and ignore (do not log or raise).

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_COLORS","title":"PREFECT_LOGGING_COLORS = Setting(bool, default=True) module-attribute","text":"

Whether to style console logs with color.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_MARKUP","title":"PREFECT_LOGGING_MARKUP = Setting(bool, default=False) module-attribute","text":"

Whether to interpret strings wrapped in square brackets as a style. This allows styles to be conveniently added to log messages, e.g. [red]This is a red message.[/red]. However, if enabled, strings that contain square brackets may be inaccurately interpreted and lead to incomplete output, e.g. DROP TABLE [dbo].[SomeTable]; outputs DROP TABLE .[SomeTable];.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_AGENT_QUERY_INTERVAL","title":"PREFECT_AGENT_QUERY_INTERVAL = Setting(float, default=15) module-attribute","text":"

The agent loop interval, in seconds. Agents will check for new runs this often. Defaults to 15.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_AGENT_PREFETCH_SECONDS","title":"PREFECT_AGENT_PREFETCH_SECONDS = Setting(int, default=15) module-attribute","text":"

Agents will look for scheduled runs this many seconds in the future and attempt to run them. This accounts for any additional infrastructure spin-up time or latency in preparing a flow run. Note flow runs will not start before their scheduled time, even if they are prefetched. Defaults to 15.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ASYNC_FETCH_STATE_RESULT","title":"PREFECT_ASYNC_FETCH_STATE_RESULT = Setting(bool, default=False) module-attribute","text":"

Determines whether State.result() fetches results automatically or not. In Prefect 2.6.0, the State.result() method was updated to be async to facilitate automatic retrieval of results from storage which means when writing async code you must await the call. For backwards compatibility, the result is not retrieved by default for async users. You may opt into this per call by passing fetch=True or toggle this setting to change the behavior globally. This setting does not affect users writing synchronous tasks and flows. This setting does not affect retrieval of results when using Future.result().
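
For example, an async caller could opt in per call instead of enabling this setting globally (a minimal sketch):

import asyncio\nfrom prefect import flow\n\n@flow\nasync def my_flow():\n    return 1\n\nasync def main():\n    state = await my_flow(return_state=True)\n    value = await state.result(fetch=True)  # opt in for this call\n    print(value)\n\nif __name__ == '__main__':\n    asyncio.run(main())\n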

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_BLOCKS_REGISTER_ON_START","title":"PREFECT_API_BLOCKS_REGISTER_ON_START = Setting(bool, default=True) module-attribute","text":"

If set, any block types that have been imported will be registered with the backend on application startup. If not set, block types must be manually registered.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_PASSWORD","title":"PREFECT_API_DATABASE_PASSWORD = Setting(str, default=None, is_secret=True) module-attribute","text":"

Password to template into the PREFECT_API_DATABASE_CONNECTION_URL. This is useful if the password must be provided separately from the connection URL. To use this setting, you must include it in your connection URL.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_CONNECTION_URL","title":"PREFECT_API_DATABASE_CONNECTION_URL = Setting(str, default=None, value_callback=default_database_connection_url, is_secret=True) module-attribute","text":"

A database connection URL in a SQLAlchemy-compatible format. Prefect currently supports SQLite and Postgres. Note that all Prefect database engines must use an async driver - for SQLite, use sqlite+aiosqlite and for Postgres use postgresql+asyncpg.

SQLite in-memory databases can be used by providing the url sqlite+aiosqlite:///file::memory:?cache=shared&uri=true&check_same_thread=false, which will allow the database to be accessed by multiple threads. Note that in-memory databases can not be accessed from multiple processes and should only be used for simple tests.
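
For example, the in-memory URL quoted above could be supplied directly (for simple tests only):

PREFECT_API_DATABASE_CONNECTION_URL='sqlite+aiosqlite:///file::memory:?cache=shared&uri=true&check_same_thread=false'\n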

Defaults to a sqlite database stored in the Prefect home directory.

If you need to provide the password via a different environment variable, use the PREFECT_API_DATABASE_PASSWORD setting. For example:

PREFECT_API_DATABASE_PASSWORD='mypassword'\nPREFECT_API_DATABASE_CONNECTION_URL='postgresql+asyncpg://postgres:${PREFECT_API_DATABASE_PASSWORD}@localhost/prefect'\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_ECHO","title":"PREFECT_API_DATABASE_ECHO = Setting(bool, default=False) module-attribute","text":"

If True, SQLAlchemy will log all SQL issued to the database. Defaults to False.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_MIGRATE_ON_START","title":"PREFECT_API_DATABASE_MIGRATE_ON_START = Setting(bool, default=True) module-attribute","text":"

If True, the database will be upgraded on application creation. If False, the database will need to be upgraded manually.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_TIMEOUT","title":"PREFECT_API_DATABASE_TIMEOUT = Setting(Optional[float], default=10.0) module-attribute","text":"

A statement timeout, in seconds, applied to all database interactions made by the API. Defaults to 10 seconds.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DATABASE_CONNECTION_TIMEOUT","title":"PREFECT_API_DATABASE_CONNECTION_TIMEOUT = Setting(Optional[float], default=5) module-attribute","text":"

A connection timeout, in seconds, applied to database connections. Defaults to 5.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS","title":"PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS = Setting(float, default=60) module-attribute","text":"

The scheduler loop interval, in seconds. This determines how often the scheduler will attempt to schedule new flow runs, but has no impact on how quickly either flow runs or task runs are actually executed. Defaults to 60.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE","title":"PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE = Setting(int, default=100) module-attribute","text":"

The number of deployments the scheduler will attempt to schedule in a single batch. If there are more deployments than the batch size, the scheduler immediately attempts to schedule the next batch; it does not sleep for scheduler_loop_seconds until it has visited every deployment once. Defaults to 100.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS","title":"PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS = Setting(int, default=100) module-attribute","text":"

The scheduler will attempt to schedule up to this many auto-scheduled runs in the future. Note that deployments may have fewer than this many scheduled runs, depending on the value of scheduler_max_scheduled_time. Defaults to 100.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS","title":"PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS = Setting(int, default=3) module-attribute","text":"

The scheduler will attempt to schedule at least this many auto-scheduled runs in the future. Note that deployments may have more than this many scheduled runs, depending on the value of scheduler_min_scheduled_time. Defaults to 3.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME","title":"PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME = Setting(timedelta, default=timedelta(days=100)) module-attribute","text":"

The scheduler will create new runs up to this far in the future. Note that this setting will take precedence over scheduler_max_runs: if a flow runs once a month and scheduler_max_scheduled_time is three months, then only three runs will be scheduled. Defaults to 100 days (8640000 seconds).

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME","title":"PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME = Setting(timedelta, default=timedelta(hours=1)) module-attribute","text":"

The scheduler will create new runs at least this far in the future. Note that this setting will take precedence over scheduler_min_runs: if a flow runs every hour and scheduler_min_scheduled_time is three hours, then three runs will be scheduled even if scheduler_min_runs is 1. Defaults to 1 hour (3600 seconds).

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE","title":"PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE = Setting(int, default=500) module-attribute","text":"

The number of flow runs the scheduler will attempt to insert in one batch across all deployments. If the number of flow runs to schedule exceeds this amount, the runs will be inserted in batches of this size. Defaults to 500.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS","title":"PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS = Setting(float, default=5) module-attribute","text":"

The late runs service will look for runs to mark as late this often. Defaults to 5.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS","title":"PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS = Setting(timedelta, default=timedelta(seconds=5)) module-attribute","text":"

The late runs service will mark runs as late after they have exceeded their scheduled start time by this many seconds. Defaults to 5 seconds.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS","title":"PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS = Setting(float, default=5) module-attribute","text":"

The pause expiration service will look for runs to mark as failed this often. Defaults to 5.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS","title":"PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS = Setting(float, default=20) module-attribute","text":"

The cancellation cleanup service will look for non-terminal tasks and subflows this often. Defaults to 20.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_DEFAULT_LIMIT","title":"PREFECT_API_DEFAULT_LIMIT = Setting(int, default=200) module-attribute","text":"

The default limit applied to queries that can return multiple objects, such as POST /flow_runs/filter.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_API_HOST","title":"PREFECT_SERVER_API_HOST = Setting(str, default='127.0.0.1') module-attribute","text":"

The API's host address (defaults to 127.0.0.1).

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_API_PORT","title":"PREFECT_SERVER_API_PORT = Setting(int, default=4200) module-attribute","text":"

The API's port (defaults to 4200).

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_API_KEEPALIVE_TIMEOUT","title":"PREFECT_SERVER_API_KEEPALIVE_TIMEOUT = Setting(int, default=5) module-attribute","text":"

The API's keep alive timeout (defaults to 5). Refer to https://www.uvicorn.org/settings/#timeouts for details.

When the API is hosted behind a load balancer, you may want to set this to a value greater than the load balancer's idle timeout.

Note this setting only applies when calling prefect server start; if hosting the API with another tool you will need to configure this there instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_ENABLED","title":"PREFECT_UI_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to serve the Prefect UI.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_UI_API_URL","title":"PREFECT_UI_API_URL = Setting(str, default=None, value_callback=default_ui_api_url) module-attribute","text":"

The connection url for communication from the UI to the API. Defaults to PREFECT_API_URL if set. Otherwise, the default URL is generated from PREFECT_SERVER_API_HOST and PREFECT_SERVER_API_PORT. If providing a custom value, the aforementioned settings may be templated into the given string.
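
For example, using the same ${SETTING} templating style shown for the database password above, and assuming the default /api path:

PREFECT_UI_API_URL='http://${PREFECT_SERVER_API_HOST}:${PREFECT_SERVER_API_PORT}/api'\n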

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_SERVER_ANALYTICS_ENABLED","title":"PREFECT_SERVER_ANALYTICS_ENABLED = Setting(bool, default=True) module-attribute","text":"

When enabled, Prefect sends anonymous data (e.g. count of flow runs, package version) on server startup to help us improve our product.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_SCHEDULER_ENABLED","title":"PREFECT_API_SERVICES_SCHEDULER_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the scheduling service in the server application. If disabled, you will need to run this service separately to schedule runs for deployments.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_ENABLED","title":"PREFECT_API_SERVICES_LATE_RUNS_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the late runs service in the server application. If disabled, you will need to run this service separately to have runs past their scheduled start time marked as late.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED","title":"PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the flow run notifications service in the server application. If disabled, you will need to run this service separately to send flow run notifications.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED","title":"PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the paused flow run expiration service in the server application. If disabled, paused flows that have timed out will remain in a Paused state until a resume attempt.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH","title":"PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH = Setting(int, default=2000) module-attribute","text":"

The maximum number of characters allowed for a task run cache key. This setting cannot be changed client-side; it must be set on the server.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED","title":"PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED = Setting(bool, default=True) module-attribute","text":"

Whether or not to start the cancellation cleanup service in the server application. If disabled, task runs and subflow runs belonging to cancelled flows may remain in non-terminal states.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_EVENTS_CLIENT","title":"PREFECT_EXPERIMENTAL_ENABLE_EVENTS_CLIENT = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable the experimental events client.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_EVENTS_CLIENT","title":"PREFECT_EXPERIMENTAL_WARN_EVENTS_CLIENT = Setting(bool, default=False) module-attribute","text":"

Whether or not to warn when the experimental events client is used.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORK_POOLS","title":"PREFECT_EXPERIMENTAL_ENABLE_WORK_POOLS = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable experimental Prefect work pools.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_WORK_POOLS","title":"PREFECT_EXPERIMENTAL_WARN_WORK_POOLS = Setting(bool, default=False) module-attribute","text":"

Whether or not to warn when experimental Prefect work pools are used.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORKERS","title":"PREFECT_EXPERIMENTAL_ENABLE_WORKERS = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable experimental Prefect workers.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_WORKERS","title":"PREFECT_EXPERIMENTAL_WARN_WORKERS = Setting(bool, default=False) module-attribute","text":"

Whether or not to warn when experimental Prefect workers are used.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_HEARTBEAT_SECONDS","title":"PREFECT_WORKER_HEARTBEAT_SECONDS = Setting(float, default=30) module-attribute","text":"

Number of seconds a worker should wait between sending a heartbeat.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_QUERY_SECONDS","title":"PREFECT_WORKER_QUERY_SECONDS = Setting(float, default=10) module-attribute","text":"

Number of seconds a worker should wait between queries for scheduled flow runs.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_WORKER_PREFETCH_SECONDS","title":"PREFECT_WORKER_PREFETCH_SECONDS = Setting(float, default=10) module-attribute","text":"

The number of seconds into the future a worker should query for scheduled flow runs. Can be used to compensate for infrastructure start up time for a worker.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS","title":"PREFECT_EXPERIMENTAL_ENABLE_ARTIFACTS = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable experimental Prefect artifacts.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_ARTIFACTS","title":"PREFECT_EXPERIMENTAL_WARN_ARTIFACTS = Setting(bool, default=False) module-attribute","text":"

Whether or not to warn when experimental Prefect artifacts are used.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_ENABLE_WORKSPACE_DASHBOARD","title":"PREFECT_EXPERIMENTAL_ENABLE_WORKSPACE_DASHBOARD = Setting(bool, default=True) module-attribute","text":"

Whether or not to enable the experimental workspace dashboard.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_EXPERIMENTAL_WARN_WORKSPACE_DASHBOARD","title":"PREFECT_EXPERIMENTAL_WARN_WORKSPACE_DASHBOARD = Setting(bool, default=False) module-attribute","text":"

Whether or not to warn when the experimental workspace dashboard is enabled.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_ORION_ENABLED","title":"PREFECT_LOGGING_ORION_ENABLED = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_LOGGING_TO_API_ENABLED` instead.', deprecated_renamed_to=PREFECT_LOGGING_TO_API_ENABLED) module-attribute","text":"

Deprecated. Use PREFECT_LOGGING_TO_API_ENABLED instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_ORION_BATCH_INTERVAL","title":"PREFECT_LOGGING_ORION_BATCH_INTERVAL = Setting(Optional[float], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_LOGGING_TO_API_BATCH_INTERVAL` instead.', deprecated_renamed_to=PREFECT_LOGGING_TO_API_BATCH_INTERVAL) module-attribute","text":"

Deprecated. Use PREFECT_LOGGING_TO_API_BATCH_INTERVAL instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_ORION_BATCH_SIZE","title":"PREFECT_LOGGING_ORION_BATCH_SIZE = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_LOGGING_TO_API_BATCH_SIZE` instead.', deprecated_renamed_to=PREFECT_LOGGING_TO_API_BATCH_SIZE) module-attribute","text":"

Deprecated. Use PREFECT_LOGGING_TO_API_BATCH_SIZE instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_ORION_MAX_LOG_SIZE","title":"PREFECT_LOGGING_ORION_MAX_LOG_SIZE = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_LOGGING_TO_API_MAX_LOG_SIZE` instead.', deprecated_renamed_to=PREFECT_LOGGING_TO_API_MAX_LOG_SIZE) module-attribute","text":"

Deprecated. Use PREFECT_LOGGING_TO_API_MAX_LOG_SIZE instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_LOGGING_ORION_WHEN_MISSING_FLOW","title":"PREFECT_LOGGING_ORION_WHEN_MISSING_FLOW = Setting(Optional[Literal['warn', 'error', 'ignore']], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW` instead.', deprecated_renamed_to=PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW) module-attribute","text":"

Deprecated. Use PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_BLOCKS_REGISTER_ON_START","title":"PREFECT_ORION_BLOCKS_REGISTER_ON_START = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_BLOCKS_REGISTER_ON_START` instead.', deprecated_renamed_to=PREFECT_API_BLOCKS_REGISTER_ON_START) module-attribute","text":"

Deprecated. Use PREFECT_API_BLOCKS_REGISTER_ON_START instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_DATABASE_PASSWORD","title":"PREFECT_ORION_DATABASE_PASSWORD = Setting(Optional[str], default=None, is_secret=True, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_DATABASE_PASSWORD` instead.', deprecated_renamed_to=PREFECT_API_DATABASE_PASSWORD) module-attribute","text":"

Deprecated. Use PREFECT_API_DATABASE_PASSWORD instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_DATABASE_CONNECTION_URL","title":"PREFECT_ORION_DATABASE_CONNECTION_URL = Setting(Optional[str], default=None, value_callback=template_with_settings(PREFECT_HOME, PREFECT_API_DATABASE_PASSWORD), is_secret=True, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_DATABASE_CONNECTION_URL` instead.', deprecated_renamed_to=PREFECT_API_DATABASE_CONNECTION_URL) module-attribute","text":"

Deprecated. Use PREFECT_API_DATABASE_CONNECTION_URL instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_DATABASE_ECHO","title":"PREFECT_ORION_DATABASE_ECHO = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_DATABASE_ECHO` instead.', deprecated_renamed_to=PREFECT_API_DATABASE_ECHO) module-attribute","text":"

Deprecated. Use PREFECT_API_DATABASE_ECHO instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_DATABASE_MIGRATE_ON_START","title":"PREFECT_ORION_DATABASE_MIGRATE_ON_START = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_DATABASE_MIGRATE_ON_START` instead.', deprecated_renamed_to=PREFECT_API_DATABASE_MIGRATE_ON_START) module-attribute","text":"

Deprecated. Use PREFECT_API_DATABASE_MIGRATE_ON_START instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_DATABASE_TIMEOUT","title":"PREFECT_ORION_DATABASE_TIMEOUT = Setting(Optional[float], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_DATABASE_TIMEOUT` instead.', deprecated_renamed_to=PREFECT_API_DATABASE_TIMEOUT) module-attribute","text":"

Deprecated. Use PREFECT_API_DATABASE_TIMEOUT instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_DATABASE_CONNECTION_TIMEOUT","title":"PREFECT_ORION_DATABASE_CONNECTION_TIMEOUT = Setting(Optional[float], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_DATABASE_CONNECTION_TIMEOUT` instead.', deprecated_renamed_to=PREFECT_API_DATABASE_CONNECTION_TIMEOUT) module-attribute","text":"

Deprecated. Use PREFECT_API_DATABASE_CONNECTION_TIMEOUT instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_SCHEDULER_LOOP_SECONDS","title":"PREFECT_ORION_SERVICES_SCHEDULER_LOOP_SECONDS = Setting(Optional[float], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE","title":"PREFECT_ORION_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_SCHEDULER_MAX_RUNS","title":"PREFECT_ORION_SERVICES_SCHEDULER_MAX_RUNS = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_SCHEDULER_MIN_RUNS","title":"PREFECT_ORION_SERVICES_SCHEDULER_MIN_RUNS = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME","title":"PREFECT_ORION_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME = Setting(Optional[timedelta], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME","title":"PREFECT_ORION_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME = Setting(Optional[timedelta], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_SCHEDULER_INSERT_BATCH_SIZE","title":"PREFECT_ORION_SERVICES_SCHEDULER_INSERT_BATCH_SIZE = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_LATE_RUNS_LOOP_SECONDS","title":"PREFECT_ORION_SERVICES_LATE_RUNS_LOOP_SECONDS = Setting(Optional[float], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_LATE_RUNS_AFTER_SECONDS","title":"PREFECT_ORION_SERVICES_LATE_RUNS_AFTER_SECONDS = Setting(Optional[timedelta], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS","title":"PREFECT_ORION_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS = Setting(Optional[float], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS","title":"PREFECT_ORION_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS = Setting(Optional[float], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_API_DEFAULT_LIMIT","title":"PREFECT_ORION_API_DEFAULT_LIMIT = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_DEFAULT_LIMIT` instead.', deprecated_renamed_to=PREFECT_API_DEFAULT_LIMIT) module-attribute","text":"

Deprecated. Use PREFECT_API_DEFAULT_LIMIT instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_API_HOST","title":"PREFECT_ORION_API_HOST = Setting(Optional[str], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_SERVER_API_HOST` instead.', deprecated_renamed_to=PREFECT_SERVER_API_HOST) module-attribute","text":"

Deprecated. Use PREFECT_SERVER_API_HOST instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_API_PORT","title":"PREFECT_ORION_API_PORT = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_SERVER_API_PORT` instead.', deprecated_renamed_to=PREFECT_SERVER_API_PORT) module-attribute","text":"

Deprecated. Use PREFECT_SERVER_API_PORT instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_API_KEEPALIVE_TIMEOUT","title":"PREFECT_ORION_API_KEEPALIVE_TIMEOUT = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_SERVER_API_KEEPALIVE_TIMEOUT` instead.', deprecated_renamed_to=PREFECT_SERVER_API_KEEPALIVE_TIMEOUT) module-attribute","text":"

Deprecated. Use PREFECT_SERVER_API_KEEPALIVE_TIMEOUT instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_UI_ENABLED","title":"PREFECT_ORION_UI_ENABLED = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_UI_ENABLED` instead.', deprecated_renamed_to=PREFECT_UI_ENABLED) module-attribute","text":"

Deprecated. Use PREFECT_UI_ENABLED instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_UI_API_URL","title":"PREFECT_ORION_UI_API_URL = Setting(Optional[str], default=None, value_callback=default_ui_api_url, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_UI_API_URL` instead.', deprecated_renamed_to=PREFECT_UI_API_URL) module-attribute","text":"

Deprecated. Use PREFECT_UI_API_URL instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_ANALYTICS_ENABLED","title":"PREFECT_ORION_ANALYTICS_ENABLED = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_SERVER_ANALYTICS_ENABLED` instead.', deprecated_renamed_to=PREFECT_SERVER_ANALYTICS_ENABLED) module-attribute","text":"

Deprecated. Use PREFECT_SERVER_ANALYTICS_ENABLED instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_SCHEDULER_ENABLED","title":"PREFECT_ORION_SERVICES_SCHEDULER_ENABLED = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_SCHEDULER_ENABLED` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_SCHEDULER_ENABLED) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_SCHEDULER_ENABLED instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_LATE_RUNS_ENABLED","title":"PREFECT_ORION_SERVICES_LATE_RUNS_ENABLED = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_LATE_RUNS_ENABLED` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_LATE_RUNS_ENABLED) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_LATE_RUNS_ENABLED instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED","title":"PREFECT_ORION_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_PAUSE_EXPIRATIONS_ENABLED","title":"PREFECT_ORION_SERVICES_PAUSE_EXPIRATIONS_ENABLED = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_TASK_CACHE_KEY_MAX_LENGTH","title":"PREFECT_ORION_TASK_CACHE_KEY_MAX_LENGTH = Setting(Optional[int], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH` instead.', deprecated_renamed_to=PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH) module-attribute","text":"

Deprecated. Use PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.PREFECT_ORION_SERVICES_CANCELLATION_CLEANUP_ENABLED","title":"PREFECT_ORION_SERVICES_CANCELLATION_CLEANUP_ENABLED = Setting(Optional[bool], default=None, deprecated=True, deprecated_start_date='Feb 2023', deprecated_help='Use `PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED` instead.', deprecated_renamed_to=PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED) module-attribute","text":"

Deprecated. Use PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED instead.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Setting","title":"Setting","text":"

Bases: Generic[T]

Setting definition type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
class Setting(Generic[T]):\n\"\"\"\n    Setting definition type.\n    \"\"\"\n\n    def __init__(\n        self,\n        type: Type[T],\n        *,\n        deprecated: bool = False,\n        deprecated_start_date: Optional[str] = None,\n        deprecated_end_date: Optional[str] = None,\n        deprecated_help: str = \"\",\n        deprecated_when_message: str = \"\",\n        deprecated_when: Optional[Callable[[Any], bool]] = None,\n        deprecated_renamed_to: Optional[\"Setting\"] = None,\n        value_callback: Callable[[\"Settings\", T], T] = None,\n        is_secret: bool = False,\n        **kwargs,\n    ) -> None:\n        self.field: pydantic.fields.FieldInfo = Field(**kwargs)\n        self.type = type\n        self.value_callback = value_callback\n        self._name = None\n        self.is_secret = is_secret\n        self.deprecated = deprecated\n        self.deprecated_start_date = deprecated_start_date\n        self.deprecated_end_date = deprecated_end_date\n        self.deprecated_help = deprecated_help\n        self.deprecated_when = deprecated_when or (lambda _: True)\n        self.deprecated_when_message = deprecated_when_message\n        self.deprecated_renamed_to = deprecated_renamed_to\n        self.deprecated_renamed_from = None\n        self.__doc__ = self.field.description\n\n        # Validate the deprecation settings, will throw an error at setting definition\n        # time if the developer has not configured it correctly\n        if deprecated:\n            generate_deprecation_message(\n                name=\"...\",  # setting names not populated until after init\n                start_date=self.deprecated_start_date,\n                end_date=self.deprecated_end_date,\n                help=self.deprecated_help,\n                when=self.deprecated_when_message,\n            )\n\n        if deprecated_renamed_to is not None:\n            # Track the deprecation both ways\n            deprecated_renamed_to.deprecated_renamed_from = self\n\n    def value(self, bypass_callback: bool = False) -> T:\n\"\"\"\n        Get the current value of a setting.\n\n        Example:\n        ```python\n        from prefect.settings import PREFECT_API_URL\n        PREFECT_API_URL.value()\n        ```\n        \"\"\"\n        return self.value_from(get_current_settings(), bypass_callback=bypass_callback)\n\n    def value_from(self, settings: \"Settings\", bypass_callback: bool = False) -> T:\n\"\"\"\n        Get the value of a setting from a settings object\n\n        Example:\n        ```python\n        from prefect.settings import get_default_settings\n        PREFECT_API_URL.value_from(get_default_settings())\n        ```\n        \"\"\"\n        value = settings.value_of(self, bypass_callback=bypass_callback)\n\n        if not bypass_callback and self.deprecated and self.deprecated_when(value):\n            # Check if this setting is deprecated and someone is accessing the value\n            # via the old name\n            warnings.warn(self.deprecated_message, DeprecationWarning, stacklevel=3)\n\n            # If the the value is empty, return the new setting's value for compat\n            if value is None and self.deprecated_renamed_to is not None:\n                return self.deprecated_renamed_to.value_from(settings)\n\n        if not bypass_callback and self.deprecated_renamed_from is not None:\n            # Check if this setting is a rename of a deprecated setting and the\n            # deprecated setting is set and should be used for compatibility\n            
deprecated_value = self.deprecated_renamed_from.value_from(\n                settings, bypass_callback=True\n            )\n            if deprecated_value is not None:\n                warnings.warn(\n                    (\n                        f\"{self.deprecated_renamed_from.deprecated_message} Because\"\n                        f\" {self.deprecated_renamed_from.name!r} is set it will be used\"\n                        f\" instead of {self.name!r} for backwards compatibility.\"\n                    ),\n                    DeprecationWarning,\n                    stacklevel=3,\n                )\n            return deprecated_value or value\n\n        return value\n\n    @property\n    def name(self):\n        if self._name:\n            return self._name\n\n        # Lookup the name on first access\n        for name, val in tuple(globals().items()):\n            if val == self:\n                self._name = name\n                return name\n\n        raise ValueError(\"Setting not found in `prefect.settings` module.\")\n\n    @name.setter\n    def name(self, value: str):\n        self._name = value\n\n    @property\n    def deprecated_message(self):\n        return generate_deprecation_message(\n            name=f\"Setting {self.name!r}\",\n            start_date=self.deprecated_start_date,\n            end_date=self.deprecated_end_date,\n            help=self.deprecated_help,\n            when=self.deprecated_when_message,\n        )\n\n    def __repr__(self) -> str:\n        return f\"<{self.name}: {self.type.__name__}>\"\n\n    def __bool__(self) -> bool:\n\"\"\"\n        Returns a truthy check of the current value.\n        \"\"\"\n        return bool(self.value())\n\n    def __eq__(self, __o: object) -> bool:\n        return __o.__eq__(self.value())\n\n    def __hash__(self) -> int:\n        return hash((type(self), self.name))\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Setting.value","title":"value","text":"

Get the current value of a setting.

Example:

from prefect.settings import PREFECT_API_URL\nPREFECT_API_URL.value()\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def value(self, bypass_callback: bool = False) -> T:\n\"\"\"\n    Get the current value of a setting.\n\n    Example:\n    ```python\n    from prefect.settings import PREFECT_API_URL\n    PREFECT_API_URL.value()\n    ```\n    \"\"\"\n    return self.value_from(get_current_settings(), bypass_callback=bypass_callback)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Setting.value_from","title":"value_from","text":"

Get the value of a setting from a settings object

Example:

from prefect.settings import get_default_settings\nPREFECT_API_URL.value_from(get_default_settings())\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def value_from(self, settings: \"Settings\", bypass_callback: bool = False) -> T:\n\"\"\"\n    Get the value of a setting from a settings object\n\n    Example:\n    ```python\n    from prefect.settings import get_default_settings\n    PREFECT_API_URL.value_from(get_default_settings())\n    ```\n    \"\"\"\n    value = settings.value_of(self, bypass_callback=bypass_callback)\n\n    if not bypass_callback and self.deprecated and self.deprecated_when(value):\n        # Check if this setting is deprecated and someone is accessing the value\n        # via the old name\n        warnings.warn(self.deprecated_message, DeprecationWarning, stacklevel=3)\n\n        # If the the value is empty, return the new setting's value for compat\n        if value is None and self.deprecated_renamed_to is not None:\n            return self.deprecated_renamed_to.value_from(settings)\n\n    if not bypass_callback and self.deprecated_renamed_from is not None:\n        # Check if this setting is a rename of a deprecated setting and the\n        # deprecated setting is set and should be used for compatibility\n        deprecated_value = self.deprecated_renamed_from.value_from(\n            settings, bypass_callback=True\n        )\n        if deprecated_value is not None:\n            warnings.warn(\n                (\n                    f\"{self.deprecated_renamed_from.deprecated_message} Because\"\n                    f\" {self.deprecated_renamed_from.name!r} is set it will be used\"\n                    f\" instead of {self.name!r} for backwards compatibility.\"\n                ),\n                DeprecationWarning,\n                stacklevel=3,\n            )\n        return deprecated_value or value\n\n    return value\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings","title":"Settings","text":"

Bases: SettingsFieldsMixin

Contains validated Prefect settings.

Settings should be accessed using the relevant Setting object. For example:

from prefect.settings import PREFECT_HOME\nPREFECT_HOME.value()\n

Accessing a setting attribute directly will ignore any value_callback mutations. This is not recommended:

from prefect.settings import Settings\nSettings().PREFECT_PROFILES_PATH  # PosixPath('${PREFECT_HOME}/profiles.toml')\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
@add_cloudpickle_reduction\nclass Settings(SettingsFieldsMixin):\n\"\"\"\n    Contains validated Prefect settings.\n\n    Settings should be accessed using the relevant `Setting` object. For example:\n    ```python\n    from prefect.settings import PREFECT_HOME\n    PREFECT_HOME.value()\n    ```\n\n    Accessing a setting attribute directly will ignore any `value_callback` mutations.\n    This is not recommended:\n    ```python\n    from prefect.settings import Settings\n    Settings().PREFECT_PROFILES_PATH  # PosixPath('${PREFECT_HOME}/profiles.toml')\n    ```\n    \"\"\"\n\n    def value_of(self, setting: Setting[T], bypass_callback: bool = False) -> T:\n\"\"\"\n        Retrieve a setting's value.\n        \"\"\"\n        value = getattr(self, setting.name)\n        if setting.value_callback and not bypass_callback:\n            value = setting.value_callback(self, value)\n        return value\n\n    @validator(PREFECT_LOGGING_LEVEL.name, PREFECT_LOGGING_SERVER_LEVEL.name)\n    def check_valid_log_level(cls, value):\n        if isinstance(value, str):\n            value = value.upper()\n        logging._checkLevel(value)\n        return value\n\n    @root_validator\n    def post_root_validators(cls, values):\n\"\"\"\n        Add root validation functions for settings here.\n        \"\"\"\n        # TODO: We could probably register these dynamically but this is the simpler\n        #       approach for now. We can explore more interesting validation features\n        #       in the future.\n        values = max_log_size_smaller_than_batch_size(values)\n        values = warn_on_database_password_value_without_usage(values)\n        return values\n\n    def copy_with_update(\n        self,\n        updates: Mapping[Setting, Any] = None,\n        set_defaults: Mapping[Setting, Any] = None,\n        restore_defaults: Iterable[Setting] = None,\n    ) -> \"Settings\":\n\"\"\"\n        Create a new `Settings` object with validation.\n\n        Arguments:\n            updates: A mapping of settings to new values. Existing values for the\n                given settings will be overridden.\n            set_defaults: A mapping of settings to new default values. 
Existing values for\n                the given settings will only be overridden if they were not set.\n            restore_defaults: An iterable of settings to restore to their default values.\n\n        Returns:\n            A new `Settings` object.\n        \"\"\"\n        updates = updates or {}\n        set_defaults = set_defaults or {}\n        restore_defaults = restore_defaults or set()\n        restore_defaults_names = {setting.name for setting in restore_defaults}\n\n        return self.__class__(\n            **{\n                **{setting.name: value for setting, value in set_defaults.items()},\n                **self.dict(exclude_unset=True, exclude=restore_defaults_names),\n                **{setting.name: value for setting, value in updates.items()},\n            }\n        )\n\n    def with_obfuscated_secrets(self):\n\"\"\"\n        Returns a copy of this settings object with secret setting values obfuscated.\n        \"\"\"\n        settings = self.copy(\n            update={\n                setting.name: obfuscate(self.value_of(setting))\n                for setting in SETTING_VARIABLES.values()\n                if setting.is_secret\n                # Exclude deprecated settings with null values to avoid warnings\n                and not (setting.deprecated and self.value_of(setting) is None)\n            }\n        )\n        # Ensure that settings that have not been marked as \"set\" before are still so\n        # after we have updated their value above\n        settings.__fields_set__.intersection_update(self.__fields_set__)\n        return settings\n\n    def to_environment_variables(\n        self, include: Iterable[Setting] = None, exclude_unset: bool = False\n    ) -> Dict[str, str]:\n\"\"\"\n        Convert the settings object to environment variables.\n\n        Note that setting values will not be run through their `value_callback` allowing\n        dynamic resolution to occur when loaded from the returned environment.\n\n        Args:\n            include_keys: An iterable of settings to include in the return value.\n                If not set, all settings are used.\n            exclude_unset: Only include settings that have been set (i.e. the value is\n                not from the default). If set, unset keys will be dropped even if they\n                are set in `include_keys`.\n\n        Returns:\n            A dictionary of settings with values cast to strings\n        \"\"\"\n        include = set(include or SETTING_VARIABLES.values())\n\n        if exclude_unset:\n            set_keys = {\n                # Collect all of the \"set\" keys and cast to `Setting` objects\n                SETTING_VARIABLES[key]\n                for key in self.dict(exclude_unset=True)\n            }\n            include.intersection_update(set_keys)\n\n        # Validate the types of items in `include` to prevent exclusion bugs\n        for key in include:\n            if not isinstance(key, Setting):\n                raise TypeError(\n                    \"Invalid type {type(key).__name__!r} for key in `include`.\"\n                )\n\n        env = {\n            # Use `getattr` instead of `value_of` to avoid value callback resolution\n            key: getattr(self, key)\n            for key, setting in SETTING_VARIABLES.items()\n            if setting in include\n        }\n\n        # Cast to strings and drop null values\n        return {key: str(value) for key, value in env.items() if value is not None}\n\n    class Config:\n        frozen = True\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.value_of","title":"value_of","text":"

Retrieve a setting's value.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def value_of(self, setting: Setting[T], bypass_callback: bool = False) -> T:\n\"\"\"\n    Retrieve a setting's value.\n    \"\"\"\n    value = getattr(self, setting.name)\n    if setting.value_callback and not bypass_callback:\n        value = setting.value_callback(self, value)\n    return value\n
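For illustration, a minimal usage sketch that reads a setting through an explicit Settings object (the setting shown is only an example):

from prefect.settings import PREFECT_HOME, get_current_settings\n\nsettings = get_current_settings()\n# Equivalent to PREFECT_HOME.value(), but explicit about which Settings object is read\nhome = settings.value_of(PREFECT_HOME)\n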
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.post_root_validators","title":"post_root_validators","text":"

Add root validation functions for settings here.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
@root_validator\ndef post_root_validators(cls, values):\n\"\"\"\n    Add root validation functions for settings here.\n    \"\"\"\n    # TODO: We could probably register these dynamically but this is the simpler\n    #       approach for now. We can explore more interesting validation features\n    #       in the future.\n    values = max_log_size_smaller_than_batch_size(values)\n    values = warn_on_database_password_value_without_usage(values)\n    return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.with_obfuscated_secrets","title":"with_obfuscated_secrets","text":"

Returns a copy of this settings object with secret setting values obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def with_obfuscated_secrets(self):\n\"\"\"\n    Returns a copy of this settings object with secret setting values obfuscated.\n    \"\"\"\n    settings = self.copy(\n        update={\n            setting.name: obfuscate(self.value_of(setting))\n            for setting in SETTING_VARIABLES.values()\n            if setting.is_secret\n            # Exclude deprecated settings with null values to avoid warnings\n            and not (setting.deprecated and self.value_of(setting) is None)\n        }\n    )\n    # Ensure that settings that have not been marked as \"set\" before are still so\n    # after we have updated their value above\n    settings.__fields_set__.intersection_update(self.__fields_set__)\n    return settings\n
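For illustration, a small sketch of obfuscating secrets before logging settings; it assumes PREFECT_API_KEY is the secret setting of interest:

from prefect.settings import PREFECT_API_KEY, get_current_settings\n\n# Secret settings such as PREFECT_API_KEY come back obfuscated, so the copy is safe to log\nsafe = get_current_settings().with_obfuscated_secrets()\nprint(safe.value_of(PREFECT_API_KEY))\n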
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Settings.to_environment_variables","title":"to_environment_variables","text":"

Convert the settings object to environment variables.

Note that setting values will not be run through their value_callback allowing dynamic resolution to occur when loaded from the returned environment.

Parameters:

include (Iterable[Setting], optional): An iterable of settings to include in the return value. If not set, all settings are used.

exclude_unset (bool, default False): Only include settings that have been set (i.e. the value is not from the default). If set, unset keys will be dropped even if they are set in include.

Returns:

Dict[str, str]: A dictionary of settings with values cast to strings.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def to_environment_variables(\n    self, include: Iterable[Setting] = None, exclude_unset: bool = False\n) -> Dict[str, str]:\n\"\"\"\n    Convert the settings object to environment variables.\n\n    Note that setting values will not be run through their `value_callback` allowing\n    dynamic resolution to occur when loaded from the returned environment.\n\n    Args:\n        include_keys: An iterable of settings to include in the return value.\n            If not set, all settings are used.\n        exclude_unset: Only include settings that have been set (i.e. the value is\n            not from the default). If set, unset keys will be dropped even if they\n            are set in `include_keys`.\n\n    Returns:\n        A dictionary of settings with values cast to strings\n    \"\"\"\n    include = set(include or SETTING_VARIABLES.values())\n\n    if exclude_unset:\n        set_keys = {\n            # Collect all of the \"set\" keys and cast to `Setting` objects\n            SETTING_VARIABLES[key]\n            for key in self.dict(exclude_unset=True)\n        }\n        include.intersection_update(set_keys)\n\n    # Validate the types of items in `include` to prevent exclusion bugs\n    for key in include:\n        if not isinstance(key, Setting):\n            raise TypeError(\n                \"Invalid type {type(key).__name__!r} for key in `include`.\"\n            )\n\n    env = {\n        # Use `getattr` instead of `value_of` to avoid value callback resolution\n        key: getattr(self, key)\n        for key, setting in SETTING_VARIABLES.items()\n        if setting in include\n    }\n\n    # Cast to strings and drop null values\n    return {key: str(value) for key, value in env.items() if value is not None}\n
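For illustration, a minimal sketch that exports a single setting as an environment-variable mapping:

from prefect.settings import PREFECT_API_URL, get_current_settings\n\nsettings = get_current_settings()\n# Yields e.g. {\"PREFECT_API_URL\": \"http://...\"} when the setting has a non-null value\nenv = settings.to_environment_variables(include={PREFECT_API_URL})\n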
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Profile","title":"Profile","text":"

Bases: pydantic.BaseModel

A user profile containing settings.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
class Profile(pydantic.BaseModel):\n\"\"\"\n    A user profile containing settings.\n    \"\"\"\n\n    name: str\n    settings: Dict[Setting, Any] = Field(default_factory=dict)\n    source: Optional[Path]\n\n    @pydantic.validator(\"settings\", pre=True)\n    def map_names_to_settings(cls, value):\n        if value is None:\n            return value\n\n        # Cast string setting names to variables\n        validated = {}\n        for setting, val in value.items():\n            if isinstance(setting, str) and setting in SETTING_VARIABLES:\n                validated[SETTING_VARIABLES[setting]] = val\n            elif isinstance(setting, Setting):\n                validated[setting] = val\n            else:\n                raise ValueError(f\"Unknown setting {setting!r}.\")\n\n        return validated\n\n    def validate_settings(self) -> None:\n\"\"\"\n        Validate the settings contained in this profile.\n\n        Raises:\n            pydantic.ValidationError: When settings do not have valid values.\n        \"\"\"\n        # Create a new `Settings` instance with the settings from this profile relying\n        # on Pydantic validation to raise an error.\n        # We do not return the `Settings` object because this is not the recommended\n        # path for constructing settings with a profile. See `use_profile` instead.\n        Settings(**{setting.name: value for setting, value in self.settings.items()})\n\n    def convert_deprecated_renamed_settings(self) -> List[Tuple[Setting, Setting]]:\n\"\"\"\n        Update settings in place to replace deprecated settings with new settings when\n        renamed.\n\n        Returns a list of tuples with the old and new setting.\n        \"\"\"\n        changed = []\n        for setting in tuple(self.settings):\n            if (\n                setting.deprecated\n                and setting.deprecated_renamed_to\n                and setting.deprecated_renamed_to not in self.settings\n            ):\n                self.settings[setting.deprecated_renamed_to] = self.settings.pop(\n                    setting\n                )\n                changed.append((setting, setting.deprecated_renamed_to))\n        return changed\n\n    class Config:\n        arbitrary_types_allowed = True\n
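For illustration, a minimal sketch of building and validating a profile; the profile name and URL are placeholders:

from prefect.settings import PREFECT_API_URL, Profile\n\n# String setting names are also accepted and are cast to Setting objects by the validator\nprofile = Profile(name=\"staging\", settings={PREFECT_API_URL: \"http://localhost:4200/api\"})\nprofile.validate_settings()  # raises pydantic.ValidationError for invalid values\n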
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Profile.validate_settings","title":"validate_settings","text":"

Validate the settings contained in this profile.

Raises:

pydantic.ValidationError: When settings do not have valid values.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def validate_settings(self) -> None:\n\"\"\"\n    Validate the settings contained in this profile.\n\n    Raises:\n        pydantic.ValidationError: When settings do not have valid values.\n    \"\"\"\n    # Create a new `Settings` instance with the settings from this profile relying\n    # on Pydantic validation to raise an error.\n    # We do not return the `Settings` object because this is not the recommended\n    # path for constructing settings with a profile. See `use_profile` instead.\n    Settings(**{setting.name: value for setting, value in self.settings.items()})\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.Profile.convert_deprecated_renamed_settings","title":"convert_deprecated_renamed_settings","text":"

Update settings in place to replace deprecated settings with new settings when renamed.

Returns a list of tuples with the old and new setting.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def convert_deprecated_renamed_settings(self) -> List[Tuple[Setting, Setting]]:\n\"\"\"\n    Update settings in place to replace deprecated settings with new settings when\n    renamed.\n\n    Returns a list of tuples with the old and new setting.\n    \"\"\"\n    changed = []\n    for setting in tuple(self.settings):\n        if (\n            setting.deprecated\n            and setting.deprecated_renamed_to\n            and setting.deprecated_renamed_to not in self.settings\n        ):\n            self.settings[setting.deprecated_renamed_to] = self.settings.pop(\n                setting\n            )\n            changed.append((setting, setting.deprecated_renamed_to))\n    return changed\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection","title":"ProfilesCollection","text":"

\" A utility class for working with a collection of profiles.

Profiles in the collection must have unique names.

The collection may store the name of the active profile.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
class ProfilesCollection:\n\"\"\" \"\n    A utility class for working with a collection of profiles.\n\n    Profiles in the collection must have unique names.\n\n    The collection may store the name of the active profile.\n    \"\"\"\n\n    def __init__(\n        self, profiles: Iterable[Profile], active: Optional[str] = None\n    ) -> None:\n        self.profiles_by_name = {profile.name: profile for profile in profiles}\n        self.active_name = active\n\n    @property\n    def names(self) -> Set[str]:\n\"\"\"\n        Return a set of profile names in this collection.\n        \"\"\"\n        return set(self.profiles_by_name.keys())\n\n    @property\n    def active_profile(self) -> Optional[Profile]:\n\"\"\"\n        Retrieve the active profile in this collection.\n        \"\"\"\n        if self.active_name is None:\n            return None\n        return self[self.active_name]\n\n    def set_active(self, name: Optional[str], check: bool = True):\n\"\"\"\n        Set the active profile name in the collection.\n\n        A null value may be passed to indicate that this collection does not determine\n        the active profile.\n        \"\"\"\n        if check and name is not None and name not in self.names:\n            raise ValueError(f\"Unknown profile name {name!r}.\")\n        self.active_name = name\n\n    def update_profile(\n        self, name: str, settings: Mapping[Union[Dict, str], Any], source: Path = None\n    ) -> Profile:\n\"\"\"\n        Add a profile to the collection or update the existing on if the name is already\n        present in this collection.\n\n        If updating an existing profile, the settings will be merged. Settings can\n        be dropped from the existing profile by setting them to `None` in the new\n        profile.\n\n        Returns the new profile object.\n        \"\"\"\n        existing = self.profiles_by_name.get(name)\n\n        # Convert the input to a `Profile` to cast settings to the correct type\n        profile = Profile(name=name, settings=settings, source=source)\n\n        if existing:\n            new_settings = {**existing.settings, **profile.settings}\n\n            # Drop null keys to restore to default\n            for key, value in tuple(new_settings.items()):\n                if value is None:\n                    new_settings.pop(key)\n\n            new_profile = Profile(\n                name=profile.name,\n                settings=new_settings,\n                source=source or profile.source,\n            )\n        else:\n            new_profile = profile\n\n        self.profiles_by_name[new_profile.name] = new_profile\n\n        return new_profile\n\n    def add_profile(self, profile: Profile) -> None:\n\"\"\"\n        Add a profile to the collection.\n\n        If the profile name already exists, an exception will be raised.\n        \"\"\"\n        if profile.name in self.profiles_by_name:\n            raise ValueError(\n                f\"Profile name {profile.name!r} already exists in collection.\"\n            )\n\n        self.profiles_by_name[profile.name] = profile\n\n    def remove_profile(self, name: str) -> None:\n\"\"\"\n        Remove a profile from the collection.\n        \"\"\"\n        self.profiles_by_name.pop(name)\n\n    def without_profile_source(self, path: Optional[Path]) -> \"ProfilesCollection\":\n\"\"\"\n        Remove profiles that were loaded from a given path.\n\n        Returns a new collection.\n        \"\"\"\n        return ProfilesCollection(\n            [\n                profile\n    
            for profile in self.profiles_by_name.values()\n                if profile.source != path\n            ],\n            active=self.active_name,\n        )\n\n    def to_dict(self):\n\"\"\"\n        Convert to a dictionary suitable for writing to disk.\n        \"\"\"\n        return {\n            \"active\": self.active_name,\n            \"profiles\": {\n                profile.name: {\n                    setting.name: value for setting, value in profile.settings.items()\n                }\n                for profile in self.profiles_by_name.values()\n            },\n        }\n\n    def __getitem__(self, name: str) -> Profile:\n        return self.profiles_by_name[name]\n\n    def __iter__(self):\n        return self.profiles_by_name.__iter__()\n\n    def items(self):\n        return self.profiles_by_name.items()\n\n    def __eq__(self, __o: object) -> bool:\n        if not isinstance(__o, ProfilesCollection):\n            return False\n\n        return (\n            self.profiles_by_name == __o.profiles_by_name\n            and self.active_name == __o.active_name\n        )\n\n    def __repr__(self) -> str:\n        return (\n            f\"ProfilesCollection(profiles={list(self.profiles_by_name.values())!r},\"\n            f\" active={self.active_name!r})>\"\n        )\n
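For illustration, a minimal sketch of assembling a collection and switching the active profile; the profile names are placeholders:

from prefect.settings import Profile, ProfilesCollection\n\nprofiles = ProfilesCollection(\n    [Profile(name=\"default\", settings={}), Profile(name=\"staging\", settings={})],\n    active=\"default\",\n)\nprofiles.set_active(\"staging\")\nassert profiles.active_profile.name == \"staging\"\n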
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.names","title":"names: Set[str] property","text":"

Return a set of profile names in this collection.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.active_profile","title":"active_profile: Optional[Profile] property","text":"

Retrieve the active profile in this collection.

","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.set_active","title":"set_active","text":"

Set the active profile name in the collection.

A null value may be passed to indicate that this collection does not determine the active profile.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def set_active(self, name: Optional[str], check: bool = True):\n\"\"\"\n    Set the active profile name in the collection.\n\n    A null value may be passed to indicate that this collection does not determine\n    the active profile.\n    \"\"\"\n    if check and name is not None and name not in self.names:\n        raise ValueError(f\"Unknown profile name {name!r}.\")\n    self.active_name = name\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.update_profile","title":"update_profile","text":"

Add a profile to the collection or update the existing one if the name is already present in this collection.

If updating an existing profile, the settings will be merged. Settings can be dropped from the existing profile by setting them to None in the new profile.

Returns the new profile object.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def update_profile(\n    self, name: str, settings: Mapping[Union[Dict, str], Any], source: Path = None\n) -> Profile:\n\"\"\"\n    Add a profile to the collection or update the existing on if the name is already\n    present in this collection.\n\n    If updating an existing profile, the settings will be merged. Settings can\n    be dropped from the existing profile by setting them to `None` in the new\n    profile.\n\n    Returns the new profile object.\n    \"\"\"\n    existing = self.profiles_by_name.get(name)\n\n    # Convert the input to a `Profile` to cast settings to the correct type\n    profile = Profile(name=name, settings=settings, source=source)\n\n    if existing:\n        new_settings = {**existing.settings, **profile.settings}\n\n        # Drop null keys to restore to default\n        for key, value in tuple(new_settings.items()):\n            if value is None:\n                new_settings.pop(key)\n\n        new_profile = Profile(\n            name=profile.name,\n            settings=new_settings,\n            source=source or profile.source,\n        )\n    else:\n        new_profile = profile\n\n    self.profiles_by_name[new_profile.name] = new_profile\n\n    return new_profile\n
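For illustration, a small sketch of the merge behavior, including dropping a key by passing None; the profile name is a placeholder:

from prefect.settings import PREFECT_LOGGING_LEVEL, Profile, ProfilesCollection\n\nprofiles = ProfilesCollection([Profile(name=\"dev\", settings={PREFECT_LOGGING_LEVEL: \"DEBUG\"})])\n# Merges into the existing \"dev\" profile; a None value removes the key so the default applies\nprofiles.update_profile(\"dev\", {PREFECT_LOGGING_LEVEL: None})\n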
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.add_profile","title":"add_profile","text":"

Add a profile to the collection.

If the profile name already exists, an exception will be raised.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def add_profile(self, profile: Profile) -> None:\n\"\"\"\n    Add a profile to the collection.\n\n    If the profile name already exists, an exception will be raised.\n    \"\"\"\n    if profile.name in self.profiles_by_name:\n        raise ValueError(\n            f\"Profile name {profile.name!r} already exists in collection.\"\n        )\n\n    self.profiles_by_name[profile.name] = profile\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.remove_profile","title":"remove_profile","text":"

Remove a profile from the collection.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def remove_profile(self, name: str) -> None:\n\"\"\"\n    Remove a profile from the collection.\n    \"\"\"\n    self.profiles_by_name.pop(name)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.ProfilesCollection.without_profile_source","title":"without_profile_source","text":"

Remove profiles that were loaded from a given path.

Returns a new collection.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def without_profile_source(self, path: Optional[Path]) -> \"ProfilesCollection\":\n\"\"\"\n    Remove profiles that were loaded from a given path.\n\n    Returns a new collection.\n    \"\"\"\n    return ProfilesCollection(\n        [\n            profile\n            for profile in self.profiles_by_name.values()\n            if profile.source != path\n        ],\n        active=self.active_name,\n    )\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_extra_loggers","title":"get_extra_loggers","text":"

value_callback for PREFECT_LOGGING_EXTRA_LOGGERS that parses the CSV string into a list and trims whitespace from logger names.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def get_extra_loggers(_: \"Settings\", value: str) -> List[str]:\n\"\"\"\n    `value_callback` for `PREFECT_LOGGING_EXTRA_LOGGERS`that parses the CSV string into a\n    list and trims whitespace from logger names.\n    \"\"\"\n    return [name.strip() for name in value.split(\",\")] if value else []\n
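For illustration, the parsing behavior on a couple of hypothetical values (the callback ignores its settings argument, so None suffices here):

from prefect.settings import get_extra_loggers\n\nassert get_extra_loggers(None, \"dask, distributed ,urllib3\") == [\"dask\", \"distributed\", \"urllib3\"]\nassert get_extra_loggers(None, \"\") == []\n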
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.debug_mode_log_level","title":"debug_mode_log_level","text":"

value_callback for PREFECT_LOGGING_LEVEL that overrides the log level to DEBUG when debug mode is enabled.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def debug_mode_log_level(settings, value):\n\"\"\"\n    `value_callback` for `PREFECT_LOGGING_LEVEL` that overrides the log level to DEBUG\n    when debug mode is enabled.\n    \"\"\"\n    if PREFECT_DEBUG_MODE.value_from(settings):\n        return \"DEBUG\"\n    else:\n        return value\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.only_return_value_in_test_mode","title":"only_return_value_in_test_mode","text":"

value_callback for PREFECT_TEST_SETTING that only allows access during test mode

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def only_return_value_in_test_mode(settings, value):\n\"\"\"\n    `value_callback` for `PREFECT_TEST_SETTING` that only allows access during test mode\n    \"\"\"\n    if PREFECT_TEST_MODE.value_from(settings):\n        return value\n    else:\n        return None\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.default_ui_api_url","title":"default_ui_api_url","text":"

value_callback for PREFECT_UI_API_URL that sets the default value to PREFECT_API_URL if it is set; otherwise it constructs an API URL from the API settings.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def default_ui_api_url(settings, value):\n\"\"\"\n    `value_callback` for `PREFECT_UI_API_URL` that sets the default value to\n    `PREFECT_API_URL` if set otherwise it constructs an API URL from the API settings.\n    \"\"\"\n    if value is None:\n        # Set a default value\n        if PREFECT_API_URL.value_from(settings):\n            value = \"${PREFECT_API_URL}\"\n        else:\n            value = \"http://${PREFECT_SERVER_API_HOST}:${PREFECT_SERVER_API_PORT}/api\"\n\n    return template_with_settings(\n        PREFECT_SERVER_API_HOST, PREFECT_SERVER_API_PORT, PREFECT_API_URL\n    )(settings, value)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.status_codes_as_integers_in_range","title":"status_codes_as_integers_in_range","text":"

value_callback for PREFECT_CLIENT_RETRY_EXTRA_CODES that ensures status codes are integers in the range 100-599.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def status_codes_as_integers_in_range(_, value):\n\"\"\"\n    `value_callback` for `PREFECT_CLIENT_RETRY_EXTRA_CODES` that ensures status codes\n    are integers in the range 100-599.\n    \"\"\"\n    if value == \"\":\n        return set()\n\n    values = {v.strip() for v in value.split(\",\")}\n\n    if any(not v.isdigit() or int(v) < 100 or int(v) > 599 for v in values):\n        raise ValueError(\n            \"PREFECT_CLIENT_RETRY_EXTRA_CODES must be a comma separated list of \"\n            \"integers between 100 and 599.\"\n        )\n\n    values = {int(v) for v in values}\n    return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.template_with_settings","title":"template_with_settings","text":"

Returns a value_callback that will template the given settings into the runtime value for the setting.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def template_with_settings(*upstream_settings: Setting) -> Callable[[\"Settings\", T], T]:\n\"\"\"\n    Returns a `value_callback` that will template the given settings into the runtime\n    value for the setting.\n    \"\"\"\n\n    def templater(settings, value):\n        if value is None:\n            return value  # Do not attempt to template a null string\n\n        original_type = type(value)\n        template_values = {\n            setting.name: setting.value_from(settings) for setting in upstream_settings\n        }\n        template = string.Template(str(value))\n        return original_type(template.substitute(template_values))\n\n    return templater\n
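For illustration, a minimal sketch that templates the server host and port into a URL-shaped value at call time:

from prefect.settings import (\n    PREFECT_SERVER_API_HOST,\n    PREFECT_SERVER_API_PORT,\n    get_current_settings,\n    template_with_settings,\n)\n\ntemplater = template_with_settings(PREFECT_SERVER_API_HOST, PREFECT_SERVER_API_PORT)\n# Substitutes ${PREFECT_SERVER_API_HOST} and ${PREFECT_SERVER_API_PORT} into the string\nurl = templater(get_current_settings(), \"http://${PREFECT_SERVER_API_HOST}:${PREFECT_SERVER_API_PORT}/api\")\n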
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.max_log_size_smaller_than_batch_size","title":"max_log_size_smaller_than_batch_size","text":"

Validator for settings asserting the batch size and max log size are compatible.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def max_log_size_smaller_than_batch_size(values):\n\"\"\"\n    Validator for settings asserting the batch size and match log size are compatible\n    \"\"\"\n    if (\n        values[\"PREFECT_LOGGING_TO_API_BATCH_SIZE\"]\n        < values[\"PREFECT_LOGGING_TO_API_MAX_LOG_SIZE\"]\n    ):\n        raise ValueError(\n            \"`PREFECT_LOGGING_TO_API_MAX_LOG_SIZE` cannot be larger than\"\n            \" `PREFECT_LOGGING_TO_API_BATCH_SIZE`\"\n        )\n    return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.warn_on_database_password_value_without_usage","title":"warn_on_database_password_value_without_usage","text":"

Validator for settings warning if the database password is set but not used.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def warn_on_database_password_value_without_usage(values):\n\"\"\"\n    Validator for settings warning if the database password is set but not used.\n    \"\"\"\n    value = values[\"PREFECT_API_DATABASE_PASSWORD\"]\n    if (\n        value\n        and not value.startswith(OBFUSCATED_PREFIX)\n        and (\n            \"PREFECT_API_DATABASE_PASSWORD\"\n            not in values[\"PREFECT_API_DATABASE_CONNECTION_URL\"]\n        )\n    ):\n        warnings.warn(\n            \"PREFECT_API_DATABASE_PASSWORD is set but not included in the \"\n            \"PREFECT_API_DATABASE_CONNECTION_URL. \"\n            \"The provided password will be ignored.\"\n        )\n    return values\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_current_settings","title":"get_current_settings","text":"

Returns a settings object populated with values from the current settings context or, if no settings context is active, the environment.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def get_current_settings() -> Settings:\n\"\"\"\n    Returns a settings object populated with values from the current settings context\n    or, if no settings context is active, the environment.\n    \"\"\"\n    from prefect.context import SettingsContext\n\n    settings_context = SettingsContext.get()\n    if settings_context is not None:\n        return settings_context.settings\n\n    return get_settings_from_env()\n
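For illustration, a small sketch showing that an active settings context takes precedence over the environment; the URL is a placeholder:

from prefect.settings import PREFECT_API_URL, get_current_settings, temporary_settings\n\nwith temporary_settings(updates={PREFECT_API_URL: \"http://localhost:4200/api\"}):\n    # Inside the context, values come from the settings context rather than the environment\n    assert get_current_settings().value_of(PREFECT_API_URL) == \"http://localhost:4200/api\"\n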
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_settings_from_env","title":"get_settings_from_env","text":"

Returns a settings object populated with default values and overrides from environment variables, ignoring any values in profiles.

Calls with the same environment return a cached object instead of reconstructing to avoid validation overhead.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def get_settings_from_env() -> Settings:\n\"\"\"\n    Returns a settings object populated with default values and overrides from\n    environment variables, ignoring any values in profiles.\n\n    Calls with the same environment return a cached object instead of reconstructing\n    to avoid validation overhead.\n    \"\"\"\n    # Since os.environ is a Dict[str, str] we can safely hash it by contents, but we\n    # must be careful to avoid hashing a generator instead of a tuple\n    cache_key = hash(tuple((key, value) for key, value in os.environ.items()))\n\n    if cache_key not in _FROM_ENV_CACHE:\n        _FROM_ENV_CACHE[cache_key] = Settings()\n\n    return _FROM_ENV_CACHE[cache_key]\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.get_default_settings","title":"get_default_settings","text":"

Returns a settings object populated with default values, ignoring any overrides from environment variables or profiles.

This is cached since the defaults should not change during the lifetime of the module.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def get_default_settings() -> Settings:\n\"\"\"\n    Returns a settings object populated with default values, ignoring any overrides\n    from environment variables or profiles.\n\n    This is cached since the defaults should not change during the lifetime of the\n    module.\n    \"\"\"\n    global _DEFAULTS_CACHE\n\n    if not _DEFAULTS_CACHE:\n        old = os.environ\n        try:\n            os.environ = {}\n            settings = get_settings_from_env()\n        finally:\n            os.environ = old\n\n        _DEFAULTS_CACHE = settings\n\n    return _DEFAULTS_CACHE\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.temporary_settings","title":"temporary_settings","text":"

Temporarily override the current settings by entering a new profile.

See Settings.copy_with_update for details on different argument behavior.

Examples:

>>> from prefect.settings import PREFECT_API_URL\n>>>\n>>> with temporary_settings(updates={PREFECT_API_URL: \"foo\"}):\n>>>    assert PREFECT_API_URL.value() == \"foo\"\n>>>\n>>>    with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"}):\n>>>         assert PREFECT_API_URL.value() == \"foo\"\n>>>\n>>>    with temporary_settings(restore_defaults={PREFECT_API_URL}):\n>>>         assert PREFECT_API_URL.value() is None\n>>>\n>>>         with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"}):\n>>>             assert PREFECT_API_URL.value() == \"bar\"\n>>> assert PREFECT_API_URL.value() is None\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
@contextmanager\ndef temporary_settings(\n    updates: Mapping[Setting, Any] = None,\n    set_defaults: Mapping[Setting, Any] = None,\n    restore_defaults: Iterable[Setting] = None,\n) -> Settings:\n\"\"\"\n    Temporarily override the current settings by entering a new profile.\n\n    See `Settings.copy_with_update` for details on different argument behavior.\n\n    Examples:\n        >>> from prefect.settings import PREFECT_API_URL\n        >>>\n        >>> with temporary_settings(updates={PREFECT_API_URL: \"foo\"}):\n        >>>    assert PREFECT_API_URL.value() == \"foo\"\n        >>>\n        >>>    with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"}):\n        >>>         assert PREFECT_API_URL.value() == \"foo\"\n        >>>\n        >>>    with temporary_settings(restore_defaults={PREFECT_API_URL}):\n        >>>         assert PREFECT_API_URL.value() is None\n        >>>\n        >>>         with temporary_settings(set_defaults={PREFECT_API_URL: \"bar\"})\n        >>>             assert PREFECT_API_URL.value() == \"bar\"\n        >>> assert PREFECT_API_URL.value() is None\n    \"\"\"\n    import prefect.context\n\n    context = prefect.context.get_settings_context()\n\n    new_settings = context.settings.copy_with_update(\n        updates=updates, set_defaults=set_defaults, restore_defaults=restore_defaults\n    )\n\n    with prefect.context.SettingsContext(\n        profile=context.profile, settings=new_settings\n    ):\n        yield new_settings\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.load_profiles","title":"load_profiles","text":"

Load all profiles from the default and current profile paths.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def load_profiles() -> ProfilesCollection:\n\"\"\"\n    Load all profiles from the default and current profile paths.\n    \"\"\"\n    profiles = _read_profiles_from(DEFAULT_PROFILES_PATH)\n\n    user_profiles_path = PREFECT_PROFILES_PATH.value()\n    if user_profiles_path.exists():\n        user_profiles = _read_profiles_from(user_profiles_path)\n\n        # Merge all of the user profiles with the defaults\n        for name in user_profiles:\n            profiles.update_profile(\n                name,\n                settings=user_profiles[name].settings,\n                source=user_profiles[name].source,\n            )\n\n        if user_profiles.active_name:\n            profiles.set_active(user_profiles.active_name, check=False)\n\n    return profiles\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.load_current_profile","title":"load_current_profile","text":"

Load the current profile from the default and current profile paths.

This will not include settings from the current settings context. Only settings that have been persisted to the profiles file will be included.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def load_current_profile():\n\"\"\"\n    Load the current profile from the default and current profile paths.\n\n    This will _not_ include settings from the current settings context. Only settings\n    that have been persisted to the profiles file will be saved.\n    \"\"\"\n    from prefect.context import SettingsContext\n\n    profiles = load_profiles()\n    context = SettingsContext.get()\n\n    if context:\n        profiles.set_active(context.profile.name)\n\n    return profiles.active_profile\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.save_profiles","title":"save_profiles","text":"

Writes all non-default profiles to the current profiles path.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def save_profiles(profiles: ProfilesCollection) -> None:\n\"\"\"\n    Writes all non-default profiles to the current profiles path.\n    \"\"\"\n    profiles_path = PREFECT_PROFILES_PATH.value()\n    profiles = profiles.without_profile_source(DEFAULT_PROFILES_PATH)\n    return _write_profiles_to(profiles_path, profiles)\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.load_profile","title":"load_profile","text":"

Load a single profile by name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def load_profile(name: str) -> Profile:\n\"\"\"\n    Load a single profile by name.\n    \"\"\"\n    profiles = load_profiles()\n    try:\n        return profiles[name]\n    except KeyError:\n        raise ValueError(f\"Profile {name!r} not found.\")\n
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/settings/#prefect.settings.update_current_profile","title":"update_current_profile","text":"

Update the persisted data for the profile currently in-use.

If the profile does not exist in the profiles file, it will be created.

Given settings will be merged with the existing settings as described in ProfilesCollection.update_profile.

Returns:

Profile: The new profile.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/settings.py
def update_current_profile(settings: Dict[Union[str, Setting], Any]) -> Profile:\n\"\"\"\n    Update the persisted data for the profile currently in-use.\n\n    If the profile does not exist in the profiles file, it will be created.\n\n    Given settings will be merged with the existing settings as described in\n    `ProfilesCollection.update_profile`.\n\n    Returns:\n        The new profile.\n    \"\"\"\n    import prefect.context\n\n    current_profile = prefect.context.get_settings_context().profile\n\n    if not current_profile:\n        raise MissingProfileError(\"No profile is currently in use.\")\n\n    profiles = load_profiles()\n\n    # Ensure the current profile's settings are present\n    profiles.update_profile(current_profile.name, current_profile.settings)\n    # Then merge the new settings in\n    new_profile = profiles.update_profile(current_profile.name, settings)\n\n    # Validate before saving\n    new_profile.validate_settings()\n\n    save_profiles(profiles)\n\n    return profiles[current_profile.name]\n
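For illustration, a minimal sketch that persists one override to the active profile; the chosen setting and level are placeholders:

from prefect.settings import PREFECT_LOGGING_LEVEL, update_current_profile\n\n# Merges the override into the active profile and writes it to the profiles file\nupdate_current_profile({PREFECT_LOGGING_LEVEL: \"DEBUG\"})\n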
","tags":["Python API","settings","configuration","environment variables"]},{"location":"api-ref/prefect/states/","title":"prefect.states","text":"","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states","title":"prefect.states","text":"","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.AwaitingRetry","title":"AwaitingRetry","text":"

Convenience function for creating AwaitingRetry states.

Returns:

State: an AwaitingRetry state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def AwaitingRetry(\n    cls: Type[State] = State, scheduled_time: datetime.datetime = None, **kwargs\n) -> State:\n\"\"\"Convenience function for creating `AwaitingRetry` states.\n\n    Returns:\n        State: a AwaitingRetry state\n    \"\"\"\n    return Scheduled(\n        cls=cls, scheduled_time=scheduled_time, name=\"AwaitingRetry\", **kwargs\n    )\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Cancelled","title":"Cancelled","text":"

Convenience function for creating Cancelled states.

Returns:

State: a Cancelled state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Cancelled(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Cancelled` states.\n\n    Returns:\n        State: a Cancelled state\n    \"\"\"\n    return cls(type=StateType.CANCELLED, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Cancelling","title":"Cancelling","text":"

Convenience function for creating Cancelling states.

Returns:

State: a Cancelling state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Cancelling(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Cancelling` states.\n\n    Returns:\n        State: a Cancelling state\n    \"\"\"\n    return cls(type=StateType.CANCELLING, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Completed","title":"Completed","text":"

Convenience function for creating Completed states.

Returns:

State: a Completed state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Completed(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Completed` states.\n\n    Returns:\n        State: a Completed state\n    \"\"\"\n    return cls(type=StateType.COMPLETED, **kwargs)\n
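For illustration, a small sketch of these constructors (the messages are placeholders); extra keyword arguments are passed straight through to the state class:

from prefect.states import Completed, Failed\n\ndone = Completed(message=\"All rows loaded\")\noops = Failed(message=\"Upstream API returned 500\")\nassert done.is_completed() and oops.is_failed()\n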
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Crashed","title":"Crashed","text":"

Convenience function for creating Crashed states.

Returns:

State: a Crashed state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Crashed(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Crashed` states.\n\n    Returns:\n        State: a Crashed state\n    \"\"\"\n    return cls(type=StateType.CRASHED, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Failed","title":"Failed","text":"

Convenience function for creating Failed states.

Returns:

State: a Failed state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Failed(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Failed` states.\n\n    Returns:\n        State: a Failed state\n    \"\"\"\n    return cls(type=StateType.FAILED, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Late","title":"Late","text":"

Convenience function for creating Late states.

Returns:

State: a Late state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Late(\n    cls: Type[State] = State, scheduled_time: datetime.datetime = None, **kwargs\n) -> State:\n\"\"\"Convenience function for creating `Late` states.\n\n    Returns:\n        State: a Late state\n    \"\"\"\n    return Scheduled(cls=cls, scheduled_time=scheduled_time, name=\"Late\", **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Paused","title":"Paused","text":"

Convenience function for creating Paused states.

Returns:

State: a Paused state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Paused(\n    cls: Type[State] = State,\n    timeout_seconds: int = None,\n    pause_expiration_time: datetime.datetime = None,\n    reschedule: bool = False,\n    pause_key: str = None,\n    **kwargs,\n) -> State:\n\"\"\"Convenience function for creating `Paused` states.\n\n    Returns:\n        State: a Paused state\n    \"\"\"\n    state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n\n    if state_details.pause_timeout:\n        raise ValueError(\"An extra pause timeout was provided in state_details\")\n\n    if pause_expiration_time is not None and timeout_seconds is not None:\n        raise ValueError(\n            \"Cannot supply both a pause_expiration_time and timeout_seconds\"\n        )\n\n    if pause_expiration_time is None and timeout_seconds is None:\n        pass\n    else:\n        state_details.pause_timeout = pause_expiration_time or (\n            pendulum.now(\"UTC\") + pendulum.Duration(seconds=timeout_seconds)\n        )\n\n    state_details.pause_reschedule = reschedule\n    state_details.pause_key = pause_key\n\n    return cls(type=StateType.PAUSED, state_details=state_details, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Pending","title":"Pending","text":"

Convenience function for creating Pending states.

Returns:

State: a Pending state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Pending(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Pending` states.\n\n    Returns:\n        State: a Pending state\n    \"\"\"\n    return cls(type=StateType.PENDING, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Retrying","title":"Retrying","text":"

Convenience function for creating Retrying states.

Returns:

State: a Retrying state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Retrying(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Retrying` states.\n\n    Returns:\n        State: a Retrying state\n    \"\"\"\n    return cls(type=StateType.RUNNING, name=\"Retrying\", **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Running","title":"Running","text":"

Convenience function for creating Running states.

Returns:

State: a Running state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Running(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Running` states.\n\n    Returns:\n        State: a Running state\n    \"\"\"\n    return cls(type=StateType.RUNNING, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.Scheduled","title":"Scheduled","text":"

Convenience function for creating Scheduled states.

Returns:

State: a Scheduled state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def Scheduled(\n    cls: Type[State] = State, scheduled_time: datetime.datetime = None, **kwargs\n) -> State:\n\"\"\"Convenience function for creating `Scheduled` states.\n\n    Returns:\n        State: a Scheduled state\n    \"\"\"\n    state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n    if scheduled_time is None:\n        scheduled_time = pendulum.now(\"UTC\")\n    elif state_details.scheduled_time:\n        raise ValueError(\"An extra scheduled_time was provided in state_details\")\n    state_details.scheduled_time = scheduled_time\n\n    return cls(type=StateType.SCHEDULED, state_details=state_details, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.exception_to_crashed_state","title":"exception_to_crashed_state async","text":"

Takes an exception that occurs outside of user code and converts it to a 'Crash' exception with a 'Crashed' state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
async def exception_to_crashed_state(\n    exc: BaseException,\n    result_factory: Optional[ResultFactory] = None,\n) -> State:\n\"\"\"\n    Takes an exception that occurs _outside_ of user code and converts it to a\n    'Crash' exception with a 'Crashed' state.\n    \"\"\"\n    state_message = None\n\n    if isinstance(exc, anyio.get_cancelled_exc_class()):\n        state_message = \"Execution was cancelled by the runtime environment.\"\n\n    elif isinstance(exc, KeyboardInterrupt):\n        state_message = \"Execution was aborted by an interrupt signal.\"\n\n    elif isinstance(exc, TerminationSignal):\n        state_message = \"Execution was aborted by a termination signal.\"\n\n    elif isinstance(exc, SystemExit):\n        state_message = \"Execution was aborted by Python system exit call.\"\n\n    elif isinstance(exc, (httpx.TimeoutException, httpx.ConnectError)):\n        try:\n            request: httpx.Request = exc.request\n        except RuntimeError:\n            # The request property is not set\n            state_message = (\n                \"Request failed while attempting to contact the server:\"\n                f\" {format_exception(exc)}\"\n            )\n        else:\n            # TODO: We can check if this is actually our API url\n            state_message = f\"Request to {request.url} failed: {format_exception(exc)}.\"\n\n    else:\n        state_message = (\n            \"Execution was interrupted by an unexpected exception:\"\n            f\" {format_exception(exc)}\"\n        )\n\n    if result_factory:\n        data = await result_factory.create_result(exc)\n    else:\n        # Attach the exception for local usage, will not be available when retrieved\n        # from the API\n        data = exc\n\n    return Crashed(message=state_message, data=data)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.exception_to_failed_state","title":"exception_to_failed_state async","text":"

Convenience function for creating Failed states from exceptions

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
async def exception_to_failed_state(\n    exc: Optional[BaseException] = None,\n    result_factory: Optional[ResultFactory] = None,\n    **kwargs,\n) -> State:\n\"\"\"\n    Convenience function for creating `Failed` states from exceptions\n    \"\"\"\n    if not exc:\n        _, exc, _ = sys.exc_info()\n        if exc is None:\n            raise ValueError(\n                \"Exception was not passed and no active exception could be found.\"\n            )\n    else:\n        pass\n\n    if result_factory:\n        data = await result_factory.create_result(exc)\n    else:\n        # Attach the exception for local usage, will not be available when retrieved\n        # from the API\n        data = exc\n\n    existing_message = kwargs.pop(\"message\", \"\")\n    if existing_message and not existing_message.endswith(\" \"):\n        existing_message += \" \"\n\n    # TODO: Consider if we want to include traceback information, it is intentionally\n    #       excluded from messages for now\n    message = existing_message + format_exception(exc)\n\n    return Failed(data=data, message=message, **kwargs)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.get_state_exception","title":"get_state_exception async","text":"

If not given a FAILED or CRASHED state, this raises a ValueError.

If the state result is a state, its exception will be returned.

If the state result is an iterable of states, the exception of the first failure will be returned.

If the state result is a string, a wrapper exception will be returned with the string as the message.

If the state result is null, a wrapper exception will be returned with the state message attached.

If the state result is not of a known type, a TypeError will be returned.

When a wrapper exception is returned, the type will be:

- FailedRun if the state type is FAILED.
- CrashedRun if the state type is CRASHED.
- CancelledRun if the state type is CANCELLED.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
@sync_compatible\nasync def get_state_exception(state: State) -> BaseException:\n\"\"\"\n    If not given a FAILED or CRASHED state, this raise a value error.\n\n    If the state result is a state, its exception will be returned.\n\n    If the state result is an iterable of states, the exception of the first failure\n    will be returned.\n\n    If the state result is a string, a wrapper exception will be returned with the\n    string as the message.\n\n    If the state result is null, a wrapper exception will be returned with the state\n    message attached.\n\n    If the state result is not of a known type, a `TypeError` will be returned.\n\n    When a wrapper exception is returned, the type will be:\n        - `FailedRun` if the state type is FAILED.\n        - `CrashedRun` if the state type is CRASHED.\n        - `CancelledRun` if the state type is CANCELLED.\n    \"\"\"\n\n    if state.is_failed():\n        wrapper = FailedRun\n        default_message = \"Run failed.\"\n    elif state.is_crashed():\n        wrapper = CrashedRun\n        default_message = \"Run crashed.\"\n    elif state.is_cancelled():\n        wrapper = CancelledRun\n        default_message = \"Run cancelled.\"\n    else:\n        raise ValueError(f\"Expected failed or crashed state got {state!r}.\")\n\n    if isinstance(state.data, BaseResult):\n        result = await state.data.get()\n    elif state.data is None:\n        result = None\n    else:\n        result = state.data\n\n    if result is None:\n        return wrapper(state.message or default_message)\n\n    if isinstance(result, Exception):\n        return result\n\n    elif isinstance(result, BaseException):\n        return result\n\n    elif isinstance(result, str):\n        return wrapper(result)\n\n    elif is_state(result):\n        # Return the exception from the inner state\n        return await get_state_exception(result)\n\n    elif is_state_iterable(result):\n        # Return the first failure\n        for state in result:\n            if state.is_failed() or state.is_crashed() or state.is_cancelled():\n                return await get_state_exception(state)\n\n        raise ValueError(\n            \"Failed state result was an iterable of states but none were failed.\"\n        )\n\n    else:\n        raise TypeError(\n            f\"Unexpected result for failed state: {result!r} \u2014\u2014 \"\n            f\"{type(result).__name__} cannot be resolved into an exception\"\n        )\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.get_state_result","title":"get_state_result","text":"

Get the result from a state.

See State.result()

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def get_state_result(\n    state: State[R], raise_on_failure: bool = True, fetch: Optional[bool] = None\n) -> R:\n\"\"\"\n    Get the result from a state.\n\n    See `State.result()`\n    \"\"\"\n\n    if fetch is None and (\n        PREFECT_ASYNC_FETCH_STATE_RESULT or not in_async_main_thread()\n    ):\n        # Fetch defaults to `True` for sync users or async users who have opted in\n        fetch = True\n\n    if not fetch:\n        if fetch is None and in_async_main_thread():\n            warnings.warn(\n                (\n                    \"State.result() was called from an async context but not awaited. \"\n                    \"This method will be updated to return a coroutine by default in \"\n                    \"the future. Pass `fetch=True` and `await` the call to get rid of \"\n                    \"this warning.\"\n                ),\n                DeprecationWarning,\n                stacklevel=2,\n            )\n        # Backwards compatibility\n        if isinstance(state.data, DataDocument):\n            return result_from_state_with_data_document(\n                state, raise_on_failure=raise_on_failure\n            )\n        else:\n            return state.data\n    else:\n        return _get_state_result(state, raise_on_failure=raise_on_failure)\n
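For illustration, a minimal sketch of retrieving a result via State.result(), which delegates to this function; the flow is a placeholder:

from prefect import flow\n\n@flow\ndef add(x: int, y: int) -> int:\n    return x + y\n\n# return_state=True returns the final State instead of the raw return value\nstate = add(1, 2, return_state=True)\nassert state.result() == 3\n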
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.is_state","title":"is_state","text":"

Check if the given object is a state instance

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def is_state(obj: Any) -> TypeGuard[State]:\n\"\"\"\n    Check if the given object is a state instance\n    \"\"\"\n    # We may want to narrow this to client-side state types but for now this provides\n    # backwards compatibility\n    from prefect.server.schemas.states import State as State_\n\n    return isinstance(obj, (State, State_))\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.is_state_iterable","title":"is_state_iterable","text":"

Check if the given object is an iterable of state types

Supported iterables are:

- set
- list
- tuple

Other iterables will return False even if they contain states.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
def is_state_iterable(obj: Any) -> TypeGuard[Iterable[State]]:\n\"\"\"\n    Check if a the given object is an iterable of states types\n\n    Supported iterables are:\n    - set\n    - list\n    - tuple\n\n    Other iterables will return `False` even if they contain states.\n    \"\"\"\n    # We do not check for arbitary iterables because this is not intended to be used\n    # for things like dictionaries, dataframes, or pydantic models\n    if (\n        not isinstance(obj, BaseAnnotation)\n        and isinstance(obj, (list, set, tuple))\n        and obj\n    ):\n        return all([is_state(o) for o in obj])\n    else:\n        return False\n
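For illustration, a few hypothetical checks showing which containers qualify:

from prefect.states import Completed, is_state_iterable\n\nassert is_state_iterable([Completed(), Completed()])\nassert not is_state_iterable([])                  # empty containers do not qualify\nassert not is_state_iterable({\"a\": Completed()})  # dicts are not treated as state iterables\n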
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.raise_state_exception","title":"raise_state_exception async","text":"

Given a FAILED or CRASHED state, raise the contained exception.
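A small sketch: because the function is sync-compatible, it can be called directly from synchronous code. The failure message below is illustrative.

from prefect.states import Failed, raise_state_exception

state = Failed(message="Something went wrong")
try:
    raise_state_exception(state)       # raises FailedRun wrapping the state message
except Exception as exc:
    print(type(exc).__name__, exc)     # FailedRun Something went wrong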

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
@sync_compatible\nasync def raise_state_exception(state: State) -> None:\n\"\"\"\n    Given a FAILED or CRASHED state, raise the contained exception.\n    \"\"\"\n    if not (state.is_failed() or state.is_crashed() or state.is_cancelled()):\n        return None\n\n    raise await get_state_exception(state)\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/states/#prefect.states.return_value_to_state","title":"return_value_to_state async","text":"

Given a return value from a user's function, create a State the run should be placed in.

The aggregate rule says that given multiple states we will determine the final state such that: if any states are not COMPLETED, the final state is FAILED; if all of the states are COMPLETED, the final state is COMPLETED; and the states are placed in the final state's data attribute.

Callers should resolve all futures into states before passing return values to this function.
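A sketch of the aggregate rule from a user's perspective: returning several task states from a flow makes the flow's final state follow the rule above. All names below are illustrative.

from prefect import flow, task

@task
def succeed():
    return "ok"

@task
def fail():
    raise ValueError("boom")

@flow
def aggregate_example():
    # Returning a list of states applies the aggregate rule:
    # any non-COMPLETED state yields a FAILED final state.
    return [succeed(return_state=True), fail(return_state=True)]

final_state = aggregate_example(return_state=True)
print(final_state.type)  # StateType.FAILED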

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/states.py
async def return_value_to_state(retval: R, result_factory: ResultFactory) -> State[R]:\n\"\"\"\n    Given a return value from a user's function, create a `State` the run should\n    be placed in.\n\n    - If data is returned, we create a 'COMPLETED' state with the data\n    - If a single, manually created state is returned, we use that state as given\n        (manual creation is determined by the lack of ids)\n    - If an upstream state or iterable of upstream states is returned, we apply the\n        aggregate rule\n\n    The aggregate rule says that given multiple states we will determine the final state\n    such that:\n\n    - If any states are not COMPLETED the final state is FAILED\n    - If all of the states are COMPLETED the final state is COMPLETED\n    - The states will be placed in the final state `data` attribute\n\n    Callers should resolve all futures into states before passing return values to this\n    function.\n    \"\"\"\n\n    if (\n        is_state(retval)\n        # Check for manual creation\n        and not retval.state_details.flow_run_id\n        and not retval.state_details.task_run_id\n    ):\n        state = retval\n\n        # Do not modify states with data documents attached; backwards compatibility\n        if isinstance(state.data, DataDocument):\n            return state\n\n        # Unless the user has already constructed a result explicitly, use the factory\n        # to update the data to the correct type\n        if not isinstance(state.data, BaseResult):\n            state.data = await result_factory.create_result(state.data)\n\n        return state\n\n    # Determine a new state from the aggregate of contained states\n    if is_state(retval) or is_state_iterable(retval):\n        states = StateGroup(ensure_iterable(retval))\n\n        # Determine the new state type\n        if states.all_completed():\n            new_state_type = StateType.COMPLETED\n        elif states.any_cancelled():\n            new_state_type = StateType.CANCELLED\n        elif states.any_paused():\n            new_state_type = StateType.PAUSED\n        else:\n            new_state_type = StateType.FAILED\n\n        # Generate a nice message for the aggregate\n        if states.all_completed():\n            message = \"All states completed.\"\n        elif states.any_cancelled():\n            message = f\"{states.cancelled_count}/{states.total_count} states cancelled.\"\n        elif states.any_paused():\n            message = f\"{states.paused_count}/{states.total_count} states paused.\"\n        elif states.any_failed():\n            message = f\"{states.fail_count}/{states.total_count} states failed.\"\n        elif not states.all_final():\n            message = (\n                f\"{states.not_final_count}/{states.total_count} states are not final.\"\n            )\n        else:\n            message = \"Given states: \" + states.counts_message()\n\n        # TODO: We may actually want to set the data to a `StateGroup` object and just\n        #       allow it to be unpacked into a tuple and such so users can interact with\n        #       it\n        return State(\n            type=new_state_type,\n            message=message,\n            data=await result_factory.create_result(retval),\n        )\n\n    # Generators aren't portable, implicitly convert them to a list.\n    if isinstance(retval, GeneratorType):\n        data = list(retval)\n    else:\n        data = retval\n\n    # Otherwise, they just gave data and this is a completed retval\n    return 
Completed(data=await result_factory.create_result(data))\n
","tags":["Python API","states"]},{"location":"api-ref/prefect/task-runners/","title":"prefect.task_runners","text":"","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners","title":"prefect.task_runners","text":"

Interface and implementations of various task runners.

Task Runners in Prefect are responsible for managing the execution of Prefect task runs. Generally speaking, users are not expected to interact with task runners outside of configuring and initializing them for a flow.

Example
>>> from prefect import flow, task\n>>> from prefect.task_runners import SequentialTaskRunner\n>>> from typing import List\n>>>\n>>> @task\n>>> def say_hello(name):\n...     print(f\"hello {name}\")\n>>>\n>>> @task\n>>> def say_goodbye(name):\n...     print(f\"goodbye {name}\")\n>>>\n>>> @flow(task_runner=SequentialTaskRunner())\n>>> def greetings(names: List[str]):\n...     for name in names:\n...         say_hello(name)\n...         say_goodbye(name)\n>>>\n>>> greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\nhello arthur\ngoodbye arthur\nhello trillian\ngoodbye trillian\nhello ford\ngoodbye ford\nhello marvin\ngoodbye marvin\n

Switching to a DaskTaskRunner:

>>> from prefect_dask.task_runners import DaskTaskRunner\n>>> flow.task_runner = DaskTaskRunner()\n>>> greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\nhello arthur\ngoodbye arthur\nhello trillian\nhello ford\ngoodbye marvin\nhello marvin\ngoodbye ford\ngoodbye trillian\n

For usage details, see the Task Runners documentation.

","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner","title":"BaseTaskRunner","text":"Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/task_runners.py
class BaseTaskRunner(metaclass=abc.ABCMeta):\n    def __init__(self) -> None:\n        self.logger = get_logger(f\"task_runner.{self.name}\")\n        self._started: bool = False\n\n    @property\n    @abc.abstractmethod\n    def concurrency_type(self) -> TaskConcurrencyType:\n        pass  # noqa\n\n    @property\n    def name(self):\n        return type(self).__name__.lower().replace(\"taskrunner\", \"\")\n\n    def duplicate(self):\n\"\"\"\n        Return a new task runner instance with the same options.\n        \"\"\"\n        # The base class returns `NotImplemented` to indicate that this is not yet\n        # implemented by a given task runner.\n        return NotImplemented\n\n    def __eq__(self, other: object) -> bool:\n\"\"\"\n        Returns true if the task runners use the same options.\n        \"\"\"\n        if type(other) == type(self) and (\n            # Compare public attributes for naive equality check\n            # Subclasses should implement this method with a check init option equality\n            {k: v for k, v in self.__dict__.items() if not k.startswith(\"_\")}\n            == {k: v for k, v in other.__dict__.items() if not k.startswith(\"_\")}\n        ):\n            return True\n        else:\n            return NotImplemented\n\n    @abc.abstractmethod\n    async def submit(\n        self,\n        key: UUID,\n        call: Callable[..., Awaitable[State[R]]],\n    ) -> None:\n\"\"\"\n        Submit a call for execution and return a `PrefectFuture` that can be used to\n        get the call result.\n\n        Args:\n            task_run: The task run being submitted.\n            task_key: A unique key for this orchestration run of the task. Can be used\n                for caching.\n            call: The function to be executed\n            run_kwargs: A dict of keyword arguments to pass to `call`\n\n        Returns:\n            A future representing the result of `call` execution\n        \"\"\"\n        raise NotImplementedError()\n\n    @abc.abstractmethod\n    async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n\"\"\"\n        Given a `PrefectFuture`, wait for its return state up to `timeout` seconds.\n        If it is not finished after the timeout expires, `None` should be returned.\n\n        Implementers should be careful to ensure that this function never returns or\n        raises an exception.\n        \"\"\"\n        raise NotImplementedError()\n\n    @asynccontextmanager\n    async def start(\n        self: T,\n    ) -> AsyncIterator[T]:\n\"\"\"\n        Start the task runner, preparing any resources necessary for task submission.\n\n        Children should implement `_start` to prepare and clean up resources.\n\n        Yields:\n            The prepared task runner\n        \"\"\"\n        if self._started:\n            raise RuntimeError(\"The task runner is already started!\")\n\n        async with AsyncExitStack() as exit_stack:\n            self.logger.debug(\"Starting task runner...\")\n            try:\n                await self._start(exit_stack)\n                self._started = True\n                yield self\n            finally:\n                self.logger.debug(\"Shutting down task runner...\")\n                self._started = False\n\n    async def _start(self, exit_stack: AsyncExitStack) -> None:\n\"\"\"\n        Create any resources required for this task runner to submit work.\n\n        Cleanup of resources should be submitted to the `exit_stack`.\n        \"\"\"\n        pass  # noqa\n\n    def 
__str__(self) -> str:\n        return type(self).__name__\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.duplicate","title":"duplicate","text":"

Return a new task runner instance with the same options.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/task_runners.py
def duplicate(self):\n\"\"\"\n    Return a new task runner instance with the same options.\n    \"\"\"\n    # The base class returns `NotImplemented` to indicate that this is not yet\n    # implemented by a given task runner.\n    return NotImplemented\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.start","title":"start async","text":"

Start the task runner, preparing any resources necessary for task submission.

Children should implement _start to prepare and clean up resources.

Yields:

Type Description AsyncIterator[T]

The prepared task runner
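A minimal sketch of starting a runner by hand; in normal use the flow engine manages this context for you.

import asyncio
from prefect.task_runners import SequentialTaskRunner

async def main():
    runner = SequentialTaskRunner()
    # start() prepares the runner's resources and tears them down on exit
    async with runner.start() as started:
        print(started.name)              # "sequential"
        print(started.concurrency_type)  # TaskConcurrencyType.SEQUENTIAL

asyncio.run(main())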

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/task_runners.py
@asynccontextmanager\nasync def start(\n    self: T,\n) -> AsyncIterator[T]:\n\"\"\"\n    Start the task runner, preparing any resources necessary for task submission.\n\n    Children should implement `_start` to prepare and clean up resources.\n\n    Yields:\n        The prepared task runner\n    \"\"\"\n    if self._started:\n        raise RuntimeError(\"The task runner is already started!\")\n\n    async with AsyncExitStack() as exit_stack:\n        self.logger.debug(\"Starting task runner...\")\n        try:\n            await self._start(exit_stack)\n            self._started = True\n            yield self\n        finally:\n            self.logger.debug(\"Shutting down task runner...\")\n            self._started = False\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.submit","title":"submit abstractmethod async","text":"

Submit a call for execution and return a PrefectFuture that can be used to get the call result.

Parameters:

Name Type Description Default task_run

The task run being submitted.

required task_key

A unique key for this orchestration run of the task. Can be used for caching.

required call Callable[..., Awaitable[State[R]]]

The function to be executed

required run_kwargs

A dict of keyword arguments to pass to call

required

Returns:

Type Description None

A future representing the result of call execution

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/task_runners.py
@abc.abstractmethod\nasync def submit(\n    self,\n    key: UUID,\n    call: Callable[..., Awaitable[State[R]]],\n) -> None:\n\"\"\"\n    Submit a call for execution and return a `PrefectFuture` that can be used to\n    get the call result.\n\n    Args:\n        task_run: The task run being submitted.\n        task_key: A unique key for this orchestration run of the task. Can be used\n            for caching.\n        call: The function to be executed\n        run_kwargs: A dict of keyword arguments to pass to `call`\n\n    Returns:\n        A future representing the result of `call` execution\n    \"\"\"\n    raise NotImplementedError()\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.BaseTaskRunner.wait","title":"wait abstractmethod async","text":"

Given a PrefectFuture, wait for its return state up to timeout seconds. If it is not finished after the timeout expires, None should be returned.

Implementers should be careful to ensure that this function never returns or raises an exception.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/task_runners.py
@abc.abstractmethod\nasync def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n\"\"\"\n    Given a `PrefectFuture`, wait for its return state up to `timeout` seconds.\n    If it is not finished after the timeout expires, `None` should be returned.\n\n    Implementers should be careful to ensure that this function never returns or\n    raises an exception.\n    \"\"\"\n    raise NotImplementedError()\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.ConcurrentTaskRunner","title":"ConcurrentTaskRunner","text":"

Bases: BaseTaskRunner

A concurrent task runner that allows tasks to switch when blocking on IO. Synchronous tasks will be submitted to a thread pool maintained by anyio.

Example
Using a thread for concurrency:\n>>> from prefect import flow\n>>> from prefect.task_runners import ConcurrentTaskRunner\n>>> @flow(task_runner=ConcurrentTaskRunner)\n>>> def my_flow():\n>>>     ...\n
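A complementary sketch showing where the concurrency appears: tasks handed to the runner with .submit() can overlap while blocking on IO. The sleep below stands in for IO-bound work.

import time
from prefect import flow, task
from prefect.task_runners import ConcurrentTaskRunner

@task
def sleepy(n: int) -> int:
    time.sleep(1)   # blocking work; the runner moves it to a worker thread
    return n

@flow(task_runner=ConcurrentTaskRunner())
def concurrent_flow():
    # .submit() hands each call to the task runner so the runs can overlap
    futures = [sleepy.submit(i) for i in range(3)]
    return [future.result() for future in futures]

concurrent_flow()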
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/task_runners.py
class ConcurrentTaskRunner(BaseTaskRunner):\n\"\"\"\n    A concurrent task runner that allows tasks to switch when blocking on IO.\n    Synchronous tasks will be submitted to a thread pool maintained by `anyio`.\n\n    Example:\n        ```\n        Using a thread for concurrency:\n        >>> from prefect import flow\n        >>> from prefect.task_runners import ConcurrentTaskRunner\n        >>> @flow(task_runner=ConcurrentTaskRunner)\n        >>> def my_flow():\n        >>>     ...\n        ```\n    \"\"\"\n\n    def __init__(self):\n        # TODO: Consider adding `max_workers` support using anyio capacity limiters\n\n        # Runtime attributes\n        self._task_group: anyio.abc.TaskGroup = None\n        self._result_events: Dict[UUID, Event] = {}\n        self._results: Dict[UUID, Any] = {}\n        self._keys: Set[UUID] = set()\n\n        super().__init__()\n\n    @property\n    def concurrency_type(self) -> TaskConcurrencyType:\n        return TaskConcurrencyType.CONCURRENT\n\n    def duplicate(self):\n        return type(self)()\n\n    async def submit(\n        self,\n        key: UUID,\n        call: Callable[[], Awaitable[State[R]]],\n    ) -> None:\n        if not self._started:\n            raise RuntimeError(\n                \"The task runner must be started before submitting work.\"\n            )\n\n        if not self._task_group:\n            raise RuntimeError(\n                \"The concurrent task runner cannot be used to submit work after \"\n                \"serialization.\"\n            )\n\n        # Create an event to set on completion\n        self._result_events[key] = Event()\n\n        # Rely on the event loop for concurrency\n        self._task_group.start_soon(self._run_and_store_result, key, call)\n\n    async def wait(\n        self,\n        key: UUID,\n        timeout: float = None,\n    ) -> Optional[State]:\n        if not self._task_group:\n            raise RuntimeError(\n                \"The concurrent task runner cannot be used to wait for work after \"\n                \"serialization.\"\n            )\n\n        return await self._get_run_result(key, timeout)\n\n    async def _run_and_store_result(\n        self, key: UUID, call: Callable[[], Awaitable[State[R]]]\n    ):\n\"\"\"\n        Simple utility to store the orchestration result in memory on completion\n\n        Since this run is occuring on the main thread, we capture exceptions to prevent\n        task crashes from crashing the flow run.\n        \"\"\"\n        try:\n            result = await call()\n        except BaseException as exc:\n            result = await exception_to_crashed_state(exc)\n\n        self._results[key] = result\n        self._result_events[key].set()\n\n    async def _get_run_result(\n        self, key: UUID, timeout: float = None\n    ) -> Optional[State]:\n\"\"\"\n        Block until the run result has been populated.\n        \"\"\"\n        result = None  # retval on tiemout\n\n        # Note we do not use `asyncio.wrap_future` and instead use an `Event` to avoid\n        # stdlib behavior where the wrapped future is cancelled if the parent future is\n        # cancelled (as it would be during a timeout here)\n        with anyio.move_on_after(timeout):\n            await self._result_events[key].wait()\n            result = self._results[key]\n\n        return result  # timeout reached\n\n    async def _start(self, exit_stack: AsyncExitStack):\n\"\"\"\n        Start the process pool\n        \"\"\"\n        self._task_group = await 
exit_stack.enter_async_context(\n            anyio.create_task_group()\n        )\n\n    def __getstate__(self):\n\"\"\"\n        Allow the `ConcurrentTaskRunner` to be serialized by dropping the task group.\n        \"\"\"\n        data = self.__dict__.copy()\n        data.update({k: None for k in {\"_task_group\"}})\n        return data\n\n    def __setstate__(self, data: dict):\n\"\"\"\n        When deserialized, we will no longer have a reference to the task group.\n        \"\"\"\n        self.__dict__.update(data)\n        self._task_group = None\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/task-runners/#prefect.task_runners.SequentialTaskRunner","title":"SequentialTaskRunner","text":"

Bases: BaseTaskRunner

A simple task runner that executes calls as they are submitted.

If writing synchronous tasks, this runner will always execute tasks sequentially. If writing async tasks, this runner will execute tasks sequentially unless grouped using anyio.create_task_group or asyncio.gather.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/task_runners.py
class SequentialTaskRunner(BaseTaskRunner):\n\"\"\"\n    A simple task runner that executes calls as they are submitted.\n\n    If writing synchronous tasks, this runner will always execute tasks sequentially.\n    If writing async tasks, this runner will execute tasks sequentially unless grouped\n    using `anyio.create_task_group` or `asyncio.gather`.\n    \"\"\"\n\n    def __init__(self) -> None:\n        super().__init__()\n        self._results: Dict[str, State] = {}\n\n    @property\n    def concurrency_type(self) -> TaskConcurrencyType:\n        return TaskConcurrencyType.SEQUENTIAL\n\n    def duplicate(self):\n        return type(self)()\n\n    async def submit(\n        self,\n        key: UUID,\n        call: Callable[..., Awaitable[State[R]]],\n    ) -> None:\n        # Run the function immediately and store the result in memory\n        try:\n            result = await call()\n        except BaseException as exc:\n            result = await exception_to_crashed_state(exc)\n\n        self._results[key] = result\n\n    async def wait(self, key: UUID, timeout: float = None) -> Optional[State]:\n        return self._results[key]\n
","tags":["Python API","tasks","task runners","Dask","Ray"]},{"location":"api-ref/prefect/tasks/","title":"prefect.tasks","text":"","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks","title":"prefect.tasks","text":"

Module containing the base workflow task class and decorator - for most use cases, using the @task decorator is preferred.

","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task","title":"Task","text":"

Bases: Generic[P, R]

A Prefect task definition.

Note

We recommend using the @task decorator for most use-cases.

Wraps a function with an entrypoint to the Prefect engine. Calling this class within a flow function creates a new task run.

To preserve the input and output types, we use the generic type variables P and R for \"Parameters\" and \"Returns\" respectively.
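A brief sketch of the recommended pattern: configure the Task through the @task decorator and call it inside a flow to create task runs. The names and option values are illustrative.

from prefect import flow, task

@task(name="add-numbers", retries=2, retry_delay_seconds=1, tags={"math"})
def add(x: int, y: int) -> int:
    return x + y

@flow
def adder_flow() -> int:
    # Calling the task inside a flow creates a new task run
    return add(1, 2)

adder_flow()  # returns 3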

Parameters:

Name Type Description Default fn Callable[P, R]

The function defining the task.

required name str

An optional name for the task; if not provided, the name will be inferred from the given function.

None description str

An optional string description for the task.

None tags Iterable[str]

An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a prefect.tags context at task runtime.

None version str

An optional string specifying the version of this task definition

None cache_key_fn Callable[[TaskRunContext, Dict[str, Any]], Optional[str]]

An optional callable that, given the task run context and call parameters, generates a string key; if the key matches a previous completed state, that state result will be restored instead of running the task again.

None cache_expiration datetime.timedelta

An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire.

None task_run_name Optional[Union[Callable[[], str], str]]

An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string.

None retries Optional[int]

An optional number of times to retry on task run failure.

None retry_delay_seconds Optional[Union[float, int, List[float], Callable[[int], List[float]]]]

Optionally configures how long to wait before retrying the task after failure. This is only applicable if retries is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50.

None retry_jitter_factor Optional[float]

An optional factor that controls how much a retry can be jittered in order to avoid a \"thundering herd\".

None persist_result Optional[bool]

An optional toggle indicating whether the result of this task should be persisted to result storage. Defaults to None, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.

None result_storage Optional[ResultStorage]

An optional block to use to persist the result of this task. Defaults to the value set in the flow the task is called in.

None result_storage_key Optional[str]

An optional key to store the result in storage at when persisted. Defaults to a unique identifier.

None result_serializer Optional[ResultSerializer]

An optional serializer to use to serialize the result of this task for persistence. Defaults to the value set in the flow the task is called in.

None timeout_seconds Union[int, float]

An optional number of seconds indicating a maximum runtime for the task. If the task exceeds this runtime, it will be marked as failed.

None log_prints Optional[bool]

If set, print statements in the task will be redirected to the Prefect logger for the task run. Defaults to None, which indicates that the value from the flow should be used.

False refresh_cache Optional[bool]

If set, cached results for the cache key are not used. Defaults to None, which indicates that a cached result from a previous execution with matching cache key is used.

None on_failure Optional[List[Callable[[Task, TaskRun, State], None]]]

An optional list of callables to run when the task enters a failed state.

None on_completion Optional[List[Callable[[Task, TaskRun, State], None]]]

An optional list of callables to run when the task enters a completed state.

None Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/tasks.py
@PrefectObjectRegistry.register_instances\nclass Task(Generic[P, R]):\n\"\"\"\n    A Prefect task definition.\n\n    !!! note\n        We recommend using [the `@task` decorator][prefect.tasks.task] for most use-cases.\n\n    Wraps a function with an entrypoint to the Prefect engine. Calling this class within a flow function\n    creates a new task run.\n\n    To preserve the input and output types, we use the generic type variables P and R for \"Parameters\" and\n    \"Returns\" respectively.\n\n    Args:\n        fn: The function defining the task.\n        name: An optional name for the task; if not provided, the name will be inferred\n            from the given function.\n        description: An optional string description for the task.\n        tags: An optional set of tags to be associated with runs of this task. These\n            tags are combined with any tags defined by a `prefect.tags` context at\n            task runtime.\n        version: An optional string specifying the version of this task definition\n        cache_key_fn: An optional callable that, given the task run context and call\n            parameters, generates a string key; if the key matches a previous completed\n            state, that state result will be restored instead of running the task again.\n        cache_expiration: An optional amount of time indicating how long cached states\n            for this task should be restorable; if not provided, cached states will\n            never expire.\n        task_run_name: An optional name to distinguish runs of this task; this name can be provided\n            as a string template with the task's keyword arguments as variables,\n            or a function that returns a string.\n        retries: An optional number of times to retry on task run failure.\n        retry_delay_seconds: Optionally configures how long to wait before retrying the\n            task after failure. This is only applicable if `retries` is nonzero. This\n            setting can either be a number of seconds, a list of retry delays, or a\n            callable that, given the total number of retries, generates a list of retry\n            delays. If a number of seconds, that delay will be applied to all retries.\n            If a list, each retry will wait for the corresponding delay before retrying.\n            When passing a callable or a list, the number of configured retry delays\n            cannot exceed 50.\n        retry_jitter_factor: An optional factor that defines the factor to which a retry\n            can be jittered in order to avoid a \"thundering herd\".\n        persist_result: An optional toggle indicating whether the result of this task\n            should be persisted to result storage. Defaults to `None`, which indicates\n            that Prefect should choose whether the result should be persisted depending on\n            the features being used.\n        result_storage: An optional block to use to persist the result of this task.\n            Defaults to the value set in the flow the task is called in.\n        result_storage_key: An optional key to store the result in storage at when persisted.\n            Defaults to a unique identifier.\n        result_serializer: An optional serializer to use to serialize the result of this\n            task for persistence. Defaults to the value set in the flow the task is\n            called in.\n        timeout_seconds: An optional number of seconds indicating a maximum runtime for\n            the task. 
If the task exceeds this runtime, it will be marked as failed.\n        log_prints: If set, `print` statements in the task will be redirected to the\n            Prefect logger for the task run. Defaults to `None`, which indicates\n            that the value from the flow should be used.\n        refresh_cache: If set, cached results for the cache key are not used.\n            Defaults to `None`, which indicates that a cached result from a previous\n            execution with matching cache key is used.\n        on_failure: An optional list of callables to run when the task enters a failed state.\n        on_completion: An optional list of callables to run when the task enters a completed state.\n    \"\"\"\n\n    # NOTE: These parameters (types, defaults, and docstrings) should be duplicated\n    #       exactly in the @task decorator\n    def __init__(\n        self,\n        fn: Callable[P, R],\n        name: str = None,\n        description: str = None,\n        tags: Iterable[str] = None,\n        version: str = None,\n        cache_key_fn: Callable[\n            [\"TaskRunContext\", Dict[str, Any]], Optional[str]\n        ] = None,\n        cache_expiration: datetime.timedelta = None,\n        task_run_name: Optional[Union[Callable[[], str], str]] = None,\n        retries: Optional[int] = None,\n        retry_delay_seconds: Optional[\n            Union[\n                float,\n                int,\n                List[float],\n                Callable[[int], List[float]],\n            ]\n        ] = None,\n        retry_jitter_factor: Optional[float] = None,\n        persist_result: Optional[bool] = None,\n        result_storage: Optional[ResultStorage] = None,\n        result_serializer: Optional[ResultSerializer] = None,\n        result_storage_key: Optional[str] = None,\n        cache_result_in_memory: bool = True,\n        timeout_seconds: Union[int, float] = None,\n        log_prints: Optional[bool] = False,\n        refresh_cache: Optional[bool] = None,\n        on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n        on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n    ):\n        # Validate if hook passed is list and contains callables\n        hook_categories = [on_completion, on_failure]\n        hook_names = [\"on_completion\", \"on_failure\"]\n        for hooks, hook_name in zip(hook_categories, hook_names):\n            if hooks is not None:\n                if not hooks:\n                    raise ValueError(f\"Empty list passed for '{hook_name}'\")\n                try:\n                    hooks = list(hooks)\n                except TypeError:\n                    raise TypeError(\n                        f\"Expected iterable for '{hook_name}'; got\"\n                        f\" {type(hooks).__name__} instead. Please provide a list of\"\n                        f\" hooks to '{hook_name}':\\n\\n\"\n                        f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n                        \" my_flow():\\n\\tpass\"\n                    )\n\n                for hook in hooks:\n                    if not callable(hook):\n                        raise TypeError(\n                            f\"Expected callables in '{hook_name}'; got\"\n                            f\" {type(hook).__name__} instead. 
Please provide a list of\"\n                            f\" hooks to '{hook_name}':\\n\\n\"\n                            f\"@flow({hook_name}=[hook1, hook2])\\ndef\"\n                            \" my_flow():\\n\\tpass\"\n                        )\n\n        if not callable(fn):\n            raise TypeError(\"'fn' must be callable\")\n\n        self.description = description or inspect.getdoc(fn)\n        update_wrapper(self, fn)\n        self.fn = fn\n        self.isasync = inspect.iscoroutinefunction(self.fn)\n\n        if not name:\n            if not hasattr(self.fn, \"__name__\"):\n                self.name = type(self.fn).__name__\n            else:\n                self.name = self.fn.__name__\n        else:\n            self.name = name\n\n        if task_run_name is not None:\n            if not isinstance(task_run_name, str) and not callable(task_run_name):\n                raise TypeError(\n                    \"Expected string or callable for 'task_run_name'; got\"\n                    f\" {type(task_run_name).__name__} instead.\"\n                )\n        self.task_run_name = task_run_name\n\n        self.version = version\n        self.log_prints = log_prints\n\n        raise_for_reserved_arguments(self.fn, [\"return_state\", \"wait_for\"])\n\n        self.tags = set(tags if tags else [])\n\n        if not hasattr(self.fn, \"__qualname__\"):\n            self.task_key = to_qualified_name(type(self.fn))\n        else:\n            self.task_key = to_qualified_name(self.fn)\n\n        self.cache_key_fn = cache_key_fn\n        self.cache_expiration = cache_expiration\n        self.refresh_cache = refresh_cache\n\n        # TaskRunPolicy settings\n        # TODO: We can instantiate a `TaskRunPolicy` and add Pydantic bound checks to\n        #       validate that the user passes positive numbers here\n\n        self.retries = (\n            retries if retries is not None else PREFECT_TASK_DEFAULT_RETRIES.value()\n        )\n        if retry_delay_seconds is None:\n            retry_delay_seconds = PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS.value()\n\n        if callable(retry_delay_seconds):\n            self.retry_delay_seconds = retry_delay_seconds(retries)\n        else:\n            self.retry_delay_seconds = retry_delay_seconds\n\n        if isinstance(self.retry_delay_seconds, list) and (\n            len(self.retry_delay_seconds) > 50\n        ):\n            raise ValueError(\"Can not configure more than 50 retry delays per task.\")\n\n        if retry_jitter_factor is not None and retry_jitter_factor < 0:\n            raise ValueError(\"`retry_jitter_factor` must be >= 0.\")\n\n        self.retry_jitter_factor = retry_jitter_factor\n\n        self.persist_result = persist_result\n        self.result_storage = result_storage\n        self.result_serializer = result_serializer\n        self.result_storage_key = result_storage_key\n        self.cache_result_in_memory = cache_result_in_memory\n        self.timeout_seconds = float(timeout_seconds) if timeout_seconds else None\n        # Warn if this task's `name` conflicts with another task while having a\n        # different function. 
This is to detect the case where two or more tasks\n        # share a name or are lambdas, which should result in a warning, and to\n        # differentiate it from the case where the task was 'copied' via\n        # `with_options`, which should not result in a warning.\n        registry = PrefectObjectRegistry.get()\n\n        if registry and any(\n            other\n            for other in registry.get_instances(Task)\n            if other.name == self.name and id(other.fn) != id(self.fn)\n        ):\n            try:\n                file = inspect.getsourcefile(self.fn)\n                line_number = inspect.getsourcelines(self.fn)[1]\n            except TypeError:\n                file = \"unknown\"\n                line_number = \"unknown\"\n\n            warnings.warn(\n                f\"A task named {self.name!r} and defined at '{file}:{line_number}' \"\n                \"conflicts with another task. Consider specifying a unique `name` \"\n                \"parameter in the task definition:\\n\\n \"\n                \"`@task(name='my_unique_name', ...)`\"\n            )\n        self.on_completion = on_completion\n        self.on_failure = on_failure\n\n    def with_options(\n        self,\n        *,\n        name: str = None,\n        description: str = None,\n        tags: Iterable[str] = None,\n        cache_key_fn: Callable[\n            [\"TaskRunContext\", Dict[str, Any]], Optional[str]\n        ] = None,\n        task_run_name: Optional[Union[Callable[[], str], str]] = None,\n        cache_expiration: datetime.timedelta = None,\n        retries: Optional[int] = NotSet,\n        retry_delay_seconds: Union[\n            float,\n            int,\n            List[float],\n            Callable[[int], List[float]],\n        ] = NotSet,\n        retry_jitter_factor: Optional[float] = NotSet,\n        persist_result: Optional[bool] = NotSet,\n        result_storage: Optional[ResultStorage] = NotSet,\n        result_serializer: Optional[ResultSerializer] = NotSet,\n        result_storage_key: Optional[str] = NotSet,\n        cache_result_in_memory: Optional[bool] = None,\n        timeout_seconds: Union[int, float] = None,\n        log_prints: Optional[bool] = NotSet,\n        refresh_cache: Optional[bool] = NotSet,\n        on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n        on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n    ):\n\"\"\"\n        Create a new task from the current object, updating provided options.\n\n        Args:\n            name: A new name for the task.\n            description: A new description for the task.\n            tags: A new set of tags for the task. If given, existing tags are ignored,\n                not merged.\n            cache_key_fn: A new cache key function for the task.\n            cache_expiration: A new cache expiration time for the task.\n            task_run_name: An optional name to distinguish runs of this task; this name can be provided\n                as a string template with the task's keyword arguments as variables,\n                or a function that returns a string.\n            retries: A new number of times to retry on task run failure.\n            retry_delay_seconds: Optionally configures how long to wait before retrying\n                the task after failure. 
This is only applicable if `retries` is nonzero.\n                This setting can either be a number of seconds, a list of retry delays,\n                or a callable that, given the total number of retries, generates a list\n                of retry delays. If a number of seconds, that delay will be applied to\n                all retries. If a list, each retry will wait for the corresponding delay\n                before retrying. When passing a callable or a list, the number of\n                configured retry delays cannot exceed 50.\n            retry_jitter_factor: An optional factor that defines the factor to which a\n                retry can be jittered in order to avoid a \"thundering herd\".\n            persist_result: A new option for enabling or disabling result persistence.\n            result_storage: A new storage type to use for results.\n            result_serializer: A new serializer to use for results.\n            result_storage_key: A new key for the persisted result to be stored at.\n            timeout_seconds: A new maximum time for the task to complete in seconds.\n            log_prints: A new option for enabling or disabling redirection of `print` statements.\n            refresh_cache: A new option for enabling or disabling cache refresh.\n            on_completion: A new list of callables to run when the task enters a completed state.\n            on_failure: A new list of callables to run when the task enters a failed state.\n\n        Returns:\n            A new `Task` instance.\n\n        Examples:\n\n            Create a new task from an existing task and update the name\n\n            >>> @task(name=\"My task\")\n            >>> def my_task():\n            >>>     return 1\n            >>>\n            >>> new_task = my_task.with_options(name=\"My new task\")\n\n            Create a new task from an existing task and update the retry settings\n\n            >>> from random import randint\n            >>>\n            >>> @task(retries=1, retry_delay_seconds=5)\n            >>> def my_task():\n            >>>     x = randint(0, 5)\n            >>>     if x >= 3:  # Make a task that fails sometimes\n            >>>         raise ValueError(\"Retry me please!\")\n            >>>     return x\n            >>>\n            >>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)\n\n            Use a task with updated options within a flow\n\n            >>> @task(name=\"My task\")\n            >>> def my_task():\n            >>>     return 1\n            >>>\n            >>> @flow\n            >>> my_flow():\n            >>>     new_task = my_task.with_options(name=\"My new task\")\n            >>>     new_task()\n        \"\"\"\n        return Task(\n            fn=self.fn,\n            name=name or self.name,\n            description=description or self.description,\n            tags=tags or copy(self.tags),\n            cache_key_fn=cache_key_fn or self.cache_key_fn,\n            cache_expiration=cache_expiration or self.cache_expiration,\n            task_run_name=task_run_name,\n            retries=retries if retries is not NotSet else self.retries,\n            retry_delay_seconds=(\n                retry_delay_seconds\n                if retry_delay_seconds is not NotSet\n                else self.retry_delay_seconds\n            ),\n            retry_jitter_factor=(\n                retry_jitter_factor\n                if retry_jitter_factor is not NotSet\n                else self.retry_jitter_factor\n            ),\n            
persist_result=(\n                persist_result if persist_result is not NotSet else self.persist_result\n            ),\n            result_storage=(\n                result_storage if result_storage is not NotSet else self.result_storage\n            ),\n            result_storage_key=(\n                result_storage_key\n                if result_storage_key is not NotSet\n                else self.result_storage_key\n            ),\n            result_serializer=(\n                result_serializer\n                if result_serializer is not NotSet\n                else self.result_serializer\n            ),\n            cache_result_in_memory=(\n                cache_result_in_memory\n                if cache_result_in_memory is not None\n                else self.cache_result_in_memory\n            ),\n            timeout_seconds=(\n                timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n            ),\n            log_prints=(log_prints if log_prints is not NotSet else self.log_prints),\n            refresh_cache=(\n                refresh_cache if refresh_cache is not NotSet else self.refresh_cache\n            ),\n            on_completion=on_completion or self.on_completion,\n            on_failure=on_failure or self.on_failure,\n        )\n\n    @overload\n    def __call__(\n        self: \"Task[P, NoReturn]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> None:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def __call__(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> T:\n        ...\n\n    @overload\n    def __call__(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        return_state: Literal[True],\n        **kwargs: P.kwargs,\n    ) -> State[T]:\n        ...\n\n    def __call__(\n        self,\n        *args: P.args,\n        return_state: bool = False,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: P.kwargs,\n    ):\n\"\"\"\n        Run the task and return the result. 
If `return_state` is True returns\n        the result is wrapped in a Prefect State which provides error handling.\n        \"\"\"\n        from prefect.engine import enter_task_run_engine\n        from prefect.task_runners import SequentialTaskRunner\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n\n        return_type = \"state\" if return_state else \"result\"\n\n        return enter_task_run_engine(\n            self,\n            parameters=parameters,\n            wait_for=wait_for,\n            task_runner=SequentialTaskRunner(),\n            return_type=return_type,\n            mapped=False,\n        )\n\n    @overload\n    def _run(\n        self: \"Task[P, NoReturn]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> PrefectFuture[None, Sync]:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def _run(\n        self: \"Task[P, Coroutine[Any, Any, T]]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> Awaitable[State[T]]:\n        ...\n\n    @overload\n    def _run(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> State[T]:\n        ...\n\n    def _run(\n        self,\n        *args: P.args,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: P.kwargs,\n    ) -> Union[State, Awaitable[State]]:\n\"\"\"\n        Run the task and return the final state.\n        \"\"\"\n        from prefect.engine import enter_task_run_engine\n        from prefect.task_runners import SequentialTaskRunner\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n\n        return enter_task_run_engine(\n            self,\n            parameters=parameters,\n            wait_for=wait_for,\n            return_type=\"state\",\n            task_runner=SequentialTaskRunner(),\n            mapped=False,\n        )\n\n    @overload\n    def submit(\n        self: \"Task[P, NoReturn]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> PrefectFuture[None, Sync]:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def submit(\n        self: \"Task[P, Coroutine[Any, Any, T]]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> Awaitable[PrefectFuture[T, Async]]:\n        ...\n\n    @overload\n    def submit(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> PrefectFuture[T, Sync]:\n        ...\n\n    @overload\n    def submit(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        return_state: Literal[True],\n        **kwargs: P.kwargs,\n    ) -> State[T]:\n        ...\n\n    def submit(\n        self,\n        *args: Any,\n        return_state: bool = False,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: Any,\n    ) -> Union[PrefectFuture, Awaitable[PrefectFuture]]:\n\"\"\"\n        Submit a run of the task to a worker.\n\n        Must be called within a flow function. If writing an async task, this call must\n        be awaited.\n\n        Will create a new task run in the backing API and submit the task to the flow's\n        task runner. 
This call only blocks execution while the task is being submitted,\n        once it is submitted, the flow function will continue executing. However, note\n        that the `SequentialTaskRunner` does not implement parallel execution for sync tasks\n        and they are fully resolved on submission.\n\n        Args:\n            *args: Arguments to run the task with\n            return_state: Return the result of the flow run wrapped in a\n                Prefect State.\n            wait_for: Upstream task futures to wait for before starting the task\n            **kwargs: Keyword arguments to run the task with\n\n        Returns:\n            If `return_state` is False a future allowing asynchronous access to\n                the state of the task\n            If `return_state` is True a future wrapped in a Prefect State allowing asynchronous access to\n                the state of the task\n\n        Examples:\n\n            Define a task\n\n            >>> from prefect import task\n            >>> @task\n            >>> def my_task():\n            >>>     return \"hello\"\n\n            Run a task in a flow\n\n            >>> from prefect import flow\n            >>> @flow\n            >>> def my_flow():\n            >>>     my_task.submit()\n\n            Wait for a task to finish\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     my_task.submit().wait()\n\n            Use the result from a task in a flow\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     print(my_task.submit().result())\n            >>>\n            >>> my_flow()\n            hello\n\n            Run an async task in an async flow\n\n            >>> @task\n            >>> async def my_async_task():\n            >>>     pass\n            >>>\n            >>> @flow\n            >>> async def my_flow():\n            >>>     await my_async_task.submit()\n\n            Run a sync task in an async flow\n\n            >>> @flow\n            >>> async def my_flow():\n            >>>     my_task.submit()\n\n            Enforce ordering between tasks that do not exchange data\n            >>> @task\n            >>> def task_1():\n            >>>     pass\n            >>>\n            >>> @task\n            >>> def task_2():\n            >>>     pass\n            >>>\n            >>> @flow\n            >>> def my_flow():\n            >>>     x = task_1.submit()\n            >>>\n            >>>     # task 2 will wait for task_1 to complete\n            >>>     y = task_2.submit(wait_for=[x])\n\n        \"\"\"\n\n        from prefect.engine import enter_task_run_engine\n\n        # Convert the call args/kwargs to a parameter dict\n        parameters = get_call_parameters(self.fn, args, kwargs)\n        return_type = \"state\" if return_state else \"future\"\n\n        return enter_task_run_engine(\n            self,\n            parameters=parameters,\n            wait_for=wait_for,\n            return_type=return_type,\n            task_runner=None,  # Use the flow's task runner\n            mapped=False,\n        )\n\n    @overload\n    def map(\n        self: \"Task[P, NoReturn]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> List[PrefectFuture[None, Sync]]:\n        # `NoReturn` matches if a type can't be inferred for the function which stops a\n        # sync function from matching the `Coroutine` overload\n        ...\n\n    @overload\n    def map(\n        self: \"Task[P, Coroutine[Any, Any, T]]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n 
   ) -> Awaitable[List[PrefectFuture[T, Async]]]:\n        ...\n\n    @overload\n    def map(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        **kwargs: P.kwargs,\n    ) -> List[PrefectFuture[T, Sync]]:\n        ...\n\n    @overload\n    def map(\n        self: \"Task[P, T]\",\n        *args: P.args,\n        return_state: Literal[True],\n        **kwargs: P.kwargs,\n    ) -> List[State[T]]:\n        ...\n\n    def map(\n        self,\n        *args: Any,\n        return_state: bool = False,\n        wait_for: Optional[Iterable[PrefectFuture]] = None,\n        **kwargs: Any,\n    ) -> Any:\n\"\"\"\n        Submit a mapped run of the task to a worker.\n\n        Must be called within a flow function. If writing an async task, this\n        call must be awaited.\n\n        Must be called with at least one iterable and all iterables must be\n        the same length. Any arguments that are not iterable will be treated as\n        a static value and each task run will recieve the same value.\n\n        Will create as many task runs as the length of the iterable(s) in the\n        backing API and submit the task runs to the flow's task runner. This\n        call blocks if given a future as input while the future is resolved. It\n        also blocks while the tasks are being submitted, once they are\n        submitted, the flow function will continue executing. However, note\n        that the `SequentialTaskRunner` does not implement parallel execution\n        for sync tasks and they are fully resolved on submission.\n\n        Args:\n            *args: Iterable and static arguments to run the tasks with\n            return_state: Return a list of Prefect States that wrap the results\n                of each task run.\n            wait_for: Upstream task futures to wait for before starting the\n                task\n            **kwargs: Keyword iterable arguments to run the task with\n\n        Returns:\n            A list of futures allowing asynchronous access to the state of the\n            tasks\n\n        Examples:\n\n            Define a task\n\n            >>> from prefect import task\n            >>> @task\n            >>> def my_task(x):\n            >>>     return x + 1\n\n            Create mapped tasks\n\n            >>> from prefect import flow\n            >>> @flow\n            >>> def my_flow():\n            >>>     my_task.map([1, 2, 3])\n\n            Wait for all mapped tasks to finish\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     futures = my_task.map([1, 2, 3])\n            >>>     for future in futures:\n            >>>         future.wait()\n            >>>     # Now all of the mapped tasks have finished\n            >>>     my_task(10)\n\n            Use the result from mapped tasks in a flow\n\n            >>> @flow\n            >>> def my_flow():\n            >>>     futures = my_task.map([1, 2, 3])\n            >>>     for future in futures:\n            >>>         print(future.result())\n            >>> my_flow()\n            2\n            3\n            4\n\n            Enforce ordering between tasks that do not exchange data\n            >>> @task\n            >>> def task_1(x):\n            >>>     pass\n            >>>\n            >>> @task\n            >>> def task_2(y):\n            >>>     pass\n            >>>\n            >>> @flow\n            >>> def my_flow():\n            >>>     x = task_1.submit()\n            >>>\n            >>>     # task 2 will wait for task_1 to complete\n            >>>     y = 
task_2.map([1, 2, 3], wait_for=[x])\n\n            Use a non-iterable input as a constant across mapped tasks\n            >>> @task\n            >>> def display(prefix, item):\n            >>>    print(prefix, item)\n            >>>\n            >>> @flow\n            >>> def my_flow():\n            >>>     display.map(\"Check it out: \", [1, 2, 3])\n            >>>\n            >>> my_flow()\n            Check it out: 1\n            Check it out: 2\n            Check it out: 3\n\n            Use `unmapped` to treat an iterable argument as a constant\n            >>> from prefect import unmapped\n            >>>\n            >>> @task\n            >>> def add_n_to_items(items, n):\n            >>>     return [item + n for item in items]\n            >>>\n            >>> @flow\n            >>> def my_flow():\n            >>>     return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])\n            >>>\n            >>> my_flow()\n            [[11, 21], [12, 22], [13, 23]]\n        \"\"\"\n\n        from prefect.engine import enter_task_run_engine\n\n        # Convert the call args/kwargs to a parameter dict; do not apply defaults\n        # since they should not be mapped over\n        parameters = get_call_parameters(self.fn, args, kwargs, apply_defaults=False)\n        return_type = \"state\" if return_state else \"future\"\n\n        return enter_task_run_engine(\n            self,\n            parameters=parameters,\n            wait_for=wait_for,\n            return_type=return_type,\n            task_runner=None,\n            mapped=True,\n        )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.map","title":"map","text":"

Submit a mapped run of the task to a worker.

Must be called within a flow function. If writing an async task, this call must be awaited.

Must be called with at least one iterable and all iterables must be the same length. Any arguments that are not iterable will be treated as a static value and each task run will receive the same value.

Will create as many task runs as the length of the iterable(s) in the backing API and submit the task runs to the flow's task runner. If a future is given as input, this call blocks while that future is resolved. It also blocks while the task runs are being submitted; once they are submitted, the flow function will continue executing. However, note that the SequentialTaskRunner does not implement parallel execution for sync tasks, so they are fully resolved on submission.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `*args` | `Any` | Iterable and static arguments to run the tasks with | `()` |
| `return_state` | `bool` | Return a list of Prefect States that wrap the results of each task run. | `False` |
| `wait_for` | `Optional[Iterable[PrefectFuture]]` | Upstream task futures to wait for before starting the task | `None` |
| `**kwargs` | `Any` | Keyword iterable arguments to run the task with | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `Any` | A list of futures allowing asynchronous access to the state of the tasks |

Examples:

Define a task

>>> from prefect import task\n>>> @task\n>>> def my_task(x):\n>>>     return x + 1\n

Create mapped tasks

>>> from prefect import flow\n>>> @flow\n>>> def my_flow():\n>>>     my_task.map([1, 2, 3])\n

Wait for all mapped tasks to finish

>>> @flow\n>>> def my_flow():\n>>>     futures = my_task.map([1, 2, 3])\n>>>     for future in futures:\n>>>         future.wait()\n>>>     # Now all of the mapped tasks have finished\n>>>     my_task(10)\n

Use the result from mapped tasks in a flow

>>> @flow\n>>> def my_flow():\n>>>     futures = my_task.map([1, 2, 3])\n>>>     for future in futures:\n>>>         print(future.result())\n>>> my_flow()\n2\n3\n4\n

Enforce ordering between tasks that do not exchange data

>>> @task\n>>> def task_1():\n>>>     pass\n>>>\n>>> @task\n>>> def task_2(y):\n>>>     pass\n>>>\n>>> @flow\n>>> def my_flow():\n>>>     x = task_1.submit()\n>>>\n>>>     # task 2 will wait for task_1 to complete\n>>>     y = task_2.map([1, 2, 3], wait_for=[x])\n

Use a non-iterable input as a constant across mapped tasks

>>> @task\n>>> def display(prefix, item):\n>>>     print(prefix, item)\n>>>\n>>> @flow\n>>> def my_flow():\n>>>     display.map(\"Check it out: \", [1, 2, 3])\n>>>\n>>> my_flow()\nCheck it out: 1\nCheck it out: 2\nCheck it out: 3\n

Use unmapped to treat an iterable argument as a constant

>>> from prefect import unmapped\n>>>\n>>> @task\n>>> def add_n_to_items(items, n):\n>>>     return [item + n for item in items]\n>>>\n>>> @flow\n>>> def my_flow():\n>>>     return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])\n>>>\n>>> my_flow()\n[[11, 21], [12, 22], [13, 23]]\n
Source code in `src/prefect/tasks.py`
def map(\n    self,\n    *args: Any,\n    return_state: bool = False,\n    wait_for: Optional[Iterable[PrefectFuture]] = None,\n    **kwargs: Any,\n) -> Any:\n\"\"\"\n    Submit a mapped run of the task to a worker.\n\n    Must be called within a flow function. If writing an async task, this\n    call must be awaited.\n\n    Must be called with at least one iterable and all iterables must be\n    the same length. Any arguments that are not iterable will be treated as\n    a static value and each task run will recieve the same value.\n\n    Will create as many task runs as the length of the iterable(s) in the\n    backing API and submit the task runs to the flow's task runner. This\n    call blocks if given a future as input while the future is resolved. It\n    also blocks while the tasks are being submitted, once they are\n    submitted, the flow function will continue executing. However, note\n    that the `SequentialTaskRunner` does not implement parallel execution\n    for sync tasks and they are fully resolved on submission.\n\n    Args:\n        *args: Iterable and static arguments to run the tasks with\n        return_state: Return a list of Prefect States that wrap the results\n            of each task run.\n        wait_for: Upstream task futures to wait for before starting the\n            task\n        **kwargs: Keyword iterable arguments to run the task with\n\n    Returns:\n        A list of futures allowing asynchronous access to the state of the\n        tasks\n\n    Examples:\n\n        Define a task\n\n        >>> from prefect import task\n        >>> @task\n        >>> def my_task(x):\n        >>>     return x + 1\n\n        Create mapped tasks\n\n        >>> from prefect import flow\n        >>> @flow\n        >>> def my_flow():\n        >>>     my_task.map([1, 2, 3])\n\n        Wait for all mapped tasks to finish\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     futures = my_task.map([1, 2, 3])\n        >>>     for future in futures:\n        >>>         future.wait()\n        >>>     # Now all of the mapped tasks have finished\n        >>>     my_task(10)\n\n        Use the result from mapped tasks in a flow\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     futures = my_task.map([1, 2, 3])\n        >>>     for future in futures:\n        >>>         print(future.result())\n        >>> my_flow()\n        2\n        3\n        4\n\n        Enforce ordering between tasks that do not exchange data\n        >>> @task\n        >>> def task_1(x):\n        >>>     pass\n        >>>\n        >>> @task\n        >>> def task_2(y):\n        >>>     pass\n        >>>\n        >>> @flow\n        >>> def my_flow():\n        >>>     x = task_1.submit()\n        >>>\n        >>>     # task 2 will wait for task_1 to complete\n        >>>     y = task_2.map([1, 2, 3], wait_for=[x])\n\n        Use a non-iterable input as a constant across mapped tasks\n        >>> @task\n        >>> def display(prefix, item):\n        >>>    print(prefix, item)\n        >>>\n        >>> @flow\n        >>> def my_flow():\n        >>>     display.map(\"Check it out: \", [1, 2, 3])\n        >>>\n        >>> my_flow()\n        Check it out: 1\n        Check it out: 2\n        Check it out: 3\n\n        Use `unmapped` to treat an iterable argument as a constant\n        >>> from prefect import unmapped\n        >>>\n        >>> @task\n        >>> def add_n_to_items(items, n):\n        >>>     return [item + n for item in items]\n        >>>\n        >>> @flow\n        >>> def 
my_flow():\n        >>>     return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])\n        >>>\n        >>> my_flow()\n        [[11, 21], [12, 22], [13, 23]]\n    \"\"\"\n\n    from prefect.engine import enter_task_run_engine\n\n    # Convert the call args/kwargs to a parameter dict; do not apply defaults\n    # since they should not be mapped over\n    parameters = get_call_parameters(self.fn, args, kwargs, apply_defaults=False)\n    return_type = \"state\" if return_state else \"future\"\n\n    return enter_task_run_engine(\n        self,\n        parameters=parameters,\n        wait_for=wait_for,\n        return_type=return_type,\n        task_runner=None,\n        mapped=True,\n    )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.submit","title":"submit","text":"

Submit a run of the task to a worker.

Must be called within a flow function. If writing an async task, this call must be awaited.

Will create a new task run in the backing API and submit the task to the flow's task runner. This call only blocks execution while the task is being submitted; once it is submitted, the flow function will continue executing. However, note that the SequentialTaskRunner does not implement parallel execution for sync tasks, so they are fully resolved on submission.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `*args` | `Any` | Arguments to run the task with | `()` |
| `return_state` | `bool` | Return the result of the task run wrapped in a Prefect State. | `False` |
| `wait_for` | `Optional[Iterable[PrefectFuture]]` | Upstream task futures to wait for before starting the task | `None` |
| `**kwargs` | `Any` | Keyword arguments to run the task with | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `Union[PrefectFuture, Awaitable[PrefectFuture]]` | If `return_state` is False, a future allowing asynchronous access to the state of the task |
| `Union[PrefectFuture, Awaitable[PrefectFuture]]` | If `return_state` is True, a future wrapped in a Prefect State allowing asynchronous access to the state of the task |

Examples:

Define a task

>>> from prefect import task\n>>> @task\n>>> def my_task():\n>>>     return \"hello\"\n

Run a task in a flow

>>> from prefect import flow\n>>> @flow\n>>> def my_flow():\n>>>     my_task.submit()\n

Wait for a task to finish

>>> @flow\n>>> def my_flow():\n>>>     my_task.submit().wait()\n

Use the result from a task in a flow

>>> @flow\n>>> def my_flow():\n>>>     print(my_task.submit().result())\n>>>\n>>> my_flow()\nhello\n

Run an async task in an async flow

>>> @task\n>>> async def my_async_task():\n>>>     pass\n>>>\n>>> @flow\n>>> async def my_flow():\n>>>     await my_async_task.submit()\n

Run a sync task in an async flow

>>> @flow\n>>> async def my_flow():\n>>>     my_task.submit()\n

Enforce ordering between tasks that do not exchange data

>>> @task\n>>> def task_1():\n>>>     pass\n>>>\n>>> @task\n>>> def task_2():\n>>>     pass\n>>>\n>>> @flow\n>>> def my_flow():\n>>>     x = task_1.submit()\n>>>\n>>>     # task 2 will wait for task_1 to complete\n>>>     y = task_2.submit(wait_for=[x])\n
Source code in `src/prefect/tasks.py`
def submit(\n    self,\n    *args: Any,\n    return_state: bool = False,\n    wait_for: Optional[Iterable[PrefectFuture]] = None,\n    **kwargs: Any,\n) -> Union[PrefectFuture, Awaitable[PrefectFuture]]:\n\"\"\"\n    Submit a run of the task to a worker.\n\n    Must be called within a flow function. If writing an async task, this call must\n    be awaited.\n\n    Will create a new task run in the backing API and submit the task to the flow's\n    task runner. This call only blocks execution while the task is being submitted,\n    once it is submitted, the flow function will continue executing. However, note\n    that the `SequentialTaskRunner` does not implement parallel execution for sync tasks\n    and they are fully resolved on submission.\n\n    Args:\n        *args: Arguments to run the task with\n        return_state: Return the result of the flow run wrapped in a\n            Prefect State.\n        wait_for: Upstream task futures to wait for before starting the task\n        **kwargs: Keyword arguments to run the task with\n\n    Returns:\n        If `return_state` is False a future allowing asynchronous access to\n            the state of the task\n        If `return_state` is True a future wrapped in a Prefect State allowing asynchronous access to\n            the state of the task\n\n    Examples:\n\n        Define a task\n\n        >>> from prefect import task\n        >>> @task\n        >>> def my_task():\n        >>>     return \"hello\"\n\n        Run a task in a flow\n\n        >>> from prefect import flow\n        >>> @flow\n        >>> def my_flow():\n        >>>     my_task.submit()\n\n        Wait for a task to finish\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     my_task.submit().wait()\n\n        Use the result from a task in a flow\n\n        >>> @flow\n        >>> def my_flow():\n        >>>     print(my_task.submit().result())\n        >>>\n        >>> my_flow()\n        hello\n\n        Run an async task in an async flow\n\n        >>> @task\n        >>> async def my_async_task():\n        >>>     pass\n        >>>\n        >>> @flow\n        >>> async def my_flow():\n        >>>     await my_async_task.submit()\n\n        Run a sync task in an async flow\n\n        >>> @flow\n        >>> async def my_flow():\n        >>>     my_task.submit()\n\n        Enforce ordering between tasks that do not exchange data\n        >>> @task\n        >>> def task_1():\n        >>>     pass\n        >>>\n        >>> @task\n        >>> def task_2():\n        >>>     pass\n        >>>\n        >>> @flow\n        >>> def my_flow():\n        >>>     x = task_1.submit()\n        >>>\n        >>>     # task 2 will wait for task_1 to complete\n        >>>     y = task_2.submit(wait_for=[x])\n\n    \"\"\"\n\n    from prefect.engine import enter_task_run_engine\n\n    # Convert the call args/kwargs to a parameter dict\n    parameters = get_call_parameters(self.fn, args, kwargs)\n    return_type = \"state\" if return_state else \"future\"\n\n    return enter_task_run_engine(\n        self,\n        parameters=parameters,\n        wait_for=wait_for,\n        return_type=return_type,\n        task_runner=None,  # Use the flow's task runner\n        mapped=False,\n    )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.Task.with_options","title":"with_options","text":"

Create a new task from the current object, updating provided options.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | A new name for the task. | `None` |
| `description` | `str` | A new description for the task. | `None` |
| `tags` | `Iterable[str]` | A new set of tags for the task. If given, existing tags are ignored, not merged. | `None` |
| `cache_key_fn` | `Callable[[TaskRunContext, Dict[str, Any]], Optional[str]]` | A new cache key function for the task. | `None` |
| `cache_expiration` | `datetime.timedelta` | A new cache expiration time for the task. | `None` |
| `task_run_name` | `Optional[Union[Callable[[], str], str]]` | An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string. | `None` |
| `retries` | `Optional[int]` | A new number of times to retry on task run failure. | `NotSet` |
| `retry_delay_seconds` | `Union[float, int, List[float], Callable[[int], List[float]]]` | Optionally configures how long to wait before retrying the task after failure. This is only applicable if `retries` is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50. | `NotSet` |
| `retry_jitter_factor` | `Optional[float]` | An optional factor by which a retry delay can be jittered in order to avoid a "thundering herd". | `NotSet` |
| `persist_result` | `Optional[bool]` | A new option for enabling or disabling result persistence. | `NotSet` |
| `result_storage` | `Optional[ResultStorage]` | A new storage type to use for results. | `NotSet` |
| `result_serializer` | `Optional[ResultSerializer]` | A new serializer to use for results. | `NotSet` |
| `result_storage_key` | `Optional[str]` | A new key for the persisted result to be stored at. | `NotSet` |
| `timeout_seconds` | `Union[int, float]` | A new maximum time for the task to complete in seconds. | `None` |
| `log_prints` | `Optional[bool]` | A new option for enabling or disabling redirection of `print` statements. | `NotSet` |
| `refresh_cache` | `Optional[bool]` | A new option for enabling or disabling cache refresh. | `NotSet` |
| `on_completion` | `Optional[List[Callable[[Task, TaskRun, State], None]]]` | A new list of callables to run when the task enters a completed state. | `None` |
| `on_failure` | `Optional[List[Callable[[Task, TaskRun, State], None]]]` | A new list of callables to run when the task enters a failed state. | `None` |

Returns:

A new `Task` instance.

Examples:

Create a new task from an existing task and update the name

>>> @task(name=\"My task\")\n>>> def my_task():\n>>>     return 1\n>>>\n>>> new_task = my_task.with_options(name=\"My new task\")\n

Create a new task from an existing task and update the retry settings

>>> from random import randint\n>>>\n>>> @task(retries=1, retry_delay_seconds=5)\n>>> def my_task():\n>>>     x = randint(0, 5)\n>>>     if x >= 3:  # Make a task that fails sometimes\n>>>         raise ValueError(\"Retry me please!\")\n>>>     return x\n>>>\n>>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)\n

Use a task with updated options within a flow

>>> @task(name=\"My task\")\n>>> def my_task():\n>>>     return 1\n>>>\n>>> @flow\n>>> my_flow():\n>>>     new_task = my_task.with_options(name=\"My new task\")\n>>>     new_task()\n
Source code in `src/prefect/tasks.py`
def with_options(\n    self,\n    *,\n    name: str = None,\n    description: str = None,\n    tags: Iterable[str] = None,\n    cache_key_fn: Callable[\n        [\"TaskRunContext\", Dict[str, Any]], Optional[str]\n    ] = None,\n    task_run_name: Optional[Union[Callable[[], str], str]] = None,\n    cache_expiration: datetime.timedelta = None,\n    retries: Optional[int] = NotSet,\n    retry_delay_seconds: Union[\n        float,\n        int,\n        List[float],\n        Callable[[int], List[float]],\n    ] = NotSet,\n    retry_jitter_factor: Optional[float] = NotSet,\n    persist_result: Optional[bool] = NotSet,\n    result_storage: Optional[ResultStorage] = NotSet,\n    result_serializer: Optional[ResultSerializer] = NotSet,\n    result_storage_key: Optional[str] = NotSet,\n    cache_result_in_memory: Optional[bool] = None,\n    timeout_seconds: Union[int, float] = None,\n    log_prints: Optional[bool] = NotSet,\n    refresh_cache: Optional[bool] = NotSet,\n    on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n    on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n):\n\"\"\"\n    Create a new task from the current object, updating provided options.\n\n    Args:\n        name: A new name for the task.\n        description: A new description for the task.\n        tags: A new set of tags for the task. If given, existing tags are ignored,\n            not merged.\n        cache_key_fn: A new cache key function for the task.\n        cache_expiration: A new cache expiration time for the task.\n        task_run_name: An optional name to distinguish runs of this task; this name can be provided\n            as a string template with the task's keyword arguments as variables,\n            or a function that returns a string.\n        retries: A new number of times to retry on task run failure.\n        retry_delay_seconds: Optionally configures how long to wait before retrying\n            the task after failure. This is only applicable if `retries` is nonzero.\n            This setting can either be a number of seconds, a list of retry delays,\n            or a callable that, given the total number of retries, generates a list\n            of retry delays. If a number of seconds, that delay will be applied to\n            all retries. If a list, each retry will wait for the corresponding delay\n            before retrying. 
When passing a callable or a list, the number of\n            configured retry delays cannot exceed 50.\n        retry_jitter_factor: An optional factor that defines the factor to which a\n            retry can be jittered in order to avoid a \"thundering herd\".\n        persist_result: A new option for enabling or disabling result persistence.\n        result_storage: A new storage type to use for results.\n        result_serializer: A new serializer to use for results.\n        result_storage_key: A new key for the persisted result to be stored at.\n        timeout_seconds: A new maximum time for the task to complete in seconds.\n        log_prints: A new option for enabling or disabling redirection of `print` statements.\n        refresh_cache: A new option for enabling or disabling cache refresh.\n        on_completion: A new list of callables to run when the task enters a completed state.\n        on_failure: A new list of callables to run when the task enters a failed state.\n\n    Returns:\n        A new `Task` instance.\n\n    Examples:\n\n        Create a new task from an existing task and update the name\n\n        >>> @task(name=\"My task\")\n        >>> def my_task():\n        >>>     return 1\n        >>>\n        >>> new_task = my_task.with_options(name=\"My new task\")\n\n        Create a new task from an existing task and update the retry settings\n\n        >>> from random import randint\n        >>>\n        >>> @task(retries=1, retry_delay_seconds=5)\n        >>> def my_task():\n        >>>     x = randint(0, 5)\n        >>>     if x >= 3:  # Make a task that fails sometimes\n        >>>         raise ValueError(\"Retry me please!\")\n        >>>     return x\n        >>>\n        >>> new_task = my_task.with_options(retries=5, retry_delay_seconds=2)\n\n        Use a task with updated options within a flow\n\n        >>> @task(name=\"My task\")\n        >>> def my_task():\n        >>>     return 1\n        >>>\n        >>> @flow\n        >>> my_flow():\n        >>>     new_task = my_task.with_options(name=\"My new task\")\n        >>>     new_task()\n    \"\"\"\n    return Task(\n        fn=self.fn,\n        name=name or self.name,\n        description=description or self.description,\n        tags=tags or copy(self.tags),\n        cache_key_fn=cache_key_fn or self.cache_key_fn,\n        cache_expiration=cache_expiration or self.cache_expiration,\n        task_run_name=task_run_name,\n        retries=retries if retries is not NotSet else self.retries,\n        retry_delay_seconds=(\n            retry_delay_seconds\n            if retry_delay_seconds is not NotSet\n            else self.retry_delay_seconds\n        ),\n        retry_jitter_factor=(\n            retry_jitter_factor\n            if retry_jitter_factor is not NotSet\n            else self.retry_jitter_factor\n        ),\n        persist_result=(\n            persist_result if persist_result is not NotSet else self.persist_result\n        ),\n        result_storage=(\n            result_storage if result_storage is not NotSet else self.result_storage\n        ),\n        result_storage_key=(\n            result_storage_key\n            if result_storage_key is not NotSet\n            else self.result_storage_key\n        ),\n        result_serializer=(\n            result_serializer\n            if result_serializer is not NotSet\n            else self.result_serializer\n        ),\n        cache_result_in_memory=(\n            cache_result_in_memory\n            if cache_result_in_memory is not None\n        
    else self.cache_result_in_memory\n        ),\n        timeout_seconds=(\n            timeout_seconds if timeout_seconds is not None else self.timeout_seconds\n        ),\n        log_prints=(log_prints if log_prints is not NotSet else self.log_prints),\n        refresh_cache=(\n            refresh_cache if refresh_cache is not NotSet else self.refresh_cache\n        ),\n        on_completion=on_completion or self.on_completion,\n        on_failure=on_failure or self.on_failure,\n    )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.exponential_backoff","title":"exponential_backoff","text":"

A task retry backoff utility that configures exponential backoff for task retries. The exponential backoff design matches the urllib3 implementation.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `backoff_factor` | `float` | The base delay for the first retry; subsequent retries will increase the delay time by powers of 2. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `Callable[[int], List[float]]` | A callable that can be passed to the task constructor |

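For illustration, here is a minimal sketch of wiring the returned callable into a task's retry settings; the specific values (`retries=4`, `backoff_factor=10`, and the jitter factor) are arbitrary example choices, not defaults from the source:

```python
from prefect import task
from prefect.tasks import exponential_backoff

# With backoff_factor=10 and retries=4, the generated base delays are
# [10, 20, 40, 80] seconds; retry_jitter_factor then randomizes each delay
# slightly to help avoid a "thundering herd".
@task(
    retries=4,
    retry_delay_seconds=exponential_backoff(backoff_factor=10),
    retry_jitter_factor=0.5,
)
def call_flaky_service():
    ...
```
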
Source code in `src/prefect/tasks.py`
def exponential_backoff(backoff_factor: float) -> Callable[[int], List[float]]:\n\"\"\"\n    A task retry backoff utility that configures exponential backoff for task retries.\n    The exponential backoff design matches the urllib3 implementation.\n\n    Arguments:\n        backoff_factor: the base delay for the first retry, subsequent retries will\n            increase the delay time by powers of 2.\n\n    Returns:\n        a callable that can be passed to the task constructor\n    \"\"\"\n\n    def retry_backoff_callable(retries: int) -> List[float]:\n        # no more than 50 retry delays can be configured on a task\n        retries = min(retries, 50)\n\n        return [backoff_factor * max(0, 2**r) for r in range(retries)]\n\n    return retry_backoff_callable\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.task","title":"task","text":"

Decorator to designate a function as a task in a Prefect workflow.

This decorator may be used for asynchronous or synchronous functions.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | An optional name for the task; if not provided, the name will be inferred from the given function. | `None` |
| `description` | `str` | An optional string description for the task. | `None` |
| `tags` | `Iterable[str]` | An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a `prefect.tags` context at task runtime. | `None` |
| `version` | `str` | An optional string specifying the version of this task definition. | `None` |
| `cache_key_fn` | `Callable[[TaskRunContext, Dict[str, Any]], Optional[str]]` | An optional callable that, given the task run context and call parameters, generates a string key; if the key matches a previous completed state, that state result will be restored instead of running the task again. | `None` |
| `cache_expiration` | `datetime.timedelta` | An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire. | `None` |
| `task_run_name` | `Optional[Union[Callable[[], str], str]]` | An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string. | `None` |
| `retries` | `int` | An optional number of times to retry on task run failure. | `None` |
| `retry_delay_seconds` | `Union[float, int, List[float], Callable[[int], List[float]]]` | Optionally configures how long to wait before retrying the task after failure. This is only applicable if `retries` is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50. | `None` |
| `retry_jitter_factor` | `Optional[float]` | An optional factor by which a retry delay can be jittered in order to avoid a "thundering herd". | `None` |
| `persist_result` | `Optional[bool]` | An optional toggle indicating whether the result of this task should be persisted to result storage. Defaults to `None`, which indicates that Prefect should choose whether the result should be persisted depending on the features being used. | `None` |
| `result_storage` | `Optional[ResultStorage]` | An optional block to use to persist the result of this task. Defaults to the value set in the flow the task is called in. | `None` |
| `result_storage_key` | `Optional[str]` | An optional key to store the result in storage at when persisted. Defaults to a unique identifier. | `None` |
| `result_serializer` | `Optional[ResultSerializer]` | An optional serializer to use to serialize the result of this task for persistence. Defaults to the value set in the flow the task is called in. | `None` |
| `timeout_seconds` | `Union[int, float]` | An optional number of seconds indicating a maximum runtime for the task. If the task exceeds this runtime, it will be marked as failed. | `None` |
| `log_prints` | `Optional[bool]` | If set, `print` statements in the task will be redirected to the Prefect logger for the task run. Defaults to `None`, which indicates that the value from the flow should be used. | `None` |
| `refresh_cache` | `Optional[bool]` | If set, cached results for the cache key are not used. Defaults to `None`, which indicates that a cached result from a previous execution with matching cache key is used. | `None` |
| `on_failure` | `Optional[List[Callable[[Task, TaskRun, State], None]]]` | An optional list of callables to run when the task enters a failed state. | `None` |
| `on_completion` | `Optional[List[Callable[[Task, TaskRun, State], None]]]` | An optional list of callables to run when the task enters a completed state. | `None` |

Returns:

A callable `Task` object which, when called, will submit the task for execution.

Examples:

Define a simple task

>>> @task\n>>> def add(x, y):\n>>>     return x + y\n

Define an async task

>>> @task\n>>> async def add(x, y):\n>>>     return x + y\n

Define a task with tags and a description

>>> @task(tags={\"a\", \"b\"}, description=\"This task is empty but its my first!\")\n>>> def my_task():\n>>>     pass\n

Define a task with a custom name

>>> @task(name=\"The Ultimate Task\")\n>>> def my_task():\n>>>     pass\n

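Define a task whose run name is templated from its parameters (a sketch based on the `task_run_name` description above; the task and parameter names are illustrative)

```python
from prefect import flow, task

# Each run is named from the task's keyword arguments, e.g. "add-1-and-2".
@task(task_run_name="add-{x}-and-{y}")
def add(x, y):
    return x + y

@flow
def my_flow():
    add(x=1, y=2)
```
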
Define a task that retries 3 times with a 5 second delay between attempts

>>> from random import randint\n>>>\n>>> @task(retries=3, retry_delay_seconds=5)\n>>> def my_task():\n>>>     x = randint(0, 5)\n>>>     if x >= 3:  # Make a task that fails sometimes\n>>>         raise ValueError(\"Retry me please!\")\n>>>     return x\n

Define a task that is cached for a day based on its inputs

>>> from prefect.tasks import task_input_hash\n>>> from datetime import timedelta\n>>>\n>>> @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))\n>>> def my_task():\n>>>     return \"hello\"\n
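
Attach state-change hooks to a task (a hedged sketch of the `on_completion`/`on_failure` parameters described above; the `notify_on_failure` callable and its body are illustrative, not taken from the source)

```python
from prefect import task

# Hooks receive the task, the task run, and the final state.
def notify_on_failure(task, task_run, state):
    print(f"Task {task.name!r} run {task_run.id} finished in state {state.name}")

@task(retries=1, on_failure=[notify_on_failure])
def risky_operation():
    raise RuntimeError("boom")
```
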
Source code in `src/prefect/tasks.py`
def task(\n    __fn=None,\n    *,\n    name: str = None,\n    description: str = None,\n    tags: Iterable[str] = None,\n    version: str = None,\n    cache_key_fn: Callable[[\"TaskRunContext\", Dict[str, Any]], Optional[str]] = None,\n    cache_expiration: datetime.timedelta = None,\n    task_run_name: Optional[Union[Callable[[], str], str]] = None,\n    retries: int = None,\n    retry_delay_seconds: Union[\n        float,\n        int,\n        List[float],\n        Callable[[int], List[float]],\n    ] = None,\n    retry_jitter_factor: Optional[float] = None,\n    persist_result: Optional[bool] = None,\n    result_storage: Optional[ResultStorage] = None,\n    result_storage_key: Optional[str] = None,\n    result_serializer: Optional[ResultSerializer] = None,\n    cache_result_in_memory: bool = True,\n    timeout_seconds: Union[int, float] = None,\n    log_prints: Optional[bool] = None,\n    refresh_cache: Optional[bool] = None,\n    on_completion: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n    on_failure: Optional[List[Callable[[\"Task\", TaskRun, State], None]]] = None,\n):\n\"\"\"\n    Decorator to designate a function as a task in a Prefect workflow.\n\n    This decorator may be used for asynchronous or synchronous functions.\n\n    Args:\n        name: An optional name for the task; if not provided, the name will be inferred\n            from the given function.\n        description: An optional string description for the task.\n        tags: An optional set of tags to be associated with runs of this task. These\n            tags are combined with any tags defined by a `prefect.tags` context at\n            task runtime.\n        version: An optional string specifying the version of this task definition\n        cache_key_fn: An optional callable that, given the task run context and call\n            parameters, generates a string key; if the key matches a previous completed\n            state, that state result will be restored instead of running the task again.\n        cache_expiration: An optional amount of time indicating how long cached states\n            for this task should be restorable; if not provided, cached states will\n            never expire.\n        task_run_name: An optional name to distinguish runs of this task; this name can be provided\n            as a string template with the task's keyword arguments as variables,\n            or a function that returns a string.\n        retries: An optional number of times to retry on task run failure\n        retry_delay_seconds: Optionally configures how long to wait before retrying the\n            task after failure. This is only applicable if `retries` is nonzero. This\n            setting can either be a number of seconds, a list of retry delays, or a\n            callable that, given the total number of retries, generates a list of retry\n            delays. If a number of seconds, that delay will be applied to all retries.\n            If a list, each retry will wait for the corresponding delay before retrying.\n            When passing a callable or a list, the number of configured retry delays\n            cannot exceed 50.\n        retry_jitter_factor: An optional factor that defines the factor to which a retry\n            can be jittered in order to avoid a \"thundering herd\".\n        persist_result: An optional toggle indicating whether the result of this task\n            should be persisted to result storage. 
Defaults to `None`, which indicates\n            that Prefect should choose whether the result should be persisted depending on\n            the features being used.\n        result_storage: An optional block to use to persist the result of this task.\n            Defaults to the value set in the flow the task is called in.\n        result_storage_key: An optional key to store the result in storage at when persisted.\n            Defaults to a unique identifier.\n        result_serializer: An optional serializer to use to serialize the result of this\n            task for persistence. Defaults to the value set in the flow the task is\n            called in.\n        timeout_seconds: An optional number of seconds indicating a maximum runtime for\n            the task. If the task exceeds this runtime, it will be marked as failed.\n        log_prints: If set, `print` statements in the task will be redirected to the\n            Prefect logger for the task run. Defaults to `None`, which indicates\n            that the value from the flow should be used.\n        refresh_cache: If set, cached results for the cache key are not used.\n            Defaults to `None`, which indicates that a cached result from a previous\n            execution with matching cache key is used.\n        on_failure: An optional list of callables to run when the task enters a failed state.\n        on_completion: An optional list of callables to run when the task enters a completed state.\n\n    Returns:\n        A callable `Task` object which, when called, will submit the task for execution.\n\n    Examples:\n        Define a simple task\n\n        >>> @task\n        >>> def add(x, y):\n        >>>     return x + y\n\n        Define an async task\n\n        >>> @task\n        >>> async def add(x, y):\n        >>>     return x + y\n\n        Define a task with tags and a description\n\n        >>> @task(tags={\"a\", \"b\"}, description=\"This task is empty but its my first!\")\n        >>> def my_task():\n        >>>     pass\n\n        Define a task with a custom name\n\n        >>> @task(name=\"The Ultimate Task\")\n        >>> def my_task():\n        >>>     pass\n\n        Define a task that retries 3 times with a 5 second delay between attempts\n\n        >>> from random import randint\n        >>>\n        >>> @task(retries=3, retry_delay_seconds=5)\n        >>> def my_task():\n        >>>     x = randint(0, 5)\n        >>>     if x >= 3:  # Make a task that fails sometimes\n        >>>         raise ValueError(\"Retry me please!\")\n        >>>     return x\n\n        Define a task that is cached for a day based on its inputs\n\n        >>> from prefect.tasks import task_input_hash\n        >>> from datetime import timedelta\n        >>>\n        >>> @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))\n        >>> def my_task():\n        >>>     return \"hello\"\n    \"\"\"\n    if __fn:\n        return cast(\n            Task[P, R],\n            Task(\n                fn=__fn,\n                name=name,\n                description=description,\n                tags=tags,\n                version=version,\n                cache_key_fn=cache_key_fn,\n                cache_expiration=cache_expiration,\n                task_run_name=task_run_name,\n                retries=retries,\n                retry_delay_seconds=retry_delay_seconds,\n                retry_jitter_factor=retry_jitter_factor,\n                persist_result=persist_result,\n                result_storage=result_storage,\n    
            result_storage_key=result_storage_key,\n                result_serializer=result_serializer,\n                cache_result_in_memory=cache_result_in_memory,\n                timeout_seconds=timeout_seconds,\n                log_prints=log_prints,\n                refresh_cache=refresh_cache,\n                on_completion=on_completion,\n                on_failure=on_failure,\n            ),\n        )\n    else:\n        return cast(\n            Callable[[Callable[P, R]], Task[P, R]],\n            partial(\n                task,\n                name=name,\n                description=description,\n                tags=tags,\n                version=version,\n                cache_key_fn=cache_key_fn,\n                cache_expiration=cache_expiration,\n                task_run_name=task_run_name,\n                retries=retries,\n                retry_delay_seconds=retry_delay_seconds,\n                retry_jitter_factor=retry_jitter_factor,\n                persist_result=persist_result,\n                result_storage=result_storage,\n                result_storage_key=result_storage_key,\n                result_serializer=result_serializer,\n                cache_result_in_memory=cache_result_in_memory,\n                timeout_seconds=timeout_seconds,\n                log_prints=log_prints,\n                refresh_cache=refresh_cache,\n                on_completion=on_completion,\n                on_failure=on_failure,\n            ),\n        )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/tasks/#prefect.tasks.task_input_hash","title":"task_input_hash","text":"

A task cache key implementation which hashes all inputs to the task using a JSON or cloudpickle serializer. If any arguments are not JSON serializable, the pickle serializer is used as a fallback. If cloudpickle fails, this will return a null key indicating that a cache key could not be generated for the given inputs.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `context` | `TaskRunContext` | The active `TaskRunContext` | *required* |
| `arguments` | `Dict[str, Any]` | A dictionary of arguments to be passed to the underlying task | *required* |

Returns:

| Type | Description |
| --- | --- |
| `Optional[str]` | A string hash if hashing succeeded, else `None` |

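As a usage sketch (mirroring the cached-task example shown for the `task` decorator above; the one-day expiration is an arbitrary example value):

```python
from datetime import timedelta

from prefect import flow, task
from prefect.tasks import task_input_hash

@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))
def add(x, y):
    # Re-invocations with the same (x, y) within a day restore the cached
    # completed state instead of running this body again.
    return x + y

@flow
def my_flow():
    add(1, 2)  # computed
    add(1, 2)  # cache hit: identical inputs produce the same cache key
    add(2, 3)  # computed: different inputs, different cache key
```
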
Source code in `src/prefect/tasks.py`
def task_input_hash(\n    context: \"TaskRunContext\", arguments: Dict[str, Any]\n) -> Optional[str]:\n\"\"\"\n    A task cache key implementation which hashes all inputs to the task using a JSON or\n    cloudpickle serializer. If any arguments are not JSON serializable, the pickle\n    serializer is used as a fallback. If cloudpickle fails, this will return a null key\n    indicating that a cache key could not be generated for the given inputs.\n\n    Arguments:\n        context: the active `TaskRunContext`\n        arguments: a dictionary of arguments to be passed to the underlying task\n\n    Returns:\n        a string hash if hashing succeeded, else `None`\n    \"\"\"\n    return hash_objects(\n        # We use the task key to get the qualified name for the task and include the\n        # task functions `co_code` bytes to avoid caching when the underlying function\n        # changes\n        context.task.task_key,\n        context.task.fn.__code__.co_code.hex(),\n        arguments,\n    )\n
","tags":["Python API","tasks","caching"]},{"location":"api-ref/prefect/blocks/core/","title":"prefect.blocks.core","text":"","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core","title":"prefect.blocks.core","text":"","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block","title":"Block","text":"

Bases: BaseModel, ABC

A base class for implementing a block that wraps an external service.

This class can be defined with an arbitrary set of fields and methods, and couples business logic with data contained in a block document. _block_document_name, _block_document_id, _block_schema_id, and _block_type_id are reserved by Prefect as Block metadata fields, but otherwise a Block can implement arbitrary logic. Blocks can be instantiated without populating these metadata fields, but can only be used interactively, not with the Prefect API.

Instead of the `__init__` method, a block implementation allows the definition of a `block_initialization` method that is called after initialization.

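As an illustration, a minimal custom block might look like the following sketch; the class name, fields, and normalization logic are invented for this example and are not taken from the source:

```python
from prefect.blocks.core import Block

class ExampleServiceConfig(Block):
    """Stores connection settings for a hypothetical external service."""

    base_url: str = "https://api.example.com"
    api_key: str

    def block_initialization(self) -> None:
        # Called automatically after initialization instead of overriding __init__.
        self.base_url = self.base_url.rstrip("/")

# Instantiated directly, the block can be used interactively without the API:
config = ExampleServiceConfig(api_key="dummy-key")

# Persisting and later loading by name requires a configured Prefect API:
# config.save("example-service")
# loaded = ExampleServiceConfig.load("example-service")
```
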
Source code in `src/prefect/blocks/core.py`
@register_base_type\n@instrument_method_calls_on_class_instances\nclass Block(BaseModel, ABC):\n\"\"\"\n    A base class for implementing a block that wraps an external service.\n\n    This class can be defined with an arbitrary set of fields and methods, and\n    couples business logic with data contained in an block document.\n    `_block_document_name`, `_block_document_id`, `_block_schema_id`, and\n    `_block_type_id` are reserved by Prefect as Block metadata fields, but\n    otherwise a Block can implement arbitrary logic. Blocks can be instantiated\n    without populating these metadata fields, but can only be used interactively,\n    not with the Prefect API.\n\n    Instead of the __init__ method, a block implementation allows the\n    definition of a `block_initialization` method that is called after\n    initialization.\n    \"\"\"\n\n    class Config:\n        extra = \"allow\"\n\n        json_encoders = {SecretDict: lambda v: v.dict()}\n\n        @staticmethod\n        def schema_extra(schema: Dict[str, Any], model: Type[\"Block\"]):\n\"\"\"\n            Customizes Pydantic's schema generation feature to add blocks related information.\n            \"\"\"\n            schema[\"block_type_slug\"] = model.get_block_type_slug()\n            # Ensures args and code examples aren't included in the schema\n            description = model.get_description()\n            if description:\n                schema[\"description\"] = description\n            else:\n                # Prevent the description of the base class from being included in the schema\n                schema.pop(\"description\", None)\n\n            # create a list of secret field names\n            # secret fields include both top-level keys and dot-delimited nested secret keys\n            # A wildcard (*) means that all fields under a given key are secret.\n            # for example: [\"x\", \"y\", \"z.*\", \"child.a\"]\n            # means the top-level keys \"x\" and \"y\", all keys under \"z\", and the key \"a\" of a block\n            # nested under the \"child\" key are all secret. 
There is no limit to nesting.\n            secrets = schema[\"secret_fields\"] = []\n            for field in model.__fields__.values():\n                _collect_secret_fields(field.name, field.type_, secrets)\n\n            # create block schema references\n            refs = schema[\"block_schema_references\"] = {}\n            for field in model.__fields__.values():\n                if Block.is_block_class(field.type_):\n                    refs[field.name] = field.type_._to_block_schema_reference_dict()\n                if get_origin(field.type_) is Union:\n                    for type_ in get_args(field.type_):\n                        if Block.is_block_class(type_):\n                            if isinstance(refs.get(field.name), list):\n                                refs[field.name].append(\n                                    type_._to_block_schema_reference_dict()\n                                )\n                            elif isinstance(refs.get(field.name), dict):\n                                refs[field.name] = [\n                                    refs[field.name],\n                                    type_._to_block_schema_reference_dict(),\n                                ]\n                            else:\n                                refs[field.name] = (\n                                    type_._to_block_schema_reference_dict()\n                                )\n\n    def __init__(self, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.block_initialization()\n\n    def __str__(self) -> str:\n        return self.__repr__()\n\n    def __repr_args__(self):\n        repr_args = super().__repr_args__()\n        data_keys = self.schema()[\"properties\"].keys()\n        return [\n            (key, value) for key, value in repr_args if key is None or key in data_keys\n        ]\n\n    def block_initialization(self) -> None:\n        pass\n\n    # -- private class variables\n    # set by the class itself\n\n    # Attribute to customize the name of the block type created\n    # when the block is registered with the API. 
If not set, block\n    # type name will default to the class name.\n    _block_type_name: Optional[str] = None\n    _block_type_slug: Optional[str] = None\n\n    # Attributes used to set properties on a block type when registered\n    # with the API.\n    _logo_url: Optional[HttpUrl] = None\n    _documentation_url: Optional[HttpUrl] = None\n    _description: Optional[str] = None\n    _code_example: Optional[str] = None\n\n    # -- private instance variables\n    # these are set when blocks are loaded from the API\n    _block_type_id: Optional[UUID] = None\n    _block_schema_id: Optional[UUID] = None\n    _block_schema_capabilities: Optional[List[str]] = None\n    _block_schema_version: Optional[str] = None\n    _block_document_id: Optional[UUID] = None\n    _block_document_name: Optional[str] = None\n    _is_anonymous: Optional[bool] = None\n\n    # Exclude `save` as it uses the `sync_compatible` decorator and needs to be\n    # decorated directly.\n    _events_excluded_methods = [\"block_initialization\", \"save\", \"dict\"]\n\n    @classmethod\n    def __dispatch_key__(cls):\n        if cls.__name__ == \"Block\":\n            return None  # The base class is abstract\n        return block_schema_to_key(cls._to_block_schema())\n\n    @classmethod\n    def get_block_type_name(cls):\n        return cls._block_type_name or cls.__name__\n\n    @classmethod\n    def get_block_type_slug(cls):\n        return slugify(cls._block_type_slug or cls.get_block_type_name())\n\n    @classmethod\n    def get_block_capabilities(cls) -> FrozenSet[str]:\n\"\"\"\n        Returns the block capabilities for this Block. Recursively collects all block\n        capabilities of all parent classes into a single frozenset.\n        \"\"\"\n        return frozenset(\n            {\n                c\n                for base in (cls,) + cls.__mro__\n                for c in getattr(base, \"_block_schema_capabilities\", []) or []\n            }\n        )\n\n    @classmethod\n    def _get_current_package_version(cls):\n        current_module = inspect.getmodule(cls)\n        if current_module:\n            top_level_module = sys.modules[\n                current_module.__name__.split(\".\")[0] or \"__main__\"\n            ]\n            try:\n                version = Version(top_level_module.__version__)\n                # Strips off any local version information\n                return version.base_version\n            except (AttributeError, InvalidVersion):\n                # Module does not have a __version__ attribute or is not a parsable format\n                pass\n        return DEFAULT_BLOCK_SCHEMA_VERSION\n\n    @classmethod\n    def get_block_schema_version(cls) -> str:\n        return cls._block_schema_version or cls._get_current_package_version()\n\n    @classmethod\n    def _to_block_schema_reference_dict(cls):\n        return dict(\n            block_type_slug=cls.get_block_type_slug(),\n            block_schema_checksum=cls._calculate_schema_checksum(),\n        )\n\n    @classmethod\n    def _calculate_schema_checksum(\n        cls, block_schema_fields: Optional[Dict[str, Any]] = None\n    ):\n\"\"\"\n        Generates a unique hash for the underlying schema of block.\n\n        Args:\n            block_schema_fields: Dictionary detailing block schema fields to generate a\n                checksum for. 
The fields of the current class is used if this parameter\n                is not provided.\n\n        Returns:\n            str: The calculated checksum prefixed with the hashing algorithm used.\n        \"\"\"\n        block_schema_fields = (\n            cls.schema() if block_schema_fields is None else block_schema_fields\n        )\n        fields_for_checksum = remove_nested_keys([\"secret_fields\"], block_schema_fields)\n        if fields_for_checksum.get(\"definitions\"):\n            non_block_definitions = _get_non_block_reference_definitions(\n                fields_for_checksum, fields_for_checksum[\"definitions\"]\n            )\n            if non_block_definitions:\n                fields_for_checksum[\"definitions\"] = non_block_definitions\n            else:\n                # Pop off definitions entirely instead of empty dict for consistency\n                # with the OpenAPI specification\n                fields_for_checksum.pop(\"definitions\")\n        checksum = hash_objects(fields_for_checksum, hash_algo=hashlib.sha256)\n        if checksum is None:\n            raise ValueError(\"Unable to compute checksum for block schema\")\n        else:\n            return f\"sha256:{checksum}\"\n\n    def _to_block_document(\n        self,\n        name: Optional[str] = None,\n        block_schema_id: Optional[UUID] = None,\n        block_type_id: Optional[UUID] = None,\n        is_anonymous: Optional[bool] = None,\n    ) -> BlockDocument:\n\"\"\"\n        Creates the corresponding block document based on the data stored in a block.\n        The corresponding block document name, block type ID, and block schema ID must\n        either be passed into the method or configured on the block.\n\n        Args:\n            name: The name of the created block document. Not required if anonymous.\n            block_schema_id: UUID of the corresponding block schema.\n            block_type_id: UUID of the corresponding block type.\n            is_anonymous: if True, an anonymous block is created. Anonymous\n                blocks are not displayed in the UI and used primarily for system\n                operations and features that need to automatically generate blocks.\n\n        Returns:\n            BlockDocument: Corresponding block document\n                populated with the block's configured data.\n        \"\"\"\n        if is_anonymous is None:\n            is_anonymous = self._is_anonymous or False\n\n        # name must be present if not anonymous\n        if not is_anonymous and not name and not self._block_document_name:\n            raise ValueError(\"No name provided, either as an argument or on the block.\")\n\n        if not block_schema_id and not self._block_schema_id:\n            raise ValueError(\n                \"No block schema ID provided, either as an argument or on the block.\"\n            )\n        if not block_type_id and not self._block_type_id:\n            raise ValueError(\n                \"No block type ID provided, either as an argument or on the block.\"\n            )\n\n        # The keys passed to `include` must NOT be aliases, else some items will be missed\n        # i.e. 
must do `self.schema_` vs `self.schema` to get a `schema_ = Field(alias=\"schema\")`\n        # reported from https://github.com/PrefectHQ/prefect-dbt/issues/54\n        data_keys = self.schema(by_alias=False)[\"properties\"].keys()\n\n        # `block_document_data`` must return the aliased version for it to show in the UI\n        block_document_data = self.dict(by_alias=True, include=data_keys)\n\n        # Iterate through and find blocks that already have saved block documents to\n        # create references to those saved block documents.\n        for key in data_keys:\n            field_value = getattr(self, key)\n            if (\n                isinstance(field_value, Block)\n                and field_value._block_document_id is not None\n            ):\n                block_document_data[key] = {\n                    \"$ref\": {\"block_document_id\": field_value._block_document_id}\n                }\n\n        return BlockDocument(\n            id=self._block_document_id or uuid4(),\n            name=(name or self._block_document_name) if not is_anonymous else None,\n            block_schema_id=block_schema_id or self._block_schema_id,\n            block_type_id=block_type_id or self._block_type_id,\n            data=block_document_data,\n            block_schema=self._to_block_schema(\n                block_type_id=block_type_id or self._block_type_id,\n            ),\n            block_type=self._to_block_type(),\n            is_anonymous=is_anonymous,\n        )\n\n    @classmethod\n    def _to_block_schema(cls, block_type_id: Optional[UUID] = None) -> BlockSchema:\n\"\"\"\n        Creates the corresponding block schema of the block.\n        The corresponding block_type_id must either be passed into\n        the method or configured on the block.\n\n        Args:\n            block_type_id: UUID of the corresponding block type.\n\n        Returns:\n            BlockSchema: The corresponding block schema.\n        \"\"\"\n        fields = cls.schema()\n        return BlockSchema(\n            id=cls._block_schema_id if cls._block_schema_id is not None else uuid4(),\n            checksum=cls._calculate_schema_checksum(),\n            fields=fields,\n            block_type_id=block_type_id or cls._block_type_id,\n            block_type=cls._to_block_type(),\n            capabilities=list(cls.get_block_capabilities()),\n            version=cls.get_block_schema_version(),\n        )\n\n    @classmethod\n    def _parse_docstring(cls) -> List[DocstringSection]:\n\"\"\"\n        Parses the docstring into list of DocstringSection objects.\n        Helper method used primarily to suppress irrelevant logs, e.g.\n        `<module>:11: No type or annotation for parameter 'write_json'`\n        because griffe is unable to parse the types from pydantic.BaseModel.\n        \"\"\"\n        with disable_logger(\"griffe.docstrings.google\"):\n            with disable_logger(\"griffe.agents.nodes\"):\n                docstring = Docstring(cls.__doc__)\n                parsed = parse(docstring, Parser.google)\n        return parsed\n\n    @classmethod\n    def get_description(cls) -> Optional[str]:\n\"\"\"\n        Returns the description for the current block. 
Attempts to parse\n        description from class docstring if an override is not defined.\n        \"\"\"\n        description = cls._description\n        # If no description override has been provided, find the first text section\n        # and use that as the description\n        if description is None and cls.__doc__ is not None:\n            parsed = cls._parse_docstring()\n            parsed_description = next(\n                (\n                    section.as_dict().get(\"value\")\n                    for section in parsed\n                    if section.kind == DocstringSectionKind.text\n                ),\n                None,\n            )\n            if isinstance(parsed_description, str):\n                description = parsed_description.strip()\n        return description\n\n    @classmethod\n    def get_code_example(cls) -> Optional[str]:\n\"\"\"\n        Returns the code example for the given block. Attempts to parse\n        code example from the class docstring if an override is not provided.\n        \"\"\"\n        code_example = (\n            dedent(cls._code_example) if cls._code_example is not None else None\n        )\n        # If no code example override has been provided, attempt to find a examples\n        # section or an admonition with the annotation \"example\" and use that as the\n        # code example\n        if code_example is None and cls.__doc__ is not None:\n            parsed = cls._parse_docstring()\n            for section in parsed:\n                # Section kind will be \"examples\" if Examples section heading is used.\n                if section.kind == DocstringSectionKind.examples:\n                    # Examples sections are made up of smaller sections that need to be\n                    # joined with newlines. 
Smaller sections are represented as tuples\n                    # with shape (DocstringSectionKind, str)\n                    code_example = \"\\n\".join(\n                        (part[1] for part in section.as_dict().get(\"value\", []))\n                    )\n                    break\n                # Section kind will be \"admonition\" if Example section heading is used.\n                if section.kind == DocstringSectionKind.admonition:\n                    value = section.as_dict().get(\"value\", {})\n                    if value.get(\"annotation\") == \"example\":\n                        code_example = value.get(\"description\")\n                        break\n\n        if code_example is None:\n            # If no code example has been specified or extracted from the class\n            # docstring, generate a sensible default\n            code_example = cls._generate_code_example()\n\n        return code_example\n\n    @classmethod\n    def _generate_code_example(cls) -> str:\n\"\"\"Generates a default code example for the current class\"\"\"\n        qualified_name = to_qualified_name(cls)\n        module_str = \".\".join(qualified_name.split(\".\")[:-1])\n        class_name = cls.__name__\n        block_variable_name = f'{cls.get_block_type_slug().replace(\"-\", \"_\")}_block'\n\n        return dedent(\n            f\"\"\"\\\n        ```python\n        from {module_str} import {class_name}\n\n{block_variable_name} = {class_name}.load(\"BLOCK_NAME\")\n        ```\"\"\"\n        )\n\n    @classmethod\n    def _to_block_type(cls) -> BlockType:\n\"\"\"\n        Creates the corresponding block type of the block.\n\n        Returns:\n            BlockType: The corresponding block type.\n        \"\"\"\n        return BlockType(\n            id=cls._block_type_id or uuid4(),\n            slug=cls.get_block_type_slug(),\n            name=cls.get_block_type_name(),\n            logo_url=cls._logo_url,\n            documentation_url=cls._documentation_url,\n            description=cls.get_description(),\n            code_example=cls.get_code_example(),\n        )\n\n    @classmethod\n    def _from_block_document(cls, block_document: BlockDocument):\n\"\"\"\n        Instantiates a block from a given block document. 
The corresponding block class\n        will be looked up in the block registry based on the corresponding block schema\n        of the provided block document.\n\n        Args:\n            block_document: The block document used to instantiate a block.\n\n        Raises:\n            ValueError: If the provided block document doesn't have a corresponding block\n                schema.\n\n        Returns:\n            Block: Hydrated block with data from block document.\n        \"\"\"\n        if block_document.block_schema is None:\n            raise ValueError(\n                \"Unable to determine block schema for provided block document\"\n            )\n\n        block_cls = (\n            cls\n            if cls.__name__ != \"Block\"\n            # Look up the block class by dispatch\n            else cls.get_block_class_from_schema(block_document.block_schema)\n        )\n\n        block_cls = instrument_method_calls_on_class_instances(block_cls)\n\n        block = block_cls.parse_obj(block_document.data)\n        block._block_document_id = block_document.id\n        block.__class__._block_schema_id = block_document.block_schema_id\n        block.__class__._block_type_id = block_document.block_type_id\n        block._block_document_name = block_document.name\n        block._is_anonymous = block_document.is_anonymous\n        block._define_metadata_on_nested_blocks(\n            block_document.block_document_references\n        )\n\n        # Due to the way blocks are loaded we can't directly instrument the\n        # `load` method and have the data be about the block document. Instead\n        # this will emit a proxy event for the load method so that block\n        # document data can be included instead of the event being about an\n        # 'anonymous' block.\n\n        emit_instance_method_called_event(block, \"load\", successful=True)\n\n        return block\n\n    def _event_kind(self) -> str:\n        return f\"prefect.block.{self.get_block_type_slug()}\"\n\n    def _event_method_called_resources(self) -> Optional[ResourceTuple]:\n        if not (self._block_document_id and self._block_document_name):\n            return None\n\n        return (\n            {\n                \"prefect.resource.id\": (\n                    f\"prefect.block-document.{self._block_document_id}\"\n                ),\n                \"prefect.resource.name\": self._block_document_name,\n            },\n            [\n                {\n                    \"prefect.resource.id\": (\n                        f\"prefect.block-type.{self.get_block_type_slug()}\"\n                    ),\n                    \"prefect.resource.role\": \"block-type\",\n                }\n            ],\n        )\n\n    @classmethod\n    def get_block_class_from_schema(cls: Type[Self], schema: BlockSchema) -> Type[Self]:\n\"\"\"\n        Retieve the block class implementation given a schema.\n        \"\"\"\n        return cls.get_block_class_from_key(block_schema_to_key(schema))\n\n    @classmethod\n    def get_block_class_from_key(cls: Type[Self], key: str) -> Type[Self]:\n\"\"\"\n        Retieve the block class implementation given a key.\n        \"\"\"\n        # Ensure collections are imported and have the opportunity to register types\n        # before looking up the block class\n        prefect.plugins.load_prefect_collections()\n\n        return lookup_type(cls, key)\n\n    def _define_metadata_on_nested_blocks(\n        self, block_document_references: Dict[str, Dict[str, Any]]\n    ):\n\"\"\"\n        
Recursively populates metadata fields on nested blocks based on the\n        provided block document references.\n        \"\"\"\n        for item in block_document_references.items():\n            field_name, block_document_reference = item\n            nested_block = getattr(self, field_name)\n            if isinstance(nested_block, Block):\n                nested_block_document_info = block_document_reference.get(\n                    \"block_document\", {}\n                )\n                nested_block._define_metadata_on_nested_blocks(\n                    nested_block_document_info.get(\"block_document_references\", {})\n                )\n                nested_block_document_id = nested_block_document_info.get(\"id\")\n                nested_block._block_document_id = (\n                    UUID(nested_block_document_id) if nested_block_document_id else None\n                )\n                nested_block._block_document_name = nested_block_document_info.get(\n                    \"name\"\n                )\n                nested_block._is_anonymous = nested_block_document_info.get(\n                    \"is_anonymous\"\n                )\n\n    @classmethod\n    @inject_client\n    async def _get_block_document(\n        cls,\n        name: str,\n        client: \"PrefectClient\" = None,\n    ):\n        if cls.__name__ == \"Block\":\n            block_type_slug, block_document_name = name.split(\"/\", 1)\n        else:\n            block_type_slug = cls.get_block_type_slug()\n            block_document_name = name\n\n        try:\n            block_document = await client.read_block_document_by_name(\n                name=block_document_name, block_type_slug=block_type_slug\n            )\n        except prefect.exceptions.ObjectNotFound as e:\n            raise ValueError(\n                f\"Unable to find block document named {block_document_name} for block\"\n                f\" type {block_type_slug}\"\n            ) from e\n\n        return block_document, block_document_name\n\n    @classmethod\n    @sync_compatible\n    @inject_client\n    async def load(\n        cls,\n        name: str,\n        validate: bool = True,\n        client: \"PrefectClient\" = None,\n    ):\n\"\"\"\n        Retrieves data from the block document with the given name for the block type\n        that corresponds with the current class and returns an instantiated version of\n        the current class with the data stored in the block document.\n\n        If a block document for a given block type is saved with a different schema\n        than the current class calling `load`, a warning will be raised.\n\n        If the current class schema is a subset of the block document schema, the block\n        can be loaded as normal using the default `validate = True`.\n\n        If the current class schema is a superset of the block document schema, `load`\n        must be called with `validate` set to False to prevent a validation error. In\n        this case, the block attributes will default to `None` and must be set manually\n        and saved to a new block document before the block can be used as expected.\n\n        Args:\n            name: The name or slug of the block document. A block document slug is a\n                string with the format <block_type_slug>/<block_document_name>\n            validate: If False, the block document will be loaded without Pydantic\n                validating the block schema. 
This is useful if the block schema has\n                changed client-side since the block document referred to by `name` was saved.\n            client: The client to use to load the block document. If not provided, the\n                default client will be injected.\n\n        Raises:\n            ValueError: If the requested block document is not found.\n\n        Returns:\n            An instance of the current class hydrated with the data stored in the\n            block document with the specified name.\n\n        Examples:\n            Load from a Block subclass with a block document name:\n            ```python\n            class Custom(Block):\n                message: str\n\n            Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n            loaded_block = Custom.load(\"my-custom-message\")\n            ```\n\n            Load from Block with a block document slug:\n            ```python\n            class Custom(Block):\n                message: str\n\n            Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n            loaded_block = Block.load(\"custom/my-custom-message\")\n            ```\n\n            Migrate a block document to a new schema:\n            ```python\n            # original class\n            class Custom(Block):\n                message: str\n\n            Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n            # Updated class with new required field\n            class Custom(Block):\n                message: str\n                number_of_ducks: int\n\n            loaded_block = Custom.load(\"my-custom-message\", validate=False)\n\n            # Prints UserWarning about schema mismatch\n\n            loaded_block.number_of_ducks = 42\n\n            loaded_block.save(\"my-custom-message\", overwrite=True)\n            ```\n        \"\"\"\n        block_document, block_document_name = await cls._get_block_document(name)\n\n        try:\n            return cls._from_block_document(block_document)\n        except ValidationError as e:\n            if not validate:\n                missing_fields = tuple(err[\"loc\"][0] for err in e.errors())\n                missing_block_data = {field: None for field in missing_fields}\n                warnings.warn(\n                    f\"Could not fully load {block_document_name!r} of block type\"\n                    f\" {cls._block_type_slug!r} - this is likely because one or more\"\n                    \" required fields were added to the schema for\"\n                    f\" {cls.__name__!r} that did not exist on the class when this block\"\n                    \" was last saved. Please specify values for new field(s):\"\n                    f\" {listrepr(missing_fields)}, then run\"\n                    f' `{cls.__name__}.save(\"{block_document_name}\", overwrite=True)`,'\n                    \" and load this block again before attempting to use it.\"\n                )\n                return cls.construct(**block_document.data, **missing_block_data)\n            raise RuntimeError(\n                f\"Unable to load {block_document_name!r} of block type\"\n                f\" {cls._block_type_slug!r} due to failed validation. 
To load without\"\n                \" validation, try loading again with `validate=False`.\"\n            ) from e\n\n    @staticmethod\n    def is_block_class(block) -> bool:\n        return _is_subclass(block, Block)\n\n    @classmethod\n    @sync_compatible\n    @inject_client\n    async def register_type_and_schema(cls, client: \"PrefectClient\" = None):\n\"\"\"\n        Makes block available for configuration with current Prefect API.\n        Recursively registers all nested blocks. Registration is idempotent.\n\n        Args:\n            client: Optional client to use for registering type and schema with the\n                Prefect API. A new client will be created and used if one is not\n                provided.\n        \"\"\"\n        if cls.__name__ == \"Block\":\n            raise InvalidBlockRegistration(\n                \"`register_type_and_schema` should be called on a Block \"\n                \"subclass and not on the Block class directly.\"\n            )\n        if ABC in getattr(cls, \"__bases__\", []):\n            raise InvalidBlockRegistration(\n                \"`register_type_and_schema` should be called on a Block \"\n                \"subclass and not on a Block interface class directly.\"\n            )\n\n        for field in cls.__fields__.values():\n            if Block.is_block_class(field.type_):\n                await field.type_.register_type_and_schema(client=client)\n            if get_origin(field.type_) is Union:\n                for type_ in get_args(field.type_):\n                    if Block.is_block_class(type_):\n                        await type_.register_type_and_schema(client=client)\n\n        try:\n            block_type = await client.read_block_type_by_slug(\n                slug=cls.get_block_type_slug()\n            )\n            cls._block_type_id = block_type.id\n            local_block_type = cls._to_block_type()\n            if _should_update_block_type(\n                local_block_type=local_block_type, server_block_type=block_type\n            ):\n                await client.update_block_type(\n                    block_type_id=block_type.id, block_type=local_block_type\n                )\n        except prefect.exceptions.ObjectNotFound:\n            block_type = await client.create_block_type(block_type=cls._to_block_type())\n            cls._block_type_id = block_type.id\n\n        try:\n            block_schema = await client.read_block_schema_by_checksum(\n                checksum=cls._calculate_schema_checksum(),\n                version=cls.get_block_schema_version(),\n            )\n        except prefect.exceptions.ObjectNotFound:\n            block_schema = await client.create_block_schema(\n                block_schema=cls._to_block_schema(block_type_id=block_type.id)\n            )\n\n        cls._block_schema_id = block_schema.id\n\n    @inject_client\n    async def _save(\n        self,\n        name: Optional[str] = None,\n        is_anonymous: bool = False,\n        overwrite: bool = False,\n        client: \"PrefectClient\" = None,\n    ):\n\"\"\"\n        Saves the values of a block as a block document with an option to save as an\n        anonymous block document.\n\n        Args:\n            name: User specified name to give saved block document which can later be used to load the\n                block document.\n            is_anonymous: Boolean value specifying whether the block document is anonymous. Anonymous\n                blocks are intended for system use and are not shown in the UI. 
Anonymous blocks do not\n                require a user-supplied name.\n            overwrite: Boolean value specifying if values should be overwritten if a block document with\n                the specified name already exists.\n\n        Raises:\n            ValueError: If a name is not given and `is_anonymous` is `False` or a name is given and\n                `is_anonymous` is `True`.\n        \"\"\"\n        if name is None and not is_anonymous:\n            raise ValueError(\n                \"You're attempting to save a block document without a name. \"\n                \"Please either save a block document with a name or set \"\n                \"is_anonymous to True.\"\n            )\n\n        self._is_anonymous = is_anonymous\n\n        # Ensure block type and schema are registered before saving block document.\n        await self.register_type_and_schema(client=client)\n\n        try:\n            block_document = await client.create_block_document(\n                block_document=self._to_block_document(name=name)\n            )\n        except prefect.exceptions.ObjectAlreadyExists as err:\n            if overwrite:\n                block_document_id = self._block_document_id\n                if block_document_id is None:\n                    existing_block_document = await client.read_block_document_by_name(\n                        name=name, block_type_slug=self.get_block_type_slug()\n                    )\n                    block_document_id = existing_block_document.id\n                await client.update_block_document(\n                    block_document_id=block_document_id,\n                    block_document=self._to_block_document(name=name),\n                )\n                block_document = await client.read_block_document(\n                    block_document_id=block_document_id\n                )\n            else:\n                raise ValueError(\n                    \"You are attempting to save values with a name that is already in\"\n                    \" use for this block type. 
If you would like to overwrite the\"\n                    \" values that are saved, then save with `overwrite=True`.\"\n                ) from err\n\n        # Update metadata on block instance for later use.\n        self._block_document_name = block_document.name\n        self._block_document_id = block_document.id\n        return self._block_document_id\n\n    @sync_compatible\n    @instrument_instance_method_call()\n    async def save(\n        self, name: str, overwrite: bool = False, client: \"PrefectClient\" = None\n    ):\n\"\"\"\n        Saves the values of a block as a block document.\n\n        Args:\n            name: User specified name to give saved block document which can later be used to load the\n                block document.\n            overwrite: Boolean value specifying if values should be overwritten if a block document with\n                the specified name already exists.\n\n        \"\"\"\n        document_id = await self._save(name=name, overwrite=overwrite, client=client)\n\n        return document_id\n\n    @classmethod\n    @sync_compatible\n    @inject_client\n    async def delete(\n        cls,\n        name: str,\n        client: \"PrefectClient\" = None,\n    ):\n        block_document, block_document_name = await cls._get_block_document(name)\n\n        await client.delete_block_document(block_document.id)\n\n    def _iter(self, *, include=None, exclude=None, **kwargs):\n        # Injects the `block_type_slug` into serialized payloads for dispatch\n        for key_value in super()._iter(include=include, exclude=exclude, **kwargs):\n            yield key_value\n\n        # Respect inclusion and exclusion still\n        if include and \"block_type_slug\" not in include:\n            return\n        if exclude and \"block_type_slug\" in exclude:\n            return\n\n        yield \"block_type_slug\", self.get_block_type_slug()\n\n    def __new__(cls: Type[Self], **kwargs) -> Self:\n\"\"\"\n        Create an instance of the Block subclass type if a `block_type_slug` is\n        present in the data payload.\n        \"\"\"\n        block_type_slug = kwargs.pop(\"block_type_slug\", None)\n        if block_type_slug:\n            subcls = lookup_type(cls, dispatch_key=block_type_slug)\n            m = super().__new__(subcls)\n            # NOTE: This is a workaround for an obscure issue where copied models were\n            #       missing attributes. This pattern is from Pydantic's\n            #       `BaseModel._copy_and_set_values`.\n            #       The issue this fixes could not be reproduced in unit tests that\n            #       directly targeted dispatch handling and was only observed when\n            #       copying then saving infrastructure blocks on deployment models.\n            object.__setattr__(m, \"__dict__\", kwargs)\n            object.__setattr__(m, \"__fields_set__\", set(kwargs.keys()))\n            return m\n        else:\n            m = super().__new__(cls)\n            object.__setattr__(m, \"__dict__\", kwargs)\n            object.__setattr__(m, \"__fields_set__\", set(kwargs.keys()))\n            return m\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.Config","title":"Config","text":"Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
class Config:\n    extra = \"allow\"\n\n    json_encoders = {SecretDict: lambda v: v.dict()}\n\n    @staticmethod\n    def schema_extra(schema: Dict[str, Any], model: Type[\"Block\"]):\n\"\"\"\n        Customizes Pydantic's schema generation feature to add blocks related information.\n        \"\"\"\n        schema[\"block_type_slug\"] = model.get_block_type_slug()\n        # Ensures args and code examples aren't included in the schema\n        description = model.get_description()\n        if description:\n            schema[\"description\"] = description\n        else:\n            # Prevent the description of the base class from being included in the schema\n            schema.pop(\"description\", None)\n\n        # create a list of secret field names\n        # secret fields include both top-level keys and dot-delimited nested secret keys\n        # A wildcard (*) means that all fields under a given key are secret.\n        # for example: [\"x\", \"y\", \"z.*\", \"child.a\"]\n        # means the top-level keys \"x\" and \"y\", all keys under \"z\", and the key \"a\" of a block\n        # nested under the \"child\" key are all secret. There is no limit to nesting.\n        secrets = schema[\"secret_fields\"] = []\n        for field in model.__fields__.values():\n            _collect_secret_fields(field.name, field.type_, secrets)\n\n        # create block schema references\n        refs = schema[\"block_schema_references\"] = {}\n        for field in model.__fields__.values():\n            if Block.is_block_class(field.type_):\n                refs[field.name] = field.type_._to_block_schema_reference_dict()\n            if get_origin(field.type_) is Union:\n                for type_ in get_args(field.type_):\n                    if Block.is_block_class(type_):\n                        if isinstance(refs.get(field.name), list):\n                            refs[field.name].append(\n                                type_._to_block_schema_reference_dict()\n                            )\n                        elif isinstance(refs.get(field.name), dict):\n                            refs[field.name] = [\n                                refs[field.name],\n                                type_._to_block_schema_reference_dict(),\n                            ]\n                        else:\n                            refs[field.name] = (\n                                type_._to_block_schema_reference_dict()\n                            )\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.Config.schema_extra","title":"schema_extra staticmethod","text":"

Customizes Pydantic's schema generation feature to add block-related information.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@staticmethod\ndef schema_extra(schema: Dict[str, Any], model: Type[\"Block\"]):\n\"\"\"\n    Customizes Pydantic's schema generation feature to add blocks related information.\n    \"\"\"\n    schema[\"block_type_slug\"] = model.get_block_type_slug()\n    # Ensures args and code examples aren't included in the schema\n    description = model.get_description()\n    if description:\n        schema[\"description\"] = description\n    else:\n        # Prevent the description of the base class from being included in the schema\n        schema.pop(\"description\", None)\n\n    # create a list of secret field names\n    # secret fields include both top-level keys and dot-delimited nested secret keys\n    # A wildcard (*) means that all fields under a given key are secret.\n    # for example: [\"x\", \"y\", \"z.*\", \"child.a\"]\n    # means the top-level keys \"x\" and \"y\", all keys under \"z\", and the key \"a\" of a block\n    # nested under the \"child\" key are all secret. There is no limit to nesting.\n    secrets = schema[\"secret_fields\"] = []\n    for field in model.__fields__.values():\n        _collect_secret_fields(field.name, field.type_, secrets)\n\n    # create block schema references\n    refs = schema[\"block_schema_references\"] = {}\n    for field in model.__fields__.values():\n        if Block.is_block_class(field.type_):\n            refs[field.name] = field.type_._to_block_schema_reference_dict()\n        if get_origin(field.type_) is Union:\n            for type_ in get_args(field.type_):\n                if Block.is_block_class(type_):\n                    if isinstance(refs.get(field.name), list):\n                        refs[field.name].append(\n                            type_._to_block_schema_reference_dict()\n                        )\n                    elif isinstance(refs.get(field.name), dict):\n                        refs[field.name] = [\n                            refs[field.name],\n                            type_._to_block_schema_reference_dict(),\n                        ]\n                    else:\n                        refs[field.name] = (\n                            type_._to_block_schema_reference_dict()\n                        )\n
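For illustration, a minimal sketch with a hypothetical Database block, showing the extra keys this hook adds to a subclass's generated schema (expected values shown as comments):
from pydantic import SecretStr\n\nfrom prefect.blocks.core import Block\n\nclass Database(Block):\n    host: str\n    password: SecretStr\n\nschema = Database.schema()\n\n# Keys added by schema_extra for this hypothetical block\nprint(schema[\"block_type_slug\"])  # database\nprint(schema[\"secret_fields\"])    # [\"password\"]\n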
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_capabilities","title":"get_block_capabilities classmethod","text":"

Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@classmethod\ndef get_block_capabilities(cls) -> FrozenSet[str]:\n\"\"\"\n    Returns the block capabilities for this Block. Recursively collects all block\n    capabilities of all parent classes into a single frozenset.\n    \"\"\"\n    return frozenset(\n        {\n            c\n            for base in (cls,) + cls.__mro__\n            for c in getattr(base, \"_block_schema_capabilities\", []) or []\n        }\n    )\n
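A minimal sketch with hypothetical ReaderBlock and ReadWriteBlock classes, illustrating how capabilities are merged across the class hierarchy:
from prefect.blocks.core import Block\n\nclass ReaderBlock(Block):\n    _block_schema_capabilities = [\"read\"]\n\nclass ReadWriteBlock(ReaderBlock):\n    _block_schema_capabilities = [\"write\"]\n\n# Collects capabilities from the class and all of its parents\nprint(ReadWriteBlock.get_block_capabilities())  # frozenset({\"read\", \"write\"})\n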
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_class_from_key","title":"get_block_class_from_key classmethod","text":"

Retrieve the block class implementation given a key.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@classmethod\ndef get_block_class_from_key(cls: Type[Self], key: str) -> Type[Self]:\n\"\"\"\n    Retrieve the block class implementation given a key.\n    \"\"\"\n    # Ensure collections are imported and have the opportunity to register types\n    # before looking up the block class\n    prefect.plugins.load_prefect_collections()\n\n    return lookup_type(cls, key)\n
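As a sketch, the key is the block type slug; importing a block class first ensures its type is registered (the secret slug below is an assumption based on the default slugified class name):
from prefect.blocks.core import Block\nfrom prefect.blocks.system import Secret  # importing registers the block type\n\n# Look up the registered class by its block type slug\nsecret_cls = Block.get_block_class_from_key(\"secret\")\nassert secret_cls is Secret\n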
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_block_class_from_schema","title":"get_block_class_from_schema classmethod","text":"

Retrieve the block class implementation given a schema.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@classmethod\ndef get_block_class_from_schema(cls: Type[Self], schema: BlockSchema) -> Type[Self]:\n\"\"\"\n    Retrieve the block class implementation given a schema.\n    \"\"\"\n    return cls.get_block_class_from_key(block_schema_to_key(schema))\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_code_example","title":"get_code_example classmethod","text":"

Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@classmethod\ndef get_code_example(cls) -> Optional[str]:\n\"\"\"\n    Returns the code example for the given block. Attempts to parse\n    code example from the class docstring if an override is not provided.\n    \"\"\"\n    code_example = (\n        dedent(cls._code_example) if cls._code_example is not None else None\n    )\n    # If no code example override has been provided, attempt to find a examples\n    # section or an admonition with the annotation \"example\" and use that as the\n    # code example\n    if code_example is None and cls.__doc__ is not None:\n        parsed = cls._parse_docstring()\n        for section in parsed:\n            # Section kind will be \"examples\" if Examples section heading is used.\n            if section.kind == DocstringSectionKind.examples:\n                # Examples sections are made up of smaller sections that need to be\n                # joined with newlines. Smaller sections are represented as tuples\n                # with shape (DocstringSectionKind, str)\n                code_example = \"\\n\".join(\n                    (part[1] for part in section.as_dict().get(\"value\", []))\n                )\n                break\n            # Section kind will be \"admonition\" if Example section heading is used.\n            if section.kind == DocstringSectionKind.admonition:\n                value = section.as_dict().get(\"value\", {})\n                if value.get(\"annotation\") == \"example\":\n                    code_example = value.get(\"description\")\n                    break\n\n    if code_example is None:\n        # If no code example has been specified or extracted from the class\n        # docstring, generate a sensible default\n        code_example = cls._generate_code_example()\n\n    return code_example\n
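A small sketch with a hypothetical Weather block that defines no docstring example, so the generated default snippet is returned:
from prefect.blocks.core import Block\n\nclass Weather(Block):\n    temperature: float\n\n# No override and no docstring example, so a default\n# Weather.load(...) snippet is generated\nprint(Weather.get_code_example())\n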
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.get_description","title":"get_description classmethod","text":"

Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@classmethod\ndef get_description(cls) -> Optional[str]:\n\"\"\"\n    Returns the description for the current block. Attempts to parse\n    description from class docstring if an override is not defined.\n    \"\"\"\n    description = cls._description\n    # If no description override has been provided, find the first text section\n    # and use that as the description\n    if description is None and cls.__doc__ is not None:\n        parsed = cls._parse_docstring()\n        parsed_description = next(\n            (\n                section.as_dict().get(\"value\")\n                for section in parsed\n                if section.kind == DocstringSectionKind.text\n            ),\n            None,\n        )\n        if isinstance(parsed_description, str):\n            description = parsed_description.strip()\n    return description\n
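A small sketch with a hypothetical Weather block, showing the first text section of its docstring being used as the description:
from prefect.blocks.core import Block\n\nclass Weather(Block):\n\"\"\"Stores a temperature reading for later use.\"\"\"\n\n    temperature: float\n\nprint(Weather.get_description())  # Stores a temperature reading for later use.\n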
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.load","title":"load async classmethod","text":"

Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.

If a block document for a given block type is saved with a different schema than the current class calling load, a warning will be raised.

If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default validate = True.

If the current class schema is a superset of the block document schema, load must be called with validate set to False to prevent a validation error. In this case, the block attributes will default to None and must be set manually and saved to a new block document before the block can be used as expected.

Parameters:

Name Type Description Default name str

The name or slug of the block document. A block document slug is a string with the format <block_type_slug>/<block_document_name> required validate bool

If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by name was saved.

True client PrefectClient

The client to use to load the block document. If not provided, the default client will be injected.

None

Raises:

Type Description ValueError

If the requested block document is not found.

Returns:

Type Description

An instance of the current class hydrated with the data stored in the

block document with the specified name.

Examples:

Load from a Block subclass with a block document name:

class Custom(Block):\n    message: str\n\nCustom(message=\"Hello!\").save(\"my-custom-message\")\n\nloaded_block = Custom.load(\"my-custom-message\")\n

Load from Block with a block document slug:

class Custom(Block):\n    message: str\n\nCustom(message=\"Hello!\").save(\"my-custom-message\")\n\nloaded_block = Block.load(\"custom/my-custom-message\")\n

Migrate a block document to a new schema:

# original class\nclass Custom(Block):\n    message: str\n\nCustom(message=\"Hello!\").save(\"my-custom-message\")\n\n# Updated class with new required field\nclass Custom(Block):\n    message: str\n    number_of_ducks: int\n\nloaded_block = Custom.load(\"my-custom-message\", validate=False)\n\n# Prints UserWarning about schema mismatch\n\nloaded_block.number_of_ducks = 42\n\nloaded_block.save(\"my-custom-message\", overwrite=True)\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@classmethod\n@sync_compatible\n@inject_client\nasync def load(\n    cls,\n    name: str,\n    validate: bool = True,\n    client: \"PrefectClient\" = None,\n):\n\"\"\"\n    Retrieves data from the block document with the given name for the block type\n    that corresponds with the current class and returns an instantiated version of\n    the current class with the data stored in the block document.\n\n    If a block document for a given block type is saved with a different schema\n    than the current class calling `load`, a warning will be raised.\n\n    If the current class schema is a subset of the block document schema, the block\n    can be loaded as normal using the default `validate = True`.\n\n    If the current class schema is a superset of the block document schema, `load`\n    must be called with `validate` set to False to prevent a validation error. In\n    this case, the block attributes will default to `None` and must be set manually\n    and saved to a new block document before the block can be used as expected.\n\n    Args:\n        name: The name or slug of the block document. A block document slug is a\n            string with the format <block_type_slug>/<block_document_name>\n        validate: If False, the block document will be loaded without Pydantic\n            validating the block schema. This is useful if the block schema has\n            changed client-side since the block document referred to by `name` was saved.\n        client: The client to use to load the block document. If not provided, the\n            default client will be injected.\n\n    Raises:\n        ValueError: If the requested block document is not found.\n\n    Returns:\n        An instance of the current class hydrated with the data stored in the\n        block document with the specified name.\n\n    Examples:\n        Load from a Block subclass with a block document name:\n        ```python\n        class Custom(Block):\n            message: str\n\n        Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n        loaded_block = Custom.load(\"my-custom-message\")\n        ```\n\n        Load from Block with a block document slug:\n        ```python\n        class Custom(Block):\n            message: str\n\n        Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n        loaded_block = Block.load(\"custom/my-custom-message\")\n        ```\n\n        Migrate a block document to a new schema:\n        ```python\n        # original class\n        class Custom(Block):\n            message: str\n\n        Custom(message=\"Hello!\").save(\"my-custom-message\")\n\n        # Updated class with new required field\n        class Custom(Block):\n            message: str\n            number_of_ducks: int\n\n        loaded_block = Custom.load(\"my-custom-message\", validate=False)\n\n        # Prints UserWarning about schema mismatch\n\n        loaded_block.number_of_ducks = 42\n\n        loaded_block.save(\"my-custom-message\", overwrite=True)\n        ```\n    \"\"\"\n    block_document, block_document_name = await cls._get_block_document(name)\n\n    try:\n        return cls._from_block_document(block_document)\n    except ValidationError as e:\n        if not validate:\n            missing_fields = tuple(err[\"loc\"][0] for err in e.errors())\n            missing_block_data = {field: None for field in missing_fields}\n            warnings.warn(\n                f\"Could not fully load {block_document_name!r} of block type\"\n                f\" {cls._block_type_slug!r} - this is 
likely because one or more\"\n                \" required fields were added to the schema for\"\n                f\" {cls.__name__!r} that did not exist on the class when this block\"\n                \" was last saved. Please specify values for new field(s):\"\n                f\" {listrepr(missing_fields)}, then run\"\n                f' `{cls.__name__}.save(\"{block_document_name}\", overwrite=True)`,'\n                \" and load this block again before attempting to use it.\"\n            )\n            return cls.construct(**block_document.data, **missing_block_data)\n        raise RuntimeError(\n            f\"Unable to load {block_document_name!r} of block type\"\n            f\" {cls._block_type_slug!r} due to failed validation. To load without\"\n            \" validation, try loading again with `validate=False`.\"\n        ) from e\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.register_type_and_schema","title":"register_type_and_schema async classmethod","text":"

Makes the block available for configuration with the current Prefect API. Recursively registers all nested blocks. Registration is idempotent.

Parameters:

Name Type Description Default client PrefectClient

Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided.

None Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@classmethod\n@sync_compatible\n@inject_client\nasync def register_type_and_schema(cls, client: \"PrefectClient\" = None):\n\"\"\"\n    Makes block available for configuration with current Prefect API.\n    Recursively registers all nested blocks. Registration is idempotent.\n\n    Args:\n        client: Optional client to use for registering type and schema with the\n            Prefect API. A new client will be created and used if one is not\n            provided.\n    \"\"\"\n    if cls.__name__ == \"Block\":\n        raise InvalidBlockRegistration(\n            \"`register_type_and_schema` should be called on a Block \"\n            \"subclass and not on the Block class directly.\"\n        )\n    if ABC in getattr(cls, \"__bases__\", []):\n        raise InvalidBlockRegistration(\n            \"`register_type_and_schema` should be called on a Block \"\n            \"subclass and not on a Block interface class directly.\"\n        )\n\n    for field in cls.__fields__.values():\n        if Block.is_block_class(field.type_):\n            await field.type_.register_type_and_schema(client=client)\n        if get_origin(field.type_) is Union:\n            for type_ in get_args(field.type_):\n                if Block.is_block_class(type_):\n                    await type_.register_type_and_schema(client=client)\n\n    try:\n        block_type = await client.read_block_type_by_slug(\n            slug=cls.get_block_type_slug()\n        )\n        cls._block_type_id = block_type.id\n        local_block_type = cls._to_block_type()\n        if _should_update_block_type(\n            local_block_type=local_block_type, server_block_type=block_type\n        ):\n            await client.update_block_type(\n                block_type_id=block_type.id, block_type=local_block_type\n            )\n    except prefect.exceptions.ObjectNotFound:\n        block_type = await client.create_block_type(block_type=cls._to_block_type())\n        cls._block_type_id = block_type.id\n\n    try:\n        block_schema = await client.read_block_schema_by_checksum(\n            checksum=cls._calculate_schema_checksum(),\n            version=cls.get_block_schema_version(),\n        )\n    except prefect.exceptions.ObjectNotFound:\n        block_schema = await client.create_block_schema(\n            block_schema=cls._to_block_schema(block_type_id=block_type.id)\n        )\n\n    cls._block_schema_id = block_schema.id\n
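A minimal sketch of registering a hypothetical Database block; this assumes the environment is already configured to reach a Prefect API:
from prefect.blocks.core import Block\n\nclass Database(Block):\n    host: str\n    port: int = 5432\n\n# Idempotent: creates the block type and schema if they do not already exist\nDatabase.register_type_and_schema()\n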
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.Block.save","title":"save async","text":"

Saves the values of a block as a block document.

Parameters:

Name Type Description Default name str

User-specified name to give the saved block document, which can later be used to load the block document.

required overwrite bool

Boolean value specifying if values should be overwritten if a block document with the specified name already exists.

False Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
@sync_compatible\n@instrument_instance_method_call()\nasync def save(\n    self, name: str, overwrite: bool = False, client: \"PrefectClient\" = None\n):\n\"\"\"\n    Saves the values of a block as a block document.\n\n    Args:\n        name: User specified name to give saved block document which can later be used to load the\n            block document.\n        overwrite: Boolean value specifying if values should be overwritten if a block document with\n            the specified name already exists.\n\n    \"\"\"\n    document_id = await self._save(name=name, overwrite=overwrite, client=client)\n\n    return document_id\n
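A minimal sketch of saving a hypothetical Database block and later overwriting the stored values (the block document name prod-db is a placeholder):
from prefect.blocks.core import Block\n\nclass Database(Block):\n    host: str\n    port: int = 5432\n\nDatabase(host=\"db.example.com\").save(\"prod-db\")\n\n# Saving again under the same name requires overwrite=True\nDatabase(host=\"db.example.com\", port=5433).save(\"prod-db\", overwrite=True)\n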
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.InvalidBlockRegistration","title":"InvalidBlockRegistration","text":"

Bases: Exception

Raised on attempted registration of the base Block class or a Block interface class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
class InvalidBlockRegistration(Exception):\n\"\"\"\n    Raised on attempted registration of the base Block\n    class or a Block interface class\n    \"\"\"\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/core/#prefect.blocks.core.block_schema_to_key","title":"block_schema_to_key","text":"

Defines the unique key used to look up the Block class for a given schema.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/core.py
def block_schema_to_key(schema: BlockSchema) -> str:\n\"\"\"\n    Defines the unique key used to lookup the Block class for a given schema.\n    \"\"\"\n    return f\"{schema.block_type.slug}\"\n
","tags":["Python API","blocks"]},{"location":"api-ref/prefect/blocks/kubernetes/","title":"prefect.blocks.kubernetes","text":"","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes","title":"prefect.blocks.kubernetes","text":"","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig","title":"KubernetesClusterConfig","text":"

Bases: Block

Stores configuration for interaction with Kubernetes clusters.

See from_file for creation.

Attributes:

Name Type Description config Dict

The entire loaded YAML contents of a kubectl config file

context_name str

The name of the kubectl context to use

Example

Load a saved Kubernetes cluster config:

from prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/kubernetes.py
class KubernetesClusterConfig(Block):\n\"\"\"\n    Stores configuration for interaction with Kubernetes clusters.\n\n    See `from_file` for creation.\n\n    Attributes:\n        config: The entire loaded YAML contents of a kubectl config file\n        context_name: The name of the kubectl context to use\n\n    Example:\n        Load a saved Kubernetes cluster config:\n        ```python\n        from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n        cluster_config_block = KubernetesClusterConfig.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Kubernetes Cluster Config\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/1zrSeY8DZ1MJZs2BAyyyGk/20445025358491b8b72600b8f996125b/Kubernetes_logo_without_workmark.svg.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig\"\n\n    config: Dict = Field(\n        default=..., description=\"The entire contents of a kubectl config file.\"\n    )\n    context_name: str = Field(\n        default=..., description=\"The name of the kubectl context to use.\"\n    )\n\n    @validator(\"config\", pre=True)\n    def parse_yaml_config(cls, value):\n        if isinstance(value, str):\n            return yaml.safe_load(value)\n        return value\n\n    @classmethod\n    def from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n\"\"\"\n        Create a cluster config from the a Kubernetes config file.\n\n        By default, the current context in the default Kubernetes config file will be\n        used.\n\n        An alternative file or context may be specified.\n\n        The entire config file will be loaded and stored.\n        \"\"\"\n        kube_config = kubernetes.config.kube_config\n\n        path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n        path = path.expanduser().resolve()\n\n        # Determine the context\n        existing_contexts, current_context = kube_config.list_kube_config_contexts(\n            config_file=str(path)\n        )\n        context_names = {ctx[\"name\"] for ctx in existing_contexts}\n        if context_name:\n            if context_name not in context_names:\n                raise ValueError(\n                    f\"Context {context_name!r} not found. \"\n                    f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n                )\n        else:\n            context_name = current_context[\"name\"]\n\n        # Load the entire config file\n        config_file_contents = path.read_text()\n        config_dict = yaml.safe_load(config_file_contents)\n\n        return cls(config=config_dict, context_name=context_name)\n\n    def get_api_client(self) -> \"ApiClient\":\n\"\"\"\n        Returns a Kubernetes API client for this cluster config.\n        \"\"\"\n        return kubernetes.config.kube_config.new_client_from_config_dict(\n            config_dict=self.config, context=self.context_name\n        )\n\n    def configure_client(self) -> None:\n\"\"\"\n        Activates this cluster configuration by loading the configuration into the\n        Kubernetes Python client. After calling this, Kubernetes API clients can use\n        this config's context.\n        \"\"\"\n        kubernetes.config.kube_config.load_kube_config_from_dict(\n            config_dict=self.config, context=self.context_name\n        )\n
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig.configure_client","title":"configure_client","text":"

Activates this cluster configuration by loading the configuration into the Kubernetes Python client. After calling this, Kubernetes API clients can use this config's context.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/kubernetes.py
def configure_client(self) -> None:\n\"\"\"\n    Activates this cluster configuration by loading the configuration into the\n    Kubernetes Python client. After calling this, Kubernetes API clients can use\n    this config's context.\n    \"\"\"\n    kubernetes.config.kube_config.load_kube_config_from_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
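For example, a saved cluster config can be activated before other Kubernetes tooling runs (the block document name below is a placeholder):
from prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"my-cluster-config\")\n\n# Subsequent Kubernetes API clients use this config's context\ncluster_config_block.configure_client()\n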
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig.from_file","title":"from_file classmethod","text":"

Create a cluster config from a Kubernetes config file.

By default, the current context in the default Kubernetes config file will be used.

An alternative file or context may be specified.

The entire config file will be loaded and stored.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/kubernetes.py
@classmethod\ndef from_file(cls: Type[Self], path: Path = None, context_name: str = None) -> Self:\n\"\"\"\n    Create a cluster config from a Kubernetes config file.\n\n    By default, the current context in the default Kubernetes config file will be\n    used.\n\n    An alternative file or context may be specified.\n\n    The entire config file will be loaded and stored.\n    \"\"\"\n    kube_config = kubernetes.config.kube_config\n\n    path = Path(path or kube_config.KUBE_CONFIG_DEFAULT_LOCATION)\n    path = path.expanduser().resolve()\n\n    # Determine the context\n    existing_contexts, current_context = kube_config.list_kube_config_contexts(\n        config_file=str(path)\n    )\n    context_names = {ctx[\"name\"] for ctx in existing_contexts}\n    if context_name:\n        if context_name not in context_names:\n            raise ValueError(\n                f\"Context {context_name!r} not found. \"\n                f\"Specify one of: {listrepr(context_names, sep=', ')}.\"\n            )\n    else:\n        context_name = current_context[\"name\"]\n\n    # Load the entire config file\n    config_file_contents = path.read_text()\n    config_dict = yaml.safe_load(config_file_contents)\n\n    return cls(config=config_dict, context_name=context_name)\n
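A short sketch of creating and saving a cluster config from a kubeconfig file; the path, context name, and block document name are placeholders for illustration:
from prefect.blocks.kubernetes import KubernetesClusterConfig\n\n# Current context from the default kubeconfig location\ncluster_config = KubernetesClusterConfig.from_file()\n\n# Or an explicit file and context\ncluster_config = KubernetesClusterConfig.from_file(\n    path=\"~/.kube/config\", context_name=\"my-context\"\n)\n\ncluster_config.save(\"my-cluster-config\")\n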
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/kubernetes/#prefect.blocks.kubernetes.KubernetesClusterConfig.get_api_client","title":"get_api_client","text":"

Returns a Kubernetes API client for this cluster config.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/kubernetes.py
def get_api_client(self) -> \"ApiClient\":\n\"\"\"\n    Returns a Kubernetes API client for this cluster config.\n    \"\"\"\n    return kubernetes.config.kube_config.new_client_from_config_dict(\n        config_dict=self.config, context=self.context_name\n    )\n
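As a sketch, the returned client can be passed to the Kubernetes Python client's API classes; this assumes the saved config points at a reachable cluster and that the block document name below exists:
from kubernetes.client import CoreV1Api\n\nfrom prefect.blocks.kubernetes import KubernetesClusterConfig\n\ncluster_config_block = KubernetesClusterConfig.load(\"my-cluster-config\")\n\napi_client = cluster_config_block.get_api_client()\ncore_v1 = CoreV1Api(api_client=api_client)\nprint(core_v1.list_namespace())\n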
","tags":["Python API","blocks","Kubernetes"]},{"location":"api-ref/prefect/blocks/notifications/","title":"prefect.blocks.notifications","text":"","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications","title":"prefect.blocks.notifications","text":"","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.AbstractAppriseNotificationBlock","title":"AbstractAppriseNotificationBlock","text":"

Bases: NotificationBlock, ABC

An abstract class for sending notifications using Apprise.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class AbstractAppriseNotificationBlock(NotificationBlock, ABC):\n\"\"\"\n    An abstract class for sending notifications using Apprise.\n    \"\"\"\n\n    notify_type: Literal[\"prefect_default\", \"info\", \"success\", \"warning\", \"failure\"] = (\n        Field(\n            default=PREFECT_NOTIFY_TYPE_DEFAULT,\n            description=(\n                \"The type of notification being performed; the prefect_default \"\n                \"is a plain notification that does not attach an image.\"\n            ),\n        )\n    )\n\n    def __init__(self, *args, **kwargs):\n        import apprise\n\n        if PREFECT_NOTIFY_TYPE_DEFAULT not in apprise.NOTIFY_TYPES:\n            apprise.NOTIFY_TYPES += (PREFECT_NOTIFY_TYPE_DEFAULT,)\n\n        super().__init__(*args, **kwargs)\n\n    def _start_apprise_client(self, url: SecretStr):\n        from apprise import Apprise, AppriseAsset\n\n        # A custom `AppriseAsset` that ensures Prefect Notifications\n        # appear correctly across multiple messaging platforms\n        prefect_app_data = AppriseAsset(\n            app_id=\"Prefect Notifications\",\n            app_desc=\"Prefect Notifications\",\n            app_url=\"https://prefect.io\",\n        )\n\n        self._apprise_client = Apprise(asset=prefect_app_data)\n        self._apprise_client.add(url.get_secret_value())\n\n    def block_initialization(self) -> None:\n        self._start_apprise_client(self.url)\n\n    @sync_compatible\n    @instrument_instance_method_call()\n    async def notify(self, body: str, subject: Optional[str] = None):\n        await self._apprise_client.async_notify(\n            body=body, title=subject, notify_type=self.notify_type\n        )\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.AppriseNotificationBlock","title":"AppriseNotificationBlock","text":"

Bases: AbstractAppriseNotificationBlock, ABC

A base class for sending notifications using Apprise, through webhook URLs.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class AppriseNotificationBlock(AbstractAppriseNotificationBlock, ABC):\n\"\"\"\n    A base class for sending notifications using Apprise, through webhook URLs.\n    \"\"\"\n\n    _documentation_url = \"https://docs.prefect.io/ui/notifications/\"\n    url: SecretStr = Field(\n        default=...,\n        title=\"Webhook URL\",\n        description=\"Incoming webhook URL used to send notifications.\",\n        example=\"https://hooks.example.com/XXX\",\n    )\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.CustomWebhookNotificationBlock","title":"CustomWebhookNotificationBlock","text":"

Bases: NotificationBlock

Enables sending notifications via any custom webhook.

Any nested string param containing {{key}} will be substituted with the corresponding value from context/secrets.

Context values include: subject, body and name.

Examples:

Load a saved custom webhook and send a message:

from prefect.blocks.notifications import CustomWebhookNotificationBlock\n\ncustom_webhook_block = CustomWebhookNotificationBlock.load(\"BLOCK_NAME\")\n\ncustom_webhook_block.notify(\"Hello from Prefect!\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class CustomWebhookNotificationBlock(NotificationBlock):\n\"\"\"\n    Enables sending notifications via any custom webhook.\n\n    All nested string param contains `{{key}}` will be substituted with value from context/secrets.\n\n    Context values include: `subject`, `body` and `name`.\n\n    Examples:\n        Load a saved custom webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import CustomWebhookNotificationBlock\n\n        custom_webhook_block = CustomWebhookNotificationBlock.load(\"BLOCK_NAME\")\n\n        custom_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Custom Webhook\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/6ciCsTFsvUAiiIvTllMfOU/627e9513376ca457785118fbba6a858d/webhook_icon_138018.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.CustomWebhookNotificationBlock\"\n\n    name: str = Field(title=\"Name\", description=\"Name of the webhook.\")\n\n    url: str = Field(\n        title=\"Webhook URL\",\n        description=\"The webhook URL.\",\n        example=\"https://hooks.slack.com/XXX\",\n    )\n\n    method: Literal[\"GET\", \"POST\", \"PUT\", \"PATCH\", \"DELETE\"] = Field(\n        default=\"POST\", description=\"The webhook request method. Defaults to `POST`.\"\n    )\n\n    params: Optional[Dict[str, str]] = Field(\n        default=None, title=\"Query Params\", description=\"Custom query params.\"\n    )\n    json_data: Optional[dict] = Field(\n        default=None,\n        title=\"JSON Data\",\n        description=\"Send json data as payload.\",\n        example=(\n            '{\"text\": \"{{subject}}\\\\n{{body}}\", \"title\": \"{{name}}\", \"token\":'\n            ' \"{{tokenFromSecrets}}\"}'\n        ),\n    )\n    form_data: Optional[Dict[str, str]] = Field(\n        default=None,\n        title=\"Form Data\",\n        description=(\n            \"Send form data as payload. Should not be used together with _JSON Data_.\"\n        ),\n        example=(\n            '{\"text\": \"{{subject}}\\\\n{{body}}\", \"title\": \"{{name}}\", \"token\":'\n            ' \"{{tokenFromSecrets}}\"}'\n        ),\n    )\n\n    headers: Optional[Dict[str, str]] = Field(None, description=\"Custom headers.\")\n    cookies: Optional[Dict[str, str]] = Field(None, description=\"Custom cookies.\")\n\n    timeout: float = Field(\n        default=10, description=\"Request timeout in seconds. 
Defaults to 10.\"\n    )\n\n    secrets: SecretDict = Field(\n        default_factory=lambda: SecretDict(dict()),\n        title=\"Custom Secret Values\",\n        description=\"A dictionary of secret values to be substituted in other configs.\",\n        example='{\"tokenFromSecrets\":\"SomeSecretToken\"}',\n    )\n\n    def _build_request_args(self, body: str, subject: Optional[str]):\n\"\"\"Build kwargs for httpx.AsyncClient.request\"\"\"\n        # prepare values\n        values = self.secrets.get_secret_value()\n        # use 'null' when subject is None\n        values.update(\n            {\n                \"subject\": \"null\" if subject is None else subject,\n                \"body\": body,\n                \"name\": self.name,\n            }\n        )\n        # do substution\n        return apply_values(\n            {\n                \"method\": self.method,\n                \"url\": self.url,\n                \"params\": self.params,\n                \"data\": self.form_data,\n                \"json\": self.json_data,\n                \"headers\": self.headers,\n                \"cookies\": self.cookies,\n                \"timeout\": self.timeout,\n            },\n            values,\n        )\n\n    def block_initialization(self) -> None:\n        # check form_data and json_data\n        if self.form_data is not None and self.json_data is not None:\n            raise ValueError(\"both `Form Data` and `JSON Data` provided\")\n        allowed_keys = {\"subject\", \"body\", \"name\"}.union(\n            self.secrets.get_secret_value().keys()\n        )\n        # test template to raise a error early\n        for name in [\"url\", \"params\", \"form_data\", \"json_data\", \"headers\", \"cookies\"]:\n            template = getattr(self, name)\n            if template is None:\n                continue\n            # check for placeholders not in predefined keys and secrets\n            placeholders = find_placeholders(template)\n            for placeholder in placeholders:\n                if placeholder.name not in allowed_keys:\n                    raise KeyError(f\"{name}/{placeholder}\")\n\n    @sync_compatible\n    @instrument_instance_method_call()\n    async def notify(self, body: str, subject: Optional[str] = None):\n        import httpx\n\n        # make request with httpx\n        client = httpx.AsyncClient(headers={\"user-agent\": \"Prefect Notifications\"})\n        resp = await client.request(**self._build_request_args(body, subject))\n        resp.raise_for_status()\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MattermostWebhook","title":"MattermostWebhook","text":"

Bases: AbstractAppriseNotificationBlock

Enables sending notifications via a provided Mattermost webhook. See the Apprise notify_Mattermost docs.

Examples:

Load a saved Mattermost webhook and send a message:

from prefect.blocks.notifications import MattermostWebhook\n\nmattermost_webhook_block = MattermostWebhook.load(\"BLOCK_NAME\")\n\nmattermost_webhook_block.notify(\"Hello from Prefect!\")\n
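
As a minimal sketch with placeholder values for the hostname, token, and channel, a Mattermost webhook block could be created and saved like this:

from prefect.blocks.notifications import MattermostWebhook\n\n# illustrative placeholders: hostname, webhook token, and channel\nmattermost_block = MattermostWebhook(\n    hostname=\"Mattermost.example.com\",\n    token=\"your-webhook-token\",\n    channels=[\"general\"],\n)\nmattermost_block.save(\"BLOCK_NAME\", overwrite=True)\n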

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class MattermostWebhook(AbstractAppriseNotificationBlock):\n\"\"\"\n    Enables sending notifications via a provided Mattermost webhook.\n    See [Apprise notify_Mattermost docs](https://github.com/caronc/apprise/wiki/Notify_Mattermost) # noqa\n\n\n    Examples:\n        Load a saved Mattermost webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import MattermostWebhook\n\n        mattermost_webhook_block = MattermostWebhook.load(\"BLOCK_NAME\")\n\n        mattermost_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _description = \"Enables sending notifications via a provided Mattermost webhook.\"\n    _block_type_name = \"Mattermost Webhook\"\n    _block_type_slug = \"mattermost-webhook\"\n    _logo_url = \"https://images.ctfassets.net/zscdif0zqppk/3mlbsJDAmK402ER1sf0zUF/a48ac43fa38f395dd5f56c6ed29f22bb/mattermost-logo-png-transparent.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MattermostWebhook\"\n\n    hostname: str = Field(\n        default=...,\n        description=\"The hostname of your Mattermost server.\",\n        example=\"Mattermost.example.com\",\n    )\n\n    token: SecretStr = Field(\n        default=...,\n        description=\"The token associated with your Mattermost webhook.\",\n    )\n\n    botname: Optional[str] = Field(\n        title=\"Bot name\",\n        default=None,\n        description=\"The name of the bot that will send the message.\",\n    )\n\n    channels: Optional[List[str]] = Field(\n        default=None,\n        description=\"The channel(s) you wish to notify.\",\n    )\n\n    include_image: bool = Field(\n        default=False,\n        description=\"Whether to include the Apprise status image in the message.\",\n    )\n\n    path: Optional[str] = Field(\n        default=None,\n        description=\"An optional sub-path specification to append to the hostname.\",\n    )\n\n    port: int = Field(\n        default=8065,\n        description=\"The port of your Mattermost server.\",\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifyMattermost import NotifyMattermost\n\n        url = SecretStr(\n            NotifyMattermost(\n                token=self.token.get_secret_value(),\n                fullpath=self.path,\n                host=self.hostname,\n                botname=self.botname,\n                channels=self.channels,\n                include_image=self.include_image,\n                port=self.port,\n            ).url()\n        )\n        self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MicrosoftTeamsWebhook","title":"MicrosoftTeamsWebhook","text":"

Bases: AppriseNotificationBlock

Enables sending notifications via a provided Microsoft Teams webhook.

Examples:

Load a saved Teams webhook and send a message:

from prefect.blocks.notifications import MicrosoftTeamsWebhook\nteams_webhook_block = MicrosoftTeamsWebhook.load(\"BLOCK_NAME\")\nteams_webhook_block.notify(\"Hello from Prefect!\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class MicrosoftTeamsWebhook(AppriseNotificationBlock):\n\"\"\"\n    Enables sending notifications via a provided Microsoft Teams webhook.\n\n    Examples:\n        Load a saved Teams webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import MicrosoftTeamsWebhook\n        teams_webhook_block = MicrosoftTeamsWebhook.load(\"BLOCK_NAME\")\n        teams_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Microsoft Teams Webhook\"\n    _block_type_slug = \"ms-teams-webhook\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/6n0dSTBzwoVPhX8Vgg37i7/9040e07a62def4f48242be3eae6d3719/teams_logo.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.MicrosoftTeamsWebhook\"\n\n    url: SecretStr = Field(\n        ...,\n        title=\"Webhook URL\",\n        description=\"The Teams incoming webhook URL used to send notifications.\",\n        example=(\n            \"https://your-org.webhook.office.com/webhookb2/XXX/IncomingWebhook/YYY/ZZZ\"\n        ),\n    )\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.OpsgenieWebhook","title":"OpsgenieWebhook","text":"

Bases: AbstractAppriseNotificationBlock

Enables sending notifications via a provided Opsgenie webhook. See Apprise notify_opsgenie docs for more info on formatting the URL.

Examples:

Load a saved Opsgenie webhook and send a message:

from prefect.blocks.notifications import OpsgenieWebhook\nopsgenie_webhook_block = OpsgenieWebhook.load(\"BLOCK_NAME\")\nopsgenie_webhook_block.notify(\"Hello from Prefect!\")\n
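
As a minimal sketch with placeholder values for the API key and target team, an Opsgenie webhook block could be created and saved like this:

from prefect.blocks.notifications import OpsgenieWebhook\n\n# illustrative placeholders: Opsgenie API key and team name\nopsgenie_block = OpsgenieWebhook(\n    apikey=\"your-opsgenie-api-key\",\n    target_team=[\"ops-team\"],\n)\nopsgenie_block.save(\"BLOCK_NAME\", overwrite=True)\n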

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class OpsgenieWebhook(AbstractAppriseNotificationBlock):\n\"\"\"\n    Enables sending notifications via a provided Opsgenie webhook.\n    See [Apprise notify_opsgenie docs](https://github.com/caronc/apprise/wiki/Notify_opsgenie)\n    for more info on formatting the URL.\n\n    Examples:\n        Load a saved Opsgenie webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import OpsgenieWebhook\n        opsgenie_webhook_block = OpsgenieWebhook.load(\"BLOCK_NAME\")\n        opsgenie_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _description = \"Enables sending notifications via a provided Opsgenie webhook.\"\n\n    _block_type_name = \"Opsgenie Webhook\"\n    _block_type_slug = \"opsgenie-webhook\"\n    _logo_url = \"https://images.ctfassets.net/sahxz1jinscj/3habq8fTzmplh7Ctkppk4/590cecb73f766361fcea9223cd47bad8/opsgenie.png\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.OpsgenieWebhook\"\n\n    apikey: SecretStr = Field(\n        default=...,\n        title=\"API Key\",\n        description=\"The API Key associated with your Opsgenie account.\",\n    )\n\n    target_user: Optional[List] = Field(\n        default=None, description=\"The user(s) you wish to notify.\"\n    )\n\n    target_team: Optional[List] = Field(\n        default=None, description=\"The team(s) you wish to notify.\"\n    )\n\n    target_schedule: Optional[List] = Field(\n        default=None, description=\"The schedule(s) you wish to notify.\"\n    )\n\n    target_escalation: Optional[List] = Field(\n        default=None, description=\"The escalation(s) you wish to notify.\"\n    )\n\n    region_name: Literal[\"us\", \"eu\"] = Field(\n        default=\"us\", description=\"The 2-character region code.\"\n    )\n\n    batch: bool = Field(\n        default=False,\n        description=\"Notify all targets in batches (instead of individually).\",\n    )\n\n    tags: Optional[List] = Field(\n        default=None,\n        description=(\n            \"A comma-separated list of tags you can associate with your Opsgenie\"\n            \" message.\"\n        ),\n        example='[\"tag1\", \"tag2\"]',\n    )\n\n    priority: Optional[str] = Field(\n        default=3,\n        description=(\n            \"The priority to associate with the message. 
It is on a scale between 1\"\n            \" (LOW) and 5 (EMERGENCY).\"\n        ),\n    )\n\n    alias: Optional[str] = Field(\n        default=None, description=\"The alias to associate with the message.\"\n    )\n\n    entity: Optional[str] = Field(\n        default=None, description=\"The entity to associate with the message.\"\n    )\n\n    details: Optional[Dict[str, str]] = Field(\n        default=None,\n        description=\"Additional details composed of key/values pairs.\",\n        example='{\"key1\": \"value1\", \"key2\": \"value2\"}',\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifyOpsgenie import NotifyOpsgenie\n\n        targets = []\n        if self.target_user:\n            [targets.append(f\"@{x}\") for x in self.target_user]\n        if self.target_team:\n            [targets.append(f\"#{x}\") for x in self.target_team]\n        if self.target_schedule:\n            [targets.append(f\"*{x}\") for x in self.target_schedule]\n        if self.target_escalation:\n            [targets.append(f\"^{x}\") for x in self.target_escalation]\n        url = SecretStr(\n            NotifyOpsgenie(\n                apikey=self.apikey.get_secret_value(),\n                targets=targets,\n                region_name=self.region_name,\n                details=self.details,\n                priority=self.priority,\n                alias=self.alias,\n                entity=self.entity,\n                batch=self.batch,\n                tags=self.tags,\n            ).url()\n        )\n        self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.PagerDutyWebHook","title":"PagerDutyWebHook","text":"

Bases: AbstractAppriseNotificationBlock

Enables sending notifications via a provided PagerDuty webhook. See Apprise notify_pagerduty docs for more info on formatting the URL.

Examples:

Load a saved PagerDuty webhook and send a message:

from prefect.blocks.notifications import PagerDutyWebHook\npagerduty_webhook_block = PagerDutyWebHook.load(\"BLOCK_NAME\")\npagerduty_webhook_block.notify(\"Hello from Prefect!\")\n
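
As a minimal sketch with placeholder values for the integration (routing) key and API key, a PagerDuty webhook block could be created and saved like this:

from prefect.blocks.notifications import PagerDutyWebHook\n\n# illustrative placeholders: routing key and API key\npagerduty_block = PagerDutyWebHook(\n    integration_key=\"your-routing-key\",\n    api_key=\"your-api-key\",\n)\npagerduty_block.save(\"BLOCK_NAME\", overwrite=True)\n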

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class PagerDutyWebHook(AbstractAppriseNotificationBlock):\n\"\"\"\n    Enables sending notifications via a provided PagerDuty webhook.\n    See [Apprise notify_pagerduty docs](https://github.com/caronc/apprise/wiki/Notify_pagerduty)\n    for more info on formatting the URL.\n\n    Examples:\n        Load a saved PagerDuty webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import PagerDutyWebHook\n        pagerduty_webhook_block = PagerDutyWebHook.load(\"BLOCK_NAME\")\n        pagerduty_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _description = \"Enables sending notifications via a provided PagerDuty webhook.\"\n\n    _block_type_name = \"Pager Duty Webhook\"\n    _block_type_slug = \"pager-duty-webhook\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/6FHJ4Lcozjfl1yDPxCvQDT/c2f6bdf47327271c068284897527f3da/PagerDuty-Logo.wine.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.PagerDutyWebHook\"\n\n    # The default cannot be prefect_default because NotifyPagerDuty's\n    # PAGERDUTY_SEVERITY_MAP only has these notify types defined as keys\n    notify_type: Literal[\"info\", \"success\", \"warning\", \"failure\"] = Field(\n        default=\"info\", description=\"The severity of the notification.\"\n    )\n\n    integration_key: SecretStr = Field(\n        default=...,\n        description=(\n            \"This can be found on the Events API V2 \"\n            \"integration's detail page, and is also referred to as a Routing Key. \"\n            \"This must be provided alongside `api_key`, but will error if provided \"\n            \"alongside `url`.\"\n        ),\n    )\n\n    api_key: SecretStr = Field(\n        default=...,\n        title=\"API Key\",\n        description=(\n            \"This can be found under Integrations. 
\"\n            \"This must be provided alongside `integration_key`, but will error if \"\n            \"provided alongside `url`.\"\n        ),\n    )\n\n    source: Optional[str] = Field(\n        default=\"Prefect\", description=\"The source string as part of the payload.\"\n    )\n\n    component: str = Field(\n        default=\"Notification\",\n        description=\"The component string as part of the payload.\",\n    )\n\n    group: Optional[str] = Field(\n        default=None, description=\"The group string as part of the payload.\"\n    )\n\n    class_id: Optional[str] = Field(\n        default=None,\n        title=\"Class ID\",\n        description=\"The class string as part of the payload.\",\n    )\n\n    region_name: Literal[\"us\", \"eu\"] = Field(\n        default=\"us\", description=\"The region name.\"\n    )\n\n    clickable_url: Optional[AnyHttpUrl] = Field(\n        default=None,\n        title=\"Clickable URL\",\n        description=\"A clickable URL to associate with the notice.\",\n    )\n\n    include_image: bool = Field(\n        default=True,\n        description=\"Associate the notification status via a represented icon.\",\n    )\n\n    custom_details: Optional[Dict[str, str]] = Field(\n        default=None,\n        description=\"Additional details to include as part of the payload.\",\n        example='{\"disk_space_left\": \"145GB\"}',\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifyPagerDuty import NotifyPagerDuty\n\n        url = SecretStr(\n            NotifyPagerDuty(\n                apikey=self.api_key.get_secret_value(),\n                integrationkey=self.integration_key.get_secret_value(),\n                source=self.source,\n                component=self.component,\n                group=self.group,\n                class_id=self.class_id,\n                region_name=self.region_name,\n                click=self.clickable_url,\n                include_image=self.include_image,\n                details=self.custom_details,\n            ).url()\n        )\n        self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SendgridEmail","title":"SendgridEmail","text":"

Bases: AbstractAppriseNotificationBlock

Enables sending notifications via any SendGrid account. See the Apprise Notify_sendgrid docs.

Examples:

Load a saved SendgridEmail block and send an email message:

from prefect.blocks.notifications import SendgridEmail\n\nsendgrid_block = SendgridEmail.load(\"BLOCK_NAME\")\n\nsendgrid_block.notify(\"Hello from Prefect!\")\n
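
As a minimal sketch with placeholder values for the API key and email addresses, a SendgridEmail block could be created and saved like this:

from prefect.blocks.notifications import SendgridEmail\n\n# illustrative placeholders: API key, sender, and recipients\nsendgrid_block = SendgridEmail(\n    api_key=\"your-sendgrid-api-key\",\n    sender_email=\"test-support@gmail.com\",\n    to_emails=[\"recipient1@gmail.com\"],\n)\nsendgrid_block.save(\"BLOCK_NAME\", overwrite=True)\n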

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class SendgridEmail(AbstractAppriseNotificationBlock):\n\"\"\"\n    Enables sending notifications via any sendgrid account.\n    See [Apprise Notify_sendgrid docs](https://github.com/caronc/apprise/wiki/Notify_Sendgrid)\n\n    Examples:\n        Load a saved Sendgrid and send a email message:\n        ```python\n        from prefect.blocks.notifications import SendgridEmail\n\n        sendgrid_block = SendgridEmail.load(\"BLOCK_NAME\")\n\n        sendgrid_block.notify(\"Hello from Prefect!\")\n    \"\"\"\n\n    _description = \"Enables sending notifications via Sendgrid email service.\"\n    _block_type_name = \"Sendgrid Email\"\n    _block_type_slug = \"sendgrid-email\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/3PcxFuO9XUqs7wU9MiUBMg/af6affa646899cc1712d14b7fc4c0f1f/email__1_.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SendgridEmail\"\n\n    api_key: SecretStr = Field(\n        default=...,\n        title=\"API Key\",\n        description=\"The API Key associated with your sendgrid account.\",\n    )\n\n    sender_email: str = Field(\n        title=\"Sender email id\",\n        description=\"The sender email id.\",\n        example=\"test-support@gmail.com\",\n    )\n\n    to_emails: List[str] = Field(\n        default=...,\n        title=\"Recipient emails\",\n        description=\"Email ids of all recipients.\",\n        example=\"recipient1@gmail.com\",\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifySendGrid import NotifySendGrid\n\n        url = SecretStr(\n            NotifySendGrid(\n                apikey=self.api_key.get_secret_value(),\n                from_email=self.sender_email,\n                targets=self.to_emails,\n            ).url()\n        )\n\n        self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SlackWebhook","title":"SlackWebhook","text":"

Bases: AppriseNotificationBlock

Enables sending notifications via a provided Slack webhook.

Examples:

Load a saved Slack webhook and send a message:

from prefect.blocks.notifications import SlackWebhook\n\nslack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\nslack_webhook_block.notify(\"Hello from Prefect!\")\n
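
As a minimal sketch with a placeholder webhook URL and block name, a Slack webhook block could be created and saved like this:

from prefect.blocks.notifications import SlackWebhook\n\n# the URL below is a placeholder for your Slack incoming webhook URL\nslack_webhook_block = SlackWebhook(url=\"https://hooks.slack.com/XXX\")\nslack_webhook_block.save(\"BLOCK_NAME\", overwrite=True)\n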

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class SlackWebhook(AppriseNotificationBlock):\n\"\"\"\n    Enables sending notifications via a provided Slack webhook.\n\n    Examples:\n        Load a saved Slack webhook and send a message:\n        ```python\n        from prefect.blocks.notifications import SlackWebhook\n\n        slack_webhook_block = SlackWebhook.load(\"BLOCK_NAME\")\n        slack_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Slack Webhook\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/7dkzINU9r6j44giEFuHuUC/85d4cd321ad60c1b1e898bc3fbd28580/5cb480cd5f1b6d3fbadece79.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.SlackWebhook\"\n\n    url: SecretStr = Field(\n        default=...,\n        title=\"Webhook URL\",\n        description=\"Slack incoming webhook URL used to send notifications.\",\n        example=\"https://hooks.slack.com/XXX\",\n    )\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.TwilioSMS","title":"TwilioSMS","text":"

Bases: AbstractAppriseNotificationBlock

Enables sending notifications via Twilio SMS. Find more on sending Twilio SMS messages in the docs.

Examples:

Load a saved TwilioSMS block and send a message:

from prefect.blocks.notifications import TwilioSMS\ntwilio_webhook_block = TwilioSMS.load(\"BLOCK_NAME\")\ntwilio_webhook_block.notify(\"Hello from Prefect!\")\n
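
As a minimal sketch with placeholder credentials and phone numbers, a TwilioSMS block could be created and saved like this:

from prefect.blocks.notifications import TwilioSMS\n\n# illustrative placeholders: Twilio credentials and phone numbers\ntwilio_block = TwilioSMS(\n    account_sid=\"ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\",\n    auth_token=\"your-auth-token\",\n    from_phone_number=\"18001234567\",\n    to_phone_numbers=[\"18004242424\"],\n)\ntwilio_block.save(\"BLOCK_NAME\", overwrite=True)\n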

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/notifications.py
class TwilioSMS(AbstractAppriseNotificationBlock):\n\"\"\"Enables sending notifications via Twilio SMS.\n    Find more on sending Twilio SMS messages in the [docs](https://www.twilio.com/docs/sms).\n\n    Examples:\n        Load a saved `TwilioSMS` block and send a message:\n        ```python\n        from prefect.blocks.notifications import TwilioSMS\n        twilio_webhook_block = TwilioSMS.load(\"BLOCK_NAME\")\n        twilio_webhook_block.notify(\"Hello from Prefect!\")\n        ```\n    \"\"\"\n\n    _description = \"Enables sending notifications via Twilio SMS.\"\n    _block_type_name = \"Twilio SMS\"\n    _block_type_slug = \"twilio-sms\"\n    _logo_url = \"https://images.ctfassets.net/zscdif0zqppk/YTCgPL6bnK3BczP2gV9md/609283105a7006c57dbfe44ee1a8f313/58482bb9cef1014c0b5e4a31.png?h=250\"  # noqa\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/notifications/#prefect.blocks.notifications.TwilioSMS\"\n\n    account_sid: str = Field(\n        default=...,\n        description=(\n            \"The Twilio Account SID - it can be found on the homepage \"\n            \"of the Twilio console.\"\n        ),\n    )\n\n    auth_token: SecretStr = Field(\n        default=...,\n        description=(\n            \"The Twilio Authentication Token - \"\n            \"it can be found on the homepage of the Twilio console.\"\n        ),\n    )\n\n    from_phone_number: str = Field(\n        default=...,\n        description=\"The valid Twilio phone number to send the message from.\",\n        example=\"18001234567\",\n    )\n\n    to_phone_numbers: List[str] = Field(\n        default=...,\n        description=\"A list of valid Twilio phone number(s) to send the message to.\",\n        # not wrapped in brackets because of the way UI displays examples; in code should be [\"18004242424\"]\n        example=\"18004242424\",\n    )\n\n    def block_initialization(self) -> None:\n        from apprise.plugins.NotifyTwilio import NotifyTwilio\n\n        url = SecretStr(\n            NotifyTwilio(\n                account_sid=self.account_sid,\n                auth_token=self.auth_token.get_secret_value(),\n                source=self.from_phone_number,\n                targets=self.to_phone_numbers,\n            ).url()\n        )\n        self._start_apprise_client(url)\n
","tags":["Python API","blocks","notifications"]},{"location":"api-ref/prefect/blocks/system/","title":"prefect.blocks.system","text":"","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system","title":"prefect.blocks.system","text":"","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.DateTime","title":"DateTime","text":"

Bases: Block

A block that represents a datetime

Attributes:

Name Type Description value pendulum.DateTime

An ISO 8601-compatible datetime value.

Example

Load a stored JSON value:

from prefect.blocks.system import DateTime\n\ndata_time_block = DateTime.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/system.py
class DateTime(Block):\n\"\"\"\n    A block that represents a datetime\n\n    Attributes:\n        value: An ISO 8601-compatible datetime value.\n\n    Example:\n        Load a stored JSON value:\n        ```python\n        from prefect.blocks.system import DateTime\n\n        data_time_block = DateTime.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _block_type_name = \"Date Time\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/1gmljt5UBcAwEXHPnIofcE/0f3cf1da45b8b2df846e142ab52b1778/image21.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.DateTime\"\n\n    value: pendulum.DateTime = Field(\n        default=...,\n        description=\"An ISO 8601-compatible datetime value.\",\n    )\n
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.JSON","title":"JSON","text":"

Bases: Block

A block that represents JSON

Attributes:

Name Type Description value Any

A JSON-compatible value.

Example

Load a stored JSON value:

from prefect.blocks.system import JSON\n\njson_block = JSON.load(\"BLOCK_NAME\")\n
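
As a minimal sketch with a placeholder value and block name, a JSON block could be created and saved so the value can be loaded later:

from prefect.blocks.system import JSON\n\n# store a JSON-compatible value under a block name (placeholder values)\nJSON(value={\"the_answer\": 42}).save(\"BLOCK_NAME\", overwrite=True)\n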

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/system.py
class JSON(Block):\n\"\"\"\n    A block that represents JSON\n\n    Attributes:\n        value: A JSON-compatible value.\n\n    Example:\n        Load a stored JSON value:\n        ```python\n        from prefect.blocks.system import JSON\n\n        json_block = JSON.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/19W3Di10hhb4oma2Qer0x6/764d1e7b4b9974cd268c775a488b9d26/image16.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.JSON\"\n\n    value: Any = Field(default=..., description=\"A JSON-compatible value.\")\n
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.Secret","title":"Secret","text":"

Bases: Block

A block that represents a secret value. The value stored in this block will be obfuscated when this block is logged or shown in the UI.

Attributes:

Name Type Description value SecretStr

A string value that should be kept secret.

Example
from prefect.blocks.system import Secret\n\nsecret_block = Secret.load(\"BLOCK_NAME\")\n\n# Access the stored secret\nsecret_block.get()\n
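
As a minimal sketch with a placeholder value and block name, a Secret block could be created, saved, and read back like this:

from prefect.blocks.system import Secret\n\n# store a secret string under a block name (placeholder value)\nSecret(value=\"my-secret-value\").save(\"BLOCK_NAME\", overwrite=True)\n\n# later, load it and access the plaintext value\nSecret.load(\"BLOCK_NAME\").get()\n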
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/system.py
class Secret(Block):\n\"\"\"\n    A block that represents a secret value. The value stored in this block will be obfuscated when\n    this block is logged or shown in the UI.\n\n    Attributes:\n        value: A string value that should be kept secret.\n\n    Example:\n        ```python\n        from prefect.blocks.system import Secret\n\n        secret_block = Secret.load(\"BLOCK_NAME\")\n\n        # Access the stored secret\n        secret_block.get()\n        ```\n    \"\"\"\n\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5uUmyGBjRejYuGTWbTxz6E/3003e1829293718b3a5d2e909643a331/image8.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.Secret\"\n\n    value: SecretStr = Field(\n        default=..., description=\"A string value that should be kept secret.\"\n    )\n\n    def get(self):\n        return self.value.get_secret_value()\n
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/system/#prefect.blocks.system.String","title":"String","text":"

Bases: Block

A block that represents a string

Attributes:

Name Type Description value str

A string value.

Example

Load a stored string value:

from prefect.blocks.system import String\n\nstring_block = String.load(\"BLOCK_NAME\")\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/system.py
class String(Block):\n\"\"\"\n    A block that represents a string\n\n    Attributes:\n        value: A string value.\n\n    Example:\n        Load a stored string value:\n        ```python\n        from prefect.blocks.system import String\n\n        string_block = String.load(\"BLOCK_NAME\")\n        ```\n    \"\"\"\n\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/4zjrZmh9tBrFiikeB44G4O/2ce1dbbac1c8e356f7c429e0f8bbb58d/image10.png?h=250\"\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/system/#prefect.blocks.system.String\"\n\n    value: str = Field(default=..., description=\"A string value.\")\n
","tags":["Python API","blocks","secret","config","json"]},{"location":"api-ref/prefect/blocks/webhook/","title":"prefect.blocks.webhook","text":"","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/blocks/webhook/#prefect.blocks.webhook","title":"prefect.blocks.webhook","text":"","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/blocks/webhook/#prefect.blocks.webhook.Webhook","title":"Webhook","text":"

Bases: Block

Block that enables calling webhooks.
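
As a minimal sketch (the block name and payload are placeholders), a saved webhook block could be loaded and called from async code like this:

import asyncio\n\nfrom prefect.blocks.webhook import Webhook\n\nasync def main():\n    # BLOCK_NAME and the payload below are placeholders\n    webhook_block = await Webhook.load(\"BLOCK_NAME\")\n    response = await webhook_block.call(payload={\"text\": \"Hello from Prefect!\"})\n    print(response.status_code)\n\nasyncio.run(main())\n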

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/webhook.py
class Webhook(Block):\n\"\"\"\n    Block that enables calling webhooks.\n    \"\"\"\n\n    _block_type_name = \"Webhook\"\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/6ciCsTFsvUAiiIvTllMfOU/627e9513376ca457785118fbba6a858d/webhook_icon_138018.png?h=250\"  # type: ignore\n    _documentation_url = \"https://docs.prefect.io/api-ref/prefect/blocks/webhook/#prefect.blocks.webhook.Webhook\"\n\n    method: Literal[\"GET\", \"POST\", \"PUT\", \"PATCH\", \"DELETE\"] = Field(\n        default=\"POST\", description=\"The webhook request method. Defaults to `POST`.\"\n    )\n\n    url: SecretStr = Field(\n        default=...,\n        title=\"Webhook URL\",\n        description=\"The webhook URL.\",\n        example=\"https://hooks.slack.com/XXX\",\n    )\n\n    headers: SecretDict = Field(\n        default_factory=lambda: SecretDict(dict()),\n        title=\"Webhook Headers\",\n        description=\"A dictionary of headers to send with the webhook request.\",\n    )\n\n    def block_initialization(self):\n        self._client = AsyncClient(transport=_http_transport)\n\n    async def call(self, payload: Optional[dict] = None) -> Response:\n\"\"\"\n        Call the webhook.\n\n        Args:\n            payload: an optional payload to send when calling the webhook.\n        \"\"\"\n        async with self._client:\n            return await self._client.request(\n                method=self.method,\n                url=self.url.get_secret_value(),\n                headers=self.headers.get_secret_value(),\n                json=payload,\n            )\n
","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/blocks/webhook/#prefect.blocks.webhook.Webhook.call","title":"call async","text":"

Call the webhook.

Parameters:

Name Type Description Default payload Optional[dict]

an optional payload to send when calling the webhook.

None Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/blocks/webhook.py
async def call(self, payload: Optional[dict] = None) -> Response:\n\"\"\"\n    Call the webhook.\n\n    Args:\n        payload: an optional payload to send when calling the webhook.\n    \"\"\"\n    async with self._client:\n        return await self._client.request(\n            method=self.method,\n            url=self.url.get_secret_value(),\n            headers=self.headers.get_secret_value(),\n            json=payload,\n        )\n
","tags":["Python API","blocks","webhook"]},{"location":"api-ref/prefect/cli/agent/","title":"prefect.cli.agent","text":"","tags":["Python API","agents","CLI"]},{"location":"api-ref/prefect/cli/agent/#prefect.cli.agent","title":"prefect.cli.agent","text":"

Command line interface for working with agent services

","tags":["Python API","agents","CLI"]},{"location":"api-ref/prefect/cli/agent/#prefect.cli.agent.start","title":"start async","text":"

Start an agent process to poll one or more work queues for flow runs.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/agent.py
@agent_app.command()\nasync def start(\n    # deprecated main argument\n    work_queue: str = typer.Argument(\n        None,\n        show_default=False,\n        help=\"DEPRECATED: A work queue name or ID\",\n    ),\n    work_queues: List[str] = typer.Option(\n        None,\n        \"-q\",\n        \"--work-queue\",\n        help=\"One or more work queue names for the agent to pull from.\",\n    ),\n    work_queue_prefix: List[str] = typer.Option(\n        None,\n        \"-m\",\n        \"--match\",\n        help=(\n            \"Dynamically matches work queue names with the specified prefix for the\"\n            \" agent to pull from,for example `dev-` will match all work queues with a\"\n            \" name that starts with `dev-`\"\n        ),\n    ),\n    work_pool_name: str = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"A work pool name for the agent to pull from.\",\n    ),\n    hide_welcome: bool = typer.Option(False, \"--hide-welcome\"),\n    api: str = SettingsOption(PREFECT_API_URL),\n    run_once: bool = typer.Option(\n        False, help=\"Run the agent loop once, instead of forever.\"\n    ),\n    prefetch_seconds: int = SettingsOption(PREFECT_AGENT_PREFETCH_SECONDS),\n    # deprecated tags\n    tags: List[str] = typer.Option(\n        None,\n        \"-t\",\n        \"--tag\",\n        help=(\n            \"DEPRECATED: One or more optional tags that will be used to create a work\"\n            \" queue. This option will be removed on 2023-02-23.\"\n        ),\n    ),\n    limit: int = typer.Option(\n        None,\n        \"-l\",\n        \"--limit\",\n        help=\"Maximum number of flow runs to start simultaneously.\",\n    ),\n):\n\"\"\"\n    Start an agent process to poll one or more work queues for flow runs.\n    \"\"\"\n    work_queues = work_queues or []\n\n    if work_queue is not None:\n        # try to treat the work_queue as a UUID\n        try:\n            async with get_client() as client:\n                q = await client.read_work_queue(UUID(work_queue))\n                work_queue = q.name\n        # otherwise treat it as a string name\n        except (TypeError, ValueError):\n            pass\n        work_queues.append(work_queue)\n        app.console.print(\n            (\n                \"Agents now support multiple work queues. Instead of passing a single\"\n                \" argument, provide work queue names with the `-q` or `--work-queue`\"\n                f\" flag: `prefect agent start -q {work_queue}`\\n\"\n            ),\n            style=\"blue\",\n        )\n\n    if not work_queues and not tags and not work_queue_prefix and not work_pool_name:\n        exit_with_error(\"No work queues provided!\", style=\"red\")\n    elif bool(work_queues) + bool(tags) + bool(work_queue_prefix) > 1:\n        exit_with_error(\n            \"Only one of `work_queues`, `match`, or `tags` can be provided.\",\n            style=\"red\",\n        )\n    if work_pool_name and tags:\n        exit_with_error(\n            \"`tag` and `pool` options cannot be used together.\", style=\"red\"\n        )\n\n    if tags:\n        work_queue_name = f\"Agent queue {'-'.join(sorted(tags))}\"\n        app.console.print(\n            (\n                \"`tags` are deprecated. For backwards-compatibility with old versions\"\n                \" of Prefect, this agent will create a work queue named\"\n                f\" `{work_queue_name}` that uses legacy tag-based matching. 
This option\"\n                \" will be removed on 2023-02-23.\"\n            ),\n            style=\"red\",\n        )\n\n        async with get_client() as client:\n            try:\n                work_queue = await client.read_work_queue_by_name(work_queue_name)\n                if work_queue.filter is None:\n                    # ensure the work queue has legacy (deprecated) tag-based behavior\n                    await client.update_work_queue(filter=dict(tags=tags))\n            except ObjectNotFound:\n                # if the work queue doesn't already exist, we create it with tags\n                # to enable legacy (deprecated) tag-matching behavior\n                await client.create_work_queue(name=work_queue_name, tags=tags)\n\n        work_queues = [work_queue_name]\n\n    if not hide_welcome:\n        if api:\n            app.console.print(\n                f\"Starting v{prefect.__version__} agent connected to {api}...\"\n            )\n        else:\n            app.console.print(\n                f\"Starting v{prefect.__version__} agent with ephemeral API...\"\n            )\n\n    agent_process_id = os.getpid()\n    setup_signal_handlers_agent(\n        agent_process_id, \"the Prefect agent\", app.console.print\n    )\n\n    async with PrefectAgent(\n        work_queues=work_queues,\n        work_queue_prefix=work_queue_prefix,\n        work_pool_name=work_pool_name,\n        prefetch_seconds=prefetch_seconds,\n        limit=limit,\n    ) as agent:\n        if not hide_welcome:\n            app.console.print(ascii_name)\n            if work_pool_name:\n                app.console.print(\n                    \"Agent started! Looking for work from \"\n                    f\"work pool '{work_pool_name}'...\"\n                )\n            elif work_queue_prefix:\n                app.console.print(\n                    \"Agent started! Looking for work from \"\n                    f\"queue(s) that start with the prefix: {work_queue_prefix}...\"\n                )\n            else:\n                app.console.print(\n                    \"Agent started! Looking for work from \"\n                    f\"queue(s): {', '.join(work_queues)}...\"\n                )\n\n        async with anyio.create_task_group() as tg:\n            tg.start_soon(\n                partial(\n                    critical_service_loop,\n                    agent.get_and_submit_flow_runs,\n                    PREFECT_AGENT_QUERY_INTERVAL.value(),\n                    printer=app.console.print,\n                    run_once=run_once,\n                    jitter_range=0.3,\n                    backoff=4,  # Up to ~1 minute interval during backoff\n                )\n            )\n\n            tg.start_soon(\n                partial(\n                    critical_service_loop,\n                    agent.check_for_cancelled_flow_runs,\n                    PREFECT_AGENT_QUERY_INTERVAL.value() * 2,\n                    printer=app.console.print,\n                    run_once=run_once,\n                    jitter_range=0.3,\n                    backoff=4,\n                )\n            )\n\n    app.console.print(\"Agent stopped!\")\n
","tags":["Python API","agents","CLI"]},{"location":"api-ref/prefect/cli/cloud-webhook/","title":"prefect.cli.cloud_webhook","text":"","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook","title":"prefect.cli.cloud.webhook","text":"

Command line interface for working with webhooks

","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.create","title":"create async","text":"

Create a new Cloud webhook

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def create(\n    webhook_name: str,\n    description: str = typer.Option(\n        \"\", \"--description\", \"-d\", help=\"Description of the webhook\"\n    ),\n    template: str = typer.Option(\n        None, \"--template\", \"-t\", help=\"Jinja2 template expression\"\n    ),\n):\n\"\"\"\n    Create a new Cloud webhook\n    \"\"\"\n    if not template:\n        exit_with_error(\n            \"Please provide a Jinja2 template expression in the --template flag \\nwhich\"\n            ' should define (at minimum) the following attributes: \\n{ \"event\":'\n            ' \"your.event.name\", \"resource\": { \"prefect.resource.id\":'\n            ' \"your.resource.id\" } }'\n            \" \\nhttps://docs.prefect.io/latest/cloud/webhooks/#webhook-templates\"\n        )\n\n    confirm_logged_in()\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        response = await client.request(\n            \"POST\",\n            \"/webhooks/\",\n            json={\n                \"name\": webhook_name,\n                \"description\": description,\n                \"template\": template,\n            },\n        )\n        app.console.print(f'Successfully created webhook {response[\"name\"]}')\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.delete","title":"delete async","text":"

Delete an existing Cloud webhook

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def delete(webhook_id: UUID):\n\"\"\"\n    Delete an existing Cloud webhook\n    \"\"\"\n    confirm_logged_in()\n\n    confirm_delete = typer.confirm(\n        \"Are you sure you want to delete it? This cannot be undone.\"\n    )\n\n    if not confirm_delete:\n        return\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        await client.request(\"DELETE\", f\"/webhooks/{webhook_id}\")\n        app.console.print(f\"Successfully deleted webhook {webhook_id}\")\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.get","title":"get async","text":"

Retrieve a webhook by ID.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def get(webhook_id: UUID):\n\"\"\"\n    Retrieve a webhook by ID.\n    \"\"\"\n    confirm_logged_in()\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        webhook = await client.request(\"GET\", f\"/webhooks/{webhook_id}\")\n        display_table = _render_webhooks_into_table([webhook])\n        app.console.print(display_table)\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.ls","title":"ls async","text":"

Fetch and list all webhooks in your workspace

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def ls():\n\"\"\"\n    Fetch and list all webhooks in your workspace\n    \"\"\"\n    confirm_logged_in()\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        retrieved_webhooks = await client.request(\"POST\", \"/webhooks/filter\")\n        display_table = _render_webhooks_into_table(retrieved_webhooks)\n        app.console.print(display_table)\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.rotate","title":"rotate async","text":"

Rotate the URL for an existing Cloud webhook, in case it has been compromised

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def rotate(webhook_id: UUID):\n\"\"\"\n    Rotate url for an existing Cloud webhook, in case it has been compromised\n    \"\"\"\n    confirm_logged_in()\n\n    confirm_rotate = typer.confirm(\n        \"Are you sure you want to rotate? This will invalidate the old URL.\"\n    )\n\n    if not confirm_rotate:\n        return\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        response = await client.request(\"POST\", f\"/webhooks/{webhook_id}/rotate\")\n        app.console.print(f'Successfully rotated webhook URL to {response[\"slug\"]}')\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.toggle","title":"toggle async","text":"

Toggle the enabled status of an existing Cloud webhook

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def toggle(\n    webhook_id: UUID,\n):\n\"\"\"\n    Toggle the enabled status of an existing Cloud webhook\n    \"\"\"\n    confirm_logged_in()\n\n    status_lookup = {True: \"enabled\", False: \"disabled\"}\n\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        response = await client.request(\"GET\", f\"/webhooks/{webhook_id}\")\n        current_status = response[\"enabled\"]\n        new_status = not current_status\n\n        await client.request(\n            \"PATCH\", f\"/webhooks/{webhook_id}\", json={\"enabled\": new_status}\n        )\n        app.console.print(f\"Webhook is now {status_lookup[new_status]}\")\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud-webhook/#prefect.cli.cloud.webhook.update","title":"update async","text":"

Partially update an existing Cloud webhook

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/webhook.py
@webhook_app.command()\nasync def update(\n    webhook_id: UUID,\n    webhook_name: str = typer.Option(None, \"--name\", \"-n\", help=\"Webhook name\"),\n    description: str = typer.Option(\n        None, \"--description\", \"-d\", help=\"Description of the webhook\"\n    ),\n    template: str = typer.Option(\n        None, \"--template\", \"-t\", help=\"Jinja2 template expression\"\n    ),\n):\n\"\"\"\n    Partially update an existing Cloud webhook\n    \"\"\"\n    confirm_logged_in()\n\n    # The /webhooks API lives inside the /accounts/{id}/workspaces/{id} routing tree\n    async with get_cloud_client(host=PREFECT_API_URL.value()) as client:\n        response = await client.request(\"GET\", f\"/webhooks/{webhook_id}\")\n        update_payload = {\n            \"name\": webhook_name or response[\"name\"],\n            \"description\": description or response[\"description\"],\n            \"template\": template or response[\"template\"],\n        }\n\n        await client.request(\"PUT\", f\"/webhooks/{webhook_id}\", json=update_payload)\n        app.console.print(f\"Successfully updated webhook {webhook_id}\")\n
","tags":["Python API","CLI","events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"api-ref/prefect/cli/cloud/","title":"prefect.cli.cloud","text":"","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud","title":"prefect.cli.cloud","text":"

Command line interface for interacting with Prefect Cloud

","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.login_api","title":"login_api = FastAPI(lifespan=lifespan) module-attribute","text":"

This small API server is used to transmit data from the browser during browser-based login.

","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.check_key_is_valid_for_login","title":"check_key_is_valid_for_login async","text":"

Attempt to use a key to see if it is valid

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/__init__.py
async def check_key_is_valid_for_login(key: str):\n\"\"\"\n    Attempt to use a key to see if it is valid\n    \"\"\"\n    async with get_cloud_client(api_key=key) as client:\n        try:\n            await client.read_workspaces()\n            return True\n        except CloudUnauthorizedError:\n            return False\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.login","title":"login async","text":"

Log in to Prefect Cloud. Creates a new profile configured to use the specified PREFECT_API_KEY. Uses a previously configured profile if it exists.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/__init__.py
@cloud_app.command()\nasync def login(\n    key: Optional[str] = typer.Option(\n        None, \"--key\", \"-k\", help=\"API Key to authenticate with Prefect\"\n    ),\n    workspace_handle: Optional[str] = typer.Option(\n        None,\n        \"--workspace\",\n        \"-w\",\n        help=(\n            \"Full handle of workspace, in format '<account_handle>/<workspace_handle>'\"\n        ),\n    ),\n):\n\"\"\"\n    Log in to Prefect Cloud.\n    Creates a new profile configured to use the specified PREFECT_API_KEY.\n    Uses a previously configured profile if it exists.\n    \"\"\"\n    if not is_interactive() and (not key or not workspace_handle):\n        exit_with_error(\n            \"When not using an interactive terminal, you must supply a `--key` and\"\n            \" `--workspace`.\"\n        )\n\n    profiles = load_profiles()\n    current_profile = get_settings_context().profile\n\n    if key and PREFECT_API_KEY.value() == key:\n        exit_with_success(\"This profile is already authenticated with that key.\")\n\n    already_logged_in_profiles = []\n    for name, profile in profiles.items():\n        profile_key = profile.settings.get(PREFECT_API_KEY)\n        if (\n            # If a key is provided, only show profiles with the same key\n            (key and profile_key == key)\n            # Otherwise, show all profiles with a key set\n            or (not key and profile_key is not None)\n            # Check that the key is usable to avoid suggesting unauthenticated profiles\n            and await check_key_is_valid_for_login(profile_key)\n        ):\n            already_logged_in_profiles.append(name)\n\n    current_profile_is_logged_in = current_profile.name in already_logged_in_profiles\n\n    if current_profile_is_logged_in:\n        app.console.print(\"It looks like you're already authenticated on this profile.\")\n        should_reauth = typer.confirm(\n            \"? Would you like to reauthenticate?\", default=False\n        )\n        if not should_reauth:\n            app.console.print(\"Using the existing authentication on this profile.\")\n            key = PREFECT_API_KEY.value()\n\n    elif already_logged_in_profiles:\n        app.console.print(\n            \"It looks like you're already authenticated with another profile.\"\n        )\n        if not typer.confirm(\n            \"? Would you like to reauthenticate with this profile?\", default=False\n        ):\n            if typer.confirm(\n                \"? 
Would you like to switch to an authenticated profile?\", default=True\n            ):\n                profile_name = prompt_select_from_list(\n                    app.console,\n                    \"Which authenticated profile would you like to switch to?\",\n                    already_logged_in_profiles,\n                )\n\n                profiles.set_active(profile_name)\n                save_profiles(profiles)\n                exit_with_success(\n                    f\"Switched to authenticated profile {profile_name!r}.\"\n                )\n            else:\n                return\n\n    if not key:\n        choice = prompt_select_from_list(\n            app.console,\n            \"How would you like to authenticate?\",\n            [\n                (\"browser\", \"Log in with a web browser\"),\n                (\"key\", \"Paste an API key\"),\n            ],\n        )\n\n        if choice == \"key\":\n            key = typer.prompt(\"Paste your API key\", hide_input=True)\n        elif choice == \"browser\":\n            key = await login_with_browser()\n\n    async with get_cloud_client(api_key=key) as client:\n        try:\n            workspaces = await client.read_workspaces()\n        except CloudUnauthorizedError:\n            if key.startswith(\"pcu\"):\n                help_message = (\n                    \"It looks like you're using API key from Cloud 1\"\n                    \" (https://cloud.prefect.io). Make sure that you generate API key\"\n                    \" using Cloud 2 (https://app.prefect.cloud)\"\n                )\n            elif not key.startswith(\"pnu\"):\n                help_message = \"Your key is not in our expected format.\"\n            else:\n                help_message = \"Please ensure your credentials are correct.\"\n            exit_with_error(\n                f\"Unable to authenticate with Prefect Cloud. {help_message}\"\n            )\n        except httpx.HTTPStatusError as exc:\n            exit_with_error(f\"Error connecting to Prefect Cloud: {exc!r}\")\n\n    if workspace_handle:\n        # Search for the given workspace\n        for workspace in workspaces:\n            if workspace.handle == workspace_handle:\n                break\n        else:\n            if workspaces:\n                hint = (\n                    \" Available workspaces:\"\n                    f\" {listrepr((w.handle for w in workspaces), ', ')}\"\n                )\n            else:\n                hint = \"\"\n\n            exit_with_error(f\"Workspace {workspace_handle!r} not found.\" + hint)\n    else:\n        # Prompt a switch if the number of workspaces is greater than one\n        prompt_switch_workspace = len(workspaces) > 1\n\n        current_workspace = get_current_workspace(workspaces)\n\n        # Confirm that we want to switch if the current profile is already logged in\n        if (\n            current_profile_is_logged_in and current_workspace is not None\n        ) and prompt_switch_workspace:\n            app.console.print(\n                f\"You are currently using workspace {current_workspace.handle!r}.\"\n            )\n            prompt_switch_workspace = typer.confirm(\n                \"? 
Would you like to switch workspaces?\", default=False\n            )\n\n        if prompt_switch_workspace:\n            workspace = prompt_select_from_list(\n                app.console,\n                \"Which workspace would you like to use?\",\n                [(workspace, workspace.handle) for workspace in workspaces],\n            )\n        else:\n            if current_workspace:\n                workspace = current_workspace\n            elif len(workspaces) > 0:\n                workspace = workspaces[0]\n            else:\n                exit_with_error(\n                    \"No workspaces found! Create a workspace at\"\n                    f\" {PREFECT_CLOUD_UI_URL.value()} and try again.\"\n                )\n\n    update_current_profile(\n        {\n            PREFECT_API_KEY: key,\n            PREFECT_API_URL: workspace.api_url(),\n        }\n    )\n\n    exit_with_success(\n        f\"Authenticated with Prefect Cloud! Using workspace {workspace.handle!r}.\"\n    )\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.login_with_browser","title":"login_with_browser async","text":"

Perform login using the browser.

On failure, this function will exit the process. On success, it will return an API key.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/__init__.py
async def login_with_browser() -> str:\n\"\"\"\n    Perform login using the browser.\n\n    On failure, this function will exit the process.\n    On success, it will return an API key.\n    \"\"\"\n\n    # Set up an event that the login API will toggle on startup\n    ready_event = login_api.extra[\"ready-event\"] = anyio.Event()\n\n    # Set up an event that the login API will set when a response comes from the UI\n    result_event = login_api.extra[\"result-event\"] = anyio.Event()\n\n    timeout_scope = None\n    async with anyio.create_task_group() as tg:\n        # Run a server in the background to get payload from the browser\n        server = await tg.start(serve_login_api, tg.cancel_scope)\n\n        # Wait for the login server to be ready\n        with anyio.fail_after(10):\n            await ready_event.wait()\n\n            # The server may not actually be serving as the lifespan is started first\n            while not server.started:\n                await anyio.sleep(0)\n\n        # Get the port the server is using\n        server_port = server.servers[0].sockets[0].getsockname()[1]\n        callback = urllib.parse.quote(f\"http://localhost:{server_port}\")\n        ui_login_url = (\n            PREFECT_CLOUD_UI_URL.value() + f\"/auth/client?callback={callback}\"\n        )\n\n        # Then open the authorization page in a new browser tab\n        app.console.print(\"Opening browser...\")\n        await run_sync_in_worker_thread(webbrowser.open_new_tab, ui_login_url)\n\n        # Wait for the response from the browser,\n        with anyio.move_on_after(120) as timeout_scope:\n            app.console.print(\"Waiting for response...\")\n            await result_event.wait()\n\n        # Uvicorn installs signal handlers, this is the cleanest way to shutdown the\n        # login API\n        raise_signal(signal.SIGINT)\n\n    result = login_api.extra.get(\"result\")\n    if not result:\n        if timeout_scope and timeout_scope.cancel_called:\n            exit_with_error(\"Timed out while waiting for authorization.\")\n        else:\n            exit_with_error(\"Aborted.\")\n\n    if result.type == \"success\":\n        return result.content.api_key\n    elif result.type == \"failure\":\n        exit_with_error(f\"Failed to log in. {result.content.reason}\")\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.logout","title":"logout async","text":"

Log out of the current workspace. Reset PREFECT_API_KEY and PREFECT_API_URL to their defaults.
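
For example, run the following from an environment whose active profile is logged in to Prefect Cloud:

# clears PREFECT_API_KEY and PREFECT_API_URL from the active profile
$ prefect cloud logout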

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/__init__.py
@cloud_app.command()\nasync def logout():\n\"\"\"\n    Logout the current workspace.\n    Reset PREFECT_API_KEY and PREFECT_API_URL to default.\n    \"\"\"\n    current_profile = prefect.context.get_settings_context().profile\n    if current_profile is None:\n        exit_with_error(\"There is no current profile set.\")\n\n    if current_profile.settings.get(PREFECT_API_KEY) is None:\n        exit_with_error(\"Current profile is not logged into Prefect Cloud.\")\n\n    update_current_profile(\n        {\n            PREFECT_API_URL: None,\n            PREFECT_API_KEY: None,\n        },\n    )\n\n    exit_with_success(\"Logged out from Prefect Cloud.\")\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.ls","title":"ls async","text":"

List available workspaces.
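
For example, to list the workspaces available to the authenticated account (the active one is marked with an asterisk):

$ prefect cloud workspace ls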

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/__init__.py
@workspace_app.command()\nasync def ls():\n\"\"\"List available workspaces.\"\"\"\n\n    confirm_logged_in()\n\n    async with get_cloud_client() as client:\n        try:\n            workspaces = await client.read_workspaces()\n        except CloudUnauthorizedError:\n            exit_with_error(\n                \"Unable to authenticate. Please ensure your credentials are correct.\"\n            )\n\n    current_workspace = get_current_workspace(workspaces)\n\n    table = Table(caption=\"* active workspace\")\n    table.add_column(\n        \"[#024dfd]Workspaces:\", justify=\"left\", style=\"#8ea0ae\", no_wrap=True\n    )\n\n    for workspace_handle in sorted(workspace.handle for workspace in workspaces):\n        if workspace_handle == current_workspace.handle:\n            table.add_row(f\"[green]* {workspace_handle}[/green]\")\n        else:\n            table.add_row(f\"  {workspace_handle}\")\n\n    app.console.print(table)\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.prompt_select_from_list","title":"prompt_select_from_list","text":"

Given a list of options, display the values to the user in a table and prompt them to select one.

Parameters:

options (Union[List[str], List[Tuple[Hashable, str]]], required): A list of options to present to the user. A list of tuples can be passed as key value pairs. If a value is chosen, the key will be returned.

Returns:

str: the selected option

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/__init__.py
def prompt_select_from_list(\n    console, prompt: str, options: Union[List[str], List[Tuple[Hashable, str]]]\n) -> str:\n\"\"\"\n    Given a list of options, display the values to user in a table and prompt them\n    to select one.\n\n    Args:\n        options: A list of options to present to the user.\n            A list of tuples can be passed as key value pairs. If a value is chosen, the\n            key will be returned.\n\n    Returns:\n        str: the selected option\n    \"\"\"\n\n    current_idx = 0\n    selected_option = None\n\n    def build_table() -> Table:\n\"\"\"\n        Generate a table of options. The `current_idx` will be highlighted.\n        \"\"\"\n\n        table = Table(box=False, header_style=None, padding=(0, 0))\n        table.add_column(\n            f\"? [bold]{prompt}[/] [bright_blue][Use arrows to move; enter to select]\",\n            justify=\"left\",\n            no_wrap=True,\n        )\n\n        for i, option in enumerate(options):\n            if isinstance(option, tuple):\n                option = option[1]\n\n            if i == current_idx:\n                # Use blue for selected options\n                table.add_row(\"[bold][blue]> \" + option)\n            else:\n                table.add_row(\"  \" + option)\n        return table\n\n    with Live(build_table(), auto_refresh=False, console=console) as live:\n        while selected_option is None:\n            key = readchar.readkey()\n\n            if key == readchar.key.UP:\n                current_idx = current_idx - 1\n                # wrap to bottom if at the top\n                if current_idx < 0:\n                    current_idx = len(options) - 1\n            elif key == readchar.key.DOWN:\n                current_idx = current_idx + 1\n                # wrap to top if at the bottom\n                if current_idx >= len(options):\n                    current_idx = 0\n            elif key == readchar.key.CTRL_C:\n                # gracefully exit with no message\n                exit_with_error(\"\")\n            elif key == readchar.key.ENTER or key == readchar.key.CR:\n                selected_option = options[current_idx]\n                if isinstance(selected_option, tuple):\n                    selected_option = selected_option[0]\n\n            live.update(build_table(), refresh=True)\n\n        return selected_option\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/cloud/#prefect.cli.cloud.set","title":"set async","text":"

Set current workspace. Shows a workspace picker if no workspace is specified.
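
For example, to switch directly to a specific workspace (the handle below is a placeholder):

# 'my-account/my-workspace' stands in for a real <account_handle>/<workspace_handle>
$ prefect cloud workspace set --workspace 'my-account/my-workspace'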

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/cloud/__init__.py
@workspace_app.command()\nasync def set(\n    workspace_handle: str = typer.Option(\n        None,\n        \"--workspace\",\n        \"-w\",\n        help=(\n            \"Full handle of workspace, in format '<account_handle>/<workspace_handle>'\"\n        ),\n    ),\n):\n\"\"\"Set current workspace. Shows a workspace picker if no workspace is specified.\"\"\"\n    confirm_logged_in()\n\n    async with get_cloud_client() as client:\n        try:\n            workspaces = await client.read_workspaces()\n        except CloudUnauthorizedError:\n            exit_with_error(\n                \"Unable to authenticate. Please ensure your credentials are correct.\"\n            )\n\n    if workspace_handle:\n        # Search for the given workspace\n        for workspace in workspaces:\n            if workspace.handle == workspace_handle:\n                break\n        else:\n            exit_with_error(f\"Workspace {workspace_handle!r} not found.\")\n    else:\n        workspace = prompt_select_from_list(\n            app.console,\n            \"Which workspace would you like to use?\",\n            [(workspace, workspace.handle) for workspace in workspaces],\n        )\n\n    profile = update_current_profile({PREFECT_API_URL: workspace.api_url()})\n\n    exit_with_success(\n        f\"Successfully set workspace to {workspace.handle!r} in profile\"\n        f\" {profile.name!r}.\"\n    )\n
","tags":["Python API","CLI","authentication","Cloud"]},{"location":"api-ref/prefect/cli/concurrency_limit/","title":"prefect.cli.concurrency_limit","text":"","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit","title":"prefect.cli.concurrency_limit","text":"

Command line interface for working with concurrency limits.

","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.create","title":"create async","text":"

Create a concurrency limit against a tag.

This limit controls how many task runs with that tag may simultaneously be in a Running state.
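
For example, to allow at most 10 simultaneous Running task runs with a given tag (the tag name is illustrative):

# 'small-instance' is a placeholder tag
$ prefect concurrency-limit create small-instance 10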

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def create(tag: str, concurrency_limit: int):\n\"\"\"\n    Create a concurrency limit against a tag.\n\n    This limit controls how many task runs with that tag may simultaneously be in a\n    Running state.\n    \"\"\"\n\n    async with get_client() as client:\n        await client.create_concurrency_limit(\n            tag=tag, concurrency_limit=concurrency_limit\n        )\n        await client.read_concurrency_limit_by_tag(tag)\n\n    app.console.print(\n        textwrap.dedent(\n            f\"\"\"\n            Created concurrency limit with properties:\n                tag - {tag!r}\n                concurrency_limit - {concurrency_limit}\n\n            Delete the concurrency limit:\n                prefect concurrency-limit delete {tag!r}\n\n            Inspect the concurrency limit:\n                prefect concurrency-limit inspect {tag!r}\n        \"\"\"\n        )\n    )\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.delete","title":"delete async","text":"

Delete the concurrency limit set on the specified tag.
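
For example, removing the limit on an illustrative tag:

# 'small-instance' is a placeholder tag
$ prefect concurrency-limit delete small-instance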

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def delete(tag: str):\n\"\"\"\n    Delete the concurrency limit set on the specified tag.\n    \"\"\"\n\n    async with get_client() as client:\n        try:\n            await client.delete_concurrency_limit_by_tag(tag=tag)\n        except ObjectNotFound:\n            exit_with_error(f\"No concurrency limit found for the tag: {tag}\")\n\n    exit_with_success(f\"Deleted concurrency limit set on the tag: {tag}\")\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.inspect","title":"inspect async","text":"

View details about a concurrency limit. active_slots shows a list of TaskRun IDs which are currently using a concurrency slot.
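
For example, inspecting the limit on an illustrative tag:

# 'small-instance' is a placeholder tag
$ prefect concurrency-limit inspect small-instance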

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def inspect(tag: str):\n\"\"\"\n    View details about a concurrency limit. `active_slots` shows a list of TaskRun IDs\n    which are currently using a concurrency slot.\n    \"\"\"\n\n    async with get_client() as client:\n        try:\n            result = await client.read_concurrency_limit_by_tag(tag=tag)\n        except ObjectNotFound:\n            exit_with_error(f\"No concurrency limit found for the tag: {tag}\")\n\n    trid_table = Table()\n    trid_table.add_column(\"Active Task Run IDs\", style=\"cyan\", no_wrap=True)\n\n    cl_table = Table(title=f\"Concurrency Limit ID: [red]{str(result.id)}\")\n    cl_table.add_column(\"Tag\", style=\"green\", no_wrap=True)\n    cl_table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n    cl_table.add_column(\"Created\", style=\"magenta\", no_wrap=True)\n    cl_table.add_column(\"Updated\", style=\"magenta\", no_wrap=True)\n\n    for trid in sorted(result.active_slots):\n        trid_table.add_row(str(trid))\n\n    cl_table.add_row(\n        str(result.tag),\n        str(result.concurrency_limit),\n        Pretty(pendulum.instance(result.created).diff_for_humans()),\n        Pretty(pendulum.instance(result.updated).diff_for_humans()),\n    )\n\n    group = Group(\n        cl_table,\n        trid_table,\n    )\n    app.console.print(Panel(group, expand=False))\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.ls","title":"ls async","text":"

View all concurrency limits.
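
For example (by default only the first 15 limits are listed):

$ prefect concurrency-limit ls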

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def ls(limit: int = 15, offset: int = 0):\n\"\"\"\n    View all concurrency limits.\n    \"\"\"\n    table = Table(\n        title=\"Concurrency Limits\",\n        caption=\"inspect a concurrency limit to show active task run IDs\",\n    )\n    table.add_column(\"Tag\", style=\"green\", no_wrap=True)\n    table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n    table.add_column(\"Active Task Runs\", style=\"magenta\", no_wrap=True)\n\n    async with get_client() as client:\n        concurrency_limits = await client.read_concurrency_limits(\n            limit=limit, offset=offset\n        )\n\n    for cl in sorted(concurrency_limits, key=lambda c: c.updated, reverse=True):\n        table.add_row(\n            str(cl.tag),\n            str(cl.id),\n            str(cl.concurrency_limit),\n            str(len(cl.active_slots)),\n        )\n\n    app.console.print(table)\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/concurrency_limit/#prefect.cli.concurrency_limit.reset","title":"reset async","text":"

Resets the concurrency limit slots set on the specified tag.
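
For example, clearing the active slots on an illustrative tag:

# 'small-instance' is a placeholder tag
$ prefect concurrency-limit reset small-instance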

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/concurrency_limit.py
@concurrency_limit_app.command()\nasync def reset(tag: str):\n\"\"\"\n    Resets the concurrency limit slots set on the specified tag.\n    \"\"\"\n\n    async with get_client() as client:\n        try:\n            await client.reset_concurrency_limit_by_tag(tag=tag)\n        except ObjectNotFound:\n            exit_with_error(f\"No concurrency limit found for the tag: {tag}\")\n\n    exit_with_success(f\"Reset concurrency limit set on the tag: {tag}\")\n
","tags":["Python API","CLI","concurrency"]},{"location":"api-ref/prefect/cli/config/","title":"prefect.cli.config","text":"","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config","title":"prefect.cli.config","text":"

Command line interface for working with profiles.

","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.set_","title":"set_","text":"

Change the value for a setting by updating it in the current profile.
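
For example, pointing the current profile at a self-hosted API; settings are passed in 'VAR=VAL' form and the URL below is a placeholder:

# the URL is an illustrative value
$ prefect config set PREFECT_API_URL='http://127.0.0.1:4200/api'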

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/config.py
@config_app.command(\"set\")\ndef set_(settings: List[str]):\n\"\"\"\n    Change the value for a setting by setting the value in the current profile.\n    \"\"\"\n    parsed_settings = {}\n    for item in settings:\n        try:\n            setting, value = item.split(\"=\", maxsplit=1)\n        except ValueError:\n            exit_with_error(\n                f\"Failed to parse argument {item!r}. Use the format 'VAR=VAL'.\"\n            )\n\n        if setting not in prefect.settings.SETTING_VARIABLES:\n            exit_with_error(f\"Unknown setting name {setting!r}.\")\n\n        # Guard against changing settings that tweak config locations\n        if setting in {\"PREFECT_HOME\", \"PREFECT_PROFILES_PATH\"}:\n            exit_with_error(\n                f\"Setting {setting!r} cannot be changed with this command. \"\n                \"Use an environment variable instead.\"\n            )\n\n        parsed_settings[setting] = value\n\n    try:\n        new_profile = prefect.settings.update_current_profile(parsed_settings)\n    except pydantic.ValidationError as exc:\n        for error in exc.errors():\n            setting = error[\"loc\"][0]\n            message = error[\"msg\"]\n            app.console.print(f\"Validation error for setting {setting!r}: {message}\")\n        exit_with_error(\"Invalid setting value.\")\n\n    for setting, value in parsed_settings.items():\n        app.console.print(f\"Set {setting!r} to {value!r}.\")\n        if setting in os.environ:\n            app.console.print(\n                f\"[yellow]{setting} is also set by an environment variable which will \"\n                f\"override your config value. Run `unset {setting}` to clear it.\"\n            )\n\n        if prefect.settings.SETTING_VARIABLES[setting].deprecated:\n            app.console.print(\n                f\"[yellow]{prefect.settings.SETTING_VARIABLES[setting].deprecated_message}.\"\n            )\n\n    exit_with_success(f\"Updated profile {new_profile.name!r}.\")\n
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.unset","title":"unset","text":"

Restore the default value for a setting.

Removes the setting from the current profile.
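
For example, reverting a previously set value to its default:

$ prefect config unset PREFECT_API_URL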

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/config.py
@config_app.command()\ndef unset(settings: List[str]):\n\"\"\"\n    Restore the default value for a setting.\n\n    Removes the setting from the current profile.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    profile = profiles[prefect.context.get_settings_context().profile.name]\n    parsed = set()\n\n    for setting in settings:\n        if setting not in prefect.settings.SETTING_VARIABLES:\n            exit_with_error(f\"Unknown setting name {setting!r}.\")\n        # Cast to settings objects\n        parsed.add(prefect.settings.SETTING_VARIABLES[setting])\n\n    for setting in parsed:\n        if setting not in profile.settings:\n            exit_with_error(f\"{setting.name!r} is not set in profile {profile.name!r}.\")\n\n    profiles.update_profile(\n        name=profile.name, settings={setting: None for setting in parsed}\n    )\n\n    for setting in settings:\n        app.console.print(f\"Unset {setting!r}.\")\n\n        if setting in os.environ:\n            app.console.print(\n                f\"[yellow]{setting!r} is also set by an environment variable. \"\n                f\"Use `unset {setting}` to clear it.\"\n            )\n\n    prefect.settings.save_profiles(profiles)\n    exit_with_success(f\"Updated profile {profile.name!r}.\")\n
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.validate","title":"validate","text":"

Read and validate the current profile.

Deprecated settings will be automatically converted to new names unless both are set.
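
For example, checking the active profile and rewriting any renamed settings:

$ prefect config validate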

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/config.py
@config_app.command()\ndef validate():\n\"\"\"\n    Read and validate the current profile.\n\n    Deprecated settings will be automatically converted to new names unless both are\n    set.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    profile = profiles[prefect.context.get_settings_context().profile.name]\n    changed = profile.convert_deprecated_renamed_settings()\n    for old, new in changed:\n        app.console.print(f\"Updated {old.name!r} to {new.name!r}.\")\n\n    for setting in profile.settings.keys():\n        if setting.deprecated:\n            app.console.print(f\"Found deprecated setting {setting.name!r}.\")\n\n    profile.validate_settings()\n\n    prefect.settings.save_profiles(profiles)\n    exit_with_success(\"Configuration valid!\")\n
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/config/#prefect.cli.config.view","title":"view","text":"

Display the current settings.
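
For example, including default values and the source of each setting:

$ prefect config view --show-defaults --show-sources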

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/config.py
@config_app.command()\ndef view(\n    show_defaults: Optional[bool] = typer.Option(\n        False, \"--show-defaults/--hide-defaults\", help=(show_defaults_help)\n    ),\n    show_sources: Optional[bool] = typer.Option(\n        True,\n        \"--show-sources/--hide-sources\",\n        help=(show_sources_help),\n    ),\n    show_secrets: Optional[bool] = typer.Option(\n        False,\n        \"--show-secrets/--hide-secrets\",\n        help=\"Toggle display of secrets setting values.\",\n    ),\n):\n\"\"\"\n    Display the current settings.\n    \"\"\"\n    context = prefect.context.get_settings_context()\n\n    # Get settings at each level, converted to a flat dictionary for easy comparison\n    default_settings = prefect.settings.get_default_settings()\n    env_settings = prefect.settings.get_settings_from_env()\n    current_profile_settings = context.settings\n\n    # Obfuscate secrets\n    if not show_secrets:\n        default_settings = default_settings.with_obfuscated_secrets()\n        env_settings = env_settings.with_obfuscated_secrets()\n        current_profile_settings = current_profile_settings.with_obfuscated_secrets()\n\n    # Display the profile first\n    app.console.print(f\"PREFECT_PROFILE={context.profile.name!r}\")\n\n    settings_output = []\n\n    # The combination of environment variables and profile settings that are in use\n    profile_overrides = current_profile_settings.dict(exclude_unset=True)\n\n    # Used to see which settings in current_profile_settings came from env vars\n    env_overrides = env_settings.dict(exclude_unset=True)\n\n    for key, value in profile_overrides.items():\n        source = \"env\" if env_overrides.get(key) is not None else \"profile\"\n        source_blurb = f\" (from {source})\" if show_sources else \"\"\n        settings_output.append(f\"{key}='{value}'{source_blurb}\")\n\n    if show_defaults:\n        for key, value in default_settings.dict().items():\n            if key not in profile_overrides:\n                source_blurb = \" (from defaults)\" if show_sources else \"\"\n                settings_output.append(f\"{key}='{value}'{source_blurb}\")\n\n    app.console.print(\"\\n\".join(sorted(settings_output)))\n
","tags":["Python API","CLI","config","settings"]},{"location":"api-ref/prefect/cli/deployment/","title":"prefect.cli.deployment","text":"","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment","title":"prefect.cli.deployment","text":"

Command line interface for working with deployments.

","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.apply","title":"apply async","text":"

Create or update a deployment from a YAML file.
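
For example, applying a previously built deployment file and uploading its files to the configured storage block (the file name is a placeholder):

# my_flow-deployment.yaml is a placeholder for a file produced by `prefect deployment build`
$ prefect deployment apply my_flow-deployment.yaml --upload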

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
@deployment_app.command()\nasync def apply(\n    paths: List[str] = typer.Argument(\n        ...,\n        help=\"One or more paths to deployment YAML files.\",\n    ),\n    upload: bool = typer.Option(\n        False,\n        \"--upload\",\n        help=(\n            \"A flag that, when provided, uploads this deployment's files to remote\"\n            \" storage.\"\n        ),\n    ),\n    work_queue_concurrency: int = typer.Option(\n        None,\n        \"--limit\",\n        \"-l\",\n        help=(\n            \"Sets the concurrency limit on the work queue that handles this\"\n            \" deployment's runs\"\n        ),\n    ),\n):\n\"\"\"\n    Create or update a deployment from a YAML file.\n    \"\"\"\n    deployment = None\n    async with get_client() as client:\n        for path in paths:\n            try:\n                deployment = await Deployment.load_from_yaml(path)\n                app.console.print(\n                    f\"Successfully loaded {deployment.name!r}\", style=\"green\"\n                )\n            except Exception as exc:\n                exit_with_error(\n                    f\"'{path!s}' did not conform to deployment spec: {exc!r}\"\n                )\n\n            assert deployment\n\n            await create_work_queue_and_set_concurrency_limit(\n                deployment.work_queue_name,\n                deployment.work_pool_name,\n                work_queue_concurrency,\n            )\n\n            if upload:\n                if (\n                    deployment.storage\n                    and \"put-directory\" in deployment.storage.get_block_capabilities()\n                ):\n                    file_count = await deployment.upload_to_storage()\n                    if file_count:\n                        app.console.print(\n                            (\n                                f\"Successfully uploaded {file_count} files to\"\n                                f\" {deployment.location}\"\n                            ),\n                            style=\"green\",\n                        )\n                else:\n                    app.console.print(\n                        (\n                            f\"Deployment storage {deployment.storage} does not have\"\n                            \" upload capabilities; no files uploaded.\"\n                        ),\n                        style=\"red\",\n                    )\n            await check_work_pool_exists(\n                work_pool_name=deployment.work_pool_name, client=client\n            )\n\n            if client.server_type != ServerType.CLOUD and deployment.triggers:\n                app.console.print(\n                    (\n                        \"Deployment triggers are only supported on \"\n                        f\"Prefect Cloud. 
Triggers defined in {path!r} will be \"\n                        \"ignored.\"\n                    ),\n                    style=\"red\",\n                )\n\n            deployment_id = await deployment.apply()\n            app.console.print(\n                (\n                    f\"Deployment '{deployment.flow_name}/{deployment.name}'\"\n                    f\" successfully created with id '{deployment_id}'.\"\n                ),\n                style=\"green\",\n            )\n\n            if PREFECT_UI_URL:\n                app.console.print(\n                    \"View Deployment in UI:\"\n                    f\" {PREFECT_UI_URL.value()}/deployments/deployment/{deployment_id}\"\n                )\n\n            if deployment.work_pool_name is not None:\n                await _print_deployment_work_pool_instructions(\n                    work_pool_name=deployment.work_pool_name, client=client\n                )\n            elif deployment.work_queue_name is not None:\n                app.console.print(\n                    \"\\nTo execute flow runs from this deployment, start an agent that\"\n                    f\" pulls work from the {deployment.work_queue_name!r} work queue:\"\n                )\n                app.console.print(\n                    f\"$ prefect agent start -q {deployment.work_queue_name!r}\",\n                    style=\"blue\",\n                )\n            else:\n                app.console.print(\n                    (\n                        \"\\nThis deployment does not specify a work queue name, which\"\n                        \" means agents will not be able to pick up its runs. To add a\"\n                        \" work queue, edit the deployment spec and re-run this command,\"\n                        \" or visit the deployment in the UI.\"\n                    ),\n                    style=\"red\",\n                )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.build","title":"build async","text":"

Generate a deployment YAML file from a flow entrypoint of the form /path/to/file.py:flow_function.
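
For example, building and immediately applying a deployment for an illustrative flow on a daily cron schedule (the entrypoint, names, and schedule are placeholders):

# the entrypoint, deployment name, work queue, and schedule are all illustrative
$ prefect deployment build ./flows/my_flow.py:my_flow -n my-deployment -q default --cron '0 9 * * *' --apply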

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
@deployment_app.command()\nasync def build(\n    entrypoint: str = typer.Argument(\n        ...,\n        help=(\n            \"The path to a flow entrypoint, in the form of\"\n            \" `./path/to/file.py:flow_func_name`\"\n        ),\n    ),\n    name: str = typer.Option(\n        None, \"--name\", \"-n\", help=\"The name to give the deployment.\"\n    ),\n    description: str = typer.Option(\n        None,\n        \"--description\",\n        \"-d\",\n        help=(\n            \"The description to give the deployment. If not provided, the description\"\n            \" will be populated from the flow's description.\"\n        ),\n    ),\n    version: str = typer.Option(\n        None, \"--version\", \"-v\", help=\"A version to give the deployment.\"\n    ),\n    tags: List[str] = typer.Option(\n        None,\n        \"-t\",\n        \"--tag\",\n        help=(\n            \"One or more optional tags to apply to the deployment. Note: tags are used\"\n            \" only for organizational purposes. For delegating work to agents, use the\"\n            \" --work-queue flag.\"\n        ),\n    ),\n    work_queue_name: str = typer.Option(\n        None,\n        \"-q\",\n        \"--work-queue\",\n        help=(\n            \"The work queue that will handle this deployment's runs. \"\n            \"It will be created if it doesn't already exist. Defaults to `None`. \"\n            \"Note that if a work queue is not set, work will not be scheduled.\"\n        ),\n    ),\n    work_pool_name: str = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The work pool that will handle this deployment's runs.\",\n    ),\n    work_queue_concurrency: int = typer.Option(\n        None,\n        \"--limit\",\n        \"-l\",\n        help=(\n            \"Sets the concurrency limit on the work queue that handles this\"\n            \" deployment's runs\"\n        ),\n    ),\n    infra_type: str = typer.Option(\n        None,\n        \"--infra\",\n        \"-i\",\n        help=\"The infrastructure type to use, prepopulated with defaults. For example: \"\n        + listrepr(builtin_infrastructure_types, sep=\", \"),\n    ),\n    infra_block: str = typer.Option(\n        None,\n        \"--infra-block\",\n        \"-ib\",\n        help=\"The slug of the infrastructure block to use as a template.\",\n    ),\n    overrides: List[str] = typer.Option(\n        None,\n        \"--override\",\n        help=(\n            \"One or more optional infrastructure overrides provided as a dot delimited\"\n            \" path, e.g., `env.env_key=env_value`\"\n        ),\n    ),\n    storage_block: str = typer.Option(\n        None,\n        \"--storage-block\",\n        \"-sb\",\n        help=(\n            \"The slug of a remote storage block. 
Use the syntax:\"\n            \" 'block_type/block_name', where block_type is one of 'github', 's3',\"\n            \" 'gcs', 'azure', 'smb', or a registered block from a library that\"\n            \" implements the WritableDeploymentStorage interface such as\"\n            \" 'gitlab-repository', 'bitbucket-repository', 's3-bucket',\"\n            \" 'gcs-bucket'\"\n        ),\n    ),\n    skip_upload: bool = typer.Option(\n        False,\n        \"--skip-upload\",\n        help=(\n            \"A flag that, when provided, skips uploading this deployment's files to\"\n            \" remote storage.\"\n        ),\n    ),\n    cron: str = typer.Option(\n        None,\n        \"--cron\",\n        help=\"A cron string that will be used to set a CronSchedule on the deployment.\",\n    ),\n    interval: int = typer.Option(\n        None,\n        \"--interval\",\n        help=(\n            \"An integer specifying an interval (in seconds) that will be used to set an\"\n            \" IntervalSchedule on the deployment.\"\n        ),\n    ),\n    interval_anchor: Optional[str] = typer.Option(\n        None, \"--anchor-date\", help=\"The anchor date for an interval schedule\"\n    ),\n    rrule: str = typer.Option(\n        None,\n        \"--rrule\",\n        help=\"An RRule that will be used to set an RRuleSchedule on the deployment.\",\n    ),\n    timezone: str = typer.Option(\n        None,\n        \"--timezone\",\n        help=\"Deployment schedule timezone string e.g. 'America/New_York'\",\n    ),\n    path: str = typer.Option(\n        None,\n        \"--path\",\n        help=(\n            \"An optional path to specify a subdirectory of remote storage to upload to,\"\n            \" or to point to a subdirectory of a locally stored flow.\"\n        ),\n    ),\n    output: str = typer.Option(\n        None,\n        \"--output\",\n        \"-o\",\n        help=\"An optional filename to write the deployment file to.\",\n    ),\n    _apply: bool = typer.Option(\n        False,\n        \"--apply\",\n        \"-a\",\n        help=(\n            \"An optional flag to automatically register the resulting deployment with\"\n            \" the API.\"\n        ),\n    ),\n    param: List[str] = typer.Option(\n        None,\n        \"--param\",\n        help=(\n            \"An optional parameter override, values are parsed as JSON strings e.g.\"\n            \" --param question=ultimate --param answer=42\"\n        ),\n    ),\n    params: str = typer.Option(\n        None,\n        \"--params\",\n        help=(\n            \"An optional parameter override in a JSON string format e.g.\"\n            ' --params=\\'{\"question\": \"ultimate\", \"answer\": 42}\\''\n        ),\n    ),\n):\n\"\"\"\n    Generate a deployment YAML from /path/to/file.py:flow_function\n    \"\"\"\n    # validate inputs\n    if not name:\n        exit_with_error(\n            \"A name for this deployment must be provided with the '--name' flag.\"\n        )\n\n    if len([value for value in (cron, rrule, interval) if value is not None]) > 1:\n        exit_with_error(\"Only one schedule type can be provided.\")\n\n    if infra_block and infra_type:\n        exit_with_error(\n            \"Only one of `infra` or `infra_block` can be provided, please choose one.\"\n        )\n\n    output_file = None\n    if output:\n        output_file = Path(output)\n        if output_file.suffix and output_file.suffix != \".yaml\":\n            exit_with_error(\"Output file must be a '.yaml' file.\")\n        else:\n            
output_file = output_file.with_suffix(\".yaml\")\n\n    # validate flow\n    try:\n        fpath, obj_name = entrypoint.rsplit(\":\", 1)\n    except ValueError as exc:\n        if str(exc) == \"not enough values to unpack (expected 2, got 1)\":\n            missing_flow_name_msg = (\n                \"Your flow entrypoint must include the name of the function that is\"\n                f\" the entrypoint to your flow.\\nTry {entrypoint}:<flow_name>\"\n            )\n            exit_with_error(missing_flow_name_msg)\n        else:\n            raise exc\n    try:\n        flow = await run_sync_in_worker_thread(load_flow_from_entrypoint, entrypoint)\n    except Exception as exc:\n        exit_with_error(exc)\n    app.console.print(f\"Found flow {flow.name!r}\", style=\"green\")\n    infra_overrides = {}\n    for override in overrides or []:\n        key, value = override.split(\"=\", 1)\n        infra_overrides[key] = value\n\n    if infra_block:\n        infrastructure = await Block.load(infra_block)\n    elif infra_type:\n        # Create an instance of the given type\n        infrastructure = Block.get_block_class_from_key(infra_type)()\n    else:\n        # will reset to a default of Process is no infra is present on the\n        # server-side definition of this deployment\n        infrastructure = None\n\n    if interval_anchor and not interval:\n        exit_with_error(\"An anchor date can only be provided with an interval schedule\")\n\n    schedule = None\n    if cron:\n        cron_kwargs = {\"cron\": cron, \"timezone\": timezone}\n        schedule = CronSchedule(\n            **{k: v for k, v in cron_kwargs.items() if v is not None}\n        )\n    elif interval:\n        interval_kwargs = {\n            \"interval\": timedelta(seconds=interval),\n            \"anchor_date\": interval_anchor,\n            \"timezone\": timezone,\n        }\n        schedule = IntervalSchedule(\n            **{k: v for k, v in interval_kwargs.items() if v is not None}\n        )\n    elif rrule:\n        try:\n            schedule = RRuleSchedule(**json.loads(rrule))\n            if timezone:\n                # override timezone if specified via CLI argument\n                schedule.timezone = timezone\n        except json.JSONDecodeError:\n            schedule = RRuleSchedule(rrule=rrule, timezone=timezone)\n\n    # parse storage_block\n    if storage_block:\n        block_type, block_name, *block_path = storage_block.split(\"/\")\n        if block_path and path:\n            exit_with_error(\n                \"Must provide a `path` explicitly or provide one on the storage block\"\n                \" specification, but not both.\"\n            )\n        elif not path:\n            path = \"/\".join(block_path)\n        storage_block = f\"{block_type}/{block_name}\"\n        storage = await Block.load(storage_block)\n    else:\n        storage = None\n\n    if create_default_ignore_file(path=\".\"):\n        app.console.print(\n            (\n                \"Default '.prefectignore' file written to\"\n                f\" {(Path('.') / '.prefectignore').absolute()}\"\n            ),\n            style=\"green\",\n        )\n\n    if param and (params is not None):\n        exit_with_error(\"Can only pass one of `param` or `params` options\")\n\n    parameters = dict()\n\n    if param:\n        for p in param or []:\n            k, unparsed_value = p.split(\"=\", 1)\n            try:\n                v = json.loads(unparsed_value)\n                app.console.print(\n                    f\"The 
parameter value {unparsed_value} is parsed as a JSON string\"\n                )\n            except json.JSONDecodeError:\n                v = unparsed_value\n            parameters[k] = v\n\n    if params is not None:\n        parameters = json.loads(params)\n\n    # set up deployment object\n    entrypoint = (\n        f\"{Path(fpath).absolute().relative_to(Path('.').absolute())}:{obj_name}\"\n    )\n\n    init_kwargs = dict(\n        path=path,\n        entrypoint=entrypoint,\n        version=version,\n        storage=storage,\n        infra_overrides=infra_overrides or {},\n    )\n\n    if parameters:\n        init_kwargs[\"parameters\"] = parameters\n\n    if description:\n        init_kwargs[\"description\"] = description\n\n    # if a schedule, tags, work_queue_name, or infrastructure are not provided via CLI,\n    # we let `build_from_flow` load them from the server\n    if schedule:\n        init_kwargs.update(schedule=schedule)\n    if tags:\n        init_kwargs.update(tags=tags)\n    if infrastructure:\n        init_kwargs.update(infrastructure=infrastructure)\n    if work_queue_name:\n        init_kwargs.update(work_queue_name=work_queue_name)\n    if work_pool_name:\n        init_kwargs.update(work_pool_name=work_pool_name)\n\n    deployment_loc = output_file or f\"{obj_name}-deployment.yaml\"\n    deployment = await Deployment.build_from_flow(\n        flow=flow,\n        name=name,\n        output=deployment_loc,\n        skip_upload=skip_upload,\n        apply=False,\n        **init_kwargs,\n    )\n    app.console.print(\n        f\"Deployment YAML created at '{Path(deployment_loc).absolute()!s}'.\",\n        style=\"green\",\n    )\n\n    await create_work_queue_and_set_concurrency_limit(\n        deployment.work_queue_name, deployment.work_pool_name, work_queue_concurrency\n    )\n\n    # we process these separately for informative output\n    if not skip_upload:\n        if (\n            deployment.storage\n            and \"put-directory\" in deployment.storage.get_block_capabilities()\n        ):\n            file_count = await deployment.upload_to_storage()\n            if file_count:\n                app.console.print(\n                    (\n                        f\"Successfully uploaded {file_count} files to\"\n                        f\" {deployment.location}\"\n                    ),\n                    style=\"green\",\n                )\n        else:\n            app.console.print(\n                (\n                    f\"Deployment storage {deployment.storage} does not have upload\"\n                    \" capabilities; no files uploaded.  
Pass --skip-upload to suppress\"\n                    \" this warning.\"\n                ),\n                style=\"green\",\n            )\n\n    if _apply:\n        async with get_client() as client:\n            await check_work_pool_exists(\n                work_pool_name=deployment.work_pool_name, client=client\n            )\n            deployment_id = await deployment.apply()\n            app.console.print(\n                (\n                    f\"Deployment '{deployment.flow_name}/{deployment.name}'\"\n                    f\" successfully created with id '{deployment_id}'.\"\n                ),\n                style=\"green\",\n            )\n            if deployment.work_pool_name is not None:\n                await _print_deployment_work_pool_instructions(\n                    work_pool_name=deployment.work_pool_name, client=client\n                )\n\n            elif deployment.work_queue_name is not None:\n                app.console.print(\n                    \"\\nTo execute flow runs from this deployment, start an agent that\"\n                    f\" pulls work from the {deployment.work_queue_name!r} work queue:\"\n                )\n                app.console.print(\n                    f\"$ prefect agent start -q {deployment.work_queue_name!r}\",\n                    style=\"blue\",\n                )\n            else:\n                app.console.print(\n                    (\n                        \"\\nThis deployment does not specify a work queue name, which\"\n                        \" means agents will not be able to pick up its runs. To add a\"\n                        \" work queue, edit the deployment spec and re-run this command,\"\n                        \" or visit the deployment in the UI.\"\n                    ),\n                    style=\"red\",\n                )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.delete","title":"delete async","text":"

Delete a deployment.

Examples:

$ prefect deployment delete test_flow/test_deployment
$ prefect deployment delete --id dfd3e220-a130-4149-9af6-8d487e02fea6

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
@deployment_app.command()\nasync def delete(\n    name: Optional[str] = typer.Argument(\n        None, help=\"A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\"\n    ),\n    deployment_id: Optional[str] = typer.Option(\n        None, \"--id\", help=\"A deployment id to search for if no name is given\"\n    ),\n):\n\"\"\"\n    Delete a deployment.\n\n    \\b\n    Examples:\n        \\b\n        $ prefect deployment delete test_flow/test_deployment\n        $ prefect deployment delete --id dfd3e220-a130-4149-9af6-8d487e02fea6\n    \"\"\"\n    async with get_client() as client:\n        if name is None and deployment_id is not None:\n            try:\n                await client.delete_deployment(deployment_id)\n                exit_with_success(f\"Deleted deployment '{deployment_id}'.\")\n            except ObjectNotFound:\n                exit_with_error(f\"Deployment {deployment_id!r} not found!\")\n        elif name is not None:\n            try:\n                deployment = await client.read_deployment_by_name(name)\n                await client.delete_deployment(deployment.id)\n                exit_with_success(f\"Deleted deployment '{name}'.\")\n            except ObjectNotFound:\n                exit_with_error(f\"Deployment {name!r} not found!\")\n        else:\n            exit_with_error(\"Must provide a deployment name or id\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.inspect","title":"inspect async","text":"

View details about a deployment.

Example:

$ prefect deployment inspect \"hello-world/my-deployment\"
{
    'id': '610df9c3-0fb4-4856-b330-67f588d20201',
    'created': '2022-08-01T18:36:25.192102+00:00',
    'updated': '2022-08-01T18:36:25.188166+00:00',
    'name': 'my-deployment',
    'description': None,
    'flow_id': 'b57b0aa2-ef3a-479e-be49-381fb0483b4e',
    'schedule': None,
    'is_schedule_active': True,
    'parameters': {'name': 'Marvin'},
    'tags': ['test'],
    'parameter_openapi_schema': {
        'title': 'Parameters',
        'type': 'object',
        'properties': {
            'name': {'title': 'name', 'type': 'string'}
        },
        'required': ['name']
    },
    'storage_document_id': '63ef008f-1e5d-4e07-a0d4-4535731adb32',
    'infrastructure_document_id': '6702c598-7094-42c8-9785-338d2ec3a028',
    'infrastructure': {
        'type': 'process',
        'env': {},
        'labels': {},
        'name': None,
        'command': ['python', '-m', 'prefect.engine'],
        'stream_output': True
    }
}

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
@deployment_app.command()\nasync def inspect(name: str):\n\"\"\"\n    View details about a deployment.\n\n    \\b\n    Example:\n        \\b\n        $ prefect deployment inspect \"hello-world/my-deployment\"\n        {\n            'id': '610df9c3-0fb4-4856-b330-67f588d20201',\n            'created': '2022-08-01T18:36:25.192102+00:00',\n            'updated': '2022-08-01T18:36:25.188166+00:00',\n            'name': 'my-deployment',\n            'description': None,\n            'flow_id': 'b57b0aa2-ef3a-479e-be49-381fb0483b4e',\n            'schedule': None,\n            'is_schedule_active': True,\n            'parameters': {'name': 'Marvin'},\n            'tags': ['test'],\n            'parameter_openapi_schema': {\n                'title': 'Parameters',\n                'type': 'object',\n                'properties': {\n                    'name': {\n                        'title': 'name',\n                        'type': 'string'\n                    }\n                },\n                'required': ['name']\n            },\n            'storage_document_id': '63ef008f-1e5d-4e07-a0d4-4535731adb32',\n            'infrastructure_document_id': '6702c598-7094-42c8-9785-338d2ec3a028',\n            'infrastructure': {\n                'type': 'process',\n                'env': {},\n                'labels': {},\n                'name': None,\n                'command': ['python', '-m', 'prefect.engine'],\n                'stream_output': True\n            }\n        }\n\n    \"\"\"\n    assert_deployment_name_format(name)\n\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(name)\n        except ObjectNotFound:\n            exit_with_error(f\"Deployment {name!r} not found!\")\n\n        deployment_json = deployment.dict(json_compatible=True)\n\n        if deployment.infrastructure_document_id:\n            deployment_json[\"infrastructure\"] = Block._from_block_document(\n                await client.read_block_document(deployment.infrastructure_document_id)\n            ).dict(\n                exclude={\"_block_document_id\", \"_block_document_name\", \"_is_anonymous\"}\n            )\n\n        if client.server_type == ServerType.CLOUD:\n            deployment_json[\"automations\"] = [\n                a.dict()\n                for a in await client.read_resource_related_automations(\n                    f\"prefect.deployment.{deployment.id}\"\n                )\n            ]\n\n    app.console.print(Pretty(deployment_json))\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.ls","title":"ls async","text":"

View all deployments or deployments for specific flows.
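
For example (each row shows flow-name/deployment-name alongside the deployment ID):

$ prefect deployment ls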

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
@deployment_app.command()\nasync def ls(flow_name: List[str] = None, by_created: bool = False):\n\"\"\"\n    View all deployments or deployments for specific flows.\n    \"\"\"\n    async with get_client() as client:\n        deployments = await client.read_deployments(\n            flow_filter=FlowFilter(name={\"any_\": flow_name}) if flow_name else None\n        )\n        flows = {\n            flow.id: flow\n            for flow in await client.read_flows(\n                flow_filter=FlowFilter(id={\"any_\": [d.flow_id for d in deployments]})\n            )\n        }\n\n    def sort_by_name_keys(d):\n        return flows[d.flow_id].name, d.name\n\n    def sort_by_created_key(d):\n        return pendulum.now(\"utc\") - d.created\n\n    table = Table(\n        title=\"Deployments\",\n    )\n    table.add_column(\"Name\", style=\"blue\", no_wrap=True)\n    table.add_column(\"ID\", style=\"cyan\", no_wrap=True)\n\n    for deployment in sorted(\n        deployments, key=sort_by_created_key if by_created else sort_by_name_keys\n    ):\n        table.add_row(\n            f\"{flows[deployment.flow_id].name}/[bold]{deployment.name}[/]\",\n            str(deployment.id),\n        )\n\n    app.console.print(table)\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.pause_schedule","title":"pause_schedule async","text":"

Pause the schedule of a given deployment.
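
For example, pausing an illustrative deployment's schedule:

# 'my-flow/my-deployment' is a placeholder <FLOW_NAME>/<DEPLOYMENT_NAME>
$ prefect deployment pause-schedule my-flow/my-deployment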

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
@deployment_app.command(\"pause-schedule\")\nasync def pause_schedule(\n    name: str,\n):\n\"\"\"\n    Pause schedule of a given deployment.\n    \"\"\"\n    assert_deployment_name_format(name)\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(name)\n        except ObjectNotFound:\n            exit_with_error(f\"Deployment {name!r} not found!\")\n\n        await client.update_deployment(deployment, is_schedule_active=False)\n        exit_with_success(f\"Paused schedule for deployment {name}\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.resume_schedule","title":"resume_schedule async","text":"

Resume the schedule of a given deployment.
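
For example, resuming an illustrative deployment's schedule:

# 'my-flow/my-deployment' is a placeholder <FLOW_NAME>/<DEPLOYMENT_NAME>
$ prefect deployment resume-schedule my-flow/my-deployment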

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
@deployment_app.command(\"resume-schedule\")\nasync def resume_schedule(\n    name: str,\n):\n\"\"\"\n    Resume schedule of a given deployment.\n    \"\"\"\n    assert_deployment_name_format(name)\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(name)\n        except ObjectNotFound:\n            exit_with_error(f\"Deployment {name!r} not found!\")\n\n        await client.update_deployment(deployment, is_schedule_active=True)\n        exit_with_success(f\"Resumed schedule for deployment {name}\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.run","title":"run async","text":"

Create a flow run for the given flow and deployment.

The flow run will be scheduled to run immediately unless --start-in or --start-at is specified. The flow run will not execute until an agent starts.
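
For example, scheduling a run of an illustrative deployment ten minutes from now with one parameter overridden (the name, parameter, and delay are placeholders):

# 'my-flow/my-deployment', the parameter, and the delay are illustrative
$ prefect deployment run my-flow/my-deployment --param answer=42 --start-in '10 minutes'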

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
@deployment_app.command()\nasync def run(\n    name: Optional[str] = typer.Argument(\n        None, help=\"A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\"\n    ),\n    deployment_id: Optional[str] = typer.Option(\n        None, \"--id\", help=\"A deployment id to search for if no name is given\"\n    ),\n    params: List[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--param\",\n        help=(\n            \"A key, value pair (key=value) specifying a flow parameter. The value will\"\n            \" be interpreted as JSON. May be passed multiple times to specify multiple\"\n            \" parameter values.\"\n        ),\n    ),\n    multiparams: Optional[str] = typer.Option(\n        None,\n        \"--params\",\n        help=(\n            \"A mapping of parameters to values. To use a stdin, pass '-'. Any \"\n            \"parameters passed with `--param` will take precedence over these values.\"\n        ),\n    ),\n    start_in: Optional[str] = typer.Option(\n        None,\n        \"--start-in\",\n    ),\n    start_at: Optional[str] = typer.Option(\n        None,\n        \"--start-at\",\n    ),\n):\n\"\"\"\n    Create a flow run for the given flow and deployment.\n\n    The flow run will be scheduled to run immediately unless `--start-in` or `--start-at` is specified.\n    The flow run will not execute until an agent starts.\n    \"\"\"\n    import dateparser\n\n    now = pendulum.now(\"UTC\")\n\n    multi_params = {}\n    if multiparams:\n        if multiparams == \"-\":\n            multiparams = sys.stdin.read()\n            if not multiparams:\n                exit_with_error(\"No data passed to stdin\")\n\n        try:\n            multi_params = json.loads(multiparams)\n        except ValueError as exc:\n            exit_with_error(f\"Failed to parse JSON: {exc}\")\n\n    cli_params = _load_json_key_values(params, \"parameter\")\n    conflicting_keys = set(cli_params.keys()).intersection(multi_params.keys())\n    if conflicting_keys:\n        app.console.print(\n            \"The following parameters were specified by `--param` and `--params`, the \"\n            f\"`--param` value will be used: {conflicting_keys}\"\n        )\n    parameters = {**multi_params, **cli_params}\n\n    if start_in and start_at:\n        exit_with_error(\n            \"Only one of `--start-in` or `--start-at` can be set, not both.\"\n        )\n\n    elif start_in is None and start_at is None:\n        scheduled_start_time = now\n        human_dt_diff = \" (now)\"\n    else:\n        if start_in:\n            start_time_raw = \"in \" + start_in\n        else:\n            start_time_raw = \"at \" + start_at\n        with warnings.catch_warnings():\n            # PyTZ throws a warning based on dateparser usage of the library\n            # See https://github.com/scrapinghub/dateparser/issues/1089\n            warnings.filterwarnings(\"ignore\", module=\"dateparser\")\n\n            try:\n                start_time_parsed = dateparser.parse(\n                    start_time_raw,\n                    settings={\n                        \"TO_TIMEZONE\": \"UTC\",\n                        \"RETURN_AS_TIMEZONE_AWARE\": False,\n                        \"PREFER_DATES_FROM\": \"future\",\n                        \"RELATIVE_BASE\": datetime.fromtimestamp(now.timestamp()),\n                    },\n                )\n\n            except Exception as exc:\n                exit_with_error(f\"Failed to parse '{start_time_raw!r}': {exc!s}\")\n\n        if start_time_parsed is None:\n           
 exit_with_error(f\"Unable to parse scheduled start time {start_time_raw!r}.\")\n\n        scheduled_start_time = pendulum.instance(start_time_parsed)\n        human_dt_diff = (\n            \" (\" + pendulum.format_diff(scheduled_start_time.diff(now)) + \")\"\n        )\n\n    async with get_client() as client:\n        deployment = await get_deployment(client, name, deployment_id)\n        flow = await client.read_flow(deployment.flow_id)\n\n        deployment_parameters = deployment.parameter_openapi_schema[\"properties\"].keys()\n        unknown_keys = set(parameters.keys()).difference(deployment_parameters)\n        if unknown_keys:\n            available_parameters = (\n                (\n                    \"The following parameters are available on the deployment: \"\n                    + listrepr(deployment_parameters, sep=\", \")\n                )\n                if deployment_parameters\n                else \"This deployment does not accept parameters.\"\n            )\n\n            exit_with_error(\n                \"The following parameters were specified but not found on the \"\n                f\"deployment: {listrepr(unknown_keys, sep=', ')}\"\n                f\"\\n{available_parameters}\"\n            )\n\n        app.console.print(\n            f\"Creating flow run for deployment '{flow.name}/{deployment.name}'...\",\n        )\n\n        flow_run = await client.create_flow_run_from_deployment(\n            deployment.id,\n            parameters=parameters,\n            state=Scheduled(scheduled_time=scheduled_start_time),\n        )\n\n    if PREFECT_UI_URL:\n        run_url = f\"{PREFECT_UI_URL.value()}/flow-runs/flow-run/{flow_run.id}\"\n    else:\n        run_url = \"<no dashboard available>\"\n\n    datetime_local_tz = scheduled_start_time.in_tz(pendulum.tz.local_timezone())\n    scheduled_display = (\n        datetime_local_tz.to_datetime_string()\n        + \" \"\n        + datetime_local_tz.tzname()\n        + human_dt_diff\n    )\n\n    app.console.print(f\"Created flow run {flow_run.name!r}.\")\n    app.console.print(\n        textwrap.dedent(\n            f\"\"\"\n        \u2514\u2500\u2500 UUID: {flow_run.id}\n        \u2514\u2500\u2500 Parameters: {flow_run.parameters}\n        \u2514\u2500\u2500 Scheduled start time: {scheduled_display}\n        \u2514\u2500\u2500 URL: {run_url}\n        \"\"\"\n        ).strip()\n    )\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.set_schedule","title":"set_schedule async","text":"

Set schedule for a given deployment.

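Example (an illustrative invocation; the deployment name, cron string, and timezone are placeholders):

$ prefect deployment set-schedule my-flow/my-deployment --cron "0 8 * * *" --timezone "America/New_York"
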
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
@deployment_app.command(\"set-schedule\")\nasync def set_schedule(\n    name: str,\n    interval: Optional[float] = typer.Option(\n        None,\n        \"--interval\",\n        help=\"An interval to schedule on, specified in seconds\",\n        min=0.0001,\n    ),\n    interval_anchor: Optional[str] = typer.Option(\n        None,\n        \"--anchor-date\",\n        help=\"The anchor date for an interval schedule\",\n    ),\n    rrule_string: Optional[str] = typer.Option(\n        None, \"--rrule\", help=\"Deployment schedule rrule string\"\n    ),\n    cron_string: Optional[str] = typer.Option(\n        None, \"--cron\", help=\"Deployment schedule cron string\"\n    ),\n    cron_day_or: Optional[str] = typer.Option(\n        None,\n        \"--day_or\",\n        help=\"Control how croniter handles `day` and `day_of_week` entries\",\n    ),\n    timezone: Optional[str] = typer.Option(\n        None,\n        \"--timezone\",\n        help=\"Deployment schedule timezone string e.g. 'America/New_York'\",\n    ),\n):\n\"\"\"\n    Set schedule for a given deployment.\n    \"\"\"\n    assert_deployment_name_format(name)\n\n    if sum(option is not None for option in [interval, rrule_string, cron_string]) != 1:\n        exit_with_error(\n            \"Exactly one of `--interval`, `--rrule`, or `--cron` must be provided.\"\n        )\n\n    if interval_anchor and not interval:\n        exit_with_error(\"An anchor date can only be provided with an interval schedule\")\n\n    if interval is not None:\n        if interval_anchor:\n            try:\n                pendulum.parse(interval_anchor)\n            except ValueError:\n                exit_with_error(\"The anchor date must be a valid date string.\")\n        interval_schedule = {\n            \"interval\": interval,\n            \"anchor_date\": interval_anchor,\n            \"timezone\": timezone,\n        }\n        updated_schedule = IntervalSchedule(\n            **{k: v for k, v in interval_schedule.items() if v is not None}\n        )\n\n    if cron_string is not None:\n        cron_schedule = {\n            \"cron\": cron_string,\n            \"day_or\": cron_day_or,\n            \"timezone\": timezone,\n        }\n        updated_schedule = CronSchedule(\n            **{k: v for k, v in cron_schedule.items() if v is not None}\n        )\n\n    if rrule_string is not None:\n        # a timezone in the `rrule_string` gets ignored by the RRuleSchedule constructor\n        if \"TZID\" in rrule_string and not timezone:\n            exit_with_error(\n                \"You can provide a timezone by providing a dict with a `timezone` key\"\n                \" to the --rrule option. E.g. 
{'rrule': 'FREQ=MINUTELY;INTERVAL=5',\"\n                \" 'timezone': 'America/New_York'}.\\nAlternatively, you can provide a\"\n                \" timezone by passing in a --timezone argument.\"\n            )\n        try:\n            updated_schedule = RRuleSchedule(**json.loads(rrule_string))\n            if timezone:\n                # override timezone if specified via CLI argument\n                updated_schedule.timezone = timezone\n        except json.JSONDecodeError:\n            updated_schedule = RRuleSchedule(rrule=rrule_string, timezone=timezone)\n\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(name)\n        except ObjectNotFound:\n            exit_with_error(f\"Deployment {name!r} not found!\")\n\n        await client.update_deployment(deployment, schedule=updated_schedule)\n        exit_with_success(\"Updated deployment schedule!\")\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/deployment/#prefect.cli.deployment.str_presenter","title":"str_presenter","text":"

Configures YAML for dumping multiline strings. Ref: https://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/deployment.py
def str_presenter(dumper, data):\n\"\"\"\n    configures yaml for dumping multiline strings\n    Ref: https://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data\n    \"\"\"\n    if len(data.splitlines()) > 1:  # check for multiline string\n        return dumper.represent_scalar(\"tag:yaml.org,2002:str\", data, style=\"|\")\n    return dumper.represent_scalar(\"tag:yaml.org,2002:str\", data)\n
","tags":["Python API","CLI","deployments"]},{"location":"api-ref/prefect/cli/dev/","title":"prefect.cli.dev","text":"","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev","title":"prefect.cli.dev","text":"

Command line interface for working with Prefect Server

","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.agent","title":"agent async","text":"

Starts a hot-reloading development agent process.

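Example (an illustrative invocation; 'default' is a placeholder work queue name):

$ prefect dev agent -q default
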
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/dev.py
@dev_app.command()\nasync def agent(\n    api_url: str = SettingsOption(PREFECT_API_URL),\n    work_queues: List[str] = typer.Option(\n        [\"default\"],\n        \"-q\",\n        \"--work-queue\",\n        help=\"One or more work queue names for the agent to pull from.\",\n    ),\n):\n\"\"\"\n    Starts a hot-reloading development agent process.\n    \"\"\"\n    # Delayed import since this is only a 'dev' dependency\n    import watchfiles\n\n    app.console.print(\"Creating hot-reloading agent process...\")\n\n    try:\n        await watchfiles.arun_process(\n            prefect.__module_path__,\n            target=agent_process_entrypoint,\n            kwargs=dict(api=api_url, work_queues=work_queues),\n        )\n    except RuntimeError as err:\n        # a bug in watchfiles causes an 'Already borrowed' error from Rust when\n        # exiting: https://github.com/samuelcolvin/watchfiles/issues/200\n        if str(err).strip() != \"Already borrowed\":\n            raise\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.agent_process_entrypoint","title":"agent_process_entrypoint","text":"

An entrypoint for starting an agent in a subprocess. Adds a Rich console to the Typer app, processes Typer default parameters, then starts an agent. All kwargs are forwarded to prefect.cli.agent.start.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/dev.py
def agent_process_entrypoint(**kwargs):\n\"\"\"\n    An entrypoint for starting an agent in a subprocess. Adds a Rich console\n    to the Typer app, processes Typer default parameters, then starts an agent.\n    All kwargs are forwarded to  `prefect.cli.agent.start`.\n    \"\"\"\n    import inspect\n\n    import rich\n\n    # import locally so only the `dev` command breaks if Typer internals change\n    from typer.models import ParameterInfo\n\n    # Typer does not process default parameters when calling a function\n    # directly, so we must set `start_agent`'s default parameters manually.\n    # get the signature of the `start_agent` function\n    start_agent_signature = inspect.signature(start_agent)\n\n    # for any arguments not present in kwargs, use the default value.\n    for name, param in start_agent_signature.parameters.items():\n        if name not in kwargs:\n            # All `param.default` values for start_agent are Typer params that store the\n            # actual default value in their `default` attribute and we must call\n            # `param.default.default` to get the actual default value. We should also\n            # ensure we extract the right default if non-Typer defaults are added\n            # to `start_agent` in the future.\n            if isinstance(param.default, ParameterInfo):\n                default = param.default.default\n            else:\n                default = param.default\n\n            # Some defaults are Prefect `SettingsOption.value` methods\n            # that must be called to get the actual value.\n            kwargs[name] = default() if callable(default) else default\n\n    # add a console, because calling the agent start function directly\n    # instead of via CLI call means `app` has no `console` attached.\n    app.console = (\n        rich.console.Console(\n            highlight=False,\n            color_system=\"auto\" if PREFECT_CLI_COLORS else None,\n            soft_wrap=not PREFECT_CLI_WRAP_LINES.value(),\n        )\n        if not getattr(app, \"console\", None)\n        else app.console\n    )\n\n    try:\n        start_agent(**kwargs)  # type: ignore\n    except KeyboardInterrupt:\n        # expected when watchfiles kills the process\n        pass\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.api","title":"api async","text":"

Starts a hot-reloading development API.

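Example (uses the host and port from the current Prefect settings):

$ prefect dev api
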
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/dev.py
@dev_app.command()\nasync def api(\n    host: str = SettingsOption(PREFECT_SERVER_API_HOST),\n    port: int = SettingsOption(PREFECT_SERVER_API_PORT),\n    log_level: str = \"DEBUG\",\n    services: bool = True,\n):\n\"\"\"\n    Starts a hot-reloading development API.\n    \"\"\"\n    import watchfiles\n\n    server_env = os.environ.copy()\n    server_env[\"PREFECT_API_SERVICES_RUN_IN_APP\"] = str(services)\n    server_env[\"PREFECT_API_SERVICES_UI\"] = \"False\"\n    server_env[\"PREFECT_UI_API_URL\"] = f\"http://{host}:{port}/api\"\n\n    command = [\n        sys.executable,\n        \"-m\",\n        \"uvicorn\",\n        \"--factory\",\n        \"prefect.server.api.server:create_app\",\n        \"--host\",\n        str(host),\n        \"--port\",\n        str(port),\n        \"--log-level\",\n        log_level.lower(),\n    ]\n\n    app.console.print(f\"Running: {' '.join(command)}\")\n    import signal\n\n    stop_event = anyio.Event()\n    start_command = partial(\n        run_process, command=command, env=server_env, stream_output=True\n    )\n\n    async with anyio.create_task_group() as tg:\n        try:\n            server_pid = await tg.start(start_command)\n            async for _ in watchfiles.awatch(\n                prefect.__module_path__, stop_event=stop_event  # type: ignore\n            ):\n                # when any watched files change, restart the server\n                app.console.print(\"Restarting Prefect Server...\")\n                os.kill(server_pid, signal.SIGTERM)  # type: ignore\n                # start a new server\n                server_pid = await tg.start(start_command)\n        except RuntimeError as err:\n            # a bug in watchfiles causes an 'Already borrowed' error from Rust when\n            # exiting: https://github.com/samuelcolvin/watchfiles/issues/200\n            if str(err).strip() != \"Already borrowed\":\n                raise\n        except KeyboardInterrupt:\n            # exit cleanly on ctrl-c by killing the server process if it's\n            # still running\n            try:\n                os.kill(server_pid, signal.SIGTERM)  # type: ignore\n            except ProcessLookupError:\n                # process already exited\n                pass\n\n            stop_event.set()\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.build_docs","title":"build_docs","text":"

Builds REST API reference documentation for static display.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/dev.py
@dev_app.command()\ndef build_docs(\n    schema_path: str = None,\n):\n\"\"\"\n    Builds REST API reference documentation for static display.\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n\n    from prefect.server.api.server import create_app\n\n    schema = create_app(ephemeral=True).openapi()\n\n    if not schema_path:\n        schema_path = (\n            prefect.__development_base_path__ / \"docs\" / \"api-ref\" / \"schema.json\"\n        ).absolute()\n    # overwrite info for display purposes\n    schema[\"info\"] = {}\n    with open(schema_path, \"w\") as f:\n        json.dump(schema, f)\n    app.console.print(f\"OpenAPI schema written to {schema_path}\")\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.build_image","title":"build_image","text":"

Build a docker image for development.

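Example (an illustrative invocation; --dry-run prints the docker build command instead of running it):

$ prefect dev build-image --dry-run
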
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/dev.py
@dev_app.command()\ndef build_image(\n    arch: str = typer.Option(\n        None,\n        help=(\n            \"The architecture to build the container for. \"\n            \"Defaults to the architecture of the host Python. \"\n            f\"[default: {platform.machine()}]\"\n        ),\n    ),\n    python_version: str = typer.Option(\n        None,\n        help=(\n            \"The Python version to build the container for. \"\n            \"Defaults to the version of the host Python. \"\n            f\"[default: {python_version_minor()}]\"\n        ),\n    ),\n    flavor: str = typer.Option(\n        None,\n        help=(\n            \"An alternative flavor to build, for example 'conda'. \"\n            \"Defaults to the standard Python base image\"\n        ),\n    ),\n    dry_run: bool = False,\n):\n\"\"\"\n    Build a docker image for development.\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n    # TODO: Once https://github.com/tiangolo/typer/issues/354 is addresesd, the\n    #       default can be set in the function signature\n    arch = arch or platform.machine()\n    python_version = python_version or python_version_minor()\n\n    tag = get_prefect_image_name(python_version=python_version, flavor=flavor)\n\n    # Here we use a subprocess instead of the docker-py client to easily stream output\n    # as it comes\n    command = [\n        \"docker\",\n        \"build\",\n        str(prefect.__development_base_path__),\n        \"--tag\",\n        tag,\n        \"--platform\",\n        f\"linux/{arch}\",\n        \"--build-arg\",\n        \"PREFECT_EXTRAS=[dev]\",\n        \"--build-arg\",\n        f\"PYTHON_VERSION={python_version}\",\n    ]\n\n    if flavor:\n        command += [\"--build-arg\", f\"BASE_IMAGE=prefect-{flavor}\"]\n\n    if dry_run:\n        print(\" \".join(command))\n        return\n\n    try:\n        subprocess.check_call(command, shell=sys.platform == \"win32\")\n    except subprocess.CalledProcessError:\n        exit_with_error(\"Failed to build image!\")\n    else:\n        exit_with_success(f\"Built image {tag!r} for linux/{arch}\")\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.container","title":"container","text":"

Run a docker container with local code mounted and installed.

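Example (an illustrative invocation; --bg keeps the container running in the background after setup):

$ prefect dev container --bg
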
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/dev.py
@dev_app.command()\ndef container(bg: bool = False, name=\"prefect-dev\", api: bool = True, tag: str = None):\n\"\"\"\n    Run a docker container with local code mounted and installed.\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n    import docker\n    from docker.models.containers import Container\n\n    client = docker.from_env()\n\n    containers = client.containers.list()\n    container_names = {container.name for container in containers}\n    if name in container_names:\n        exit_with_error(\n            f\"Container {name!r} already exists. Specify a different name or stop \"\n            \"the existing container.\"\n        )\n\n    blocking_cmd = \"prefect dev api\" if api else \"sleep infinity\"\n    tag = tag or get_prefect_image_name()\n\n    container: Container = client.containers.create(\n        image=tag,\n        command=[\n            \"/bin/bash\",\n            \"-c\",\n            (  # noqa\n                \"pip install -e /opt/prefect/repo\\\\[dev\\\\] && touch /READY &&\"\n                f\" {blocking_cmd}\"\n            ),\n        ],\n        name=name,\n        auto_remove=True,\n        working_dir=\"/opt/prefect/repo\",\n        volumes=[f\"{prefect.__development_base_path__}:/opt/prefect/repo\"],\n        shm_size=\"4G\",\n    )\n\n    print(f\"Starting container for image {tag!r}...\")\n    container.start()\n\n    print(\"Waiting for installation to complete\", end=\"\", flush=True)\n    try:\n        ready = False\n        while not ready:\n            print(\".\", end=\"\", flush=True)\n            result = container.exec_run(\"test -f /READY\")\n            ready = result.exit_code == 0\n            if not ready:\n                time.sleep(3)\n    except BaseException:\n        print(\"\\nInterrupted. Stopping container...\")\n        container.stop()\n        raise\n\n    print(\n        textwrap.dedent(\n            f\"\"\"\n            Container {container.name!r} is ready! To connect to the container, run:\n\n                docker exec -it {container.name} /bin/bash\n            \"\"\"\n        )\n    )\n\n    if bg:\n        print(\n            textwrap.dedent(\n                f\"\"\"\n                The container will run forever. Stop the container with:\n\n                    docker stop {container.name}\n                \"\"\"\n            )\n        )\n        # Exit without stopping\n        return\n\n    try:\n        print(\"Send a keyboard interrupt to exit...\")\n        container.wait()\n    except KeyboardInterrupt:\n        pass  # Avoid showing \"Abort\"\n    finally:\n        print(\"\\nStopping container...\")\n        container.stop()\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.kubernetes_manifest","title":"kubernetes_manifest","text":"

Generates a Kubernetes manifest for development.

Example

$ prefect dev kubernetes-manifest | kubectl apply -f -

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/dev.py
@dev_app.command()\ndef kubernetes_manifest():\n\"\"\"\n    Generates a Kubernetes manifest for development.\n\n    Example:\n        $ prefect dev kubernetes-manifest | kubectl apply -f -\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n\n    template = Template(\n        (\n            prefect.__module_path__ / \"cli\" / \"templates\" / \"kubernetes-dev.yaml\"\n        ).read_text()\n    )\n    manifest = template.substitute(\n        {\n            \"prefect_root_directory\": prefect.__development_base_path__,\n            \"image_name\": get_prefect_image_name(),\n        }\n    )\n    print(manifest)\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.start","title":"start async","text":"

Starts a hot-reloading development server with API, UI, and agent processes.

Each service has an individual command if you wish to start them separately. Each service can be excluded here as well.

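Example (an illustrative invocation that starts the API and agent processes but skips the UI):

$ prefect dev start --no-ui
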
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/dev.py
@dev_app.command()\nasync def start(\n    exclude_api: bool = typer.Option(False, \"--no-api\"),\n    exclude_ui: bool = typer.Option(False, \"--no-ui\"),\n    exclude_agent: bool = typer.Option(False, \"--no-agent\"),\n    work_queues: List[str] = typer.Option(\n        [\"default\"],\n        \"-q\",\n        \"--work-queue\",\n        help=\"One or more work queue names for the dev agent to pull from.\",\n    ),\n):\n\"\"\"\n    Starts a hot-reloading development server with API, UI, and agent processes.\n\n    Each service has an individual command if you wish to start them separately.\n    Each service can be excluded here as well.\n    \"\"\"\n    async with anyio.create_task_group() as tg:\n        if not exclude_api:\n            tg.start_soon(\n                partial(\n                    api,\n                    host=PREFECT_SERVER_API_HOST.value(),\n                    port=PREFECT_SERVER_API_PORT.value(),\n                )\n            )\n        if not exclude_ui:\n            tg.start_soon(ui)\n        if not exclude_agent:\n            # Hook the agent to the hosted API if running\n            if not exclude_api:\n                host = f\"http://{PREFECT_SERVER_API_HOST.value()}:{PREFECT_SERVER_API_PORT.value()}/api\"  # noqa\n            else:\n                host = PREFECT_API_URL.value()\n            tg.start_soon(agent, host, work_queues)\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/dev/#prefect.cli.dev.ui","title":"ui async","text":"

Starts a hot-reloading development UI.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/dev.py
@dev_app.command()\nasync def ui():\n\"\"\"\n    Starts a hot-reloading development UI.\n    \"\"\"\n    exit_with_error_if_not_editable_install()\n    with tmpchdir(prefect.__development_base_path__):\n        with tmpchdir(prefect.__development_base_path__ / \"ui\"):\n            app.console.print(\"Installing npm packages...\")\n            await run_process([\"npm\", \"install\"], stream_output=True)\n\n            app.console.print(\"Starting UI development server...\")\n            await run_process(command=[\"npm\", \"run\", \"serve\"], stream_output=True)\n
","tags":["Python API","CLI","development"]},{"location":"api-ref/prefect/cli/flow_run/","title":"prefect.cli.flow_run","text":"","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run","title":"prefect.cli.flow_run","text":"

Command line interface for working with flow runs

","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.cancel","title":"cancel async","text":"

Cancel a flow run by ID.

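Example (<flow-run-id> is a placeholder for a flow run UUID):

$ prefect flow-run cancel <flow-run-id>
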
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/flow_run.py
@flow_run_app.command()\nasync def cancel(id: UUID):\n\"\"\"Cancel a flow run by ID.\"\"\"\n    async with get_client() as client:\n        cancelling_state = State(type=StateType.CANCELLING)\n        try:\n            result = await client.set_flow_run_state(\n                flow_run_id=id, state=cancelling_state\n            )\n        except ObjectNotFound:\n            exit_with_error(f\"Flow run '{id}' not found!\")\n\n    if result.status == SetStateStatus.ABORT:\n        exit_with_error(\n            f\"Flow run '{id}' was unable to be cancelled. Reason:\"\n            f\" '{result.details.reason}'\"\n        )\n\n    exit_with_success(f\"Flow run '{id}' was successfully scheduled for cancellation.\")\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.delete","title":"delete async","text":"

Delete a flow run by ID.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/flow_run.py
@flow_run_app.command()\nasync def delete(id: UUID):\n\"\"\"\n    Delete a flow run by ID.\n    \"\"\"\n    async with get_client() as client:\n        try:\n            await client.delete_flow_run(id)\n        except ObjectNotFound:\n            exit_with_error(f\"Flow run '{id}' not found!\")\n\n    exit_with_success(f\"Successfully deleted flow run '{id}'.\")\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.inspect","title":"inspect async","text":"

View details about a flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/flow_run.py
@flow_run_app.command()\nasync def inspect(id: UUID):\n\"\"\"\n    View details about a flow run.\n    \"\"\"\n    async with get_client() as client:\n        try:\n            flow_run = await client.read_flow_run(id)\n        except httpx.HTTPStatusError as exc:\n            if exc.response.status_code == status.HTTP_404_NOT_FOUND:\n                exit_with_error(f\"Flow run {id!r} not found!\")\n            else:\n                raise\n\n    app.console.print(Pretty(flow_run))\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.logs","title":"logs async","text":"

View logs for a flow run.

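Example (shows the last 20 logs for a run; <flow-run-id> is a placeholder for a flow run UUID):

$ prefect flow-run logs <flow-run-id> --tail --num-logs 20
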
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/flow_run.py
@flow_run_app.command()\nasync def logs(\n    id: UUID,\n    head: bool = typer.Option(\n        False,\n        \"--head\",\n        \"-h\",\n        help=(\n            f\"Show the first {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS} logs instead of\"\n            \" all logs.\"\n        ),\n    ),\n    num_logs: int = typer.Option(\n        None,\n        \"--num-logs\",\n        \"-n\",\n        help=(\n            \"Number of logs to show when using the --head or --tail flag. If None,\"\n            f\" defaults to {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS}.\"\n        ),\n        min=1,\n    ),\n    reverse: bool = typer.Option(\n        False,\n        \"--reverse\",\n        \"-r\",\n        help=\"Reverse the logs order to print the most recent logs first\",\n    ),\n    tail: bool = typer.Option(\n        False,\n        \"--tail\",\n        \"-t\",\n        help=(\n            f\"Show the last {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS} logs instead of\"\n            \" all logs.\"\n        ),\n    ),\n):\n\"\"\"\n    View logs for a flow run.\n    \"\"\"\n    # Pagination - API returns max 200 (LOGS_DEFAULT_PAGE_SIZE) logs at a time\n    offset = 0\n    more_logs = True\n    num_logs_returned = 0\n\n    # if head and tail flags are being used together\n    if head and tail:\n        exit_with_error(\"Please provide either a `head` or `tail` option but not both.\")\n\n    user_specified_num_logs = (\n        num_logs or LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS\n        if head or tail or num_logs\n        else None\n    )\n\n    # if using tail update offset according to LOGS_DEFAULT_PAGE_SIZE\n    if tail:\n        offset = max(0, user_specified_num_logs - LOGS_DEFAULT_PAGE_SIZE)\n\n    log_filter = LogFilter(flow_run_id={\"any_\": [id]})\n\n    async with get_client() as client:\n        # Get the flow run\n        try:\n            flow_run = await client.read_flow_run(id)\n        except ObjectNotFound:\n            exit_with_error(f\"Flow run {str(id)!r} not found!\")\n\n        while more_logs:\n            num_logs_to_return_from_page = (\n                LOGS_DEFAULT_PAGE_SIZE\n                if user_specified_num_logs is None\n                else min(\n                    LOGS_DEFAULT_PAGE_SIZE, user_specified_num_logs - num_logs_returned\n                )\n            )\n\n            # Get the next page of logs\n            page_logs = await client.read_logs(\n                log_filter=log_filter,\n                limit=num_logs_to_return_from_page,\n                offset=offset,\n                sort=(\n                    LogSort.TIMESTAMP_DESC if reverse or tail else LogSort.TIMESTAMP_ASC\n                ),\n            )\n\n            for log in reversed(page_logs) if tail and not reverse else page_logs:\n                app.console.print(\n                    # Print following the flow run format (declared in logging.yml)\n                    (\n                        f\"{pendulum.instance(log.timestamp).to_datetime_string()}.{log.timestamp.microsecond // 1000:03d} |\"\n                        f\" {logging.getLevelName(log.level):7s} | Flow run\"\n                        f\" {flow_run.name!r} - {log.message}\"\n                    ),\n                    soft_wrap=True,\n                )\n\n            # Update the number of logs retrieved\n            num_logs_returned += num_logs_to_return_from_page\n\n            if tail:\n                #  If the current offset is not 0, update the offset for the next page\n                if offset != 0:\n                    offset 
= (\n                        0\n                        # Reset the offset to 0 if there are less logs than the LOGS_DEFAULT_PAGE_SIZE to get the remaining log\n                        if offset < LOGS_DEFAULT_PAGE_SIZE\n                        else offset - LOGS_DEFAULT_PAGE_SIZE\n                    )\n                else:\n                    more_logs = False\n            else:\n                if len(page_logs) == LOGS_DEFAULT_PAGE_SIZE:\n                    offset += LOGS_DEFAULT_PAGE_SIZE\n                else:\n                    # No more logs to show, exit\n                    more_logs = False\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/flow_run/#prefect.cli.flow_run.ls","title":"ls async","text":"

View recent flow runs or flow runs for specific flows

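Example (lists the 10 most recent flow runs):

$ prefect flow-run ls --limit 10
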
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/flow_run.py
@flow_run_app.command()\nasync def ls(\n    flow_name: List[str] = typer.Option(None, help=\"Name of the flow\"),\n    limit: int = typer.Option(15, help=\"Maximum number of flow runs to list\"),\n    state: List[str] = typer.Option(None, help=\"Name of the flow run's state\"),\n    state_type: List[StateType] = typer.Option(\n        None, help=\"Type of the flow run's state\"\n    ),\n):\n\"\"\"\n    View recent flow runs or flow runs for specific flows\n    \"\"\"\n\n    state_filter = {}\n    if state:\n        state_filter[\"name\"] = {\"any_\": state}\n    if state_type:\n        state_filter[\"type\"] = {\"any_\": state_type}\n\n    async with get_client() as client:\n        flow_runs = await client.read_flow_runs(\n            flow_filter=FlowFilter(name={\"any_\": flow_name}) if flow_name else None,\n            flow_run_filter=FlowRunFilter(state=state_filter) if state_filter else None,\n            limit=limit,\n            sort=FlowRunSort.EXPECTED_START_TIME_DESC,\n        )\n        flows_by_id = {\n            flow.id: flow\n            for flow in await client.read_flows(\n                flow_filter=FlowFilter(id={\"any_\": [run.flow_id for run in flow_runs]})\n            )\n        }\n\n    table = Table(title=\"Flow Runs\")\n    table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Flow\", style=\"blue\", no_wrap=True)\n    table.add_column(\"Name\", style=\"green\", no_wrap=True)\n    table.add_column(\"State\", no_wrap=True)\n    table.add_column(\"When\", style=\"bold\", no_wrap=True)\n\n    for flow_run in sorted(flow_runs, key=lambda d: d.created, reverse=True):\n        flow = flows_by_id[flow_run.flow_id]\n        timestamp = (\n            flow_run.state.state_details.scheduled_time\n            if flow_run.state.is_scheduled()\n            else flow_run.state.timestamp\n        )\n        table.add_row(\n            str(flow_run.id),\n            str(flow.name),\n            str(flow_run.name),\n            str(flow_run.state.type.value),\n            pendulum.instance(timestamp).diff_for_humans(),\n        )\n\n    app.console.print(table)\n
","tags":["Python API","CLI","flow runs"]},{"location":"api-ref/prefect/cli/profile/","title":"prefect.cli.profile","text":"","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile","title":"prefect.cli.profile","text":"

Command line interface for working with profiles.

","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.create","title":"create","text":"

Create a new profile.

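Example (an illustrative invocation; 'staging' and 'default' are placeholder profile names):

$ prefect profile create staging --from default
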
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/profile.py
@profile_app.command()\ndef create(\n    name: str,\n    from_name: str = typer.Option(None, \"--from\", help=\"Copy an existing profile.\"),\n):\n\"\"\"\n    Create a new profile.\n    \"\"\"\n\n    profiles = prefect.settings.load_profiles()\n    if name in profiles:\n        app.console.print(\n            textwrap.dedent(\n                f\"\"\"\n                [red]Profile {name!r} already exists.[/red]\n                To create a new profile, remove the existing profile first:\n\n                    prefect profile delete {name!r}\n                \"\"\"\n            ).strip()\n        )\n        raise typer.Exit(1)\n\n    if from_name:\n        if from_name not in profiles:\n            exit_with_error(f\"Profile {from_name!r} not found.\")\n\n        # Create a copy of the profile with a new name and add to the collection\n        profiles.add_profile(profiles[from_name].copy(update={\"name\": name}))\n    else:\n        profiles.add_profile(prefect.settings.Profile(name=name, settings={}))\n\n    prefect.settings.save_profiles(profiles)\n\n    app.console.print(\n        textwrap.dedent(\n            f\"\"\"\n            Created profile with properties:\n                name - {name!r}\n                from name - {from_name or None}\n\n            Use created profile for future, subsequent commands:\n                prefect profile use {name!r}\n\n            Use created profile temporarily for a single command:\n                prefect -p {name!r} config view\n            \"\"\"\n        )\n    )\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.delete","title":"delete","text":"

Delete the given profile.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/profile.py
@profile_app.command()\ndef delete(name: str):\n\"\"\"\n    Delete the given profile.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    if name not in profiles:\n        exit_with_error(f\"Profile {name!r} not found.\")\n\n    current_profile = prefect.context.get_settings_context().profile\n    if current_profile.name == name:\n        exit_with_error(\n            f\"Profile {name!r} is the active profile. You must switch profiles before\"\n            \" it can be deleted.\"\n        )\n\n    profiles.remove_profile(name)\n\n    verb = \"Removed\"\n    if name == \"default\":\n        verb = \"Reset\"\n\n    prefect.settings.save_profiles(profiles)\n    exit_with_success(f\"{verb} profile {name!r}.\")\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.inspect","title":"inspect","text":"

Display settings from a given profile; defaults to active.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/profile.py
@profile_app.command()\ndef inspect(\n    name: Optional[str] = typer.Argument(\n        None, help=\"Name of profile to inspect; defaults to active profile.\"\n    )\n):\n\"\"\"\n    Display settings from a given profile; defaults to active.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    if name is None:\n        current_profile = prefect.context.get_settings_context().profile\n        if not current_profile:\n            exit_with_error(\"No active profile set - please provide a name to inspect.\")\n        name = current_profile.name\n        print(f\"No name provided, defaulting to {name!r}\")\n    if name not in profiles:\n        exit_with_error(f\"Profile {name!r} not found.\")\n\n    if not profiles[name].settings:\n        # TODO: Consider instructing on how to add settings.\n        print(f\"Profile {name!r} is empty.\")\n\n    for setting, value in profiles[name].settings.items():\n        app.console.print(f\"{setting.name}='{value}'\")\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.ls","title":"ls","text":"

List profile names.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/profile.py
@profile_app.command()\ndef ls():\n\"\"\"\n    List profile names.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    current_profile = prefect.context.get_settings_context().profile\n    current_name = current_profile.name if current_profile is not None else None\n\n    table = Table(caption=\"* active profile\")\n    table.add_column(\n        \"[#024dfd]Available Profiles:\", justify=\"right\", style=\"#8ea0ae\", no_wrap=True\n    )\n\n    for name in profiles:\n        if name == current_name:\n            table.add_row(f\"[green]  * {name}[/green]\")\n        else:\n            table.add_row(f\"  {name}\")\n    app.console.print(table)\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.rename","title":"rename","text":"

Change the name of a profile.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/profile.py
@profile_app.command()\ndef rename(name: str, new_name: str):\n\"\"\"\n    Change the name of a profile.\n    \"\"\"\n    profiles = prefect.settings.load_profiles()\n    if name not in profiles:\n        exit_with_error(f\"Profile {name!r} not found.\")\n\n    if new_name in profiles:\n        exit_with_error(f\"Profile {new_name!r} already exists.\")\n\n    profiles.add_profile(profiles[name].copy(update={\"name\": new_name}))\n    profiles.remove_profile(name)\n\n    # If the active profile was renamed switch the active profile to the new name.\n    prefect.context.get_settings_context().profile\n    if profiles.active_name == name:\n        profiles.set_active(new_name)\n    if os.environ.get(\"PREFECT_PROFILE\") == name:\n        app.console.print(\n            f\"You have set your current profile to {name!r} with the \"\n            \"PREFECT_PROFILE environment variable. You must update this variable to \"\n            f\"{new_name!r} to continue using the profile.\"\n        )\n\n    prefect.settings.save_profiles(profiles)\n    exit_with_success(f\"Renamed profile {name!r} to {new_name!r}.\")\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/profile/#prefect.cli.profile.use","title":"use async","text":"

Set the given profile to active.

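Example ('staging' is a placeholder profile name):

$ prefect profile use staging
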
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/profile.py
@profile_app.command()\nasync def use(name: str):\n\"\"\"\n    Set the given profile to active.\n    \"\"\"\n    status_messages = {\n        ConnectionStatus.CLOUD_CONNECTED: (\n            exit_with_success,\n            f\"Connected to Prefect Cloud using profile {name!r}\",\n        ),\n        ConnectionStatus.CLOUD_ERROR: (\n            exit_with_error,\n            f\"Error connecting to Prefect Cloud using profile {name!r}\",\n        ),\n        ConnectionStatus.CLOUD_UNAUTHORIZED: (\n            exit_with_error,\n            f\"Error authenticating with Prefect Cloud using profile {name!r}\",\n        ),\n        ConnectionStatus.ORION_CONNECTED: (\n            exit_with_success,\n            f\"Connected to Prefect server using profile {name!r}\",\n        ),\n        ConnectionStatus.ORION_ERROR: (\n            exit_with_error,\n            f\"Error connecting to Prefect server using profile {name!r}\",\n        ),\n        ConnectionStatus.EPHEMERAL: (\n            exit_with_success,\n            (\n                f\"No Prefect server specified using profile {name!r} - the API will run\"\n                \" in ephemeral mode.\"\n            ),\n        ),\n        ConnectionStatus.INVALID_API: (\n            exit_with_error,\n            \"Error connecting to Prefect API URL\",\n        ),\n    }\n\n    profiles = prefect.settings.load_profiles()\n    if name not in profiles.names:\n        exit_with_error(f\"Profile {name!r} not found.\")\n\n    profiles.set_active(name)\n    prefect.settings.save_profiles(profiles)\n\n    with Progress(\n        SpinnerColumn(),\n        TextColumn(\"[progress.description]{task.description}\"),\n        transient=False,\n    ) as progress:\n        progress.add_task(\n            description=\"Checking API connectivity...\",\n            total=None,\n        )\n\n        with use_profile(name, include_current_context=False):\n            connection_status = await check_orion_connection()\n\n        exit_method, msg = status_messages[connection_status]\n\n    exit_method(msg)\n
","tags":["Python API","CLI","settings","configuration","profiles"]},{"location":"api-ref/prefect/cli/project/","title":"prefect.cli.project","text":"","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project","title":"prefect.cli.project","text":"

Deprecated - Command line interface for working with projects.

","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project.clone","title":"clone async","text":"

Clone an existing project for a given deployment.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/project.py
@project_app.command()\nasync def clone(\n    deployment_name: str = typer.Option(\n        None,\n        \"--deployment\",\n        \"-d\",\n        help=\"The name of the deployment to clone a project for.\",\n    ),\n    deployment_id: str = typer.Option(\n        None,\n        \"--id\",\n        \"-i\",\n        help=\"The id of the deployment to clone a project for.\",\n    ),\n):\n\"\"\"\n    Clone an existing project for a given deployment.\n    \"\"\"\n    app.console.print(\n        generate_deprecation_message(\n            \"The `prefect project clone` command\",\n            start_date=\"Jun 2023\",\n        )\n    )\n    if deployment_name and deployment_id:\n        exit_with_error(\n            \"Can only pass one of deployment name or deployment ID options.\"\n        )\n\n    if not deployment_name and not deployment_id:\n        exit_with_error(\"Must pass either a deployment name or deployment ID.\")\n\n    if deployment_name:\n        async with get_client() as client:\n            try:\n                deployment = await client.read_deployment_by_name(deployment_name)\n            except ObjectNotFound:\n                exit_with_error(f\"Deployment {deployment_name!r} not found!\")\n    else:\n        async with get_client() as client:\n            try:\n                deployment = await client.read_deployment(deployment_id)\n            except ObjectNotFound:\n                exit_with_error(f\"Deployment {deployment_id!r} not found!\")\n\n    if deployment.pull_steps:\n        output = await run_steps(deployment.pull_steps)\n        app.console.out(output[\"directory\"])\n    else:\n        exit_with_error(\"No pull steps found, exiting early.\")\n
","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project.init","title":"init async","text":"

Initialize a new project.

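Example (an illustrative invocation; the recipe name and field value are placeholders and must correspond to an available recipe):

$ prefect init --recipe docker --field image_name=my-registry/my-image
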
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/project.py
@project_app.command()\n@app.command()\nasync def init(\n    name: str = None,\n    recipe: str = None,\n    fields: List[str] = typer.Option(\n        None,\n        \"-f\",\n        \"--field\",\n        help=(\n            \"One or more fields to pass to the recipe (e.g., image_name) in the format\"\n            \" of key=value.\"\n        ),\n    ),\n):\n\"\"\"\n    Initialize a new project.\n    \"\"\"\n    inputs = {}\n    fields = fields or []\n    recipe_paths = prefect.__module_path__ / \"deployments\" / \"recipes\"\n\n    for field in fields:\n        key, value = field.split(\"=\")\n        inputs[key] = value\n\n    if not recipe and is_interactive():\n        recipe_paths = prefect.__module_path__ / \"deployments\" / \"recipes\"\n        recipes = []\n\n        for r in recipe_paths.iterdir():\n            if r.is_dir() and (r / \"prefect.yaml\").exists():\n                with open(r / \"prefect.yaml\") as f:\n                    recipe_data = yaml.safe_load(f)\n                    recipe_name = r.name\n                    recipe_description = recipe_data.get(\n                        \"description\", \"(no description available)\"\n                    )\n                    recipe_dict = {\n                        \"name\": recipe_name,\n                        \"description\": recipe_description,\n                    }\n                    recipes.append(recipe_dict)\n\n        selected_recipe = prompt_select_from_table(\n            app.console,\n            \"Would you like to initialize your deployment configuration with a recipe?\",\n            columns=[\n                {\"header\": \"Name\", \"key\": \"name\"},\n                {\"header\": \"Description\", \"key\": \"description\"},\n            ],\n            data=recipes,\n            opt_out_message=\"No, I'll use the default deployment configuration.\",\n            opt_out_response={},\n        )\n        if selected_recipe != {}:\n            recipe = selected_recipe[\"name\"]\n\n    if recipe and (recipe_paths / recipe / \"prefect.yaml\").exists():\n        with open(recipe_paths / recipe / \"prefect.yaml\") as f:\n            recipe_inputs = yaml.safe_load(f).get(\"required_inputs\") or {}\n\n        if recipe_inputs:\n            if set(recipe_inputs.keys()) < set(inputs.keys()):\n                # message to user about extra fields\n                app.console.print(\n                    (\n                        f\"Warning: extra fields provided for {recipe!r} recipe:\"\n                        f\" '{', '.join(set(inputs.keys()) - set(recipe_inputs.keys()))}'\"\n                    ),\n                    style=\"red\",\n                )\n            elif set(recipe_inputs.keys()) > set(inputs.keys()):\n                table = Table(\n                    title=f\"[red]Required inputs for {recipe!r} recipe[/red]\",\n                )\n                table.add_column(\"Field Name\", style=\"green\", no_wrap=True)\n                table.add_column(\n                    \"Description\", justify=\"left\", style=\"white\", no_wrap=False\n                )\n                for field, description in recipe_inputs.items():\n                    if field not in inputs:\n                        table.add_row(field, description)\n\n                app.console.print(table)\n\n                for key, description in recipe_inputs.items():\n                    if key not in inputs:\n                        inputs[key] = typer.prompt(key)\n\n            app.console.print(\"-\" * 15)\n\n    try:\n        files = [\n    
        f\"[green]{fname}[/green]\"\n            for fname in initialize_project(name=name, recipe=recipe, inputs=inputs)\n        ]\n    except ValueError as exc:\n        if \"Unknown recipe\" in str(exc):\n            exit_with_error(\n                f\"Unknown recipe {recipe!r} provided - run [yellow]`prefect init\"\n                \"`[/yellow] to see all available recipes.\"\n            )\n        else:\n            raise\n\n    files = \"\\n\".join(files)\n    empty_msg = (\n        f\"Created project in [green]{Path('.').resolve()}[/green]; no new files\"\n        \" created.\"\n    )\n    file_msg = (\n        f\"Created project in [green]{Path('.').resolve()}[/green] with the following\"\n        f\" new files:\\n{files}\"\n    )\n    app.console.print(file_msg if files else empty_msg)\n
","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project.ls","title":"ls async","text":"

List available recipes.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/project.py
@recipe_app.command()\nasync def ls():\n\"\"\"\n    List available recipes.\n    \"\"\"\n\n    recipe_paths = prefect.__module_path__ / \"deployments\" / \"recipes\"\n    recipes = {}\n\n    for recipe in recipe_paths.iterdir():\n        if recipe.is_dir() and (recipe / \"prefect.yaml\").exists():\n            with open(recipe / \"prefect.yaml\") as f:\n                recipes[recipe.name] = yaml.safe_load(f).get(\n                    \"description\", \"(no description available)\"\n                )\n\n    table = Table(\n        title=\"Available project recipes\",\n        caption=(\n            \"Run `prefect project init --recipe <recipe>` to initialize a project with\"\n            \" a recipe.\"\n        ),\n        caption_style=\"red\",\n    )\n    table.add_column(\"Name\", style=\"green\", no_wrap=True)\n    table.add_column(\"Description\", justify=\"left\", style=\"white\", no_wrap=False)\n    for name, description in sorted(recipes.items(), key=lambda x: x[0]):\n        table.add_row(name, description)\n\n    app.console.print(table)\n
","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/project/#prefect.cli.project.register_flow","title":"register_flow async","text":"

Register a flow with this project.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/project.py
@project_app.command()\nasync def register_flow(\n    entrypoint: str = typer.Argument(\n        ...,\n        help=(\n            \"The path to a flow entrypoint, in the form of\"\n            \" `./path/to/file.py:flow_func_name`\"\n        ),\n    ),\n    force: bool = typer.Option(\n        False,\n        \"--force\",\n        \"-f\",\n        help=(\n            \"An optional flag to force register this flow and overwrite any existing\"\n            \" entry\"\n        ),\n    ),\n):\n\"\"\"\n    Register a flow with this project.\n    \"\"\"\n    try:\n        flow = await register(entrypoint, force=force)\n    except Exception as exc:\n        exit_with_error(exc)\n\n    app.console.print(\n        (\n            f\"Registered flow {flow.name!r} in\"\n            f\" {(find_prefect_directory()/'flows.json').resolve()!s}\"\n        ),\n        style=\"green\",\n    )\n
","tags":["Python API","CLI","projects","deployments","storage"]},{"location":"api-ref/prefect/cli/root/","title":"prefect.cli.root","text":"","tags":["Python API","CLI"]},{"location":"api-ref/prefect/cli/root/#prefect.cli.root","title":"prefect.cli.root","text":"

Base prefect command-line application

","tags":["Python API","CLI"]},{"location":"api-ref/prefect/cli/root/#prefect.cli.root.version","title":"version async","text":"

Get the current Prefect version.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/root.py
@app.command()\nasync def version():\n\"\"\"Get the current Prefect version.\"\"\"\n    import sqlite3\n\n    from prefect.server.api.server import SERVER_API_VERSION\n    from prefect.server.utilities.database import get_dialect\n    from prefect.settings import PREFECT_API_DATABASE_CONNECTION_URL\n\n    version_info = {\n        \"Version\": prefect.__version__,\n        \"API version\": SERVER_API_VERSION,\n        \"Python version\": platform.python_version(),\n        \"Git commit\": prefect.__version_info__[\"full-revisionid\"][:8],\n        \"Built\": pendulum.parse(\n            prefect.__version_info__[\"date\"]\n        ).to_day_datetime_string(),\n        \"OS/Arch\": f\"{sys.platform}/{platform.machine()}\",\n        \"Profile\": prefect.context.get_settings_context().profile.name,\n    }\n\n    server_type: str\n\n    try:\n        # We do not context manage the client because when using an ephemeral app we do not\n        # want to create the database or run migrations\n        client = prefect.get_client()\n        server_type = client.server_type.value\n    except Exception:\n        server_type = \"<client error>\"\n\n    version_info[\"Server type\"] = server_type.lower()\n\n    # TODO: Consider adding an API route to retrieve this information?\n    if server_type == ServerType.EPHEMERAL.value:\n        database = get_dialect(PREFECT_API_DATABASE_CONNECTION_URL.value()).name\n        version_info[\"Server\"] = {\"Database\": database}\n        if database == \"sqlite\":\n            version_info[\"Server\"][\"SQLite version\"] = sqlite3.sqlite_version\n\n    def display(object: dict, nesting: int = 0):\n        # Recursive display of a dictionary with nesting\n        for key, value in object.items():\n            key += \":\"\n            if isinstance(value, dict):\n                app.console.print(key)\n                return display(value, nesting + 2)\n            prefix = \" \" * nesting\n            app.console.print(f\"{prefix}{key.ljust(20 - len(prefix))} {value}\")\n\n    display(version_info)\n
","tags":["Python API","CLI"]},{"location":"api-ref/prefect/cli/server/","title":"prefect.cli.server","text":"","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server","title":"prefect.cli.server","text":"

Command line interface for working with Prefect

","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.downgrade","title":"downgrade async","text":"

Downgrade the Prefect database

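Example (an illustrative invocation; --dry-run emits the SQL statements without applying them and -y skips the confirmation prompt):

$ prefect server database downgrade -y --dry-run
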
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/server.py
@orion_database_app.command()\n@database_app.command()\nasync def downgrade(\n    yes: bool = typer.Option(False, \"--yes\", \"-y\"),\n    revision: str = typer.Option(\n        \"base\",\n        \"-r\",\n        help=(\n            \"The revision to pass to `alembic downgrade`. If not provided, runs all\"\n            \" migrations.\"\n        ),\n    ),\n    dry_run: bool = typer.Option(\n        False,\n        help=(\n            \"Flag to show what migrations would be made without applying them. Will\"\n            \" emit sql statements to stdout.\"\n        ),\n    ),\n):\n\"\"\"Downgrade the Prefect database\"\"\"\n    from prefect.server.database.alembic_commands import alembic_downgrade\n    from prefect.server.database.dependencies import provide_database_interface\n\n    db = provide_database_interface()\n\n    engine = await db.engine()\n\n    if not yes:\n        confirm = typer.confirm(\n            \"Are you sure you want to downgrade the Prefect \"\n            f\"database at {engine.url!r}?\"\n        )\n        if not confirm:\n            exit_with_error(\"Database downgrade aborted!\")\n\n    app.console.print(\"Running downgrade migrations ...\")\n    await run_sync_in_worker_thread(\n        alembic_downgrade, revision=revision, dry_run=dry_run\n    )\n    app.console.print(\"Migrations succeeded!\")\n    exit_with_success(f\"Prefect database at {engine.url!r} downgraded!\")\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.reset","title":"reset async","text":"

Drop and recreate all Prefect database tables

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/server.py
@orion_database_app.command()\n@database_app.command()\nasync def reset(yes: bool = typer.Option(False, \"--yes\", \"-y\")):\n\"\"\"Drop and recreate all Prefect database tables\"\"\"\n    from prefect.server.database.dependencies import provide_database_interface\n\n    db = provide_database_interface()\n    engine = await db.engine()\n    if not yes:\n        confirm = typer.confirm(\n            \"Are you sure you want to reset the Prefect database located \"\n            f'at \"{engine.url!r}\"? This will drop and recreate all tables.'\n        )\n        if not confirm:\n            exit_with_error(\"Database reset aborted\")\n    app.console.print(\"Downgrading database...\")\n    await db.drop_db()\n    app.console.print(\"Upgrading database...\")\n    await db.create_db()\n    exit_with_success(f'Prefect database \"{engine.url!r}\" reset!')\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.revision","title":"revision async","text":"

Create a new migration for the Prefect database

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/server.py
@orion_database_app.command()\n@database_app.command()\nasync def revision(\n    message: str = typer.Option(\n        None,\n        \"--message\",\n        \"-m\",\n        help=\"A message to describe the migration.\",\n    ),\n    autogenerate: bool = False,\n):\n\"\"\"Create a new migration for the Prefect database\"\"\"\n    from prefect.server.database.alembic_commands import alembic_revision\n\n    app.console.print(\"Running migration file creation ...\")\n    await run_sync_in_worker_thread(\n        alembic_revision,\n        message=message,\n        autogenerate=autogenerate,\n    )\n    exit_with_success(\"Creating new migration file succeeded!\")\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.stamp","title":"stamp async","text":"

Stamp the revision table with the given revision; don't run any migrations

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/server.py
@orion_database_app.command()\n@database_app.command()\nasync def stamp(revision: str):\n\"\"\"Stamp the revision table with the given revision; don't run any migrations\"\"\"\n    from prefect.server.database.alembic_commands import alembic_stamp\n\n    app.console.print(\"Stamping database with revision ...\")\n    await run_sync_in_worker_thread(alembic_stamp, revision=revision)\n    exit_with_success(\"Stamping database with revision succeeded!\")\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.start","title":"start async","text":"

Start a Prefect server

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/server.py
@orion_app.command()\n@server_app.command()\nasync def start(\n    host: str = SettingsOption(PREFECT_SERVER_API_HOST),\n    port: int = SettingsOption(PREFECT_SERVER_API_PORT),\n    keep_alive_timeout: int = SettingsOption(PREFECT_SERVER_API_KEEPALIVE_TIMEOUT),\n    log_level: str = SettingsOption(PREFECT_LOGGING_SERVER_LEVEL),\n    scheduler: bool = SettingsOption(PREFECT_API_SERVICES_SCHEDULER_ENABLED),\n    analytics: bool = SettingsOption(\n        PREFECT_SERVER_ANALYTICS_ENABLED, \"--analytics-on/--analytics-off\"\n    ),\n    late_runs: bool = SettingsOption(PREFECT_API_SERVICES_LATE_RUNS_ENABLED),\n    ui: bool = SettingsOption(PREFECT_UI_ENABLED),\n):\n\"\"\"Start a Prefect server\"\"\"\n\n    server_env = os.environ.copy()\n    server_env[\"PREFECT_API_SERVICES_SCHEDULER_ENABLED\"] = str(scheduler)\n    server_env[\"PREFECT_SERVER_ANALYTICS_ENABLED\"] = str(analytics)\n    server_env[\"PREFECT_API_SERVICES_LATE_RUNS_ENABLED\"] = str(late_runs)\n    server_env[\"PREFECT_API_SERVICES_UI\"] = str(ui)\n    server_env[\"PREFECT_LOGGING_SERVER_LEVEL\"] = log_level\n\n    base_url = f\"http://{host}:{port}\"\n\n    async with anyio.create_task_group() as tg:\n        app.console.print(generate_welcome_blurb(base_url, ui_enabled=ui))\n        app.console.print(\"\\n\")\n\n        server_process_id = await tg.start(\n            partial(\n                run_process,\n                command=[\n                    sys.executable,\n                    \"-m\",\n                    \"uvicorn\",\n                    \"--app-dir\",\n                    # quote wrapping needed for windows paths with spaces\n                    f'\"{prefect.__module_path__.parent}\"',\n                    \"--factory\",\n                    \"prefect.server.api.server:create_app\",\n                    \"--host\",\n                    str(host),\n                    \"--port\",\n                    str(port),\n                    \"--timeout-keep-alive\",\n                    str(keep_alive_timeout),\n                ],\n                env=server_env,\n                stream_output=True,\n            )\n        )\n\n        # Explicitly handle the interrupt signal here, as it will allow us to\n        # cleanly stop the uvicorn server. Failing to do that may cause a\n        # large amount of anyio error traces on the terminal, because the\n        # SIGINT is handled by Typer/Click in this process (the parent process)\n        # and will start shutting down subprocesses:\n        # https://github.com/PrefectHQ/server/issues/2475\n\n        setup_signal_handlers_server(\n            server_process_id, \"the Prefect server\", app.console.print\n        )\n\n    app.console.print(\"Server stopped!\")\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/server/#prefect.cli.server.upgrade","title":"upgrade async","text":"

Upgrade the Prefect database

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/server.py
@orion_database_app.command()\n@database_app.command()\nasync def upgrade(\n    yes: bool = typer.Option(False, \"--yes\", \"-y\"),\n    revision: str = typer.Option(\n        \"head\",\n        \"-r\",\n        help=(\n            \"The revision to pass to `alembic upgrade`. If not provided, runs all\"\n            \" migrations.\"\n        ),\n    ),\n    dry_run: bool = typer.Option(\n        False,\n        help=(\n            \"Flag to show what migrations would be made without applying them. Will\"\n            \" emit sql statements to stdout.\"\n        ),\n    ),\n):\n\"\"\"Upgrade the Prefect database\"\"\"\n    from prefect.server.database.alembic_commands import alembic_upgrade\n    from prefect.server.database.dependencies import provide_database_interface\n\n    db = provide_database_interface()\n    engine = await db.engine()\n\n    if not yes:\n        confirm = typer.confirm(\n            f\"Are you sure you want to upgrade the Prefect database at {engine.url!r}?\"\n        )\n        if not confirm:\n            exit_with_error(\"Database upgrade aborted!\")\n\n    app.console.print(\"Running upgrade migrations ...\")\n    await run_sync_in_worker_thread(alembic_upgrade, revision=revision, dry_run=dry_run)\n    app.console.print(\"Migrations succeeded!\")\n    exit_with_success(f\"Prefect database at {engine.url!r} upgraded!\")\n
","tags":["Python API","CLI","Kubernetes","database"]},{"location":"api-ref/prefect/cli/work_queue/","title":"prefect.cli.work_queue","text":"","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue","title":"prefect.cli.work_queue","text":"

Command line interface for working with work queues.

","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.clear_concurrency_limit","title":"clear_concurrency_limit async","text":"

Clear any concurrency limits from a work queue.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def clear_concurrency_limit(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue to clear\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n\"\"\"\n    Clear any concurrency limits from a work queue.\n    \"\"\"\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n    async with get_client() as client:\n        try:\n            await client.update_work_queue(\n                id=queue_id,\n                concurrency_limit=None,\n            )\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n\n    if pool:\n        success_message = (\n            f\"Concurrency limits removed on work queue {name!r} in work pool {pool!r}\"\n        )\n    else:\n        success_message = f\"Concurrency limits removed on work queue {name!r}\"\n    exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.create","title":"create async","text":"

Create a work queue.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def create(\n    name: str = typer.Argument(..., help=\"The unique name to assign this work queue\"),\n    limit: int = typer.Option(\n        None, \"-l\", \"--limit\", help=\"The concurrency limit to set on the queue.\"\n    ),\n    tags: List[str] = typer.Option(\n        None,\n        \"-t\",\n        \"--tag\",\n        help=(\n            \"DEPRECATED: One or more optional tags. This option will be removed on\"\n            \" 2023-02-23.\"\n        ),\n    ),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool to create the work queue in.\",\n    ),\n    priority: Optional[int] = typer.Option(\n        None,\n        \"-q\",\n        \"--priority\",\n        help=\"The associated priority for the created work queue\",\n    ),\n):\n\"\"\"\n    Create a work queue.\n    \"\"\"\n    if tags:\n        app.console.print(\n            (\n                \"Supplying `tags` for work queues is deprecated. This work \"\n                \"queue will use legacy tag-matching behavior. \"\n                \"This option will be removed on 2023-02-23.\"\n            ),\n            style=\"red\",\n        )\n\n    if pool and tags:\n        exit_with_error(\n            \"Work queues created with tags cannot specify work pools or set priorities.\"\n        )\n\n    async with get_client() as client:\n        try:\n            result = await client.create_work_queue(\n                name=name, tags=tags or None, work_pool_name=pool, priority=priority\n            )\n            if limit is not None:\n                await client.update_work_queue(\n                    id=result.id,\n                    concurrency_limit=limit,\n                )\n        except ObjectAlreadyExists:\n            exit_with_error(f\"Work queue with name: {name!r} already exists.\")\n        except ObjectNotFound:\n            exit_with_error(f\"Work pool with name: {pool!r} not found.\")\n\n    if tags:\n        tags_message = f\"tags - {', '.join(sorted(tags))}\\n\" or \"\"\n        output_msg = dedent(\n            f\"\"\"\n            Created work queue with properties:\n                name - {name!r}\n                id - {result.id}\n                concurrency limit - {limit}\n{tags_message}\n            Start an agent to pick up flow runs from the work queue:\n                prefect agent start -q '{result.name}'\n\n            Inspect the work queue:\n                prefect work-queue inspect '{result.name}'\n            \"\"\"\n        )\n    else:\n        if not pool:\n            # specify the default work pool name after work queue creation to allow the server\n            # to handle a bunch of logic associated with agents without work pools\n            pool = DEFAULT_AGENT_WORK_POOL_NAME\n        output_msg = dedent(\n            f\"\"\"\n            Created work queue with properties:\n                name - {name!r}\n                work pool - {pool!r}\n                id - {result.id}\n                concurrency limit - {limit}\n            Start an agent to pick up flow runs from the work queue:\n                prefect agent start -q '{result.name} -p {pool}'\n\n            Inspect the work queue:\n                prefect work-queue inspect '{result.name}'\n            \"\"\"\n        )\n    exit_with_success(output_msg)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.delete","title":"delete async","text":"

Delete a work queue by ID.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def delete(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue to delete\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool containing the work queue to delete.\",\n    ),\n):\n\"\"\"\n    Delete a work queue by ID.\n    \"\"\"\n\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n    async with get_client() as client:\n        try:\n            await client.delete_work_queue_by_id(id=queue_id)\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n    if pool:\n        success_message = (\n            f\"Successfully deleted work queue {name!r} in work pool {pool!r}\"\n        )\n    else:\n        success_message = f\"Successfully deleted work queue {name!r}\"\n    exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.inspect","title":"inspect async","text":"

Inspect a work queue by ID.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def inspect(\n    name: str = typer.Argument(\n        None, help=\"The name or ID of the work queue to inspect\"\n    ),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n\"\"\"\n    Inspect a work queue by ID.\n    \"\"\"\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n    async with get_client() as client:\n        try:\n            result = await client.read_work_queue(id=queue_id)\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n\n    app.console.print(Pretty(result))\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.ls","title":"ls async","text":"

View all work queues.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def ls(\n    verbose: bool = typer.Option(\n        False, \"--verbose\", \"-v\", help=\"Display more information.\"\n    ),\n    work_queue_prefix: str = typer.Option(\n        None,\n        \"--match\",\n        \"-m\",\n        help=(\n            \"Will match work queues with names that start with the specified prefix\"\n            \" string\"\n        ),\n    ),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool containing the work queues to list.\",\n    ),\n):\n\"\"\"\n    View all work queues.\n    \"\"\"\n    if not pool and not experiment_enabled(\"work_pools\"):\n        table = Table(\n            title=\"Work Queues\",\n            caption=\"(**) denotes a paused queue\",\n            caption_style=\"red\",\n        )\n        table.add_column(\"Name\", style=\"green\", no_wrap=True)\n        table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n        table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n        if verbose:\n            table.add_column(\"Filter (Deprecated)\", style=\"magenta\", no_wrap=True)\n\n        async with get_client() as client:\n            if work_queue_prefix is not None:\n                queues = await client.match_work_queues([work_queue_prefix])\n            else:\n                queues = await client.read_work_queues()\n\n            def sort_by_created_key(q):\n                return pendulum.now(\"utc\") - q.created\n\n            for queue in sorted(queues, key=sort_by_created_key):\n                row = [\n                    f\"{queue.name} [red](**)\" if queue.is_paused else queue.name,\n                    str(queue.id),\n                    (\n                        f\"[red]{queue.concurrency_limit}\"\n                        if queue.concurrency_limit\n                        else \"[blue]None\"\n                    ),\n                ]\n                if verbose and queue.filter is not None:\n                    row.append(queue.filter.json())\n                table.add_row(*row)\n    elif not pool:\n        table = Table(\n            title=\"Work Queues\",\n            caption=\"(**) denotes a paused queue\",\n            caption_style=\"red\",\n        )\n        table.add_column(\"Name\", style=\"green\", no_wrap=True)\n        table.add_column(\"Pool\", style=\"magenta\", no_wrap=True)\n        table.add_column(\"ID\", justify=\"right\", style=\"cyan\", no_wrap=True)\n        table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n        if verbose:\n            table.add_column(\"Filter (Deprecated)\", style=\"magenta\", no_wrap=True)\n\n        async with get_client() as client:\n            if work_queue_prefix is not None:\n                queues = await client.match_work_queues([work_queue_prefix])\n            else:\n                queues = await client.read_work_queues()\n\n            pool_ids = [q.work_pool_id for q in queues]\n            wp_filter = WorkPoolFilter(id=WorkPoolFilterId(any_=pool_ids))\n            pools = await client.read_work_pools(work_pool_filter=wp_filter)\n            pool_id_name_map = {p.id: p.name for p in pools}\n\n            def sort_by_created_key(q):\n                return pendulum.now(\"utc\") - q.created\n\n            for queue in sorted(queues, key=sort_by_created_key):\n                row = [\n                    
f\"{queue.name} [red](**)\" if queue.is_paused else queue.name,\n                    pool_id_name_map[queue.work_pool_id],\n                    str(queue.id),\n                    (\n                        f\"[red]{queue.concurrency_limit}\"\n                        if queue.concurrency_limit\n                        else \"[blue]None\"\n                    ),\n                ]\n                if verbose and queue.filter is not None:\n                    row.append(queue.filter.json())\n                table.add_row(*row)\n\n    else:\n        table = Table(\n            title=f\"Work Queues in Work Pool {pool!r}\",\n            caption=\"(**) denotes a paused queue\",\n            caption_style=\"red\",\n        )\n        table.add_column(\"Name\", style=\"green\", no_wrap=True)\n        table.add_column(\"Priority\", style=\"magenta\", no_wrap=True)\n        table.add_column(\"Concurrency Limit\", style=\"blue\", no_wrap=True)\n        if verbose:\n            table.add_column(\"Description\", style=\"cyan\", no_wrap=False)\n\n        async with get_client() as client:\n            try:\n                queues = await client.read_work_queues(work_pool_name=pool)\n            except ObjectNotFound:\n                exit_with_error(f\"No work pool found: {pool!r}\")\n\n            def sort_by_created_key(q):\n                return pendulum.now(\"utc\") - q.created\n\n            for queue in sorted(queues, key=sort_by_created_key):\n                row = [\n                    f\"{queue.name} [red](**)\" if queue.is_paused else queue.name,\n                    f\"{queue.priority}\",\n                    (\n                        f\"[red]{queue.concurrency_limit}\"\n                        if queue.concurrency_limit\n                        else \"[blue]None\"\n                    ),\n                ]\n                if verbose:\n                    row.append(queue.description)\n                table.add_row(*row)\n\n    app.console.print(table)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.pause","title":"pause async","text":"

Pause a work queue.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def pause(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue to pause\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n\"\"\"\n    Pause a work queue.\n    \"\"\"\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n\n    async with get_client() as client:\n        try:\n            await client.update_work_queue(\n                id=queue_id,\n                is_paused=True,\n            )\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n\n    if pool:\n        success_message = f\"Work queue {name!r} in work pool {pool!r} paused\"\n    else:\n        success_message = f\"Work queue {name!r} paused\"\n    exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.preview","title":"preview async","text":"

Preview a work queue.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def preview(\n    name: str = typer.Argument(\n        None, help=\"The name or ID of the work queue to preview\"\n    ),\n    hours: int = typer.Option(\n        None,\n        \"-h\",\n        \"--hours\",\n        help=\"The number of hours to look ahead; defaults to 1 hour\",\n    ),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n\"\"\"\n    Preview a work queue.\n    \"\"\"\n    if pool:\n        title = f\"Preview of Work Queue {name!r} in Work Pool {pool!r}\"\n    else:\n        title = f\"Preview of Work Queue {name!r}\"\n\n    table = Table(title=title, caption=\"(**) denotes a late run\", caption_style=\"red\")\n    table.add_column(\n        \"Scheduled Start Time\", justify=\"left\", style=\"yellow\", no_wrap=True\n    )\n    table.add_column(\"Run ID\", justify=\"left\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Name\", style=\"green\", no_wrap=True)\n    table.add_column(\"Deployment ID\", style=\"blue\", no_wrap=True)\n\n    window = pendulum.now(\"utc\").add(hours=hours or 1)\n\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name, work_pool_name=pool\n    )\n    async with get_client() as client:\n        if pool:\n            try:\n                responses = await client.get_scheduled_flow_runs_for_work_pool(\n                    work_pool_name=pool,\n                    work_queue_names=[name],\n                )\n                runs = [response.flow_run for response in responses]\n            except ObjectNotFound:\n                exit_with_error(f\"No work queue found: {name!r} in work pool {pool!r}\")\n        else:\n            try:\n                runs = await client.get_runs_in_work_queue(\n                    queue_id,\n                    limit=10,\n                    scheduled_before=window,\n                )\n            except ObjectNotFound:\n                exit_with_error(f\"No work queue found: {name!r}\")\n    now = pendulum.now(\"utc\")\n\n    def sort_by_created_key(r):\n        return now - r.created\n\n    for run in sorted(runs, key=sort_by_created_key):\n        table.add_row(\n            (\n                f\"{run.expected_start_time} [red](**)\"\n                if run.expected_start_time < now\n                else f\"{run.expected_start_time}\"\n            ),\n            str(run.id),\n            run.name,\n            str(run.deployment_id),\n        )\n\n    if runs:\n        app.console.print(table)\n    else:\n        app.console.print(\n            (\n                \"No runs found - try increasing how far into the future you preview\"\n                \" with the --hours flag\"\n            ),\n            style=\"yellow\",\n        )\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.resume","title":"resume async","text":"

Resume a paused work queue.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def resume(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue to resume\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n\"\"\"\n    Resume a paused work queue.\n    \"\"\"\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n\n    async with get_client() as client:\n        try:\n            await client.update_work_queue(\n                id=queue_id,\n                is_paused=False,\n            )\n        except ObjectNotFound:\n            if pool:\n                error_message = f\"No work queue found: {name!r} in work pool {pool!r}\"\n            else:\n                error_message = f\"No work queue found: {name!r}\"\n            exit_with_error(error_message)\n\n    if pool:\n        success_message = f\"Work queue {name!r} in work pool {pool!r} resumed\"\n    else:\n        success_message = f\"Work queue {name!r} resumed\"\n    exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/cli/work_queue/#prefect.cli.work_queue.set_concurrency_limit","title":"set_concurrency_limit async","text":"

Set a concurrency limit on a work queue.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/cli/work_queue.py
@work_app.command()\n@experimental_parameter(\"pool\", group=\"work_pools\", when=lambda y: y is not None)\nasync def set_concurrency_limit(\n    name: str = typer.Argument(..., help=\"The name or ID of the work queue\"),\n    limit: int = typer.Argument(..., help=\"The concurrency limit to set on the queue.\"),\n    pool: Optional[str] = typer.Option(\n        None,\n        \"-p\",\n        \"--pool\",\n        help=\"The name of the work pool that the work queue belongs to.\",\n    ),\n):\n\"\"\"\n    Set a concurrency limit on a work queue.\n    \"\"\"\n    queue_id = await _get_work_queue_id_from_name_or_id(\n        name_or_id=name,\n        work_pool_name=pool,\n    )\n\n    async with get_client() as client:\n        try:\n            await client.update_work_queue(\n                id=queue_id,\n                concurrency_limit=limit,\n            )\n        except ObjectNotFound:\n            if pool:\n                error_message = (\n                    f\"No work queue named {name!r} found in work pool {pool!r}.\"\n                )\n            else:\n                error_message = f\"No work queue named {name!r} found.\"\n            exit_with_error(error_message)\n\n    if pool:\n        success_message = (\n            f\"Concurrency limit of {limit} set on work queue {name!r} in work pool\"\n            f\" {pool!r}\"\n        )\n    else:\n        success_message = f\"Concurrency limit of {limit} set on work queue {name!r}\"\n    exit_with_success(success_message)\n
","tags":["Python API","CLI","work-queue"]},{"location":"api-ref/prefect/client/base/","title":"prefect.client.base","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base","title":"prefect.client.base","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectResponse","title":"PrefectResponse","text":"

Bases: httpx.Response

A Prefect wrapper for the httpx.Response class.

Provides more informative error messages.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/base.py
class PrefectResponse(httpx.Response):\n\"\"\"\n    A Prefect wrapper for the `httpx.Response` class.\n\n    Provides more informative error messages.\n    \"\"\"\n\n    def raise_for_status(self) -> None:\n\"\"\"\n        Raise an exception if the response contains an HTTPStatusError.\n\n        The `PrefectHTTPStatusError` contains useful additional information that\n        is not contained in the `HTTPStatusError`.\n        \"\"\"\n        try:\n            return super().raise_for_status()\n        except HTTPStatusError as exc:\n            raise PrefectHTTPStatusError.from_httpx_error(exc) from exc.__cause__\n\n    @classmethod\n    def from_httpx_response(cls: Type[Self], response: httpx.Response) -> Self:\n\"\"\"\n        Create a `PrefectReponse` from an `httpx.Response`.\n\n        By changing the `__class__` attribute of the Response, we change the method\n        resolution order to look for methods defined in PrefectResponse, while leaving\n        everything else about the original Response instance intact.\n        \"\"\"\n        new_response = copy.copy(response)\n        new_response.__class__ = cls\n        return new_response\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectResponse.raise_for_status","title":"raise_for_status","text":"

Raise an exception if the response contains an HTTPStatusError.

The PrefectHTTPStatusError contains useful additional information that is not contained in the HTTPStatusError.
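As a hedged sketch of what this buys in practice, the snippet below (assuming a reachable API and a flow id that does not exist) catches the richer PrefectHTTPStatusError instead of a bare httpx.HTTPStatusError:

```python
# Minimal sketch: the flow id below is made up, so the API returns a 404 and
# the wrapped raise_for_status surfaces a PrefectHTTPStatusError whose message
# includes the response body (e.g. the `detail` field).
import asyncio
from uuid import uuid4

from prefect import get_client
from prefect.exceptions import PrefectHTTPStatusError


async def main():
    async with get_client() as client:
        try:
            await client.read_flow(uuid4())
        except PrefectHTTPStatusError as exc:
            print(exc)


asyncio.run(main())
```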

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/base.py
def raise_for_status(self) -> None:\n\"\"\"\n    Raise an exception if the response contains an HTTPStatusError.\n\n    The `PrefectHTTPStatusError` contains useful additional information that\n    is not contained in the `HTTPStatusError`.\n    \"\"\"\n    try:\n        return super().raise_for_status()\n    except HTTPStatusError as exc:\n        raise PrefectHTTPStatusError.from_httpx_error(exc) from exc.__cause__\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectResponse.from_httpx_response","title":"from_httpx_response classmethod","text":"

Create a PrefectResponse from an httpx.Response.

By changing the __class__ attribute of the Response, we change the method resolution order to look for methods defined in PrefectResponse, while leaving everything else about the original Response instance intact.
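The VerboseResponse class and summary method below are hypothetical; they only illustrate the __class__-swap pattern described above on a plain httpx.Response:

```python
# Illustrative sketch of the __class__ swap: copy the instance, then point its
# class at the subclass so new methods are resolved while the original
# response data stays untouched.
import copy

import httpx


class VerboseResponse(httpx.Response):
    def summary(self) -> str:
        return f"{self.status_code} {self.reason_phrase} ({len(self.content)} bytes)"


response = httpx.Response(200, text="hello")
verbose = copy.copy(response)
verbose.__class__ = VerboseResponse  # same data, new method resolution order
print(verbose.summary())  # e.g. "200 OK (5 bytes)"
```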

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/base.py
@classmethod\ndef from_httpx_response(cls: Type[Self], response: httpx.Response) -> Self:\n\"\"\"\n    Create a `PrefectReponse` from an `httpx.Response`.\n\n    By changing the `__class__` attribute of the Response, we change the method\n    resolution order to look for methods defined in PrefectResponse, while leaving\n    everything else about the original Response instance intact.\n    \"\"\"\n    new_response = copy.copy(response)\n    new_response.__class__ = cls\n    return new_response\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.PrefectHttpxClient","title":"PrefectHttpxClient","text":"

Bases: httpx.AsyncClient

A Prefect wrapper for the async httpx client with support for retry-after headers for:

- 429 CloudFlare-style rate limiting
- 503 Service unavailable

Additionally, this client will always call raise_for_status on responses.

For more details on rate limit headers, see: Configuring Cloudflare Rate Limiting
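A minimal usage sketch (the base URL below is a placeholder): because the class is a drop-in httpx.AsyncClient, callers get the retry and raise_for_status behavior without any extra code:

```python
# Sketch only: api.example.com is hypothetical. Retryable status codes
# (429/503/502 plus any extras configured via settings) and transient
# transport errors are retried before the response is returned.
import asyncio

from prefect.client.base import PrefectHttpxClient


async def main():
    async with PrefectHttpxClient(base_url="https://api.example.com") as client:
        response = await client.get("/health")
        print(response.status_code)


asyncio.run(main())
```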

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/base.py
class PrefectHttpxClient(httpx.AsyncClient):\n\"\"\"\n    A Prefect wrapper for the async httpx client with support for retry-after headers\n    for:\n\n    - 429 CloudFlare-style rate limiting\n    - 503 Service unavailable\n\n    Additionally, this client will always call `raise_for_status` on responses.\n\n    For more details on rate limit headers, see:\n    [Configuring Cloudflare Rate Limiting](https://support.cloudflare.com/hc/en-us/articles/115001635128-Configuring-Rate-Limiting-from-UI)\n    \"\"\"\n\n    async def _send_with_retry(\n        self,\n        request: Callable,\n        retry_codes: Set[int] = set(),\n        retry_exceptions: Tuple[Exception, ...] = tuple(),\n    ):\n\"\"\"\n        Send a request and retry it if it fails.\n\n        Sends the provided request and retries it up to PREFECT_CLIENT_MAX_RETRIES times\n        if the request either raises an exception listed in `retry_exceptions` or\n        receives a response with a status code listed in `retry_codes`.\n\n        Retries will be delayed based on either the retry header (preferred) or\n        exponential backoff if a retry header is not provided.\n        \"\"\"\n        try_count = 0\n        response = None\n\n        while try_count <= PREFECT_CLIENT_MAX_RETRIES.value():\n            try_count += 1\n            retry_seconds = None\n            exc_info = None\n\n            try:\n                response = await request()\n            except retry_exceptions:  # type: ignore\n                if try_count > PREFECT_CLIENT_MAX_RETRIES.value():\n                    raise\n                # Otherwise, we will ignore this error but capture the info for logging\n                exc_info = sys.exc_info()\n            else:\n                # We got a response; return immediately if it is not retryable\n                if response.status_code not in retry_codes:\n                    return response\n\n                if \"Retry-After\" in response.headers:\n                    retry_seconds = float(response.headers[\"Retry-After\"])\n\n            # Use an exponential back-off if not set in a header\n            if retry_seconds is None:\n                retry_seconds = 2**try_count\n\n            # Add jitter\n            jitter_factor = PREFECT_CLIENT_RETRY_JITTER_FACTOR.value()\n            if retry_seconds > 0 and jitter_factor > 0:\n                if response is not None and \"Retry-After\" in response.headers:\n                    # Always wait for _at least_ retry seconds if requested by the API\n                    retry_seconds = bounded_poisson_interval(\n                        retry_seconds, retry_seconds * (1 + jitter_factor)\n                    )\n                else:\n                    # Otherwise, use a symmetrical jitter\n                    retry_seconds = clamped_poisson_interval(\n                        retry_seconds, jitter_factor\n                    )\n\n            logger.debug(\n                (\n                    \"Encountered retryable exception during request. \"\n                    if exc_info\n                    else (\n                        \"Received response with retryable status code\"\n                        f\" {response.status_code}. \"\n                    )\n                )\n                + f\"Another attempt will be made in {retry_seconds}s. 
\"\n                \"This is attempt\"\n                f\" {try_count}/{PREFECT_CLIENT_MAX_RETRIES.value() + 1}.\",\n                exc_info=exc_info,\n            )\n            await anyio.sleep(retry_seconds)\n\n        assert (\n            response is not None\n        ), \"Retry handling ended without response or exception\"\n\n        # We ran out of retries, return the failed response\n        return response\n\n    async def send(self, *args, **kwargs) -> Response:\n        api_request = partial(super().send, *args, **kwargs)\n\n        response = await self._send_with_retry(\n            request=api_request,\n            retry_codes={\n                status.HTTP_429_TOO_MANY_REQUESTS,\n                status.HTTP_503_SERVICE_UNAVAILABLE,\n                status.HTTP_502_BAD_GATEWAY,\n                *PREFECT_CLIENT_RETRY_EXTRA_CODES.value(),\n            },\n            retry_exceptions=(\n                httpx.ReadTimeout,\n                httpx.PoolTimeout,\n                httpx.ConnectTimeout,\n                # `ConnectionResetError` when reading socket raises as a `ReadError`\n                httpx.ReadError,\n                # Sockets can be closed during writes resulting in a `WriteError`\n                httpx.WriteError,\n                # Uvicorn bug, see https://github.com/PrefectHQ/prefect/issues/7512\n                httpx.RemoteProtocolError,\n                # HTTP2 bug, see https://github.com/PrefectHQ/prefect/issues/7442\n                httpx.LocalProtocolError,\n            ),\n        )\n\n        # Convert to a Prefect response to add nicer errors messages\n        response = PrefectResponse.from_httpx_response(response)\n\n        # Always raise bad responses\n        # NOTE: We may want to remove this and handle responses per route in the\n        #       `PrefectClient`\n        response.raise_for_status()\n\n        return response\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/base/#prefect.client.base.app_lifespan_context","title":"app_lifespan_context async","text":"

A context manager that calls startup/shutdown hooks for the given application.

Lifespan contexts are cached per application to avoid calling the lifespan hooks more than once if the context is entered in nested code. A no-op context will be returned if the context for the given application is already being managed.

This manager is robust to concurrent access within the event loop. For example, if you have concurrent contexts for the same application, it is guaranteed that startup hooks will be called before their context starts and shutdown hooks will only be called after their context exits.

A reference count is used to support nested use of clients without running lifespan hooks excessively. The first client context entered will create and enter a lifespan context. Each subsequent client will increment a reference count but will not create a new lifespan context. When each client context exits, the reference count is decremented. When the last client context exits, the lifespan will be closed.

In simple nested cases, the first client context will be the one to exit the lifespan. However, if client contexts are entered concurrently they may not exit in a consistent order. If the first client context were responsible for closing the lifespan, it would have to wait for all other client contexts to exit to avoid firing shutdown hooks while the application is in use. Waiting for the other clients to exit can introduce deadlocks, so, instead, the first client will exit without closing the lifespan context and reference counts will be used to ensure the lifespan is closed once all of the clients are done.
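A sketch of the reference counting under nested use, assuming a trivial FastAPI app (the startup/shutdown handlers here are stand-ins): only the outermost entry runs startup hooks and only the last exit runs shutdown hooks:

```python
# Sketch: nested contexts for the same app share one lifespan via the
# reference count, so the hooks below each print exactly once.
import asyncio

from fastapi import FastAPI

from prefect.client.base import app_lifespan_context

app = FastAPI()


@app.on_event("startup")
async def startup():
    print("startup runs once")


@app.on_event("shutdown")
async def shutdown():
    print("shutdown runs once")


async def main():
    async with app_lifespan_context(app):      # enters the lifespan
        async with app_lifespan_context(app):  # no-op: only bumps the ref count
            pass
    # shutdown hooks fire when the last context exits


asyncio.run(main())
```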

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/base.py
@asynccontextmanager\nasync def app_lifespan_context(app: FastAPI) -> ContextManager[None]:\n\"\"\"\n    A context manager that calls startup/shutdown hooks for the given application.\n\n    Lifespan contexts are cached per application to avoid calling the lifespan hooks\n    more than once if the context is entered in nested code. A no-op context will be\n    returned if the context for the given application is already being managed.\n\n    This manager is robust to concurrent access within the event loop. For example,\n    if you have concurrent contexts for the same application, it is guaranteed that\n    startup hooks will be called before their context starts and shutdown hooks will\n    only be called after their context exits.\n\n    A reference count is used to support nested use of clients without running\n    lifespan hooks excessively. The first client context entered will create and enter\n    a lifespan context. Each subsequent client will increment a reference count but will\n    not create a new lifespan context. When each client context exits, the reference\n    count is decremented. When the last client context exits, the lifespan will be\n    closed.\n\n    In simple nested cases, the first client context will be the one to exit the\n    lifespan. However, if client contexts are entered concurrently they may not exit\n    in a consistent order. If the first client context was responsible for closing\n    the lifespan, it would have to wait until all other client contexts to exit to\n    avoid firing shutdown hooks while the application is in use. Waiting for the other\n    clients to exit can introduce deadlocks, so, instead, the first client will exit\n    without closing the lifespan context and reference counts will be used to ensure\n    the lifespan is closed once all of the clients are done.\n    \"\"\"\n    # TODO: A deadlock has been observed during multithreaded use of clients while this\n    #       lifespan context is being used. This has only been reproduced on Python 3.7\n    #       and while we hope to discourage using multiple event loops in threads, this\n    #       bug may emerge again.\n    #       See https://github.com/PrefectHQ/orion/pull/1696\n    thread_id = threading.get_ident()\n\n    # The id of the application is used instead of the hash so each application instance\n    # is managed independently even if they share the same settings. 
We include the\n    # thread id since applications are managed separately per thread.\n    key = (thread_id, id(app))\n\n    # On exception, this will be populated with exception details\n    exc_info = (None, None, None)\n\n    # Get a lock unique to this thread since anyio locks are not threadsafe\n    lock = APP_LIFESPANS_LOCKS[thread_id]\n\n    async with lock:\n        if key in APP_LIFESPANS:\n            # The lifespan is already being managed, just increment the reference count\n            APP_LIFESPANS_REF_COUNTS[key] += 1\n        else:\n            # Create a new lifespan manager\n            APP_LIFESPANS[key] = context = LifespanManager(\n                app, startup_timeout=30, shutdown_timeout=30\n            )\n            APP_LIFESPANS_REF_COUNTS[key] = 1\n\n            # Ensure we enter the context before releasing the lock so startup hooks\n            # are complete before another client can be used\n            await context.__aenter__()\n\n    try:\n        yield\n    except BaseException:\n        exc_info = sys.exc_info()\n        raise\n    finally:\n        # If we do not shield against anyio cancellation, the lock will return\n        # immediately and the code in its context will not run, leaving the lifespan\n        # open\n        with anyio.CancelScope(shield=True):\n            async with lock:\n                # After the consumer exits the context, decrement the reference count\n                APP_LIFESPANS_REF_COUNTS[key] -= 1\n\n                # If this the last context to exit, close the lifespan\n                if APP_LIFESPANS_REF_COUNTS[key] <= 0:\n                    APP_LIFESPANS_REF_COUNTS.pop(key)\n                    context = APP_LIFESPANS.pop(key)\n                    await context.__aexit__(*exc_info)\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/","title":"prefect.client.cloud","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud","title":"prefect.client.cloud","text":"","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.CloudUnauthorizedError","title":"CloudUnauthorizedError","text":"

Bases: PrefectException

Raised when the CloudClient receives a 401 or 403 from the Cloud API.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/cloud.py
class CloudUnauthorizedError(PrefectException):\n\"\"\"\n    Raised when the CloudClient receives a 401 or 403 from the Cloud API.\n    \"\"\"\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.CloudClient","title":"CloudClient","text":"Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/cloud.py
class CloudClient:\n    def __init__(\n        self,\n        host: str,\n        api_key: str,\n        httpx_settings: dict = None,\n    ) -> None:\n        httpx_settings = httpx_settings or dict()\n        httpx_settings.setdefault(\"headers\", dict())\n        httpx_settings[\"headers\"].setdefault(\"Authorization\", f\"Bearer {api_key}\")\n\n        httpx_settings.setdefault(\"base_url\", host)\n        if not PREFECT_UNIT_TEST_MODE.value():\n            httpx_settings.setdefault(\"follow_redirects\", True)\n        self._client = PrefectHttpxClient(**httpx_settings)\n\n    async def api_healthcheck(self):\n\"\"\"\n        Attempts to connect to the Cloud API and raises the encountered exception if not\n        successful.\n\n        If successful, returns `None`.\n        \"\"\"\n        with anyio.fail_after(10):\n            await self.read_workspaces()\n\n    async def read_workspaces(self) -> List[Workspace]:\n        return pydantic.parse_obj_as(List[Workspace], await self.get(\"/me/workspaces\"))\n\n    async def read_worker_metadata(self) -> Dict[str, Any]:\n        return await self.get(\"collections/views/aggregate-worker-metadata\")\n\n    async def __aenter__(self):\n        await self._client.__aenter__()\n        return self\n\n    async def __aexit__(self, *exc_info):\n        return await self._client.__aexit__(*exc_info)\n\n    def __enter__(self):\n        raise RuntimeError(\n            \"The `CloudClient` must be entered with an async context. Use 'async \"\n            \"with CloudClient(...)' not 'with CloudClient(...)'\"\n        )\n\n    def __exit__(self, *_):\n        assert False, \"This should never be called but must be defined for __enter__\"\n\n    async def get(self, route, **kwargs):\n        return await self.request(\"GET\", route, **kwargs)\n\n    async def request(self, method, route, **kwargs):\n        try:\n            res = await self._client.request(method, route, **kwargs)\n            res.raise_for_status()\n        except httpx.HTTPStatusError as exc:\n            if exc.response.status_code in (\n                status.HTTP_401_UNAUTHORIZED,\n                status.HTTP_403_FORBIDDEN,\n            ):\n                raise CloudUnauthorizedError\n            else:\n                raise exc\n\n        if res.status_code == status.HTTP_204_NO_CONTENT:\n            return\n\n        return res.json()\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.CloudClient.api_healthcheck","title":"api_healthcheck async","text":"

Attempts to connect to the Cloud API and raises the encountered exception if not successful.

If successful, returns None.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/cloud.py
async def api_healthcheck(self):\n\"\"\"\n    Attempts to connect to the Cloud API and raises the encountered exception if not\n    successful.\n\n    If successful, returns `None`.\n    \"\"\"\n    with anyio.fail_after(10):\n        await self.read_workspaces()\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/cloud/#prefect.client.cloud.get_cloud_client","title":"get_cloud_client","text":"

Needs a docstring.
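In lieu of a docstring, a usage sketch (assuming PREFECT_API_KEY and the Cloud API URL are configured, or that host/api_key are passed explicitly; the printed field follows the Workspace schema):

```python
# Sketch only: the client must be used as an async context manager and raises
# CloudUnauthorizedError if the Cloud API returns a 401 or 403.
import asyncio

from prefect.client.cloud import get_cloud_client


async def main():
    async with get_cloud_client() as client:
        workspaces = await client.read_workspaces()
        for workspace in workspaces:
            print(workspace.workspace_name)  # field per the Workspace schema


asyncio.run(main())
```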

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/cloud.py
def get_cloud_client(\n    host: Optional[str] = None,\n    api_key: Optional[str] = None,\n    httpx_settings: Optional[dict] = None,\n    infer_cloud_url: bool = False,\n) -> \"CloudClient\":\n\"\"\"\n    Needs a docstring.\n    \"\"\"\n    if httpx_settings is not None:\n        httpx_settings = httpx_settings.copy()\n\n    if infer_cloud_url is False:\n        host = host or PREFECT_CLOUD_API_URL.value()\n    else:\n        configured_url = prefect.settings.PREFECT_API_URL.value()\n        host = re.sub(r\"accounts/.{36}/workspaces/.{36}\\Z\", \"\", configured_url)\n\n    return CloudClient(\n        host=host,\n        api_key=api_key or PREFECT_API_KEY.value(),\n        httpx_settings=httpx_settings,\n    )\n
","tags":["Python API"]},{"location":"api-ref/prefect/client/orchestration/","title":"prefect.client.orchestration","text":"

Asynchronous client implementation for communicating with the Prefect REST API.

Explore the client by communicating with an in-memory webserver \u2014 no setup required:

$ # start python REPL with native await functionality\n$ python -m asyncio\n>>> from prefect import get_client\n>>> async with get_client() as client:\n...     response = await client.hello()\n...     print(response.json())\n\ud83d\udc4b\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration","title":"prefect.client.orchestration","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient","title":"PrefectClient","text":"

An asynchronous client for interacting with the Prefect REST API.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `api` | `Union[str, FastAPI]` | the REST API URL or FastAPI application to connect to | required |
| `api_key` | `str` | An optional API key for authentication. | `None` |
| `api_version` | `str` | The API version this client is compatible with. | `None` |
| `httpx_settings` | `dict` | An optional dictionary of settings to pass to the underlying `httpx.AsyncClient` | `None` |

Examples:

Say hello to a Prefect REST API

>>> async with get_client() as client:\n>>>     response = await client.hello()\n>>>\n>>> print(response.json())\n\ud83d\udc4b\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
class PrefectClient:\n\"\"\"\n    An asynchronous client for interacting with the [Prefect REST API](/api-ref/rest-api/).\n\n    Args:\n        api: the REST API URL or FastAPI application to connect to\n        api_key: An optional API key for authentication.\n        api_version: The API version this client is compatible with.\n        httpx_settings: An optional dictionary of settings to pass to the underlying\n            `httpx.AsyncClient`\n\n    Examples:\n\n        Say hello to a Prefect REST API\n\n        <div class=\"terminal\">\n        ```\n        >>> async with get_client() as client:\n        >>>     response = await client.hello()\n        >>>\n        >>> print(response.json())\n        \ud83d\udc4b\n        ```\n        </div>\n    \"\"\"\n\n    def __init__(\n        self,\n        api: Union[str, FastAPI],\n        *,\n        api_key: str = None,\n        api_version: str = None,\n        httpx_settings: dict = None,\n    ) -> None:\n        httpx_settings = httpx_settings.copy() if httpx_settings else {}\n        httpx_settings.setdefault(\"headers\", {})\n\n        if PREFECT_API_TLS_INSECURE_SKIP_VERIFY:\n            httpx_settings.setdefault(\"verify\", False)\n\n        if api_version is None:\n            # deferred import to avoid importing the entire server unless needed\n            from prefect.server.api.server import SERVER_API_VERSION\n\n            api_version = SERVER_API_VERSION\n        httpx_settings[\"headers\"].setdefault(\"X-PREFECT-API-VERSION\", api_version)\n        if api_key:\n            httpx_settings[\"headers\"].setdefault(\"Authorization\", f\"Bearer {api_key}\")\n\n        # Context management\n        self._exit_stack = AsyncExitStack()\n        self._ephemeral_app: Optional[FastAPI] = None\n        self.manage_lifespan = True\n        self.server_type: ServerType\n\n        # Only set if this client started the lifespan of the application\n        self._ephemeral_lifespan: Optional[LifespanManager] = None\n\n        self._closed = False\n        self._started = False\n\n        # Connect to an external application\n        if isinstance(api, str):\n            if httpx_settings.get(\"app\"):\n                raise ValueError(\n                    \"Invalid httpx settings: `app` cannot be set when providing an \"\n                    \"api url. `app` is only for use with ephemeral instances. Provide \"\n                    \"it as the `api` parameter instead.\"\n                )\n            httpx_settings.setdefault(\"base_url\", api)\n\n            # See https://www.python-httpx.org/advanced/#pool-limit-configuration\n            httpx_settings.setdefault(\n                \"limits\",\n                httpx.Limits(\n                    # We see instability when allowing the client to open many connections at once.\n                    # Limiting concurrency results in more stable performance.\n                    max_connections=16,\n                    max_keepalive_connections=8,\n                    # The Prefect Cloud LB will keep connections alive for 30s.\n                    # Only allow the client to keep connections alive for 25s.\n                    keepalive_expiry=25,\n                ),\n            )\n\n            # See https://www.python-httpx.org/http2/\n            # Enabling HTTP/2 support on the client does not necessarily mean that your requests\n            # and responses will be transported over HTTP/2, since both the client and the server\n            # need to support HTTP/2. 
If you connect to a server that only supports HTTP/1.1 the\n            # client will use a standard HTTP/1.1 connection instead.\n            httpx_settings.setdefault(\"http2\", PREFECT_API_ENABLE_HTTP2.value())\n\n            self.server_type = (\n                ServerType.CLOUD\n                if api.startswith(PREFECT_CLOUD_API_URL.value())\n                else ServerType.SERVER\n            )\n\n        # Connect to an in-process application\n        elif isinstance(api, FastAPI):\n            self._ephemeral_app = api\n            self.server_type = ServerType.EPHEMERAL\n\n            # When using an ephemeral server, server-side exceptions can be raised\n            # client-side breaking all of our response error code handling. To work\n            # around this, we create an ASGI transport with application exceptions\n            # disabled instead of using the application directly.\n            # refs:\n            # - https://github.com/PrefectHQ/prefect/pull/9637\n            # - https://github.com/encode/starlette/blob/d3a11205ed35f8e5a58a711db0ff59c86fa7bb31/starlette/middleware/errors.py#L184\n            # - https://github.com/tiangolo/fastapi/blob/8cc967a7605d3883bd04ceb5d25cc94ae079612f/fastapi/applications.py#L163-L164\n            httpx_settings.setdefault(\n                \"transport\",\n                httpx.ASGITransport(\n                    app=self._ephemeral_app, raise_app_exceptions=False\n                ),\n            )\n            httpx_settings.setdefault(\"base_url\", \"http://ephemeral-prefect/api\")\n\n        else:\n            raise TypeError(\n                f\"Unexpected type {type(api).__name__!r} for argument `api`. Expected\"\n                \" 'str' or 'FastAPI'\"\n            )\n\n        # See https://www.python-httpx.org/advanced/#timeout-configuration\n        httpx_settings.setdefault(\n            \"timeout\",\n            httpx.Timeout(\n                connect=PREFECT_API_REQUEST_TIMEOUT.value(),\n                read=PREFECT_API_REQUEST_TIMEOUT.value(),\n                write=PREFECT_API_REQUEST_TIMEOUT.value(),\n                pool=PREFECT_API_REQUEST_TIMEOUT.value(),\n            ),\n        )\n\n        if not PREFECT_UNIT_TEST_MODE:\n            httpx_settings.setdefault(\"follow_redirects\", True)\n        self._client = PrefectHttpxClient(**httpx_settings)\n        self._loop = None\n\n        # See https://www.python-httpx.org/advanced/#custom-transports\n        #\n        # If we're using an HTTP/S client (not the ephemeral client), adjust the\n        # transport to add retries _after_ it is instantiated. 
If we alter the transport\n        # before instantiation, the transport will not be aware of proxies unless we\n        # reproduce all of the logic to make it so.\n        #\n        # Only alter the transport to set our default of 3 retries, don't modify any\n        # transport a user may have provided via httpx_settings.\n        #\n        # Making liberal use of getattr and isinstance checks here to avoid any\n        # surprises if the internals of httpx or httpcore change on us\n        if isinstance(api, str) and not httpx_settings.get(\"transport\"):\n            transport_for_url = getattr(self._client, \"_transport_for_url\", None)\n            if callable(transport_for_url):\n                server_transport = transport_for_url(httpx.URL(api))\n                if isinstance(server_transport, httpx.AsyncHTTPTransport):\n                    pool = getattr(server_transport, \"_pool\", None)\n                    if isinstance(pool, httpcore.AsyncConnectionPool):\n                        pool._retries = 3\n\n        self.logger = get_logger(\"client\")\n\n    @property\n    def api_url(self) -> httpx.URL:\n\"\"\"\n        Get the base URL for the API.\n        \"\"\"\n        return self._client.base_url\n\n    # API methods ----------------------------------------------------------------------\n\n    async def api_healthcheck(self) -> Optional[Exception]:\n\"\"\"\n        Attempts to connect to the API and returns the encountered exception if not\n        successful.\n\n        If successful, returns `None`.\n        \"\"\"\n        try:\n            await self._client.get(\"/health\")\n            return None\n        except Exception as exc:\n            return exc\n\n    async def hello(self) -> httpx.Response:\n\"\"\"\n        Send a GET request to /hello for testing purposes.\n        \"\"\"\n        return await self._client.get(\"/hello\")\n\n    async def create_flow(self, flow: \"FlowObject\") -> UUID:\n\"\"\"\n        Create a flow in the Prefect API.\n\n        Args:\n            flow: a [Flow][prefect.flows.Flow] object\n\n        Raises:\n            httpx.RequestError: if a flow was not created for any reason\n\n        Returns:\n            the ID of the flow in the backend\n        \"\"\"\n        return await self.create_flow_from_name(flow.name)\n\n    async def create_flow_from_name(self, flow_name: str) -> UUID:\n\"\"\"\n        Create a flow in the Prefect API.\n\n        Args:\n            flow_name: the name of the new flow\n\n        Raises:\n            httpx.RequestError: if a flow was not created for any reason\n\n        Returns:\n            the ID of the flow in the backend\n        \"\"\"\n        flow_data = FlowCreate(name=flow_name)\n        response = await self._client.post(\n            \"/flows/\", json=flow_data.dict(json_compatible=True)\n        )\n\n        flow_id = response.json().get(\"id\")\n        if not flow_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        # Return the id of the created flow\n        return UUID(flow_id)\n\n    async def read_flow(self, flow_id: UUID) -> Flow:\n\"\"\"\n        Query the Prefect API for a flow by id.\n\n        Args:\n            flow_id: the flow ID of interest\n\n        Returns:\n            a [Flow model][prefect.client.schemas.objects.Flow] representation of the flow\n        \"\"\"\n        response = await self._client.get(f\"/flows/{flow_id}\")\n        return Flow.parse_obj(response.json())\n\n    async def read_flows(\n        self,\n        *,\n    
    flow_filter: FlowFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        deployment_filter: DeploymentFilter = None,\n        work_pool_filter: WorkPoolFilter = None,\n        work_queue_filter: WorkQueueFilter = None,\n        sort: FlowSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[Flow]:\n\"\"\"\n        Query the Prefect API for flows. Only flows matching all criteria will\n        be returned.\n\n        Args:\n            flow_filter: filter criteria for flows\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            deployment_filter: filter criteria for deployments\n            work_pool_filter: filter criteria for work pools\n            work_queue_filter: filter criteria for work pool queues\n            sort: sort criteria for the flows\n            limit: limit for the flow query\n            offset: offset for the flow query\n\n        Returns:\n            a list of Flow model representations of the flows\n        \"\"\"\n        body = {\n            \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n                if flow_run_filter\n                else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"deployments\": (\n                deployment_filter.dict(json_compatible=True)\n                if deployment_filter\n                else None\n            ),\n            \"work_pools\": (\n                work_pool_filter.dict(json_compatible=True)\n                if work_pool_filter\n                else None\n            ),\n            \"work_queues\": (\n                work_queue_filter.dict(json_compatible=True)\n                if work_queue_filter\n                else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n\n        response = await self._client.post(\"/flows/filter\", json=body)\n        return pydantic.parse_obj_as(List[Flow], response.json())\n\n    async def read_flow_by_name(\n        self,\n        flow_name: str,\n    ) -> Flow:\n\"\"\"\n        Query the Prefect API for a flow by name.\n\n        Args:\n            flow_name: the name of a flow\n\n        Returns:\n            a fully hydrated Flow model\n        \"\"\"\n        response = await self._client.get(f\"/flows/name/{flow_name}\")\n        return Flow.parse_obj(response.json())\n\n    async def create_flow_run_from_deployment(\n        self,\n        deployment_id: UUID,\n        *,\n        parameters: Dict[str, Any] = None,\n        context: dict = None,\n        state: prefect.states.State = None,\n        name: str = None,\n        tags: Iterable[str] = None,\n        idempotency_key: str = None,\n        parent_task_run_id: UUID = None,\n        work_queue_name: str = None,\n    ) -> FlowRun:\n\"\"\"\n        Create a flow run for a deployment.\n\n        Args:\n            deployment_id: The deployment ID to create the flow run from\n            parameters: Parameter overrides for this flow run. Merged with the\n                deployment defaults\n            context: Optional run context data\n            state: The initial state for the run. 
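# Usage sketch: querying flows with a filter. `FlowFilter`/`FlowFilterName` are assumed
# to be importable from `prefect.client.schemas.filters` (the path can vary between
# releases); the flow name "example-flow" is illustrative.
import asyncio

from prefect import get_client
from prefect.client.schemas.filters import FlowFilter, FlowFilterName


async def list_flows():
    async with get_client() as client:
        flows = await client.read_flows(
            flow_filter=FlowFilter(name=FlowFilterName(any_=["example-flow"])),
            limit=5,
        )
        for f in flows:
            print(f.id, f.name)

        # A single flow can also be fetched directly by name.
        flow = await client.read_flow_by_name("example-flow")
        print(flow.id)


asyncio.run(list_flows())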
If not provided, defaults to\n                `Scheduled` for now. Should always be a `Scheduled` type.\n            name: An optional name for the flow run. If not provided, the server will\n                generate a name.\n            tags: An optional iterable of tags to apply to the flow run; these tags\n                are merged with the deployment's tags.\n            idempotency_key: Optional idempotency key for creation of the flow run.\n                If the key matches the key of an existing flow run, the existing run will\n                be returned instead of creating a new one.\n            parent_task_run_id: if a subflow run is being created, the placeholder task\n                run identifier in the parent flow\n            work_queue_name: An optional work queue name to add this run to. If not provided,\n                will default to the deployment's set work queue.  If one is provided that does not\n                exist, a new work queue will be created within the deployment's work pool.\n\n        Raises:\n            httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n        Returns:\n            The flow run model\n        \"\"\"\n        parameters = parameters or {}\n        context = context or {}\n        state = state or prefect.states.Scheduled()\n        tags = tags or []\n\n        flow_run_create = DeploymentFlowRunCreate(\n            parameters=parameters,\n            context=context,\n            state=state.to_state_create(),\n            tags=tags,\n            name=name,\n            idempotency_key=idempotency_key,\n            parent_task_run_id=parent_task_run_id,\n        )\n\n        # done separately to avoid including this field in payloads sent to older API versions\n        if work_queue_name:\n            flow_run_create.work_queue_name = work_queue_name\n\n        response = await self._client.post(\n            f\"/deployments/{deployment_id}/create_flow_run\",\n            json=flow_run_create.dict(json_compatible=True, exclude_unset=True),\n        )\n        return FlowRun.parse_obj(response.json())\n\n    async def create_flow_run(\n        self,\n        flow: \"FlowObject\",\n        name: str = None,\n        parameters: Dict[str, Any] = None,\n        context: dict = None,\n        tags: Iterable[str] = None,\n        parent_task_run_id: UUID = None,\n        state: \"prefect.states.State\" = None,\n    ) -> FlowRun:\n\"\"\"\n        Create a flow run for a flow.\n\n        Args:\n            flow: The flow model to create the flow run for\n            name: An optional name for the flow run\n            parameters: Parameter overrides for this flow run.\n            context: Optional run context data\n            tags: a list of tags to apply to this flow run\n            parent_task_run_id: if a subflow run is being created, the placeholder task\n                run identifier in the parent flow\n            state: The initial state for the run. If not provided, defaults to\n                `Scheduled` for now. 
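# Usage sketch: triggering runs of an existing deployment. The deployment name
# "my-flow/my-deployment" and the parameter overrides are illustrative placeholders.
import asyncio
from datetime import timedelta

import pendulum

from prefect import get_client
from prefect.states import Scheduled


async def trigger_runs():
    async with get_client() as client:
        deployment = await client.read_deployment_by_name("my-flow/my-deployment")

        # Run as soon as possible: the state defaults to Scheduled (now).
        run_now = await client.create_flow_run_from_deployment(
            deployment.id, parameters={"example_param": 42}
        )

        # Or schedule a run for an hour from now.
        run_later = await client.create_flow_run_from_deployment(
            deployment.id,
            state=Scheduled(scheduled_time=pendulum.now("UTC") + timedelta(hours=1)),
        )
        print(run_now.id, run_later.id)


asyncio.run(trigger_runs())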
Should always be a `Scheduled` type.\n\n        Raises:\n            httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n        Returns:\n            The flow run model\n        \"\"\"\n        parameters = parameters or {}\n        context = context or {}\n\n        if state is None:\n            state = prefect.states.Pending()\n\n        # Retrieve the flow id\n        flow_id = await self.create_flow(flow)\n\n        flow_run_create = FlowRunCreate(\n            flow_id=flow_id,\n            flow_version=flow.version,\n            name=name,\n            parameters=parameters,\n            context=context,\n            tags=list(tags or []),\n            parent_task_run_id=parent_task_run_id,\n            state=state.to_state_create(),\n            empirical_policy=FlowRunPolicy(\n                retries=flow.retries,\n                retry_delay=flow.retry_delay_seconds,\n            ),\n        )\n\n        flow_run_create_json = flow_run_create.dict(json_compatible=True)\n        response = await self._client.post(\"/flow_runs/\", json=flow_run_create_json)\n        flow_run = FlowRun.parse_obj(response.json())\n\n        # Restore the parameters to the local objects to retain expectations about\n        # Python objects\n        flow_run.parameters = parameters\n\n        return flow_run\n\n    async def update_flow_run(\n        self,\n        flow_run_id: UUID,\n        flow_version: Optional[str] = None,\n        parameters: Optional[dict] = None,\n        name: Optional[str] = None,\n        tags: Optional[Iterable[str]] = None,\n        empirical_policy: Optional[FlowRunPolicy] = None,\n        infrastructure_pid: Optional[str] = None,\n    ) -> httpx.Response:\n\"\"\"\n        Update a flow run's details.\n\n        Args:\n            flow_run_id: The identifier for the flow run to update.\n            flow_version: A new version string for the flow run.\n            parameters: A dictionary of parameter values for the flow run. This will not\n                be merged with any existing parameters.\n            name: A new name for the flow run.\n            empirical_policy: A new flow run orchestration policy. This will not be\n                merged with any existing policy.\n            tags: An iterable of new tags for the flow run. 
These will not be merged with\n                any existing tags.\n            infrastructure_pid: The id of the flow run as returned by an\n                infrastructure block.\n\n        Returns:\n            an `httpx.Response` object from the PATCH request\n        \"\"\"\n        params = {}\n        if flow_version is not None:\n            params[\"flow_version\"] = flow_version\n        if parameters is not None:\n            params[\"parameters\"] = parameters\n        if name is not None:\n            params[\"name\"] = name\n        if tags is not None:\n            params[\"tags\"] = tags\n        if empirical_policy is not None:\n            params[\"empirical_policy\"] = empirical_policy\n        if infrastructure_pid:\n            params[\"infrastructure_pid\"] = infrastructure_pid\n\n        flow_run_data = FlowRunUpdate(**params)\n\n        return await self._client.patch(\n            f\"/flow_runs/{flow_run_id}\",\n            json=flow_run_data.dict(json_compatible=True, exclude_unset=True),\n        )\n\n    async def delete_flow_run(\n        self,\n        flow_run_id: UUID,\n    ) -> None:\n\"\"\"\n        Delete a flow run by UUID.\n\n        Args:\n            flow_run_id: The flow run UUID of interest.\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If the request fails\n        \"\"\"\n        try:\n            await self._client.delete(f\"/flow_runs/{flow_run_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def create_concurrency_limit(\n        self,\n        tag: str,\n        concurrency_limit: int,\n    ) -> UUID:\n\"\"\"\n        Create a tag concurrency limit in the Prefect API. 
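# Usage sketch: renaming and then deleting a flow run. The UUID below is a placeholder;
# `ObjectNotFound` is raised when the run no longer exists.
import asyncio
from uuid import UUID

from prefect import get_client
from prefect.exceptions import ObjectNotFound

flow_run_id = UUID("00000000-0000-0000-0000-000000000000")  # placeholder


async def rename_then_delete():
    async with get_client() as client:
        await client.update_flow_run(flow_run_id, name="renamed-run", tags=["cleanup"])
        try:
            await client.delete_flow_run(flow_run_id)
        except ObjectNotFound:
            print("flow run already deleted")


asyncio.run(rename_then_delete())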
These limits govern concurrently\n        running tasks.\n\n        Args:\n            tag: a tag the concurrency limit is applied to\n            concurrency_limit: the maximum number of concurrent task runs for a given tag\n\n        Raises:\n            httpx.RequestError: if the concurrency limit was not created for any reason\n\n        Returns:\n            the ID of the concurrency limit in the backend\n        \"\"\"\n\n        concurrency_limit_create = ConcurrencyLimitCreate(\n            tag=tag,\n            concurrency_limit=concurrency_limit,\n        )\n        response = await self._client.post(\n            \"/concurrency_limits/\",\n            json=concurrency_limit_create.dict(json_compatible=True),\n        )\n\n        concurrency_limit_id = response.json().get(\"id\")\n\n        if not concurrency_limit_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        return UUID(concurrency_limit_id)\n\n    async def read_concurrency_limit_by_tag(\n        self,\n        tag: str,\n    ):\n\"\"\"\n        Read the concurrency limit set on a specific tag.\n\n        Args:\n            tag: a tag the concurrency limit is applied to\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: if the concurrency limit was not created for any reason\n\n        Returns:\n            the concurrency limit set on a specific tag\n        \"\"\"\n        try:\n            response = await self._client.get(\n                f\"/concurrency_limits/tag/{tag}\",\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n        concurrency_limit_id = response.json().get(\"id\")\n\n        if not concurrency_limit_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        concurrency_limit = ConcurrencyLimit.parse_obj(response.json())\n        return concurrency_limit\n\n    async def read_concurrency_limits(\n        self,\n        limit: int,\n        offset: int,\n    ):\n\"\"\"\n        Lists concurrency limits set on task run tags.\n\n        Args:\n            limit: the maximum number of concurrency limits returned\n            offset: the concurrency limit query offset\n\n        Returns:\n            a list of concurrency limits\n        \"\"\"\n\n        body = {\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n\n        response = await self._client.post(\"/concurrency_limits/filter\", json=body)\n        return pydantic.parse_obj_as(List[ConcurrencyLimit], response.json())\n\n    async def reset_concurrency_limit_by_tag(\n        self,\n        tag: str,\n        slot_override: Optional[List[Union[UUID, str]]] = None,\n    ):\n\"\"\"\n        Resets the concurrency limit slots set on a specific tag.\n\n        Args:\n            tag: a tag the concurrency limit is applied to\n            slot_override: a list of task run IDs that are currently using a\n                concurrency slot, please check that any task run IDs included in\n                `slot_override` are currently running, otherwise those concurrency\n                slots will never be released.\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        \"\"\"\n        if slot_override 
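# Usage sketch: capping concurrent task runs for a tag and inspecting the limit.
# The tag "database" and the limit of 10 are illustrative.
import asyncio

from prefect import get_client


async def manage_limits():
    async with get_client() as client:
        limit_id = await client.create_concurrency_limit(
            tag="database", concurrency_limit=10
        )
        limit = await client.read_concurrency_limit_by_tag("database")
        print(limit_id, limit.concurrency_limit)

        # Page through every configured limit.
        all_limits = await client.read_concurrency_limits(limit=100, offset=0)
        print(len(all_limits))


asyncio.run(manage_limits())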
is not None:\n            slot_override = [str(slot) for slot in slot_override]\n\n        try:\n            await self._client.post(\n                f\"/concurrency_limits/tag/{tag}/reset\",\n                json=dict(slot_override=slot_override),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def delete_concurrency_limit_by_tag(\n        self,\n        tag: str,\n    ):\n\"\"\"\n        Delete the concurrency limit set on a specific tag.\n\n        Args:\n            tag: a tag the concurrency limit is applied to\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        \"\"\"\n        try:\n            await self._client.delete(\n                f\"/concurrency_limits/tag/{tag}\",\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def create_work_queue(\n        self,\n        name: str,\n        tags: Optional[List[str]] = None,\n        description: Optional[str] = None,\n        is_paused: Optional[bool] = None,\n        concurrency_limit: Optional[int] = None,\n        priority: Optional[int] = None,\n        work_pool_name: Optional[str] = None,\n    ) -> WorkQueue:\n\"\"\"\n        Create a work queue.\n\n        Args:\n            name: a unique name for the work queue\n            tags: DEPRECATED: an optional list of tags to filter on; only work scheduled with these tags\n                will be included in the queue. This option will be removed on 2023-02-23.\n            description: An optional description for the work queue.\n            is_paused: Whether or not the work queue is paused.\n            concurrency_limit: An optional concurrency limit for the work queue.\n            priority: The queue's priority. 
Lower values are higher priority (1 is the highest).\n            work_pool_name: The name of the work pool to use for this queue.\n\n        Raises:\n            prefect.exceptions.ObjectAlreadyExists: If request returns 409\n            httpx.RequestError: If request fails\n\n        Returns:\n            The created work queue\n        \"\"\"\n        if tags:\n            warnings.warn(\n                (\n                    \"The use of tags for creating work queue filters is deprecated.\"\n                    \" This option will be removed on 2023-02-23.\"\n                ),\n                DeprecationWarning,\n            )\n            filter = QueueFilter(tags=tags)\n        else:\n            filter = None\n        create_model = WorkQueueCreate(name=name, filter=filter)\n        if description is not None:\n            create_model.description = description\n        if is_paused is not None:\n            create_model.is_paused = is_paused\n        if concurrency_limit is not None:\n            create_model.concurrency_limit = concurrency_limit\n        if priority is not None:\n            create_model.priority = priority\n\n        data = create_model.dict(json_compatible=True)\n        try:\n            if work_pool_name is not None:\n                response = await self._client.post(\n                    f\"/work_pools/{work_pool_name}/queues\", json=data\n                )\n            else:\n                response = await self._client.post(\"/work_queues/\", json=data)\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            elif e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return WorkQueue.parse_obj(response.json())\n\n    async def read_work_queue_by_name(\n        self,\n        name: str,\n        work_pool_name: Optional[str] = None,\n    ) -> WorkQueue:\n\"\"\"\n        Read a work queue by name.\n\n        Args:\n            name (str): a unique name for the work queue\n            work_pool_name (str, optional): the name of the work pool\n                the queue belongs to.\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: if no work queue is found\n            httpx.HTTPStatusError: other status errors\n\n        Returns:\n            WorkQueue: a work queue API object\n        \"\"\"\n        try:\n            if work_pool_name is not None:\n                response = await self._client.get(\n                    f\"/work_pools/{work_pool_name}/queues/{name}\"\n                )\n            else:\n                response = await self._client.get(f\"/work_queues/name/{name}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n        return WorkQueue.parse_obj(response.json())\n\n    async def update_work_queue(self, id: UUID, **kwargs):\n\"\"\"\n        Update properties of a work queue.\n\n        Args:\n            id: the ID of the work queue to update\n            **kwargs: the fields to update\n\n        Raises:\n            ValueError: if no kwargs are provided\n            prefect.exceptions.ObjectNotFound: if request returns 404\n            httpx.RequestError: if the 
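# Usage sketch: creating a work queue inside an existing work pool, falling back to a
# read if it already exists. The pool name "my-pool" and queue name "high-priority"
# are illustrative.
import asyncio

from prefect import get_client
from prefect.exceptions import ObjectAlreadyExists


async def ensure_queue():
    async with get_client() as client:
        try:
            queue = await client.create_work_queue(
                name="high-priority",
                work_pool_name="my-pool",
                concurrency_limit=5,
            )
        except ObjectAlreadyExists:
            queue = await client.read_work_queue_by_name(
                "high-priority", work_pool_name="my-pool"
            )
        print(queue.id, queue.name)


asyncio.run(ensure_queue())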
request fails\n\n        \"\"\"\n        if not kwargs:\n            raise ValueError(\"No fields provided to update.\")\n\n        data = WorkQueueUpdate(**kwargs).dict(json_compatible=True, exclude_unset=True)\n        try:\n            await self._client.patch(f\"/work_queues/{id}\", json=data)\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def get_runs_in_work_queue(\n        self,\n        id: UUID,\n        limit: int = 10,\n        scheduled_before: datetime.datetime = None,\n    ) -> List[FlowRun]:\n\"\"\"\n        Read flow runs off a work queue.\n\n        Args:\n            id: the id of the work queue to read from\n            limit: a limit on the number of runs to return\n            scheduled_before: a timestamp; only runs scheduled before this time will be returned.\n                Defaults to now.\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        Returns:\n            List[FlowRun]: a list of FlowRun objects read from the queue\n        \"\"\"\n        if scheduled_before is None:\n            scheduled_before = pendulum.now(\"UTC\")\n\n        try:\n            response = await self._client.post(\n                f\"/work_queues/{id}/get_runs\",\n                json={\n                    \"limit\": limit,\n                    \"scheduled_before\": scheduled_before.isoformat(),\n                },\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return pydantic.parse_obj_as(List[FlowRun], response.json())\n\n    async def read_work_queue(\n        self,\n        id: UUID,\n    ) -> WorkQueue:\n\"\"\"\n        Read a work queue.\n\n        Args:\n            id: the id of the work queue to load\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        Returns:\n            WorkQueue: an instantiated WorkQueue object\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/work_queues/{id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return WorkQueue.parse_obj(response.json())\n\n    async def match_work_queues(\n        self,\n        prefixes: List[str],\n    ) -> List[WorkQueue]:\n\"\"\"\n        Query the Prefect API for work queues with names with a specific prefix.\n\n        Args:\n            prefixes: a list of strings used to match work queue name prefixes\n\n        Returns:\n            a list of WorkQueue model representations\n                of the work queues\n        \"\"\"\n        page_length = 100\n        current_page = 0\n        work_queues = []\n\n        while True:\n            new_queues = await self.read_work_queues(\n                offset=current_page * page_length,\n                limit=page_length,\n                work_queue_filter=WorkQueueFilter(\n                    name=WorkQueueFilterName(startswith_=prefixes)\n                
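# Usage sketch: peeking at runs scheduled on a queue and pausing it. This polls the
# same endpoint agents use, so the returned runs are whatever is currently eligible;
# the queue name "high-priority" is illustrative.
import asyncio

from prefect import get_client


async def inspect_queue():
    async with get_client() as client:
        queue = await client.read_work_queue_by_name("high-priority")

        # Up to 10 runs scheduled before now (the default cutoff).
        runs = await client.get_runs_in_work_queue(queue.id, limit=10)
        print([run.name for run in runs])

        # Pause the queue so no further runs are handed out.
        await client.update_work_queue(queue.id, is_paused=True)


asyncio.run(inspect_queue())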
),\n            )\n            if not new_queues:\n                break\n            work_queues += new_queues\n            current_page += 1\n\n        return work_queues\n\n    async def delete_work_queue_by_id(\n        self,\n        id: UUID,\n    ):\n\"\"\"\n        Delete a work queue by its ID.\n\n        Args:\n            id: the id of the work queue to delete\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If requests fails\n        \"\"\"\n        try:\n            await self._client.delete(\n                f\"/work_queues/{id}\",\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def create_block_type(self, block_type: BlockTypeCreate) -> BlockType:\n\"\"\"\n        Create a block type in the Prefect API.\n        \"\"\"\n        try:\n            response = await self._client.post(\n                \"/block_types/\",\n                json=block_type.dict(\n                    json_compatible=True, exclude_unset=True, exclude={\"id\"}\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            else:\n                raise\n        return BlockType.parse_obj(response.json())\n\n    async def create_block_schema(self, block_schema: BlockSchemaCreate) -> BlockSchema:\n\"\"\"\n        Create a block schema in the Prefect API.\n        \"\"\"\n        try:\n            response = await self._client.post(\n                \"/block_schemas/\",\n                json=block_schema.dict(\n                    json_compatible=True,\n                    exclude_unset=True,\n                    exclude={\"id\", \"block_type\", \"checksum\"},\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            else:\n                raise\n        return BlockSchema.parse_obj(response.json())\n\n    async def create_block_document(\n        self,\n        block_document: Union[BlockDocument, BlockDocumentCreate],\n        include_secrets: bool = True,\n    ) -> BlockDocument:\n\"\"\"\n        Create a block document in the Prefect API. This data is used to configure a\n        corresponding Block.\n\n        Args:\n            include_secrets (bool): whether to include secret values\n                on the stored Block, corresponding to Pydantic's `SecretStr` and\n                `SecretBytes` fields. 
Note Blocks may not work as expected if\n                this is set to `False`.\n        \"\"\"\n        if isinstance(block_document, BlockDocument):\n            block_document = BlockDocumentCreate.parse_obj(\n                block_document.dict(\n                    json_compatible=True,\n                    include_secrets=include_secrets,\n                    exclude_unset=True,\n                    exclude={\"id\", \"block_schema\", \"block_type\"},\n                ),\n            )\n\n        try:\n            response = await self._client.post(\n                \"/block_documents/\",\n                json=block_document.dict(\n                    json_compatible=True,\n                    include_secrets=include_secrets,\n                    exclude_unset=True,\n                    exclude={\"id\", \"block_schema\", \"block_type\"},\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            else:\n                raise\n        return BlockDocument.parse_obj(response.json())\n\n    async def update_block_document(\n        self,\n        block_document_id: UUID,\n        block_document: BlockDocumentUpdate,\n    ):\n\"\"\"\n        Update a block document in the Prefect API.\n        \"\"\"\n        try:\n            await self._client.patch(\n                f\"/block_documents/{block_document_id}\",\n                json=block_document.dict(\n                    json_compatible=True,\n                    exclude_unset=True,\n                    include={\"data\", \"merge_existing_data\", \"block_schema_id\"},\n                    include_secrets=True,\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def delete_block_document(self, block_document_id: UUID):\n\"\"\"\n        Delete a block document.\n        \"\"\"\n        try:\n            await self._client.delete(f\"/block_documents/{block_document_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_block_type_by_slug(self, slug: str) -> BlockType:\n\"\"\"\n        Read a block type by its slug.\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/block_types/slug/{slug}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return BlockType.parse_obj(response.json())\n\n    async def read_block_schema_by_checksum(\n        self, checksum: str, version: Optional[str] = None\n    ) -> BlockSchema:\n\"\"\"\n        Look up a block schema checksum\n        \"\"\"\n        try:\n            url = f\"/block_schemas/checksum/{checksum}\"\n            if version is not None:\n                url = f\"{url}?version={version}\"\n            response = await self._client.get(url)\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise 
prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return BlockSchema.parse_obj(response.json())\n\n    async def update_block_type(self, block_type_id: UUID, block_type: BlockTypeUpdate):\n\"\"\"\n        Update a block document in the Prefect API.\n        \"\"\"\n        try:\n            await self._client.patch(\n                f\"/block_types/{block_type_id}\",\n                json=block_type.dict(\n                    json_compatible=True,\n                    exclude_unset=True,\n                    include=BlockTypeUpdate.updatable_fields(),\n                    include_secrets=True,\n                ),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def delete_block_type(self, block_type_id: UUID):\n\"\"\"\n        Delete a block type.\n        \"\"\"\n        try:\n            await self._client.delete(f\"/block_types/{block_type_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            elif (\n                e.response.status_code == status.HTTP_403_FORBIDDEN\n                and e.response.json()[\"detail\"]\n                == \"protected block types cannot be deleted.\"\n            ):\n                raise prefect.exceptions.ProtectedBlockError(\n                    \"Protected block types cannot be deleted.\"\n                ) from e\n            else:\n                raise\n\n    async def read_block_types(self) -> List[BlockType]:\n\"\"\"\n        Read all block types\n        Raises:\n            httpx.RequestError: if the block types were not found\n\n        Returns:\n            List of BlockTypes.\n        \"\"\"\n        response = await self._client.post(\"/block_types/filter\", json={})\n        return pydantic.parse_obj_as(List[BlockType], response.json())\n\n    async def read_block_schemas(self) -> List[BlockSchema]:\n\"\"\"\n        Read all block schemas\n        Raises:\n            httpx.RequestError: if a valid block schema was not found\n\n        Returns:\n            A BlockSchema.\n        \"\"\"\n        response = await self._client.post(\"/block_schemas/filter\", json={})\n        return pydantic.parse_obj_as(List[BlockSchema], response.json())\n\n    async def read_block_document(\n        self,\n        block_document_id: UUID,\n        include_secrets: bool = True,\n    ):\n\"\"\"\n        Read the block document with the specified ID.\n\n        Args:\n            block_document_id: the block document id\n            include_secrets (bool): whether to include secret values\n                on the Block, corresponding to Pydantic's `SecretStr` and\n                `SecretBytes` fields. These fields are automatically obfuscated\n                by Pydantic, but users can additionally choose not to receive\n                their values from the API. 
Note that any business logic on the\n                Block may not work if this is `False`.\n\n        Raises:\n            httpx.RequestError: if the block document was not found for any reason\n\n        Returns:\n            A block document or None.\n        \"\"\"\n        try:\n            response = await self._client.get(\n                f\"/block_documents/{block_document_id}\",\n                params=dict(include_secrets=include_secrets),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return BlockDocument.parse_obj(response.json())\n\n    async def read_block_document_by_name(\n        self,\n        name: str,\n        block_type_slug: str,\n        include_secrets: bool = True,\n    ):\n\"\"\"\n        Read the block document with the specified name that corresponds to a\n        specific block type name.\n\n        Args:\n            name: The block document name.\n            block_type_slug: The block type slug.\n            include_secrets (bool): whether to include secret values\n                on the Block, corresponding to Pydantic's `SecretStr` and\n                `SecretBytes` fields. These fields are automatically obfuscated\n                by Pydantic, but users can additionally choose not to receive\n                their values from the API. Note that any business logic on the\n                Block may not work if this is `False`.\n\n        Raises:\n            httpx.RequestError: if the block document was not found for any reason\n\n        Returns:\n            A block document or None.\n        \"\"\"\n        try:\n            response = await self._client.get(\n                f\"/block_types/slug/{block_type_slug}/block_documents/name/{name}\",\n                params=dict(include_secrets=include_secrets),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return BlockDocument.parse_obj(response.json())\n\n    async def read_block_documents(\n        self,\n        block_schema_type: Optional[str] = None,\n        offset: Optional[int] = None,\n        limit: Optional[int] = None,\n        include_secrets: bool = True,\n    ):\n\"\"\"\n        Read block documents\n\n        Args:\n            block_schema_type: an optional block schema type\n            offset: an offset\n            limit: the number of blocks to return\n            include_secrets (bool): whether to include secret values\n                on the Block, corresponding to Pydantic's `SecretStr` and\n                `SecretBytes` fields. These fields are automatically obfuscated\n                by Pydantic, but users can additionally choose not to receive\n                their values from the API. 
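# Usage sketch: reading a block document by block type slug and name. The slug
# "secret" and the name "my-api-key" are illustrative; most application code would
# use the higher-level `Block.load("secret/my-api-key")` instead of the raw client.
import asyncio

from prefect import get_client


async def read_block():
    async with get_client() as client:
        doc = await client.read_block_document_by_name(
            name="my-api-key", block_type_slug="secret", include_secrets=True
        )
        print(doc.id, doc.name)

        # Listing documents with include_secrets=False returns obfuscated values.
        docs = await client.read_block_documents(limit=10, include_secrets=False)
        print(len(docs))


asyncio.run(read_block())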
Note that any business logic on the\n                Block may not work if this is `False`.\n\n        Returns:\n            A list of block documents\n        \"\"\"\n        response = await self._client.post(\n            \"/block_documents/filter\",\n            json=dict(\n                block_schema_type=block_schema_type,\n                offset=offset,\n                limit=limit,\n                include_secrets=include_secrets,\n            ),\n        )\n        return pydantic.parse_obj_as(List[BlockDocument], response.json())\n\n    async def create_deployment(\n        self,\n        flow_id: UUID,\n        name: str,\n        version: str = None,\n        schedule: SCHEDULE_TYPES = None,\n        parameters: Dict[str, Any] = None,\n        description: str = None,\n        work_queue_name: str = None,\n        work_pool_name: str = None,\n        tags: List[str] = None,\n        storage_document_id: UUID = None,\n        manifest_path: str = None,\n        path: str = None,\n        entrypoint: str = None,\n        infrastructure_document_id: UUID = None,\n        infra_overrides: Dict[str, Any] = None,\n        parameter_openapi_schema: dict = None,\n        is_schedule_active: Optional[bool] = None,\n        pull_steps: Optional[List[dict]] = None,\n    ) -> UUID:\n\"\"\"\n        Create a deployment.\n\n        Args:\n            flow_id: the flow ID to create a deployment for\n            name: the name of the deployment\n            version: an optional version string for the deployment\n            schedule: an optional schedule to apply to the deployment\n            tags: an optional list of tags to apply to the deployment\n            storage_document_id: an reference to the storage block document\n                used for the deployed flow\n            infrastructure_document_id: an reference to the infrastructure block document\n                to use for this deployment\n\n        Raises:\n            httpx.RequestError: if the deployment was not created for any reason\n\n        Returns:\n            the ID of the deployment in the backend\n        \"\"\"\n        deployment_create = DeploymentCreate(\n            flow_id=flow_id,\n            name=name,\n            version=version,\n            schedule=schedule,\n            parameters=dict(parameters or {}),\n            tags=list(tags or []),\n            work_queue_name=work_queue_name,\n            description=description,\n            storage_document_id=storage_document_id,\n            path=path,\n            entrypoint=entrypoint,\n            manifest_path=manifest_path,  # for backwards compat\n            infrastructure_document_id=infrastructure_document_id,\n            infra_overrides=infra_overrides or {},\n            parameter_openapi_schema=parameter_openapi_schema,\n            is_schedule_active=is_schedule_active,\n            pull_steps=pull_steps,\n        )\n\n        if work_pool_name is not None:\n            deployment_create.work_pool_name = work_pool_name\n\n        # Exclude newer fields that are not set to avoid compatibility issues\n        exclude = {\n            field\n            for field in [\"work_pool_name\", \"work_queue_name\"]\n            if field not in deployment_create.__fields_set__\n        }\n\n        if deployment_create.is_schedule_active is None:\n            exclude.add(\"is_schedule_active\")\n\n        if deployment_create.pull_steps is None:\n            exclude.add(\"pull_steps\")\n\n        json = deployment_create.dict(json_compatible=True, 
exclude=exclude)\n        response = await self._client.post(\n            \"/deployments/\",\n            json=json,\n        )\n        deployment_id = response.json().get(\"id\")\n        if not deployment_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        return UUID(deployment_id)\n\n    async def update_deployment(\n        self,\n        deployment: Deployment,\n        schedule: SCHEDULE_TYPES = None,\n        is_schedule_active: bool = None,\n    ):\n        deployment_update = DeploymentUpdate(\n            version=deployment.version,\n            schedule=schedule if schedule is not None else deployment.schedule,\n            is_schedule_active=(\n                is_schedule_active\n                if is_schedule_active is not None\n                else deployment.is_schedule_active\n            ),\n            description=deployment.description,\n            work_queue_name=deployment.work_queue_name,\n            tags=deployment.tags,\n            manifest_path=deployment.manifest_path,\n            path=deployment.path,\n            entrypoint=deployment.entrypoint,\n            parameters=deployment.parameters,\n            storage_document_id=deployment.storage_document_id,\n            infrastructure_document_id=deployment.infrastructure_document_id,\n            infra_overrides=deployment.infra_overrides,\n        )\n\n        if getattr(deployment, \"work_pool_name\", None) is not None:\n            deployment_update.work_pool_name = deployment.work_pool_name\n\n        await self._client.patch(\n            f\"/deployments/{deployment.id}\",\n            json=deployment_update.dict(json_compatible=True),\n        )\n\n    async def _create_deployment_from_schema(self, schema: DeploymentCreate) -> UUID:\n\"\"\"\n        Create a deployment from a prepared `DeploymentCreate` schema.\n        \"\"\"\n        # TODO: We are likely to remove this method once we have considered the\n        #       packaging interface for deployments further.\n        response = await self._client.post(\n            \"/deployments/\", json=schema.dict(json_compatible=True)\n        )\n        deployment_id = response.json().get(\"id\")\n        if not deployment_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        return UUID(deployment_id)\n\n    async def read_deployment(\n        self,\n        deployment_id: UUID,\n    ) -> DeploymentResponse:\n\"\"\"\n        Query the Prefect API for a deployment by id.\n\n        Args:\n            deployment_id: the deployment ID of interest\n\n        Returns:\n            a [Deployment model][prefect.client.schemas.objects.Deployment] representation of the deployment\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/deployments/{deployment_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return DeploymentResponse.parse_obj(response.json())\n\n    async def read_deployment_by_name(\n        self,\n        name: str,\n    ) -> DeploymentResponse:\n\"\"\"\n        Query the Prefect API for a deployment by name.\n\n        Args:\n            name: A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\n\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If request fails\n\n        
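# Usage sketch: reading a deployment by ID and pausing its schedule. The UUID is a
# placeholder; `update_deployment` resends the deployment's current fields with
# `is_schedule_active` switched off.
import asyncio
from uuid import UUID

from prefect import get_client

deployment_id = UUID("00000000-0000-0000-0000-000000000000")  # placeholder


async def pause_schedule():
    async with get_client() as client:
        deployment = await client.read_deployment(deployment_id)
        await client.update_deployment(deployment, is_schedule_active=False)
        print(f"paused schedule for {deployment.name!r}")


asyncio.run(pause_schedule())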
Returns:\n            a Deployment model representation of the deployment\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/deployments/name/{name}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n        return DeploymentResponse.parse_obj(response.json())\n\n    async def read_deployments(\n        self,\n        *,\n        flow_filter: FlowFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        deployment_filter: DeploymentFilter = None,\n        work_pool_filter: WorkPoolFilter = None,\n        work_queue_filter: WorkQueueFilter = None,\n        limit: int = None,\n        sort: DeploymentSort = None,\n        offset: int = 0,\n    ) -> List[DeploymentResponse]:\n\"\"\"\n        Query the Prefect API for deployments. Only deployments matching all\n        the provided criteria will be returned.\n\n        Args:\n            flow_filter: filter criteria for flows\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            deployment_filter: filter criteria for deployments\n            work_pool_filter: filter criteria for work pools\n            work_queue_filter: filter criteria for work pool queues\n            limit: a limit for the deployment query\n            offset: an offset for the deployment query\n\n        Returns:\n            a list of Deployment model representations\n                of the deployments\n        \"\"\"\n        body = {\n            \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n                if flow_run_filter\n                else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"deployments\": (\n                deployment_filter.dict(json_compatible=True)\n                if deployment_filter\n                else None\n            ),\n            \"work_pools\": (\n                work_pool_filter.dict(json_compatible=True)\n                if work_pool_filter\n                else None\n            ),\n            \"work_pool_queues\": (\n                work_queue_filter.dict(json_compatible=True)\n                if work_queue_filter\n                else None\n            ),\n            \"limit\": limit,\n            \"offset\": offset,\n            \"sort\": sort,\n        }\n\n        response = await self._client.post(\"/deployments/filter\", json=body)\n        return pydantic.parse_obj_as(List[DeploymentResponse], response.json())\n\n    async def delete_deployment(\n        self,\n        deployment_id: UUID,\n    ):\n\"\"\"\n        Delete deployment by id.\n\n        Args:\n            deployment_id: The deployment id of interest.\n        Raises:\n            prefect.exceptions.ObjectNotFound: If request returns 404\n            httpx.RequestError: If requests fails\n        \"\"\"\n        try:\n            await self._client.delete(f\"/deployments/{deployment_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from 
e\n            else:\n                raise\n\n    async def read_flow_run(self, flow_run_id: UUID) -> FlowRun:\n\"\"\"\n        Query the Prefect API for a flow run by id.\n\n        Args:\n            flow_run_id: the flow run ID of interest\n\n        Returns:\n            a Flow Run model representation of the flow run\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/flow_runs/{flow_run_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n        return FlowRun.parse_obj(response.json())\n\n    async def resume_flow_run(self, flow_run_id: UUID) -> OrchestrationResult:\n\"\"\"\n        Resumes a paused flow run.\n\n        Args:\n            flow_run_id: the flow run ID of interest\n\n        Returns:\n            an OrchestrationResult model representation of state orchestration output\n        \"\"\"\n        try:\n            response = await self._client.post(f\"/flow_runs/{flow_run_id}/resume\")\n        except httpx.HTTPStatusError:\n            raise\n\n        return OrchestrationResult.parse_obj(response.json())\n\n    async def read_flow_runs(\n        self,\n        *,\n        flow_filter: FlowFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        deployment_filter: DeploymentFilter = None,\n        work_pool_filter: WorkPoolFilter = None,\n        work_queue_filter: WorkQueueFilter = None,\n        sort: FlowRunSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[FlowRun]:\n\"\"\"\n        Query the Prefect API for flow runs. Only flow runs matching all criteria will\n        be returned.\n\n        Args:\n            flow_filter: filter criteria for flows\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            deployment_filter: filter criteria for deployments\n            work_pool_filter: filter criteria for work pools\n            work_queue_filter: filter criteria for work pool queues\n            sort: sort criteria for the flow runs\n            limit: limit for the flow run query\n            offset: offset for the flow run query\n\n        Returns:\n            a list of Flow Run model representations\n                of the flow runs\n        \"\"\"\n        body = {\n            \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n                if flow_run_filter\n                else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"deployments\": (\n                deployment_filter.dict(json_compatible=True)\n                if deployment_filter\n                else None\n            ),\n            \"work_pools\": (\n                work_pool_filter.dict(json_compatible=True)\n                if work_pool_filter\n                else None\n            ),\n            \"work_pool_queues\": (\n                work_queue_filter.dict(json_compatible=True)\n                if work_queue_filter\n                else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n\n        
response = await self._client.post(\"/flow_runs/filter\", json=body)\n        return pydantic.parse_obj_as(List[FlowRun], response.json())\n\n    async def set_flow_run_state(\n        self,\n        flow_run_id: UUID,\n        state: \"prefect.states.State\",\n        force: bool = False,\n    ) -> OrchestrationResult:\n\"\"\"\n        Set the state of a flow run.\n\n        Args:\n            flow_run_id: the id of the flow run\n            state: the state to set\n            force: if True, disregard orchestration logic when setting the state,\n                forcing the Prefect API to accept the state\n\n        Returns:\n            an OrchestrationResult model representation of state orchestration output\n        \"\"\"\n        state_create = state.to_state_create()\n        state_create.state_details.flow_run_id = flow_run_id\n        try:\n            response = await self._client.post(\n                f\"/flow_runs/{flow_run_id}/set_state\",\n                json=dict(state=state_create.dict(json_compatible=True), force=force),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n        return OrchestrationResult.parse_obj(response.json())\n\n    async def read_flow_run_states(\n        self, flow_run_id: UUID\n    ) -> List[prefect.states.State]:\n\"\"\"\n        Query for the states of a flow run\n\n        Args:\n            flow_run_id: the id of the flow run\n\n        Returns:\n            a list of State model representations\n                of the flow run states\n        \"\"\"\n        response = await self._client.get(\n            \"/flow_run_states/\", params=dict(flow_run_id=str(flow_run_id))\n        )\n        return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n\n    async def set_task_run_name(self, task_run_id: UUID, name: str):\n        task_run_data = TaskRunUpdate(name=name)\n        return await self._client.patch(\n            f\"/task_runs/{task_run_id}\",\n            json=task_run_data.dict(json_compatible=True, exclude_unset=True),\n        )\n\n    async def create_task_run(\n        self,\n        task: \"TaskObject\",\n        flow_run_id: UUID,\n        dynamic_key: str,\n        name: str = None,\n        extra_tags: Iterable[str] = None,\n        state: prefect.states.State = None,\n        task_inputs: Dict[\n            str,\n            List[\n                Union[\n                    TaskRunResult,\n                    Parameter,\n                    Constant,\n                ]\n            ],\n        ] = None,\n    ) -> TaskRun:\n\"\"\"\n        Create a task run\n\n        Args:\n            task: The Task to run\n            flow_run_id: The flow run id with which to associate the task run\n            dynamic_key: A key unique to this particular run of a Task within the flow\n            name: An optional name for the task run\n            extra_tags: an optional list of extra tags to apply to the task run in\n                addition to `task.tags`\n            state: The initial state for the run. If not provided, defaults to\n                `Pending` for now. 
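# Usage sketch: force-completing a flow run. Forcing bypasses orchestration rules, so
# checking the returned OrchestrationResult status is worthwhile. The UUID is a
# placeholder.
import asyncio
from uuid import UUID

from prefect import get_client
from prefect.states import Completed

flow_run_id = UUID("00000000-0000-0000-0000-000000000000")  # placeholder


async def force_complete():
    async with get_client() as client:
        result = await client.set_flow_run_state(
            flow_run_id, state=Completed(message="closed out manually"), force=True
        )
        print(result.status)


asyncio.run(force_complete())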
Should always be a `Scheduled` type.\n            task_inputs: the set of inputs passed to the task\n\n        Returns:\n            The created task run.\n        \"\"\"\n        tags = set(task.tags).union(extra_tags or [])\n\n        if state is None:\n            state = prefect.states.Pending()\n\n        task_run_data = TaskRunCreate(\n            name=name,\n            flow_run_id=flow_run_id,\n            task_key=task.task_key,\n            dynamic_key=dynamic_key,\n            tags=list(tags),\n            task_version=task.version,\n            empirical_policy=TaskRunPolicy(\n                retries=task.retries,\n                retry_delay=task.retry_delay_seconds,\n                retry_jitter_factor=task.retry_jitter_factor,\n            ),\n            state=state.to_state_create(),\n            task_inputs=task_inputs or {},\n        )\n\n        response = await self._client.post(\n            \"/task_runs/\", json=task_run_data.dict(json_compatible=True)\n        )\n        return TaskRun.parse_obj(response.json())\n\n    async def read_task_run(self, task_run_id: UUID) -> TaskRun:\n\"\"\"\n        Query the Prefect API for a task run by id.\n\n        Args:\n            task_run_id: the task run ID of interest\n\n        Returns:\n            a Task Run model representation of the task run\n        \"\"\"\n        response = await self._client.get(f\"/task_runs/{task_run_id}\")\n        return TaskRun.parse_obj(response.json())\n\n    async def read_task_runs(\n        self,\n        *,\n        flow_filter: FlowFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        deployment_filter: DeploymentFilter = None,\n        sort: TaskRunSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[TaskRun]:\n\"\"\"\n        Query the Prefect API for task runs. 
Only task runs matching all criteria will\n        be returned.\n\n        Args:\n            flow_filter: filter criteria for flows\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            deployment_filter: filter criteria for deployments\n            sort: sort criteria for the task runs\n            limit: a limit for the task run query\n            offset: an offset for the task run query\n\n        Returns:\n            a list of Task Run model representations\n                of the task runs\n        \"\"\"\n        body = {\n            \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n                if flow_run_filter\n                else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"deployments\": (\n                deployment_filter.dict(json_compatible=True)\n                if deployment_filter\n                else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n        response = await self._client.post(\"/task_runs/filter\", json=body)\n        return pydantic.parse_obj_as(List[TaskRun], response.json())\n\n    async def set_task_run_state(\n        self,\n        task_run_id: UUID,\n        state: prefect.states.State,\n        force: bool = False,\n    ) -> OrchestrationResult:\n\"\"\"\n        Set the state of a task run.\n\n        Args:\n            task_run_id: the id of the task run\n            state: the state to set\n            force: if True, disregard orchestration logic when setting the state,\n                forcing the Prefect API to accept the state\n\n        Returns:\n            an OrchestrationResult model representation of state orchestration output\n        \"\"\"\n        state_create = state.to_state_create()\n        state_create.state_details.task_run_id = task_run_id\n        response = await self._client.post(\n            f\"/task_runs/{task_run_id}/set_state\",\n            json=dict(state=state_create.dict(json_compatible=True), force=force),\n        )\n        return OrchestrationResult.parse_obj(response.json())\n\n    async def read_task_run_states(\n        self, task_run_id: UUID\n    ) -> List[prefect.states.State]:\n\"\"\"\n        Query for the states of a task run\n\n        Args:\n            task_run_id: the id of the task run\n\n        Returns:\n            a list of State model representations of the task run states\n        \"\"\"\n        response = await self._client.get(\n            \"/task_run_states/\", params=dict(task_run_id=str(task_run_id))\n        )\n        return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n\n    async def create_logs(self, logs: Iterable[Union[LogCreate, dict]]) -> None:\n\"\"\"\n        Create logs for a flow or task run\n\n        Args:\n            logs: An iterable of `LogCreate` objects or already json-compatible dicts\n        \"\"\"\n        serialized_logs = [\n            log.dict(json_compatible=True) if isinstance(log, LogCreate) else log\n            for log in logs\n        ]\n        await self._client.post(\"/logs/\", json=serialized_logs)\n\n    async def create_flow_run_notification_policy(\n        self,\n        block_document_id: UUID,\n     
   is_active: bool = True,\n        tags: List[str] = None,\n        state_names: List[str] = None,\n        message_template: Optional[str] = None,\n    ) -> UUID:\n\"\"\"\n        Create a notification policy for flow runs\n\n        Args:\n            block_document_id: The block document UUID\n            is_active: Whether the notification policy is active\n            tags: List of flow tags\n            state_names: List of state names\n            message_template: Notification message template\n        \"\"\"\n        if tags is None:\n            tags = []\n        if state_names is None:\n            state_names = []\n\n        policy = FlowRunNotificationPolicyCreate(\n            block_document_id=block_document_id,\n            is_active=is_active,\n            tags=tags,\n            state_names=state_names,\n            message_template=message_template,\n        )\n        response = await self._client.post(\n            \"/flow_run_notification_policies/\",\n            json=policy.dict(json_compatible=True),\n        )\n\n        policy_id = response.json().get(\"id\")\n        if not policy_id:\n            raise httpx.RequestError(f\"Malformed response: {response}\")\n\n        return UUID(policy_id)\n\n    async def read_flow_run_notification_policies(\n        self,\n        flow_run_notification_policy_filter: FlowRunNotificationPolicyFilter,\n        limit: Optional[int] = None,\n        offset: int = 0,\n    ) -> List[FlowRunNotificationPolicy]:\n\"\"\"\n        Query the Prefect API for flow run notification policies. Only policies matching all criteria will\n        be returned.\n\n        Args:\n            flow_run_notification_policy_filter: filter criteria for notification policies\n            limit: a limit for the notification policies query\n            offset: an offset for the notification policies query\n\n        Returns:\n            a list of FlowRunNotificationPolicy model representations\n                of the notification policies\n        \"\"\"\n        body = {\n            \"flow_run_notification_policy_filter\": (\n                flow_run_notification_policy_filter.dict(json_compatible=True)\n                if flow_run_notification_policy_filter\n                else None\n            ),\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n        response = await self._client.post(\n            \"/flow_run_notification_policies/filter\", json=body\n        )\n        return pydantic.parse_obj_as(List[FlowRunNotificationPolicy], response.json())\n\n    async def read_logs(\n        self,\n        log_filter: LogFilter = None,\n        limit: int = None,\n        offset: int = None,\n        sort: LogSort = LogSort.TIMESTAMP_ASC,\n    ) -> None:\n\"\"\"\n        Read flow and task run logs.\n        \"\"\"\n        body = {\n            \"logs\": log_filter.dict(json_compatible=True) if log_filter else None,\n            \"limit\": limit,\n            \"offset\": offset,\n            \"sort\": sort,\n        }\n\n        response = await self._client.post(\"/logs/filter\", json=body)\n        return pydantic.parse_obj_as(List[Log], response.json())\n\n    async def resolve_datadoc(self, datadoc: DataDocument) -> Any:\n\"\"\"\n        Recursively decode possibly nested data documents.\n\n        \"server\" encoded documents will be retrieved from the server.\n\n        Args:\n            datadoc: The data document to resolve\n\n        Returns:\n            a decoded object, the innermost data\n        \"\"\"\n       
 if not isinstance(datadoc, DataDocument):\n            raise TypeError(\n                f\"`resolve_datadoc` received invalid type {type(datadoc).__name__}\"\n            )\n\n        async def resolve_inner(data):\n            if isinstance(data, bytes):\n                try:\n                    data = DataDocument.parse_raw(data)\n                except pydantic.ValidationError:\n                    return data\n\n            if isinstance(data, DataDocument):\n                return await resolve_inner(data.decode())\n\n            return data\n\n        return await resolve_inner(datadoc)\n\n    async def send_worker_heartbeat(self, work_pool_name: str, worker_name: str):\n\"\"\"\n        Sends a worker heartbeat for a given work pool.\n\n        Args:\n            work_pool_name: The name of the work pool to heartbeat against.\n            worker_name: The name of the worker sending the heartbeat.\n        \"\"\"\n        await self._client.post(\n            f\"/work_pools/{work_pool_name}/workers/heartbeat\",\n            json={\"name\": worker_name},\n        )\n\n    async def read_workers_for_work_pool(\n        self,\n        work_pool_name: str,\n        worker_filter: Optional[WorkerFilter] = None,\n        offset: Optional[int] = None,\n        limit: Optional[int] = None,\n    ) -> List[Worker]:\n\"\"\"\n        Reads workers for a given work pool.\n\n        Args:\n            work_pool_name: The name of the work pool for which to get\n                member workers.\n            worker_filter: Criteria by which to filter workers.\n            limit: Limit for the worker query.\n            offset: Limit for the worker query.\n        \"\"\"\n        response = await self._client.post(\n            f\"/work_pools/{work_pool_name}/workers/filter\",\n            json={\n                \"worker_filter\": (\n                    worker_filter.dict(json_compatible=True, exclude_unset=True)\n                    if worker_filter\n                    else None\n                ),\n                \"offset\": offset,\n                \"limit\": limit,\n            },\n        )\n\n        return pydantic.parse_obj_as(List[Worker], response.json())\n\n    async def read_work_pool(self, work_pool_name: str) -> WorkPool:\n\"\"\"\n        Reads information for a given work pool\n\n        Args:\n            work_pool_name: The name of the work pool to for which to get\n                information.\n\n        Returns:\n            Information about the requested work pool.\n        \"\"\"\n        try:\n            response = await self._client.get(f\"/work_pools/{work_pool_name}\")\n            return pydantic.parse_obj_as(WorkPool, response.json())\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_work_pools(\n        self,\n        limit: Optional[int] = None,\n        offset: int = 0,\n        work_pool_filter: Optional[WorkPoolFilter] = None,\n    ) -> List[WorkPool]:\n\"\"\"\n        Reads work pools.\n\n        Args:\n            limit: Limit for the work pool query.\n            offset: Offset for the work pool query.\n            work_pool_filter: Criteria by which to filter work pools.\n\n        Returns:\n            A list of work pools.\n        \"\"\"\n\n        body = {\n            \"limit\": limit,\n            \"offset\": offset,\n            \"work_pools\": (\n       
         work_pool_filter.dict(json_compatible=True)\n                if work_pool_filter\n                else None\n            ),\n        }\n        response = await self._client.post(\"/work_pools/filter\", json=body)\n        return pydantic.parse_obj_as(List[WorkPool], response.json())\n\n    async def create_work_pool(\n        self,\n        work_pool: WorkPoolCreate,\n    ) -> WorkPool:\n\"\"\"\n        Creates a work pool with the provided configuration.\n\n        Args:\n            work_pool: Desired configuration for the new work pool.\n\n        Returns:\n            Information about the newly created work pool.\n        \"\"\"\n        try:\n            response = await self._client.post(\n                \"/work_pools/\",\n                json=work_pool.dict(json_compatible=True, exclude_unset=True),\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_409_CONFLICT:\n                raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n            else:\n                raise\n\n        return pydantic.parse_obj_as(WorkPool, response.json())\n\n    async def update_work_pool(\n        self,\n        work_pool_name: str,\n        work_pool: WorkPoolUpdate,\n    ):\n        await self._client.patch(\n            f\"/work_pools/{work_pool_name}\",\n            json=work_pool.dict(json_compatible=True, exclude_unset=True),\n        )\n\n    async def delete_work_pool(\n        self,\n        work_pool_name: str,\n    ):\n\"\"\"\n        Deletes a work pool.\n\n        Args:\n            work_pool_name: Name of the work pool to delete.\n        \"\"\"\n        try:\n            await self._client.delete(f\"/work_pools/{work_pool_name}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_work_queues(\n        self,\n        work_pool_name: Optional[str] = None,\n        work_queue_filter: Optional[WorkQueueFilter] = None,\n        limit: Optional[int] = None,\n        offset: Optional[int] = None,\n    ) -> List[WorkQueue]:\n\"\"\"\n        Retrieves queues for a work pool.\n\n        Args:\n            work_pool_name: Name of the work pool for which to get queues.\n            work_queue_filter: Criteria by which to filter queues.\n            limit: Limit for the queue query.\n            offset: Limit for the queue query.\n\n        Returns:\n            List of queues for the specified work pool.\n        \"\"\"\n        json = {\n            \"work_queues\": (\n                work_queue_filter.dict(json_compatible=True, exclude_unset=True)\n                if work_queue_filter\n                else None\n            ),\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n\n        if work_pool_name:\n            try:\n                response = await self._client.post(\n                    f\"/work_pools/{work_pool_name}/queues/filter\",\n                    json=json,\n                )\n            except httpx.HTTPStatusError as e:\n                if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                    raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n                else:\n                    raise\n        else:\n            response = await self._client.post(\"/work_queues/filter\", json=json)\n\n        return 
pydantic.parse_obj_as(List[WorkQueue], response.json())\n\n    async def get_scheduled_flow_runs_for_work_pool(\n        self,\n        work_pool_name: str,\n        work_queue_names: Optional[List[str]] = None,\n        scheduled_before: Optional[datetime.datetime] = None,\n    ) -> List[WorkerFlowRunResponse]:\n\"\"\"\n        Retrieves scheduled flow runs for the provided set of work pool queues.\n\n        Args:\n            work_pool_name: The name of the work pool that the work pool\n                queues are associated with.\n            work_queue_names: The names of the work pool queues from which\n                to get scheduled flow runs.\n            scheduled_before: Datetime used to filter returned flow runs. Flow runs\n                scheduled for after the given datetime string will not be returned.\n\n        Returns:\n            A list of worker flow run responses containing information about the\n            retrieved flow runs.\n        \"\"\"\n        body: Dict[str, Any] = {}\n        if work_queue_names is not None:\n            body[\"work_queue_names\"] = list(work_queue_names)\n        if scheduled_before:\n            body[\"scheduled_before\"] = str(scheduled_before)\n\n        response = await self._client.post(\n            f\"/work_pools/{work_pool_name}/get_scheduled_flow_runs\",\n            json=body,\n        )\n\n        return pydantic.parse_obj_as(List[WorkerFlowRunResponse], response.json())\n\n    async def create_artifact(\n        self,\n        artifact: ArtifactCreate,\n    ) -> Artifact:\n\"\"\"\n        Creates an artifact with the provided configuration.\n\n        Args:\n            artifact: Desired configuration for the new artifact.\n        Returns:\n            Information about the newly created artifact.\n        \"\"\"\n\n        response = await self._client.post(\n            \"/artifacts/\",\n            json=artifact.dict(json_compatible=True, exclude_unset=True),\n        )\n\n        return pydantic.parse_obj_as(Artifact, response.json())\n\n    async def read_artifacts(\n        self,\n        *,\n        artifact_filter: ArtifactFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        sort: ArtifactSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[Artifact]:\n\"\"\"\n        Query the Prefect API for artifacts. 
Only artifacts matching all criteria will\n        be returned.\n        Args:\n            artifact_filter: filter criteria for artifacts\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            sort: sort criteria for the artifacts\n            limit: limit for the artifact query\n            offset: offset for the artifact query\n        Returns:\n            a list of Artifact model representations of the artifacts\n        \"\"\"\n        body = {\n            \"artifacts\": (\n                artifact_filter.dict(json_compatible=True) if artifact_filter else None\n            ),\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n        response = await self._client.post(\"/artifacts/filter\", json=body)\n        return pydantic.parse_obj_as(List[Artifact], response.json())\n\n    async def read_latest_artifacts(\n        self,\n        *,\n        artifact_filter: ArtifactCollectionFilter = None,\n        flow_run_filter: FlowRunFilter = None,\n        task_run_filter: TaskRunFilter = None,\n        sort: ArtifactCollectionSort = None,\n        limit: int = None,\n        offset: int = 0,\n    ) -> List[ArtifactCollection]:\n\"\"\"\n        Query the Prefect API for artifacts. Only artifacts matching all criteria will\n        be returned.\n        Args:\n            artifact_filter: filter criteria for artifacts\n            flow_run_filter: filter criteria for flow runs\n            task_run_filter: filter criteria for task runs\n            sort: sort criteria for the artifacts\n            limit: limit for the artifact query\n            offset: offset for the artifact query\n        Returns:\n            a list of Artifact model representations of the artifacts\n        \"\"\"\n        body = {\n            \"artifacts\": (\n                artifact_filter.dict(json_compatible=True) if artifact_filter else None\n            ),\n            \"flow_runs\": (\n                flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n            ),\n            \"task_runs\": (\n                task_run_filter.dict(json_compatible=True) if task_run_filter else None\n            ),\n            \"sort\": sort,\n            \"limit\": limit,\n            \"offset\": offset,\n        }\n        response = await self._client.post(\"/artifacts/latest/filter\", json=body)\n        return pydantic.parse_obj_as(List[ArtifactCollection], response.json())\n\n    async def delete_artifact(self, artifact_id: UUID) -> None:\n\"\"\"\n        Deletes an artifact with the provided id.\n\n        Args:\n            artifact_id: The id of the artifact to delete.\n        \"\"\"\n        try:\n            await self._client.delete(f\"/artifacts/{artifact_id}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_variable_by_name(self, name: str) -> Optional[Variable]:\n\"\"\"Reads a variable by name. 
Returns None if no variable is found.\"\"\"\n        try:\n            response = await self._client.get(f\"/variables/name/{name}\")\n            return pydantic.parse_obj_as(Variable, response.json())\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                return None\n            else:\n                raise\n\n    async def delete_variable_by_name(self, name: str):\n\"\"\"Deletes a variable by name.\"\"\"\n        try:\n            await self._client.delete(f\"/variables/name/{name}\")\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == 404:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n\n    async def read_variables(self, limit: int = None) -> List[Variable]:\n\"\"\"Reads all variables.\"\"\"\n        response = await self._client.post(\"/variables/filter\", json={\"limit\": limit})\n        return pydantic.parse_obj_as(List[Variable], response.json())\n\n    async def read_worker_metadata(self) -> Dict[str, Any]:\n\"\"\"Reads worker metadata stored in Prefect collection registry.\"\"\"\n        response = await self._client.get(\"collections/views/aggregate-worker-metadata\")\n        response.raise_for_status()\n        return response.json()\n\n    async def create_automation(self, automation: Automation) -> UUID:\n\"\"\"Creates an automation in Prefect Cloud.\"\"\"\n        if self.server_type != ServerType.CLOUD:\n            raise RuntimeError(\"Automations are only supported for Prefect Cloud.\")\n\n        response = await self._client.post(\n            \"/automations/\",\n            json=automation.dict(json_compatible=True),\n        )\n\n        return UUID(response.json()[\"id\"])\n\n    async def read_resource_related_automations(\n        self, resource_id: str\n    ) -> List[ExistingAutomation]:\n        if self.server_type != ServerType.CLOUD:\n            raise RuntimeError(\"Automations are only supported for Prefect Cloud.\")\n\n        response = await self._client.get(f\"/automations/related-to/{resource_id}\")\n        response.raise_for_status()\n        return pydantic.parse_obj_as(List[ExistingAutomation], response.json())\n\n    async def delete_resource_owned_automations(self, resource_id: str):\n        if self.server_type != ServerType.CLOUD:\n            raise RuntimeError(\"Automations are only supported for Prefect Cloud.\")\n\n        await self._client.delete(f\"/automations/owned-by/{resource_id}\")\n\n    async def increment_concurrency_slots(\n        self, names: List[str], slots: int, mode: str\n    ) -> httpx.Response:\n        return await self._client.post(\n            \"/v2/concurrency_limits/increment\",\n            json={\"names\": names, \"slots\": slots, \"mode\": mode},\n        )\n\n    async def release_concurrency_slots(\n        self, names: List[str], slots: int\n    ) -> httpx.Response:\n        return await self._client.post(\n            \"/v2/concurrency_limits/decrement\",\n            json={\"names\": names, \"slots\": slots},\n        )\n\n    async def __aenter__(self):\n\"\"\"\n        Start the client.\n\n        If the client is already started, this will raise an exception.\n\n        If the client is already closed, this will raise an exception. 
Use a new client\n        instance instead.\n        \"\"\"\n        if self._closed:\n            # httpx.AsyncClient does not allow reuse so we will not either.\n            raise RuntimeError(\n                \"The client cannot be started again after closing. \"\n                \"Retrieve a new client with `get_client()` instead.\"\n            )\n\n        if self._started:\n            # httpx.AsyncClient does not allow reentrancy so we will not either.\n            raise RuntimeError(\"The client cannot be started more than once.\")\n\n        self._loop = asyncio.get_running_loop()\n        await self._exit_stack.__aenter__()\n\n        # Enter a lifespan context if using an ephemeral application.\n        # See https://github.com/encode/httpx/issues/350\n        if self._ephemeral_app and self.manage_lifespan:\n            self._ephemeral_lifespan = await self._exit_stack.enter_async_context(\n                app_lifespan_context(self._ephemeral_app)\n            )\n\n        if self._ephemeral_app:\n            self.logger.debug(\n                \"Using ephemeral application with database at \"\n                f\"{PREFECT_API_DATABASE_CONNECTION_URL.value()}\"\n            )\n        else:\n            self.logger.debug(f\"Connecting to API at {self.api_url}\")\n\n        # Enter the httpx client's context\n        await self._exit_stack.enter_async_context(self._client)\n\n        self._started = True\n\n        return self\n\n    async def __aexit__(self, *exc_info):\n\"\"\"\n        Shutdown the client.\n        \"\"\"\n        self._closed = True\n        return await self._exit_stack.__aexit__(*exc_info)\n\n    def __enter__(self):\n        raise RuntimeError(\n            \"The `PrefectClient` must be entered with an async context. Use 'async \"\n            \"with PrefectClient(...)' not 'with PrefectClient(...)'\"\n        )\n\n    def __exit__(self, *_):\n        assert False, \"This should never be called but must be defined for __enter__\"\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.api_url","title":"api_url: httpx.URL property","text":"

Get the base URL for the API.

","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.api_healthcheck","title":"api_healthcheck async","text":"

Attempts to connect to the API and returns the encountered exception if not successful.

If successful, returns None.

Source code in src/prefect/client/orchestration.py
async def api_healthcheck(self) -> Optional[Exception]:\n\"\"\"\n    Attempts to connect to the API and returns the encountered exception if not\n    successful.\n\n    If successful, returns `None`.\n    \"\"\"\n    try:\n        await self._client.get(\"/health\")\n        return None\n    except Exception as exc:\n        return exc\n
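For illustration, a minimal usage sketch (assuming `get_client` is importable from the top-level `prefect` package and an API is reachable); `api_healthcheck` returns `None` on success, otherwise the exception it encountered:

import asyncio

from prefect import get_client


async def check_api() -> None:
    # The client must be used as an async context manager (see __aenter__ above).
    async with get_client() as client:
        error = await client.api_healthcheck()
        if error is None:
            print("Prefect API is reachable")
        else:
            print(f"Prefect API healthcheck failed: {error!r}")


asyncio.run(check_api())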
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.hello","title":"hello async","text":"

Send a GET request to /hello for testing purposes.

Source code in src/prefect/client/orchestration.py
async def hello(self) -> httpx.Response:\n\"\"\"\n    Send a GET request to /hello for testing purposes.\n    \"\"\"\n    return await self._client.get(\"/hello\")\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow","title":"create_flow async","text":"

Create a flow in the Prefect API.

Parameters:

Name Type Description Default flow FlowObject

a Flow object

required

Raises:

Type Description httpx.RequestError

if a flow was not created for any reason

Returns:

Type Description UUID

the ID of the flow in the backend

Source code in src/prefect/client/orchestration.py
async def create_flow(self, flow: \"FlowObject\") -> UUID:\n\"\"\"\n    Create a flow in the Prefect API.\n\n    Args:\n        flow: a [Flow][prefect.flows.Flow] object\n\n    Raises:\n        httpx.RequestError: if a flow was not created for any reason\n\n    Returns:\n        the ID of the flow in the backend\n    \"\"\"\n    return await self.create_flow_from_name(flow.name)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_from_name","title":"create_flow_from_name async","text":"

Create a flow in the Prefect API.

Parameters:

Name Type Description Default flow_name str

the name of the new flow

required

Raises:

Type Description httpx.RequestError

if a flow was not created for any reason

Returns:

Type Description UUID

the ID of the flow in the backend

Source code in src/prefect/client/orchestration.py
async def create_flow_from_name(self, flow_name: str) -> UUID:\n\"\"\"\n    Create a flow in the Prefect API.\n\n    Args:\n        flow_name: the name of the new flow\n\n    Raises:\n        httpx.RequestError: if a flow was not created for any reason\n\n    Returns:\n        the ID of the flow in the backend\n    \"\"\"\n    flow_data = FlowCreate(name=flow_name)\n    response = await self._client.post(\n        \"/flows/\", json=flow_data.dict(json_compatible=True)\n    )\n\n    flow_id = response.json().get(\"id\")\n    if not flow_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    # Return the id of the created flow\n    return UUID(flow_id)\n
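As a hedged sketch (the flow name below is illustrative), creating a flow record by name and capturing the returned UUID might look like:

import asyncio

from prefect import get_client


async def register_flow() -> None:
    async with get_client() as client:
        # Create a flow record named "my-etl-flow" and return its UUID.
        flow_id = await client.create_flow_from_name("my-etl-flow")
        print(f"Flow id: {flow_id}")


asyncio.run(register_flow())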
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow","title":"read_flow async","text":"

Query the Prefect API for a flow by id.

Parameters:

Name Type Description Default flow_id UUID

the flow ID of interest

required

Returns:

Type Description Flow

a Flow model representation of the flow

Source code in src/prefect/client/orchestration.py
async def read_flow(self, flow_id: UUID) -> Flow:\n\"\"\"\n    Query the Prefect API for a flow by id.\n\n    Args:\n        flow_id: the flow ID of interest\n\n    Returns:\n        a [Flow model][prefect.client.schemas.objects.Flow] representation of the flow\n    \"\"\"\n    response = await self._client.get(f\"/flows/{flow_id}\")\n    return Flow.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flows","title":"read_flows async","text":"

Query the Prefect API for flows. Only flows matching all criteria will be returned.

Parameters:

Name Type Description Default flow_filter FlowFilter

filter criteria for flows

None flow_run_filter FlowRunFilter

filter criteria for flow runs

None task_run_filter TaskRunFilter

filter criteria for task runs

None deployment_filter DeploymentFilter

filter criteria for deployments

None work_pool_filter WorkPoolFilter

filter criteria for work pools

None work_queue_filter WorkQueueFilter

filter criteria for work pool queues

None sort FlowSort

sort criteria for the flows

None limit int

limit for the flow query

None offset int

offset for the flow query

0

Returns:

Type Description List[Flow]

a list of Flow model representations of the flows

Source code in src/prefect/client/orchestration.py
async def read_flows(\n    self,\n    *,\n    flow_filter: FlowFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    deployment_filter: DeploymentFilter = None,\n    work_pool_filter: WorkPoolFilter = None,\n    work_queue_filter: WorkQueueFilter = None,\n    sort: FlowSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[Flow]:\n\"\"\"\n    Query the Prefect API for flows. Only flows matching all criteria will\n    be returned.\n\n    Args:\n        flow_filter: filter criteria for flows\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        deployment_filter: filter criteria for deployments\n        work_pool_filter: filter criteria for work pools\n        work_queue_filter: filter criteria for work pool queues\n        sort: sort criteria for the flows\n        limit: limit for the flow query\n        offset: offset for the flow query\n\n    Returns:\n        a list of Flow model representations of the flows\n    \"\"\"\n    body = {\n        \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n            if flow_run_filter\n            else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"deployments\": (\n            deployment_filter.dict(json_compatible=True)\n            if deployment_filter\n            else None\n        ),\n        \"work_pools\": (\n            work_pool_filter.dict(json_compatible=True)\n            if work_pool_filter\n            else None\n        ),\n        \"work_queues\": (\n            work_queue_filter.dict(json_compatible=True)\n            if work_queue_filter\n            else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n\n    response = await self._client.post(\"/flows/filter\", json=body)\n    return pydantic.parse_obj_as(List[Flow], response.json())\n
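A small sketch paging through flows without any filters (all filter arguments are optional and default to None):

import asyncio

from prefect import get_client


async def list_flows() -> None:
    async with get_client() as client:
        flows = await client.read_flows(limit=10, offset=0)
        for f in flows:
            print(f.id, f.name)


asyncio.run(list_flows())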
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_by_name","title":"read_flow_by_name async","text":"

Query the Prefect API for a flow by name.

Parameters:

Name Type Description Default flow_name str

the name of a flow

required

Returns:

Type Description Flow

a fully hydrated Flow model

Source code in src/prefect/client/orchestration.py
async def read_flow_by_name(\n    self,\n    flow_name: str,\n) -> Flow:\n\"\"\"\n    Query the Prefect API for a flow by name.\n\n    Args:\n        flow_name: the name of a flow\n\n    Returns:\n        a fully hydrated Flow model\n    \"\"\"\n    response = await self._client.get(f\"/flows/name/{flow_name}\")\n    return Flow.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run_from_deployment","title":"create_flow_run_from_deployment async","text":"

Create a flow run for a deployment.

Parameters:

Name Type Description Default deployment_id UUID

The deployment ID to create the flow run from

required parameters Dict[str, Any]

Parameter overrides for this flow run. Merged with the deployment defaults

None context dict

Optional run context data

None state prefect.states.State

The initial state for the run. If not provided, defaults to Scheduled for now. Should always be a Scheduled type.

None name str

An optional name for the flow run. If not provided, the server will generate a name.

None tags Iterable[str]

An optional iterable of tags to apply to the flow run; these tags are merged with the deployment's tags.

None idempotency_key str

Optional idempotency key for creation of the flow run. If the key matches the key of an existing flow run, the existing run will be returned instead of creating a new one.

None parent_task_run_id UUID

if a subflow run is being created, the placeholder task run identifier in the parent flow

None work_queue_name str

An optional work queue name to add this run to. If not provided, will default to the deployment's set work queue. If one is provided that does not exist, a new work queue will be created within the deployment's work pool.

None

Raises:

Type Description httpx.RequestError

if the Prefect API does not successfully create a run for any reason

Returns:

Type Description FlowRun

The flow run model

Source code in src/prefect/client/orchestration.py
async def create_flow_run_from_deployment(\n    self,\n    deployment_id: UUID,\n    *,\n    parameters: Dict[str, Any] = None,\n    context: dict = None,\n    state: prefect.states.State = None,\n    name: str = None,\n    tags: Iterable[str] = None,\n    idempotency_key: str = None,\n    parent_task_run_id: UUID = None,\n    work_queue_name: str = None,\n) -> FlowRun:\n\"\"\"\n    Create a flow run for a deployment.\n\n    Args:\n        deployment_id: The deployment ID to create the flow run from\n        parameters: Parameter overrides for this flow run. Merged with the\n            deployment defaults\n        context: Optional run context data\n        state: The initial state for the run. If not provided, defaults to\n            `Scheduled` for now. Should always be a `Scheduled` type.\n        name: An optional name for the flow run. If not provided, the server will\n            generate a name.\n        tags: An optional iterable of tags to apply to the flow run; these tags\n            are merged with the deployment's tags.\n        idempotency_key: Optional idempotency key for creation of the flow run.\n            If the key matches the key of an existing flow run, the existing run will\n            be returned instead of creating a new one.\n        parent_task_run_id: if a subflow run is being created, the placeholder task\n            run identifier in the parent flow\n        work_queue_name: An optional work queue name to add this run to. If not provided,\n            will default to the deployment's set work queue.  If one is provided that does not\n            exist, a new work queue will be created within the deployment's work pool.\n\n    Raises:\n        httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n    Returns:\n        The flow run model\n    \"\"\"\n    parameters = parameters or {}\n    context = context or {}\n    state = state or prefect.states.Scheduled()\n    tags = tags or []\n\n    flow_run_create = DeploymentFlowRunCreate(\n        parameters=parameters,\n        context=context,\n        state=state.to_state_create(),\n        tags=tags,\n        name=name,\n        idempotency_key=idempotency_key,\n        parent_task_run_id=parent_task_run_id,\n    )\n\n    # done separately to avoid including this field in payloads sent to older API versions\n    if work_queue_name:\n        flow_run_create.work_queue_name = work_queue_name\n\n    response = await self._client.post(\n        f\"/deployments/{deployment_id}/create_flow_run\",\n        json=flow_run_create.dict(json_compatible=True, exclude_unset=True),\n    )\n    return FlowRun.parse_obj(response.json())\n
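A minimal sketch (the deployment id and parameter name are placeholders; any parameter overrides must match the deployment's flow signature):

import asyncio
from uuid import UUID

from prefect import get_client

DEPLOYMENT_ID = UUID("00000000-0000-0000-0000-000000000000")  # placeholder deployment id


async def trigger_run() -> None:
    async with get_client() as client:
        flow_run = await client.create_flow_run_from_deployment(
            DEPLOYMENT_ID,
            parameters={"example_param": "value"},  # hypothetical parameter override
            tags=["ad-hoc"],
            name="manually-triggered-run",
        )
        print(f"Created flow run {flow_run.name} ({flow_run.id})")


asyncio.run(trigger_run())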
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run","title":"create_flow_run async","text":"

Create a flow run for a flow.

Parameters:

Name Type Description Default flow FlowObject

The flow model to create the flow run for

required name str

An optional name for the flow run

None parameters Dict[str, Any]

Parameter overrides for this flow run.

None context dict

Optional run context data

None tags Iterable[str]

a list of tags to apply to this flow run

None parent_task_run_id UUID

if a subflow run is being created, the placeholder task run identifier in the parent flow

None state prefect.states.State

The initial state for the run. If not provided, defaults to Scheduled for now. Should always be a Scheduled type.

None

Raises:

Type Description httpx.RequestError

if the Prefect API does not successfully create a run for any reason

Returns:

Type Description FlowRun

The flow run model

Source code in src/prefect/client/orchestration.py
async def create_flow_run(\n    self,\n    flow: \"FlowObject\",\n    name: str = None,\n    parameters: Dict[str, Any] = None,\n    context: dict = None,\n    tags: Iterable[str] = None,\n    parent_task_run_id: UUID = None,\n    state: \"prefect.states.State\" = None,\n) -> FlowRun:\n\"\"\"\n    Create a flow run for a flow.\n\n    Args:\n        flow: The flow model to create the flow run for\n        name: An optional name for the flow run\n        parameters: Parameter overrides for this flow run.\n        context: Optional run context data\n        tags: a list of tags to apply to this flow run\n        parent_task_run_id: if a subflow run is being created, the placeholder task\n            run identifier in the parent flow\n        state: The initial state for the run. If not provided, defaults to\n            `Scheduled` for now. Should always be a `Scheduled` type.\n\n    Raises:\n        httpx.RequestError: if the Prefect API does not successfully create a run for any reason\n\n    Returns:\n        The flow run model\n    \"\"\"\n    parameters = parameters or {}\n    context = context or {}\n\n    if state is None:\n        state = prefect.states.Pending()\n\n    # Retrieve the flow id\n    flow_id = await self.create_flow(flow)\n\n    flow_run_create = FlowRunCreate(\n        flow_id=flow_id,\n        flow_version=flow.version,\n        name=name,\n        parameters=parameters,\n        context=context,\n        tags=list(tags or []),\n        parent_task_run_id=parent_task_run_id,\n        state=state.to_state_create(),\n        empirical_policy=FlowRunPolicy(\n            retries=flow.retries,\n            retry_delay=flow.retry_delay_seconds,\n        ),\n    )\n\n    flow_run_create_json = flow_run_create.dict(json_compatible=True)\n    response = await self._client.post(\"/flow_runs/\", json=flow_run_create_json)\n    flow_run = FlowRun.parse_obj(response.json())\n\n    # Restore the parameters to the local objects to retain expectations about\n    # Python objects\n    flow_run.parameters = parameters\n\n    return flow_run\n
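A sketch that registers a run for a locally defined flow; note this only creates the run record with the API, it does not execute the flow:

import asyncio

from prefect import flow, get_client


@flow
def greet(name: str = "world"):
    print(f"hello {name}")


async def create_run() -> None:
    async with get_client() as client:
        flow_run = await client.create_flow_run(
            greet,
            name="manual-run",
            parameters={"name": "Prefect"},
            tags=["example"],
        )
        # The run starts in a Pending state unless another state is provided.
        print(flow_run.id, flow_run.state.name if flow_run.state else None)


asyncio.run(create_run())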
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_flow_run","title":"update_flow_run async","text":"

Update a flow run's details.

Parameters:

Name Type Description Default flow_run_id UUID

The identifier for the flow run to update.

required flow_version Optional[str]

A new version string for the flow run.

None parameters Optional[dict]

A dictionary of parameter values for the flow run. This will not be merged with any existing parameters.

None name Optional[str]

A new name for the flow run.

None empirical_policy Optional[FlowRunPolicy]

A new flow run orchestration policy. This will not be merged with any existing policy.

None tags Optional[Iterable[str]]

An iterable of new tags for the flow run. These will not be merged with any existing tags.

None infrastructure_pid Optional[str]

The id of the flow run as returned by an infrastructure block.

None

Returns:

Type Description httpx.Response

an httpx.Response object from the PATCH request

Source code in src/prefect/client/orchestration.py
async def update_flow_run(\n    self,\n    flow_run_id: UUID,\n    flow_version: Optional[str] = None,\n    parameters: Optional[dict] = None,\n    name: Optional[str] = None,\n    tags: Optional[Iterable[str]] = None,\n    empirical_policy: Optional[FlowRunPolicy] = None,\n    infrastructure_pid: Optional[str] = None,\n) -> httpx.Response:\n\"\"\"\n    Update a flow run's details.\n\n    Args:\n        flow_run_id: The identifier for the flow run to update.\n        flow_version: A new version string for the flow run.\n        parameters: A dictionary of parameter values for the flow run. This will not\n            be merged with any existing parameters.\n        name: A new name for the flow run.\n        empirical_policy: A new flow run orchestration policy. This will not be\n            merged with any existing policy.\n        tags: An iterable of new tags for the flow run. These will not be merged with\n            any existing tags.\n        infrastructure_pid: The id of flow run as returned by an\n            infrastructure block.\n\n    Returns:\n        an `httpx.Response` object from the PATCH request\n    \"\"\"\n    params = {}\n    if flow_version is not None:\n        params[\"flow_version\"] = flow_version\n    if parameters is not None:\n        params[\"parameters\"] = parameters\n    if name is not None:\n        params[\"name\"] = name\n    if tags is not None:\n        params[\"tags\"] = tags\n    if empirical_policy is not None:\n        params[\"empirical_policy\"] = empirical_policy\n    if infrastructure_pid:\n        params[\"infrastructure_pid\"] = infrastructure_pid\n\n    flow_run_data = FlowRunUpdate(**params)\n\n    return await self._client.patch(\n        f\"/flow_runs/{flow_run_id}\",\n        json=flow_run_data.dict(json_compatible=True, exclude_unset=True),\n    )\n
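An illustrative sketch (the flow run id is a placeholder); note that parameters and tags replace, rather than merge with, existing values:

import asyncio
from uuid import UUID

from prefect import get_client

FLOW_RUN_ID = UUID("00000000-0000-0000-0000-000000000000")  # placeholder flow run id


async def rename_run() -> None:
    async with get_client() as client:
        response = await client.update_flow_run(
            FLOW_RUN_ID,
            name="renamed-run",
            tags=["retried"],
        )
        print(response.status_code)


asyncio.run(rename_run())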
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_flow_run","title":"delete_flow_run async","text":"

Delete a flow run by UUID.

Parameters:

Name Type Description Default flow_run_id UUID

The flow run UUID of interest.

required

Raises:

Type Description prefect.exceptions.ObjectNotFound

If request returns 404

httpx.RequestError

If the request fails

Source code in src/prefect/client/orchestration.py
async def delete_flow_run(\n    self,\n    flow_run_id: UUID,\n) -> None:\n\"\"\"\n    Delete a flow run by UUID.\n\n    Args:\n        flow_run_id: The flow run UUID of interest.\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If the request fails\n    \"\"\"\n    try:\n        await self._client.delete(f\"/flow_runs/{flow_run_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
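A sketch (placeholder id) showing how a missing run surfaces as ObjectNotFound:

import asyncio
from uuid import UUID

from prefect import get_client
from prefect.exceptions import ObjectNotFound

FLOW_RUN_ID = UUID("00000000-0000-0000-0000-000000000000")  # placeholder flow run id


async def remove_run() -> None:
    async with get_client() as client:
        try:
            await client.delete_flow_run(FLOW_RUN_ID)
        except ObjectNotFound:
            print("Flow run does not exist")


asyncio.run(remove_run())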
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_concurrency_limit","title":"create_concurrency_limit async","text":"

Create a tag concurrency limit in the Prefect API. These limits govern concurrently running tasks.

Parameters:

Name Type Description Default tag str

a tag the concurrency limit is applied to

required concurrency_limit int

the maximum number of concurrent task runs for a given tag

required

Raises:

Type Description httpx.RequestError

if the concurrency limit was not created for any reason

Returns:

Type Description UUID

the ID of the concurrency limit in the backend

Source code in src/prefect/client/orchestration.py
async def create_concurrency_limit(\n    self,\n    tag: str,\n    concurrency_limit: int,\n) -> UUID:\n\"\"\"\n    Create a tag concurrency limit in the Prefect API. These limits govern concurrently\n    running tasks.\n\n    Args:\n        tag: a tag the concurrency limit is applied to\n        concurrency_limit: the maximum number of concurrent task runs for a given tag\n\n    Raises:\n        httpx.RequestError: if the concurrency limit was not created for any reason\n\n    Returns:\n        the ID of the concurrency limit in the backend\n    \"\"\"\n\n    concurrency_limit_create = ConcurrencyLimitCreate(\n        tag=tag,\n        concurrency_limit=concurrency_limit,\n    )\n    response = await self._client.post(\n        \"/concurrency_limits/\",\n        json=concurrency_limit_create.dict(json_compatible=True),\n    )\n\n    concurrency_limit_id = response.json().get(\"id\")\n\n    if not concurrency_limit_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    return UUID(concurrency_limit_id)\n
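A sketch that limits task runs carrying an illustrative "database" tag to 10 concurrent runs:

import asyncio

from prefect import get_client


async def add_limit() -> None:
    async with get_client() as client:
        limit_id = await client.create_concurrency_limit(
            tag="database", concurrency_limit=10
        )
        print(f"Concurrency limit id: {limit_id}")


asyncio.run(add_limit())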
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_concurrency_limit_by_tag","title":"read_concurrency_limit_by_tag async","text":"

Read the concurrency limit set on a specific tag.

Parameters:

Name Type Description Default tag str

a tag the concurrency limit is applied to

required

Raises:

Type Description prefect.exceptions.ObjectNotFound

If request returns 404

httpx.RequestError

if the concurrency limit was not created for any reason

Returns:

Type Description

the concurrency limit set on a specific tag

Source code in src/prefect/client/orchestration.py
async def read_concurrency_limit_by_tag(\n    self,\n    tag: str,\n):\n\"\"\"\n    Read the concurrency limit set on a specific tag.\n\n    Args:\n        tag: a tag the concurrency limit is applied to\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: if the concurrency limit was not created for any reason\n\n    Returns:\n        the concurrency limit set on a specific tag\n    \"\"\"\n    try:\n        response = await self._client.get(\n            f\"/concurrency_limits/tag/{tag}\",\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n\n    concurrency_limit_id = response.json().get(\"id\")\n\n    if not concurrency_limit_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    concurrency_limit = ConcurrencyLimit.parse_obj(response.json())\n    return concurrency_limit\n
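A sketch reading the limit back for an illustrative tag and handling the 404 case:

import asyncio

from prefect import get_client
from prefect.exceptions import ObjectNotFound


async def show_limit() -> None:
    async with get_client() as client:
        try:
            limit = await client.read_concurrency_limit_by_tag("database")
            print(limit.tag, limit.concurrency_limit)
        except ObjectNotFound:
            print("No concurrency limit set for tag 'database'")


asyncio.run(show_limit())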
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_concurrency_limits","title":"read_concurrency_limits async","text":"

Lists concurrency limits set on task run tags.

Parameters:

Name Type Description Default limit int

the maximum number of concurrency limits returned

required offset int

the concurrency limit query offset

required

Returns:

Type Description

a list of concurrency limits

Source code in src/prefect/client/orchestration.py
async def read_concurrency_limits(\n    self,\n    limit: int,\n    offset: int,\n):\n\"\"\"\n    Lists concurrency limits set on task run tags.\n\n    Args:\n        limit: the maximum number of concurrency limits returned\n        offset: the concurrency limit query offset\n\n    Returns:\n        a list of concurrency limits\n    \"\"\"\n\n    body = {\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n\n    response = await self._client.post(\"/concurrency_limits/filter\", json=body)\n    return pydantic.parse_obj_as(List[ConcurrencyLimit], response.json())\n
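Both arguments are required here; a sketch listing the first page of limits:

import asyncio

from prefect import get_client


async def list_limits() -> None:
    async with get_client() as client:
        limits = await client.read_concurrency_limits(limit=25, offset=0)
        for cl in limits:
            print(cl.tag, cl.concurrency_limit)


asyncio.run(list_limits())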
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.reset_concurrency_limit_by_tag","title":"reset_concurrency_limit_by_tag async","text":"

Resets the concurrency limit slots set on a specific tag.

Parameters:

Name Type Description Default tag str

a tag the concurrency limit is applied to

required slot_override Optional[List[Union[UUID, str]]]

a list of task run IDs that are currently using a concurrency slot. Please check that any task run IDs included in slot_override are currently running; otherwise those concurrency slots will never be released.

None

Raises:

Type Description prefect.exceptions.ObjectNotFound

If request returns 404

httpx.RequestError

If request fails

Source code in src/prefect/client/orchestration.py
async def reset_concurrency_limit_by_tag(\n    self,\n    tag: str,\n    slot_override: Optional[List[Union[UUID, str]]] = None,\n):\n\"\"\"\n    Resets the concurrency limit slots set on a specific tag.\n\n    Args:\n        tag: a tag the concurrency limit is applied to\n        slot_override: a list of task run IDs that are currently using a\n            concurrency slot, please check that any task run IDs included in\n            `slot_override` are currently running, otherwise those concurrency\n            slots will never be released.\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    \"\"\"\n    if slot_override is not None:\n        slot_override = [str(slot) for slot in slot_override]\n\n    try:\n        await self._client.post(\n            f\"/concurrency_limits/tag/{tag}/reset\",\n            json=dict(slot_override=slot_override),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
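A sketch clearing all slots on an illustrative tag (pass slot_override only with task run ids that are still running):

import asyncio

from prefect import get_client


async def reset_limit() -> None:
    async with get_client() as client:
        # Omitting slot_override releases every slot held under the tag.
        await client.reset_concurrency_limit_by_tag("database")


asyncio.run(reset_limit())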
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_concurrency_limit_by_tag","title":"delete_concurrency_limit_by_tag async","text":"

Delete the concurrency limit set on a specific tag.

Parameters:

Name Type Description Default tag str

a tag the concurrency limit is applied to

required

Raises:

Type Description prefect.exceptions.ObjectNotFound

If request returns 404

httpx.RequestError

If request fails

Source code in src/prefect/client/orchestration.py
async def delete_concurrency_limit_by_tag(\n    self,\n    tag: str,\n):\n\"\"\"\n    Delete the concurrency limit set on a specific tag.\n\n    Args:\n        tag: a tag the concurrency limit is applied to\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    \"\"\"\n    try:\n        await self._client.delete(\n            f\"/concurrency_limits/tag/{tag}\",\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_work_queue","title":"create_work_queue async","text":"

Create a work queue.

Parameters:

Name Type Description Default name str

a unique name for the work queue

required tags Optional[List[str]]

DEPRECATED: an optional list of tags to filter on; only work scheduled with these tags will be included in the queue. This option will be removed on 2023-02-23.

None description Optional[str]

An optional description for the work queue.

None is_paused Optional[bool]

Whether or not the work queue is paused.

None concurrency_limit Optional[int]

An optional concurrency limit for the work queue.

None priority Optional[int]

The queue's priority. Lower values are higher priority (1 is the highest).

None work_pool_name Optional[str]

The name of the work pool to use for this queue.

None

Raises:

Type Description prefect.exceptions.ObjectAlreadyExists

If request returns 409

httpx.RequestError

If request fails

Returns:

Type Description WorkQueue

The created work queue

Source code in src/prefect/client/orchestration.py
async def create_work_queue(\n    self,\n    name: str,\n    tags: Optional[List[str]] = None,\n    description: Optional[str] = None,\n    is_paused: Optional[bool] = None,\n    concurrency_limit: Optional[int] = None,\n    priority: Optional[int] = None,\n    work_pool_name: Optional[str] = None,\n) -> WorkQueue:\n\"\"\"\n    Create a work queue.\n\n    Args:\n        name: a unique name for the work queue\n        tags: DEPRECATED: an optional list of tags to filter on; only work scheduled with these tags\n            will be included in the queue. This option will be removed on 2023-02-23.\n        description: An optional description for the work queue.\n        is_paused: Whether or not the work queue is paused.\n        concurrency_limit: An optional concurrency limit for the work queue.\n        priority: The queue's priority. Lower values are higher priority (1 is the highest).\n        work_pool_name: The name of the work pool to use for this queue.\n\n    Raises:\n        prefect.exceptions.ObjectAlreadyExists: If request returns 409\n        httpx.RequestError: If request fails\n\n    Returns:\n        The created work queue\n    \"\"\"\n    if tags:\n        warnings.warn(\n            (\n                \"The use of tags for creating work queue filters is deprecated.\"\n                \" This option will be removed on 2023-02-23.\"\n            ),\n            DeprecationWarning,\n        )\n        filter = QueueFilter(tags=tags)\n    else:\n        filter = None\n    create_model = WorkQueueCreate(name=name, filter=filter)\n    if description is not None:\n        create_model.description = description\n    if is_paused is not None:\n        create_model.is_paused = is_paused\n    if concurrency_limit is not None:\n        create_model.concurrency_limit = concurrency_limit\n    if priority is not None:\n        create_model.priority = priority\n\n    data = create_model.dict(json_compatible=True)\n    try:\n        if work_pool_name is not None:\n            response = await self._client.post(\n                f\"/work_pools/{work_pool_name}/queues\", json=data\n            )\n        else:\n            response = await self._client.post(\"/work_queues/\", json=data)\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        elif e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return WorkQueue.parse_obj(response.json())\n
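A sketch (queue and pool names are hypothetical); a 409 from the API is raised as ObjectAlreadyExists:

import asyncio

from prefect import get_client
from prefect.exceptions import ObjectAlreadyExists


async def make_queue() -> None:
    async with get_client() as client:
        try:
            queue = await client.create_work_queue(
                name="high-priority",
                work_pool_name="my-process-pool",  # hypothetical work pool
                priority=1,
                concurrency_limit=5,
            )
            print(f"Created work queue {queue.name} ({queue.id})")
        except ObjectAlreadyExists:
            print("Work queue already exists")


asyncio.run(make_queue())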
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queue_by_name","title":"read_work_queue_by_name async","text":"

Read a work queue by name.

Parameters:

Name Type Description Default name str

a unique name for the work queue

required work_pool_name str

the name of the work pool the queue belongs to.

None

Raises:

Type Description prefect.exceptions.ObjectNotFound

if no work queue is found

httpx.HTTPStatusError

other status errors

Returns:

Name Type Description WorkQueue WorkQueue

a work queue API object

Source code in src/prefect/client/orchestration.py
async def read_work_queue_by_name(\n    self,\n    name: str,\n    work_pool_name: Optional[str] = None,\n) -> WorkQueue:\n\"\"\"\n    Read a work queue by name.\n\n    Args:\n        name (str): a unique name for the work queue\n        work_pool_name (str, optional): the name of the work pool\n            the queue belongs to.\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: if no work queue is found\n        httpx.HTTPStatusError: other status errors\n\n    Returns:\n        WorkQueue: a work queue API object\n    \"\"\"\n    try:\n        if work_pool_name is not None:\n            response = await self._client.get(\n                f\"/work_pools/{work_pool_name}/queues/{name}\"\n            )\n        else:\n            response = await self._client.get(f\"/work_queues/name/{name}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n\n    return WorkQueue.parse_obj(response.json())\n
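A sketch (names are hypothetical) reading a queue scoped to a work pool:

import asyncio

from prefect import get_client
from prefect.exceptions import ObjectNotFound


async def show_queue() -> None:
    async with get_client() as client:
        try:
            queue = await client.read_work_queue_by_name(
                "high-priority", work_pool_name="my-process-pool"
            )
            print(queue.id, queue.name)
        except ObjectNotFound:
            print("No such work queue")


asyncio.run(show_queue())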
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_work_queue","title":"update_work_queue async","text":"

Update properties of a work queue.

Parameters:

Name Type Description Default id UUID

the ID of the work queue to update

required **kwargs

the fields to update

{}

Raises:

Type Description ValueError

if no kwargs are provided

prefect.exceptions.ObjectNotFound

if request returns 404

httpx.RequestError

if the request fails

Source code in src/prefect/client/orchestration.py
async def update_work_queue(self, id: UUID, **kwargs):\n\"\"\"\n    Update properties of a work queue.\n\n    Args:\n        id: the ID of the work queue to update\n        **kwargs: the fields to update\n\n    Raises:\n        ValueError: if no kwargs are provided\n        prefect.exceptions.ObjectNotFound: if request returns 404\n        httpx.RequestError: if the request fails\n\n    \"\"\"\n    if not kwargs:\n        raise ValueError(\"No fields provided to update.\")\n\n    data = WorkQueueUpdate(**kwargs).dict(json_compatible=True, exclude_unset=True)\n    try:\n        await self._client.patch(f\"/work_queues/{id}\", json=data)\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
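A sketch (placeholder id) pausing a queue; keyword arguments map onto WorkQueueUpdate fields, and is_paused is assumed here to be one of them based on the create_work_queue options above:

import asyncio
from uuid import UUID

from prefect import get_client

WORK_QUEUE_ID = UUID("00000000-0000-0000-0000-000000000000")  # placeholder queue id


async def pause_queue() -> None:
    async with get_client() as client:
        # is_paused is assumed to be a valid WorkQueueUpdate field.
        await client.update_work_queue(WORK_QUEUE_ID, is_paused=True)


asyncio.run(pause_queue())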
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.get_runs_in_work_queue","title":"get_runs_in_work_queue async","text":"

Read flow runs off a work queue.

Parameters:

Name Type Description Default id UUID

the id of the work queue to read from

required limit int

a limit on the number of runs to return

10 scheduled_before datetime.datetime

a timestamp; only runs scheduled before this time will be returned. Defaults to now.

None

Raises:

Type Description prefect.exceptions.ObjectNotFound

If request returns 404

httpx.RequestError

If request fails

Returns:

Type Description List[FlowRun]

List[FlowRun]: a list of FlowRun objects read from the queue

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def get_runs_in_work_queue(\n    self,\n    id: UUID,\n    limit: int = 10,\n    scheduled_before: datetime.datetime = None,\n) -> List[FlowRun]:\n\"\"\"\n    Read flow runs off a work queue.\n\n    Args:\n        id: the id of the work queue to read from\n        limit: a limit on the number of runs to return\n        scheduled_before: a timestamp; only runs scheduled before this time will be returned.\n            Defaults to now.\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    Returns:\n        List[FlowRun]: a list of FlowRun objects read from the queue\n    \"\"\"\n    if scheduled_before is None:\n        scheduled_before = pendulum.now(\"UTC\")\n\n    try:\n        response = await self._client.post(\n            f\"/work_queues/{id}/get_runs\",\n            json={\n                \"limit\": limit,\n                \"scheduled_before\": scheduled_before.isoformat(),\n            },\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return pydantic.parse_obj_as(List[FlowRun], response.json())\n
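
Example usage, as a minimal sketch; queue_id is assumed to be the UUID of an existing work queue:

import pendulum\nfrom prefect import get_client\n\n\nasync def example(queue_id):\n    async with get_client() as client:\n        runs = await client.get_runs_in_work_queue(\n            id=queue_id,\n            limit=5,\n            scheduled_before=pendulum.now(\"UTC\"),\n        )\n        print([run.name for run in runs])\n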
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queue","title":"read_work_queue async","text":"

Read a work queue.

Parameters:

Name Type Description Default id UUID

the id of the work queue to load

required

Raises:

Type Description prefect.exceptions.ObjectNotFound

If request returns 404

httpx.RequestError

If request fails

Returns:

Name Type Description WorkQueue WorkQueue

an instantiated WorkQueue object

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_work_queue(\n    self,\n    id: UUID,\n) -> WorkQueue:\n\"\"\"\n    Read a work queue.\n\n    Args:\n        id: the id of the work queue to load\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    Returns:\n        WorkQueue: an instantiated WorkQueue object\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/work_queues/{id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return WorkQueue.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.match_work_queues","title":"match_work_queues async","text":"

Query the Prefect API for work queues whose names start with a specific prefix.

Parameters:

Name Type Description Default prefixes List[str]

a list of strings used to match work queue name prefixes

required

Returns:

Type Description List[WorkQueue]

a list of WorkQueue model representations of the work queues

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def match_work_queues(\n    self,\n    prefixes: List[str],\n) -> List[WorkQueue]:\n\"\"\"\n    Query the Prefect API for work queues with names with a specific prefix.\n\n    Args:\n        prefixes: a list of strings used to match work queue name prefixes\n\n    Returns:\n        a list of WorkQueue model representations\n            of the work queues\n    \"\"\"\n    page_length = 100\n    current_page = 0\n    work_queues = []\n\n    while True:\n        new_queues = await self.read_work_queues(\n            offset=current_page * page_length,\n            limit=page_length,\n            work_queue_filter=WorkQueueFilter(\n                name=WorkQueueFilterName(startswith_=prefixes)\n            ),\n        )\n        if not new_queues:\n            break\n        work_queues += new_queues\n        current_page += 1\n\n    return work_queues\n
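
Example usage, as a minimal sketch; the \"etl-\" prefix is an assumed naming convention:

from prefect import get_client\n\n\nasync def example():\n    async with get_client() as client:\n        queues = await client.match_work_queues([\"etl-\"])\n        print([queue.name for queue in queues])\n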
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_work_queue_by_id","title":"delete_work_queue_by_id async","text":"

Delete a work queue by its ID.

Parameters:

Name Type Description Default id UUID

the id of the work queue to delete

required

Raises:

Type Description prefect.exceptions.ObjectNotFound

If request returns 404

httpx.RequestError

If the request fails

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def delete_work_queue_by_id(\n    self,\n    id: UUID,\n):\n\"\"\"\n    Delete a work queue by its ID.\n\n    Args:\n        id: the id of the work queue to delete\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If requests fails\n    \"\"\"\n    try:\n        await self._client.delete(\n            f\"/work_queues/{id}\",\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_block_type","title":"create_block_type async","text":"

Create a block type in the Prefect API.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_block_type(self, block_type: BlockTypeCreate) -> BlockType:\n\"\"\"\n    Create a block type in the Prefect API.\n    \"\"\"\n    try:\n        response = await self._client.post(\n            \"/block_types/\",\n            json=block_type.dict(\n                json_compatible=True, exclude_unset=True, exclude={\"id\"}\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        else:\n            raise\n    return BlockType.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_block_schema","title":"create_block_schema async","text":"

Create a block schema in the Prefect API.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_block_schema(self, block_schema: BlockSchemaCreate) -> BlockSchema:\n\"\"\"\n    Create a block schema in the Prefect API.\n    \"\"\"\n    try:\n        response = await self._client.post(\n            \"/block_schemas/\",\n            json=block_schema.dict(\n                json_compatible=True,\n                exclude_unset=True,\n                exclude={\"id\", \"block_type\", \"checksum\"},\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        else:\n            raise\n    return BlockSchema.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_block_document","title":"create_block_document async","text":"

Create a block document in the Prefect API. This data is used to configure a corresponding Block.

Parameters:

Name Type Description Default include_secrets bool

whether to include secret values on the stored Block, corresponding to Pydantic's SecretStr and SecretBytes fields. Note Blocks may not work as expected if this is set to False.

True Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_block_document(\n    self,\n    block_document: Union[BlockDocument, BlockDocumentCreate],\n    include_secrets: bool = True,\n) -> BlockDocument:\n\"\"\"\n    Create a block document in the Prefect API. This data is used to configure a\n    corresponding Block.\n\n    Args:\n        include_secrets (bool): whether to include secret values\n            on the stored Block, corresponding to Pydantic's `SecretStr` and\n            `SecretBytes` fields. Note Blocks may not work as expected if\n            this is set to `False`.\n    \"\"\"\n    if isinstance(block_document, BlockDocument):\n        block_document = BlockDocumentCreate.parse_obj(\n            block_document.dict(\n                json_compatible=True,\n                include_secrets=include_secrets,\n                exclude_unset=True,\n                exclude={\"id\", \"block_schema\", \"block_type\"},\n            ),\n        )\n\n    try:\n        response = await self._client.post(\n            \"/block_documents/\",\n            json=block_document.dict(\n                json_compatible=True,\n                include_secrets=include_secrets,\n                exclude_unset=True,\n                exclude={\"id\", \"block_schema\", \"block_type\"},\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        else:\n            raise\n    return BlockDocument.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_block_document","title":"update_block_document async","text":"

Update a block document in the Prefect API.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def update_block_document(\n    self,\n    block_document_id: UUID,\n    block_document: BlockDocumentUpdate,\n):\n\"\"\"\n    Update a block document in the Prefect API.\n    \"\"\"\n    try:\n        await self._client.patch(\n            f\"/block_documents/{block_document_id}\",\n            json=block_document.dict(\n                json_compatible=True,\n                exclude_unset=True,\n                include={\"data\", \"merge_existing_data\", \"block_schema_id\"},\n                include_secrets=True,\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_block_document","title":"delete_block_document async","text":"

Delete a block document.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def delete_block_document(self, block_document_id: UUID):\n\"\"\"\n    Delete a block document.\n    \"\"\"\n    try:\n        await self._client.delete(f\"/block_documents/{block_document_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_type_by_slug","title":"read_block_type_by_slug async","text":"

Read a block type by its slug.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_block_type_by_slug(self, slug: str) -> BlockType:\n\"\"\"\n    Read a block type by its slug.\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/block_types/slug/{slug}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return BlockType.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_schema_by_checksum","title":"read_block_schema_by_checksum async","text":"

Look up a block schema by its checksum

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_block_schema_by_checksum(\n    self, checksum: str, version: Optional[str] = None\n) -> BlockSchema:\n\"\"\"\n    Look up a block schema checksum\n    \"\"\"\n    try:\n        url = f\"/block_schemas/checksum/{checksum}\"\n        if version is not None:\n            url = f\"{url}?version={version}\"\n        response = await self._client.get(url)\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return BlockSchema.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.update_block_type","title":"update_block_type async","text":"

Update a block type in the Prefect API.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def update_block_type(self, block_type_id: UUID, block_type: BlockTypeUpdate):\n\"\"\"\n    Update a block document in the Prefect API.\n    \"\"\"\n    try:\n        await self._client.patch(\n            f\"/block_types/{block_type_id}\",\n            json=block_type.dict(\n                json_compatible=True,\n                exclude_unset=True,\n                include=BlockTypeUpdate.updatable_fields(),\n                include_secrets=True,\n            ),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_block_type","title":"delete_block_type async","text":"

Delete a block type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def delete_block_type(self, block_type_id: UUID):\n\"\"\"\n    Delete a block type.\n    \"\"\"\n    try:\n        await self._client.delete(f\"/block_types/{block_type_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        elif (\n            e.response.status_code == status.HTTP_403_FORBIDDEN\n            and e.response.json()[\"detail\"]\n            == \"protected block types cannot be deleted.\"\n        ):\n            raise prefect.exceptions.ProtectedBlockError(\n                \"Protected block types cannot be deleted.\"\n            ) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_types","title":"read_block_types async","text":"

Read all block types

Raises:

Type Description httpx.RequestError

if the block types were not found

Returns:

Type Description List[BlockType]

List of BlockTypes.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_block_types(self) -> List[BlockType]:\n\"\"\"\n    Read all block types\n    Raises:\n        httpx.RequestError: if the block types were not found\n\n    Returns:\n        List of BlockTypes.\n    \"\"\"\n    response = await self._client.post(\"/block_types/filter\", json={})\n    return pydantic.parse_obj_as(List[BlockType], response.json())\n
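
Example usage, as a minimal sketch:

from prefect import get_client\n\n\nasync def example():\n    async with get_client() as client:\n        block_types = await client.read_block_types()\n        print([block_type.slug for block_type in block_types])\n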
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_schemas","title":"read_block_schemas async","text":"

Read all block schemas

Raises:

Type Description httpx.RequestError

if a valid block schema was not found

Returns:

Type Description List[BlockSchema]

A list of BlockSchemas.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_block_schemas(self) -> List[BlockSchema]:\n\"\"\"\n    Read all block schemas\n    Raises:\n        httpx.RequestError: if a valid block schema was not found\n\n    Returns:\n        A BlockSchema.\n    \"\"\"\n    response = await self._client.post(\"/block_schemas/filter\", json={})\n    return pydantic.parse_obj_as(List[BlockSchema], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_document","title":"read_block_document async","text":"

Read the block document with the specified ID.

Parameters:

Name Type Description Default block_document_id UUID

the block document id

required include_secrets bool

whether to include secret values on the Block, corresponding to Pydantic's SecretStr and SecretBytes fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is False.

True

Raises:

Type Description httpx.RequestError

if the block document was not found for any reason

Returns:

Type Description

A block document or None.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_block_document(\n    self,\n    block_document_id: UUID,\n    include_secrets: bool = True,\n):\n\"\"\"\n    Read the block document with the specified ID.\n\n    Args:\n        block_document_id: the block document id\n        include_secrets (bool): whether to include secret values\n            on the Block, corresponding to Pydantic's `SecretStr` and\n            `SecretBytes` fields. These fields are automatically obfuscated\n            by Pydantic, but users can additionally choose not to receive\n            their values from the API. Note that any business logic on the\n            Block may not work if this is `False`.\n\n    Raises:\n        httpx.RequestError: if the block document was not found for any reason\n\n    Returns:\n        A block document or None.\n    \"\"\"\n    try:\n        response = await self._client.get(\n            f\"/block_documents/{block_document_id}\",\n            params=dict(include_secrets=include_secrets),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return BlockDocument.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_document_by_name","title":"read_block_document_by_name async","text":"

Read the block document with the specified name that corresponds to a specific block type slug.

Parameters:

Name Type Description Default name str

The block document name.

required block_type_slug str

The block type slug.

required include_secrets bool

whether to include secret values on the Block, corresponding to Pydantic's SecretStr and SecretBytes fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is False.

True

Raises:

Type Description httpx.RequestError

if the block document was not found for any reason

Returns:

Type Description

A block document or None.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_block_document_by_name(\n    self,\n    name: str,\n    block_type_slug: str,\n    include_secrets: bool = True,\n):\n\"\"\"\n    Read the block document with the specified name that corresponds to a\n    specific block type name.\n\n    Args:\n        name: The block document name.\n        block_type_slug: The block type slug.\n        include_secrets (bool): whether to include secret values\n            on the Block, corresponding to Pydantic's `SecretStr` and\n            `SecretBytes` fields. These fields are automatically obfuscated\n            by Pydantic, but users can additionally choose not to receive\n            their values from the API. Note that any business logic on the\n            Block may not work if this is `False`.\n\n    Raises:\n        httpx.RequestError: if the block document was not found for any reason\n\n    Returns:\n        A block document or None.\n    \"\"\"\n    try:\n        response = await self._client.get(\n            f\"/block_types/slug/{block_type_slug}/block_documents/name/{name}\",\n            params=dict(include_secrets=include_secrets),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return BlockDocument.parse_obj(response.json())\n
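
Example usage, as a minimal sketch; it assumes a Secret block named \"my-secret\" has already been saved:

from prefect import get_client\n\n\nasync def example():\n    async with get_client() as client:\n        document = await client.read_block_document_by_name(\n            name=\"my-secret\", block_type_slug=\"secret\"\n        )\n        print(document.id)\n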
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_block_documents","title":"read_block_documents async","text":"

Read block documents

Parameters:

Name Type Description Default block_schema_type Optional[str]

an optional block schema type

None offset Optional[int]

an offset

None limit Optional[int]

the number of blocks to return

None include_secrets bool

whether to include secret values on the Block, corresponding to Pydantic's SecretStr and SecretBytes fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is False.

True

Returns:

Type Description

A list of block documents

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_block_documents(\n    self,\n    block_schema_type: Optional[str] = None,\n    offset: Optional[int] = None,\n    limit: Optional[int] = None,\n    include_secrets: bool = True,\n):\n\"\"\"\n    Read block documents\n\n    Args:\n        block_schema_type: an optional block schema type\n        offset: an offset\n        limit: the number of blocks to return\n        include_secrets (bool): whether to include secret values\n            on the Block, corresponding to Pydantic's `SecretStr` and\n            `SecretBytes` fields. These fields are automatically obfuscated\n            by Pydantic, but users can additionally choose not to receive\n            their values from the API. Note that any business logic on the\n            Block may not work if this is `False`.\n\n    Returns:\n        A list of block documents\n    \"\"\"\n    response = await self._client.post(\n        \"/block_documents/filter\",\n        json=dict(\n            block_schema_type=block_schema_type,\n            offset=offset,\n            limit=limit,\n            include_secrets=include_secrets,\n        ),\n    )\n    return pydantic.parse_obj_as(List[BlockDocument], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_deployment","title":"create_deployment async","text":"

Create a deployment.

Parameters:

Name Type Description Default flow_id UUID

the flow ID to create a deployment for

required name str

the name of the deployment

required version str

an optional version string for the deployment

None schedule SCHEDULE_TYPES

an optional schedule to apply to the deployment

None tags List[str]

an optional list of tags to apply to the deployment

None storage_document_id UUID

a reference to the storage block document used for the deployed flow

None infrastructure_document_id UUID

a reference to the infrastructure block document to use for this deployment

None

Raises:

Type Description httpx.RequestError

if the deployment was not created for any reason

Returns:

Type Description UUID

the ID of the deployment in the backend

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_deployment(\n    self,\n    flow_id: UUID,\n    name: str,\n    version: str = None,\n    schedule: SCHEDULE_TYPES = None,\n    parameters: Dict[str, Any] = None,\n    description: str = None,\n    work_queue_name: str = None,\n    work_pool_name: str = None,\n    tags: List[str] = None,\n    storage_document_id: UUID = None,\n    manifest_path: str = None,\n    path: str = None,\n    entrypoint: str = None,\n    infrastructure_document_id: UUID = None,\n    infra_overrides: Dict[str, Any] = None,\n    parameter_openapi_schema: dict = None,\n    is_schedule_active: Optional[bool] = None,\n    pull_steps: Optional[List[dict]] = None,\n) -> UUID:\n\"\"\"\n    Create a deployment.\n\n    Args:\n        flow_id: the flow ID to create a deployment for\n        name: the name of the deployment\n        version: an optional version string for the deployment\n        schedule: an optional schedule to apply to the deployment\n        tags: an optional list of tags to apply to the deployment\n        storage_document_id: an reference to the storage block document\n            used for the deployed flow\n        infrastructure_document_id: an reference to the infrastructure block document\n            to use for this deployment\n\n    Raises:\n        httpx.RequestError: if the deployment was not created for any reason\n\n    Returns:\n        the ID of the deployment in the backend\n    \"\"\"\n    deployment_create = DeploymentCreate(\n        flow_id=flow_id,\n        name=name,\n        version=version,\n        schedule=schedule,\n        parameters=dict(parameters or {}),\n        tags=list(tags or []),\n        work_queue_name=work_queue_name,\n        description=description,\n        storage_document_id=storage_document_id,\n        path=path,\n        entrypoint=entrypoint,\n        manifest_path=manifest_path,  # for backwards compat\n        infrastructure_document_id=infrastructure_document_id,\n        infra_overrides=infra_overrides or {},\n        parameter_openapi_schema=parameter_openapi_schema,\n        is_schedule_active=is_schedule_active,\n        pull_steps=pull_steps,\n    )\n\n    if work_pool_name is not None:\n        deployment_create.work_pool_name = work_pool_name\n\n    # Exclude newer fields that are not set to avoid compatibility issues\n    exclude = {\n        field\n        for field in [\"work_pool_name\", \"work_queue_name\"]\n        if field not in deployment_create.__fields_set__\n    }\n\n    if deployment_create.is_schedule_active is None:\n        exclude.add(\"is_schedule_active\")\n\n    if deployment_create.pull_steps is None:\n        exclude.add(\"pull_steps\")\n\n    json = deployment_create.dict(json_compatible=True, exclude=exclude)\n    response = await self._client.post(\n        \"/deployments/\",\n        json=json,\n    )\n    deployment_id = response.json().get(\"id\")\n    if not deployment_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    return UUID(deployment_id)\n
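
Example usage, as a minimal sketch; flow_id is assumed to come from an earlier registration of the flow, and \"default\" is an assumed work queue:

from prefect import get_client\n\n\nasync def example(flow_id):\n    async with get_client() as client:\n        deployment_id = await client.create_deployment(\n            flow_id=flow_id,\n            name=\"my-deployment\",\n            work_queue_name=\"default\",\n        )\n        print(deployment_id)\n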
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployment","title":"read_deployment async","text":"

Query the Prefect API for a deployment by id.

Parameters:

Name Type Description Default deployment_id UUID

the deployment ID of interest

required

Returns:

Type Description DeploymentResponse

a Deployment model representation of the deployment

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_deployment(\n    self,\n    deployment_id: UUID,\n) -> DeploymentResponse:\n\"\"\"\n    Query the Prefect API for a deployment by id.\n\n    Args:\n        deployment_id: the deployment ID of interest\n\n    Returns:\n        a [Deployment model][prefect.client.schemas.objects.Deployment] representation of the deployment\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/deployments/{deployment_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return DeploymentResponse.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployment_by_name","title":"read_deployment_by_name async","text":"

Query the Prefect API for a deployment by name.

Parameters:

Name Type Description Default name str

A deployed flow's name, in the form FLOW_NAME/DEPLOYMENT_NAME required

Raises:

Type Description prefect.exceptions.ObjectNotFound

If request returns 404

httpx.RequestError

If request fails

Returns:

Type Description DeploymentResponse

a Deployment model representation of the deployment

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_deployment_by_name(\n    self,\n    name: str,\n) -> DeploymentResponse:\n\"\"\"\n    Query the Prefect API for a deployment by name.\n\n    Args:\n        name: A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>\n\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If request fails\n\n    Returns:\n        a Deployment model representation of the deployment\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/deployments/name/{name}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n\n    return DeploymentResponse.parse_obj(response.json())\n
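
Example usage, as a minimal sketch; \"repo-info/my-deployment\" is an assumed flow name/deployment name pair:

from prefect import get_client\n\n\nasync def example():\n    async with get_client() as client:\n        deployment = await client.read_deployment_by_name(\"repo-info/my-deployment\")\n        print(deployment.id)\n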
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_deployments","title":"read_deployments async","text":"

Query the Prefect API for deployments. Only deployments matching all the provided criteria will be returned.

Parameters:

Name Type Description Default flow_filter FlowFilter

filter criteria for flows

None flow_run_filter FlowRunFilter

filter criteria for flow runs

None task_run_filter TaskRunFilter

filter criteria for task runs

None deployment_filter DeploymentFilter

filter criteria for deployments

None work_pool_filter WorkPoolFilter

filter criteria for work pools

None work_queue_filter WorkQueueFilter

filter criteria for work pool queues

None limit int

a limit for the deployment query

None offset int

an offset for the deployment query

0

Returns:

Type Description List[DeploymentResponse]

a list of Deployment model representations of the deployments

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_deployments(\n    self,\n    *,\n    flow_filter: FlowFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    deployment_filter: DeploymentFilter = None,\n    work_pool_filter: WorkPoolFilter = None,\n    work_queue_filter: WorkQueueFilter = None,\n    limit: int = None,\n    sort: DeploymentSort = None,\n    offset: int = 0,\n) -> List[DeploymentResponse]:\n\"\"\"\n    Query the Prefect API for deployments. Only deployments matching all\n    the provided criteria will be returned.\n\n    Args:\n        flow_filter: filter criteria for flows\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        deployment_filter: filter criteria for deployments\n        work_pool_filter: filter criteria for work pools\n        work_queue_filter: filter criteria for work pool queues\n        limit: a limit for the deployment query\n        offset: an offset for the deployment query\n\n    Returns:\n        a list of Deployment model representations\n            of the deployments\n    \"\"\"\n    body = {\n        \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n            if flow_run_filter\n            else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"deployments\": (\n            deployment_filter.dict(json_compatible=True)\n            if deployment_filter\n            else None\n        ),\n        \"work_pools\": (\n            work_pool_filter.dict(json_compatible=True)\n            if work_pool_filter\n            else None\n        ),\n        \"work_pool_queues\": (\n            work_queue_filter.dict(json_compatible=True)\n            if work_queue_filter\n            else None\n        ),\n        \"limit\": limit,\n        \"offset\": offset,\n        \"sort\": sort,\n    }\n\n    response = await self._client.post(\"/deployments/filter\", json=body)\n    return pydantic.parse_obj_as(List[DeploymentResponse], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_deployment","title":"delete_deployment async","text":"

Delete deployment by id.

Parameters:

Name Type Description Default deployment_id UUID

The deployment id of interest.

required

Raises:

Type Description prefect.exceptions.ObjectNotFound

If request returns 404

httpx.RequestError

If the request fails

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def delete_deployment(\n    self,\n    deployment_id: UUID,\n):\n\"\"\"\n    Delete deployment by id.\n\n    Args:\n        deployment_id: The deployment id of interest.\n    Raises:\n        prefect.exceptions.ObjectNotFound: If request returns 404\n        httpx.RequestError: If requests fails\n    \"\"\"\n    try:\n        await self._client.delete(f\"/deployments/{deployment_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run","title":"read_flow_run async","text":"

Query the Prefect API for a flow run by id.

Parameters:

Name Type Description Default flow_run_id UUID

the flow run ID of interest

required

Returns:

Type Description FlowRun

a Flow Run model representation of the flow run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_flow_run(self, flow_run_id: UUID) -> FlowRun:\n\"\"\"\n    Query the Prefect API for a flow run by id.\n\n    Args:\n        flow_run_id: the flow run ID of interest\n\n    Returns:\n        a Flow Run model representation of the flow run\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/flow_runs/{flow_run_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n    return FlowRun.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.resume_flow_run","title":"resume_flow_run async","text":"

Resumes a paused flow run.

Parameters:

Name Type Description Default flow_run_id UUID

the flow run ID of interest

required

Returns:

Type Description OrchestrationResult

an OrchestrationResult model representation of state orchestration output

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def resume_flow_run(self, flow_run_id: UUID) -> OrchestrationResult:\n\"\"\"\n    Resumes a paused flow run.\n\n    Args:\n        flow_run_id: the flow run ID of interest\n\n    Returns:\n        an OrchestrationResult model representation of state orchestration output\n    \"\"\"\n    try:\n        response = await self._client.post(f\"/flow_runs/{flow_run_id}/resume\")\n    except httpx.HTTPStatusError:\n        raise\n\n    return OrchestrationResult.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_runs","title":"read_flow_runs async","text":"

Query the Prefect API for flow runs. Only flow runs matching all criteria will be returned.

Parameters:

Name Type Description Default flow_filter FlowFilter

filter criteria for flows

None flow_run_filter FlowRunFilter

filter criteria for flow runs

None task_run_filter TaskRunFilter

filter criteria for task runs

None deployment_filter DeploymentFilter

filter criteria for deployments

None work_pool_filter WorkPoolFilter

filter criteria for work pools

None work_queue_filter WorkQueueFilter

filter criteria for work pool queues

None sort FlowRunSort

sort criteria for the flow runs

None limit int

limit for the flow run query

None offset int

offset for the flow run query

0

Returns:

Type Description List[FlowRun]

a list of Flow Run model representations of the flow runs

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_flow_runs(\n    self,\n    *,\n    flow_filter: FlowFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    deployment_filter: DeploymentFilter = None,\n    work_pool_filter: WorkPoolFilter = None,\n    work_queue_filter: WorkQueueFilter = None,\n    sort: FlowRunSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[FlowRun]:\n\"\"\"\n    Query the Prefect API for flow runs. Only flow runs matching all criteria will\n    be returned.\n\n    Args:\n        flow_filter: filter criteria for flows\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        deployment_filter: filter criteria for deployments\n        work_pool_filter: filter criteria for work pools\n        work_queue_filter: filter criteria for work pool queues\n        sort: sort criteria for the flow runs\n        limit: limit for the flow run query\n        offset: offset for the flow run query\n\n    Returns:\n        a list of Flow Run model representations\n            of the flow runs\n    \"\"\"\n    body = {\n        \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n            if flow_run_filter\n            else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"deployments\": (\n            deployment_filter.dict(json_compatible=True)\n            if deployment_filter\n            else None\n        ),\n        \"work_pools\": (\n            work_pool_filter.dict(json_compatible=True)\n            if work_pool_filter\n            else None\n        ),\n        \"work_pool_queues\": (\n            work_queue_filter.dict(json_compatible=True)\n            if work_queue_filter\n            else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n\n    response = await self._client.post(\"/flow_runs/filter\", json=body)\n    return pydantic.parse_obj_as(List[FlowRun], response.json())\n
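
Example usage, as a minimal sketch; the filter classes are assumed to be importable from prefect.client.schemas.filters:

from prefect import get_client\nfrom prefect.client.schemas.filters import (\n    FlowRunFilter,\n    FlowRunFilterState,\n    FlowRunFilterStateName,\n)\n\n\nasync def example():\n    async with get_client() as client:\n        # only return runs currently in a Failed state\n        failed_filter = FlowRunFilter(\n            state=FlowRunFilterState(name=FlowRunFilterStateName(any_=[\"Failed\"]))\n        )\n        runs = await client.read_flow_runs(flow_run_filter=failed_filter, limit=10)\n        print(len(runs))\n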
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.set_flow_run_state","title":"set_flow_run_state async","text":"

Set the state of a flow run.

Parameters:

Name Type Description Default flow_run_id UUID

the id of the flow run

required state prefect.states.State

the state to set

required force bool

if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state

False

Returns:

Type Description OrchestrationResult

an OrchestrationResult model representation of state orchestration output

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def set_flow_run_state(\n    self,\n    flow_run_id: UUID,\n    state: \"prefect.states.State\",\n    force: bool = False,\n) -> OrchestrationResult:\n\"\"\"\n    Set the state of a flow run.\n\n    Args:\n        flow_run_id: the id of the flow run\n        state: the state to set\n        force: if True, disregard orchestration logic when setting the state,\n            forcing the Prefect API to accept the state\n\n    Returns:\n        an OrchestrationResult model representation of state orchestration output\n    \"\"\"\n    state_create = state.to_state_create()\n    state_create.state_details.flow_run_id = flow_run_id\n    try:\n        response = await self._client.post(\n            f\"/flow_runs/{flow_run_id}/set_state\",\n            json=dict(state=state_create.dict(json_compatible=True), force=force),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n\n    return OrchestrationResult.parse_obj(response.json())\n
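
Example usage, as a minimal sketch; flow_run_id is assumed to be the UUID of an existing flow run:

from prefect import get_client\nfrom prefect.states import Cancelled\n\n\nasync def example(flow_run_id):\n    async with get_client() as client:\n        result = await client.set_flow_run_state(\n            flow_run_id=flow_run_id, state=Cancelled(), force=True\n        )\n        print(result.status)\n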
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run_states","title":"read_flow_run_states async","text":"

Query for the states of a flow run

Parameters:

Name Type Description Default flow_run_id UUID

the id of the flow run

required

Returns:

Type Description List[prefect.states.State]

a list of State model representations of the flow run states

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_flow_run_states(\n    self, flow_run_id: UUID\n) -> List[prefect.states.State]:\n\"\"\"\n    Query for the states of a flow run\n\n    Args:\n        flow_run_id: the id of the flow run\n\n    Returns:\n        a list of State model representations\n            of the flow run states\n    \"\"\"\n    response = await self._client.get(\n        \"/flow_run_states/\", params=dict(flow_run_id=str(flow_run_id))\n    )\n    return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_task_run","title":"create_task_run async","text":"

Create a task run

Parameters:

Name Type Description Default task TaskObject

The Task to run

required flow_run_id UUID

The flow run id with which to associate the task run

required dynamic_key str

A key unique to this particular run of a Task within the flow

required name str

An optional name for the task run

None extra_tags Iterable[str]

an optional list of extra tags to apply to the task run in addition to task.tags

None state prefect.states.State

The initial state for the run. If not provided, defaults to Pending for now. Should always be a Scheduled type.

None task_inputs Dict[str, List[Union[TaskRunResult, Parameter, Constant]]]

the set of inputs passed to the task

None

Returns:

Type Description TaskRun

The created task run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_task_run(\n    self,\n    task: \"TaskObject\",\n    flow_run_id: UUID,\n    dynamic_key: str,\n    name: str = None,\n    extra_tags: Iterable[str] = None,\n    state: prefect.states.State = None,\n    task_inputs: Dict[\n        str,\n        List[\n            Union[\n                TaskRunResult,\n                Parameter,\n                Constant,\n            ]\n        ],\n    ] = None,\n) -> TaskRun:\n\"\"\"\n    Create a task run\n\n    Args:\n        task: The Task to run\n        flow_run_id: The flow run id with which to associate the task run\n        dynamic_key: A key unique to this particular run of a Task within the flow\n        name: An optional name for the task run\n        extra_tags: an optional list of extra tags to apply to the task run in\n            addition to `task.tags`\n        state: The initial state for the run. If not provided, defaults to\n            `Pending` for now. Should always be a `Scheduled` type.\n        task_inputs: the set of inputs passed to the task\n\n    Returns:\n        The created task run.\n    \"\"\"\n    tags = set(task.tags).union(extra_tags or [])\n\n    if state is None:\n        state = prefect.states.Pending()\n\n    task_run_data = TaskRunCreate(\n        name=name,\n        flow_run_id=flow_run_id,\n        task_key=task.task_key,\n        dynamic_key=dynamic_key,\n        tags=list(tags),\n        task_version=task.version,\n        empirical_policy=TaskRunPolicy(\n            retries=task.retries,\n            retry_delay=task.retry_delay_seconds,\n            retry_jitter_factor=task.retry_jitter_factor,\n        ),\n        state=state.to_state_create(),\n        task_inputs=task_inputs or {},\n    )\n\n    response = await self._client.post(\n        \"/task_runs/\", json=task_run_data.dict(json_compatible=True)\n    )\n    return TaskRun.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_task_run","title":"read_task_run async","text":"

Query the Prefect API for a task run by id.

Parameters:

Name Type Description Default task_run_id UUID

the task run ID of interest

required

Returns:

Type Description TaskRun

a Task Run model representation of the task run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_task_run(self, task_run_id: UUID) -> TaskRun:\n\"\"\"\n    Query the Prefect API for a task run by id.\n\n    Args:\n        task_run_id: the task run ID of interest\n\n    Returns:\n        a Task Run model representation of the task run\n    \"\"\"\n    response = await self._client.get(f\"/task_runs/{task_run_id}\")\n    return TaskRun.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_task_runs","title":"read_task_runs async","text":"

Query the Prefect API for task runs. Only task runs matching all criteria will be returned.

Parameters:

Name Type Description Default flow_filter FlowFilter

filter criteria for flows

None flow_run_filter FlowRunFilter

filter criteria for flow runs

None task_run_filter TaskRunFilter

filter criteria for task runs

None deployment_filter DeploymentFilter

filter criteria for deployments

None sort TaskRunSort

sort criteria for the task runs

None limit int

a limit for the task run query

None offset int

an offset for the task run query

0

Returns:

Type Description List[TaskRun]

a list of Task Run model representations of the task runs

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_task_runs(\n    self,\n    *,\n    flow_filter: FlowFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    deployment_filter: DeploymentFilter = None,\n    sort: TaskRunSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[TaskRun]:\n\"\"\"\n    Query the Prefect API for task runs. Only task runs matching all criteria will\n    be returned.\n\n    Args:\n        flow_filter: filter criteria for flows\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        deployment_filter: filter criteria for deployments\n        sort: sort criteria for the task runs\n        limit: a limit for the task run query\n        offset: an offset for the task run query\n\n    Returns:\n        a list of Task Run model representations\n            of the task runs\n    \"\"\"\n    body = {\n        \"flows\": flow_filter.dict(json_compatible=True) if flow_filter else None,\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True, exclude_unset=True)\n            if flow_run_filter\n            else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"deployments\": (\n            deployment_filter.dict(json_compatible=True)\n            if deployment_filter\n            else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n    response = await self._client.post(\"/task_runs/filter\", json=body)\n    return pydantic.parse_obj_as(List[TaskRun], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.set_task_run_state","title":"set_task_run_state async","text":"

Set the state of a task run.

Parameters:

Name Type Description Default task_run_id UUID

the id of the task run

required state prefect.states.State

the state to set

required force bool

if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state

False

Returns:

Type Description OrchestrationResult

an OrchestrationResult model representation of state orchestration output

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def set_task_run_state(\n    self,\n    task_run_id: UUID,\n    state: prefect.states.State,\n    force: bool = False,\n) -> OrchestrationResult:\n\"\"\"\n    Set the state of a task run.\n\n    Args:\n        task_run_id: the id of the task run\n        state: the state to set\n        force: if True, disregard orchestration logic when setting the state,\n            forcing the Prefect API to accept the state\n\n    Returns:\n        an OrchestrationResult model representation of state orchestration output\n    \"\"\"\n    state_create = state.to_state_create()\n    state_create.state_details.task_run_id = task_run_id\n    response = await self._client.post(\n        f\"/task_runs/{task_run_id}/set_state\",\n        json=dict(state=state_create.dict(json_compatible=True), force=force),\n    )\n    return OrchestrationResult.parse_obj(response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_task_run_states","title":"read_task_run_states async","text":"

Query for the states of a task run

Parameters:

Name Type Description Default task_run_id UUID

the id of the task run

required

Returns:

Type Description List[prefect.states.State]

a list of State model representations of the task run states

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_task_run_states(\n    self, task_run_id: UUID\n) -> List[prefect.states.State]:\n\"\"\"\n    Query for the states of a task run\n\n    Args:\n        task_run_id: the id of the task run\n\n    Returns:\n        a list of State model representations of the task run states\n    \"\"\"\n    response = await self._client.get(\n        \"/task_run_states/\", params=dict(task_run_id=str(task_run_id))\n    )\n    return pydantic.parse_obj_as(List[prefect.states.State], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_logs","title":"create_logs async","text":"

Create logs for a flow or task run

Parameters:

Name Type Description Default logs Iterable[Union[LogCreate, dict]]

An iterable of LogCreate objects or already json-compatible dicts

required Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_logs(self, logs: Iterable[Union[LogCreate, dict]]) -> None:\n\"\"\"\n    Create logs for a flow or task run\n\n    Args:\n        logs: An iterable of `LogCreate` objects or already json-compatible dicts\n    \"\"\"\n    serialized_logs = [\n        log.dict(json_compatible=True) if isinstance(log, LogCreate) else log\n        for log in logs\n    ]\n    await self._client.post(\"/logs/\", json=serialized_logs)\n
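
Example usage, as a minimal sketch; flow_run_id is assumed to be the UUID of an existing flow run, and the LogCreate import path is an assumption:

import pendulum\nfrom prefect import get_client\nfrom prefect.client.schemas.actions import LogCreate\n\n\nasync def example(flow_run_id):\n    async with get_client() as client:\n        log = LogCreate(\n            name=\"my.logger\",\n            level=20,  # INFO\n            message=\"hello from the client\",\n            timestamp=pendulum.now(\"UTC\"),\n            flow_run_id=flow_run_id,\n        )\n        await client.create_logs([log])\n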
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_flow_run_notification_policy","title":"create_flow_run_notification_policy async","text":"

Create a notification policy for flow runs

Parameters:

Name Type Description Default block_document_id UUID

The block document UUID

required is_active bool

Whether the notification policy is active

True tags List[str]

List of flow tags

None state_names List[str]

List of state names

None message_template Optional[str]

Notification message template

None Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_flow_run_notification_policy(\n    self,\n    block_document_id: UUID,\n    is_active: bool = True,\n    tags: List[str] = None,\n    state_names: List[str] = None,\n    message_template: Optional[str] = None,\n) -> UUID:\n\"\"\"\n    Create a notification policy for flow runs\n\n    Args:\n        block_document_id: The block document UUID\n        is_active: Whether the notification policy is active\n        tags: List of flow tags\n        state_names: List of state names\n        message_template: Notification message template\n    \"\"\"\n    if tags is None:\n        tags = []\n    if state_names is None:\n        state_names = []\n\n    policy = FlowRunNotificationPolicyCreate(\n        block_document_id=block_document_id,\n        is_active=is_active,\n        tags=tags,\n        state_names=state_names,\n        message_template=message_template,\n    )\n    response = await self._client.post(\n        \"/flow_run_notification_policies/\",\n        json=policy.dict(json_compatible=True),\n    )\n\n    policy_id = response.json().get(\"id\")\n    if not policy_id:\n        raise httpx.RequestError(f\"Malformed response: {response}\")\n\n    return UUID(policy_id)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_flow_run_notification_policies","title":"read_flow_run_notification_policies async","text":"

Query the Prefect API for flow run notification policies. Only policies matching all criteria will be returned.

Parameters:

Name Type Description Default flow_run_notification_policy_filter FlowRunNotificationPolicyFilter

filter criteria for notification policies

required limit Optional[int]

a limit for the notification policies query

None offset int

an offset for the notification policies query

0

Returns:

Type Description List[FlowRunNotificationPolicy]

a list of FlowRunNotificationPolicy model representations of the notification policies

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_flow_run_notification_policies(\n    self,\n    flow_run_notification_policy_filter: FlowRunNotificationPolicyFilter,\n    limit: Optional[int] = None,\n    offset: int = 0,\n) -> List[FlowRunNotificationPolicy]:\n\"\"\"\n    Query the Prefect API for flow run notification policies. Only policies matching all criteria will\n    be returned.\n\n    Args:\n        flow_run_notification_policy_filter: filter criteria for notification policies\n        limit: a limit for the notification policies query\n        offset: an offset for the notification policies query\n\n    Returns:\n        a list of FlowRunNotificationPolicy model representations\n            of the notification policies\n    \"\"\"\n    body = {\n        \"flow_run_notification_policy_filter\": (\n            flow_run_notification_policy_filter.dict(json_compatible=True)\n            if flow_run_notification_policy_filter\n            else None\n        ),\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n    response = await self._client.post(\n        \"/flow_run_notification_policies/filter\", json=body\n    )\n    return pydantic.parse_obj_as(List[FlowRunNotificationPolicy], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_logs","title":"read_logs async","text":"

Read flow and task run logs.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_logs(\n    self,\n    log_filter: LogFilter = None,\n    limit: int = None,\n    offset: int = None,\n    sort: LogSort = LogSort.TIMESTAMP_ASC,\n) -> List[Log]:\n\"\"\"\n    Read flow and task run logs.\n    \"\"\"\n    body = {\n        \"logs\": log_filter.dict(json_compatible=True) if log_filter else None,\n        \"limit\": limit,\n        \"offset\": offset,\n        \"sort\": sort,\n    }\n\n    response = await self._client.post(\"/logs/filter\", json=body)\n    return pydantic.parse_obj_as(List[Log], response.json())\n
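As a rough sketch of how this might be called (it assumes a reachable Prefect API; omitting `log_filter` returns logs across all runs):

```python
import asyncio

from prefect.client.orchestration import get_client


async def main():
    async with get_client() as client:
        # With no filter, this returns logs across all flow and task runs
        logs = await client.read_logs(limit=10)
        for log in logs:
            print(log.timestamp, log.level, log.message)


if __name__ == "__main__":
    asyncio.run(main())
```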
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.resolve_datadoc","title":"resolve_datadoc async","text":"

Recursively decode possibly nested data documents.

\"server\" encoded documents will be retrieved from the server.

Parameters:

Name Type Description Default datadoc DataDocument

The data document to resolve

required

Returns:

Type Description Any

a decoded object, the innermost data

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def resolve_datadoc(self, datadoc: DataDocument) -> Any:\n\"\"\"\n    Recursively decode possibly nested data documents.\n\n    \"server\" encoded documents will be retrieved from the server.\n\n    Args:\n        datadoc: The data document to resolve\n\n    Returns:\n        a decoded object, the innermost data\n    \"\"\"\n    if not isinstance(datadoc, DataDocument):\n        raise TypeError(\n            f\"`resolve_datadoc` received invalid type {type(datadoc).__name__}\"\n        )\n\n    async def resolve_inner(data):\n        if isinstance(data, bytes):\n            try:\n                data = DataDocument.parse_raw(data)\n            except pydantic.ValidationError:\n                return data\n\n        if isinstance(data, DataDocument):\n            return await resolve_inner(data.decode())\n\n        return data\n\n    return await resolve_inner(datadoc)\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.send_worker_heartbeat","title":"send_worker_heartbeat async","text":"

Sends a worker heartbeat for a given work pool.

Parameters:

Name Type Description Default work_pool_name str

The name of the work pool to heartbeat against.

required worker_name str

The name of the worker sending the heartbeat.

required Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def send_worker_heartbeat(self, work_pool_name: str, worker_name: str):\n\"\"\"\n    Sends a worker heartbeat for a given work pool.\n\n    Args:\n        work_pool_name: The name of the work pool to heartbeat against.\n        worker_name: The name of the worker sending the heartbeat.\n    \"\"\"\n    await self._client.post(\n        f\"/work_pools/{work_pool_name}/workers/heartbeat\",\n        json={\"name\": worker_name},\n    )\n
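A small sketch of a manual heartbeat; the work pool and worker names are placeholders (a running worker normally sends these on a schedule for you):

```python
import asyncio

from prefect.client.orchestration import get_client


async def main():
    async with get_client() as client:
        await client.send_worker_heartbeat(
            work_pool_name="my-work-pool",  # placeholder pool name
            worker_name="worker-1",         # placeholder worker name
        )


if __name__ == "__main__":
    asyncio.run(main())
```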
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_workers_for_work_pool","title":"read_workers_for_work_pool async","text":"

Reads workers for a given work pool.

Parameters:

Name Type Description Default work_pool_name str

The name of the work pool for which to get member workers.

required worker_filter Optional[WorkerFilter]

Criteria by which to filter workers.

None limit Optional[int]

Limit for the worker query.

None offset Optional[int]

Offset for the worker query.

None Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_workers_for_work_pool(\n    self,\n    work_pool_name: str,\n    worker_filter: Optional[WorkerFilter] = None,\n    offset: Optional[int] = None,\n    limit: Optional[int] = None,\n) -> List[Worker]:\n\"\"\"\n    Reads workers for a given work pool.\n\n    Args:\n        work_pool_name: The name of the work pool for which to get\n            member workers.\n        worker_filter: Criteria by which to filter workers.\n        limit: Limit for the worker query.\n        offset: Limit for the worker query.\n    \"\"\"\n    response = await self._client.post(\n        f\"/work_pools/{work_pool_name}/workers/filter\",\n        json={\n            \"worker_filter\": (\n                worker_filter.dict(json_compatible=True, exclude_unset=True)\n                if worker_filter\n                else None\n            ),\n            \"offset\": offset,\n            \"limit\": limit,\n        },\n    )\n\n    return pydantic.parse_obj_as(List[Worker], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_pool","title":"read_work_pool async","text":"

Reads information for a given work pool

Parameters:

Name Type Description Default work_pool_name str

The name of the work pool for which to get information.

required

Returns:

Type Description WorkPool

Information about the requested work pool.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_work_pool(self, work_pool_name: str) -> WorkPool:\n\"\"\"\n    Reads information for a given work pool\n\n    Args:\n        work_pool_name: The name of the work pool to for which to get\n            information.\n\n    Returns:\n        Information about the requested work pool.\n    \"\"\"\n    try:\n        response = await self._client.get(f\"/work_pools/{work_pool_name}\")\n        return pydantic.parse_obj_as(WorkPool, response.json())\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
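A usage sketch that also handles the `ObjectNotFound` case raised above; the pool name is a placeholder:

```python
import asyncio

from prefect.client.orchestration import get_client
from prefect.exceptions import ObjectNotFound


async def main():
    async with get_client() as client:
        try:
            work_pool = await client.read_work_pool("my-work-pool")  # placeholder name
            print(work_pool.name, work_pool.type)
        except ObjectNotFound:
            print("No work pool with that name exists.")


if __name__ == "__main__":
    asyncio.run(main())
```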
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_pools","title":"read_work_pools async","text":"

Reads work pools.

Parameters:

Name Type Description Default limit Optional[int]

Limit for the work pool query.

None offset int

Offset for the work pool query.

0 work_pool_filter Optional[WorkPoolFilter]

Criteria by which to filter work pools.

None

Returns:

Type Description List[WorkPool]

A list of work pools.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_work_pools(\n    self,\n    limit: Optional[int] = None,\n    offset: int = 0,\n    work_pool_filter: Optional[WorkPoolFilter] = None,\n) -> List[WorkPool]:\n\"\"\"\n    Reads work pools.\n\n    Args:\n        limit: Limit for the work pool query.\n        offset: Offset for the work pool query.\n        work_pool_filter: Criteria by which to filter work pools.\n\n    Returns:\n        A list of work pools.\n    \"\"\"\n\n    body = {\n        \"limit\": limit,\n        \"offset\": offset,\n        \"work_pools\": (\n            work_pool_filter.dict(json_compatible=True)\n            if work_pool_filter\n            else None\n        ),\n    }\n    response = await self._client.post(\"/work_pools/filter\", json=body)\n    return pydantic.parse_obj_as(List[WorkPool], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_work_pool","title":"create_work_pool async","text":"

Creates a work pool with the provided configuration.

Parameters:

Name Type Description Default work_pool WorkPoolCreate

Desired configuration for the new work pool.

required

Returns:

Type Description WorkPool

Information about the newly created work pool.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_work_pool(\n    self,\n    work_pool: WorkPoolCreate,\n) -> WorkPool:\n\"\"\"\n    Creates a work pool with the provided configuration.\n\n    Args:\n        work_pool: Desired configuration for the new work pool.\n\n    Returns:\n        Information about the newly created work pool.\n    \"\"\"\n    try:\n        response = await self._client.post(\n            \"/work_pools/\",\n            json=work_pool.dict(json_compatible=True, exclude_unset=True),\n        )\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_409_CONFLICT:\n            raise prefect.exceptions.ObjectAlreadyExists(http_exc=e) from e\n        else:\n            raise\n\n    return pydantic.parse_obj_as(WorkPool, response.json())\n
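A sketch of creating a simple process-type pool. The import path for `WorkPoolCreate` is an assumption here and may differ by Prefect version (it has also lived in `prefect.server.schemas.actions`):

```python
import asyncio

from prefect.client.orchestration import get_client
from prefect.client.schemas.actions import WorkPoolCreate  # import path may vary by version
from prefect.exceptions import ObjectAlreadyExists


async def main():
    async with get_client() as client:
        try:
            pool = await client.create_work_pool(
                work_pool=WorkPoolCreate(name="my-process-pool", type="process")
            )
            print(f"Created work pool {pool.name!r}")
        except ObjectAlreadyExists:
            print("A work pool with that name already exists.")


if __name__ == "__main__":
    asyncio.run(main())
```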
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_work_pool","title":"delete_work_pool async","text":"

Deletes a work pool.

Parameters:

Name Type Description Default work_pool_name str

Name of the work pool to delete.

required Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def delete_work_pool(\n    self,\n    work_pool_name: str,\n):\n\"\"\"\n    Deletes a work pool.\n\n    Args:\n        work_pool_name: Name of the work pool to delete.\n    \"\"\"\n    try:\n        await self._client.delete(f\"/work_pools/{work_pool_name}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_work_queues","title":"read_work_queues async","text":"

Retrieves queues for a work pool.

Parameters:

Name Type Description Default work_pool_name Optional[str]

Name of the work pool for which to get queues.

None work_queue_filter Optional[WorkQueueFilter]

Criteria by which to filter queues.

None limit Optional[int]

Limit for the queue query.

None offset Optional[int]

Offset for the queue query.

None

Returns:

Type Description List[WorkQueue]

List of queues for the specified work pool.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_work_queues(\n    self,\n    work_pool_name: Optional[str] = None,\n    work_queue_filter: Optional[WorkQueueFilter] = None,\n    limit: Optional[int] = None,\n    offset: Optional[int] = None,\n) -> List[WorkQueue]:\n\"\"\"\n    Retrieves queues for a work pool.\n\n    Args:\n        work_pool_name: Name of the work pool for which to get queues.\n        work_queue_filter: Criteria by which to filter queues.\n        limit: Limit for the queue query.\n        offset: Limit for the queue query.\n\n    Returns:\n        List of queues for the specified work pool.\n    \"\"\"\n    json = {\n        \"work_queues\": (\n            work_queue_filter.dict(json_compatible=True, exclude_unset=True)\n            if work_queue_filter\n            else None\n        ),\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n\n    if work_pool_name:\n        try:\n            response = await self._client.post(\n                f\"/work_pools/{work_pool_name}/queues/filter\",\n                json=json,\n            )\n        except httpx.HTTPStatusError as e:\n            if e.response.status_code == status.HTTP_404_NOT_FOUND:\n                raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n            else:\n                raise\n    else:\n        response = await self._client.post(\"/work_queues/filter\", json=json)\n\n    return pydantic.parse_obj_as(List[WorkQueue], response.json())\n
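A sketch that lists the queues in one pool (placeholder pool name); omitting `work_queue_filter` returns every queue in that pool:

```python
import asyncio

from prefect.client.orchestration import get_client


async def main():
    async with get_client() as client:
        queues = await client.read_work_queues(
            work_pool_name="my-work-pool",  # placeholder pool name
            limit=5,
        )
        for queue in queues:
            print(queue.name, queue.is_paused)


if __name__ == "__main__":
    asyncio.run(main())
```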
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.get_scheduled_flow_runs_for_work_pool","title":"get_scheduled_flow_runs_for_work_pool async","text":"

Retrieves scheduled flow runs for the provided set of work pool queues.

Parameters:

Name Type Description Default work_pool_name str

The name of the work pool that the work pool queues are associated with.

required work_queue_names Optional[List[str]]

The names of the work pool queues from which to get scheduled flow runs.

None scheduled_before Optional[datetime.datetime]

Datetime used to filter returned flow runs. Flow runs scheduled for after the given datetime string will not be returned.

None

Returns:

Type Description List[WorkerFlowRunResponse]

A list of worker flow run responses containing information about the retrieved flow runs.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def get_scheduled_flow_runs_for_work_pool(\n    self,\n    work_pool_name: str,\n    work_queue_names: Optional[List[str]] = None,\n    scheduled_before: Optional[datetime.datetime] = None,\n) -> List[WorkerFlowRunResponse]:\n\"\"\"\n    Retrieves scheduled flow runs for the provided set of work pool queues.\n\n    Args:\n        work_pool_name: The name of the work pool that the work pool\n            queues are associated with.\n        work_queue_names: The names of the work pool queues from which\n            to get scheduled flow runs.\n        scheduled_before: Datetime used to filter returned flow runs. Flow runs\n            scheduled for after the given datetime string will not be returned.\n\n    Returns:\n        A list of worker flow run responses containing information about the\n        retrieved flow runs.\n    \"\"\"\n    body: Dict[str, Any] = {}\n    if work_queue_names is not None:\n        body[\"work_queue_names\"] = list(work_queue_names)\n    if scheduled_before:\n        body[\"scheduled_before\"] = str(scheduled_before)\n\n    response = await self._client.post(\n        f\"/work_pools/{work_pool_name}/get_scheduled_flow_runs\",\n        json=body,\n    )\n\n    return pydantic.parse_obj_as(List[WorkerFlowRunResponse], response.json())\n
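A sketch that asks for runs scheduled within the next hour on one queue of a placeholder pool:

```python
import asyncio
import datetime

from prefect.client.orchestration import get_client


async def main():
    cutoff = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=1)

    async with get_client() as client:
        responses = await client.get_scheduled_flow_runs_for_work_pool(
            work_pool_name="my-work-pool",  # placeholder pool name
            work_queue_names=["default"],
            scheduled_before=cutoff,
        )
        for item in responses:
            print(item.flow_run.id, item.flow_run.expected_start_time)


if __name__ == "__main__":
    asyncio.run(main())
```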
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_artifact","title":"create_artifact async","text":"

Creates an artifact with the provided configuration.

Parameters:

Name Type Description Default artifact ArtifactCreate

Desired configuration for the new artifact.

required

Returns:

Type Description Artifact

Information about the newly created artifact.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_artifact(\n    self,\n    artifact: ArtifactCreate,\n) -> Artifact:\n\"\"\"\n    Creates an artifact with the provided configuration.\n\n    Args:\n        artifact: Desired configuration for the new artifact.\n    Returns:\n        Information about the newly created artifact.\n    \"\"\"\n\n    response = await self._client.post(\n        \"/artifacts/\",\n        json=artifact.dict(json_compatible=True, exclude_unset=True),\n    )\n\n    return pydantic.parse_obj_as(Artifact, response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_artifacts","title":"read_artifacts async","text":"

Query the Prefect API for artifacts. Only artifacts matching all criteria will be returned.

Parameters:

Name Type Description Default artifact_filter ArtifactFilter

filter criteria for artifacts

None flow_run_filter FlowRunFilter

filter criteria for flow runs

None task_run_filter TaskRunFilter

filter criteria for task runs

None sort ArtifactSort

sort criteria for the artifacts

None limit int

limit for the artifact query

None offset int

offset for the artifact query

0

Returns:

Type Description List[Artifact]

a list of Artifact model representations of the artifacts

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_artifacts(\n    self,\n    *,\n    artifact_filter: ArtifactFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    sort: ArtifactSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[Artifact]:\n\"\"\"\n    Query the Prefect API for artifacts. Only artifacts matching all criteria will\n    be returned.\n    Args:\n        artifact_filter: filter criteria for artifacts\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        sort: sort criteria for the artifacts\n        limit: limit for the artifact query\n        offset: offset for the artifact query\n    Returns:\n        a list of Artifact model representations of the artifacts\n    \"\"\"\n    body = {\n        \"artifacts\": (\n            artifact_filter.dict(json_compatible=True) if artifact_filter else None\n        ),\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n    response = await self._client.post(\"/artifacts/filter\", json=body)\n    return pydantic.parse_obj_as(List[Artifact], response.json())\n
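A minimal sketch; with no filters the call returns artifacts across all runs, so a `limit` is usually worth setting:

```python
import asyncio

from prefect.client.orchestration import get_client


async def main():
    async with get_client() as client:
        artifacts = await client.read_artifacts(limit=10)
        for artifact in artifacts:
            print(artifact.key, artifact.type)


if __name__ == "__main__":
    asyncio.run(main())
```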
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_latest_artifacts","title":"read_latest_artifacts async","text":"

Query the Prefect API for artifacts. Only artifacts matching all criteria will be returned.

Parameters:

Name Type Description Default artifact_filter ArtifactCollectionFilter

filter criteria for artifacts

None flow_run_filter FlowRunFilter

filter criteria for flow runs

None task_run_filter TaskRunFilter

filter criteria for task runs

None sort ArtifactCollectionSort

sort criteria for the artifacts

None limit int

limit for the artifact query

None offset int

offset for the artifact query

0

Returns:

Type Description List[ArtifactCollection]

a list of Artifact model representations of the artifacts

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_latest_artifacts(\n    self,\n    *,\n    artifact_filter: ArtifactCollectionFilter = None,\n    flow_run_filter: FlowRunFilter = None,\n    task_run_filter: TaskRunFilter = None,\n    sort: ArtifactCollectionSort = None,\n    limit: int = None,\n    offset: int = 0,\n) -> List[ArtifactCollection]:\n\"\"\"\n    Query the Prefect API for artifacts. Only artifacts matching all criteria will\n    be returned.\n    Args:\n        artifact_filter: filter criteria for artifacts\n        flow_run_filter: filter criteria for flow runs\n        task_run_filter: filter criteria for task runs\n        sort: sort criteria for the artifacts\n        limit: limit for the artifact query\n        offset: offset for the artifact query\n    Returns:\n        a list of Artifact model representations of the artifacts\n    \"\"\"\n    body = {\n        \"artifacts\": (\n            artifact_filter.dict(json_compatible=True) if artifact_filter else None\n        ),\n        \"flow_runs\": (\n            flow_run_filter.dict(json_compatible=True) if flow_run_filter else None\n        ),\n        \"task_runs\": (\n            task_run_filter.dict(json_compatible=True) if task_run_filter else None\n        ),\n        \"sort\": sort,\n        \"limit\": limit,\n        \"offset\": offset,\n    }\n    response = await self._client.post(\"/artifacts/latest/filter\", json=body)\n    return pydantic.parse_obj_as(List[ArtifactCollection], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_artifact","title":"delete_artifact async","text":"

Deletes an artifact with the provided id.

Parameters:

Name Type Description Default artifact_id UUID

The id of the artifact to delete.

required Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def delete_artifact(self, artifact_id: UUID) -> None:\n\"\"\"\n    Deletes an artifact with the provided id.\n\n    Args:\n        artifact_id: The id of the artifact to delete.\n    \"\"\"\n    try:\n        await self._client.delete(f\"/artifacts/{artifact_id}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_variable_by_name","title":"read_variable_by_name async","text":"

Reads a variable by name. Returns None if no variable is found.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_variable_by_name(self, name: str) -> Optional[Variable]:\n\"\"\"Reads a variable by name. Returns None if no variable is found.\"\"\"\n    try:\n        response = await self._client.get(f\"/variables/name/{name}\")\n        return pydantic.parse_obj_as(Variable, response.json())\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == status.HTTP_404_NOT_FOUND:\n            return None\n        else:\n            raise\n
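A usage sketch; the variable name is a placeholder, and `None` is returned when no such variable exists:

```python
import asyncio

from prefect.client.orchestration import get_client


async def main():
    async with get_client() as client:
        variable = await client.read_variable_by_name("my_config_value")  # placeholder name
        if variable is None:
            print("Variable not found.")
        else:
            print(variable.name, variable.value)


if __name__ == "__main__":
    asyncio.run(main())
```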
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.delete_variable_by_name","title":"delete_variable_by_name async","text":"

Deletes a variable by name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def delete_variable_by_name(self, name: str):\n\"\"\"Deletes a variable by name.\"\"\"\n    try:\n        await self._client.delete(f\"/variables/name/{name}\")\n    except httpx.HTTPStatusError as e:\n        if e.response.status_code == 404:\n            raise prefect.exceptions.ObjectNotFound(http_exc=e) from e\n        else:\n            raise\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_variables","title":"read_variables async","text":"

Reads all variables.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_variables(self, limit: int = None) -> List[Variable]:\n\"\"\"Reads all variables.\"\"\"\n    response = await self._client.post(\"/variables/filter\", json={\"limit\": limit})\n    return pydantic.parse_obj_as(List[Variable], response.json())\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.read_worker_metadata","title":"read_worker_metadata async","text":"

Reads worker metadata stored in the Prefect collection registry.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def read_worker_metadata(self) -> Dict[str, Any]:\n\"\"\"Reads worker metadata stored in Prefect collection registry.\"\"\"\n    response = await self._client.get(\"collections/views/aggregate-worker-metadata\")\n    response.raise_for_status()\n    return response.json()\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient.create_automation","title":"create_automation async","text":"

Creates an automation in Prefect Cloud.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
async def create_automation(self, automation: Automation) -> UUID:\n\"\"\"Creates an automation in Prefect Cloud.\"\"\"\n    if self.server_type != ServerType.CLOUD:\n        raise RuntimeError(\"Automations are only supported for Prefect Cloud.\")\n\n    response = await self._client.post(\n        \"/automations/\",\n        json=automation.dict(json_compatible=True),\n    )\n\n    return UUID(response.json()[\"id\"])\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.OrionClient","title":"OrionClient","text":"

Bases: PrefectClient

Deprecated. Use PrefectClient instead.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
@deprecated_callable(start_date=\"Feb 2023\", help=\"Use `PrefectClient` instead.\")\nclass OrionClient(PrefectClient):\n\"\"\"\n    Deprecated. Use `PrefectClient` instead.\n    \"\"\"\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/orchestration/#prefect.client.orchestration.get_client","title":"get_client","text":"

Retrieve an HTTP client for communicating with the Prefect REST API.

The client must be context managed; for example:

async with get_client() as client:\n    await client.hello()\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/orchestration.py
def get_client(httpx_settings: Optional[dict] = None) -> \"PrefectClient\":\n\"\"\"\n    Retrieve a HTTP client for communicating with the Prefect REST API.\n\n    The client must be context managed; for example:\n\n    ```python\n    async with get_client() as client:\n        await client.hello()\n    ```\n    \"\"\"\n    ctx = prefect.context.get_settings_context()\n    api = PREFECT_API_URL.value()\n\n    if not api:\n        # create an ephemeral API if none was provided\n        from prefect.server.api.server import create_app\n\n        api = create_app(ctx.settings, ephemeral=True)\n\n    return PrefectClient(\n        api,\n        api_key=PREFECT_API_KEY.value(),\n        httpx_settings=httpx_settings,\n    )\n
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/","title":"prefect.client.schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/schemas/#prefect.client.schemas","title":"prefect.client.schemas","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/","title":"prefect.client.utilities","text":"","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/#prefect.client.utilities","title":"prefect.client.utilities","text":"

Utilities for working with clients.

","tags":["Python API","REST API"]},{"location":"api-ref/prefect/client/utilities/#prefect.client.utilities.inject_client","title":"inject_client","text":"

Simple helper to provide a context-managed client to an asynchronous function.

The decorated function must take a client kwarg; if a client is passed when called, it will be used instead of creating a new one, but it will not be context managed, as it is assumed that the caller is managing the context.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/client/utilities.py
def inject_client(fn):\n\"\"\"\n    Simple helper to provide a context managed client to a asynchronous function.\n\n    The decorated function _must_ take a `client` kwarg and if a client is passed when\n    called it will be used instead of creating a new one, but it will not be context\n    managed as it is assumed that the caller is managing the context.\n    \"\"\"\n\n    @wraps(fn)\n    async def with_injected_client(*args, **kwargs):\n        from prefect.client.orchestration import get_client\n        from prefect.context import FlowRunContext, TaskRunContext\n\n        flow_run_context = FlowRunContext.get()\n        task_run_context = TaskRunContext.get()\n        client = None\n        client_context = asyncnullcontext()\n\n        if \"client\" in kwargs and kwargs[\"client\"] is not None:\n            # Client provided in kwargs\n            client = kwargs[\"client\"]\n        elif flow_run_context and flow_run_context.client._loop == get_running_loop():\n            # Client available from flow run context\n            client = flow_run_context.client\n        elif task_run_context and task_run_context.client._loop == get_running_loop():\n            # Client available from task run context\n            client = task_run_context.client\n\n        else:\n            # A new client is needed\n            client_context = get_client()\n\n        # Removes existing client to allow it to be set by setdefault below\n        kwargs.pop(\"client\", None)\n\n        async with client_context as new_client:\n            kwargs.setdefault(\"client\", new_client or client)\n            return await fn(*args, **kwargs)\n\n    return with_injected_client\n
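A sketch of a function written against this decorator; `count_flows` is a made-up helper, and `read_flows` is used only as an example client call:

```python
from prefect.client.orchestration import PrefectClient
from prefect.client.utilities import inject_client


@inject_client
async def count_flows(client: PrefectClient = None) -> int:
    # When called without a client, the decorator opens (and later closes) one for us;
    # when a client is passed explicitly, that client is used as-is.
    flows = await client.read_flows()
    return len(flows)
```

Calling `await count_flows()` uses an injected client, while `await count_flows(client=my_client)` reuses an existing, caller-managed one.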
","tags":["Python API","REST API"]},{"location":"api-ref/prefect/deployments/base/","title":"prefect.deployments.base","text":"","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base","title":"prefect.deployments.base","text":"

Core primitives for managing Prefect projects. Projects provide a minimally opinionated build system for managing flows and deployments.

To get started, follow along with the deployments tutorial.

","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.configure_project_by_recipe","title":"configure_project_by_recipe","text":"

Given a recipe name, returns a dictionary representing base configuration options.

Parameters:

Name Type Description Default recipe str

the name of the recipe to use

required formatting_kwargs dict

additional keyword arguments to format the recipe

{}

Raises:

Type Description ValueError

if provided recipe name does not exist.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/base.py
def configure_project_by_recipe(recipe: str, **formatting_kwargs) -> dict:\n\"\"\"\n    Given a recipe name, returns a dictionary representing base configuration options.\n\n    Args:\n        recipe (str): the name of the recipe to use\n        formatting_kwargs (dict, optional): additional keyword arguments to format the recipe\n\n    Raises:\n        ValueError: if provided recipe name does not exist.\n    \"\"\"\n    # load the recipe\n    recipe_path = Path(__file__).parent / \"recipes\" / recipe / \"prefect.yaml\"\n\n    if not recipe_path.exists():\n        raise ValueError(f\"Unknown recipe {recipe!r} provided.\")\n\n    with recipe_path.open(mode=\"r\") as f:\n        config = yaml.safe_load(f)\n\n    config = apply_values(\n        template=config, values=formatting_kwargs, remove_notset=False\n    )\n\n    return config\n
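A sketch that loads the built-in `git` recipe (one of the recipe names referenced by `initialize_project` below); the repository URL and branch are placeholder formatting inputs:

```python
from prefect.deployments.base import configure_project_by_recipe

config = configure_project_by_recipe(
    "git",
    repository="https://github.com/acme/my-flows.git",  # placeholder remote
    branch="main",
    name="my-flows",
)
print(list(config))  # e.g. the build / push / pull / deployments sections
```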
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.create_default_prefect_yaml","title":"create_default_prefect_yaml","text":"

Creates a default prefect.yaml file in the provided path if one does not already exist; returns a boolean specifying whether a file was created.

Parameters:

Name Type Description Default name str

the name of the project; if not provided, the current directory name will be used

None contents dict

a dictionary of contents to write to the file; if not provided, defaults will be used

None Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/base.py
def create_default_prefect_yaml(\n    path: str, name: str = None, contents: dict = None\n) -> bool:\n\"\"\"\n    Creates default `prefect.yaml` file in the provided path if one does not already exist;\n    returns boolean specifying whether a file was created.\n\n    Args:\n        name (str, optional): the name of the project; if not provided, the current directory name\n            will be used\n        contents (dict, optional): a dictionary of contents to write to the file; if not provided,\n            defaults will be used\n    \"\"\"\n    path = Path(path)\n    prefect_file = path / \"prefect.yaml\"\n    if prefect_file.exists():\n        return False\n    default_file = Path(__file__).parent / \"templates\" / \"prefect.yaml\"\n\n    with default_file.open(mode=\"r\") as df:\n        default_contents = yaml.safe_load(df)\n\n    import prefect\n\n    contents[\"prefect-version\"] = prefect.__version__\n    contents[\"name\"] = name\n\n    with prefect_file.open(mode=\"w\") as f:\n        # write header\n        f.write(\n            \"# Welcome to your prefect.yaml file! You can you this file for storing and\"\n            \" managing\\n# configuration for deploying your flows. We recommend\"\n            \" committing this file to source\\n# control along with your flow code.\\n\\n\"\n        )\n\n        f.write(\"# Generic metadata about this project\\n\")\n        yaml.dump({\"name\": contents[\"name\"]}, f, sort_keys=False)\n        yaml.dump({\"prefect-version\": contents[\"prefect-version\"]}, f, sort_keys=False)\n        f.write(\"\\n\")\n\n        # build\n        f.write(\"# build section allows you to manage and build docker images\\n\")\n        yaml.dump(\n            {\"build\": contents.get(\"build\", default_contents.get(\"build\"))},\n            f,\n            sort_keys=False,\n        )\n        f.write(\"\\n\")\n\n        # push\n        f.write(\n            \"# push section allows you to manage if and how this project is uploaded to\"\n            \" remote locations\\n\"\n        )\n        yaml.dump(\n            {\"push\": contents.get(\"push\", default_contents.get(\"push\"))},\n            f,\n            sort_keys=False,\n        )\n        f.write(\"\\n\")\n\n        # pull\n        f.write(\n            \"# pull section allows you to provide instructions for cloning this project\"\n            \" in remote locations\\n\"\n        )\n        yaml.dump(\n            {\"pull\": contents.get(\"pull\", default_contents.get(\"pull\"))},\n            f,\n            sort_keys=False,\n        )\n        f.write(\"\\n\")\n\n        # deployments\n        f.write(\n            \"# the deployments section allows you to provide configuration for\"\n            \" deploying flows\\n\"\n        )\n        yaml.dump(\n            {\n                \"deployments\": contents.get(\n                    \"deployments\", default_contents.get(\"deployments\")\n                )\n            },\n            f,\n            sort_keys=False,\n        )\n    return True\n
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.find_prefect_directory","title":"find_prefect_directory","text":"

Given a path, recurses upward looking for .prefect/ directories.

Once found, returns the absolute path to the .prefect/ directory, which is assumed to reside within the root of the current project.

If one is never found, None is returned.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/base.py
def find_prefect_directory(path: Path = None) -> Optional[Path]:\n\"\"\"\n    Given a path, recurses upward looking for .prefect/ directories.\n\n    Once found, returns absolute path to the ./prefect directory, which is assumed to reside within the\n    root for the current project.\n\n    If one is never found, `None` is returned.\n    \"\"\"\n    path = Path(path or \".\").resolve()\n    parent = path.parent.resolve()\n    while path != parent:\n        prefect_dir = path.joinpath(\".prefect\")\n        if prefect_dir.is_dir():\n            return prefect_dir\n\n        path = parent.resolve()\n        parent = path.parent.resolve()\n
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.initialize_project","title":"initialize_project","text":"

Initializes a basic project structure with base files. If no name is provided, the name of the current directory is used. If no recipe is provided, one is inferred.

Parameters:

Name Type Description Default name str

the name of the project; if not provided, the current directory name is used

None recipe str

the name of the recipe to use; if not provided, one is inferred

None inputs dict

a dictionary of inputs to use when formatting the recipe

None

Returns:

Type Description List[str]

List[str]: a list of files / directories that were created

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/base.py
def initialize_project(\n    name: str = None, recipe: str = None, inputs: dict = None\n) -> List[str]:\n\"\"\"\n    Initializes a basic project structure with base files.  If no name is provided, the name\n    of the current directory is used.  If no recipe is provided, one is inferred.\n\n    Args:\n        name (str, optional): the name of the project; if not provided, the current directory name\n        recipe (str, optional): the name of the recipe to use; if not provided, one is inferred\n        inputs (dict, optional): a dictionary of inputs to use when formatting the recipe\n\n    Returns:\n        List[str]: a list of files / directories that were created\n    \"\"\"\n    # determine if in git repo or use directory name as a default\n    is_git_based = False\n    formatting_kwargs = {\"directory\": str(Path(\".\").absolute().resolve())}\n    dir_name = os.path.basename(os.getcwd())\n\n    remote_url = _get_git_remote_origin_url()\n    if remote_url:\n        formatting_kwargs[\"repository\"] = remote_url\n        is_git_based = True\n        branch = _get_git_branch()\n        formatting_kwargs[\"branch\"] = branch or \"main\"\n\n    formatting_kwargs[\"name\"] = dir_name\n\n    has_dockerfile = Path(\"Dockerfile\").exists()\n\n    if has_dockerfile:\n        formatting_kwargs[\"dockerfile\"] = \"Dockerfile\"\n    elif recipe is not None and \"docker\" in recipe:\n        formatting_kwargs[\"dockerfile\"] = \"auto\"\n\n    # hand craft a pull step\n    if is_git_based and recipe is None:\n        if has_dockerfile:\n            recipe = \"docker-git\"\n        else:\n            recipe = \"git\"\n    elif recipe is None and has_dockerfile:\n        recipe = \"docker\"\n    elif recipe is None:\n        recipe = \"local\"\n\n    formatting_kwargs.update(inputs or {})\n    configuration = configure_project_by_recipe(recipe=recipe, **formatting_kwargs)\n\n    project_name = name or dir_name\n\n    files = []\n    if create_default_ignore_file(\".\"):\n        files.append(\".prefectignore\")\n    if create_default_prefect_yaml(\".\", name=project_name, contents=configuration):\n        files.append(\"prefect.yaml\")\n    if set_prefect_hidden_dir():\n        files.append(\".prefect/\")\n\n    return files\n
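A minimal sketch, run from the root of the directory you want to turn into a project; with no arguments the project name and recipe are inferred as described above:

```python
from prefect.deployments.base import initialize_project

created = initialize_project()
print("Created:", created)  # typically ['.prefectignore', 'prefect.yaml', '.prefect/']
```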
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.register_flow","title":"register_flow async","text":"

Register a flow with this project from an entrypoint.

Parameters:

Name Type Description Default entrypoint str

the entrypoint to the flow to register

required force bool

whether or not to overwrite an existing flow with the same name

False

Raises:

Type Description ValueError

if force is False and registration would overwrite an existing flow

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/base.py
async def register_flow(entrypoint: str, force: bool = False):\n\"\"\"\n    Register a flow with this project from an entrypoint.\n\n    Args:\n        entrypoint (str): the entrypoint to the flow to register\n        force (bool, optional): whether or not to overwrite an existing flow with the same name\n\n    Raises:\n        ValueError: if `force` is `False` and registration would overwrite an existing flow\n    \"\"\"\n    try:\n        fpath, obj_name = entrypoint.rsplit(\":\", 1)\n    except ValueError as exc:\n        if str(exc) == \"not enough values to unpack (expected 2, got 1)\":\n            missing_flow_name_msg = (\n                \"Your flow entrypoint must include the name of the function that is\"\n                f\" the entrypoint to your flow.\\nTry {entrypoint}:<flow_name> as your\"\n                f\" entrypoint. If you meant to specify '{entrypoint}' as the deployment\"\n                f\" name, try `prefect deploy -n {entrypoint}`.\"\n            )\n            raise ValueError(missing_flow_name_msg)\n        else:\n            raise exc\n\n    flow = await run_sync_in_worker_thread(load_flow_from_entrypoint, entrypoint)\n\n    fpath = Path(fpath).absolute()\n    prefect_dir = find_prefect_directory()\n    if not prefect_dir:\n        raise FileNotFoundError(\n            \"No .prefect directory could be found - run `prefect project\"\n            \" init` to create one.\"\n        )\n\n    entrypoint = f\"{fpath.relative_to(prefect_dir.parent)!s}:{obj_name}\"\n\n    flows_file = prefect_dir / \"flows.json\"\n    if flows_file.exists():\n        with flows_file.open(mode=\"r\") as f:\n            flows = json.load(f)\n    else:\n        flows = {}\n\n    ## quality control\n    if flow.name in flows and flows[flow.name] != entrypoint:\n        if not force:\n            raise ValueError(\n                f\"Conflicting entry found for flow with name {flow.name!r}.\\nExisting\"\n                f\" entrypoint: {flows[flow.name]}\\nAttempted entrypoint:\"\n                f\" {entrypoint}\\n\\nYou can try removing the existing entry for\"\n                f\" {flow.name!r} from your [yellow]~/.prefect/flows.json[/yellow].\"\n            )\n\n    flows[flow.name] = entrypoint\n\n    with flows_file.open(mode=\"w\") as f:\n        json.dump(flows, f, sort_keys=True, indent=2)\n\n    return flow\n
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/base/#prefect.deployments.base.set_prefect_hidden_dir","title":"set_prefect_hidden_dir","text":"

Creates a default .prefect/ directory if one does not already exist. Returns a boolean specifying whether or not a directory was created.

If a path is provided, the directory will be created in that location.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/base.py
def set_prefect_hidden_dir(path: str = None) -> bool:\n\"\"\"\n    Creates default `.prefect/` directory if one does not already exist.\n    Returns boolean specifying whether or not a directory was created.\n\n    If a path is provided, the directory will be created in that location.\n    \"\"\"\n    path = Path(path or \".\") / \".prefect\"\n\n    # use exists so that we dont accidentally overwrite a file\n    if path.exists():\n        return False\n    path.mkdir(mode=0o0700)\n    return True\n
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/deployments/","title":"prefect.deployments.deployments","text":"","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments","title":"prefect.deployments.deployments","text":"

Objects for specifying deployments and utilities for loading flows from deployments.

","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment","title":"Deployment","text":"

Bases: BaseModel

A Prefect Deployment definition, used for specifying and building deployments.

Parameters:

Name Type Description Default name

A name for the deployment (required).

required version

An optional version for the deployment; defaults to the flow's version

required description

An optional description of the deployment; defaults to the flow's description

required tags

An optional list of tags to associate with this deployment; note that tags are used only for organizational purposes. For delegating work to agents, see work_queue_name.

required schedule

A schedule to run this deployment on, once registered

required is_schedule_active

Whether or not the schedule is active

required work_queue_name

The work queue that will handle this deployment's runs

required work_pool_name

The work pool for the deployment

required flow_name

The name of the flow this deployment encapsulates

required parameters

A dictionary of parameter values to pass to runs created from this deployment

required infrastructure

An optional infrastructure block used to configure infrastructure for runs; if not provided, will default to running this deployment in Agent subprocesses

required infra_overrides

A dictionary of dot delimited infrastructure overrides that will be applied at runtime; for example env.CONFIG_KEY=config_value or namespace='prefect'

required storage

An optional remote storage block used to store and retrieve this workflow; if not provided, will default to referencing this flow by its local path

required path

The path to the working directory for the workflow, relative to remote storage or, if stored on a local filesystem, an absolute path

required entrypoint

The path to the entrypoint for the workflow, always relative to the path

required parameter_openapi_schema

The parameter schema of the flow, including defaults.

required

Examples:

Create a new deployment using configuration defaults for an imported flow:

>>> from my_project.flows import my_flow\n>>> from prefect.deployments import Deployment\n>>>\n>>> deployment = Deployment.build_from_flow(\n...     flow=my_flow,\n...     name=\"example\",\n...     version=\"1\",\n...     tags=[\"demo\"],\n>>> )\n>>> deployment.apply()\n

Create a new deployment with custom storage and an infrastructure override:

>>> from my_project.flows import my_flow\n>>> from prefect.deployments import Deployment\n>>> from prefect.filesystems import S3\n
>>> storage = S3.load(\"dev-bucket\") # load a pre-defined block\n>>> deployment = Deployment.build_from_flow(\n...     flow=my_flow,\n...     name=\"s3-example\",\n...     version=\"2\",\n...     tags=[\"aws\"],\n...     storage=storage,\n...     infra_overrides={\"env.PREFECT_LOGGING_LEVEL\": \"DEBUG\"},\n>>> )\n>>> deployment.apply()\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@experimental_field(\n    \"work_pool_name\",\n    group=\"work_pools\",\n    when=lambda x: x is not None and x != DEFAULT_AGENT_WORK_POOL_NAME,\n)\nclass Deployment(BaseModel):\n\"\"\"\n    A Prefect Deployment definition, used for specifying and building deployments.\n\n    Args:\n        name: A name for the deployment (required).\n        version: An optional version for the deployment; defaults to the flow's version\n        description: An optional description of the deployment; defaults to the flow's\n            description\n        tags: An optional list of tags to associate with this deployment; note that tags\n            are used only for organizational purposes. For delegating work to agents,\n            see `work_queue_name`.\n        schedule: A schedule to run this deployment on, once registered\n        is_schedule_active: Whether or not the schedule is active\n        work_queue_name: The work queue that will handle this deployment's runs\n        work_pool_name: The work pool for the deployment\n        flow_name: The name of the flow this deployment encapsulates\n        parameters: A dictionary of parameter values to pass to runs created from this\n            deployment\n        infrastructure: An optional infrastructure block used to configure\n            infrastructure for runs; if not provided, will default to running this\n            deployment in Agent subprocesses\n        infra_overrides: A dictionary of dot delimited infrastructure overrides that\n            will be applied at runtime; for example `env.CONFIG_KEY=config_value` or\n            `namespace='prefect'`\n        storage: An optional remote storage block used to store and retrieve this\n            workflow; if not provided, will default to referencing this flow by its\n            local path\n        path: The path to the working directory for the workflow, relative to remote\n            storage or, if stored on a local filesystem, an absolute path\n        entrypoint: The path to the entrypoint for the workflow, always relative to the\n            `path`\n        parameter_openapi_schema: The parameter schema of the flow, including defaults.\n\n    Examples:\n\n        Create a new deployment using configuration defaults for an imported flow:\n\n        >>> from my_project.flows import my_flow\n        >>> from prefect.deployments import Deployment\n        >>>\n        >>> deployment = Deployment.build_from_flow(\n        ...     flow=my_flow,\n        ...     name=\"example\",\n        ...     version=\"1\",\n        ...     tags=[\"demo\"],\n        >>> )\n        >>> deployment.apply()\n\n        Create a new deployment with custom storage and an infrastructure override:\n\n        >>> from my_project.flows import my_flow\n        >>> from prefect.deployments import Deployment\n        >>> from prefect.filesystems import S3\n\n        >>> storage = S3.load(\"dev-bucket\") # load a pre-defined block\n        >>> deployment = Deployment.build_from_flow(\n        ...     flow=my_flow,\n        ...     name=\"s3-example\",\n        ...     version=\"2\",\n        ...     tags=[\"aws\"],\n        ...     storage=storage,\n        ...     
infra_overrides=dict(\"env.PREFECT_LOGGING_LEVEL\"=\"DEBUG\"),\n        >>> )\n        >>> deployment.apply()\n\n    \"\"\"\n\n    class Config:\n        json_encoders = {SecretDict: lambda v: v.dict()}\n        validate_assignment = True\n        extra = \"forbid\"\n\n    @property\n    def _editable_fields(self) -> List[str]:\n        editable_fields = [\n            \"name\",\n            \"description\",\n            \"version\",\n            \"work_queue_name\",\n            \"work_pool_name\",\n            \"tags\",\n            \"parameters\",\n            \"schedule\",\n            \"is_schedule_active\",\n            \"infra_overrides\",\n        ]\n\n        # if infrastructure is baked as a pre-saved block, then\n        # editing its fields will not update anything\n        if self.infrastructure._block_document_id:\n            return editable_fields\n        else:\n            return editable_fields + [\"infrastructure\"]\n\n    @property\n    def location(self) -> str:\n\"\"\"\n        The 'location' that this deployment points to is given by `path` alone\n        in the case of no remote storage, and otherwise by `storage.basepath / path`.\n\n        The underlying flow entrypoint is interpreted relative to this location.\n        \"\"\"\n        location = \"\"\n        if self.storage:\n            location = (\n                self.storage.basepath + \"/\"\n                if not self.storage.basepath.endswith(\"/\")\n                else \"\"\n            )\n        if self.path:\n            location += self.path\n        return location\n\n    @sync_compatible\n    async def to_yaml(self, path: Path) -> None:\n        yaml_dict = self._yaml_dict()\n        schema = self.schema()\n\n        with open(path, \"w\") as f:\n            # write header\n            f.write(\n                \"###\\n### A complete description of a Prefect Deployment for flow\"\n                f\" {self.flow_name!r}\\n###\\n\"\n            )\n\n            # write editable fields\n            for field in self._editable_fields:\n                # write any comments\n                if schema[\"properties\"][field].get(\"yaml_comment\"):\n                    f.write(f\"# {schema['properties'][field]['yaml_comment']}\\n\")\n                # write the field\n                yaml.dump({field: yaml_dict[field]}, f, sort_keys=False)\n\n            # write non-editable fields\n            f.write(\"\\n###\\n### DO NOT EDIT BELOW THIS LINE\\n###\\n\")\n            yaml.dump(\n                {k: v for k, v in yaml_dict.items() if k not in self._editable_fields},\n                f,\n                sort_keys=False,\n            )\n\n    def _yaml_dict(self) -> dict:\n\"\"\"\n        Returns a YAML-compatible representation of this deployment as a dictionary.\n        \"\"\"\n        # avoids issues with UUIDs showing up in YAML\n        all_fields = json.loads(\n            self.json(\n                exclude={\n                    \"storage\": {\"_filesystem\", \"filesystem\", \"_remote_file_system\"}\n                }\n            )\n        )\n        if all_fields[\"storage\"]:\n            all_fields[\"storage\"][\n                \"_block_type_slug\"\n            ] = self.storage.get_block_type_slug()\n        if all_fields[\"infrastructure\"]:\n            all_fields[\"infrastructure\"][\n                \"_block_type_slug\"\n            ] = self.infrastructure.get_block_type_slug()\n        return all_fields\n\n    # top level metadata\n    name: str = Field(..., description=\"The name of 
the deployment.\")\n    description: Optional[str] = Field(\n        default=None, description=\"An optional description of the deployment.\"\n    )\n    version: Optional[str] = Field(\n        default=None, description=\"An optional version for the deployment.\"\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"One of more tags to apply to this deployment.\",\n    )\n    schedule: SCHEDULE_TYPES = None\n    is_schedule_active: Optional[bool] = Field(\n        default=None, description=\"Whether or not the schedule is active.\"\n    )\n    flow_name: Optional[str] = Field(default=None, description=\"The name of the flow.\")\n    work_queue_name: Optional[str] = Field(\n        \"default\",\n        description=\"The work queue for the deployment.\",\n        yaml_comment=\"The work queue that will handle this deployment's runs\",\n    )\n    work_pool_name: Optional[str] = Field(\n        default=None, description=\"The work pool for the deployment\"\n    )\n    # flow data\n    parameters: Dict[str, Any] = Field(default_factory=dict)\n    manifest_path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the flow's manifest file, relative to the chosen storage.\"\n        ),\n    )\n    infrastructure: Infrastructure = Field(default_factory=Process)\n    infra_overrides: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Overrides to apply to the base infrastructure block at runtime.\",\n    )\n    storage: Optional[Block] = Field(\n        None,\n        help=\"The remote storage to use for this workflow.\",\n    )\n    path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the working directory for the workflow, relative to remote\"\n            \" storage or an absolute path.\"\n        ),\n    )\n    entrypoint: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the entrypoint for the workflow, relative to the `path`.\"\n        ),\n    )\n    parameter_openapi_schema: ParameterSchema = Field(\n        default_factory=ParameterSchema,\n        description=\"The parameter schema of the flow, including defaults.\",\n    )\n    timestamp: datetime = Field(default_factory=partial(pendulum.now, \"UTC\"))\n    triggers: List[DeploymentTrigger] = Field(\n        default_factory=list,\n        description=\"The triggers that should cause this deployment to run.\",\n    )\n\n    @validator(\"infrastructure\", pre=True)\n    def infrastructure_must_have_capabilities(cls, value):\n        if isinstance(value, dict):\n            if \"_block_type_slug\" in value:\n                # Replace private attribute with public for dispatch\n                value[\"block_type_slug\"] = value.pop(\"_block_type_slug\")\n            block = Block(**value)\n        elif value is None:\n            return value\n        else:\n            block = value\n\n        if \"run-infrastructure\" not in block.get_block_capabilities():\n            raise ValueError(\n                \"Infrastructure block must have 'run-infrastructure' capabilities.\"\n            )\n        return block\n\n    @validator(\"storage\", pre=True)\n    def storage_must_have_capabilities(cls, value):\n        if isinstance(value, dict):\n            block_type = Block.get_block_class_from_key(value.pop(\"_block_type_slug\"))\n            block = block_type(**value)\n        elif value is None:\n            return value\n        else:\n    
        block = value\n\n        capabilities = block.get_block_capabilities()\n        if \"get-directory\" not in capabilities:\n            raise ValueError(\n                \"Remote Storage block must have 'get-directory' capabilities.\"\n            )\n        return block\n\n    @validator(\"parameter_openapi_schema\", pre=True)\n    def handle_openapi_schema(cls, value):\n\"\"\"\n        This method ensures setting a value of `None` is handled gracefully.\n        \"\"\"\n        if value is None:\n            return ParameterSchema()\n        return value\n\n    @validator(\"triggers\")\n    def validate_automation_names(cls, field_value, values, field, config):\n\"\"\"Ensure that each trigger has a name for its automation if none is provided.\"\"\"\n        for i, trigger in enumerate(field_value, start=1):\n            if trigger.name is None:\n                trigger.name = f\"{values['name']}__automation_{i}\"\n\n        return field_value\n\n    @classmethod\n    @sync_compatible\n    async def load_from_yaml(cls, path: str):\n        data = yaml.safe_load(await anyio.Path(path).read_bytes())\n        # load blocks from server to ensure secret values are properly hydrated\n        if data.get(\"storage\"):\n            block_doc_name = data[\"storage\"].get(\"_block_document_name\")\n            # if no doc name, this block is not stored on the server\n            if block_doc_name:\n                block_slug = data[\"storage\"][\"_block_type_slug\"]\n                block = await Block.load(f\"{block_slug}/{block_doc_name}\")\n                data[\"storage\"] = block\n\n        if data.get(\"infrastructure\"):\n            block_doc_name = data[\"infrastructure\"].get(\"_block_document_name\")\n            # if no doc name, this block is not stored on the server\n            if block_doc_name:\n                block_slug = data[\"infrastructure\"][\"_block_type_slug\"]\n                block = await Block.load(f\"{block_slug}/{block_doc_name}\")\n                data[\"infrastructure\"] = block\n\n            return cls(**data)\n\n    @sync_compatible\n    async def load(self) -> bool:\n\"\"\"\n        Queries the API for a deployment with this name for this flow, and if found,\n        prepopulates any settings that were not set at initialization.\n\n        Returns a boolean specifying whether a load was successful or not.\n\n        Raises:\n            - ValueError: if both name and flow name are not set\n        \"\"\"\n        if not self.name or not self.flow_name:\n            raise ValueError(\"Both a deployment name and flow name must be provided.\")\n        async with get_client() as client:\n            try:\n                deployment = await client.read_deployment_by_name(\n                    f\"{self.flow_name}/{self.name}\"\n                )\n                if deployment.storage_document_id:\n                    Block._from_block_document(\n                        await client.read_block_document(deployment.storage_document_id)\n                    )\n\n                excluded_fields = self.__fields_set__.union(\n                    {\"infrastructure\", \"storage\", \"timestamp\", \"triggers\"}\n                )\n                for field in set(self.__fields__.keys()) - excluded_fields:\n                    new_value = getattr(deployment, field)\n                    setattr(self, field, new_value)\n\n                if \"infrastructure\" not in self.__fields_set__:\n                    if deployment.infrastructure_document_id:\n                       
 self.infrastructure = Block._from_block_document(\n                            await client.read_block_document(\n                                deployment.infrastructure_document_id\n                            )\n                        )\n                if \"storage\" not in self.__fields_set__:\n                    if deployment.storage_document_id:\n                        self.storage = Block._from_block_document(\n                            await client.read_block_document(\n                                deployment.storage_document_id\n                            )\n                        )\n            except ObjectNotFound:\n                return False\n        return True\n\n    @sync_compatible\n    async def update(self, ignore_none: bool = False, **kwargs):\n\"\"\"\n        Performs an in-place update with the provided settings.\n\n        Args:\n            ignore_none: if True, all `None` values are ignored when performing the\n                update\n        \"\"\"\n        unknown_keys = set(kwargs.keys()) - set(self.dict().keys())\n        if unknown_keys:\n            raise ValueError(\n                f\"Received unexpected attributes: {', '.join(unknown_keys)}\"\n            )\n        for key, value in kwargs.items():\n            if ignore_none and value is None:\n                continue\n            setattr(self, key, value)\n\n    @sync_compatible\n    async def upload_to_storage(\n        self, storage_block: str = None, ignore_file: str = \".prefectignore\"\n    ) -> Optional[int]:\n\"\"\"\n        Uploads the workflow this deployment represents using a provided storage block;\n        if no block is provided, defaults to configuring self for local storage.\n\n        Args:\n            storage_block: a string reference a remote storage block slug `$type/$name`;\n                if provided, used to upload the workflow's project\n            ignore_file: an optional path to a `.prefectignore` file that specifies\n                filename patterns to ignore when uploading to remote storage; if not\n                provided, looks for `.prefectignore` in the current working directory\n        \"\"\"\n        file_count = None\n        if storage_block:\n            storage = await Block.load(storage_block)\n\n            if \"put-directory\" not in storage.get_block_capabilities():\n                raise BlockMissingCapabilities(\n                    f\"Storage block {storage!r} missing 'put-directory' capability.\"\n                )\n\n            self.storage = storage\n\n            # upload current directory to storage location\n            file_count = await self.storage.put_directory(\n                ignore_file=ignore_file, to_path=self.path\n            )\n        elif self.storage:\n            if \"put-directory\" not in self.storage.get_block_capabilities():\n                raise BlockMissingCapabilities(\n                    f\"Storage block {self.storage!r} missing 'put-directory'\"\n                    \" capability.\"\n                )\n\n            file_count = await self.storage.put_directory(\n                ignore_file=ignore_file, to_path=self.path\n            )\n\n        # persists storage now in case it contains secret values\n        if self.storage and not self.storage._block_document_id:\n            await self.storage._save(is_anonymous=True)\n\n        return file_count\n\n    @sync_compatible\n    async def apply(\n        self, upload: bool = False, work_queue_concurrency: int = None\n    ) -> UUID:\n\"\"\"\n        
Registers this deployment with the API and returns the deployment's ID.\n\n        Args:\n            upload: if True, deployment files are automatically uploaded to remote\n                storage\n            work_queue_concurrency: If provided, sets the concurrency limit on the\n                deployment's work queue\n        \"\"\"\n        if not self.name or not self.flow_name:\n            raise ValueError(\"Both a deployment name and flow name must be set.\")\n        async with get_client() as client:\n            # prep IDs\n            flow_id = await client.create_flow_from_name(self.flow_name)\n\n            infrastructure_document_id = self.infrastructure._block_document_id\n            if not infrastructure_document_id:\n                # if not building off a block, will create an anonymous block\n                self.infrastructure = self.infrastructure.copy()\n                infrastructure_document_id = await self.infrastructure._save(\n                    is_anonymous=True,\n                )\n\n            if upload:\n                await self.upload_to_storage()\n\n            if self.work_queue_name and work_queue_concurrency is not None:\n                try:\n                    res = await client.create_work_queue(\n                        name=self.work_queue_name, work_pool_name=self.work_pool_name\n                    )\n                except ObjectAlreadyExists:\n                    res = await client.read_work_queue_by_name(\n                        name=self.work_queue_name, work_pool_name=self.work_pool_name\n                    )\n                await client.update_work_queue(\n                    res.id, concurrency_limit=work_queue_concurrency\n                )\n\n            # we assume storage was already saved\n            storage_document_id = getattr(self.storage, \"_block_document_id\", None)\n            deployment_id = await client.create_deployment(\n                flow_id=flow_id,\n                name=self.name,\n                work_queue_name=self.work_queue_name,\n                work_pool_name=self.work_pool_name,\n                version=self.version,\n                schedule=self.schedule,\n                is_schedule_active=self.is_schedule_active,\n                parameters=self.parameters,\n                description=self.description,\n                tags=self.tags,\n                manifest_path=self.manifest_path,  # allows for backwards YAML compat\n                path=self.path,\n                entrypoint=self.entrypoint,\n                infra_overrides=self.infra_overrides,\n                storage_document_id=storage_document_id,\n                infrastructure_document_id=infrastructure_document_id,\n                parameter_openapi_schema=self.parameter_openapi_schema.dict(),\n            )\n\n            if client.server_type == ServerType.CLOUD:\n                # The triggers defined in the deployment spec are, essentially,\n                # anonymous and attempting truly sync them with cloud is not\n                # feasible. 
Instead, we remove all automations that are owned\n                # by the deployment, meaning that they were created via this\n                # mechanism below, and then recreate them.\n                await client.delete_resource_owned_automations(\n                    f\"prefect.deployment.{deployment_id}\"\n                )\n                for trigger in self.triggers:\n                    trigger.set_deployment_id(deployment_id)\n                    await client.create_automation(trigger.as_automation())\n\n            return deployment_id\n\n    @classmethod\n    @sync_compatible\n    async def build_from_flow(\n        cls,\n        flow: Flow,\n        name: str,\n        output: str = None,\n        skip_upload: bool = False,\n        ignore_file: str = \".prefectignore\",\n        apply: bool = False,\n        load_existing: bool = True,\n        **kwargs,\n    ) -> \"Deployment\":\n\"\"\"\n        Configure a deployment for a given flow.\n\n        Args:\n            flow: A flow function to deploy\n            name: A name for the deployment\n            output (optional): if provided, the full deployment specification will be\n                written as a YAML file in the location specified by `output`\n            skip_upload: if True, deployment files are not automatically uploaded to\n                remote storage\n            ignore_file: an optional path to a `.prefectignore` file that specifies\n                filename patterns to ignore when uploading to remote storage; if not\n                provided, looks for `.prefectignore` in the current working directory\n            apply: if True, the deployment is automatically registered with the API\n            load_existing: if True, load any settings that may already be configured for\n                the named deployment server-side (e.g., schedules, default parameter\n                values, etc.)\n            **kwargs: other keyword arguments to pass to the constructor for the\n                `Deployment` class\n        \"\"\"\n        if not name:\n            raise ValueError(\"A deployment name must be provided.\")\n\n        # note that `deployment.load` only updates settings that were *not*\n        # provided at initialization\n        deployment = cls(name=name, **kwargs)\n        deployment.flow_name = flow.name\n        if not deployment.entrypoint:\n            ## first see if an entrypoint can be determined\n            flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n            mod_name = getattr(flow, \"__module__\", None)\n            if not flow_file:\n                if not mod_name:\n                    # todo, check if the file location was manually set already\n                    raise ValueError(\"Could not determine flow's file location.\")\n                module = importlib.import_module(mod_name)\n                flow_file = getattr(module, \"__file__\", None)\n                if not flow_file:\n                    raise ValueError(\"Could not determine flow's file location.\")\n\n            # set entrypoint\n            entry_path = Path(flow_file).absolute().relative_to(Path(\".\").absolute())\n            deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n        if load_existing:\n            await deployment.load()\n\n        # set a few attributes for this flow object\n        deployment.parameter_openapi_schema = parameter_schema(flow)\n\n        # ensure the ignore file exists\n        if not Path(ignore_file).exists():\n            
Path(ignore_file).touch()\n\n        if not deployment.version:\n            deployment.version = flow.version\n        if not deployment.description:\n            deployment.description = flow.description\n\n        # proxy for whether infra is docker-based\n        is_docker_based = hasattr(deployment.infrastructure, \"image\")\n\n        if not deployment.storage and not is_docker_based and not deployment.path:\n            deployment.path = str(Path(\".\").absolute())\n        elif not deployment.storage and is_docker_based:\n            # only update if a path is not already set\n            if not deployment.path:\n                deployment.path = \"/opt/prefect/flows\"\n\n        if not skip_upload:\n            if (\n                deployment.storage\n                and \"put-directory\" in deployment.storage.get_block_capabilities()\n            ):\n                await deployment.upload_to_storage(ignore_file=ignore_file)\n\n        if output:\n            await deployment.to_yaml(output)\n\n        if apply:\n            await deployment.apply()\n\n        return deployment\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.location","title":"location: str property","text":"

The 'location' that this deployment points to is given by path alone in the case of no remote storage, and otherwise by storage.basepath / path.

The underlying flow entrypoint is interpreted relative to this location.
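
A minimal sketch of how the location resolves for a deployment without a remote storage block (the flow name and path are illustrative):

from prefect.deployments import Deployment\n\ndeployment = Deployment(name=\"example\", flow_name=\"my-flow\", path=\"/opt/prefect/flows\")\n# with no storage block attached, the location is derived from `path` alone;\n# with remote storage it would be the block's basepath joined with `path`\nprint(deployment.location)\n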

","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.apply","title":"apply async","text":"

Registers this deployment with the API and returns the deployment's ID.

Parameters:

Name Type Description Default upload bool

if True, deployment files are automatically uploaded to remote storage

False work_queue_concurrency int

If provided, sets the concurrency limit on the deployment's work queue

None Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@sync_compatible\nasync def apply(\n    self, upload: bool = False, work_queue_concurrency: int = None\n) -> UUID:\n\"\"\"\n    Registers this deployment with the API and returns the deployment's ID.\n\n    Args:\n        upload: if True, deployment files are automatically uploaded to remote\n            storage\n        work_queue_concurrency: If provided, sets the concurrency limit on the\n            deployment's work queue\n    \"\"\"\n    if not self.name or not self.flow_name:\n        raise ValueError(\"Both a deployment name and flow name must be set.\")\n    async with get_client() as client:\n        # prep IDs\n        flow_id = await client.create_flow_from_name(self.flow_name)\n\n        infrastructure_document_id = self.infrastructure._block_document_id\n        if not infrastructure_document_id:\n            # if not building off a block, will create an anonymous block\n            self.infrastructure = self.infrastructure.copy()\n            infrastructure_document_id = await self.infrastructure._save(\n                is_anonymous=True,\n            )\n\n        if upload:\n            await self.upload_to_storage()\n\n        if self.work_queue_name and work_queue_concurrency is not None:\n            try:\n                res = await client.create_work_queue(\n                    name=self.work_queue_name, work_pool_name=self.work_pool_name\n                )\n            except ObjectAlreadyExists:\n                res = await client.read_work_queue_by_name(\n                    name=self.work_queue_name, work_pool_name=self.work_pool_name\n                )\n            await client.update_work_queue(\n                res.id, concurrency_limit=work_queue_concurrency\n            )\n\n        # we assume storage was already saved\n        storage_document_id = getattr(self.storage, \"_block_document_id\", None)\n        deployment_id = await client.create_deployment(\n            flow_id=flow_id,\n            name=self.name,\n            work_queue_name=self.work_queue_name,\n            work_pool_name=self.work_pool_name,\n            version=self.version,\n            schedule=self.schedule,\n            is_schedule_active=self.is_schedule_active,\n            parameters=self.parameters,\n            description=self.description,\n            tags=self.tags,\n            manifest_path=self.manifest_path,  # allows for backwards YAML compat\n            path=self.path,\n            entrypoint=self.entrypoint,\n            infra_overrides=self.infra_overrides,\n            storage_document_id=storage_document_id,\n            infrastructure_document_id=infrastructure_document_id,\n            parameter_openapi_schema=self.parameter_openapi_schema.dict(),\n        )\n\n        if client.server_type == ServerType.CLOUD:\n            # The triggers defined in the deployment spec are, essentially,\n            # anonymous and attempting truly sync them with cloud is not\n            # feasible. Instead, we remove all automations that are owned\n            # by the deployment, meaning that they were created via this\n            # mechanism below, and then recreate them.\n            await client.delete_resource_owned_automations(\n                f\"prefect.deployment.{deployment_id}\"\n            )\n            for trigger in self.triggers:\n                trigger.set_deployment_id(deployment_id)\n                await client.create_automation(trigger.as_automation())\n\n        return deployment_id\n
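
A minimal usage sketch, assuming a reachable Prefect API and a flow defined in a hypothetical my_flow.py (all names are illustrative):

from prefect.deployments import Deployment\n\ndeployment = Deployment(\n    name=\"example-deployment\",\n    flow_name=\"my-flow\",\n    entrypoint=\"my_flow.py:my_flow\",\n    work_queue_name=\"default\",\n)\n# registers (or updates) the deployment and returns its ID\ndeployment_id = deployment.apply(work_queue_concurrency=2)\nprint(deployment_id)\n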
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.build_from_flow","title":"build_from_flow async classmethod","text":"

Configure a deployment for a given flow.

Parameters:

Name Type Description Default flow Flow

A flow function to deploy

required name str

A name for the deployment

required output optional

if provided, the full deployment specification will be written as a YAML file in the location specified by output

None skip_upload bool

if True, deployment files are not automatically uploaded to remote storage

False ignore_file str

an optional path to a .prefectignore file that specifies filename patterns to ignore when uploading to remote storage; if not provided, looks for .prefectignore in the current working directory

'.prefectignore' apply bool

if True, the deployment is automatically registered with the API

False load_existing bool

if True, load any settings that may already be configured for the named deployment server-side (e.g., schedules, default parameter values, etc.)

True **kwargs

other keyword arguments to pass to the constructor for the Deployment class

{} Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@classmethod\n@sync_compatible\nasync def build_from_flow(\n    cls,\n    flow: Flow,\n    name: str,\n    output: str = None,\n    skip_upload: bool = False,\n    ignore_file: str = \".prefectignore\",\n    apply: bool = False,\n    load_existing: bool = True,\n    **kwargs,\n) -> \"Deployment\":\n\"\"\"\n    Configure a deployment for a given flow.\n\n    Args:\n        flow: A flow function to deploy\n        name: A name for the deployment\n        output (optional): if provided, the full deployment specification will be\n            written as a YAML file in the location specified by `output`\n        skip_upload: if True, deployment files are not automatically uploaded to\n            remote storage\n        ignore_file: an optional path to a `.prefectignore` file that specifies\n            filename patterns to ignore when uploading to remote storage; if not\n            provided, looks for `.prefectignore` in the current working directory\n        apply: if True, the deployment is automatically registered with the API\n        load_existing: if True, load any settings that may already be configured for\n            the named deployment server-side (e.g., schedules, default parameter\n            values, etc.)\n        **kwargs: other keyword arguments to pass to the constructor for the\n            `Deployment` class\n    \"\"\"\n    if not name:\n        raise ValueError(\"A deployment name must be provided.\")\n\n    # note that `deployment.load` only updates settings that were *not*\n    # provided at initialization\n    deployment = cls(name=name, **kwargs)\n    deployment.flow_name = flow.name\n    if not deployment.entrypoint:\n        ## first see if an entrypoint can be determined\n        flow_file = getattr(flow, \"__globals__\", {}).get(\"__file__\")\n        mod_name = getattr(flow, \"__module__\", None)\n        if not flow_file:\n            if not mod_name:\n                # todo, check if the file location was manually set already\n                raise ValueError(\"Could not determine flow's file location.\")\n            module = importlib.import_module(mod_name)\n            flow_file = getattr(module, \"__file__\", None)\n            if not flow_file:\n                raise ValueError(\"Could not determine flow's file location.\")\n\n        # set entrypoint\n        entry_path = Path(flow_file).absolute().relative_to(Path(\".\").absolute())\n        deployment.entrypoint = f\"{entry_path}:{flow.fn.__name__}\"\n\n    if load_existing:\n        await deployment.load()\n\n    # set a few attributes for this flow object\n    deployment.parameter_openapi_schema = parameter_schema(flow)\n\n    # ensure the ignore file exists\n    if not Path(ignore_file).exists():\n        Path(ignore_file).touch()\n\n    if not deployment.version:\n        deployment.version = flow.version\n    if not deployment.description:\n        deployment.description = flow.description\n\n    # proxy for whether infra is docker-based\n    is_docker_based = hasattr(deployment.infrastructure, \"image\")\n\n    if not deployment.storage and not is_docker_based and not deployment.path:\n        deployment.path = str(Path(\".\").absolute())\n    elif not deployment.storage and is_docker_based:\n        # only update if a path is not already set\n        if not deployment.path:\n            deployment.path = \"/opt/prefect/flows\"\n\n    if not skip_upload:\n        if (\n            deployment.storage\n            and \"put-directory\" in deployment.storage.get_block_capabilities()\n        ):\n   
         await deployment.upload_to_storage(ignore_file=ignore_file)\n\n    if output:\n        await deployment.to_yaml(output)\n\n    if apply:\n        await deployment.apply()\n\n    return deployment\n
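
A minimal usage sketch, assuming a flow function importable from a hypothetical my_flow.py and a reachable Prefect API:

from prefect.deployments import Deployment\nfrom my_flow import my_flow  # hypothetical module containing a @flow function\n\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"example-deployment\",\n    work_queue_name=\"default\",\n    apply=True,  # also register the deployment with the API\n)\n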
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.handle_openapi_schema","title":"handle_openapi_schema","text":"

This method ensures setting a value of None is handled gracefully.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@validator(\"parameter_openapi_schema\", pre=True)\ndef handle_openapi_schema(cls, value):\n\"\"\"\n    This method ensures setting a value of `None` is handled gracefully.\n    \"\"\"\n    if value is None:\n        return ParameterSchema()\n    return value\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.load","title":"load async","text":"

Queries the API for a deployment with this name for this flow, and if found, prepopulates any settings that were not set at initialization.

Returns a boolean specifying whether a load was successful or not.

Raises:

Type Description ValueError

if either the deployment name or the flow name is not set

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@sync_compatible\nasync def load(self) -> bool:\n\"\"\"\n    Queries the API for a deployment with this name for this flow, and if found,\n    prepopulates any settings that were not set at initialization.\n\n    Returns a boolean specifying whether a load was successful or not.\n\n    Raises:\n        - ValueError: if both name and flow name are not set\n    \"\"\"\n    if not self.name or not self.flow_name:\n        raise ValueError(\"Both a deployment name and flow name must be provided.\")\n    async with get_client() as client:\n        try:\n            deployment = await client.read_deployment_by_name(\n                f\"{self.flow_name}/{self.name}\"\n            )\n            if deployment.storage_document_id:\n                Block._from_block_document(\n                    await client.read_block_document(deployment.storage_document_id)\n                )\n\n            excluded_fields = self.__fields_set__.union(\n                {\"infrastructure\", \"storage\", \"timestamp\", \"triggers\"}\n            )\n            for field in set(self.__fields__.keys()) - excluded_fields:\n                new_value = getattr(deployment, field)\n                setattr(self, field, new_value)\n\n            if \"infrastructure\" not in self.__fields_set__:\n                if deployment.infrastructure_document_id:\n                    self.infrastructure = Block._from_block_document(\n                        await client.read_block_document(\n                            deployment.infrastructure_document_id\n                        )\n                    )\n            if \"storage\" not in self.__fields_set__:\n                if deployment.storage_document_id:\n                    self.storage = Block._from_block_document(\n                        await client.read_block_document(\n                            deployment.storage_document_id\n                        )\n                    )\n        except ObjectNotFound:\n            return False\n    return True\n
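
A minimal usage sketch, assuming a deployment named example-deployment already exists for a flow named my-flow on the API (names are illustrative):

from prefect.deployments import Deployment\n\ndeployment = Deployment(name=\"example-deployment\", flow_name=\"my-flow\")\nfound = deployment.load()  # True if a matching deployment exists server-side\nif found:\n    print(deployment.parameters)\n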
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.update","title":"update async","text":"

Performs an in-place update with the provided settings.

Parameters:

Name Type Description Default ignore_none bool

if True, all None values are ignored when performing the update

False Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@sync_compatible\nasync def update(self, ignore_none: bool = False, **kwargs):\n\"\"\"\n    Performs an in-place update with the provided settings.\n\n    Args:\n        ignore_none: if True, all `None` values are ignored when performing the\n            update\n    \"\"\"\n    unknown_keys = set(kwargs.keys()) - set(self.dict().keys())\n    if unknown_keys:\n        raise ValueError(\n            f\"Received unexpected attributes: {', '.join(unknown_keys)}\"\n        )\n    for key, value in kwargs.items():\n        if ignore_none and value is None:\n            continue\n        setattr(self, key, value)\n
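
A minimal usage sketch of an in-place update (field values are illustrative; unknown keys raise a ValueError):

from prefect.deployments import Deployment\n\ndeployment = Deployment(name=\"example-deployment\", flow_name=\"my-flow\")\ndeployment.update(tags=[\"prod\"], description=None, ignore_none=True)\n# description is skipped because it is None and ignore_none=True\n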
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.upload_to_storage","title":"upload_to_storage async","text":"

Uploads the workflow this deployment represents using a provided storage block; if no block is provided, defaults to configuring self for local storage.

Parameters:

Name Type Description Default storage_block str

a string reference to a remote storage block slug $type/$name; if provided, used to upload the workflow's project

None ignore_file str

an optional path to a .prefectignore file that specifies filename patterns to ignore when uploading to remote storage; if not provided, looks for .prefectignore in the current working directory

'.prefectignore' Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@sync_compatible\nasync def upload_to_storage(\n    self, storage_block: str = None, ignore_file: str = \".prefectignore\"\n) -> Optional[int]:\n\"\"\"\n    Uploads the workflow this deployment represents using a provided storage block;\n    if no block is provided, defaults to configuring self for local storage.\n\n    Args:\n        storage_block: a string reference a remote storage block slug `$type/$name`;\n            if provided, used to upload the workflow's project\n        ignore_file: an optional path to a `.prefectignore` file that specifies\n            filename patterns to ignore when uploading to remote storage; if not\n            provided, looks for `.prefectignore` in the current working directory\n    \"\"\"\n    file_count = None\n    if storage_block:\n        storage = await Block.load(storage_block)\n\n        if \"put-directory\" not in storage.get_block_capabilities():\n            raise BlockMissingCapabilities(\n                f\"Storage block {storage!r} missing 'put-directory' capability.\"\n            )\n\n        self.storage = storage\n\n        # upload current directory to storage location\n        file_count = await self.storage.put_directory(\n            ignore_file=ignore_file, to_path=self.path\n        )\n    elif self.storage:\n        if \"put-directory\" not in self.storage.get_block_capabilities():\n            raise BlockMissingCapabilities(\n                f\"Storage block {self.storage!r} missing 'put-directory'\"\n                \" capability.\"\n            )\n\n        file_count = await self.storage.put_directory(\n            ignore_file=ignore_file, to_path=self.path\n        )\n\n    # persists storage now in case it contains secret values\n    if self.storage and not self.storage._block_document_id:\n        await self.storage._save(is_anonymous=True)\n\n    return file_count\n
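
A minimal usage sketch, assuming a remote storage block has already been saved under the hypothetical slug s3/my-bucket:

from prefect.deployments import Deployment\n\ndeployment = Deployment(name=\"example-deployment\", flow_name=\"my-flow\")\nfile_count = deployment.upload_to_storage(storage_block=\"s3/my-bucket\")\nprint(f\"uploaded {file_count} files\")\n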
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.Deployment.validate_automation_names","title":"validate_automation_names","text":"

Ensure that each trigger has a name for its automation if none is provided.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@validator(\"triggers\")\ndef validate_automation_names(cls, field_value, values, field, config):\n\"\"\"Ensure that each trigger has a name for its automation if none is provided.\"\"\"\n    for i, trigger in enumerate(field_value, start=1):\n        if trigger.name is None:\n            trigger.name = f\"{values['name']}__automation_{i}\"\n\n    return field_value\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.load_deployments_from_yaml","title":"load_deployments_from_yaml","text":"

Load deployments from a yaml file.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
def load_deployments_from_yaml(\n    path: str,\n) -> PrefectObjectRegistry:\n\"\"\"\n    Load deployments from a yaml file.\n    \"\"\"\n    with open(path, \"r\") as f:\n        contents = f.read()\n\n    # Parse into a yaml tree to retrieve separate documents\n    nodes = yaml.compose_all(contents)\n\n    with PrefectObjectRegistry(capture_failures=True) as registry:\n        for node in nodes:\n            with tmpchdir(path):\n                deployment_dict = yaml.safe_load(yaml.serialize(node))\n                # The return value is not necessary, just instantiating the Deployment\n                # is enough to get it recorded on the registry\n                parse_obj_as(Deployment, deployment_dict)\n\n    return registry\n
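
A minimal usage sketch, assuming a deployment.yaml file containing one or more deployment specifications; reading the registered objects back out via get_instances is one plausible way to use the returned registry:

from prefect.deployments.deployments import Deployment, load_deployments_from_yaml\n\nregistry = load_deployments_from_yaml(\"deployment.yaml\")\nfor deployment in registry.get_instances(Deployment):\n    print(deployment.name)\n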
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.load_flow_from_flow_run","title":"load_flow_from_flow_run async","text":"

Load a flow from the location/script provided in a deployment's storage document.

If ignore_storage=True is provided, no pull from remote storage occurs. This flag is largely for testing, and assumes the flow is already available locally.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@inject_client\nasync def load_flow_from_flow_run(\n    flow_run: FlowRun, client: PrefectClient, ignore_storage: bool = False\n) -> Flow:\n\"\"\"\n    Load a flow from the location/script provided in a deployment's storage document.\n\n    If `ignore_storage=True` is provided, no pull from remote storage occurs.  This flag\n    is largely for testing, and assumes the flow is already available locally.\n    \"\"\"\n    deployment = await client.read_deployment(flow_run.deployment_id)\n    logger = flow_run_logger(flow_run)\n\n    if not ignore_storage and not deployment.pull_steps:\n        sys.path.insert(0, \".\")\n        if deployment.storage_document_id:\n            storage_document = await client.read_block_document(\n                deployment.storage_document_id\n            )\n            storage_block = Block._from_block_document(storage_document)\n        else:\n            basepath = deployment.path or Path(deployment.manifest_path).parent\n            storage_block = LocalFileSystem(basepath=basepath)\n\n        logger.info(f\"Downloading flow code from storage at {deployment.path!r}\")\n        await storage_block.get_directory(from_path=deployment.path, local_path=\".\")\n\n    if deployment.pull_steps:\n        logger.debug(f\"Running {len(deployment.pull_steps)} deployment pull steps\")\n        output = await run_steps(deployment.pull_steps)\n        if output.get(\"directory\"):\n            logger.debug(f\"Changing working directory to {output['directory']!r}\")\n            os.chdir(output[\"directory\"])\n\n    import_path = relative_path_to_current_platform(deployment.entrypoint)\n    logger.debug(f\"Importing flow code from '{import_path}'\")\n\n    # for backwards compat\n    if deployment.manifest_path:\n        with open(deployment.manifest_path, \"r\") as f:\n            import_path = json.load(f)[\"import_path\"]\n            import_path = (\n                Path(deployment.manifest_path).parent / import_path\n            ).absolute()\n    flow = await run_sync_in_worker_thread(load_flow_from_entrypoint, str(import_path))\n    return flow\n
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/deployments/#prefect.deployments.deployments.run_deployment","title":"run_deployment async","text":"

Create a flow run for a deployment and return it after completion or a timeout.

This function will return when the created flow run enters any terminal state or the timeout is reached. If the timeout is reached and the flow run has not reached a terminal state, it will still be returned. When using a timeout, we suggest checking the state of the flow run if completion is important moving forward.

Parameters:

Name Type Description Default name Union[str, UUID]

The deployment id or deployment name in the form: <slugified-flow-name>/<slugified-deployment-name>

required parameters Optional[dict]

Parameter overrides for this flow run. Merged with the deployment defaults.

None scheduled_time Optional[datetime]

The time to schedule the flow run for, defaults to scheduling the flow run to start now.

None flow_run_name Optional[str]

A name for the created flow run

None timeout Optional[float]

The amount of time to wait for the flow run to complete before returning. Setting timeout to 0 will return the flow run immediately. Setting timeout to None will allow this function to poll indefinitely. Defaults to None

None poll_interval Optional[float]

The number of seconds between polls

5 tags Optional[Iterable[str]]

A list of tags to associate with this flow run; note that tags are used only for organizational purposes.

None idempotency_key Optional[str]

A unique value to recognize retries of the same run, and prevent creating multiple flow runs.

None work_queue_name Optional[str]

The name of a work queue to use for this run. Defaults to the default work queue for the deployment.

None Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/deployments.py
@sync_compatible\n@inject_client\nasync def run_deployment(\n    name: Union[str, UUID],\n    client: Optional[PrefectClient] = None,\n    parameters: Optional[dict] = None,\n    scheduled_time: Optional[datetime] = None,\n    flow_run_name: Optional[str] = None,\n    timeout: Optional[float] = None,\n    poll_interval: Optional[float] = 5,\n    tags: Optional[Iterable[str]] = None,\n    idempotency_key: Optional[str] = None,\n    work_queue_name: Optional[str] = None,\n):\n\"\"\"\n    Create a flow run for a deployment and return it after completion or a timeout.\n\n    This function will return when the created flow run enters any terminal state or\n    the timeout is reached. If the timeout is reached and the flow run has not reached\n    a terminal state, it will still be returned. When using a timeout, we suggest\n    checking the state of the flow run if completion is important moving forward.\n\n    Args:\n        name: The deployment id or deployment name in the form:\n            `<slugified-flow-name>/<slugified-deployment-name>`\n        parameters: Parameter overrides for this flow run. Merged with the deployment\n            defaults.\n        scheduled_time: The time to schedule the flow run for, defaults to scheduling\n            the flow run to start now.\n        flow_run_name: A name for the created flow run\n        timeout: The amount of time to wait for the flow run to complete before\n            returning. Setting `timeout` to 0 will return the flow run immediately.\n            Setting `timeout` to None will allow this function to poll indefinitely.\n            Defaults to None\n        poll_interval: The number of seconds between polls\n        tags: A list of tags to associate with this flow run; note that tags are used\n            only for organizational purposes.\n        idempotency_key: A unique value to recognize retries of the same run, and\n            prevent creating multiple flow runs.\n        work_queue_name: The name of a work queue to use for this run. Defaults to\n            the default work queue for the deployment.\n    \"\"\"\n    if timeout is not None and timeout < 0:\n        raise ValueError(\"`timeout` cannot be negative\")\n\n    if scheduled_time is None:\n        scheduled_time = pendulum.now(\"UTC\")\n\n    parameters = parameters or {}\n\n    deployment_id = None\n\n    if isinstance(name, UUID):\n        deployment_id = name\n    else:\n        try:\n            deployment_id = UUID(name)\n        except ValueError:\n            pass\n\n    if deployment_id:\n        deployment = await client.read_deployment(deployment_id=deployment_id)\n    else:\n        deployment = await client.read_deployment_by_name(name)\n\n    flow_run_ctx = FlowRunContext.get()\n    if flow_run_ctx:\n        # This was called from a flow. 
Link the flow run as a subflow.\n        from prefect.engine import (\n            Pending,\n            _dynamic_key_for_task_run,\n            collect_task_run_inputs,\n        )\n\n        task_inputs = {\n            k: await collect_task_run_inputs(v) for k, v in parameters.items()\n        }\n\n        if deployment_id:\n            flow = await client.read_flow(deployment.flow_id)\n            deployment_name = f\"{flow.name}/{deployment.name}\"\n        else:\n            deployment_name = name\n\n        # Generate a task in the parent flow run to represent the result of the subflow\n        dummy_task = Task(\n            name=deployment_name,\n            fn=lambda: None,\n            version=deployment.version,\n        )\n        # Override the default task key to include the deployment name\n        dummy_task.task_key = f\"{__name__}.run_deployment.{slugify(deployment_name)}\"\n        parent_task_run = await client.create_task_run(\n            task=dummy_task,\n            flow_run_id=flow_run_ctx.flow_run.id,\n            dynamic_key=_dynamic_key_for_task_run(flow_run_ctx, dummy_task),\n            task_inputs=task_inputs,\n            state=Pending(),\n        )\n        parent_task_run_id = parent_task_run.id\n    else:\n        parent_task_run_id = None\n\n    flow_run = await client.create_flow_run_from_deployment(\n        deployment.id,\n        parameters=parameters,\n        state=Scheduled(scheduled_time=scheduled_time),\n        name=flow_run_name,\n        tags=tags,\n        idempotency_key=idempotency_key,\n        parent_task_run_id=parent_task_run_id,\n        work_queue_name=work_queue_name,\n    )\n\n    flow_run_id = flow_run.id\n\n    if timeout == 0:\n        return flow_run\n\n    with anyio.move_on_after(timeout):\n        while True:\n            flow_run = await client.read_flow_run(flow_run_id)\n            flow_state = flow_run.state\n            if flow_state and flow_state.is_final():\n                return flow_run\n            await anyio.sleep(poll_interval)\n\n    return flow_run\n
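
A minimal usage sketch, assuming an existing deployment named example-deployment for a flow named my-flow (names and parameters are placeholders):

from prefect.deployments import run_deployment\n\nflow_run = run_deployment(\n    name=\"my-flow/example-deployment\",\n    parameters={\"limit\": 10},\n    timeout=120,  # seconds to wait; 0 returns immediately after scheduling\n)\nprint(flow_run.state)\n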
","tags":["Python API","flow runs","deployments"]},{"location":"api-ref/prefect/deployments/steps/core/","title":"prefect.deployments.steps.core","text":"","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/core/#prefect.deployments.steps.core","title":"prefect.deployments.steps.core","text":"

Core primitives for running Prefect project steps.

Project steps are YAML representations of Python functions along with their inputs.

Whenever a step is run, the following actions are taken: the step's inputs are resolved against upstream step outputs, block document references, variables, and environment variables; the step's function is imported (installing any packages listed under the 'requires' keyword if needed); and the function is called with the resolved inputs, its return value becoming the step's output.

","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/core/#prefect.deployments.steps.core.StepExecutionError","title":"StepExecutionError","text":"

Bases: Exception

Raised when a step fails to execute.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/steps/core.py
class StepExecutionError(Exception):\n\"\"\"\n    Raised when a step fails to execute.\n    \"\"\"\n
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/core/#prefect.deployments.steps.core.run_step","title":"run_step async","text":"

Runs a step, returns the step's output.

Steps are assumed to be in the format {\"importable.func.name\": {\"kwarg1\": \"value1\", ...}}.

The 'id' and 'requires' keywords are reserved for specific purposes and will be removed from the inputs before passing to the step function:

The 'id' keyword names a step so that its outputs can be referenced by later steps, and the 'requires' keyword is used to specify packages that should be installed before running the step.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/steps/core.py
async def run_step(step: Dict, upstream_outputs: Optional[Dict] = None) -> Dict:\n\"\"\"\n    Runs a step, returns the step's output.\n\n    Steps are assumed to be in the format `{\"importable.func.name\": {\"kwarg1\": \"value1\", ...}}`.\n\n    The 'id and 'requires' keywords are reserved for specific purposes and will be removed from the\n    inputs before passing to the step function:\n\n    This keyword is used to specify packages that should be installed before running the step.\n    \"\"\"\n    fqn, inputs = _get_step_fully_qualified_name_and_inputs(step)\n    upstream_outputs = upstream_outputs or {}\n\n    if len(step.keys()) > 1:\n        raise ValueError(\n            f\"Step has unexpected additional keys: {', '.join(list(step.keys())[1:])}\"\n        )\n\n    keywords = {\n        keyword: inputs.pop(keyword)\n        for keyword in RESERVED_KEYWORDS\n        if keyword in inputs\n    }\n\n    inputs = apply_values(inputs, upstream_outputs)\n    inputs = await resolve_block_document_references(inputs)\n    inputs = await resolve_variables(inputs)\n    inputs = apply_values(inputs, os.environ)\n    step_func = _get_function_for_step(fqn, requires=keywords.get(\"requires\"))\n    result = await from_async.call_soon_in_new_thread(\n        Call.new(step_func, **inputs)\n    ).aresult()\n    return result\n
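
A rough sketch of calling run_step directly; resolving templated inputs may contact the Prefect API, so a configured Prefect environment is assumed, and the step shown is purely illustrative:

import asyncio\n\nfrom prefect.deployments.steps.core import run_step\n\n# a step is a single-key mapping of an importable function name to its inputs\nstep = {\n    \"prefect.deployments.steps.run_shell_script\": {\n        \"id\": \"say-hello\",  # reserved keyword: names this step's outputs\n        \"script\": \"echo hello\",\n    }\n}\noutput = asyncio.run(run_step(step))\nprint(output[\"stdout\"])\n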
","tags":["Python API","projects","deployments","steps"]},{"location":"api-ref/prefect/deployments/steps/pull/","title":"prefect.deployments.steps.pull","text":"","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull","title":"prefect.deployments.steps.pull","text":"

Core set of steps for specifying a Prefect project pull step.

","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.git_clone","title":"git_clone","text":"

Clones a git repository into the current working directory.

Parameters:

Name Type Description Default repository str

the URL of the repository to clone

required branch str

the branch to clone; if not provided, the default branch will be used

None include_submodules bool

whether to include git submodules when cloning the repository

False access_token str

an access token to use for cloning the repository; if not provided the repository will be cloned using the default git credentials

None credentials optional

a GitHubCredentials, GitLabCredentials, or BitBucketCredentials block can be used to specify the credentials to use for cloning the repository

None

Returns:

Name Type Description dict dict

a dictionary containing a directory key of the new directory that was created

Raises:

Type Description subprocess.CalledProcessError

if the git clone command fails for any reason

Examples:

Clone a public repository:

pull:\n- prefect.deployments.steps.git_clone:\nrepository: https://github.com/PrefectHQ/prefect.git\n

Clone a branch of a public repository:

pull:\n- prefect.deployments.steps.git_clone:\nrepository: https://github.com/PrefectHQ/prefect.git\nbranch: my-branch\n

Clone a private repository using a GitHubCredentials block:

pull:\n- prefect.deployments.steps.git_clone:\nrepository: https://github.com/org/repo.git\ncredentials: \"{{ prefect.blocks.github-credentials.my-github-credentials-block }}\"\n

Clone a private repository using an access token:

pull:\n- prefect.deployments.steps.git_clone:\nrepository: https://github.com/org/repo.git\naccess_token: \"{{ prefect.blocks.secret.github-access-token }}\" # Requires creation of a Secret block\n
Note that you will need to create a Secret block to store the value of your git credentials. You can also store a username/password combo or token prefix (e.g. x-token-auth) in your secret block. Refer to your git provider's documentation for the correct authentication schema.

Clone a repository with submodules:

pull:\n- prefect.deployments.steps.git_clone:\nrepository: https://github.com/org/repo.git\ninclude_submodules: true\n

Clone a repository with an SSH key (note that the SSH key must be added to the worker before executing flows):

pull:\n- prefect.deployments.steps.git_clone:\nrepository: git@github.com:org/repo.git\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/steps/pull.py
def git_clone(\n    repository: str,\n    branch: Optional[str] = None,\n    include_submodules: bool = False,\n    access_token: Optional[str] = None,\n    credentials: Optional[Block] = None,\n) -> dict:\n\"\"\"\n    Clones a git repository into the current working directory.\n\n    Args:\n        repository (str): the URL of the repository to clone\n        branch (str, optional): the branch to clone; if not provided, the default branch will be used\n        include_submodules (bool): whether to include git submodules when cloning the repository\n        access_token (str, optional): an access token to use for cloning the repository; if not provided\n            the repository will be cloned using the default git credentials\n        credentials (optional): a GitHubCredentials, GitLabCredentials, or BitBucketCredentials block can be used to specify the\n        credentials to use for cloning the repository.\n\n    Returns:\n        dict: a dictionary containing a `directory` key of the new directory that was created\n\n    Raises:\n        subprocess.CalledProcessError: if the git clone command fails for any reason\n\n    Examples:\n        Clone a public repository:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/PrefectHQ/prefect.git\n        ```\n\n        Clone a branch of a public repository:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/PrefectHQ/prefect.git\n                branch: my-branch\n        ```\n\n        Clone a private repository using a GitHubCredentials block:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/org/repo.git\n                credentials: \"{{ prefect.blocks.github-credentials.my-github-credentials-block }}\"\n        ```\n\n        Clone a private repository using an access token:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/org/repo.git\n                access_token: \"{{ prefect.blocks.secret.github-access-token }}\" # Requires creation of a Secret block\n        ```\n        Note that you will need to [create a Secret block](/concepts/blocks/#using-existing-block-types) to store the\n        value of your git credentials. You can also store a username/password combo or token prefix (e.g. `x-token-auth`)\n        in your secret block. 
Refer to your git providers documentation for the correct authentication schema.\n\n        Clone a repository with submodules:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: https://github.com/org/repo.git\n                include_submodules: true\n        ```\n\n        Clone a repository with an SSH key (note that the SSH key must be added to the worker\n        before executing flows):\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                repository: git@github.com:org/repo.git\n        ```\n    \"\"\"\n    if access_token and credentials:\n        raise ValueError(\n            \"Please provide either an access token or credentials but not both.\"\n        )\n\n    if credentials:\n        if credentials.get(\"token\"):\n            access_token = credentials[\"token\"]\n        elif credentials.get(\"username\") and credentials.get(\"password\"):\n            access_token = f\"{credentials['username']}:{credentials['password']}\"\n\n    url_components = urllib.parse.urlparse(repository)\n    if url_components.scheme == \"https\" and access_token is not None:\n        updated_components = url_components._replace(\n            netloc=f\"{access_token}@{url_components.netloc}\"\n        )\n        repository_url = urllib.parse.urlunparse(updated_components)\n    else:\n        repository_url = repository\n\n    cmd = [\"git\", \"clone\", repository_url]\n    if branch:\n        cmd += [\"-b\", branch]\n    if include_submodules:\n        cmd += [\"--recurse-submodules\"]\n\n    # Limit git history\n    cmd += [\"--depth\", \"1\"]\n\n    try:\n        subprocess.check_call(\n            cmd, shell=sys.platform == \"win32\", stderr=sys.stderr, stdout=sys.stdout\n        )\n    except subprocess.CalledProcessError as exc:\n        # Hide the command used to avoid leaking the access token\n        exc_chain = None if access_token else exc\n        raise RuntimeError(\n            f\"Failed to clone repository {repository!r} with exit code\"\n            f\" {exc.returncode}.\"\n        ) from exc_chain\n\n    directory = \"/\".join(repository.strip().split(\"/\")[-1:]).replace(\".git\", \"\")\n    deployment_logger.info(f\"Cloned repository {repository!r} into {directory!r}\")\n    return {\"directory\": directory}\n
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.git_clone_project","title":"git_clone_project","text":"

Deprecated. Use git_clone instead.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/steps/pull.py
@deprecated_callable(start_date=\"Jun 2023\", help=\"Use 'git clone' instead.\")\ndef git_clone_project(\n    repository: str,\n    branch: Optional[str] = None,\n    include_submodules: bool = False,\n    access_token: Optional[str] = None,\n) -> dict:\n\"\"\"Deprecated. Use `git_clone` instead.\"\"\"\n    return git_clone(\n        repository=repository,\n        branch=branch,\n        include_submodules=include_submodules,\n        access_token=access_token,\n    )\n
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/pull/#prefect.deployments.steps.pull.set_working_directory","title":"set_working_directory","text":"

Sets the working directory; works with both absolute and relative paths.

Parameters:

Name Type Description Default directory str

the directory to set as the working directory

required

Returns:

Name Type Description dict dict

a dictionary containing a directory key of the directory that was set

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/steps/pull.py
def set_working_directory(directory: str) -> dict:\n\"\"\"\n    Sets the working directory; works with both absolute and relative paths.\n\n    Args:\n        directory (str): the directory to set as the working directory\n\n    Returns:\n        dict: a dictionary containing a `directory` key of the\n            directory that was set\n    \"\"\"\n    os.chdir(directory)\n    return dict(directory=directory)\n
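
A minimal usage sketch (the directory path is illustrative):

from prefect.deployments.steps.pull import set_working_directory\n\n# changes the process working directory and reports it back for later steps\nresult = set_working_directory(\"/opt/prefect/flows\")\nprint(result)  # {'directory': '/opt/prefect/flows'}\n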
","tags":["Python API","projects","deployments","steps","pull step"]},{"location":"api-ref/prefect/deployments/steps/utility/","title":"prefect.deployments.steps.utility","text":"","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility","title":"prefect.deployments.steps.utility","text":"

Utility project steps that are useful for managing a project's deployment lifecycle.

Steps within this module can be used within a build, push, or pull deployment action.

Example

Use the run_shell_script step to retrieve the short Git commit hash of the current repository and use it as a Docker image tag:

build:\n- prefect.deployments.steps.run_shell_script:\nid: get-commit-hash\nscript: git rev-parse --short HEAD\nstream_output: false\n- prefect_docker.deployments.steps.build_docker_image:\nrequires: prefect-docker\nimage_name: my-image\nimage_tag: \"{{ get-commit-hash.stdout }}\"\ndockerfile: auto\n

","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility.RunShellScriptResult","title":"RunShellScriptResult","text":"

Bases: TypedDict

The result of a run_shell_script step.

Attributes:

Name Type Description stdout str

The captured standard output of the script.

stderr str

The captured standard error of the script.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/steps/utility.py
class RunShellScriptResult(TypedDict):\n\"\"\"\n    The result of a `run_shell_script` step.\n\n    Attributes:\n        stdout: The captured standard output of the script.\n        stderr: The captured standard error of the script.\n    \"\"\"\n\n    stdout: str\n    stderr: str\n
","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility.pip_install_requirements","title":"pip_install_requirements async","text":"

Installs dependencies from a requirements.txt file.

Parameters:

Name Type Description Default requirements_file str

The requirements.txt to use for installation.

'requirements.txt' directory Optional[str]

The directory the requirements.txt file is in. Defaults to the current working directory.

None stream_output bool

Whether the output from the pip install command should be streamed to the console

True

Returns:

Type Description

A dictionary with the keys stdout and stderr containing the output of the pip install command

Raises:

Type Description subprocess.CalledProcessError

if the pip install command fails for any reason

Example
pull:\n- prefect.deployments.steps.git_clone:\nid: clone-step\nrepository: https://github.com/org/repo.git\n- prefect.deployments.steps.pip_install_requirements:\ndirectory: {{ clone-step.directory }}\nrequirements_file: requirements.txt\nstream_output: False\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/steps/utility.py
async def pip_install_requirements(\n    directory: Optional[str] = None,\n    requirements_file: str = \"requirements.txt\",\n    stream_output: bool = True,\n):\n\"\"\"\n    Installs dependencies from a requirements.txt file.\n\n    Args:\n        requirements_file: The requirements.txt to use for installation.\n        directory: The directory the requirements.txt file is in. Defaults to\n            the current working directory.\n        stream_output: Whether to stream the output from pip install should be\n            streamed to the console\n\n    Returns:\n        A dictionary with the keys `stdout` and `stderr` containing the output\n            the `pip install` command\n\n    Raises:\n        subprocess.CalledProcessError: if the pip install command fails for any reason\n\n    Example:\n        ```yaml\n        pull:\n            - prefect.deployments.steps.git_clone:\n                id: clone-step\n                repository: https://github.com/org/repo.git\n            - prefect.deployments.steps.pip_install_requirements:\n                directory: {{ clone-step.directory }}\n                requirements_file: requirements.txt\n                stream_output: False\n        ```\n    \"\"\"\n    stdout_sink = io.StringIO()\n    stderr_sink = io.StringIO()\n\n    async with open_process(\n        [sys.executable, \"-m\", \"pip\", \"install\", \"-r\", requirements_file],\n        stdout=subprocess.PIPE,\n        stderr=subprocess.PIPE,\n        cwd=directory,\n    ) as process:\n        await _stream_capture_process_output(\n            process,\n            stdout_sink=stdout_sink,\n            stderr_sink=stderr_sink,\n            stream_output=stream_output,\n        )\n        await process.wait()\n\n    return {\n        \"stdout\": stdout_sink.getvalue().strip(),\n        \"stderr\": stderr_sink.getvalue().strip(),\n    }\n
","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/deployments/steps/utility/#prefect.deployments.steps.utility.run_shell_script","title":"run_shell_script async","text":"

Runs one or more shell commands in a subprocess. Returns the standard output and standard error of the script.

Parameters:

Name Type Description Default script str

The script to run

required directory Optional[str]

The directory to run the script in. Defaults to the current working directory.

None env Optional[Dict[str, str]]

A dictionary of environment variables to set for the script

None stream_output bool

Whether to stream the output of the script to stdout/stderr

True

Returns:

Type Description RunShellScriptResult

A dictionary with the keys stdout and stderr containing the output of the script

Examples:

Retrieve the short Git commit hash of the current repository to use as a Docker image tag:

build:\n- prefect.deployments.steps.run_shell_script:\nid: get-commit-hash\nscript: git rev-parse --short HEAD\nstream_output: false\n- prefect_docker.deployments.steps.build_docker_image:\nrequires: prefect-docker\nimage_name: my-image\nimage_tag: \"{{ get-commit-hash.stdout }}\"\ndockerfile: auto\n

Run a multi-line shell script:

build:\n- prefect.deployments.steps.run_shell_script:\nscript: |\necho \"Hello\"\necho \"World\"\n

Run a shell script with environment variables:

build:\n- prefect.deployments.steps.run_shell_script:\nscript: echo \"Hello $NAME\"\nenv:\nNAME: World\n

Run a shell script in a specific directory:

build:\n- prefect.deployments.steps.run_shell_script:\nscript: echo \"Hello\"\ndirectory: /path/to/directory\n

Run a script stored in a file:

build:\n- prefect.deployments.steps.run_shell_script:\nscript: \"bash path/to/script.sh\"\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/deployments/steps/utility.py
async def run_shell_script(\n    script: str,\n    directory: Optional[str] = None,\n    env: Optional[Dict[str, str]] = None,\n    stream_output: bool = True,\n) -> RunShellScriptResult:\n\"\"\"\n    Runs one or more shell commands in a subprocess. Returns the standard\n    output and standard error of the script.\n\n    Args:\n        script: The script to run\n        directory: The directory to run the script in. Defaults to the current\n            working directory.\n        env: A dictionary of environment variables to set for the script\n        stream_output: Whether to stream the output of the script to\n            stdout/stderr\n\n    Returns:\n        A dictionary with the keys `stdout` and `stderr` containing the output\n            of the script\n\n    Examples:\n        Retrieve the short Git commit hash of the current repository to use as\n            a Docker image tag:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                id: get-commit-hash\n                script: git rev-parse --short HEAD\n                stream_output: false\n            - prefect_docker.deployments.steps.build_docker_image:\n                requires: prefect-docker\n                image_name: my-image\n                image_tag: \"{{ get-commit-hash.stdout }}\"\n                dockerfile: auto\n        ```\n\n        Run a multi-line shell script:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                script: |\n                    echo \"Hello\"\n                    echo \"World\"\n        ```\n\n        Run a shell script with environment variables:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                script: echo \"Hello $NAME\"\n                env:\n                    NAME: World\n        ```\n\n        Run a shell script in a specific directory:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                script: echo \"Hello\"\n                directory: /path/to/directory\n        ```\n\n        Run a script stored in a file:\n        ```yaml\n        build:\n            - prefect.deployments.steps.run_shell_script:\n                script: \"bash path/to/script.sh\"\n        ```\n    \"\"\"\n    current_env = os.environ.copy()\n    current_env.update(env or {})\n\n    commands = script.splitlines()\n    stdout_sink = io.StringIO()\n    stderr_sink = io.StringIO()\n\n    for command in commands:\n        split_command = shlex.split(command)\n        if not split_command:\n            continue\n        async with open_process(\n            split_command,\n            stdout=subprocess.PIPE,\n            stderr=subprocess.PIPE,\n            cwd=directory,\n            env=current_env,\n        ) as process:\n            await _stream_capture_process_output(\n                process,\n                stdout_sink=stdout_sink,\n                stderr_sink=stderr_sink,\n                stream_output=stream_output,\n            )\n\n            await process.wait()\n\n    return {\n        \"stdout\": stdout_sink.getvalue().strip(),\n        \"stderr\": stderr_sink.getvalue().strip(),\n    }\n
","tags":["Python API","projects","deployments","steps","shell","pip install","requirements.txt"]},{"location":"api-ref/prefect/runtime/deployment/","title":"prefect.runtime.deployment","text":"","tags":["Python API","deployment context","context"]},{"location":"api-ref/prefect/runtime/deployment/#prefect.runtime.deployment","title":"prefect.runtime.deployment","text":"

Access attributes of the current deployment run dynamically.

Note that if a deployment is not currently being run, all attributes will return empty values.

You can mock the runtime attributes for testing purposes by setting environment variables prefixed with PREFECT__RUNTIME__DEPLOYMENT.

Example usage
from prefect.runtime import deployment\n\ndef get_task_runner():\n    task_runner_config = deployment.parameters.get(\"runner_config\", \"default config here\")\n    return DummyTaskRunner(task_runner_specs=task_runner_config)\n
Available attributes ","tags":["Python API","deployment context","context"]},{"location":"api-ref/prefect/runtime/flow_run/","title":"prefect.runtime.flow_run","text":"","tags":["Python API","flow run context","context"]},{"location":"api-ref/prefect/runtime/flow_run/#prefect.runtime.flow_run","title":"prefect.runtime.flow_run","text":"

Access attributes of the current flow run dynamically.

Note that if a flow run cannot be discovered, all attributes will return empty values.

You can mock the runtime attributes for testing purposes by setting environment variables prefixed with PREFECT__RUNTIME__FLOW_RUN.
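
As a minimal sketch, reading a couple of these attributes from inside a flow (the attribute names id and name are assumed here for illustration; see the available attributes listed below for the full set):

from prefect import flow
from prefect.runtime import flow_run

@flow
def report_runtime():
    # Each attribute returns an empty value when no flow run is active
    print(f"flow run id: {flow_run.id}")
    print(f"flow run name: {flow_run.name}")

if __name__ == "__main__":
    report_runtime()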

Available attributes ","tags":["Python API","flow run context","context"]},{"location":"api-ref/prefect/runtime/task_run/","title":"prefect.runtime.task_run","text":"","tags":["Python API","task run context","context","task run"]},{"location":"api-ref/prefect/runtime/task_run/#prefect.runtime.task_run","title":"prefect.runtime.task_run","text":"

Access attributes of the current task run dynamically.

Note that if a task run cannot be discovered, all attributes will return empty values.

You can mock the runtime attributes for testing purposes by setting environment variables prefixed with PREFECT__RUNTIME__TASK_RUN.
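
A hedged sketch of the environment-variable override, assuming the attribute name name and a variable name of PREFECT__RUNTIME__TASK_RUN__NAME derived from the prefix above:

import os

from prefect.runtime import task_run

# Set the mock value before the attribute is read, e.g. in a test fixture
os.environ["PREFECT__RUNTIME__TASK_RUN__NAME"] = "mocked-task-run"

assert task_run.name == "mocked-task-run"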

Available attributes ","tags":["Python API","task run context","context","task run"]},{"location":"api-ref/prefect/utilities/annotations/","title":"prefect.utilities.annotations","text":"","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations","title":"prefect.utilities.annotations","text":"","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.BaseAnnotation","title":"BaseAnnotation","text":"

Bases: namedtuple('BaseAnnotation', field_names='value'), ABC, Generic[T]

Base class for Prefect annotation types.

Inherits from namedtuple for unpacking support in other tools.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/annotations.py
class BaseAnnotation(\n    namedtuple(\"BaseAnnotation\", field_names=\"value\"), ABC, Generic[T]\n):\n\"\"\"\n    Base class for Prefect annotation types.\n\n    Inherits from `namedtuple` for unpacking support in another tools.\n    \"\"\"\n\n    def unwrap(self) -> T:\n        if sys.version_info < (3, 8):\n            # cannot simply return self.value due to recursion error in Python 3.7\n            # also _asdict does not follow convention; it's not an internal method\n            # https://stackoverflow.com/a/26180604\n            return self._asdict()[\"value\"]\n        else:\n            return self.value\n\n    def rewrap(self, value: T) -> \"BaseAnnotation[T]\":\n        return type(self)(value)\n\n    def __eq__(self, other: object) -> bool:\n        if not type(self) == type(other):\n            return False\n        return self.unwrap() == other.unwrap()\n\n    def __repr__(self) -> str:\n        return f\"{type(self).__name__}({self.value!r})\"\n
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.NotSet","title":"NotSet","text":"

Singleton to distinguish None from a value that is not provided by the user.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/annotations.py
class NotSet:\n\"\"\"\n    Singleton to distinguish `None` from a value that is not provided by the user.\n    \"\"\"\n
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.allow_failure","title":"allow_failure","text":"

Bases: BaseAnnotation[T]

Wrapper for states or futures.

Indicates that the upstream run for this input can be failed.

Generally, Prefect will not allow a downstream run to start if any of its inputs are failed. This annotation allows you to opt into receiving a failed input downstream.

If the input is from a failed run, the attached exception will be passed to your function.
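
A short sketch of opting into a failed upstream result (the task and flow names are illustrative):

from prefect import flow, task
from prefect.utilities.annotations import allow_failure

@task
def extract():
    raise ValueError("boom")

@task
def cleanup(upstream_result):
    # Receives the ValueError raised by `extract` instead of never starting
    print(f"upstream gave us: {upstream_result!r}")

@flow
def pipeline():
    future = extract.submit()
    # Wrapping the future lets `cleanup` run even though `extract` failed
    cleanup.submit(allow_failure(future))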

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/annotations.py
class allow_failure(BaseAnnotation[T]):\n\"\"\"\n    Wrapper for states or futures.\n\n    Indicates that the upstream run for this input can be failed.\n\n    Generally, Prefect will not allow a downstream run to start if any of its inputs\n    are failed. This annotation allows you to opt into receiving a failed input\n    downstream.\n\n    If the input is from a failed run, the attached exception will be passed to your\n    function.\n    \"\"\"\n
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.quote","title":"quote","text":"

Bases: BaseAnnotation[T]

Simple wrapper to mark an expression as a different type so it will not be coerced by Prefect. For example, if you want to return a state from a flow without having the flow assume that state.
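
A brief sketch of returning a state object from a flow without the flow adopting it (assuming the Completed state constructor from prefect.states):

from prefect import flow
from prefect.states import Completed
from prefect.utilities.annotations import quote

@flow
def my_flow():
    state = Completed(message="wrapped, not coerced")
    # Without `quote`, Prefect would treat the returned state as the flow's own state
    return quote(state)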

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/annotations.py
class quote(BaseAnnotation[T]):\n\"\"\"\n    Simple wrapper to mark an expression as a different type so it will not be coerced\n    by Prefect. For example, if you want to return a state from a flow without having\n    the flow assume that state.\n    \"\"\"\n\n    def unquote(self) -> T:\n        return self.unwrap()\n
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/annotations/#prefect.utilities.annotations.unmapped","title":"unmapped","text":"

Bases: BaseAnnotation[T]

Wrapper for iterables.

Indicates that this input should be sent as-is to all runs created during a mapping operation instead of being split.
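
A small sketch of passing a constant alongside a mapped iterable (the task and flow names are illustrative):

from prefect import flow, task
from prefect.utilities.annotations import unmapped

@task
def add(x, y):
    return x + y

@flow
def add_flow():
    # `10` is sent as-is to every mapped run instead of being split
    return add.map([1, 2, 3], unmapped(10))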

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/annotations.py
class unmapped(BaseAnnotation[T]):\n\"\"\"\n    Wrapper for iterables.\n\n    Indicates that this input should be sent as-is to all runs created during a mapping\n    operation instead of being split.\n    \"\"\"\n\n    def __getitem__(self, _) -> T:\n        # Internally, this acts as an infinite array where all items are the same value\n        return self.unwrap()\n
","tags":["Python API","annotations"]},{"location":"api-ref/prefect/utilities/asyncutils/","title":"prefect.utilities.asyncutils","text":"","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils","title":"prefect.utilities.asyncutils","text":"

Utilities for interoperability with async functions and workers from various contexts.

","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.GatherIncomplete","title":"GatherIncomplete","text":"

Bases: RuntimeError

Used to indicate retrieving gather results before completion

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
class GatherIncomplete(RuntimeError):\n\"\"\"Used to indicate retrieving gather results before completion\"\"\"\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.GatherTaskGroup","title":"GatherTaskGroup","text":"

Bases: anyio.abc.TaskGroup

A task group that gathers results.

AnyIO does not include support for gather. This class extends the TaskGroup interface to allow simple gathering.

See https://github.com/agronholm/anyio/issues/100

This class should be instantiated with create_gather_task_group.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
class GatherTaskGroup(anyio.abc.TaskGroup):\n\"\"\"\n    A task group that gathers results.\n\n    AnyIO does not include support `gather`. This class extends the `TaskGroup`\n    interface to allow simple gathering.\n\n    See https://github.com/agronholm/anyio/issues/100\n\n    This class should be instantiated with `create_gather_task_group`.\n    \"\"\"\n\n    def __init__(self, task_group: anyio.abc.TaskGroup):\n        self._results: Dict[UUID, Any] = {}\n        # The concrete task group implementation to use\n        self._task_group: anyio.abc.TaskGroup = task_group\n\n    async def _run_and_store(self, key, fn, args):\n        self._results[key] = await fn(*args)\n\n    def start_soon(self, fn, *args) -> UUID:\n        key = uuid4()\n        # Put a placeholder in-case the result is retrieved earlier\n        self._results[key] = GatherIncomplete\n        self._task_group.start_soon(self._run_and_store, key, fn, args)\n        return key\n\n    async def start(self, fn, *args):\n\"\"\"\n        Since `start` returns the result of `task_status.started()` but here we must\n        return the key instead, we just won't support this method for now.\n        \"\"\"\n        raise RuntimeError(\"`GatherTaskGroup` does not support `start`.\")\n\n    def get_result(self, key: UUID) -> Any:\n        result = self._results[key]\n        if result is GatherIncomplete:\n            raise GatherIncomplete(\n                \"Task is not complete. \"\n                \"Results should not be retrieved until the task group exits.\"\n            )\n        return result\n\n    async def __aenter__(self):\n        await self._task_group.__aenter__()\n        return self\n\n    async def __aexit__(self, *tb):\n        try:\n            retval = await self._task_group.__aexit__(*tb)\n            return retval\n        finally:\n            del self._task_group\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.GatherTaskGroup.start","title":"start async","text":"

Since start returns the result of task_status.started() but here we must return the key instead, we just won't support this method for now.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
async def start(self, fn, *args):\n\"\"\"\n    Since `start` returns the result of `task_status.started()` but here we must\n    return the key instead, we just won't support this method for now.\n    \"\"\"\n    raise RuntimeError(\"`GatherTaskGroup` does not support `start`.\")\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.add_event_loop_shutdown_callback","title":"add_event_loop_shutdown_callback async","text":"

Adds a callback to the given callable on event loop closure. The callable must be a coroutine function. It will be awaited when the current event loop is shutting down.

Requires use of asyncio.run(), which waits for async generator shutdown by default, or an explicit call to asyncio.shutdown_asyncgens(). If the application enters the loop with run_until_complete() and closes it without the generator shutdown call, these callbacks will not be triggered.

asyncio does not provide any other way to clean up a resource when the event loop is about to close.
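
A rough sketch of registering a shutdown callback under the asyncio.run() requirement described above:

import asyncio

from prefect.utilities.asyncutils import add_event_loop_shutdown_callback

async def on_shutdown():
    print("event loop is closing")

async def main():
    await add_event_loop_shutdown_callback(on_shutdown)
    # ... application work ...

# `asyncio.run` shuts down async generators on exit, so `on_shutdown` is awaited
asyncio.run(main())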

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
async def add_event_loop_shutdown_callback(coroutine_fn: Callable[[], Awaitable]):\n\"\"\"\n    Adds a callback to the given callable on event loop closure. The callable must be\n    a coroutine function. It will be awaited when the current event loop is shutting\n    down.\n\n    Requires use of `asyncio.run()` which waits for async generator shutdown by\n    default or explicit call of `asyncio.shutdown_asyncgens()`. If the application\n    is entered with `asyncio.run_until_complete()` and the user calls\n    `asyncio.close()` without the generator shutdown call, this will not trigger\n    callbacks.\n\n    asyncio does not provided _any_ other way to clean up a resource when the event\n    loop is about to close.\n    \"\"\"\n\n    async def on_shutdown(key):\n        # It appears that EVENT_LOOP_GC_REFS is somehow being garbage collected early.\n        # We hold a reference to it so as to preserve it, at least for the lifetime of\n        # this coroutine. See the issue below for the initial report/discussion:\n        # https://github.com/PrefectHQ/prefect/issues/7709#issuecomment-1560021109\n        _ = EVENT_LOOP_GC_REFS\n        try:\n            yield\n        except GeneratorExit:\n            await coroutine_fn()\n            # Remove self from the garbage collection set\n            EVENT_LOOP_GC_REFS.pop(key)\n\n    # Create the iterator and store it in a global variable so it is not garbage\n    # collected. If the iterator is garbage collected before the event loop closes, the\n    # callback will not run. Since this function does not know the scope of the event\n    # loop that is calling it, a reference with global scope is necessary to ensure\n    # garbage collection does not occur until after event loop closure.\n    key = id(on_shutdown)\n    EVENT_LOOP_GC_REFS[key] = on_shutdown(key)\n\n    # Begin iterating so it will be cleaned up as an incomplete generator\n    await EVENT_LOOP_GC_REFS[key].__anext__()\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.create_gather_task_group","title":"create_gather_task_group","text":"

Create a new task group that gathers results

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
def create_gather_task_group() -> GatherTaskGroup:\n\"\"\"Create a new task group that gathers results\"\"\"\n    # This function matches the AnyIO API which uses callables since the concrete\n    # task group class depends on the async library being used and cannot be\n    # determined until runtime\n    return GatherTaskGroup(anyio.create_task_group())\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.gather","title":"gather async","text":"

Run calls concurrently and gather their results.

Unlike asyncio.gather, this expects to receive callables, not coroutines. This matches anyio semantics.
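
For instance, a sketch of gathering two calls (the fetch coroutine is illustrative):

from functools import partial

import anyio

from prefect.utilities.asyncutils import gather

async def fetch(n: int) -> int:
    await anyio.sleep(0.1)
    return n * 2

async def main():
    # Pass callables (not coroutines); results come back in call order
    results = await gather(partial(fetch, 1), partial(fetch, 2))
    print(results)  # [2, 4]

anyio.run(main)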

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
async def gather(*calls: Callable[[], Coroutine[Any, Any, T]]) -> List[T]:\n\"\"\"\n    Run calls concurrently and gather their results.\n\n    Unlike `asyncio.gather` this expects to receieve _callables_ not _coroutines_.\n    This matches `anyio` semantics.\n    \"\"\"\n    keys = []\n    async with create_gather_task_group() as tg:\n        for call in calls:\n            keys.append(tg.start_soon(call))\n    return [tg.get_result(key) for key in keys]\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.is_async_fn","title":"is_async_fn","text":"

Returns True if a function returns a coroutine.

See https://github.com/microsoft/pyright/issues/2142 for an example use

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
def is_async_fn(\n    func: Union[Callable[P, R], Callable[P, Awaitable[R]]]\n) -> TypeGuard[Callable[P, Awaitable[R]]]:\n\"\"\"\n    Returns `True` if a function returns a coroutine.\n\n    See https://github.com/microsoft/pyright/issues/2142 for an example use\n    \"\"\"\n    while hasattr(func, \"__wrapped__\"):\n        func = func.__wrapped__\n\n    return inspect.iscoroutinefunction(func)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.is_async_gen_fn","title":"is_async_gen_fn","text":"

Returns True if a function is an async generator.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
def is_async_gen_fn(func):\n\"\"\"\n    Returns `True` if a function is an async generator.\n    \"\"\"\n    while hasattr(func, \"__wrapped__\"):\n        func = func.__wrapped__\n\n    return inspect.isasyncgenfunction(func)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.raise_async_exception_in_thread","title":"raise_async_exception_in_thread","text":"

Raise an exception in a thread asynchronously.

This will not interrupt long-running system calls like sleep or wait.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
def raise_async_exception_in_thread(thread: Thread, exc_type: Type[BaseException]):\n\"\"\"\n    Raise an exception in a thread asynchronously.\n\n    This will not interrupt long-running system calls like `sleep` or `wait`.\n    \"\"\"\n    ret = ctypes.pythonapi.PyThreadState_SetAsyncExc(\n        ctypes.c_long(thread.ident), ctypes.py_object(exc_type)\n    )\n    if ret == 0:\n        raise ValueError(\"Thread not found.\")\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_async_from_worker_thread","title":"run_async_from_worker_thread","text":"

Runs an async function in the main thread's event loop, blocking the worker thread until completion

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
def run_async_from_worker_thread(\n    __fn: Callable[..., Awaitable[T]], *args: Any, **kwargs: Any\n) -> T:\n\"\"\"\n    Runs an async function in the main thread's event loop, blocking the worker\n    thread until completion\n    \"\"\"\n    call = partial(__fn, *args, **kwargs)\n    return anyio.from_thread.run(call)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_sync_in_interruptible_worker_thread","title":"run_sync_in_interruptible_worker_thread async","text":"

Runs a sync function in a new interruptible worker thread so that the main thread's event loop is not blocked

Unlike the anyio function, this performs best-effort cancellation of the thread using the C API. Cancellation will not interrupt system calls like sleep.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
async def run_sync_in_interruptible_worker_thread(\n    __fn: Callable[..., T], *args: Any, **kwargs: Any\n) -> T:\n\"\"\"\n    Runs a sync function in a new interruptible worker thread so that the main\n    thread's event loop is not blocked\n\n    Unlike the anyio function, this performs best-effort cancellation of the\n    thread using the C API. Cancellation will not interrupt system calls like\n    `sleep`.\n    \"\"\"\n\n    class NotSet:\n        pass\n\n    thread: Thread = None\n    result = NotSet\n    event = asyncio.Event()\n    loop = asyncio.get_running_loop()\n\n    def capture_worker_thread_and_result():\n        # Captures the worker thread that AnyIO is using to execute the function so\n        # the main thread can perform actions on it\n        nonlocal thread, result\n        try:\n            thread = threading.current_thread()\n            result = __fn(*args, **kwargs)\n        except BaseException as exc:\n            result = exc\n            raise\n        finally:\n            loop.call_soon_threadsafe(event.set)\n\n    async def send_interrupt_to_thread():\n        # This task waits until the result is returned from the thread, if cancellation\n        # occurs during that time, we will raise the exception in the thread as well\n        try:\n            await event.wait()\n        except anyio.get_cancelled_exc_class():\n            # NOTE: We could send a SIGINT here which allow us to interrupt system\n            # calls but the interrupt bubbles from the child thread into the main thread\n            # and there is not a clear way to prevent it.\n            raise_async_exception_in_thread(thread, anyio.get_cancelled_exc_class())\n            raise\n\n    async with anyio.create_task_group() as tg:\n        tg.start_soon(send_interrupt_to_thread)\n        tg.start_soon(\n            partial(\n                anyio.to_thread.run_sync,\n                capture_worker_thread_and_result,\n                cancellable=True,\n                limiter=get_thread_limiter(),\n            )\n        )\n\n    assert result is not NotSet\n    return result\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.run_sync_in_worker_thread","title":"run_sync_in_worker_thread async","text":"

Runs a sync function in a new worker thread so that the main thread's event loop is not blocked

Unlike the anyio function, this defaults to a cancellable thread and does not allow passing arguments to the anyio function so users can pass kwargs to their function.

Note that cancellation of threads will not result in interrupted computation; the thread may continue running and the outcome will simply be ignored.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
async def run_sync_in_worker_thread(\n    __fn: Callable[..., T], *args: Any, **kwargs: Any\n) -> T:\n\"\"\"\n    Runs a sync function in a new worker thread so that the main thread's event loop\n    is not blocked\n\n    Unlike the anyio function, this defaults to a cancellable thread and does not allow\n    passing arguments to the anyio function so users can pass kwargs to their function.\n\n    Note that cancellation of threads will not result in interrupted computation, the\n    thread may continue running \u2014 the outcome will just be ignored.\n    \"\"\"\n    call = partial(__fn, *args, **kwargs)\n    return await anyio.to_thread.run_sync(\n        call, cancellable=True, limiter=get_thread_limiter()\n    )\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.sync","title":"sync","text":"

Call an async function from a synchronous context. Block until completion.

If in an asynchronous context, we will run the code in a separate loop instead of failing, but a warning will be displayed since this is not recommended.
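
A minimal sketch of calling an async helper from synchronous code:

import asyncio

from prefect.utilities.asyncutils import sync

async def fetch_value(x: int) -> int:
    await asyncio.sleep(0.1)
    return x + 1

# Blocks until the coroutine completes; positional and keyword args are forwarded
value = sync(fetch_value, 41)
print(value)  # 42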

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
def sync(__async_fn: Callable[P, Awaitable[T]], *args: P.args, **kwargs: P.kwargs) -> T:\n\"\"\"\n    Call an async function from a synchronous context. Block until completion.\n\n    If in an asynchronous context, we will run the code in a separate loop instead of\n    failing but a warning will be displayed since this is not recommended.\n    \"\"\"\n    if in_async_main_thread():\n        warnings.warn(\n            \"`sync` called from an asynchronous context; \"\n            \"you should `await` the async function directly instead.\"\n        )\n        with anyio.start_blocking_portal() as portal:\n            return portal.call(partial(__async_fn, *args, **kwargs))\n    elif in_async_worker_thread():\n        # In a sync context but we can access the event loop thread; send the async\n        # call to the parent\n        return run_async_from_worker_thread(__async_fn, *args, **kwargs)\n    else:\n        # In a sync context and there is no event loop; just create an event loop\n        # to run the async code then tear it down\n        return run_async_in_new_loop(__async_fn, *args, **kwargs)\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/asyncutils/#prefect.utilities.asyncutils.sync_compatible","title":"sync_compatible","text":"

Converts an async function into a dual async and sync function.

When the returned function is called, we will attempt to determine the best way to enter the async function.
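
A short sketch of decorating a function so it can be called from both contexts:

import asyncio

from prefect.utilities.asyncutils import sync_compatible

@sync_compatible
async def get_answer() -> int:
    return 42

# From a sync context the call blocks and returns the value directly
print(get_answer())

async def main():
    # From an async context the call returns a coroutine to await
    print(await get_answer())

asyncio.run(main())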

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/asyncutils.py
def sync_compatible(async_fn: T) -> T:\n\"\"\"\n    Converts an async function into a dual async and sync function.\n\n    When the returned function is called, we will attempt to determine the best way\n    to enter the async function.\n\n    - If in a thread with a running event loop, we will return the coroutine for the\n        caller to await. This is normal async behavior.\n    - If in a blocking worker thread with access to an event loop in another thread, we\n        will submit the async method to the event loop.\n    - If we cannot find an event loop, we will create a new one and run the async method\n        then tear down the loop.\n    \"\"\"\n\n    @wraps(async_fn)\n    def coroutine_wrapper(*args, **kwargs):\n        from prefect._internal.concurrency.api import create_call, from_sync\n        from prefect._internal.concurrency.calls import get_current_call, logger\n        from prefect._internal.concurrency.event_loop import get_running_loop\n        from prefect._internal.concurrency.threads import get_global_loop\n\n        global_thread_portal = get_global_loop()\n        current_thread = threading.current_thread()\n        current_call = get_current_call()\n        current_loop = get_running_loop()\n\n        if current_thread.ident == global_thread_portal.thread.ident:\n            logger.debug(f\"{async_fn} --> return coroutine for internal await\")\n            # In the prefect async context; return the coro for us to await\n            return async_fn(*args, **kwargs)\n        elif in_async_main_thread() and (\n            not current_call or is_async_fn(current_call.fn)\n        ):\n            # In the main async context; return the coro for them to await\n            logger.debug(f\"{async_fn} --> return coroutine for user await\")\n            return async_fn(*args, **kwargs)\n        elif in_async_worker_thread():\n            # In a sync context but we can access the event loop thread; send the async\n            # call to the parent\n            return run_async_from_worker_thread(async_fn, *args, **kwargs)\n        elif current_loop is not None:\n            logger.debug(f\"{async_fn} --> run async in global loop portal\")\n            # An event loop is already present but we are in a sync context, run the\n            # call in Prefect's event loop thread\n            return from_sync.call_soon_in_loop_thread(\n                create_call(async_fn, *args, **kwargs)\n            ).result()\n        else:\n            logger.debug(f\"{async_fn} --> run async in new loop\")\n            # Run in a new event loop, but use a `Call` for nested context detection\n            call = create_call(async_fn, *args, **kwargs)\n            return call()\n\n    # TODO: This is breaking type hints on the callable... mypy is behind the curve\n    #       on argument annotations. We can still fix this for editors though.\n    if is_async_fn(async_fn):\n        wrapper = coroutine_wrapper\n    elif is_async_gen_fn(async_fn):\n        raise ValueError(\"Async generators cannot yet be marked as `sync_compatible`\")\n    else:\n        raise TypeError(\"The decorated function must be async.\")\n\n    wrapper.aio = async_fn\n    return wrapper\n
","tags":["Python API","async"]},{"location":"api-ref/prefect/utilities/callables/","title":"prefect.utilities.callables","text":"","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables","title":"prefect.utilities.callables","text":"

Utilities for working with Python callables.

","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.ParameterSchema","title":"ParameterSchema","text":"

Bases: pydantic.BaseModel

Simple data model corresponding to an OpenAPI Schema.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
class ParameterSchema(pydantic.BaseModel):\n\"\"\"Simple data model corresponding to an OpenAPI `Schema`.\"\"\"\n\n    title: Literal[\"Parameters\"] = \"Parameters\"\n    type: Literal[\"object\"] = \"object\"\n    properties: Dict[str, Any] = pydantic.Field(default_factory=dict)\n    required: List[str] = None\n    definitions: Dict[str, Any] = None\n\n    def dict(self, *args, **kwargs):\n\"\"\"Exclude `None` fields by default to comply with\n        the OpenAPI spec.\n        \"\"\"\n        kwargs.setdefault(\"exclude_none\", True)\n        return super().dict(*args, **kwargs)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.ParameterSchema.dict","title":"dict","text":"

Exclude None fields by default to comply with the OpenAPI spec.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def dict(self, *args, **kwargs):\n\"\"\"Exclude `None` fields by default to comply with\n    the OpenAPI spec.\n    \"\"\"\n    kwargs.setdefault(\"exclude_none\", True)\n    return super().dict(*args, **kwargs)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.call_with_parameters","title":"call_with_parameters","text":"

Call a function with parameters extracted with get_call_parameters

The function must have an identical signature to the original function or this will fail. If you need to pass parameters to a function with a different signature, extract the args/kwargs using parameters_to_positional_and_keyword directly.
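
A small sketch of the round trip with get_call_parameters from this module (the greet function is illustrative):

from prefect.utilities.callables import call_with_parameters, get_call_parameters

def greet(name: str, punctuation: str = "!"):
    return f"Hello, {name}{punctuation}"

# Bind call arguments to a {parameter: value} mapping, then call with it
parameters = get_call_parameters(greet, ("Marvin",), {})
print(parameters)  # {'name': 'Marvin', 'punctuation': '!'}
print(call_with_parameters(greet, parameters))  # Hello, Marvin!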

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def call_with_parameters(fn: Callable, parameters: Dict[str, Any]):\n\"\"\"\n    Call a function with parameters extracted with `get_call_parameters`\n\n    The function _must_ have an identical signature to the original function or this\n    will fail. If you need to send to a function with a different signature, extract\n    the args/kwargs using `parameters_to_positional_and_keyword` directly\n    \"\"\"\n    args, kwargs = parameters_to_args_kwargs(fn, parameters)\n    return fn(*args, **kwargs)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.cloudpickle_wrapped_call","title":"cloudpickle_wrapped_call","text":"

Serializes a function call using cloudpickle then returns a callable which will execute that call and return a cloudpickle serialized return value

This is particularly useful for sending calls to libraries that only use the Python built-in pickler (e.g. anyio.to_process and multiprocessing) but may require a wider range of pickling support.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def cloudpickle_wrapped_call(\n    __fn: Callable, *args: Any, **kwargs: Any\n) -> Callable[[], bytes]:\n\"\"\"\n    Serializes a function call using cloudpickle then returns a callable which will\n    execute that call and return a cloudpickle serialized return value\n\n    This is particularly useful for sending calls to libraries that only use the Python\n    built-in pickler (e.g. `anyio.to_process` and `multiprocessing`) but may require\n    a wider range of pickling support.\n    \"\"\"\n    payload = cloudpickle.dumps((__fn, args, kwargs))\n    return partial(_run_serialized_call, payload)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.collapse_variadic_parameters","title":"collapse_variadic_parameters","text":"

Given a parameter dictionary, move any parameters not present in the signature into the variadic keyword argument.

Example
def foo(a, b, **kwargs):\n    pass\n\nparameters = {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\ncollapse_variadic_parameters(foo, parameters)\n# {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def collapse_variadic_parameters(\n    fn: Callable, parameters: Dict[str, Any]\n) -> Dict[str, Any]:\n\"\"\"\n    Given a parameter dictionary, move any parameters stored not present in the\n    signature into the variadic keyword argument.\n\n    Example:\n\n        ```python\n        def foo(a, b, **kwargs):\n            pass\n\n        parameters = {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\n        collapse_variadic_parameters(foo, parameters)\n        # {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\n        ```\n    \"\"\"\n    signature_parameters = inspect.signature(fn).parameters\n    variadic_key = None\n    for key, parameter in signature_parameters.items():\n        if parameter.kind == parameter.VAR_KEYWORD:\n            variadic_key = key\n            break\n\n    missing_parameters = set(parameters.keys()) - set(signature_parameters.keys())\n\n    if not variadic_key and missing_parameters:\n        raise ValueError(\n            f\"Signature for {fn} does not include any variadic keyword argument \"\n            \"but parameters were given that are not present in the signature.\"\n        )\n\n    if variadic_key and not missing_parameters:\n        # variadic key is present but no missing parameters, return parameters unchanged\n        return parameters\n\n    new_parameters = parameters.copy()\n    if variadic_key:\n        new_parameters[variadic_key] = {}\n\n    for key in missing_parameters:\n        new_parameters[variadic_key][key] = new_parameters.pop(key)\n\n    return new_parameters\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.explode_variadic_parameter","title":"explode_variadic_parameter","text":"

Given a parameter dictionary, move any parameters stored in a variadic keyword argument parameter (i.e. **kwargs) into the top level.

Example
def foo(a, b, **kwargs):\n    pass\n\nparameters = {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\nexplode_variadic_parameter(foo, parameters)\n# {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def explode_variadic_parameter(\n    fn: Callable, parameters: Dict[str, Any]\n) -> Dict[str, Any]:\n\"\"\"\n    Given a parameter dictionary, move any parameters stored in a variadic keyword\n    argument parameter (i.e. **kwargs) into the top level.\n\n    Example:\n\n        ```python\n        def foo(a, b, **kwargs):\n            pass\n\n        parameters = {\"a\": 1, \"b\": 2, \"kwargs\": {\"c\": 3, \"d\": 4}}\n        explode_variadic_parameter(foo, parameters)\n        # {\"a\": 1, \"b\": 2, \"c\": 3, \"d\": 4}\n        ```\n    \"\"\"\n    variadic_key = None\n    for key, parameter in inspect.signature(fn).parameters.items():\n        if parameter.kind == parameter.VAR_KEYWORD:\n            variadic_key = key\n            break\n\n    if not variadic_key:\n        return parameters\n\n    new_parameters = parameters.copy()\n    for key, value in new_parameters.pop(variadic_key, {}).items():\n        new_parameters[key] = value\n\n    return new_parameters\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.get_call_parameters","title":"get_call_parameters","text":"

Bind a call to a function to get a parameter/value mapping. Default values on the signature will be included if not overridden.

Raises a ParameterBindError if the arguments/kwargs are not valid for the function

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def get_call_parameters(\n    fn: Callable,\n    call_args: Tuple[Any, ...],\n    call_kwargs: Dict[str, Any],\n    apply_defaults: bool = True,\n) -> Dict[str, Any]:\n\"\"\"\n    Bind a call to a function to get parameter/value mapping. Default values on the\n    signature will be included if not overriden.\n\n    Raises a ParameterBindError if the arguments/kwargs are not valid for the function\n    \"\"\"\n    try:\n        bound_signature = inspect.signature(fn).bind(*call_args, **call_kwargs)\n    except TypeError as exc:\n        raise ParameterBindError.from_bind_failure(fn, exc, call_args, call_kwargs)\n\n    if apply_defaults:\n        bound_signature.apply_defaults()\n\n    # We cast from `OrderedDict` to `dict` because Dask will not convert futures in an\n    # ordered dictionary to values during execution; this is the default behavior in\n    # Python 3.9 anyway.\n    return dict(bound_signature.arguments)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.get_parameter_defaults","title":"get_parameter_defaults","text":"

Get default parameter values for a callable.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def get_parameter_defaults(\n    fn: Callable,\n) -> Dict[str, Any]:\n\"\"\"\n    Get default parameter values for a callable.\n    \"\"\"\n    signature = inspect.signature(fn)\n\n    parameter_defaults = {}\n\n    for name, param in signature.parameters.items():\n        if param.default is not signature.empty:\n            parameter_defaults[name] = param.default\n\n    return parameter_defaults\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.parameter_docstrings","title":"parameter_docstrings","text":"

Given a docstring in Google docstring format, parse the parameter section and return a dictionary that maps parameter names to their docstrings.

Parameters:

Name Type Description Default docstring Optional[str]

The function's docstring.

required

Returns:

Type Description Dict[str, str]

Mapping from parameter names to docstrings.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def parameter_docstrings(docstring: Optional[str]) -> Dict[str, str]:\n\"\"\"\n    Given a docstring in Google docstring format, parse the parameter section\n    and return a dictionary that maps parameter names to docstring.\n\n    Args:\n        docstring: The function's docstring.\n\n    Returns:\n        Mapping from parameter names to docstrings.\n    \"\"\"\n    param_docstrings = {}\n\n    if not docstring:\n        return param_docstrings\n\n    with disable_logger(\"griffe.docstrings.google\"), disable_logger(\n        \"griffe.agents.nodes\"\n    ):\n        parsed = parse(Docstring(docstring), Parser.google)\n        for section in parsed:\n            if section.kind != DocstringSectionKind.parameters:\n                continue\n            param_docstrings = {\n                parameter.name: parameter.description for parameter in section.value\n            }\n\n    return param_docstrings\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.parameter_schema","title":"parameter_schema","text":"

Given a function, generates an OpenAPI-compatible description of the function's arguments, including:

- name
- typing information
- whether it is required
- a default value
- additional constraints (like possible enum values)

Parameters:

Name Type Description Default fn function

The function whose arguments will be serialized

required

Returns:

Name Type Description dict ParameterSchema

the argument schema
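
As a sketch, generating the schema for a simple function:

from prefect.utilities.callables import parameter_schema

def my_fn(x: int, name: str = "Marvin"):
    ...

# An OpenAPI-style description of `x` and `name`, including defaults
schema = parameter_schema(my_fn)
print(schema.dict())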

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def parameter_schema(fn: Callable) -> ParameterSchema:\n\"\"\"Given a function, generates an OpenAPI-compatible description\n    of the function's arguments, including:\n        - name\n        - typing information\n        - whether it is required\n        - a default value\n        - additional constraints (like possible enum values)\n\n    Args:\n        fn (function): The function whose arguments will be serialized\n\n    Returns:\n        dict: the argument schema\n    \"\"\"\n    signature = inspect.signature(fn)\n    model_fields = {}\n    aliases = {}\n    docstrings = parameter_docstrings(inspect.getdoc(fn))\n\n    class ModelConfig:\n        arbitrary_types_allowed = True\n\n    for position, param in enumerate(signature.parameters.values()):\n        # Pydantic model creation will fail if names collide with the BaseModel type\n        if hasattr(pydantic.BaseModel, param.name):\n            name = param.name + \"__\"\n            aliases[name] = param.name\n        else:\n            name = param.name\n\n        type_, field = (\n            Any if param.annotation is inspect._empty else param.annotation,\n            pydantic.Field(\n                default=... if param.default is param.empty else param.default,\n                title=param.name,\n                description=docstrings.get(param.name, None),\n                alias=aliases.get(name),\n                position=position,\n            ),\n        )\n\n        # Generate a Pydantic model at each step so we can check if this parameter\n        # type is supported schema generation\n        try:\n            pydantic.create_model(\n                \"CheckParameter\", __config__=ModelConfig, **{name: (type_, field)}\n            ).schema(by_alias=True)\n        except ValueError:\n            # This field's type is not valid for schema creation, update it to `Any`\n            type_ = Any\n\n        model_fields[name] = (type_, field)\n\n    # Generate the final model and schema\n    model = pydantic.create_model(\"Parameters\", __config__=ModelConfig, **model_fields)\n    schema = model.schema(by_alias=True)\n    return ParameterSchema(**schema)\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.parameters_to_args_kwargs","title":"parameters_to_args_kwargs","text":"

Convert a parameters dictionary to positional and keyword arguments

The function must have an identical signature to the original function or this will return an empty tuple and dict.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def parameters_to_args_kwargs(\n    fn: Callable,\n    parameters: Dict[str, Any],\n) -> Tuple[Tuple[Any, ...], Dict[str, Any]]:\n\"\"\"\n    Convert a `parameters` dictionary to positional and keyword arguments\n\n    The function _must_ have an identical signature to the original function or this\n    will return an empty tuple and dict.\n    \"\"\"\n    function_params = dict(inspect.signature(fn).parameters).keys()\n    # Check for parameters that are not present in the function signature\n    unknown_params = parameters.keys() - function_params\n    if unknown_params:\n        raise SignatureMismatchError.from_bad_params(\n            list(function_params), list(parameters.keys())\n        )\n    bound_signature = inspect.signature(fn).bind_partial()\n    bound_signature.arguments = parameters\n\n    return bound_signature.args, bound_signature.kwargs\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/callables/#prefect.utilities.callables.raise_for_reserved_arguments","title":"raise_for_reserved_arguments","text":"

Raise a ReservedArgumentError if fn has any parameters that conflict with the names contained in reserved_arguments.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/callables.py
def raise_for_reserved_arguments(fn: Callable, reserved_arguments: Iterable[str]):\n\"\"\"Raise a ReservedArgumentError if `fn` has any parameters that conflict\n    with the names contained in `reserved_arguments`.\"\"\"\n    function_paremeters = inspect.signature(fn).parameters\n\n    for argument in reserved_arguments:\n        if argument in function_paremeters:\n            raise ReservedArgumentError(\n                f\"{argument!r} is a reserved argument name and cannot be used.\"\n            )\n
","tags":["Python API","callables"]},{"location":"api-ref/prefect/utilities/collections/","title":"prefect.utilities.collections","text":"","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections","title":"prefect.utilities.collections","text":"

Utilities for extensions of and operations on Python collections.

","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.AutoEnum","title":"AutoEnum","text":"

Bases: str, Enum

An enum class that automatically generates value from variable names.

This guards against common errors where variable names are updated but values are not.

In addition, because AutoEnums inherit from str, they are automatically JSON-serializable.

See https://docs.python.org/3/library/enum.html#using-automatic-values

Example
class MyEnum(AutoEnum):\n    RED = AutoEnum.auto() # equivalent to RED = 'RED'\n    BLUE = AutoEnum.auto() # equivalent to BLUE = 'BLUE'\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
class AutoEnum(str, Enum):\n\"\"\"\n    An enum class that automatically generates value from variable names.\n\n    This guards against common errors where variable names are updated but values are\n    not.\n\n    In addition, because AutoEnums inherit from `str`, they are automatically\n    JSON-serializable.\n\n    See https://docs.python.org/3/library/enum.html#using-automatic-values\n\n    Example:\n        ```python\n        class MyEnum(AutoEnum):\n            RED = AutoEnum.auto() # equivalent to RED = 'RED'\n            BLUE = AutoEnum.auto() # equivalent to BLUE = 'BLUE'\n        ```\n    \"\"\"\n\n    def _generate_next_value_(name, start, count, last_values):\n        return name\n\n    @staticmethod\n    def auto():\n\"\"\"\n        Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n        \"\"\"\n        return auto()\n\n    def __repr__(self) -> str:\n        return f\"{type(self).__name__}.{self.value}\"\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.AutoEnum.auto","title":"auto staticmethod","text":"

Exposes enum.auto() to avoid requiring a second import to use AutoEnum

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
@staticmethod\ndef auto():\n\"\"\"\n    Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n    \"\"\"\n    return auto()\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.StopVisiting","title":"StopVisiting","text":"

Bases: BaseException

A special exception used to stop recursive visits in visit_collection.

When raised, the expression is returned without modification and recursive visits in that path will end.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
class StopVisiting(BaseException):\n\"\"\"\n    A special exception used to stop recursive visits in `visit_collection`.\n\n    When raised, the expression is returned without modification and recursive visits\n    in that path will end.\n    \"\"\"\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.batched_iterable","title":"batched_iterable","text":"

Yield batches of a certain size from an iterable

Parameters:

Name Type Description Default iterable Iterable

An iterable

required size int

The batch size to return

required

Yields:

Name Type Description tuple T

A batch of the iterable
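
A quick sketch of batching a short range:

from prefect.utilities.collections import batched_iterable

for batch in batched_iterable(range(7), size=3):
    print(batch)
# (0, 1, 2)
# (3, 4, 5)
# (6,)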

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
def batched_iterable(iterable: Iterable[T], size: int) -> Iterator[Tuple[T, ...]]:\n\"\"\"\n    Yield batches of a certain size from an iterable\n\n    Args:\n        iterable (Iterable): An iterable\n        size (int): The batch size to return\n\n    Yields:\n        tuple: A batch of the iterable\n    \"\"\"\n    it = iter(iterable)\n    while True:\n        batch = tuple(itertools.islice(it, size))\n        if not batch:\n            break\n        yield batch\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.dict_to_flatdict","title":"dict_to_flatdict","text":"

Converts a (nested) dictionary to a flattened representation.

Each key of the flat dict will be a CompoundKey tuple containing the \"chain of keys\" for the corresponding value.

Parameters:

Name Type Description Default dct dict

The dictionary to flatten

required _parent Tuple

The current parent for recursion

None

Returns:

Type Description Dict[Tuple[KT, ...], Any]

A flattened dict of the same type as dct
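
A short sketch of flattening a nested dict and restoring it with flatdict_to_dict (also in this module):

from prefect.utilities.collections import dict_to_flatdict, flatdict_to_dict

nested = {"a": {"b": 1, "c": {"d": 2}}, "e": 3}

flat = dict_to_flatdict(nested)
print(flat)
# {('a', 'b'): 1, ('a', 'c', 'd'): 2, ('e',): 3}

# Round trip back to the original nesting
assert flatdict_to_dict(flat) == nested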

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
def dict_to_flatdict(\n    dct: Dict[KT, Union[Any, Dict[KT, Any]]], _parent: Tuple[KT, ...] = None\n) -> Dict[Tuple[KT, ...], Any]:\n\"\"\"Converts a (nested) dictionary to a flattened representation.\n\n    Each key of the flat dict will be a CompoundKey tuple containing the \"chain of keys\"\n    for the corresponding value.\n\n    Args:\n        dct (dict): The dictionary to flatten\n        _parent (Tuple, optional): The current parent for recursion\n\n    Returns:\n        A flattened dict of the same type as dct\n    \"\"\"\n    typ = cast(Type[Dict[Tuple[KT, ...], Any]], type(dct))\n    items: List[Tuple[Tuple[KT, ...], Any]] = []\n    parent = _parent or tuple()\n\n    for k, v in dct.items():\n        k_parent = tuple(parent + (k,))\n        # if v is a non-empty dict, recurse\n        if isinstance(v, dict) and v:\n            items.extend(dict_to_flatdict(v, _parent=k_parent).items())\n        else:\n            items.append((k_parent, v))\n    return typ(items)\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.extract_instances","title":"extract_instances","text":"

Extract objects from an iterable and return a dict of type -> instances

Parameters:

Name Type Description Default objects Iterable

An iterable of objects

required types Union[Type[T], Tuple[Type[T], ...]]

A type or tuple of types to extract, defaults to all objects

object

Returns:

Type Description Union[List[T], Dict[Type[T], T]]

If a single type is given: a list of instances of that type

Union[List[T], Dict[Type[T], T]]

If a tuple of types is given: a mapping of type to a list of instances

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
def extract_instances(\n    objects: Iterable,\n    types: Union[Type[T], Tuple[Type[T], ...]] = object,\n) -> Union[List[T], Dict[Type[T], T]]:\n\"\"\"\n    Extract objects from a file and returns a dict of type -> instances\n\n    Args:\n        objects: An iterable of objects\n        types: A type or tuple of types to extract, defaults to all objects\n\n    Returns:\n        If a single type is given: a list of instances of that type\n        If a tuple of types is given: a mapping of type to a list of instances\n    \"\"\"\n    types = ensure_iterable(types)\n\n    # Create a mapping of type -> instance from the exec values\n    ret = defaultdict(list)\n\n    for o in objects:\n        # We iterate here so that the key is the passed type rather than type(o)\n        for type_ in types:\n            if isinstance(o, type_):\n                ret[type_].append(o)\n\n    if len(types) == 1:\n        return ret[types[0]]\n\n    return ret\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.flatdict_to_dict","title":"flatdict_to_dict","text":"

Converts a flattened dictionary back to a nested dictionary.

Parameters:

Name Type Description Default dct dict

The dictionary to be nested. Each key should be a tuple of keys as generated by dict_to_flatdict

required

Returns: A nested dict of the same type as dct

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
def flatdict_to_dict(\n    dct: Dict[Tuple[KT, ...], VT]\n) -> Dict[KT, Union[VT, Dict[KT, VT]]]:\n\"\"\"Converts a flattened dictionary back to a nested dictionary.\n\n    Args:\n        dct (dict): The dictionary to be nested. Each key should be a tuple of keys\n            as generated by `dict_to_flatdict`\n\n    Returns\n        A nested dict of the same type as dct\n    \"\"\"\n    typ = type(dct)\n    result = cast(Dict[KT, Union[VT, Dict[KT, VT]]], typ())\n    for key_tuple, value in dct.items():\n        current_dict = result\n        for prefix_key in key_tuple[:-1]:\n            # Build nested dictionaries up for the current key tuple\n            # Use `setdefault` in case the nested dict has already been created\n            current_dict = current_dict.setdefault(prefix_key, typ())  # type: ignore\n        # Set the value\n        current_dict[key_tuple[-1]] = value\n\n    return result\n
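A sketch showing that flatdict_to_dict reverses dict_to_flatdict (hypothetical data):

from prefect.utilities.collections import dict_to_flatdict, flatdict_to_dict

nested = {"a": {"b": 1}, "c": 2}
flat = dict_to_flatdict(nested)   # {('a', 'b'): 1, ('c',): 2}
assert flatdict_to_dict(flat) == nested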
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.get_from_dict","title":"get_from_dict","text":"

Fetch a value from a nested dictionary or list using a sequence of keys.

This function lets you fetch a value from a deeply nested structure of dictionaries and lists using either a dot-separated string or a list of keys. If a requested key does not exist, the function returns the provided default value.

Parameters:

Name Type Description Default dct Dict

The nested dictionary or list from which to fetch the value.

required keys Union[str, List[str]]

The sequence of keys to use for access. Can be a dot-separated string or a list of keys. List indices can be included in the sequence as either integer keys or as string indices in square brackets.

required default Any

The default value to return if the requested key path does not exist. Defaults to None.

None

Returns:

Type Description Any

The fetched value if the key exists, or the default value if it does not.

Examples:

>>> get_from_dict({'a': {'b': {'c': [1, 2, 3, 4]}}}, 'a.b.c[1]')
2
>>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, ['a', 'b', 1, 'c', 1])
2
>>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, 'a.b.1.c.2', 'default')
'default'

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
def get_from_dict(dct: Dict, keys: Union[str, List[str]], default: Any = None) -> Any:\n\"\"\"\n    Fetch a value from a nested dictionary or list using a sequence of keys.\n\n    This function allows to fetch a value from a deeply nested structure\n    of dictionaries and lists using either a dot-separated string or a list\n    of keys. If a requested key does not exist, the function returns the\n    provided default value.\n\n    Args:\n        dct: The nested dictionary or list from which to fetch the value.\n        keys: The sequence of keys to use for access. Can be a\n            dot-separated string or a list of keys. List indices can be included\n            in the sequence as either integer keys or as string indices in square\n            brackets.\n        default: The default value to return if the requested key path does not\n            exist. Defaults to None.\n\n    Returns:\n        The fetched value if the key exists, or the default value if it does not.\n\n    Examples:\n    >>> get_from_dict({'a': {'b': {'c': [1, 2, 3, 4]}}}, 'a.b.c[1]')\n    2\n    >>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, ['a', 'b', 1, 'c', 1])\n    2\n    >>> get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, 'a.b.1.c.2', 'default')\n    'default'\n    \"\"\"\n    if isinstance(keys, str):\n        keys = keys.replace(\"[\", \".\").replace(\"]\", \"\").split(\".\")\n    try:\n        for key in keys:\n            try:\n                # Try to cast to int to handle list indices\n                key = int(key)\n            except ValueError:\n                # If it's not an int, use the key as-is\n                # for dict lookup\n                pass\n            dct = dct[key]\n        return dct\n    except (TypeError, KeyError, IndexError):\n        return default\n
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.isiterable","title":"isiterable","text":"

Return a boolean indicating if an object is iterable.

Excludes types that are iterable but typically used as singletons: - str - bytes - IO objects

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
def isiterable(obj: Any) -> bool:\n\"\"\"\n    Return a boolean indicating if an object is iterable.\n\n    Excludes types that are iterable but typically used as singletons:\n    - str\n    - bytes\n    - IO objects\n    \"\"\"\n    try:\n        iter(obj)\n    except TypeError:\n        return False\n    else:\n        return not isinstance(obj, (str, bytes, io.IOBase))\n
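A few illustrative calls (a sketch, not an exhaustive contract):

from prefect.utilities.collections import isiterable

isiterable([1, 2, 3])              # True
isiterable(x for x in range(3))    # True -- generators are iterable
isiterable("a string")             # False -- strings are excluded as singleton-like
isiterable(42)                     # False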
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.remove_nested_keys","title":"remove_nested_keys","text":"

Recurses through a dictionary and returns a copy without any keys that match an entry in keys_to_remove. Returns obj unchanged if it is not a dictionary.

Parameters:

Name Type Description Default keys_to_remove List[Hashable]

A list of keys to remove from obj. The second argument, obj, is the object to remove keys from.

required

Returns:

Type Description

obj without keys matching an entry in keys_to_remove if obj is a dictionary. obj if obj is not a dictionary.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
def remove_nested_keys(keys_to_remove: List[Hashable], obj):\n\"\"\"\n    Recurses a dictionary returns a copy without all keys that match an entry in\n    `key_to_remove`. Return `obj` unchanged if not a dictionary.\n\n    Args:\n        keys_to_remove: A list of keys to remove from obj obj: The object to remove keys\n            from.\n\n    Returns:\n        `obj` without keys matching an entry in `keys_to_remove` if `obj` is a\n            dictionary. `obj` if `obj` is not a dictionary.\n    \"\"\"\n    if not isinstance(obj, dict):\n        return obj\n    return {\n        key: remove_nested_keys(keys_to_remove, value)\n        for key, value in obj.items()\n        if key not in keys_to_remove\n    }\n
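A minimal sketch with hypothetical data, showing that matching keys are removed at every level of nesting:

from prefect.utilities.collections import remove_nested_keys

config = {
    "name": "demo",
    "secrets": {"token": "abc"},
    "nested": {"secrets": "xyz", "keep": 2},
}
remove_nested_keys(["secrets"], config)
# {'name': 'demo', 'nested': {'keep': 2}}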
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/collections/#prefect.utilities.collections.visit_collection","title":"visit_collection","text":"

This function visits every element of an arbitrary Python collection. If an element is a Python collection, it will be visited recursively. If an element is not a collection, visit_fn will be called with the element. The return value of visit_fn can be used to alter the element if return_data is set.

Note that when using return_data a copy of each collection is created to avoid mutating the original object. This may carry a significant performance penalty and should only be used if you intend to transform the collection.

Supported types: - List - Tuple - Set - Dict (note: keys are also visited recursively) - Dataclass - Pydantic model - Prefect annotations

Parameters:

Name Type Description Default expr Any

a Python object or expression

required visit_fn Callable[[Any], Awaitable[Any]]

a function that will be applied to every non-collection element of expr.

required return_data bool

if True, a copy of expr containing data modified by visit_fn will be returned. This is slower than return_data=False (the default).

False max_depth int

Controls the depth of recursive visitation. If set to zero, no recursion will occur. If set to a positive integer N, visitation will only descend to N layers deep. If set to any negative integer, no limit will be enforced and recursion will continue until terminal items are reached. By default, recursion is unlimited.

-1 context Optional[dict]

An optional dictionary. If passed, the context will be sent to each call to the visit_fn. The context can be mutated by each visitor and will be available for later visits to expressions at the given depth. Values will not be available \"up\" a level from a given expression.

The context will be automatically populated with an 'annotation' key when visiting collections within a BaseAnnotation type. This requires the caller to pass context={} and will not be activated by default.

None remove_annotations bool

If set, annotations will be replaced by their contents. By default, annotations are preserved but their contents are visited.

False Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
def visit_collection(\n    expr,\n    visit_fn: Callable[[Any], Any],\n    return_data: bool = False,\n    max_depth: int = -1,\n    context: Optional[dict] = None,\n    remove_annotations: bool = False,\n):\n\"\"\"\n    This function visits every element of an arbitrary Python collection. If an element\n    is a Python collection, it will be visited recursively. If an element is not a\n    collection, `visit_fn` will be called with the element. The return value of\n    `visit_fn` can be used to alter the element if `return_data` is set.\n\n    Note that when using `return_data` a copy of each collection is created to avoid\n    mutating the original object. This may have significant performance penalities and\n    should only be used if you intend to transform the collection.\n\n    Supported types:\n    - List\n    - Tuple\n    - Set\n    - Dict (note: keys are also visited recursively)\n    - Dataclass\n    - Pydantic model\n    - Prefect annotations\n\n    Args:\n        expr (Any): a Python object or expression\n        visit_fn (Callable[[Any], Awaitable[Any]]): an async function that\n            will be applied to every non-collection element of expr.\n        return_data (bool): if `True`, a copy of `expr` containing data modified\n            by `visit_fn` will be returned. This is slower than `return_data=False`\n            (the default).\n        max_depth: Controls the depth of recursive visitation. If set to zero, no\n            recursion will occur. If set to a positive integer N, visitation will only\n            descend to N layers deep. If set to any negative integer, no limit will be\n            enforced and recursion will continue until terminal items are reached. By\n            default, recursion is unlimited.\n        context: An optional dictionary. If passed, the context will be sent to each\n            call to the `visit_fn`. The context can be mutated by each visitor and will\n            be available for later visits to expressions at the given depth. Values\n            will not be available \"up\" a level from a given expression.\n\n            The context will be automatically populated with an 'annotation' key when\n            visiting collections within a `BaseAnnotation` type. This requires the\n            caller to pass `context={}` and will not be activated by default.\n        remove_annotations: If set, annotations will be replaced by their contents. 
By\n            default, annotations are preserved but their contents are visited.\n    \"\"\"\n\n    def visit_nested(expr):\n        # Utility for a recursive call, preserving options and updating the depth.\n        return visit_collection(\n            expr,\n            visit_fn=visit_fn,\n            return_data=return_data,\n            remove_annotations=remove_annotations,\n            max_depth=max_depth - 1,\n            # Copy the context on nested calls so it does not \"propagate up\"\n            context=context.copy() if context is not None else None,\n        )\n\n    def visit_expression(expr):\n        if context is not None:\n            return visit_fn(expr, context)\n        else:\n            return visit_fn(expr)\n\n    # Visit every expression\n    try:\n        result = visit_expression(expr)\n    except StopVisiting:\n        max_depth = 0\n        result = expr\n\n    if return_data:\n        # Only mutate the expression while returning data, otherwise it could be null\n        expr = result\n\n    # Then, visit every child of the expression recursively\n\n    # If we have reached the maximum depth, do not perform any recursion\n    if max_depth == 0:\n        return result if return_data else None\n\n    # Get the expression type; treat iterators like lists\n    typ = list if isinstance(expr, IteratorABC) and isiterable(expr) else type(expr)\n    typ = cast(type, typ)  # mypy treats this as 'object' otherwise and complains\n\n    # Then visit every item in the expression if it is a collection\n    if isinstance(expr, Mock):\n        # Do not attempt to recurse into mock objects\n        result = expr\n\n    elif isinstance(expr, BaseAnnotation):\n        if context is not None:\n            context[\"annotation\"] = expr\n        value = visit_nested(expr.unwrap())\n\n        if remove_annotations:\n            result = value if return_data else None\n        else:\n            result = expr.rewrap(value) if return_data else None\n\n    elif typ in (list, tuple, set):\n        items = [visit_nested(o) for o in expr]\n        result = typ(items) if return_data else None\n\n    elif typ in (dict, OrderedDict):\n        assert isinstance(expr, (dict, OrderedDict))  # typecheck assertion\n        items = [(visit_nested(k), visit_nested(v)) for k, v in expr.items()]\n        result = typ(items) if return_data else None\n\n    elif is_dataclass(expr) and not isinstance(expr, type):\n        values = [visit_nested(getattr(expr, f.name)) for f in fields(expr)]\n        items = {field.name: value for field, value in zip(fields(expr), values)}\n        result = typ(**items) if return_data else None\n\n    elif isinstance(expr, pydantic.BaseModel):\n        # NOTE: This implementation *does not* traverse private attributes\n        # Pydantic does not expose extras in `__fields__` so we use `__fields_set__`\n        # as well to get all of the relevant attributes\n        # Check for presence of attrs even if they're in the field set due to pydantic#4916\n        model_fields = {\n            f for f in expr.__fields_set__.union(expr.__fields__) if hasattr(expr, f)\n        }\n        items = [visit_nested(getattr(expr, key)) for key in model_fields]\n\n        if return_data:\n            # Collect fields with aliases so reconstruction can use the correct field name\n            aliases = {\n                key: value.alias\n                for key, value in expr.__fields__.items()\n                if value.has_alias\n            }\n\n            model_instance = typ(\n   
             **{\n                    aliases.get(key) or key: value\n                    for key, value in zip(model_fields, items)\n                }\n            )\n\n            # Private attributes are not included in `__fields_set__` but we do not want\n            # to drop them from the model so we restore them after constructing a new\n            # model\n            for attr in expr.__private_attributes__:\n                # Use `object.__setattr__` to avoid errors on immutable models\n                object.__setattr__(model_instance, attr, getattr(expr, attr))\n\n            result = model_instance\n        else:\n            result = None\n\n    else:\n        result = result if return_data else None\n\n    return result\n
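A sketch of visiting and transforming a nested structure with return_data=True; the redact function and the sample data are hypothetical:

from prefect.utilities.collections import visit_collection

def redact(value):
    # Replace any string that looks like a secret key with a placeholder
    if isinstance(value, str) and value.startswith("sk-"):
        return "****"
    return value

data = {"api_key": "sk-123", "nested": ["ok", {"token": "sk-456"}]}
cleaned = visit_collection(data, visit_fn=redact, return_data=True)
# {'api_key': '****', 'nested': ['ok', {'token': '****'}]}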
","tags":["Python API","collections"]},{"location":"api-ref/prefect/utilities/filesystem/","title":"prefect.utilities.filesystem","text":"","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem","title":"prefect.utilities.filesystem","text":"

Utilities for working with file systems

","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.create_default_ignore_file","title":"create_default_ignore_file","text":"

Creates a default ignore file in the provided path if one does not already exist; returns a boolean specifying whether a file was created.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/filesystem.py
def create_default_ignore_file(path: str) -> bool:\n\"\"\"\n    Creates default ignore file in the provided path if one does not already exist; returns boolean specifying\n    whether a file was created.\n    \"\"\"\n    path = pathlib.Path(path)\n    ignore_file = path / \".prefectignore\"\n    if ignore_file.exists():\n        return False\n    default_file = pathlib.Path(prefect.__module_path__) / \".prefectignore\"\n    with ignore_file.open(mode=\"w\") as f:\n        f.write(default_file.read_text())\n    return True\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.filename","title":"filename","text":"

Extract the file name from a path with remote file system support

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/filesystem.py
def filename(path: str) -> str:\n\"\"\"Extract the file name from a path with remote file system support\"\"\"\n    try:\n        of: OpenFile = fsspec.open(path)\n        sep = of.fs.sep\n    except (ImportError, AttributeError):\n        sep = \"\\\\\" if \"\\\\\" in path else \"/\"\n    return path.split(sep)[-1]\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.filter_files","title":"filter_files","text":"

This function accepts a root directory path and a list of file patterns to ignore, and returns the set of files under that root, excluding any that match the ignore patterns.

The specification matches that of .gitignore files.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/filesystem.py
def filter_files(\n    root: str = \".\", ignore_patterns: list = None, include_dirs: bool = True\n) -> set:\n\"\"\"\n    This function accepts a root directory path and a list of file patterns to ignore, and returns\n    a list of files that excludes those that should be ignored.\n\n    The specification matches that of [.gitignore files](https://git-scm.com/docs/gitignore).\n    \"\"\"\n    if ignore_patterns is None:\n        ignore_patterns = []\n    spec = pathspec.PathSpec.from_lines(\"gitwildmatch\", ignore_patterns)\n    ignored_files = {p.path for p in spec.match_tree_entries(root)}\n    if include_dirs:\n        all_files = {p.path for p in pathspec.util.iter_tree_entries(root)}\n    else:\n        all_files = set(pathspec.util.iter_tree_files(root))\n    included_files = all_files - ignored_files\n    return included_files\n
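A minimal sketch; the ignore patterns here are hypothetical examples of gitignore-style patterns:

from prefect.utilities.filesystem import filter_files

# Returns the set of paths under the project root that are not ignored
files = filter_files(
    root=".",
    ignore_patterns=["__pycache__/", "*.pyc", ".git/"],
)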
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.get_open_file_limit","title":"get_open_file_limit","text":"

Get the maximum number of open files allowed for the current process

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/filesystem.py
def get_open_file_limit() -> int:\n\"\"\"Get the maximum number of open files allowed for the current process\"\"\"\n\n    try:\n        if os.name == \"nt\":\n            import ctypes\n\n            return ctypes.cdll.ucrtbase._getmaxstdio()\n        else:\n            import resource\n\n            soft_limit, _ = resource.getrlimit(resource.RLIMIT_NOFILE)\n            return soft_limit\n    except Exception:\n        # Catch all exceptions, as ctypes can raise several errors\n        # depending on what went wrong. Return a safe default if we\n        # can't get the limit from the OS.\n        return 200\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.is_local_path","title":"is_local_path","text":"

Check if the given path points to a local or remote file system

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/filesystem.py
def is_local_path(path: Union[str, pathlib.Path, OpenFile]):\n\"\"\"Check if the given path points to a local or remote file system\"\"\"\n    if isinstance(path, str):\n        try:\n            of = fsspec.open(path)\n        except ImportError:\n            # The path is a remote file system that uses a lib that is not installed\n            return False\n    elif isinstance(path, pathlib.Path):\n        return True\n    elif isinstance(path, OpenFile):\n        of = path\n    else:\n        raise TypeError(f\"Invalid path of type {type(path).__name__!r}\")\n\n    return type(of.fs) == LocalFileSystem\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.relative_path_to_current_platform","title":"relative_path_to_current_platform","text":"

Converts a relative path generated on any platform to a relative path for the current platform.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/filesystem.py
def relative_path_to_current_platform(path_str: str) -> Path:\n\"\"\"\n    Converts a relative path generated on any platform to a relative path for the\n    current platform.\n    \"\"\"\n\n    return Path(PureWindowsPath(path_str).as_posix())\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.tmpchdir","title":"tmpchdir","text":"

Change the current working directory for the duration of the context

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/filesystem.py
@contextmanager\ndef tmpchdir(path: str):\n\"\"\"\n    Change current-working directories for the duration of the context\n    \"\"\"\n    path = os.path.abspath(path)\n    if os.path.isfile(path) or (not os.path.exists(path) and not path.endswith(\"/\")):\n        path = os.path.dirname(path)\n\n    owd = os.getcwd()\n\n    try:\n        os.chdir(path)\n        yield path\n    finally:\n        os.chdir(owd)\n
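A short usage sketch (the target directory is a hypothetical example):

import os
from prefect.utilities.filesystem import tmpchdir

with tmpchdir("/tmp"):
    # Inside the block the working directory is the given path
    print(os.getcwd())
# On exit the previous working directory is restored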
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/filesystem/#prefect.utilities.filesystem.to_display_path","title":"to_display_path","text":"

Convert a path to a displayable path. The absolute path or relative path to the current (or given) directory will be returned, whichever is shorter.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/filesystem.py
def to_display_path(\n    path: Union[pathlib.Path, str], relative_to: Union[pathlib.Path, str] = None\n) -> str:\n\"\"\"\n    Convert a path to a displayable path. The absolute path or relative path to the\n    current (or given) directory will be returned, whichever is shorter.\n    \"\"\"\n    path, relative_to = (\n        pathlib.Path(path).resolve(),\n        pathlib.Path(relative_to or \".\").resolve(),\n    )\n    relative_path = str(path.relative_to(relative_to))\n    absolute_path = str(path)\n    return relative_path if len(relative_path) < len(absolute_path) else absolute_path\n
","tags":["Python API","files","filesystems"]},{"location":"api-ref/prefect/utilities/hashing/","title":"prefect.utilities.hashing","text":"","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing","title":"prefect.utilities.hashing","text":"","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing.file_hash","title":"file_hash","text":"

Given a path to a file, produces a stable hash of the file contents.

Parameters:

Name Type Description Default path str

the path to a file

required hash_algo

Hash algorithm from hashlib to use.

_md5

Returns:

Name Type Description str str

a hash of the file contents

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/hashing.py
def file_hash(path: str, hash_algo=_md5) -> str:\n\"\"\"Given a path to a file, produces a stable hash of the file contents.\n\n    Args:\n        path (str): the path to a file\n        hash_algo: Hash algorithm from hashlib to use.\n\n    Returns:\n        str: a hash of the file contents\n    \"\"\"\n    contents = Path(path).read_bytes()\n    return stable_hash(contents, hash_algo=hash_algo)\n
","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing.hash_objects","title":"hash_objects","text":"

Attempt to hash objects by dumping to JSON or serializing with cloudpickle. On failure of both, None will be returned

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/hashing.py
def hash_objects(*args, hash_algo=_md5, **kwargs) -> Optional[str]:\n\"\"\"\n    Attempt to hash objects by dumping to JSON or serializing with cloudpickle.\n    On failure of both, `None` will be returned\n    \"\"\"\n    try:\n        serializer = JSONSerializer(dumps_kwargs={\"sort_keys\": True})\n        return stable_hash(serializer.dumps((args, kwargs)), hash_algo=hash_algo)\n    except Exception:\n        pass\n\n    try:\n        return stable_hash(cloudpickle.dumps((args, kwargs)), hash_algo=hash_algo)\n    except Exception:\n        pass\n\n    return None\n
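A sketch of using hash_objects to build a cache key from arbitrary call arguments (the arguments shown are hypothetical):

from prefect.utilities.hashing import hash_objects

# JSON serialization is tried first; cloudpickle is the fallback,
# and None is returned if neither succeeds
cache_key = hash_objects("my-task", {"retries": 2}, run_date="2023-01-01")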
","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/hashing/#prefect.utilities.hashing.stable_hash","title":"stable_hash","text":"

Given some arguments, produces a stable hash of their contents.

Supports bytes and strings. Strings will be UTF-8 encoded.

Parameters:

Name Type Description Default *args Union[str, bytes]

Items to include in the hash.

() hash_algo

Hash algorithm from hashlib to use.

_md5

Returns:

Type Description str

A hex hash.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/hashing.py
def stable_hash(*args: Union[str, bytes], hash_algo=_md5) -> str:\n\"\"\"Given some arguments, produces a stable 64-bit hash of their contents.\n\n    Supports bytes and strings. Strings will be UTF-8 encoded.\n\n    Args:\n        *args: Items to include in the hash.\n        hash_algo: Hash algorithm from hashlib to use.\n\n    Returns:\n        A hex hash.\n    \"\"\"\n    h = hash_algo()\n    for a in args:\n        if isinstance(a, str):\n            a = a.encode()\n        h.update(a)\n    return h.hexdigest()\n
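A minimal sketch; strings and bytes can be mixed freely:

from prefect.utilities.hashing import stable_hash

digest = stable_hash("some", "values")          # same hex digest on every run
digest = stable_hash(b"raw bytes", "and text")  # bytes are hashed as-is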
","tags":["Python API","hashes","hashing"]},{"location":"api-ref/prefect/utilities/processutils/","title":"prefect.utilities.processutils","text":""},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils","title":"prefect.utilities.processutils","text":""},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.forward_signal_handler","title":"forward_signal_handler","text":"

Forward subsequent signum events (e.g. interrupts) to respective signums.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/processutils.py
def forward_signal_handler(\n    pid: int, signum: int, *signums: int, process_name: str, print_fn: Callable\n):\n\"\"\"Forward subsequent signum events (e.g. interrupts) to respective signums.\"\"\"\n    current_signal, future_signals = signums[0], signums[1:]\n\n    # avoid RecursionError when setting up a direct signal forward to the same signal for the main pid\n    avoid_infinite_recursion = signum == current_signal and pid == os.getpid()\n    if avoid_infinite_recursion:\n        # store the vanilla handler so it can be temporarily restored below\n        original_handler = signal.getsignal(current_signal)\n\n    def handler(*args):\n        print_fn(\n            f\"Received {getattr(signum, 'name', signum)}. \"\n            f\"Sending {getattr(current_signal, 'name', current_signal)} to\"\n            f\" {process_name} (PID {pid})...\"\n        )\n        if avoid_infinite_recursion:\n            signal.signal(current_signal, original_handler)\n        os.kill(pid, current_signal)\n        if future_signals:\n            forward_signal_handler(\n                pid,\n                signum,\n                *future_signals,\n                process_name=process_name,\n                print_fn=print_fn,\n            )\n\n    # register current and future signal handlers\n    signal.signal(signum, handler)\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.open_process","title":"open_process async","text":"

Like anyio.open_process but with: - Support for Windows command joining - Termination of the process on exception during yield - Forced cleanup of process resources during cancellation

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/processutils.py
@asynccontextmanager\nasync def open_process(command: List[str], **kwargs):\n\"\"\"\n    Like `anyio.open_process` but with:\n    - Support for Windows command joining\n    - Termination of the process on exception during yield\n    - Forced cleanup of process resources during cancellation\n    \"\"\"\n    # Passing a string to open_process is equivalent to shell=True which is\n    # generally necessary for Unix-like commands on Windows but otherwise should\n    # be avoided\n    if not isinstance(command, list):\n        raise TypeError(\n            \"The command passed to open process must be a list. You passed the command\"\n            f\"'{command}', which is type '{type(command)}'.\"\n        )\n\n    if sys.platform == \"win32\":\n        command = \" \".join(command)\n        process = await _open_anyio_process(command, **kwargs)\n    else:\n        process = await anyio.open_process(command, **kwargs)\n\n    # if there's a creationflags kwarg and it contains CREATE_NEW_PROCESS_GROUP,\n    # use SetConsoleCtrlHandler to handle CTRL-C\n    win32_process_group = False\n    if (\n        sys.platform == \"win32\"\n        and \"creationflags\" in kwargs\n        and kwargs[\"creationflags\"] & subprocess.CREATE_NEW_PROCESS_GROUP\n    ):\n        win32_process_group = True\n        _windows_process_group_pids.add(process.pid)\n        # Add a handler for CTRL-C. Re-adding the handler is safe as Windows\n        # will not add a duplicate handler if _win32_ctrl_handler is\n        # already registered.\n        windll.kernel32.SetConsoleCtrlHandler(_win32_ctrl_handler, 1)\n\n    try:\n        async with process:\n            yield process\n    finally:\n        try:\n            process.terminate()\n            if win32_process_group:\n                _windows_process_group_pids.remove(process.pid)\n\n        except OSError:\n            # Occurs if the process is already terminated\n            pass\n\n        # Ensure the process resource is closed. If not shielded from cancellation,\n        # this resource can be left open and the subprocess output can appear after\n        # the parent process has exited.\n        with anyio.CancelScope(shield=True):\n            await process.aclose()\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.run_process","title":"run_process async","text":"

Like anyio.run_process but with: use of our open_process utility to ensure resources are cleaned up; simple stream_output support to connect the subprocess to the parent stdout/err; and support for submission with TaskGroup.start, marking as 'started' after the process has been created (when used, the PID is returned to the task status).

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/processutils.py
async def run_process(\n    command: List[str],\n    stream_output: Union[bool, Tuple[Optional[TextSink], Optional[TextSink]]] = False,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n    task_status_handler: Optional[Callable[[anyio.abc.Process], Any]] = None,\n    **kwargs,\n):\n\"\"\"\n    Like `anyio.run_process` but with:\n\n    - Use of our `open_process` utility to ensure resources are cleaned up\n    - Simple `stream_output` support to connect the subprocess to the parent stdout/err\n    - Support for submission with `TaskGroup.start` marking as 'started' after the\n        process has been created. When used, the PID is returned to the task status.\n\n    \"\"\"\n    if stream_output is True:\n        stream_output = (sys.stdout, sys.stderr)\n\n    async with open_process(\n        command,\n        stdout=subprocess.PIPE if stream_output else subprocess.DEVNULL,\n        stderr=subprocess.PIPE if stream_output else subprocess.DEVNULL,\n        **kwargs,\n    ) as process:\n        if task_status is not None:\n            if not task_status_handler:\n\n                def task_status_handler(process):\n                    return process.pid\n\n            task_status.started(task_status_handler(process))\n\n        if stream_output:\n            await consume_process_output(\n                process, stdout_sink=stream_output[0], stderr_sink=stream_output[1]\n            )\n\n        await process.wait()\n\n    return process\n
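A minimal async sketch (the echo command is a hypothetical example and assumes a Unix-like environment):

import anyio
from prefect.utilities.processutils import run_process

async def main():
    # Stream the subprocess output to this process's stdout/stderr
    process = await run_process(["echo", "hello"], stream_output=True)
    print(process.returncode)

anyio.run(main)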
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.setup_signal_handlers_agent","title":"setup_signal_handlers_agent","text":"

Handle interrupts of the agent gracefully.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/processutils.py
def setup_signal_handlers_agent(pid: int, process_name: str, print_fn: Callable):\n\"\"\"Handle interrupts of the agent gracefully.\"\"\"\n    setup_handler = partial(\n        forward_signal_handler, pid, process_name=process_name, print_fn=print_fn\n    )\n    # when agent receives SIGINT, it stops dequeueing new FlowRuns, and runs until the subprocesses finish\n    # the signal is not forwarded to subprocesses, so they can continue to run and hopefully still complete\n    if sys.platform == \"win32\":\n        # on Windows, use CTRL_BREAK_EVENT as SIGTERM is useless:\n        # https://bugs.python.org/issue26350\n        setup_handler(signal.SIGINT, signal.CTRL_BREAK_EVENT)\n    else:\n        # forward first SIGINT directly, send SIGKILL on subsequent interrupt\n        setup_handler(signal.SIGINT, signal.SIGINT, signal.SIGKILL)\n        # first SIGTERM: send SIGINT, send SIGKILL on subsequent SIGTERM\n        setup_handler(signal.SIGTERM, signal.SIGINT, signal.SIGKILL)\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.setup_signal_handlers_server","title":"setup_signal_handlers_server","text":"

Handle interrupts of the server gracefully.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/processutils.py
def setup_signal_handlers_server(pid: int, process_name: str, print_fn: Callable):\n\"\"\"Handle interrupts of the server gracefully.\"\"\"\n    setup_handler = partial(\n        forward_signal_handler, pid, process_name=process_name, print_fn=print_fn\n    )\n    # when server receives a signal, it needs to be propagated to the uvicorn subprocess\n    if sys.platform == \"win32\":\n        # on Windows, use CTRL_BREAK_EVENT as SIGTERM is useless:\n        # https://bugs.python.org/issue26350\n        setup_handler(signal.SIGINT, signal.CTRL_BREAK_EVENT)\n    else:\n        # first interrupt: SIGTERM, second interrupt: SIGKILL\n        setup_handler(signal.SIGINT, signal.SIGTERM, signal.SIGKILL)\n        # forward first SIGTERM directly, send SIGKILL on subsequent SIGTERM\n        setup_handler(signal.SIGTERM, signal.SIGTERM, signal.SIGKILL)\n
"},{"location":"api-ref/prefect/utilities/processutils/#prefect.utilities.processutils.setup_signal_handlers_worker","title":"setup_signal_handlers_worker","text":"

Handle interrupts of workers gracefully.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/processutils.py
def setup_signal_handlers_worker(pid: int, process_name: str, print_fn: Callable):\n\"\"\"Handle interrupts of workers gracefully.\"\"\"\n    setup_handler = partial(\n        forward_signal_handler, pid, process_name=process_name, print_fn=print_fn\n    )\n    # when agent receives SIGINT, it stops dequeueing new FlowRuns, and runs until the subprocesses finish\n    # the signal is not forwarded to subprocesses, so they can continue to run and hopefully still complete\n    if sys.platform == \"win32\":\n        # on Windows, use CTRL_BREAK_EVENT as SIGTERM is useless:\n        # https://bugs.python.org/issue26350\n        setup_handler(signal.SIGINT, signal.CTRL_BREAK_EVENT)\n    else:\n        # forward first SIGINT directly, send SIGKILL on subsequent interrupt\n        setup_handler(signal.SIGINT, signal.SIGINT, signal.SIGKILL)\n        # first SIGTERM: send SIGINT, send SIGKILL on subsequent SIGTERM\n        setup_handler(signal.SIGTERM, signal.SIGINT, signal.SIGKILL)\n
"},{"location":"api-ref/prefect/workers/base/","title":"prefect.workers.base","text":"","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base","title":"prefect.workers.base","text":"","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration","title":"BaseJobConfiguration","text":"

Bases: BaseModel

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
class BaseJobConfiguration(BaseModel):\n    command: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The command to use when starting a flow run. \"\n            \"In most cases, this should be left blank and the command \"\n            \"will be automatically generated by the worker.\"\n        ),\n    )\n    env: Dict[str, Optional[str]] = Field(\n        default_factory=dict,\n        title=\"Environment Variables\",\n        description=\"Environment variables to set when starting a flow run.\",\n    )\n    labels: Dict[str, str] = Field(\n        default_factory=dict,\n        description=(\n            \"Labels applied to infrastructure created by the worker using \"\n            \"this job configuration.\"\n        ),\n    )\n    name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"Name given to infrastructure created by the worker using this \"\n            \"job configuration.\"\n        ),\n    )\n\n    _related_objects: Dict[str, Any] = PrivateAttr(default_factory=dict)\n\n    @validator(\"command\")\n    def _coerce_command(cls, v):\n\"\"\"Make sure that empty strings are treated as None\"\"\"\n        if not v:\n            return None\n        return v\n\n    @staticmethod\n    def _get_base_config_defaults(variables: dict) -> dict:\n\"\"\"Get default values from base config for all variables that have them.\"\"\"\n        defaults = dict()\n        for variable_name, attrs in variables.items():\n            if \"default\" in attrs:\n                defaults[variable_name] = attrs[\"default\"]\n\n        return defaults\n\n    @classmethod\n    @inject_client\n    async def from_template_and_values(\n        cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n    ):\n\"\"\"Creates a valid worker configuration object from the provided base\n        configuration and overrides.\n\n        Important: this method expects that the base_job_template was already\n        validated server-side.\n        \"\"\"\n        job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n        variables_schema = base_job_template[\"variables\"]\n        variables = cls._get_base_config_defaults(\n            variables_schema.get(\"properties\", {})\n        )\n        variables.update(values)\n\n        populated_configuration = apply_values(template=job_config, values=variables)\n        populated_configuration = await resolve_block_document_references(\n            template=populated_configuration, client=client\n        )\n        populated_configuration = await resolve_variables(\n            template=populated_configuration, client=client\n        )\n        return cls(**populated_configuration)\n\n    @classmethod\n    def json_template(cls) -> dict:\n\"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n        Defaults to using the job configuration parameter name as the template variable name.\n\n        e.g.\n        {\n            key1: '{{ key1 }}',     # default variable template\n            key2: '{{ template2 }}', # `template2` specifically provide as template\n        }\n        \"\"\"\n        configuration = {}\n        properties = cls.schema()[\"properties\"]\n        for k, v in properties.items():\n            if v.get(\"template\"):\n                template = v[\"template\"]\n            else:\n                template = \"{{ \" + k + \" }}\"\n            configuration[k] = template\n\n        return 
configuration\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n\"\"\"\n        Prepare the job configuration for a flow run.\n\n        This method is called by the worker before starting a flow run. It\n        should be used to set any configuration values that are dependent on\n        the flow run.\n\n        Args:\n            flow_run: The flow run to be executed.\n            deployment: The deployment that the flow run is associated with.\n            flow: The flow that the flow run is associated with.\n        \"\"\"\n\n        self._related_objects = {\n            \"deployment\": deployment,\n            \"flow\": flow,\n            \"flow-run\": flow_run,\n        }\n        if deployment is not None:\n            deployment_labels = self._base_deployment_labels(deployment)\n        else:\n            deployment_labels = {}\n\n        if flow is not None:\n            flow_labels = self._base_flow_labels(flow)\n        else:\n            flow_labels = {}\n\n        env = {\n            **self._base_environment(),\n            **self._base_flow_run_environment(flow_run),\n            **self.env,\n        }\n        self.env = {key: value for key, value in env.items() if value is not None}\n        self.labels = {\n            **self._base_flow_run_labels(flow_run),\n            **deployment_labels,\n            **flow_labels,\n            **self.labels,\n        }\n        self.name = self.name or flow_run.name\n        self.command = self.command or self._base_flow_run_command()\n\n    @staticmethod\n    def _base_flow_run_command() -> str:\n\"\"\"\n        Generate a command for a flow run job.\n        \"\"\"\n        return \"python -m prefect.engine\"\n\n    @staticmethod\n    def _base_flow_run_labels(flow_run: \"FlowRun\") -> Dict[str, str]:\n\"\"\"\n        Generate a dictionary of labels for a flow run job.\n        \"\"\"\n        return {\n            \"prefect.io/flow-run-id\": str(flow_run.id),\n            \"prefect.io/flow-run-name\": flow_run.name,\n            \"prefect.io/version\": prefect.__version__,\n        }\n\n    @classmethod\n    def _base_environment(cls) -> Dict[str, str]:\n\"\"\"\n        Environment variables that should be passed to all created infrastructure.\n\n        These values should be overridable with the `env` field.\n        \"\"\"\n        return get_current_settings().to_environment_variables(exclude_unset=True)\n\n    @staticmethod\n    def _base_flow_run_environment(flow_run: \"FlowRun\") -> Dict[str, str]:\n\"\"\"\n        Generate a dictionary of environment variables for a flow run job.\n        \"\"\"\n        return {\"PREFECT__FLOW_RUN_ID\": flow_run.id.hex}\n\n    @staticmethod\n    def _base_deployment_labels(deployment: \"DeploymentResponse\") -> Dict[str, str]:\n        labels = {\n            \"prefect.io/deployment-id\": str(deployment.id),\n            \"prefect.io/deployment-name\": deployment.name,\n        }\n        if deployment.updated is not None:\n            labels[\"prefect.io/deployment-updated\"] = deployment.updated.in_timezone(\n                \"utc\"\n            ).to_iso8601_string()\n        return labels\n\n    @staticmethod\n    def _base_flow_labels(flow: \"Flow\") -> Dict[str, str]:\n        return {\n            \"prefect.io/flow-id\": str(flow.id),\n            \"prefect.io/flow-name\": flow.name,\n        }\n\n    def _related_resources(self) -> 
List[RelatedResource]:\n        tags = set()\n        related = []\n\n        for kind, obj in self._related_objects.items():\n            if obj is None:\n                continue\n            if hasattr(obj, \"tags\"):\n                tags.update(obj.tags)\n            related.append(object_as_related_resource(kind=kind, role=kind, object=obj))\n\n        return related + tags_as_related_resources(tags)\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration.from_template_and_values","title":"from_template_and_values async classmethod","text":"

Creates a valid worker configuration object from the provided base configuration and overrides.

Important: this method expects that the base_job_template was already validated server-side.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
@classmethod\n@inject_client\nasync def from_template_and_values(\n    cls, base_job_template: dict, values: dict, client: \"PrefectClient\" = None\n):\n\"\"\"Creates a valid worker configuration object from the provided base\n    configuration and overrides.\n\n    Important: this method expects that the base_job_template was already\n    validated server-side.\n    \"\"\"\n    job_config: Dict[str, Any] = base_job_template[\"job_configuration\"]\n    variables_schema = base_job_template[\"variables\"]\n    variables = cls._get_base_config_defaults(\n        variables_schema.get(\"properties\", {})\n    )\n    variables.update(values)\n\n    populated_configuration = apply_values(template=job_config, values=variables)\n    populated_configuration = await resolve_block_document_references(\n        template=populated_configuration, client=client\n    )\n    populated_configuration = await resolve_variables(\n        template=populated_configuration, client=client\n    )\n    return cls(**populated_configuration)\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration.json_template","title":"json_template classmethod","text":"

Returns a dict with job configuration as keys and the corresponding templates as values

Defaults to using the job configuration parameter name as the template variable name.

e.g.
{
    key1: '{{ key1 }}',      # default variable template
    key2: '{{ template2 }}', # `template2` specifically provided as the template
}

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
@classmethod\ndef json_template(cls) -> dict:\n\"\"\"Returns a dict with job configuration as keys and the corresponding templates as values\n\n    Defaults to using the job configuration parameter name as the template variable name.\n\n    e.g.\n    {\n        key1: '{{ key1 }}',     # default variable template\n        key2: '{{ template2 }}', # `template2` specifically provide as template\n    }\n    \"\"\"\n    configuration = {}\n    properties = cls.schema()[\"properties\"]\n    for k, v in properties.items():\n        if v.get(\"template\"):\n            template = v[\"template\"]\n        else:\n            template = \"{{ \" + k + \" }}\"\n        configuration[k] = template\n\n    return configuration\n
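A sketch of how json_template reflects a subclass's fields; MyJobConfiguration and its image field are hypothetical and assume a pydantic-style field definition:

from typing import Optional
from pydantic import Field
from prefect.workers.base import BaseJobConfiguration

class MyJobConfiguration(BaseJobConfiguration):
    image: Optional[str] = Field(default=None, description="Container image to run")

MyJobConfiguration.json_template()
# {'command': '{{ command }}', 'env': '{{ env }}', 'labels': '{{ labels }}',
#  'name': '{{ name }}', 'image': '{{ image }}'}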
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseJobConfiguration.prepare_for_flow_run","title":"prepare_for_flow_run","text":"

Prepare the job configuration for a flow run.

This method is called by the worker before starting a flow run. It should be used to set any configuration values that are dependent on the flow run.

Parameters:

Name Type Description Default flow_run FlowRun

The flow run to be executed.

required deployment Optional[DeploymentResponse]

The deployment that the flow run is associated with.

None flow Optional[Flow]

The flow that the flow run is associated with.

None Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
def prepare_for_flow_run(\n    self,\n    flow_run: \"FlowRun\",\n    deployment: Optional[\"DeploymentResponse\"] = None,\n    flow: Optional[\"Flow\"] = None,\n):\n\"\"\"\n    Prepare the job configuration for a flow run.\n\n    This method is called by the worker before starting a flow run. It\n    should be used to set any configuration values that are dependent on\n    the flow run.\n\n    Args:\n        flow_run: The flow run to be executed.\n        deployment: The deployment that the flow run is associated with.\n        flow: The flow that the flow run is associated with.\n    \"\"\"\n\n    self._related_objects = {\n        \"deployment\": deployment,\n        \"flow\": flow,\n        \"flow-run\": flow_run,\n    }\n    if deployment is not None:\n        deployment_labels = self._base_deployment_labels(deployment)\n    else:\n        deployment_labels = {}\n\n    if flow is not None:\n        flow_labels = self._base_flow_labels(flow)\n    else:\n        flow_labels = {}\n\n    env = {\n        **self._base_environment(),\n        **self._base_flow_run_environment(flow_run),\n        **self.env,\n    }\n    self.env = {key: value for key, value in env.items() if value is not None}\n    self.labels = {\n        **self._base_flow_run_labels(flow_run),\n        **deployment_labels,\n        **flow_labels,\n        **self.labels,\n    }\n    self.name = self.name or flow_run.name\n    self.command = self.command or self._base_flow_run_command()\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker","title":"BaseWorker","text":"

Bases: abc.ABC

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
@register_base_type\nclass BaseWorker(abc.ABC):\n    type: str\n    job_configuration: Type[BaseJobConfiguration] = BaseJobConfiguration\n    job_configuration_variables: Optional[Type[BaseVariables]] = None\n\n    _documentation_url = \"\"\n    _logo_url = \"\"\n    _description = \"\"\n\n    @experimental(feature=\"The workers feature\", group=\"workers\")\n    def __init__(\n        self,\n        work_pool_name: str,\n        work_queues: Optional[List[str]] = None,\n        name: Optional[str] = None,\n        prefetch_seconds: Optional[float] = None,\n        create_pool_if_not_found: bool = True,\n        limit: Optional[int] = None,\n    ):\n\"\"\"\n        Base class for all Prefect workers.\n\n        Args:\n            name: The name of the worker. If not provided, a random one\n                will be generated. If provided, it cannot contain '/' or '%'.\n                The name is used to identify the worker in the UI; if two\n                processes have the same name, they will be treated as the same\n                worker.\n            work_pool_name: The name of the work pool to poll.\n            work_queues: A list of work queues to poll. If not provided, all\n                work queue in the work pool will be polled.\n            prefetch_seconds: The number of seconds to prefetch flow runs for.\n            create_pool_if_not_found: Whether to create the work pool\n                if it is not found. Defaults to `True`, but can be set to `False` to\n                ensure that work pools are not created accidentally.\n            limit: The maximum number of flow runs this worker should be running at\n                a given time.\n        \"\"\"\n        if name and (\"/\" in name or \"%\" in name):\n            raise ValueError(\"Worker name cannot contain '/' or '%'\")\n        self.name = name or f\"{self.__class__.__name__} {uuid4()}\"\n        self._logger = get_logger(f\"worker.{self.__class__.type}.{self.name.lower()}\")\n\n        self.is_setup = False\n        self._create_pool_if_not_found = create_pool_if_not_found\n        self._work_pool_name = work_pool_name\n        self._work_queues: Set[str] = set(work_queues) if work_queues else set()\n\n        self._prefetch_seconds: float = (\n            prefetch_seconds or PREFECT_WORKER_PREFETCH_SECONDS.value()\n        )\n\n        self._work_pool: Optional[WorkPool] = None\n        self._runs_task_group: Optional[anyio.abc.TaskGroup] = None\n        self._client: Optional[PrefectClient] = None\n        self._last_polled_time: pendulum.DateTime = pendulum.now(\"utc\")\n        self._limit = limit\n        self._limiter: Optional[anyio.CapacityLimiter] = None\n        self._submitting_flow_run_ids = set()\n        self._cancelling_flow_run_ids = set()\n        self._scheduled_task_scopes = set()\n\n    @classmethod\n    def get_documentation_url(cls) -> str:\n        return cls._documentation_url\n\n    @classmethod\n    def get_logo_url(cls) -> str:\n        return cls._logo_url\n\n    @classmethod\n    def get_description(cls) -> str:\n        return cls._description\n\n    @classmethod\n    def get_default_base_job_template(cls) -> Dict:\n        if cls.job_configuration_variables is None:\n            schema = cls.job_configuration.schema()\n            # remove \"template\" key from all dicts in schema['properties'] because it is not a\n            # relevant field\n            for key, value in schema[\"properties\"].items():\n                if isinstance(value, dict):\n                    
schema[\"properties\"][key].pop(\"template\", None)\n            variables_schema = schema\n        else:\n            variables_schema = cls.job_configuration_variables.schema()\n        variables_schema.pop(\"title\", None)\n        return {\n            \"job_configuration\": cls.job_configuration.json_template(),\n            \"variables\": variables_schema,\n        }\n\n    @staticmethod\n    def get_worker_class_from_type(type: str) -> Optional[Type[\"BaseWorker\"]]:\n\"\"\"\n        Returns the worker class for a given worker type. If the worker type\n        is not recognized, returns None.\n        \"\"\"\n        load_prefect_collections()\n        worker_registry = get_registry_for_type(BaseWorker)\n        if worker_registry is not None:\n            return worker_registry.get(type)\n\n    @staticmethod\n    def get_all_available_worker_types() -> List[str]:\n\"\"\"\n        Returns all worker types available in the local registry.\n        \"\"\"\n        load_prefect_collections()\n        worker_registry = get_registry_for_type(BaseWorker)\n        if worker_registry is not None:\n            return list(worker_registry.keys())\n        return []\n\n    def get_name_slug(self):\n        return slugify(self.name)\n\n    def get_flow_run_logger(self, flow_run: \"FlowRun\") -> PrefectLogAdapter:\n        return flow_run_logger(flow_run=flow_run).getChild(\n            \"worker\",\n            extra={\n                \"worker_name\": self.name,\n                \"work_pool_name\": (\n                    self._work_pool_name if self._work_pool else \"<unknown>\"\n                ),\n                \"work_pool_id\": str(getattr(self._work_pool, \"id\", \"unknown\")),\n            },\n        )\n\n    @abc.abstractmethod\n    async def run(\n        self,\n        flow_run: \"FlowRun\",\n        configuration: BaseJobConfiguration,\n        task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> BaseWorkerResult:\n\"\"\"\n        Runs a given flow run on the current worker.\n        \"\"\"\n        raise NotImplementedError(\n            \"Workers must implement a method for running submitted flow runs\"\n        )\n\n    async def kill_infrastructure(\n        self,\n        infrastructure_pid: str,\n        configuration: BaseJobConfiguration,\n        grace_seconds: int = 30,\n    ):\n\"\"\"\n        Method for killing infrastructure created by a worker. 
Should be implemented by\n        individual workers if they support killing infrastructure.\n        \"\"\"\n        raise NotImplementedError(\n            \"This worker does not support killing infrastructure.\"\n        )\n\n    @classmethod\n    def __dispatch_key__(cls):\n        if cls.__name__ == \"BaseWorker\":\n            return None  # The base class is abstract\n        return cls.type\n\n    async def setup(self):\n\"\"\"Prepares the worker to run.\"\"\"\n        self._logger.debug(\"Setting up worker...\")\n        self._runs_task_group = anyio.create_task_group()\n        self._limiter = (\n            anyio.CapacityLimiter(self._limit) if self._limit is not None else None\n        )\n        self._client = get_client()\n        await self._client.__aenter__()\n        await self._runs_task_group.__aenter__()\n\n        self.is_setup = True\n\n    async def teardown(self, *exc_info):\n\"\"\"Cleans up resources after the worker is stopped.\"\"\"\n        self._logger.debug(\"Tearing down worker...\")\n        self.is_setup = False\n        for scope in self._scheduled_task_scopes:\n            scope.cancel()\n        if self._runs_task_group:\n            await self._runs_task_group.__aexit__(*exc_info)\n        if self._client:\n            await self._client.__aexit__(*exc_info)\n        self._runs_task_group = None\n        self._client = None\n\n    def is_worker_still_polling(self, query_interval_seconds: int) -> bool:\n\"\"\"\n        This method is invoked by a webserver healthcheck handler\n        and returns a boolean indicating if the worker has recorded a\n        scheduled flow run poll within a variable amount of time.\n\n        The `query_interval_seconds` is the same value that is used by\n        the loop services - we will evaluate if the _last_polled_time\n        was within that interval x 30 (so 10s -> 5m)\n\n        The instance property `self._last_polled_time`\n        is currently set/updated in `get_and_submit_flow_runs()`\n        \"\"\"\n        threshold_seconds = query_interval_seconds * 30\n\n        seconds_since_last_poll = (\n            pendulum.now(\"utc\") - self._last_polled_time\n        ).in_seconds()\n\n        is_still_polling = seconds_since_last_poll <= threshold_seconds\n\n        if not is_still_polling:\n            self._logger.error(\n                f\"Worker has not polled in the last {seconds_since_last_poll} seconds \"\n                \"and should be restarted\"\n            )\n\n        return is_still_polling\n\n    async def get_and_submit_flow_runs(self):\n        runs_response = await self._get_scheduled_flow_runs()\n\n        self._last_polled_time = pendulum.now(\"utc\")\n\n        return await self._submit_scheduled_flow_runs(flow_run_response=runs_response)\n\n    async def check_for_cancelled_flow_runs(self):\n        if not self.is_setup:\n            raise RuntimeError(\n                \"Worker is not set up. 
Please make sure you are running this worker \"\n                \"as an async context manager.\"\n            )\n\n        self._logger.debug(\"Checking for cancelled flow runs...\")\n\n        work_queue_filter = (\n            WorkQueueFilter(name=WorkQueueFilterName(any_=list(self._work_queues)))\n            if self._work_queues\n            else None\n        )\n\n        named_cancelling_flow_runs = await self._client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLED]),\n                    name=FlowRunFilterStateName(any_=[\"Cancelling\"]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(not_any_=list(self._cancelling_flow_run_ids)),\n            ),\n            work_pool_filter=WorkPoolFilter(\n                name=WorkPoolFilterName(any_=[self._work_pool_name])\n            ),\n            work_queue_filter=work_queue_filter,\n        )\n\n        typed_cancelling_flow_runs = await self._client.read_flow_runs(\n            flow_run_filter=FlowRunFilter(\n                state=FlowRunFilterState(\n                    type=FlowRunFilterStateType(any_=[StateType.CANCELLING]),\n                ),\n                # Avoid duplicate cancellation calls\n                id=FlowRunFilterId(not_any_=list(self._cancelling_flow_run_ids)),\n            ),\n            work_pool_filter=WorkPoolFilter(\n                name=WorkPoolFilterName(any_=[self._work_pool_name])\n            ),\n            work_queue_filter=work_queue_filter,\n        )\n\n        cancelling_flow_runs = named_cancelling_flow_runs + typed_cancelling_flow_runs\n\n        if cancelling_flow_runs:\n            self._logger.info(\n                f\"Found {len(cancelling_flow_runs)} flow runs awaiting cancellation.\"\n            )\n\n        for flow_run in cancelling_flow_runs:\n            self._cancelling_flow_run_ids.add(flow_run.id)\n            self._runs_task_group.start_soon(self.cancel_run, flow_run)\n\n        return cancelling_flow_runs\n\n    async def cancel_run(self, flow_run: \"FlowRun\"):\n        run_logger = self.get_flow_run_logger(flow_run)\n\n        if not flow_run.infrastructure_pid:\n            run_logger.error(\n                f\"Flow run '{flow_run.id}' does not have an infrastructure pid\"\n                \" attached. 
Cancellation cannot be guaranteed.\"\n            )\n            await self._mark_flow_run_as_cancelled(\n                flow_run,\n                state_updates={\n                    \"message\": (\n                        \"This flow run is missing infrastructure tracking information\"\n                        \" and cancellation cannot be guaranteed.\"\n                    )\n                },\n            )\n            return\n\n        try:\n            configuration = await self._get_configuration(flow_run)\n        except ObjectNotFound:\n            self._logger.warning(\n                f\"Flow run {flow_run.id!r} cannot be cancelled by this worker:\"\n                f\" associated deployment {flow_run.deployment_id!r} does not exist.\"\n            )\n\n        try:\n            await self.kill_infrastructure(\n                infrastructure_pid=flow_run.infrastructure_pid,\n                configuration=configuration,\n            )\n        except NotImplementedError:\n            self._logger.error(\n                f\"Worker type {self.type!r} does not support killing created \"\n                \"infrastructure. Cancellation cannot be guaranteed.\"\n            )\n        except InfrastructureNotFound as exc:\n            self._logger.warning(f\"{exc} Marking flow run as cancelled.\")\n            await self._mark_flow_run_as_cancelled(flow_run)\n        except InfrastructureNotAvailable as exc:\n            self._logger.warning(f\"{exc} Flow run cannot be cancelled by this worker.\")\n        except Exception:\n            run_logger.exception(\n                \"Encountered exception while killing infrastructure for flow run \"\n                f\"'{flow_run.id}'. Flow run may not be cancelled.\"\n            )\n            # We will try again on generic exceptions\n            self._cancelling_flow_run_ids.remove(flow_run.id)\n            return\n        else:\n            self._emit_flow_run_cancelled_event(\n                flow_run=flow_run, configuration=configuration\n            )\n            await self._mark_flow_run_as_cancelled(flow_run)\n            run_logger.info(f\"Cancelled flow run '{flow_run.id}'!\")\n\n    async def _update_local_work_pool_info(self):\n        try:\n            work_pool = await self._client.read_work_pool(\n                work_pool_name=self._work_pool_name\n            )\n        except ObjectNotFound:\n            if self._create_pool_if_not_found:\n                work_pool = await self._client.create_work_pool(\n                    work_pool=WorkPoolCreate(name=self._work_pool_name, type=self.type)\n                )\n                self._logger.info(f\"Worker pool {self._work_pool_name!r} created.\")\n            else:\n                self._logger.warning(f\"Worker pool {self._work_pool_name!r} not found!\")\n                return\n\n        # if the remote config type changes (or if it's being loaded for the\n        # first time), check if it matches the local type and warn if not\n        if getattr(self._work_pool, \"type\", 0) != work_pool.type:\n            if work_pool.type != self.__class__.type:\n                self._logger.warning(\n                    \"Worker type mismatch! This worker process expects type \"\n                    f\"{self.type!r} but received {work_pool.type!r}\"\n                    \" from the server. 
Unexpected behavior may occur.\"\n                )\n\n        # once the work pool is loaded, verify that it has a `base_job_template` and\n        # set it if not\n        if not work_pool.base_job_template:\n            job_template = self.__class__.get_default_base_job_template()\n            await self._set_work_pool_template(work_pool, job_template)\n            work_pool.base_job_template = job_template\n\n        self._work_pool = work_pool\n\n    async def _send_worker_heartbeat(self):\n        if self._work_pool:\n            await self._client.send_worker_heartbeat(\n                work_pool_name=self._work_pool_name, worker_name=self.name\n            )\n\n    async def sync_with_backend(self):\n\"\"\"\n        Updates the worker's local information about it's current work pool and\n        queues. Sends a worker heartbeat to the API.\n        \"\"\"\n        await self._update_local_work_pool_info()\n\n        await self._send_worker_heartbeat()\n\n        self._logger.debug(\"Worker synchronized with the Prefect API server.\")\n\n    async def _get_scheduled_flow_runs(\n        self,\n    ) -> List[\"WorkerFlowRunResponse\"]:\n\"\"\"\n        Retrieve scheduled flow runs from the work pool's queues.\n        \"\"\"\n        scheduled_before = pendulum.now(\"utc\").add(seconds=int(self._prefetch_seconds))\n        self._logger.debug(\n            f\"Querying for flow runs scheduled before {scheduled_before}\"\n        )\n        try:\n            scheduled_flow_runs = (\n                await self._client.get_scheduled_flow_runs_for_work_pool(\n                    work_pool_name=self._work_pool_name,\n                    scheduled_before=scheduled_before,\n                    work_queue_names=list(self._work_queues),\n                )\n            )\n            self._logger.debug(\n                f\"Discovered {len(scheduled_flow_runs)} scheduled_flow_runs\"\n            )\n            return scheduled_flow_runs\n        except ObjectNotFound:\n            # the pool doesn't exist; it will be created on the next\n            # heartbeat (or an appropriate warning will be logged)\n            return []\n\n    async def _submit_scheduled_flow_runs(\n        self, flow_run_response: List[\"WorkerFlowRunResponse\"]\n    ) -> List[\"FlowRun\"]:\n\"\"\"\n        Takes a list of WorkerFlowRunResponses and submits the referenced flow runs\n        for execution by the worker.\n        \"\"\"\n        submittable_flow_runs = [entry.flow_run for entry in flow_run_response]\n        submittable_flow_runs.sort(key=lambda run: run.next_scheduled_start_time)\n        for flow_run in submittable_flow_runs:\n            if flow_run.id in self._submitting_flow_run_ids:\n                continue\n\n            try:\n                if self._limiter:\n                    self._limiter.acquire_on_behalf_of_nowait(flow_run.id)\n            except anyio.WouldBlock:\n                self._logger.info(\n                    f\"Flow run limit reached; {self._limiter.borrowed_tokens} flow runs\"\n                    \" in progress.\"\n                )\n                break\n            else:\n                run_logger = self.get_flow_run_logger(flow_run)\n                run_logger.info(\n                    f\"Worker '{self.name}' submitting flow run '{flow_run.id}'\"\n                )\n                self._submitting_flow_run_ids.add(flow_run.id)\n                self._runs_task_group.start_soon(\n                    self._submit_run,\n                    flow_run,\n                )\n\n      
  return list(\n            filter(\n                lambda run: run.id in self._submitting_flow_run_ids,\n                submittable_flow_runs,\n            )\n        )\n\n    async def _check_flow_run(self, flow_run: \"FlowRun\") -> None:\n\"\"\"\n        Performs a check on a submitted flow run to warn the user if the flow run\n        was created from a deployment with a storage block.\n        \"\"\"\n        if flow_run.deployment_id:\n            deployment = await self._client.read_deployment(flow_run.deployment_id)\n            if deployment.storage_document_id:\n                raise ValueError(\n                    f\"Flow run {flow_run.id!r} was created from deployment\"\n                    f\" {deployment.name!r} which is configured with a storage block.\"\n                    \" Workers currently only support local storage. Please use an\"\n                    \" agent to execute this flow run.\"\n                )\n\n    async def _submit_run(self, flow_run: \"FlowRun\") -> None:\n\"\"\"\n        Submits a given flow run for execution by the worker.\n        \"\"\"\n        run_logger = self.get_flow_run_logger(flow_run)\n\n        try:\n            await self._check_flow_run(flow_run)\n        except (ValueError, ObjectNotFound):\n            self._logger.exception(\n                (\n                    \"Flow run %s did not pass checks and will not be submitted for\"\n                    \" execution\"\n                ),\n                flow_run.id,\n            )\n            self._submitting_flow_run_ids.remove(flow_run.id)\n            return\n\n        ready_to_submit = await self._propose_pending_state(flow_run)\n\n        if ready_to_submit:\n            readiness_result = await self._runs_task_group.start(\n                self._submit_run_and_capture_errors, flow_run\n            )\n\n            if readiness_result and not isinstance(readiness_result, Exception):\n                try:\n                    await self._client.update_flow_run(\n                        flow_run_id=flow_run.id,\n                        infrastructure_pid=str(readiness_result),\n                    )\n                except Exception:\n                    run_logger.exception(\n                        \"An error occurred while setting the `infrastructure_pid` on \"\n                        f\"flow run {flow_run.id!r}. 
The flow run will \"\n                        \"not be cancellable.\"\n                    )\n\n            run_logger.info(f\"Completed submission of flow run '{flow_run.id}'\")\n\n        else:\n            # If the run is not ready to submit, release the concurrency slot\n            if self._limiter:\n                self._limiter.release_on_behalf_of(flow_run.id)\n\n        self._submitting_flow_run_ids.remove(flow_run.id)\n\n    async def _submit_run_and_capture_errors(\n        self, flow_run: \"FlowRun\", task_status: anyio.abc.TaskStatus = None\n    ) -> Union[BaseWorkerResult, Exception]:\n        run_logger = self.get_flow_run_logger(flow_run)\n\n        try:\n            configuration = await self._get_configuration(flow_run)\n            submitted_event = self._emit_flow_run_submitted_event(configuration)\n            result = await self.run(\n                flow_run=flow_run,\n                task_status=task_status,\n                configuration=configuration,\n            )\n        except Exception as exc:\n            if not task_status._future.done():\n                # This flow run was being submitted and did not start successfully\n                run_logger.exception(\n                    f\"Failed to submit flow run '{flow_run.id}' to infrastructure.\"\n                )\n                # Mark the task as started to prevent agent crash\n                task_status.started(exc)\n                await self._propose_crashed_state(\n                    flow_run, \"Flow run could not be submitted to infrastructure\"\n                )\n            else:\n                run_logger.exception(\n                    f\"An error occurred while monitoring flow run '{flow_run.id}'. \"\n                    \"The flow run will not be marked as failed, but an issue may have \"\n                    \"occurred.\"\n                )\n            return exc\n        finally:\n            if self._limiter:\n                self._limiter.release_on_behalf_of(flow_run.id)\n\n        if not task_status._future.done():\n            run_logger.error(\n                f\"Infrastructure returned without reporting flow run '{flow_run.id}' \"\n                \"as started or raising an error. This behavior is not expected and \"\n                \"generally indicates improper implementation of infrastructure. 
The \"\n                \"flow run will not be marked as failed, but an issue may have occurred.\"\n            )\n            # Mark the task as started to prevent agent crash\n            task_status.started()\n\n        if result.status_code != 0:\n            await self._propose_crashed_state(\n                flow_run,\n                (\n                    \"Flow run infrastructure exited with non-zero status code\"\n                    f\" {result.status_code}.\"\n                ),\n            )\n\n        self._emit_flow_run_executed_event(result, configuration, submitted_event)\n\n        return result\n\n    def get_status(self):\n\"\"\"\n        Retrieves the status of the current worker including its name, current worker\n        pool, the work pool queues it is polling, and its local settings.\n        \"\"\"\n        return {\n            \"name\": self.name,\n            \"work_pool\": (\n                self._work_pool.dict(json_compatible=True)\n                if self._work_pool is not None\n                else None\n            ),\n            \"settings\": {\n                \"prefetch_seconds\": self._prefetch_seconds,\n            },\n        }\n\n    async def _get_configuration(\n        self,\n        flow_run: \"FlowRun\",\n    ) -> BaseJobConfiguration:\n        deployment = await self._client.read_deployment(flow_run.deployment_id)\n        flow = await self._client.read_flow(flow_run.flow_id)\n        configuration = await self.job_configuration.from_template_and_values(\n            base_job_template=self._work_pool.base_job_template,\n            values=deployment.infra_overrides or {},\n            client=self._client,\n        )\n        configuration.prepare_for_flow_run(\n            flow_run=flow_run, deployment=deployment, flow=flow\n        )\n        return configuration\n\n    async def _propose_pending_state(self, flow_run: \"FlowRun\") -> bool:\n        run_logger = self.get_flow_run_logger(flow_run)\n        state = flow_run.state\n        try:\n            state = await propose_state(\n                self._client, Pending(), flow_run_id=flow_run.id\n            )\n        except Abort as exc:\n            run_logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}'. 
\"\n                    f\"Server sent an abort signal: {exc}\"\n                ),\n            )\n            return False\n        except Exception:\n            run_logger.exception(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n            )\n            return False\n\n        if not state.is_pending():\n            run_logger.info(\n                (\n                    f\"Aborted submission of flow run '{flow_run.id}': \"\n                    f\"Server returned a non-pending state {state.type.value!r}\"\n                ),\n            )\n            return False\n\n        return True\n\n    async def _propose_failed_state(self, flow_run: \"FlowRun\", exc: Exception) -> None:\n        run_logger = self.get_flow_run_logger(flow_run)\n        try:\n            await propose_state(\n                self._client,\n                await exception_to_failed_state(message=\"Submission failed.\", exc=exc),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # We've already failed, no need to note the abort but we don't want it to\n            # raise in the agent process\n            pass\n        except Exception:\n            run_logger.error(\n                f\"Failed to update state of flow run '{flow_run.id}'\",\n                exc_info=True,\n            )\n\n    async def _propose_crashed_state(self, flow_run: \"FlowRun\", message: str) -> None:\n        run_logger = self.get_flow_run_logger(flow_run)\n        try:\n            state = await propose_state(\n                self._client,\n                Crashed(message=message),\n                flow_run_id=flow_run.id,\n            )\n        except Abort:\n            # Flow run already marked as failed\n            pass\n        except Exception:\n            run_logger.exception(f\"Failed to update state of flow run '{flow_run.id}'\")\n        else:\n            if state.is_crashed():\n                run_logger.info(\n                    f\"Reported flow run '{flow_run.id}' as crashed: {message}\"\n                )\n\n    async def _mark_flow_run_as_cancelled(\n        self, flow_run: \"FlowRun\", state_updates: Optional[dict] = None\n    ) -> None:\n        state_updates = state_updates or {}\n        state_updates.setdefault(\"name\", \"Cancelled\")\n        state_updates.setdefault(\"type\", StateType.CANCELLED)\n        state = flow_run.state.copy(update=state_updates)\n\n        await self._client.set_flow_run_state(flow_run.id, state, force=True)\n\n        # Do not remove the flow run from the cancelling set immediately because\n        # the API caches responses for the `read_flow_runs` and we do not want to\n        # duplicate cancellations.\n        await self._schedule_task(\n            60 * 10, self._cancelling_flow_run_ids.remove, flow_run.id\n        )\n\n    async def _set_work_pool_template(self, work_pool, job_template):\n\"\"\"Updates the `base_job_template` for the worker's work pool server side.\"\"\"\n        await self._client.update_work_pool(\n            work_pool_name=work_pool.name,\n            work_pool=WorkPoolUpdate(\n                base_job_template=job_template,\n            ),\n        )\n\n    async def _schedule_task(self, __in_seconds: int, fn, *args, **kwargs):\n\"\"\"\n        Schedule a background task to start after some time.\n\n        These tasks will be run immediately when the worker exits instead of waiting.\n\n        The function may be async or sync. 
Async functions will be awaited.\n        \"\"\"\n\n        async def wrapper(task_status):\n            # If we are shutting down, do not sleep; otherwise sleep until the scheduled\n            # time or shutdown\n            if self.is_setup:\n                with anyio.CancelScope() as scope:\n                    self._scheduled_task_scopes.add(scope)\n                    task_status.started()\n                    await anyio.sleep(__in_seconds)\n\n                self._scheduled_task_scopes.remove(scope)\n            else:\n                task_status.started()\n\n            result = fn(*args, **kwargs)\n            if inspect.iscoroutine(result):\n                await result\n\n        await self._runs_task_group.start(wrapper)\n\n    async def __aenter__(self):\n        self._logger.debug(\"Entering worker context...\")\n        await self.setup()\n        return self\n\n    async def __aexit__(self, *exc_info):\n        self._logger.debug(\"Exiting worker context...\")\n        await self.teardown(*exc_info)\n\n    def __repr__(self):\n        return f\"Worker(pool={self._work_pool_name!r}, name={self.name!r})\"\n\n    def _event_resource(self):\n        return {\n            \"prefect.resource.id\": f\"prefect.worker.{self.type}.{self.get_name_slug()}\",\n            \"prefect.resource.name\": self.name,\n            \"prefect.version\": prefect.__version__,\n            \"prefect.worker-type\": self.type,\n        }\n\n    def _event_related_resources(\n        self,\n        configuration: Optional[BaseJobConfiguration] = None,\n        include_self: bool = False,\n    ) -> List[RelatedResource]:\n        related = []\n        if configuration:\n            related += configuration._related_resources()\n\n        if self._work_pool:\n            related.append(\n                object_as_related_resource(\n                    kind=\"work-pool\", role=\"work-pool\", object=self._work_pool\n                )\n            )\n\n        if include_self:\n            worker_resource = self._event_resource()\n            worker_resource[\"prefect.resource.role\"] = \"worker\"\n            related.append(RelatedResource(__root__=worker_resource))\n\n        return related\n\n    def _emit_flow_run_submitted_event(\n        self, configuration: BaseJobConfiguration\n    ) -> Event:\n        return emit_event(\n            event=\"prefect.worker.submitted-flow-run\",\n            resource=self._event_resource(),\n            related=self._event_related_resources(configuration=configuration),\n        )\n\n    def _emit_flow_run_executed_event(\n        self,\n        result: BaseWorkerResult,\n        configuration: BaseJobConfiguration,\n        submitted_event: Event,\n    ):\n        related = self._event_related_resources(configuration=configuration)\n\n        for resource in related:\n            if resource.role == \"flow-run\":\n                resource[\"prefect.infrastructure.identifier\"] = str(result.identifier)\n                resource[\"prefect.infrastructure.status-code\"] = str(result.status_code)\n\n        emit_event(\n            event=\"prefect.worker.executed-flow-run\",\n            resource=self._event_resource(),\n            related=related,\n            follows=submitted_event,\n        )\n\n    async def _emit_worker_started_event(self) -> Event:\n        return emit_event(\n            \"prefect.worker.started\",\n            resource=self._event_resource(),\n            related=self._event_related_resources(),\n        )\n\n    async def 
_emit_worker_stopped_event(self, started_event: Event):\n        emit_event(\n            \"prefect.worker.stopped\",\n            resource=self._event_resource(),\n            related=self._event_related_resources(),\n            follows=started_event,\n        )\n\n    def _emit_flow_run_cancelled_event(\n        self, flow_run: \"FlowRun\", configuration: BaseJobConfiguration\n    ):\n        related = self._event_related_resources(configuration=configuration)\n\n        for resource in related:\n            if resource.role == \"flow-run\":\n                resource[\"prefect.infrastructure.identifier\"] = str(\n                    flow_run.infrastructure_pid\n                )\n\n        emit_event(\n            event=\"prefect.worker.cancelled-flow-run\",\n            resource=self._event_resource(),\n            related=related,\n        )\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.get_all_available_worker_types","title":"get_all_available_worker_types staticmethod","text":"

Returns all worker types available in the local registry.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
@staticmethod\ndef get_all_available_worker_types() -> List[str]:\n\"\"\"\n    Returns all worker types available in the local registry.\n    \"\"\"\n    load_prefect_collections()\n    worker_registry = get_registry_for_type(BaseWorker)\n    if worker_registry is not None:\n        return list(worker_registry.keys())\n    return []\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.get_status","title":"get_status","text":"

Retrieves the status of the current worker including its name, current worker pool, the work pool queues it is polling, and its local settings.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
def get_status(self):\n\"\"\"\n    Retrieves the status of the current worker including its name, current worker\n    pool, the work pool queues it is polling, and its local settings.\n    \"\"\"\n    return {\n        \"name\": self.name,\n        \"work_pool\": (\n            self._work_pool.dict(json_compatible=True)\n            if self._work_pool is not None\n            else None\n        ),\n        \"settings\": {\n            \"prefetch_seconds\": self._prefetch_seconds,\n        },\n    }\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.get_worker_class_from_type","title":"get_worker_class_from_type staticmethod","text":"

Returns the worker class for a given worker type. If the worker type is not recognized, returns None.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
@staticmethod\ndef get_worker_class_from_type(type: str) -> Optional[Type[\"BaseWorker\"]]:\n\"\"\"\n    Returns the worker class for a given worker type. If the worker type\n    is not recognized, returns None.\n    \"\"\"\n    load_prefect_collections()\n    worker_registry = get_registry_for_type(BaseWorker)\n    if worker_registry is not None:\n        return worker_registry.get(type)\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.is_worker_still_polling","title":"is_worker_still_polling","text":"

This method is invoked by a webserver healthcheck handler and returns a boolean indicating if the worker has recorded a scheduled flow run poll within a variable amount of time.

The query_interval_seconds is the same value that is used by the loop services - we will evaluate if the _last_polled_time was within that interval x 30 (so 10s -> 5m)

The instance property self._last_polled_time is currently set/updated in get_and_submit_flow_runs()

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
def is_worker_still_polling(self, query_interval_seconds: int) -> bool:\n\"\"\"\n    This method is invoked by a webserver healthcheck handler\n    and returns a boolean indicating if the worker has recorded a\n    scheduled flow run poll within a variable amount of time.\n\n    The `query_interval_seconds` is the same value that is used by\n    the loop services - we will evaluate if the _last_polled_time\n    was within that interval x 30 (so 10s -> 5m)\n\n    The instance property `self._last_polled_time`\n    is currently set/updated in `get_and_submit_flow_runs()`\n    \"\"\"\n    threshold_seconds = query_interval_seconds * 30\n\n    seconds_since_last_poll = (\n        pendulum.now(\"utc\") - self._last_polled_time\n    ).in_seconds()\n\n    is_still_polling = seconds_since_last_poll <= threshold_seconds\n\n    if not is_still_polling:\n        self._logger.error(\n            f\"Worker has not polled in the last {seconds_since_last_poll} seconds \"\n            \"and should be restarted\"\n        )\n\n    return is_still_polling\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.kill_infrastructure","title":"kill_infrastructure async","text":"

Method for killing infrastructure created by a worker. Should be implemented by individual workers if they support killing infrastructure.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
async def kill_infrastructure(\n    self,\n    infrastructure_pid: str,\n    configuration: BaseJobConfiguration,\n    grace_seconds: int = 30,\n):\n\"\"\"\n    Method for killing infrastructure created by a worker. Should be implemented by\n    individual workers if they support killing infrastructure.\n    \"\"\"\n    raise NotImplementedError(\n        \"This worker does not support killing infrastructure.\"\n    )\n
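For example, a subclass that tracks an operating-system process id could override this method along the following lines. This is a hypothetical worker type shown only as a sketch under that assumption; it is not the implementation of any shipped worker:

import os\nimport signal\n\nfrom prefect.workers.base import BaseJobConfiguration, BaseWorker\n\n\nclass MyWorker(BaseWorker):\n    type = \"my-worker\"  # hypothetical worker type\n\n    async def run(self, flow_run, configuration, task_status=None):\n        ...  # start the infrastructure and return a BaseWorkerResult\n\n    async def kill_infrastructure(\n        self,\n        infrastructure_pid: str,\n        configuration: BaseJobConfiguration,\n        grace_seconds: int = 30,\n    ):\n        # Assumes the stored infrastructure pid is an OS process id.\n        os.kill(int(infrastructure_pid), signal.SIGTERM)\n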
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.run","title":"run abstractmethod async","text":"

Runs a given flow run on the current worker.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
@abc.abstractmethod\nasync def run(\n    self,\n    flow_run: \"FlowRun\",\n    configuration: BaseJobConfiguration,\n    task_status: Optional[anyio.abc.TaskStatus] = None,\n) -> BaseWorkerResult:\n\"\"\"\n    Runs a given flow run on the current worker.\n    \"\"\"\n    raise NotImplementedError(\n        \"Workers must implement a method for running submitted flow runs\"\n    )\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.setup","title":"setup async","text":"

Prepares the worker to run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
async def setup(self):\n\"\"\"Prepares the worker to run.\"\"\"\n    self._logger.debug(\"Setting up worker...\")\n    self._runs_task_group = anyio.create_task_group()\n    self._limiter = (\n        anyio.CapacityLimiter(self._limit) if self._limit is not None else None\n    )\n    self._client = get_client()\n    await self._client.__aenter__()\n    await self._runs_task_group.__aenter__()\n\n    self.is_setup = True\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.sync_with_backend","title":"sync_with_backend async","text":"

Updates the worker's local information about its current work pool and queues. Sends a worker heartbeat to the API.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
async def sync_with_backend(self):\n\"\"\"\n    Updates the worker's local information about it's current work pool and\n    queues. Sends a worker heartbeat to the API.\n    \"\"\"\n    await self._update_local_work_pool_info()\n\n    await self._send_worker_heartbeat()\n\n    self._logger.debug(\"Worker synchronized with the Prefect API server.\")\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/base/#prefect.workers.base.BaseWorker.teardown","title":"teardown async","text":"

Cleans up resources after the worker is stopped.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/base.py
async def teardown(self, *exc_info):\n\"\"\"Cleans up resources after the worker is stopped.\"\"\"\n    self._logger.debug(\"Tearing down worker...\")\n    self.is_setup = False\n    for scope in self._scheduled_task_scopes:\n        scope.cancel()\n    if self._runs_task_group:\n        await self._runs_task_group.__aexit__(*exc_info)\n    if self._client:\n        await self._client.__aexit__(*exc_info)\n    self._runs_task_group = None\n    self._client = None\n
","tags":["Python API","workers"]},{"location":"api-ref/prefect/workers/process/","title":"prefect.workers.process","text":"","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process","title":"prefect.workers.process","text":"

Module containing the Process worker used for executing flow runs as subprocesses.

Note this module is in beta. The interfaces within may change without notice.

To start a Process worker, run the following command:

prefect worker start --pool 'my-work-pool' --type process\n

Replace my-work-pool with the name of the work pool you want the worker to poll for flow runs.

For more information about work pools and workers, check out the Prefect docs.

","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process.ProcessJobConfiguration","title":"ProcessJobConfiguration","text":"

Bases: BaseJobConfiguration

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/process.py
class ProcessJobConfiguration(BaseJobConfiguration):\n    stream_output: bool = Field(default=True)\n    working_dir: Optional[Path] = Field(default=None)\n\n    @validator(\"working_dir\")\n    def validate_command(cls, v):\n\"\"\"Make sure that the working directory is formatted for the current platform.\"\"\"\n        if v:\n            return relative_path_to_current_platform(v)\n        return v\n\n    def prepare_for_flow_run(\n        self,\n        flow_run: \"FlowRun\",\n        deployment: Optional[\"DeploymentResponse\"] = None,\n        flow: Optional[\"Flow\"] = None,\n    ):\n        super().prepare_for_flow_run(flow_run, deployment, flow)\n\n        self.env = {**os.environ, **self.env}\n        self.command = (\n            f\"{sys.executable} -m prefect.engine\"\n            if self.command == self._base_flow_run_command()\n            else self.command\n        )\n
","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process.ProcessJobConfiguration.validate_command","title":"validate_command","text":"

Make sure that the working directory is formatted for the current platform.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/process.py
@validator(\"working_dir\")\ndef validate_command(cls, v):\n\"\"\"Make sure that the working directory is formatted for the current platform.\"\"\"\n    if v:\n        return relative_path_to_current_platform(v)\n    return v\n
","tags":["Python API","workers","process"]},{"location":"api-ref/prefect/workers/process/#prefect.workers.process.ProcessWorkerResult","title":"ProcessWorkerResult","text":"

Bases: BaseWorkerResult

Contains information about the final state of a completed process

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/workers/process.py
class ProcessWorkerResult(BaseWorkerResult):\n\"\"\"Contains information about the final state of a completed process\"\"\"\n
","tags":["Python API","workers","process"]},{"location":"api-ref/python/","title":"Python SDK API","text":"

The Prefect Python SDK API is used to build, test, and execute workflows against the Prefect orchestration engine. This is the primary user-facing API.

Select links in the left navigation menu to explore.

","tags":["API","Python SDK"]},{"location":"api-ref/rest-api/","title":"REST API","text":"

The Prefect REST API is used for communicating data from clients to a Prefect server so that orchestration can be performed. This API is consumed by clients such as the Prefect Python SDK or the server dashboard.

Prefect Cloud and a locally hosted Prefect server each provide a REST API.

","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#interacting-with-the-rest-api","title":"Interacting with the REST API","text":"

You have many options to interact with the Prefect REST API:

","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#prefectclient-with-a-prefect-server","title":"PrefectClient with a Prefect server","text":"

This example uses PrefectClient with a locally hosted Prefect server:

import asyncio\nfrom prefect.client import get_client\n\nasync def get_flows():\n    client = get_client()\n    r = await client.read_flows(limit=5)\n    return r\n\nif __name__ == \"__main__\":\n    flows = asyncio.run(get_flows())\n\n    for flow in flows:\n        print(flow.name, flow.id)\n

Output:

cat-facts 58ed68b1-0201-4f37-adef-0ea24bd2a022\ndog-facts e7c0403d-44e7-45cf-a6c8-79117b7f3766\nsloth-facts 771c0574-f5bf-4f59-a69d-3be3e061a62d\ncapybara-facts fbadaf8b-584f-48b9-b092-07d351edd424\nlemur-facts 53f710e7-3b0f-4b2f-ab6b-44934111818c\n
","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#requests-with-prefect","title":"Requests with Prefect","text":"

This example uses the Requests library with Prefect Cloud to return the five newest artifacts.

import requests\n\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/abc-my-cloud-account-id-is-here/workspaces/123-my-workspace-id-is-here\"\nPREFECT_API_KEY=\"123abc_my_api_key_goes_here\"\ndata = {\n    \"sort\": \"CREATED_DESC\",\n    \"limit\": 5,\n    \"artifacts\": {\n        \"key\": {\n            \"exists_\": True\n        }\n    }\n}\n\nheaders = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\nendpoint = f\"{PREFECT_API_URL}/artifacts/filter\"\n\nresponse = requests.post(endpoint, headers=headers, json=data)\nassert response.status_code == 200\nfor artifact in response.json():\n    print(artifact)\n
","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#curl-with-prefect-cloud","title":"curl with Prefect Cloud","text":"

This example uses curl with Prefect Cloud to create a flow run:

ACCOUNT_ID=\"abc-my-cloud-account-id-goes-here\"\nWORKSPACE_ID=\"123-my-workspace-id-goes-here\"\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/$ACCOUNT_ID/workspaces/$WORKSPACE_ID\"\nPREFECT_API_KEY=\"123abc_my_api_key_goes_here\"\nDEPLOYMENT_ID=\"my_deployment_id\"\n\ncurl --location --request POST \"$PREFECT_API_URL/deployments/$DEPLOYMENT_ID/create_flow_run\" \\\n--header \"Content-Type: application/json\" \\\n--header \"Authorization: Bearer $PREFECT_API_KEY\" \\\n--header \"X-PREFECT-API-VERSION: 0.8.4\" \\\n--data-raw \"{}\"\n

Note that in this example --data-raw \"{}\" is required and is where you can specify other aspects of the flow run, such as the state. Windows users should substitute ^ for \\ in multi-line commands.

","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#finding-your-prefect-cloud-details","title":"Finding your Prefect Cloud details","text":"

When working with the Prefect Cloud REST API you will need your Account ID and often the Workspace ID for the workspace you want to interact with. You can find both IDs for a Prefect profile in the CLI with prefect profile inspect my_profile. This command will also display your Prefect API key, as shown below:

PREFECT_API_URL='https://api.prefect.cloud/api/accounts/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here'\nPREFECT_API_KEY='123abc_my_api_key_is_here'\n

Alternatively, view your Account ID and Workspace ID in your browser URL. For example: https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here.
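If you prefer to assemble the API URL programmatically, the two IDs slot into a fixed URL pattern. A minimal sketch (the IDs below are placeholders):

account_id = \"abc-my-account-id-is-here\"\nworkspace_id = \"123-my-workspace-id-is-here\"\n\nPREFECT_API_URL = (\n    f\"https://api.prefect.cloud/api/accounts/{account_id}/workspaces/{workspace_id}\"\n)\nprint(PREFECT_API_URL)\n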

","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#rest-guidelines","title":"REST Guidelines","text":"

The REST APIs adhere to the following guidelines:

","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#http-verbs","title":"HTTP verbs","text":"","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#filtering","title":"Filtering","text":"

Objects can be filtered by providing filter criteria in the body of a POST request. When multiple criteria are specified, logical AND will be applied to the criteria.

Filter criteria are structured as follows:

{\n\"objects\": {\n\"object_field\": {\n\"field_operator_\": <field_value>\n}\n}\n}\n

In this example, objects is the name of the collection to filter over (for example, flows). The collection can be either the object being queried for (flows for POST /flows/filter) or a related object (flow_runs for POST /flows/filter).

object_field is the name of the field over which to filter (name for flows). Note that some objects may have nested object fields, such as {flow_run: {state: {type: {any_: []}}}}.

field_operator_ is the operator to apply to a field when filtering. Common examples include:

For example, to query for flows with the tag \"database\" and failed flow runs, POST /flows/filter with the following request body:

{\n\"flows\": {\n\"tags\": {\n\"all_\": [\"database\"]\n}\n},\n\"flow_runs\": {\n\"state\": {\n\"type\": {\n\"any_\": [\"FAILED\"]\n}\n}\n}\n}\n
","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/rest-api/#openapi","title":"OpenAPI","text":"

The Prefect REST API can be fully described with an OpenAPI 3.0 compliant document. OpenAPI is a standard specification for describing REST APIs.

To generate the Prefect server's complete OpenAPI document, run the following commands in an interactive Python session:

from prefect.server.api.server import create_app\n\napp = create_app()\nopenapi_doc = app.openapi()\n

This document allows you to generate your own API client, explore the API using an API inspection tool, or write tests to ensure API compliance.
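For example, the generated document can be written to disk for use with such tooling (a minimal sketch; the output filename is arbitrary):

import json\n\nfrom prefect.server.api.server import create_app\n\napp = create_app()\nopenapi_doc = app.openapi()\n\nwith open(\"prefect-openapi.json\", \"w\") as f:\n    json.dump(openapi_doc, f, indent=2)\n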

","tags":["REST API","Prefect Cloud","Prefect server","curl","PrefectClient","Requests","API reference"]},{"location":"api-ref/server/","title":"Server API","text":"

The Prefect server API is used by the server to work with workflow metadata and enforce orchestration logic. This API is primarily used by Prefect developers.

Select links in the left navigation menu to explore.

","tags":["API","Server API"]},{"location":"api-ref/server/api/admin/","title":"server.api.admin","text":"","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin","title":"prefect.server.api.admin","text":"

Routes for admin-level interactions with the Prefect REST API.

","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.clear_database","title":"clear_database async","text":"

Clear all database tables without dropping them.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/admin.py
@router.post(\"/database/clear\", status_code=status.HTTP_204_NO_CONTENT)\nasync def clear_database(\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    confirm: bool = Body(\n        False,\n        embed=True,\n        description=\"Pass confirm=True to confirm you want to modify the database.\",\n    ),\n    response: Response = None,\n):\n\"\"\"Clear all database tables without dropping them.\"\"\"\n    if not confirm:\n        response.status_code = status.HTTP_400_BAD_REQUEST\n        return\n    async with db.session_context(begin_transaction=True) as session:\n        # work pool has a circular dependency on pool queue; delete it first\n        await session.execute(db.WorkPool.__table__.delete())\n        for table in reversed(db.Base.metadata.sorted_tables):\n            await session.execute(table.delete())\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.create_database","title":"create_database async","text":"

Create all database objects.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/admin.py
@router.post(\"/database/create\", status_code=status.HTTP_204_NO_CONTENT)\nasync def create_database(\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    confirm: bool = Body(\n        False,\n        embed=True,\n        description=\"Pass confirm=True to confirm you want to modify the database.\",\n    ),\n    response: Response = None,\n):\n\"\"\"Create all database objects.\"\"\"\n    if not confirm:\n        response.status_code = status.HTTP_400_BAD_REQUEST\n        return\n\n    await db.create_db()\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.drop_database","title":"drop_database async","text":"

Drop all database objects.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/admin.py
@router.post(\"/database/drop\", status_code=status.HTTP_204_NO_CONTENT)\nasync def drop_database(\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    confirm: bool = Body(\n        False,\n        embed=True,\n        description=\"Pass confirm=True to confirm you want to modify the database.\",\n    ),\n    response: Response = None,\n):\n\"\"\"Drop all database objects.\"\"\"\n    if not confirm:\n        response.status_code = status.HTTP_400_BAD_REQUEST\n        return\n\n    await db.drop_db()\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.read_settings","title":"read_settings async","text":"

Get the current Prefect REST API settings.

Secret setting values will be obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/admin.py
@router.get(\"/settings\")\nasync def read_settings() -> prefect.settings.Settings:\n\"\"\"\n    Get the current Prefect REST API settings.\n\n    Secret setting values will be obfuscated.\n    \"\"\"\n    return prefect.settings.get_current_settings().with_obfuscated_secrets()\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/admin/#prefect.server.api.admin.read_version","title":"read_version async","text":"

Returns the Prefect version number

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/admin.py
@router.get(\"/version\")\nasync def read_version() -> str:\n\"\"\"Returns the Prefect version number\"\"\"\n    return prefect.__version__\n
","tags":["Prefect API","administration"]},{"location":"api-ref/server/api/dependencies/","title":"server.api.dependencies","text":"","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies","title":"prefect.server.api.dependencies","text":"

Utilities for injecting FastAPI dependencies.

","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies.EnforceMinimumAPIVersion","title":"EnforceMinimumAPIVersion","text":"

FastAPI Dependency used to check compatibility between the version of the api and a given request.

Looks for the header 'X-PREFECT-API-VERSION' in the request and compares it to the api's version. Rejects requests that are lower than the minimum version.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/dependencies.py
class EnforceMinimumAPIVersion:\n\"\"\"\n    FastAPI Dependency used to check compatibility between the version of the api\n    and a given request.\n\n    Looks for the header 'X-PREFECT-API-VERSION' in the request and compares it\n    to the api's version. Rejects requests that are lower than the minimum version.\n    \"\"\"\n\n    def __init__(self, minimum_api_version: str, logger: logging.Logger):\n        self.minimum_api_version = minimum_api_version\n        versions = [int(v) for v in minimum_api_version.split(\".\")]\n        self.api_major = versions[0]\n        self.api_minor = versions[1]\n        self.api_patch = versions[2]\n        self.logger = logger\n\n    async def __call__(\n        self,\n        x_prefect_api_version: str = Header(None),\n    ):\n        request_version = x_prefect_api_version\n\n        # if no version header, assume latest and continue\n        if not request_version:\n            return\n\n        # parse version\n        try:\n            major, minor, patch = [int(v) for v in request_version.split(\".\")]\n        except ValueError:\n            await self._notify_of_invalid_value(request_version)\n            raise HTTPException(\n                status_code=status.HTTP_400_BAD_REQUEST,\n                detail=(\n                    \"Invalid X-PREFECT-API-VERSION header format.\"\n                    f\"Expected header in format 'x.y.z' but received {request_version}\"\n                ),\n            )\n\n        if (major, minor, patch) < (self.api_major, self.api_minor, self.api_patch):\n            await self._notify_of_outdated_version(request_version)\n            raise HTTPException(\n                status_code=status.HTTP_400_BAD_REQUEST,\n                detail=(\n                    f\"The request specified API version {request_version} but this \"\n                    f\"server requires version {self.minimum_api_version} or higher.\"\n                ),\n            )\n\n    async def _notify_of_invalid_value(self, request_version: str):\n        self.logger.error(\n            f\"Invalid X-PREFECT-API-VERSION header format: '{request_version}'\"\n        )\n\n    async def _notify_of_outdated_version(self, request_version: str):\n        self.logger.error(\n            f\"X-PREFECT-API-VERSION header specifies version '{request_version}' \"\n            f\"but minimum allowed version is '{self.minimum_api_version}'\"\n        )\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/dependencies/#prefect.server.api.dependencies.LimitBody","title":"LimitBody","text":"

A fastapi.Depends factory for pulling a limit: int parameter from the request body while determining the default from the current settings.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/dependencies.py
def LimitBody() -> Depends:\n\"\"\"\n    A `fastapi.Depends` factory for pulling a `limit: int` parameter from the\n    request body while determing the default from the current settings.\n    \"\"\"\n\n    def get_limit(\n        limit: int = Body(\n            None,\n            description=\"Defaults to PREFECT_API_DEFAULT_LIMIT if not provided.\",\n        )\n    ):\n        default_limit = PREFECT_API_DEFAULT_LIMIT.value()\n        limit = limit if limit is not None else default_limit\n        if not limit >= 0:\n            raise HTTPException(\n                status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n                detail=\"Invalid limit: must be greater than or equal to 0.\",\n            )\n        if limit > default_limit:\n            raise HTTPException(\n                status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n                detail=f\"Invalid limit: must be less than or equal to {default_limit}.\",\n            )\n        return limit\n\n    return Depends(get_limit)\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/deployments/","title":"server.api.deployments","text":"","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments","title":"prefect.server.api.deployments","text":"

Routes for interacting with Deployment objects.

","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.count_deployments","title":"count_deployments async","text":"

Count deployments.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.post(\"/count\")\nasync def count_deployments(\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_pool_queues: schemas.filters.WorkQueueFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> int:\n\"\"\"\n    Count deployments.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.deployments.count_deployments(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n        )\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.create_deployment","title":"create_deployment async","text":"

Gracefully creates a new deployment from the provided schema. If a deployment with the same name and flow_id already exists, the deployment is updated.

If the deployment has an active schedule, flow runs will be scheduled. When upserting, any scheduled runs from the existing deployment will be deleted.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.post(\"/\")\nasync def create_deployment(\n    deployment: schemas.actions.DeploymentCreate,\n    response: Response,\n    worker_lookups: WorkerLookups = Depends(WorkerLookups),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.DeploymentResponse:\n\"\"\"\n    Gracefully creates a new deployment from the provided schema. If a deployment with\n    the same name and flow_id already exists, the deployment is updated.\n\n    If the deployment has an active schedule, flow runs will be scheduled.\n    When upserting, any scheduled runs from the existing deployment will be deleted.\n    \"\"\"\n\n    async with db.session_context(begin_transaction=True) as session:\n        if (\n            deployment.work_pool_name\n            and deployment.work_pool_name != DEFAULT_AGENT_WORK_POOL_NAME\n        ):\n            # Make sure that deployment is valid before beginning creation process\n            work_pool = await models.workers.read_work_pool_by_name(\n                session=session, work_pool_name=deployment.work_pool_name\n            )\n            if work_pool is None:\n                raise HTTPException(\n                    status_code=status.HTTP_404_NOT_FOUND,\n                    detail=f'Work pool \"{deployment.work_pool_name}\" not found.',\n                )\n            try:\n                deployment.check_valid_configuration(work_pool.base_job_template)\n            except (MissingVariableError, jsonschema.exceptions.ValidationError) as exc:\n                raise HTTPException(\n                    status_code=status.HTTP_409_CONFLICT,\n                    detail=f\"Error creating deployment: {exc!r}\",\n                )\n\n        # hydrate the input model into a full model\n        deployment_dict = deployment.dict(exclude={\"work_pool_name\"})\n        if deployment.work_pool_name and deployment.work_queue_name:\n            # If a specific pool name/queue name combination was provided, get the\n            # ID for that work pool queue.\n            deployment_dict[\"work_queue_id\"] = (\n                await worker_lookups._get_work_queue_id_from_name(\n                    session=session,\n                    work_pool_name=deployment.work_pool_name,\n                    work_queue_name=deployment.work_queue_name,\n                    create_queue_if_not_found=True,\n                )\n            )\n        elif deployment.work_pool_name:\n            # If just a pool name was provided, get the ID for its default\n            # work pool queue.\n            deployment_dict[\"work_queue_id\"] = (\n                await worker_lookups._get_default_work_queue_id_from_work_pool_name(\n                    session=session,\n                    work_pool_name=deployment.work_pool_name,\n                )\n            )\n        elif deployment.work_queue_name:\n            # If just a queue name was provided, ensure that the queue exists and\n            # get its ID.\n            work_queue = await models.work_queues._ensure_work_queue_exists(\n                session=session, name=deployment.work_queue_name\n            )\n            deployment_dict[\"work_queue_id\"] = work_queue.id\n\n        deployment = schemas.core.Deployment(**deployment_dict)\n        # check to see if relevant blocks exist, allowing us throw a useful error message\n        # for debugging\n        if deployment.infrastructure_document_id is not None:\n            infrastructure_block = (\n                await 
models.block_documents.read_block_document_by_id(\n                    session=session,\n                    block_document_id=deployment.infrastructure_document_id,\n                )\n            )\n            if not infrastructure_block:\n                raise HTTPException(\n                    status_code=status.HTTP_409_CONFLICT,\n                    detail=(\n                        \"Error creating deployment. Could not find infrastructure\"\n                        f\" block with id: {deployment.infrastructure_document_id}. This\"\n                        \" usually occurs when applying a deployment specification that\"\n                        \" was built against a different Prefect database / workspace.\"\n                    ),\n                )\n\n        if deployment.storage_document_id is not None:\n            infrastructure_block = (\n                await models.block_documents.read_block_document_by_id(\n                    session=session,\n                    block_document_id=deployment.storage_document_id,\n                )\n            )\n            if not infrastructure_block:\n                raise HTTPException(\n                    status_code=status.HTTP_409_CONFLICT,\n                    detail=(\n                        \"Error creating deployment. Could not find storage block with\"\n                        f\" id: {deployment.storage_document_id}. This usually occurs\"\n                        \" when applying a deployment specification that was built\"\n                        \" against a different Prefect database / workspace.\"\n                    ),\n                )\n\n        now = pendulum.now(\"UTC\")\n        model = await models.deployments.create_deployment(\n            session=session, deployment=deployment\n        )\n\n        if model.created >= now:\n            response.status_code = status.HTTP_201_CREATED\n\n        return schemas.responses.DeploymentResponse.from_orm(model)\n
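As a hedged illustration only: the snippet below registers a flow and then upserts a deployment for it over the REST API with httpx, assuming a local Prefect server whose flows and deployments routers are mounted at /api/flows and /api/deployments; the flow name, deployment name, and work queue are placeholders.

import httpx

API = 'http://127.0.0.1:4200/api'

# a deployment must reference an existing flow, so register (or fetch) the flow first
flow = httpx.post(f'{API}/flows/', json={'name': 'etl-pipeline'}).json()

# posting again with the same name and flow_id updates the existing deployment
resp = httpx.post(
    f'{API}/deployments/',
    json={'name': 'nightly', 'flow_id': flow['id'], 'work_queue_name': 'default'},
)
resp.raise_for_status()
print(resp.status_code)  # 201 when created, 200 when an existing deployment was updated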
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.create_flow_run_from_deployment","title":"create_flow_run_from_deployment async","text":"

Create a flow run from a deployment.

Any parameters not provided will be inferred from the deployment's parameters. If tags are not provided, the deployment's tags will be used.

If no state is provided, the flow run will be created in a SCHEDULED state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.post(\"/{id}/create_flow_run\")\nasync def create_flow_run_from_deployment(\n    flow_run: schemas.actions.DeploymentFlowRunCreate,\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    worker_lookups: WorkerLookups = Depends(WorkerLookups),\n    response: Response = None,\n) -> schemas.responses.FlowRunResponse:\n\"\"\"\n    Create a flow run from a deployment.\n\n    Any parameters not provided will be inferred from the deployment's parameters.\n    If tags are not provided, the deployment's tags will be used.\n\n    If no state is provided, the flow run will be created in a SCHEDULED state.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        # get relevant info from the deployment\n        deployment = await models.deployments.read_deployment(\n            session=session, deployment_id=deployment_id\n        )\n\n        if not deployment:\n            raise HTTPException(\n                status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n\n        parameters = deployment.parameters\n        parameters.update(flow_run.parameters or {})\n\n        work_queue_name = deployment.work_queue_name\n        work_queue_id = deployment.work_queue_id\n\n        if flow_run.work_queue_name:\n            # cant mutate the ORM model or else it will commit the changes back\n            work_queue_id = await worker_lookups._get_work_queue_id_from_name(\n                session=session,\n                work_pool_name=deployment.work_queue.work_pool.name,\n                work_queue_name=flow_run.work_queue_name,\n                create_queue_if_not_found=True,\n            )\n            work_queue_name = flow_run.work_queue_name\n\n        # hydrate the input model into a full flow run / state model\n        flow_run = schemas.core.FlowRun(\n            **flow_run.dict(\n                exclude={\n                    \"parameters\",\n                    \"tags\",\n                    \"infrastructure_document_id\",\n                    \"work_queue_name\",\n                }\n            ),\n            flow_id=deployment.flow_id,\n            deployment_id=deployment.id,\n            parameters=parameters,\n            tags=set(deployment.tags).union(flow_run.tags),\n            infrastructure_document_id=(\n                flow_run.infrastructure_document_id\n                or deployment.infrastructure_document_id\n            ),\n            work_queue_name=work_queue_name,\n            work_queue_id=work_queue_id,\n        )\n\n        if not flow_run.state:\n            flow_run.state = schemas.states.Scheduled()\n\n        now = pendulum.now(\"UTC\")\n        model = await models.flow_runs.create_flow_run(\n            session=session, flow_run=flow_run\n        )\n        if model.created >= now:\n            response.status_code = status.HTTP_201_CREATED\n        return schemas.responses.FlowRunResponse.from_orm(model)\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.delete_deployment","title":"delete_deployment async","text":"

Delete a deployment by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n\"\"\"\n    Delete a deployment by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.deployments.delete_deployment(\n            session=session, deployment_id=deployment_id\n        )\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n        )\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.read_deployment","title":"read_deployment async","text":"

Get a deployment by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.get(\"/{id}\")\nasync def read_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.DeploymentResponse:\n\"\"\"\n    Get a deployment by id.\n    \"\"\"\n    async with db.session_context() as session:\n        deployment = await models.deployments.read_deployment(\n            session=session, deployment_id=deployment_id\n        )\n        if not deployment:\n            raise HTTPException(\n                status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n        return schemas.responses.DeploymentResponse.from_orm(deployment)\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.read_deployment_by_name","title":"read_deployment_by_name async","text":"

Get a deployment using the name of the flow and the deployment.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.get(\"/name/{flow_name}/{deployment_name}\")\nasync def read_deployment_by_name(\n    flow_name: str = Path(..., description=\"The name of the flow\"),\n    deployment_name: str = Path(..., description=\"The name of the deployment\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.DeploymentResponse:\n\"\"\"\n    Get a deployment using the name of the flow and the deployment.\n    \"\"\"\n    async with db.session_context() as session:\n        deployment = await models.deployments.read_deployment_by_name(\n            session=session, name=deployment_name, flow_name=flow_name\n        )\n        if not deployment:\n            raise HTTPException(\n                status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n        return schemas.responses.DeploymentResponse.from_orm(deployment)\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.read_deployments","title":"read_deployments async","text":"

Query for deployments.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.post(\"/filter\")\nasync def read_deployments(\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_pool_queues: schemas.filters.WorkQueueFilter = None,\n    sort: schemas.sorting.DeploymentSort = Body(\n        schemas.sorting.DeploymentSort.NAME_ASC\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.DeploymentResponse]:\n\"\"\"\n    Query for deployments.\n    \"\"\"\n    async with db.session_context() as session:\n        response = await models.deployments.read_deployments(\n            session=session,\n            offset=offset,\n            sort=sort,\n            limit=limit,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n        )\n        return [\n            schemas.responses.DeploymentResponse.from_orm(orm_deployment=deployment)\n            for deployment in response\n        ]\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.schedule_deployment","title":"schedule_deployment async","text":"

Schedule runs for a deployment. For backfills, provide start/end times in the past.

This function will generate the minimum number of runs that satisfy the min and max times, and the min and max counts. Specifically, the following order will be respected.

- Runs will be generated starting on or after the `start_time`\n- No more than `max_runs` runs will be generated\n- No runs will be generated after `end_time` is reached\n- At least `min_runs` runs will be generated\n- Runs will be generated until at least `start_time + min_time` is reached\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.post(\"/{id}/schedule\")\nasync def schedule_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    start_time: DateTimeTZ = Body(None, description=\"The earliest date to schedule\"),\n    end_time: DateTimeTZ = Body(None, description=\"The latest date to schedule\"),\n    min_time: datetime.timedelta = Body(\n        None,\n        description=(\n            \"Runs will be scheduled until at least this long after the `start_time`\"\n        ),\n    ),\n    min_runs: int = Body(None, description=\"The minimum number of runs to schedule\"),\n    max_runs: int = Body(None, description=\"The maximum number of runs to schedule\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> None:\n\"\"\"\n    Schedule runs for a deployment. For backfills, provide start/end times in the past.\n\n    This function will generate the minimum number of runs that satisfy the min\n    and max times, and the min and max counts. Specifically, the following order\n    will be respected.\n\n        - Runs will be generated starting on or after the `start_time`\n        - No more than `max_runs` runs will be generated\n        - No runs will be generated after `end_time` is reached\n        - At least `min_runs` runs will be generated\n        - Runs will be generated until at least `start_time + min_time` is reached\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        await models.deployments.schedule_runs(\n            session=session,\n            deployment_id=deployment_id,\n            start_time=start_time,\n            min_time=min_time,\n            end_time=end_time,\n            min_runs=min_runs,\n            max_runs=max_runs,\n        )\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.set_schedule_active","title":"set_schedule_active async","text":"

Set a deployment schedule to active. Runs will be scheduled immediately.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.post(\"/{id}/set_schedule_active\")\nasync def set_schedule_active(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> None:\n\"\"\"\n    Set a deployment schedule to active. Runs will be scheduled immediately.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        deployment = await models.deployments.read_deployment(\n            session=session, deployment_id=deployment_id\n        )\n        if not deployment:\n            raise HTTPException(\n                status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n        deployment.is_schedule_active = True\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.set_schedule_inactive","title":"set_schedule_inactive async","text":"

Set a deployment schedule to inactive. Any auto-scheduled runs still in a Scheduled state will be deleted.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.post(\"/{id}/set_schedule_inactive\")\nasync def set_schedule_inactive(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> None:\n\"\"\"\n    Set a deployment schedule to inactive. Any auto-scheduled runs still in a Scheduled\n    state will be deleted.\n    \"\"\"\n    async with db.session_context(begin_transaction=False) as session:\n        deployment = await models.deployments.read_deployment(\n            session=session, deployment_id=deployment_id\n        )\n        if not deployment:\n            raise HTTPException(\n                status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n            )\n        deployment.is_schedule_active = False\n        # commit here to make the inactive schedule \"visible\" to the scheduler service\n        await session.commit()\n\n        # delete any auto scheduled runs\n        await models.deployments._delete_scheduled_runs(\n            session=session,\n            deployment_id=deployment_id,\n            db=db,\n            auto_scheduled_only=True,\n        )\n\n        await session.commit()\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/deployments/#prefect.server.api.deployments.work_queue_check_for_deployment","title":"work_queue_check_for_deployment async","text":"

Get a list of work queues that are able to pick up the specified deployment.

This endpoint is intended to be used by the UI to provide users warnings about deployments that are unable to be executed because there are no work queues that will pick up their runs, based on existing filter criteria. It may be deprecated in the future because there is not a strict relationship between work queues and deployments.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/deployments.py
@router.get(\"/{id}/work_queue_check\", deprecated=True)\nasync def work_queue_check_for_deployment(\n    deployment_id: UUID = Path(..., description=\"The deployment id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.WorkQueue]:\n\"\"\"\n    Get list of work-queues that are able to pick up the specified deployment.\n\n    This endpoint is intended to be used by the UI to provide users warnings\n    about deployments that are unable to be executed because there are no work\n    queues that will pick up their runs, based on existing filter criteria. It\n    may be deprecated in the future because there is not a strict relationship\n    between work queues and deployments.\n    \"\"\"\n    try:\n        async with db.session_context() as session:\n            work_queues = await models.deployments.check_work_queues_for_deployment(\n                session=session, deployment_id=deployment_id\n            )\n    except ObjectNotFoundError:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Deployment not found\"\n        )\n    return work_queues\n
","tags":["Prefect API","deployments"]},{"location":"api-ref/server/api/flow_run_states/","title":"server.api.flow_run_states","text":"","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_run_states/#prefect.server.api.flow_run_states","title":"prefect.server.api.flow_run_states","text":"

Routes for interacting with flow run state objects.

","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_run_states/#prefect.server.api.flow_run_states.read_flow_run_state","title":"read_flow_run_state async","text":"

Get a flow run state by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_run_states.py
@router.get(\"/{id}\")\nasync def read_flow_run_state(\n    flow_run_state_id: UUID = Path(\n        ..., description=\"The flow run state id\", alias=\"id\"\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.states.State:\n\"\"\"\n    Get a flow run state by id.\n    \"\"\"\n    async with db.session_context() as session:\n        flow_run_state = await models.flow_run_states.read_flow_run_state(\n            session=session, flow_run_state_id=flow_run_state_id\n        )\n    if not flow_run_state:\n        raise HTTPException(\n            status.HTTP_404_NOT_FOUND, detail=\"Flow run state not found\"\n        )\n    return flow_run_state\n
","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_run_states/#prefect.server.api.flow_run_states.read_flow_run_states","title":"read_flow_run_states async","text":"

Get states associated with a flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_run_states.py
@router.get(\"/\")\nasync def read_flow_run_states(\n    flow_run_id: UUID,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.states.State]:\n\"\"\"\n    Get states associated with a flow run.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flow_run_states.read_flow_run_states(\n            session=session, flow_run_id=flow_run_id\n        )\n
","tags":["Prefect API","flow runs","states"]},{"location":"api-ref/server/api/flow_runs/","title":"server.api.flow_runs","text":"","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs","title":"prefect.server.api.flow_runs","text":"

Routes for interacting with flow run objects.

","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.average_flow_run_lateness","title":"average_flow_run_lateness async","text":"

Query for average flow-run lateness in seconds.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.post(\"/lateness\")\nasync def average_flow_run_lateness(\n    flows: Optional[schemas.filters.FlowFilter] = None,\n    flow_runs: Optional[schemas.filters.FlowRunFilter] = None,\n    task_runs: Optional[schemas.filters.TaskRunFilter] = None,\n    deployments: Optional[schemas.filters.DeploymentFilter] = None,\n    work_pools: Optional[schemas.filters.WorkPoolFilter] = None,\n    work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> Optional[float]:\n\"\"\"\n    Query for average flow-run lateness in seconds.\n    \"\"\"\n    async with db.session_context() as session:\n        if db.dialect.name == \"sqlite\":\n            # Since we want an _average_ of the lateness we're unable to use\n            # the existing FlowRun.expected_start_time_delta property as it\n            # returns a timedelta and SQLite is unable to properly deal with it\n            # and always returns 1970.0 as the average. This copies the same\n            # logic but ensures that it returns the number of seconds instead\n            # so it's compatible with SQLite.\n            base_query = sa.case(\n                (\n                    db.FlowRun.start_time > db.FlowRun.expected_start_time,\n                    sa.func.strftime(\"%s\", db.FlowRun.start_time)\n                    - sa.func.strftime(\"%s\", db.FlowRun.expected_start_time),\n                ),\n                (\n                    db.FlowRun.start_time.is_(None)\n                    & db.FlowRun.state_type.notin_(schemas.states.TERMINAL_STATES)\n                    & (db.FlowRun.expected_start_time < sa.func.datetime(\"now\")),\n                    sa.func.strftime(\"%s\", sa.func.datetime(\"now\"))\n                    - sa.func.strftime(\"%s\", db.FlowRun.expected_start_time),\n                ),\n                else_=0,\n            )\n        else:\n            base_query = db.FlowRun.estimated_start_time_delta\n\n        query = await models.flow_runs._apply_flow_run_filters(\n            sa.select(sa.func.avg(base_query)),\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n        )\n        result = await session.execute(query)\n\n        avg_lateness = result.scalar()\n\n        if avg_lateness is None:\n            return None\n        elif isinstance(avg_lateness, datetime.timedelta):\n            return avg_lateness.total_seconds()\n        else:\n            return avg_lateness\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.count_flow_runs","title":"count_flow_runs async","text":"

Count flow runs matching the provided filters.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.post(\"/count\")\nasync def count_flow_runs(\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_pool_queues: schemas.filters.WorkQueueFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> int:\n\"\"\"\n    Query for flow runs.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flow_runs.count_flow_runs(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n        )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.create_flow_run","title":"create_flow_run async","text":"

Create a flow run. If a flow run with the same flow_id and idempotency key already exists, the existing flow run will be returned.

If no state is provided, the flow run will be created in a PENDING state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.post(\"/\")\nasync def create_flow_run(\n    flow_run: schemas.actions.FlowRunCreate,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    response: Response = None,\n    orchestration_parameters: dict = Depends(\n        orchestration_dependencies.provide_flow_orchestration_parameters\n    ),\n    api_version=Depends(dependencies.provide_request_api_version),\n) -> schemas.responses.FlowRunResponse:\n\"\"\"\n    Create a flow run. If a flow run with the same flow_id and\n    idempotency key already exists, the existing flow run will be returned.\n\n    If no state is provided, the flow run will be created in a PENDING state.\n    \"\"\"\n    # hydrate the input model into a full flow run / state model\n    flow_run = schemas.core.FlowRun(**flow_run.dict())\n\n    # pass the request version to the orchestration engine to support compatibility code\n    orchestration_parameters.update({\"api-version\": api_version})\n\n    if not flow_run.state:\n        flow_run.state = schemas.states.Pending()\n\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        model = await models.flow_runs.create_flow_run(\n            session=session,\n            flow_run=flow_run,\n            orchestration_parameters=orchestration_parameters,\n        )\n        if model.created >= now:\n            response.status_code = status.HTTP_201_CREATED\n\n        return schemas.responses.FlowRunResponse.from_orm(model)\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.delete_flow_run","title":"delete_flow_run async","text":"

Delete a flow run by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_flow_run(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n\"\"\"\n    Delete a flow run by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.flow_runs.delete_flow_run(\n            session=session, flow_run_id=flow_run_id\n        )\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\"\n        )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.flow_run_history","title":"flow_run_history async","text":"

Query for flow run history data across a given range and interval.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.post(\"/history\")\nasync def flow_run_history(\n    history_start: DateTimeTZ = Body(..., description=\"The history's start time.\"),\n    history_end: DateTimeTZ = Body(..., description=\"The history's end time.\"),\n    history_interval: datetime.timedelta = Body(\n        ...,\n        description=(\n            \"The size of each history interval, in seconds. Must be at least 1 second.\"\n        ),\n        alias=\"history_interval_seconds\",\n    ),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_queues: schemas.filters.WorkQueueFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.HistoryResponse]:\n\"\"\"\n    Query for flow run history data across a given range and interval.\n    \"\"\"\n    if history_interval < datetime.timedelta(seconds=1):\n        raise HTTPException(\n            status.HTTP_422_UNPROCESSABLE_ENTITY,\n            detail=\"History interval must not be less than 1 second.\",\n        )\n\n    async with db.session_context() as session:\n        return await run_history(\n            session=session,\n            run_type=\"flow_run\",\n            history_start=history_start,\n            history_end=history_end,\n            history_interval=history_interval,\n            flows=flows,\n            flow_runs=flow_runs,\n            task_runs=task_runs,\n            deployments=deployments,\n            work_pools=work_pools,\n            work_queues=work_queues,\n        )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run","title":"read_flow_run async","text":"

Get a flow run by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.get(\"/{id}\")\nasync def read_flow_run(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.responses.FlowRunResponse:\n\"\"\"\n    Get a flow run by id.\n    \"\"\"\n    async with db.session_context() as session:\n        flow_run = await models.flow_runs.read_flow_run(\n            session=session, flow_run_id=flow_run_id\n        )\n        if not flow_run:\n            raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\")\n        return schemas.responses.FlowRunResponse.from_orm(flow_run)\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_run_graph","title":"read_flow_run_graph async","text":"

Get a task run dependency map for a given flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.get(\"/{id}/graph\")\nasync def read_flow_run_graph(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[DependencyResult]:\n\"\"\"\n    Get a task run dependency map for a given flow run.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flow_runs.read_task_run_dependencies(\n            session=session, flow_run_id=flow_run_id\n        )\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.read_flow_runs","title":"read_flow_runs async","text":"

Query for flow runs.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.post(\"/filter\", response_class=ORJSONResponse)\nasync def read_flow_runs(\n    sort: schemas.sorting.FlowRunSort = Body(schemas.sorting.FlowRunSort.ID_DESC),\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_pool_queues: schemas.filters.WorkQueueFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.FlowRunResponse]:\n\"\"\"\n    Query for flow runs.\n    \"\"\"\n    async with db.session_context() as session:\n        db_flow_runs = await models.flow_runs.read_flow_runs(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_pool_queues,\n            offset=offset,\n            limit=limit,\n            sort=sort,\n        )\n\n        # Instead of relying on fastapi.encoders.jsonable_encoder to convert the\n        # response to JSON, we do so more efficiently ourselves.\n        # In particular, the FastAPI encoder is very slow for large, nested objects.\n        # See: https://github.com/tiangolo/fastapi/issues/1224\n        encoded = [\n            schemas.responses.FlowRunResponse.from_orm(fr).dict(json_compatible=True)\n            for fr in db_flow_runs\n        ]\n        return ORJSONResponse(content=encoded)\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.resume_flow_run","title":"resume_flow_run async","text":"

Resume a paused flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.post(\"/{id}/resume\")\nasync def resume_flow_run(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    response: Response = None,\n    flow_policy: BaseOrchestrationPolicy = Depends(\n        orchestration_dependencies.provide_flow_policy\n    ),\n    orchestration_parameters: dict = Depends(\n        orchestration_dependencies.provide_flow_orchestration_parameters\n    ),\n    api_version=Depends(dependencies.provide_request_api_version),\n) -> OrchestrationResult:\n\"\"\"\n    Resume a paused flow run.\n    \"\"\"\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        flow_run = await models.flow_runs.read_flow_run(session, flow_run_id)\n        state = flow_run.state\n\n        if state is None or state.type != schemas.states.StateType.PAUSED:\n            result = OrchestrationResult(\n                state=None,\n                status=schemas.responses.SetStateStatus.ABORT,\n                details=schemas.responses.StateAbortDetails(\n                    reason=\"Cannot resume a flow run that is not paused.\"\n                ),\n            )\n            return result\n\n        orchestration_parameters.update({\"api-version\": api_version})\n\n        if state.state_details.pause_reschedule:\n            orchestration_result = await models.flow_runs.set_flow_run_state(\n                session=session,\n                flow_run_id=flow_run_id,\n                state=schemas.states.Scheduled(\n                    name=\"Resuming\", scheduled_time=pendulum.now(\"UTC\")\n                ),\n                flow_policy=flow_policy,\n                orchestration_parameters=orchestration_parameters,\n            )\n        else:\n            orchestration_result = await models.flow_runs.set_flow_run_state(\n                session=session,\n                flow_run_id=flow_run_id,\n                state=schemas.states.Running(),\n                flow_policy=flow_policy,\n                orchestration_parameters=orchestration_parameters,\n            )\n\n        # set the 201 if a new state was created\n        if orchestration_result.state and orchestration_result.state.timestamp >= now:\n            response.status_code = status.HTTP_201_CREATED\n        else:\n            response.status_code = status.HTTP_200_OK\n\n        return orchestration_result\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.set_flow_run_state","title":"set_flow_run_state async","text":"

Set a flow run state, invoking any orchestration rules.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.post(\"/{id}/set_state\")\nasync def set_flow_run_state(\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    state: schemas.actions.StateCreate = Body(..., description=\"The intended state.\"),\n    force: bool = Body(\n        False,\n        description=(\n            \"If false, orchestration rules will be applied that may alter or prevent\"\n            \" the state transition. If True, orchestration rules are not applied.\"\n        ),\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    response: Response = None,\n    flow_policy: BaseOrchestrationPolicy = Depends(\n        orchestration_dependencies.provide_flow_policy\n    ),\n    orchestration_parameters: dict = Depends(\n        orchestration_dependencies.provide_flow_orchestration_parameters\n    ),\n    api_version=Depends(dependencies.provide_request_api_version),\n) -> OrchestrationResult:\n\"\"\"Set a flow run state, invoking any orchestration rules.\"\"\"\n\n    # pass the request version to the orchestration engine to support compatibility code\n    orchestration_parameters.update({\"api-version\": api_version})\n\n    now = pendulum.now(\"UTC\")\n\n    # create the state\n    async with db.session_context(\n        begin_transaction=True, with_for_update=True\n    ) as session:\n        orchestration_result = await models.flow_runs.set_flow_run_state(\n            session=session,\n            flow_run_id=flow_run_id,\n            # convert to a full State object\n            state=schemas.states.State.parse_obj(state),\n            force=force,\n            flow_policy=flow_policy,\n            orchestration_parameters=orchestration_parameters,\n        )\n\n    # set the 201 if a new state was created\n    if orchestration_result.state and orchestration_result.state.timestamp >= now:\n        response.status_code = status.HTTP_201_CREATED\n    else:\n        response.status_code = status.HTTP_200_OK\n\n    return orchestration_result\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flow_runs/#prefect.server.api.flow_runs.update_flow_run","title":"update_flow_run async","text":"

Updates a flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flow_runs.py
@router.patch(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def update_flow_run(\n    flow_run: schemas.actions.FlowRunUpdate,\n    flow_run_id: UUID = Path(..., description=\"The flow run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n\"\"\"\n    Updates a flow run.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.flow_runs.update_flow_run(\n            session=session, flow_run=flow_run, flow_run_id=flow_run_id\n        )\n    if not result:\n        raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Flow run not found\")\n
","tags":["Prefect API","flow runs"]},{"location":"api-ref/server/api/flows/","title":"server.api.flows","text":"","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows","title":"prefect.server.api.flows","text":"

Routes for interacting with flow objects.

","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.count_flows","title":"count_flows async","text":"

Count flows.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flows.py
@router.post(\"/count\")\nasync def count_flows(\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> int:\n\"\"\"\n    Count flows.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flows.count_flows(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n        )\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.create_flow","title":"create_flow async","text":"

Gracefully creates a new flow from the provided schema. If a flow with the same name already exists, the existing flow is returned.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flows.py
@router.post(\"/\")\nasync def create_flow(\n    flow: schemas.actions.FlowCreate,\n    response: Response,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.Flow:\n\"\"\"Gracefully creates a new flow from the provided schema. If a flow with the\n    same name already exists, the existing flow is returned.\n    \"\"\"\n    # hydrate the input model into a full flow model\n    flow = schemas.core.Flow(**flow.dict())\n\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        model = await models.flows.create_flow(session=session, flow=flow)\n\n    if model.created >= now:\n        response.status_code = status.HTTP_201_CREATED\n    return model\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.delete_flow","title":"delete_flow async","text":"

Delete a flow by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flows.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_flow(\n    flow_id: UUID = Path(..., description=\"The flow id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n\"\"\"\n    Delete a flow by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.flows.delete_flow(session=session, flow_id=flow_id)\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n        )\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.read_flow","title":"read_flow async","text":"

Get a flow by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flows.py
@router.get(\"/{id}\")\nasync def read_flow(\n    flow_id: UUID = Path(..., description=\"The flow id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.Flow:\n\"\"\"\n    Get a flow by id.\n    \"\"\"\n    async with db.session_context() as session:\n        flow = await models.flows.read_flow(session=session, flow_id=flow_id)\n    if not flow:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n        )\n    return flow\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.read_flow_by_name","title":"read_flow_by_name async","text":"

Get a flow by name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flows.py
@router.get(\"/name/{name}\")\nasync def read_flow_by_name(\n    name: str = Path(..., description=\"The name of the flow\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.Flow:\n\"\"\"\n    Get a flow by name.\n    \"\"\"\n    async with db.session_context() as session:\n        flow = await models.flows.read_flow_by_name(session=session, name=name)\n    if not flow:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n        )\n    return flow\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.read_flows","title":"read_flows async","text":"

Query for flows.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flows.py
@router.post(\"/filter\")\nasync def read_flows(\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    sort: schemas.sorting.FlowSort = Body(schemas.sorting.FlowSort.NAME_ASC),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.Flow]:\n\"\"\"\n    Query for flows.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.flows.read_flows(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            sort=sort,\n            offset=offset,\n            limit=limit,\n        )\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/flows/#prefect.server.api.flows.update_flow","title":"update_flow async","text":"

Updates a flow.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/flows.py
@router.patch(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def update_flow(\n    flow: schemas.actions.FlowUpdate,\n    flow_id: UUID = Path(..., description=\"The flow id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n\"\"\"\n    Updates a flow.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.flows.update_flow(\n            session=session, flow=flow, flow_id=flow_id\n        )\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow not found\"\n        )\n
","tags":["Prefect API","flows"]},{"location":"api-ref/server/api/run_history/","title":"server.api.run_history","text":"","tags":["Prefect API","flow runs","task runs","observability"]},{"location":"api-ref/server/api/run_history/#prefect.server.api.run_history","title":"prefect.server.api.run_history","text":"

Utilities for querying flow and task run history.

","tags":["Prefect API","flow runs","task runs","observability"]},{"location":"api-ref/server/api/run_history/#prefect.server.api.run_history.run_history","title":"run_history async","text":"

Produce a history of runs aggregated by interval and state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/run_history.py
@inject_db\nasync def run_history(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    run_type: Literal[\"flow_run\", \"task_run\"],\n    history_start: DateTimeTZ,\n    history_end: DateTimeTZ,\n    history_interval: datetime.timedelta,\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    work_pools: schemas.filters.WorkPoolFilter = None,\n    work_queues: schemas.filters.WorkQueueFilter = None,\n) -> List[schemas.responses.HistoryResponse]:\n\"\"\"\n    Produce a history of runs aggregated by interval and state\n    \"\"\"\n\n    # SQLite has issues with very small intervals\n    # (by 0.001 seconds it stops incrementing the interval)\n    if history_interval < datetime.timedelta(seconds=1):\n        raise ValueError(\"History interval must not be less than 1 second.\")\n\n    # prepare run-specific models\n    if run_type == \"flow_run\":\n        run_model = db.FlowRun\n        run_filter_function = models.flow_runs._apply_flow_run_filters\n    elif run_type == \"task_run\":\n        run_model = db.TaskRun\n        run_filter_function = models.task_runs._apply_task_run_filters\n    else:\n        raise ValueError(\n            f\"Unknown run type {run_type!r}. Expected 'flow_run' or 'task_run'.\"\n        )\n\n    # create a CTE for timestamp intervals\n    intervals = db.make_timestamp_intervals(\n        history_start,\n        history_end,\n        history_interval,\n    ).cte(\"intervals\")\n\n    # apply filters to the flow runs (and related states)\n    runs = (\n        await run_filter_function(\n            sa.select(\n                run_model.id,\n                run_model.expected_start_time,\n                run_model.estimated_run_time,\n                run_model.estimated_start_time_delta,\n                run_model.state_type,\n                run_model.state_name,\n            ).select_from(run_model),\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            work_pool_filter=work_pools,\n            work_queue_filter=work_queues,\n        )\n    ).alias(\"runs\")\n    # outer join intervals to the filtered runs to create a dataset composed of\n    # every interval and the aggregate of all its runs. 
The runs aggregate is represented\n    # by a descriptive JSON object\n    counts = (\n        sa.select(\n            intervals.c.interval_start,\n            intervals.c.interval_end,\n            # build a JSON object, ignoring the case where the count of runs is 0\n            sa.case(\n                (sa.func.count(runs.c.id) == 0, None),\n                else_=db.build_json_object(\n                    \"state_type\",\n                    runs.c.state_type,\n                    \"state_name\",\n                    runs.c.state_name,\n                    \"count_runs\",\n                    sa.func.count(runs.c.id),\n                    # estimated run times only includes positive run times (to avoid any unexpected corner cases)\n                    \"sum_estimated_run_time\",\n                    sa.func.sum(\n                        db.greatest(0, sa.extract(\"epoch\", runs.c.estimated_run_time))\n                    ),\n                    # estimated lateness is the sum of any positive start time deltas\n                    \"sum_estimated_lateness\",\n                    sa.func.sum(\n                        db.greatest(\n                            0, sa.extract(\"epoch\", runs.c.estimated_start_time_delta)\n                        )\n                    ),\n                ),\n            ).label(\"state_agg\"),\n        )\n        .select_from(intervals)\n        .join(\n            runs,\n            sa.and_(\n                runs.c.expected_start_time >= intervals.c.interval_start,\n                runs.c.expected_start_time < intervals.c.interval_end,\n            ),\n            isouter=True,\n        )\n        .group_by(\n            intervals.c.interval_start,\n            intervals.c.interval_end,\n            runs.c.state_type,\n            runs.c.state_name,\n        )\n    ).alias(\"counts\")\n\n    # aggregate all state-aggregate objects into a single array for each interval,\n    # ensuring that intervals with no runs have an empty array\n    query = (\n        sa.select(\n            counts.c.interval_start,\n            counts.c.interval_end,\n            sa.func.coalesce(\n                db.json_arr_agg(db.cast_to_json(counts.c.state_agg)).filter(\n                    counts.c.state_agg.is_not(None)\n                ),\n                sa.text(\"'[]'\"),\n            ).label(\"states\"),\n        )\n        .group_by(counts.c.interval_start, counts.c.interval_end)\n        .order_by(counts.c.interval_start)\n        # return no more than 500 bars\n        .limit(500)\n    )\n\n    # issue the query\n    result = await session.execute(query)\n    records = result.mappings()\n\n    # load and parse the record if the database returns JSON as strings\n    if db.uses_json_strings:\n        records = [dict(r) for r in records]\n        for r in records:\n            r[\"states\"] = json.loads(r[\"states\"])\n\n    return pydantic.parse_obj_as(List[schemas.responses.HistoryResponse], list(records))\n
","tags":["Prefect API","flow runs","task runs","observability"]},{"location":"api-ref/server/api/saved_searches/","title":"server.api.saved_searches","text":"","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches","title":"prefect.server.api.saved_searches","text":"

Routes for interacting with saved search objects.

","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.create_saved_search","title":"create_saved_search async","text":"

Gracefully creates a new saved search from the provided schema.

If a saved search with the same name already exists, the saved search's fields are replaced.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/saved_searches.py
@router.put(\"/\")\nasync def create_saved_search(\n    saved_search: schemas.actions.SavedSearchCreate,\n    response: Response,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.SavedSearch:\n\"\"\"Gracefully creates a new saved search from the provided schema.\n\n    If a saved search with the same name already exists, the saved search's fields are\n    replaced.\n    \"\"\"\n\n    # hydrate the input model into a full model\n    saved_search = schemas.core.SavedSearch(**saved_search.dict())\n\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        model = await models.saved_searches.create_saved_search(\n            session=session, saved_search=saved_search\n        )\n\n    if model.created >= now:\n        response.status_code = status.HTTP_201_CREATED\n\n    return model\n
","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.delete_saved_search","title":"delete_saved_search async","text":"

Delete a saved search by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/saved_searches.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_saved_search(\n    saved_search_id: UUID = Path(..., description=\"The saved search id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n\"\"\"\n    Delete a saved search by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.saved_searches.delete_saved_search(\n            session=session, saved_search_id=saved_search_id\n        )\n    if not result:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Saved search not found\"\n        )\n
","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.read_saved_search","title":"read_saved_search async","text":"

Get a saved search by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/saved_searches.py
@router.get(\"/{id}\")\nasync def read_saved_search(\n    saved_search_id: UUID = Path(..., description=\"The saved search id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.SavedSearch:\n\"\"\"\n    Get a saved search by id.\n    \"\"\"\n    async with db.session_context() as session:\n        saved_search = await models.saved_searches.read_saved_search(\n            session=session, saved_search_id=saved_search_id\n        )\n    if not saved_search:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Saved search not found\"\n        )\n    return saved_search\n
","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/saved_searches/#prefect.server.api.saved_searches.read_saved_searches","title":"read_saved_searches async","text":"

Query for saved searches.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/saved_searches.py
@router.post(\"/filter\")\nasync def read_saved_searches(\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.SavedSearch]:\n\"\"\"\n    Query for saved searches.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.saved_searches.read_saved_searches(\n            session=session,\n            offset=offset,\n            limit=limit,\n        )\n
","tags":["Prefect API","search","saved search"]},{"location":"api-ref/server/api/server/","title":"server.api.server","text":"","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server","title":"prefect.server.api.server","text":"

Defines the Prefect REST API FastAPI app.

","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.RequestLimitMiddleware","title":"RequestLimitMiddleware","text":"

A middleware that limits the number of concurrent requests handled by the API.

This is a blunt tool for limiting SQLite concurrent writes, which will cause failures at high volume. Ideally, we would only apply the limit to routes that perform writes.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/server.py
class RequestLimitMiddleware:\n\"\"\"\n    A middleware that limits the number of concurrent requests handled by the API.\n\n    This is a blunt tool for limiting SQLite concurrent writes which will cause failures\n    at high volume. Ideally, we would only apply the limit to routes that perform\n    writes.\n    \"\"\"\n\n    def __init__(self, app, limit: float):\n        self.app = app\n        self._limiter = anyio.CapacityLimiter(limit)\n\n    async def __call__(self, scope, receive, send) -> None:\n        async with self._limiter:\n            await self.app(scope, receive, send)\n
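For illustration, the same wiring that create_app (documented below) uses for SQLite can be applied to any FastAPI app; the app and the limit of 10 here are illustrative, not Prefect defaults.

from fastapi import FastAPI

from prefect.server.api.server import RequestLimitMiddleware

app = FastAPI()

@app.get("/ping")
async def ping():
    return "pong"

# At most 10 requests are processed at once; additional requests wait on the
# middleware's anyio.CapacityLimiter before being passed through to the app.
app.add_middleware(RequestLimitMiddleware, limit=10)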
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.SPAStaticFiles","title":"SPAStaticFiles","text":"

Bases: StaticFiles

Implementation of StaticFiles for serving single page applications.

Adds get_response handling to ensure that when a resource isn't found the application still returns the index.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/server.py
class SPAStaticFiles(StaticFiles):\n\"\"\"\n    Implementation of `StaticFiles` for serving single page applications.\n\n    Adds `get_response` handling to ensure that when a resource isn't found the\n    application still returns the index.\n    \"\"\"\n\n    async def get_response(self, path: str, scope):\n        try:\n            return await super().get_response(path, scope)\n        except HTTPException:\n            return await super().get_response(\"./index.html\", scope)\n
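A small sketch of mounting SPAStaticFiles so that unknown paths fall back to the single page application's index; the build directory is illustrative and must exist on disk.

from fastapi import FastAPI

from prefect.server.api.server import SPAStaticFiles

app = FastAPI()

# Requests for files that exist are served normally; anything else (for
# example a client-side route like /flows/123) returns ./index.html instead
# of a 404, so the single page application can handle the route itself.
app.mount("/", SPAStaticFiles(directory="./ui-build"), name="ui")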
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.create_api_app","title":"create_api_app","text":"

Create a FastAPI app that includes the Prefect REST API

Parameters:

Name Type Description Default router_prefix Optional[str]

a prefix to apply to all included routers

'' dependencies Optional[List[Depends]]

a list of global dependencies to add to each Prefect REST API router

None health_check_path str

the health check route path

'/health' fast_api_app_kwargs dict

kwargs to pass to the FastAPI constructor

None router_overrides Mapping[str, Optional[APIRouter]]

a mapping of route prefixes (e.g. \"/admin\") to new routers, allowing the caller to override the default routers. If None is provided as a value, the default router will be dropped from the application.

None

Returns:

Type Description FastAPI

a FastAPI app that serves the Prefect REST API

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/server.py
def create_api_app(\n    router_prefix: Optional[str] = \"\",\n    dependencies: Optional[List[Depends]] = None,\n    health_check_path: str = \"/health\",\n    version_check_path: str = \"/version\",\n    fast_api_app_kwargs: dict = None,\n    router_overrides: Mapping[str, Optional[APIRouter]] = None,\n) -> FastAPI:\n\"\"\"\n    Create a FastAPI app that includes the Prefect REST API\n\n    Args:\n        router_prefix: a prefix to apply to all included routers\n        dependencies: a list of global dependencies to add to each Prefect REST API router\n        health_check_path: the health check route path\n        fast_api_app_kwargs: kwargs to pass to the FastAPI constructor\n        router_overrides: a mapping of route prefixes (i.e. \"/admin\") to new routers\n            allowing the caller to override the default routers. If `None` is provided\n            as a value, the default router will be dropped from the application.\n\n    Returns:\n        a FastAPI app that serves the Prefect REST API\n    \"\"\"\n    fast_api_app_kwargs = fast_api_app_kwargs or {}\n    api_app = FastAPI(title=API_TITLE, **fast_api_app_kwargs)\n    api_app.add_middleware(GZipMiddleware)\n\n    @api_app.get(health_check_path, tags=[\"Root\"])\n    async def health_check():\n        return True\n\n    @api_app.get(version_check_path, tags=[\"Root\"])\n    async def orion_info():\n        return SERVER_API_VERSION\n\n    # always include version checking\n    if dependencies is None:\n        dependencies = [Depends(enforce_minimum_version)]\n    else:\n        dependencies.append(Depends(enforce_minimum_version))\n\n    routers = {router.prefix: router for router in API_ROUTERS}\n\n    if router_overrides:\n        for prefix, router in router_overrides.items():\n            # We may want to allow this behavior in the future to inject new routes, but\n            # for now this will be treated an as an exception\n            if prefix not in routers:\n                raise KeyError(\n                    \"Router override provided for prefix that does not exist:\"\n                    f\" {prefix!r}\"\n                )\n\n            # Drop the existing router\n            existing_router = routers.pop(prefix)\n\n            # Replace it with a new router if provided\n            if router is not None:\n                if prefix != router.prefix:\n                    # We may want to allow this behavior in the future, but it will\n                    # break expectations without additional routing and is banned for\n                    # now\n                    raise ValueError(\n                        f\"Router override for {prefix!r} defines a different prefix \"\n                        f\"{router.prefix!r}.\"\n                    )\n\n                existing_paths = method_paths_from_routes(existing_router.routes)\n                new_paths = method_paths_from_routes(router.routes)\n                if not existing_paths.issubset(new_paths):\n                    raise ValueError(\n                        f\"Router override for {prefix!r} is missing paths defined by \"\n                        f\"the original router: {existing_paths.difference(new_paths)}\"\n                    )\n\n                routers[prefix] = router\n\n    for router in routers.values():\n        api_app.include_router(router, prefix=router_prefix, dependencies=dependencies)\n\n    return api_app\n
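A rough sketch of building just the REST API app and hitting the two root routes it always defines. This assumes the Prefect server package is importable and uses FastAPI's TestClient; the /health route does not touch the database.

from fastapi.testclient import TestClient

from prefect.server.api.server import create_api_app

api_app = create_api_app()

with TestClient(api_app) as client:
    print(client.get("/health").json())   # True
    print(client.get("/version").json())  # the server's API version string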
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.create_app","title":"create_app","text":"

Create a FastAPI app that includes the Prefect REST API and UI

Parameters:

Name Type Description Default settings prefect.settings.Settings

The settings to use to create the app. If not set, settings are pulled from the context.

None ignore_cache bool

If set, a new application will be created even if the settings match. Otherwise, an application is returned from the cache.

False ephemeral bool

If set, the application will be treated as ephemeral. The UI and services will be disabled.

False Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/server.py
def create_app(\n    settings: prefect.settings.Settings = None,\n    ephemeral: bool = False,\n    ignore_cache: bool = False,\n) -> FastAPI:\n\"\"\"\n    Create an FastAPI app that includes the Prefect REST API and UI\n\n    Args:\n        settings: The settings to use to create the app. If not set, settings are pulled\n            from the context.\n        ignore_cache: If set, a new application will be created even if the settings\n            match. Otherwise, an application is returned from the cache.\n        ephemeral: If set, the application will be treated as ephemeral. The UI\n            and services will be disabled.\n    \"\"\"\n    settings = settings or prefect.settings.get_current_settings()\n    cache_key = (settings, ephemeral)\n\n    if cache_key in APP_CACHE and not ignore_cache:\n        return APP_CACHE[cache_key]\n\n    # TODO: Move these startup functions out of this closure into the top-level or\n    #       another dedicated location\n    async def run_migrations():\n\"\"\"Ensure the database is created and up to date with the current migrations\"\"\"\n        if prefect.settings.PREFECT_API_DATABASE_MIGRATE_ON_START:\n            from prefect.server.database.dependencies import provide_database_interface\n\n            db = provide_database_interface()\n            await db.create_db()\n\n    @_memoize_block_auto_registration\n    async def add_block_types():\n\"\"\"Add all registered blocks to the database\"\"\"\n        if not prefect.settings.PREFECT_API_BLOCKS_REGISTER_ON_START:\n            return\n\n        from prefect.server.database.dependencies import provide_database_interface\n        from prefect.server.models.block_registration import run_block_auto_registration\n\n        db = provide_database_interface()\n        session = await db.session()\n\n        try:\n            async with session:\n                await run_block_auto_registration(session=session)\n        except Exception as exc:\n            logger.warn(f\"Error occurred during block auto-registration: {exc!r}\")\n\n    async def start_services():\n\"\"\"Start additional services when the Prefect REST API starts up.\"\"\"\n\n        if ephemeral:\n            app.state.services = None\n            return\n\n        service_instances = []\n\n        if prefect.settings.PREFECT_API_SERVICES_SCHEDULER_ENABLED.value():\n            service_instances.append(services.scheduler.Scheduler())\n            service_instances.append(services.scheduler.RecentDeploymentsScheduler())\n\n        if prefect.settings.PREFECT_API_SERVICES_LATE_RUNS_ENABLED.value():\n            service_instances.append(services.late_runs.MarkLateRuns())\n\n        if prefect.settings.PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED.value():\n            service_instances.append(services.pause_expirations.FailExpiredPauses())\n\n        if prefect.settings.PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED.value():\n            service_instances.append(\n                services.cancellation_cleanup.CancellationCleanup()\n            )\n\n        if prefect.settings.PREFECT_SERVER_ANALYTICS_ENABLED.value():\n            service_instances.append(services.telemetry.Telemetry())\n\n        if prefect.settings.PREFECT_API_SERVICES_FLOW_RUN_NOTIFICATIONS_ENABLED.value():\n            service_instances.append(\n                services.flow_run_notifications.FlowRunNotifications()\n            )\n\n        loop = asyncio.get_running_loop()\n\n        app.state.services = {\n            service: loop.create_task(service.start()) 
for service in service_instances\n        }\n\n        for service, task in app.state.services.items():\n            logger.info(f\"{service.name} service scheduled to start in-app\")\n            task.add_done_callback(partial(on_service_exit, service))\n\n    async def stop_services():\n\"\"\"Ensure services are stopped before the Prefect REST API shuts down.\"\"\"\n        if hasattr(app.state, \"services\") and app.state.services:\n            await asyncio.gather(*[service.stop() for service in app.state.services])\n            try:\n                await asyncio.gather(\n                    *[task.stop() for task in app.state.services.values()]\n                )\n            except Exception:\n                # `on_service_exit` should handle logging exceptions on exit\n                pass\n\n    @asynccontextmanager\n    async def lifespan(app):\n        try:\n            await run_migrations()\n            await add_block_types()\n            await start_services()\n            yield\n        finally:\n            await stop_services()\n\n    def on_service_exit(service, task):\n\"\"\"\n        Added as a callback for completion of services to log exit\n        \"\"\"\n        try:\n            # Retrieving the result will raise the exception\n            task.result()\n        except Exception:\n            logger.error(f\"{service.name} service failed!\", exc_info=True)\n        else:\n            logger.info(f\"{service.name} service stopped!\")\n\n    app = FastAPI(\n        title=TITLE,\n        version=API_VERSION,\n        lifespan=lifespan,\n    )\n    api_app = create_api_app(\n        fast_api_app_kwargs={\n            \"exception_handlers\": {\n                # NOTE: FastAPI special cases the generic `Exception` handler and\n                #       registers it as a separate middleware from the others\n                Exception: custom_internal_exception_handler,\n                RequestValidationError: validation_exception_handler,\n                sa.exc.IntegrityError: integrity_exception_handler,\n                ObjectNotFoundError: prefect_object_not_found_exception_handler,\n            }\n        },\n    )\n    ui_app = create_ui_app(ephemeral)\n\n    # middleware\n    app.add_middleware(\n        CORSMiddleware,\n        allow_origins=[\"*\"],\n        allow_methods=[\"*\"],\n        allow_headers=[\"*\"],\n    )\n\n    # Limit the number of concurrent requests when using a SQLite datbase to reduce\n    # chance of errors where the database cannot be opened due to a high number of\n    # concurrent writes\n    if (\n        get_dialect(prefect.settings.PREFECT_API_DATABASE_CONNECTION_URL.value()).name\n        == \"sqlite\"\n    ):\n        app.add_middleware(RequestLimitMiddleware, limit=100)\n\n    api_app.mount(\n        \"/static\",\n        StaticFiles(\n            directory=os.path.join(\n                os.path.dirname(os.path.realpath(__file__)), \"static\"\n            )\n        ),\n        name=\"static\",\n    )\n    app.api_app = api_app\n    app.mount(\"/api\", app=api_app, name=\"api\")\n    app.mount(\"/\", app=ui_app, name=\"ui\")\n\n    def openapi():\n\"\"\"\n        Convenience method for extracting the user facing OpenAPI schema from the API app.\n\n        This method is attached to the global public app for easy access.\n        \"\"\"\n        partial_schema = get_openapi(\n            title=API_TITLE,\n            version=API_VERSION,\n            routes=api_app.routes,\n        )\n        new_schema = partial_schema.copy()\n        
new_schema[\"paths\"] = {}\n        for path, value in partial_schema[\"paths\"].items():\n            new_schema[\"paths\"][f\"/api{path}\"] = value\n\n        new_schema[\"info\"][\"x-logo\"] = {\"url\": \"static/prefect-logo-mark-gradient.png\"}\n        return new_schema\n\n    app.openapi = openapi\n\n    APP_CACHE[cache_key] = app\n    return app\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.custom_internal_exception_handler","title":"custom_internal_exception_handler async","text":"

Log a detailed exception for internal server errors before returning.

Send 503 for errors clients can retry on.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/server.py
async def custom_internal_exception_handler(request: Request, exc: Exception):\n\"\"\"\n    Log a detailed exception for internal server errors before returning.\n\n    Send 503 for errors clients can retry on.\n    \"\"\"\n    logger.error(\"Encountered exception in request:\", exc_info=True)\n\n    if is_client_retryable_exception(exc):\n        return JSONResponse(\n            content={\"exception_message\": \"Service Unavailable\"},\n            status_code=status.HTTP_503_SERVICE_UNAVAILABLE,\n        )\n\n    return JSONResponse(\n        content={\"exception_message\": \"Internal Server Error\"},\n        status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,\n    )\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.integrity_exception_handler","title":"integrity_exception_handler async","text":"

Capture database integrity errors.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/server.py
async def integrity_exception_handler(request: Request, exc: Exception):\n\"\"\"Capture database integrity errors.\"\"\"\n    logger.error(\"Encountered exception in request:\", exc_info=True)\n    return JSONResponse(\n        content={\n            \"detail\": (\n                \"Data integrity conflict. This usually means a \"\n                \"unique or foreign key constraint was violated. \"\n                \"See server logs for details.\"\n            )\n        },\n        status_code=status.HTTP_409_CONFLICT,\n    )\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.prefect_object_not_found_exception_handler","title":"prefect_object_not_found_exception_handler async","text":"

Return 404 status code on object not found exceptions.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/server.py
async def prefect_object_not_found_exception_handler(\n    request: Request, exc: ObjectNotFoundError\n):\n\"\"\"Return 404 status code on object not found exceptions.\"\"\"\n    return JSONResponse(\n        content={\"exception_message\": str(exc)}, status_code=status.HTTP_404_NOT_FOUND\n    )\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/server/#prefect.server.api.server.validation_exception_handler","title":"validation_exception_handler async","text":"

Provide a detailed message for request validation errors.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/server.py
async def validation_exception_handler(request: Request, exc: RequestValidationError):\n\"\"\"Provide a detailed message for request validation errors.\"\"\"\n    return JSONResponse(\n        status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,\n        content=jsonable_encoder(\n            {\n                \"exception_message\": \"Invalid request received.\",\n                \"exception_detail\": exc.errors(),\n                \"request_body\": exc.body,\n            }\n        ),\n    )\n
","tags":["Prefect API","FastAPI"]},{"location":"api-ref/server/api/task_run_states/","title":"server.api.task_run_states","text":"","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_run_states/#prefect.server.api.task_run_states","title":"prefect.server.api.task_run_states","text":"

Routes for interacting with task run state objects.

","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_run_states/#prefect.server.api.task_run_states.read_task_run_state","title":"read_task_run_state async","text":"

Get a task run state by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_run_states.py
@router.get(\"/{id}\")\nasync def read_task_run_state(\n    task_run_state_id: UUID = Path(\n        ..., description=\"The task run state id\", alias=\"id\"\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.states.State:\n\"\"\"\n    Get a task run state by id.\n    \"\"\"\n    async with db.session_context() as session:\n        task_run_state = await models.task_run_states.read_task_run_state(\n            session=session, task_run_state_id=task_run_state_id\n        )\n    if not task_run_state:\n        raise HTTPException(\n            status_code=status.HTTP_404_NOT_FOUND, detail=\"Flow run state not found\"\n        )\n    return task_run_state\n
","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_run_states/#prefect.server.api.task_run_states.read_task_run_states","title":"read_task_run_states async","text":"

Get states associated with a task run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_run_states.py
@router.get(\"/\")\nasync def read_task_run_states(\n    task_run_id: UUID,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.states.State]:\n\"\"\"\n    Get states associated with a task run.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.task_run_states.read_task_run_states(\n            session=session, task_run_id=task_run_id\n        )\n
","tags":["Prefect API","task runs","states"]},{"location":"api-ref/server/api/task_runs/","title":"server.api.task_runs","text":"","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs","title":"prefect.server.api.task_runs","text":"

Routes for interacting with task run objects.

","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.count_task_runs","title":"count_task_runs async","text":"

Count task runs.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_runs.py
@router.post(\"/count\")\nasync def count_task_runs(\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n) -> int:\n\"\"\"\n    Count task runs.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.task_runs.count_task_runs(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n        )\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.create_task_run","title":"create_task_run async","text":"

Create a task run. If a task run with the same flow_run_id, task_key, and dynamic_key already exists, the existing task run will be returned.

If no state is provided, the task run will be created in a PENDING state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_runs.py
@router.post(\"/\")\nasync def create_task_run(\n    task_run: schemas.actions.TaskRunCreate,\n    response: Response,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    orchestration_parameters: dict = Depends(\n        orchestration_dependencies.provide_task_orchestration_parameters\n    ),\n) -> schemas.core.TaskRun:\n\"\"\"\n    Create a task run. If a task run with the same flow_run_id,\n    task_key, and dynamic_key already exists, the existing task\n    run will be returned.\n\n    If no state is provided, the task run will be created in a PENDING state.\n    \"\"\"\n    # hydrate the input model into a full task run / state model\n    task_run = schemas.core.TaskRun(**task_run.dict())\n\n    if not task_run.state:\n        task_run.state = schemas.states.Pending()\n\n    now = pendulum.now(\"UTC\")\n\n    async with db.session_context(begin_transaction=True) as session:\n        model = await models.task_runs.create_task_run(\n            session=session,\n            task_run=task_run,\n            orchestration_parameters=orchestration_parameters,\n        )\n\n    if model.created >= now:\n        response.status_code = status.HTTP_201_CREATED\n    return model\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.delete_task_run","title":"delete_task_run async","text":"

Delete a task run by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_runs.py
@router.delete(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_task_run(\n    task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n\"\"\"\n    Delete a task run by id.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.task_runs.delete_task_run(\n            session=session, task_run_id=task_run_id\n        )\n    if not result:\n        raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Task not found\")\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.read_task_run","title":"read_task_run async","text":"

Get a task run by id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_runs.py
@router.get(\"/{id}\")\nasync def read_task_run(\n    task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> schemas.core.TaskRun:\n\"\"\"\n    Get a task run by id.\n    \"\"\"\n    async with db.session_context() as session:\n        task_run = await models.task_runs.read_task_run(\n            session=session, task_run_id=task_run_id\n        )\n    if not task_run:\n        raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Task not found\")\n    return task_run\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.read_task_runs","title":"read_task_runs async","text":"

Query for task runs.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_runs.py
@router.post(\"/filter\")\nasync def read_task_runs(\n    sort: schemas.sorting.TaskRunSort = Body(schemas.sorting.TaskRunSort.ID_DESC),\n    limit: int = dependencies.LimitBody(),\n    offset: int = Body(0, ge=0),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.core.TaskRun]:\n\"\"\"\n    Query for task runs.\n    \"\"\"\n    async with db.session_context() as session:\n        return await models.task_runs.read_task_runs(\n            session=session,\n            flow_filter=flows,\n            flow_run_filter=flow_runs,\n            task_run_filter=task_runs,\n            deployment_filter=deployments,\n            offset=offset,\n            limit=limit,\n            sort=sort,\n        )\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.set_task_run_state","title":"set_task_run_state async","text":"

Set a task run state, invoking any orchestration rules.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_runs.py
@router.post(\"/{id}/set_state\")\nasync def set_task_run_state(\n    task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n    state: schemas.actions.StateCreate = Body(..., description=\"The intended state.\"),\n    force: bool = Body(\n        False,\n        description=(\n            \"If false, orchestration rules will be applied that may alter or prevent\"\n            \" the state transition. If True, orchestration rules are not applied.\"\n        ),\n    ),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n    response: Response = None,\n    task_policy: BaseOrchestrationPolicy = Depends(\n        orchestration_dependencies.provide_task_policy\n    ),\n    orchestration_parameters: dict = Depends(\n        orchestration_dependencies.provide_task_orchestration_parameters\n    ),\n) -> OrchestrationResult:\n\"\"\"Set a task run state, invoking any orchestration rules.\"\"\"\n\n    now = pendulum.now(\"UTC\")\n\n    # create the state\n    async with db.session_context(\n        begin_transaction=True, with_for_update=True\n    ) as session:\n        orchestration_result = await models.task_runs.set_task_run_state(\n            session=session,\n            task_run_id=task_run_id,\n            state=schemas.states.State.parse_obj(\n                state\n            ),  # convert to a full State object\n            force=force,\n            task_policy=task_policy,\n            orchestration_parameters=orchestration_parameters,\n        )\n\n    # set the 201 if a new state was created\n    if orchestration_result.state and orchestration_result.state.timestamp >= now:\n        response.status_code = status.HTTP_201_CREATED\n    else:\n        response.status_code = status.HTTP_200_OK\n\n    return orchestration_result\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.task_run_history","title":"task_run_history async","text":"

Query for task run history data across a given range and interval.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_runs.py
@router.post(\"/history\")\nasync def task_run_history(\n    history_start: DateTimeTZ = Body(..., description=\"The history's start time.\"),\n    history_end: DateTimeTZ = Body(..., description=\"The history's end time.\"),\n    history_interval: datetime.timedelta = Body(\n        ...,\n        description=(\n            \"The size of each history interval, in seconds. Must be at least 1 second.\"\n        ),\n        alias=\"history_interval_seconds\",\n    ),\n    flows: schemas.filters.FlowFilter = None,\n    flow_runs: schemas.filters.FlowRunFilter = None,\n    task_runs: schemas.filters.TaskRunFilter = None,\n    deployments: schemas.filters.DeploymentFilter = None,\n    db: PrefectDBInterface = Depends(provide_database_interface),\n) -> List[schemas.responses.HistoryResponse]:\n\"\"\"\n    Query for task run history data across a given range and interval.\n    \"\"\"\n    if history_interval < datetime.timedelta(seconds=1):\n        raise HTTPException(\n            status.HTTP_422_UNPROCESSABLE_ENTITY,\n            detail=\"History interval must not be less than 1 second.\",\n        )\n\n    async with db.session_context() as session:\n        return await run_history(\n            session=session,\n            run_type=\"task_run\",\n            history_start=history_start,\n            history_end=history_end,\n            history_interval=history_interval,\n            flows=flows,\n            flow_runs=flow_runs,\n            task_runs=task_runs,\n            deployments=deployments,\n        )\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/api/task_runs/#prefect.server.api.task_runs.update_task_run","title":"update_task_run async","text":"

Updates a task run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/api/task_runs.py
@router.patch(\"/{id}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def update_task_run(\n    task_run: schemas.actions.TaskRunUpdate,\n    task_run_id: UUID = Path(..., description=\"The task run id\", alias=\"id\"),\n    db: PrefectDBInterface = Depends(provide_database_interface),\n):\n\"\"\"\n    Updates a task run.\n    \"\"\"\n    async with db.session_context(begin_transaction=True) as session:\n        result = await models.task_runs.update_task_run(\n            session=session, task_run=task_run, task_run_id=task_run_id\n        )\n    if not result:\n        raise HTTPException(status.HTTP_404_NOT_FOUND, detail=\"Task run not found\")\n
","tags":["Prefect API","task runs"]},{"location":"api-ref/server/models/deployments/","title":"server.models.deployments","text":""},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments","title":"prefect.server.models.deployments","text":"

Functions for interacting with deployment ORM objects. Intended for internal use by the Prefect REST API.

"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.check_work_queues_for_deployment","title":"check_work_queues_for_deployment async","text":"

Get work queues that can pick up the specified deployment.

Work queues will pick up a deployment when all of the following are met:

- The deployment has ALL tags that the work queue has (i.e. the work queue's tags must be a subset of the deployment's tags).
- The work queue's specified deployment IDs match the deployment's ID, or the work queue does NOT have specified deployment IDs.
- The work queue's specified flow runners match the deployment's flow runner, or the work queue does NOT have a specified flow runner.

Notes on the query:

- The database currently allows either "null" or empty lists as null values in filters, so both cases are caught with "or".
- `json_contains(A, B)` should be interpreted as "True if A contains B".

Returns:

Type Description List[schemas.core.WorkQueue]

List[db.WorkQueue]: WorkQueues

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/deployments.py
@inject_db\nasync def check_work_queues_for_deployment(\n    db: PrefectDBInterface, session: sa.orm.Session, deployment_id: UUID\n) -> List[schemas.core.WorkQueue]:\n\"\"\"\n    Get work queues that can pick up the specified deployment.\n\n    Work queues will pick up a deployment when all of the following are met.\n\n    - The deployment has ALL tags that the work queue has (i.e. the work\n    queue's tags must be a subset of the deployment's tags).\n    - The work queue's specified deployment IDs match the deployment's ID,\n    or the work queue does NOT have specified deployment IDs.\n    - The work queue's specified flow runners match the deployment's flow\n    runner or the work queue does NOT have a specified flow runner.\n\n    Notes on the query:\n\n    - Our database currently allows either \"null\" and empty lists as\n    null values in filters, so we need to catch both cases with \"or\".\n    - `json_contains(A, B)` should be interepreted as \"True if A\n    contains B\".\n\n    Returns:\n        List[db.WorkQueue]: WorkQueues\n    \"\"\"\n    deployment = await session.get(db.Deployment, deployment_id)\n    if not deployment:\n        raise ObjectNotFoundError(f\"Deployment with id {deployment_id} not found\")\n\n    query = (\n        select(db.WorkQueue)\n        # work queue tags are a subset of deployment tags\n        .filter(\n            or_(\n                json_contains(deployment.tags, db.WorkQueue.filter[\"tags\"]),\n                json_contains([], db.WorkQueue.filter[\"tags\"]),\n                json_contains(None, db.WorkQueue.filter[\"tags\"]),\n            )\n        )\n        # deployment_ids is null or contains the deployment's ID\n        .filter(\n            or_(\n                json_contains(\n                    db.WorkQueue.filter[\"deployment_ids\"],\n                    str(deployment.id),\n                ),\n                json_contains(None, db.WorkQueue.filter[\"deployment_ids\"]),\n                json_contains([], db.WorkQueue.filter[\"deployment_ids\"]),\n            )\n        )\n    )\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
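A sketch of calling this model function directly, mirroring the session pattern the API routes in this reference use; the deployment id is a placeholder and must exist in the configured database.

import asyncio
from uuid import UUID

from prefect.server import models
from prefect.server.database.dependencies import provide_database_interface


async def main():
    db = provide_database_interface()
    deployment_id = UUID("00000000-0000-0000-0000-000000000000")  # placeholder

    async with db.session_context() as session:
        work_queues = await models.deployments.check_work_queues_for_deployment(
            session=session, deployment_id=deployment_id
        )

    for work_queue in work_queues:
        print(work_queue.name)


asyncio.run(main())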
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.count_deployments","title":"count_deployments async","text":"

Count deployments.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required flow_filter schemas.filters.FlowFilter

only count deployments whose flows match these criteria

None flow_run_filter schemas.filters.FlowRunFilter

only count deployments whose flow runs match these criteria

None task_run_filter schemas.filters.TaskRunFilter

only count deployments whose task runs match these criteria

None deployment_filter schemas.filters.DeploymentFilter

only count deployments that match these filters

None work_pool_filter schemas.filters.WorkPoolFilter

only count deployments that match these work pool filters

None work_queue_filter schemas.filters.WorkQueueFilter

only count deployments that match these work pool queue filters

None

Returns:

Name Type Description int int

the number of deployments matching filters

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/deployments.py
@inject_db\nasync def count_deployments(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n    work_queue_filter: schemas.filters.WorkQueueFilter = None,\n) -> int:\n\"\"\"\n    Count deployments.\n\n    Args:\n        session: A database session\n        flow_filter: only count deployments whose flows match these criteria\n        flow_run_filter: only count deployments whose flow runs match these criteria\n        task_run_filter: only count deployments whose task runs match these criteria\n        deployment_filter: only count deployment that match these filters\n        work_pool_filter: only count deployments that match these work pool filters\n        work_queue_filter: only count deployments that match these work pool queue filters\n\n    Returns:\n        int: the number of deployments matching filters\n    \"\"\"\n\n    query = select(sa.func.count(sa.text(\"*\"))).select_from(db.Deployment)\n\n    query = await _apply_deployment_filters(\n        query=query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        work_queue_filter=work_queue_filter,\n        db=db,\n    )\n\n    result = await session.execute(query)\n    return result.scalar()\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.create_deployment","title":"create_deployment async","text":"

Upserts a deployment.

Parameters:

Name Type Description Default session sa.orm.Session

a database session

required deployment schemas.core.Deployment

a deployment model

required

Returns:

Type Description

db.Deployment: the newly-created or updated deployment

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/deployments.py
@inject_db\nasync def create_deployment(\n    session: sa.orm.Session, deployment: schemas.core.Deployment, db: PrefectDBInterface\n):\n\"\"\"Upserts a deployment.\n\n    Args:\n        session: a database session\n        deployment: a deployment model\n\n    Returns:\n        db.Deployment: the newly-created or updated deployment\n\n    \"\"\"\n\n    # set `updated` manually\n    # known limitation of `on_conflict_do_update`, will not use `Column.onupdate`\n    # https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#the-set-clause\n    deployment.updated = pendulum.now(\"UTC\")\n\n    insert_values = deployment.dict(shallow=True, exclude_unset=True)\n\n    insert_stmt = (\n        (await db.insert(db.Deployment))\n        .values(**insert_values)\n        .on_conflict_do_update(\n            index_elements=db.deployment_unique_upsert_columns,\n            set_={\n                **deployment.dict(\n                    shallow=True,\n                    exclude_unset=True,\n                    exclude={\"id\", \"created\", \"created_by\"},\n                ),\n            },\n        )\n    )\n\n    await session.execute(insert_stmt)\n\n    query = (\n        sa.select(db.Deployment)\n        .where(\n            sa.and_(\n                db.Deployment.flow_id == deployment.flow_id,\n                db.Deployment.name == deployment.name,\n            )\n        )\n        .execution_options(populate_existing=True)\n    )\n    result = await session.execute(query)\n    model = result.scalar()\n\n    # because this could upsert a different schedule, delete any runs from the old\n    # deployment\n    await _delete_scheduled_runs(\n        session=session, deployment_id=model.id, db=db, auto_scheduled_only=True\n    )\n\n    return model\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.delete_deployment","title":"delete_deployment async","text":"

Delete a deployment by id.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required deployment_id UUID

a deployment id

required

Returns:

Name Type Description bool bool

whether or not the deployment was deleted

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/deployments.py
@inject_db\nasync def delete_deployment(\n    session: sa.orm.Session, deployment_id: UUID, db: PrefectDBInterface\n) -> bool:\n\"\"\"\n    Delete a deployment by id.\n\n    Args:\n        session: A database session\n        deployment_id: a deployment id\n\n    Returns:\n        bool: whether or not the deployment was deleted\n    \"\"\"\n\n    # delete scheduled runs, both auto- and user- created.\n    await _delete_scheduled_runs(\n        session=session, deployment_id=deployment_id, auto_scheduled_only=False\n    )\n\n    result = await session.execute(\n        delete(db.Deployment).where(db.Deployment.id == deployment_id)\n    )\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployment","title":"read_deployment async","text":"

Reads a deployment by id.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required deployment_id UUID

a deployment id

required

Returns:

Type Description

db.Deployment: the deployment

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/deployments.py
@inject_db\nasync def read_deployment(\n    session: sa.orm.Session, deployment_id: UUID, db: PrefectDBInterface\n):\n\"\"\"Reads a deployment by id.\n\n    Args:\n        session: A database session\n        deployment_id: a deployment id\n\n    Returns:\n        db.Deployment: the deployment\n    \"\"\"\n\n    return await session.get(db.Deployment, deployment_id)\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployment_by_name","title":"read_deployment_by_name async","text":"

Reads a deployment by name.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required name str

a deployment name

required flow_name str

the name of the flow the deployment belongs to

required

Returns:

Type Description

db.Deployment: the deployment

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/deployments.py
@inject_db\nasync def read_deployment_by_name(\n    session: sa.orm.Session, name: str, flow_name: str, db: PrefectDBInterface\n):\n\"\"\"Reads a deployment by name.\n\n    Args:\n        session: A database session\n        name: a deployment name\n        flow_name: the name of the flow the deployment belongs to\n\n    Returns:\n        db.Deployment: the deployment\n    \"\"\"\n\n    result = await session.execute(\n        select(db.Deployment)\n        .join(db.Flow, db.Deployment.flow_id == db.Flow.id)\n        .where(\n            sa.and_(\n                db.Flow.name == flow_name,\n                db.Deployment.name == name,\n            )\n        )\n        .limit(1)\n    )\n    return result.scalar()\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.read_deployments","title":"read_deployments async","text":"

Read deployments.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required offset int

Query offset

None limit int

Query limit

None flow_filter schemas.filters.FlowFilter

only select deployments whose flows match these criteria

None flow_run_filter schemas.filters.FlowRunFilter

only select deployments whose flow runs match these criteria

None task_run_filter schemas.filters.TaskRunFilter

only select deployments whose task runs match these criteria

None deployment_filter schemas.filters.DeploymentFilter

only select deployments that match these filters

None work_pool_filter schemas.filters.WorkPoolFilter

only select deployments whose work pools match these criteria

None work_queue_filter schemas.filters.WorkQueueFilter

only select deployments whose work pool queues match these criteria

None sort schemas.sorting.DeploymentSort

the sort criteria for selected deployments. Defaults to name ASC.

schemas.sorting.DeploymentSort.NAME_ASC

Returns:

Type Description

List[db.Deployment]: deployments

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/deployments.py
@inject_db\nasync def read_deployments(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    offset: int = None,\n    limit: int = None,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n    work_queue_filter: schemas.filters.WorkQueueFilter = None,\n    sort: schemas.sorting.DeploymentSort = schemas.sorting.DeploymentSort.NAME_ASC,\n):\n\"\"\"\n    Read deployments.\n\n    Args:\n        session: A database session\n        offset: Query offset\n        limit: Query limit\n        flow_filter: only select deployments whose flows match these criteria\n        flow_run_filter: only select deployments whose flow runs match these criteria\n        task_run_filter: only select deployments whose task runs match these criteria\n        deployment_filter: only select deployment that match these filters\n        work_pool_filter: only select deployments whose work pools match these criteria\n        work_queue_filter: only select deployments whose work pool queues match these criteria\n        sort: the sort criteria for selected deployments. Defaults to `name` ASC.\n\n    Returns:\n        List[db.Deployment]: deployments\n    \"\"\"\n\n    query = select(db.Deployment).order_by(sort.as_sql_sort(db=db))\n\n    query = await _apply_deployment_filters(\n        query=query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        work_queue_filter=work_queue_filter,\n        db=db,\n    )\n\n    if offset is not None:\n        query = query.offset(offset)\n    if limit is not None:\n        query = query.limit(limit)\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.schedule_runs","title":"schedule_runs async","text":"

Schedule flow runs for a deployment

Parameters:

Name Type Description Default session sa.orm.Session

a database session

required deployment_id UUID

the id of the deployment to schedule

required start_time datetime.datetime

the time from which to start scheduling runs

None end_time datetime.datetime

runs will be scheduled until at most this time

None min_time datetime.timedelta

runs will be scheduled until at least this far in the future

None min_runs int

a minimum number of runs to schedule

None max_runs int

a maximum number of runs to schedule

None

This function will generate the minimum number of runs that satisfy the min and max times, and the min and max counts. Specifically, the following order will be respected.

- Runs will be generated starting on or after the `start_time`
- No more than `max_runs` runs will be generated
- No runs will be generated after `end_time` is reached
- At least `min_runs` runs will be generated
- Runs will be generated until at least `start_time` + `min_time` is reached

Returns:

Type Description List[UUID]

a list of flow run ids scheduled for the deployment

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/deployments.py
async def schedule_runs(\n    session: sa.orm.Session,\n    deployment_id: UUID,\n    start_time: datetime.datetime = None,\n    end_time: datetime.datetime = None,\n    min_time: datetime.timedelta = None,\n    min_runs: int = None,\n    max_runs: int = None,\n    auto_scheduled: bool = True,\n) -> List[UUID]:\n\"\"\"\n    Schedule flow runs for a deployment\n\n    Args:\n        session: a database session\n        deployment_id: the id of the deployment to schedule\n        start_time: the time from which to start scheduling runs\n        end_time: runs will be scheduled until at most this time\n        min_time: runs will be scheduled until at least this far in the future\n        min_runs: a minimum amount of runs to schedule\n        max_runs: a maximum amount of runs to schedule\n\n    This function will generate the minimum number of runs that satisfy the min\n    and max times, and the min and max counts. Specifically, the following order\n    will be respected.\n\n        - Runs will be generated starting on or after the `start_time`\n        - No more than `max_runs` runs will be generated\n        - No runs will be generated after `end_time` is reached\n        - At least `min_runs` runs will be generated\n        - Runs will be generated until at least `start_time` + `min_time` is reached\n\n    Returns:\n        a list of flow run ids scheduled for the deployment\n    \"\"\"\n    if min_runs is None:\n        min_runs = PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS.value()\n    if max_runs is None:\n        max_runs = PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS.value()\n    if start_time is None:\n        start_time = pendulum.now(\"UTC\")\n    if end_time is None:\n        end_time = start_time + (\n            PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME.value()\n        )\n    if min_time is None:\n        min_time = PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME.value()\n\n    start_time = pendulum.instance(start_time)\n    end_time = pendulum.instance(end_time)\n\n    runs = await _generate_scheduled_flow_runs(\n        session=session,\n        deployment_id=deployment_id,\n        start_time=start_time,\n        end_time=end_time,\n        min_time=min_time,\n        min_runs=min_runs,\n        max_runs=max_runs,\n        auto_scheduled=auto_scheduled,\n    )\n    return await _insert_scheduled_flow_runs(session=session, runs=runs)\n
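A sketch of scheduling runs through the model layer inside a transaction, again following the session pattern used by the API routes; the deployment id is a placeholder, and omitted arguments fall back to the scheduler settings described above.

import asyncio
from uuid import UUID

from prefect.server import models
from prefect.server.database.dependencies import provide_database_interface


async def main():
    db = provide_database_interface()
    deployment_id = UUID("00000000-0000-0000-0000-000000000000")  # placeholder

    async with db.session_context(begin_transaction=True) as session:
        run_ids = await models.deployments.schedule_runs(
            session=session,
            deployment_id=deployment_id,
            max_runs=3,  # cap this call at three scheduled runs
        )

    print(f"scheduled {len(run_ids)} flow runs")


asyncio.run(main())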
"},{"location":"api-ref/server/models/deployments/#prefect.server.models.deployments.update_deployment","title":"update_deployment async","text":"

Updates a deployment.

Parameters:

Name Type Description Default session sa.orm.Session

a database session

required deployment_id UUID

the ID of the deployment to modify

required deployment schemas.actions.DeploymentUpdate

changes to a deployment model

required

Returns:

Name Type Description bool bool

whether the deployment was updated

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/deployments.py
@inject_db\nasync def update_deployment(\n    session: sa.orm.Session,\n    deployment_id: UUID,\n    deployment: schemas.actions.DeploymentUpdate,\n    db: PrefectDBInterface,\n) -> bool:\n\"\"\"Updates a deployment.\n\n    Args:\n        session: a database session\n        deployment_id: the ID of the deployment to modify\n        deployment: changes to a deployment model\n\n    Returns:\n        bool: whether the deployment was updated\n\n    \"\"\"\n\n    # exclude_unset=True allows us to only update values provided by\n    # the user, ignoring any defaults on the model\n    update_data = deployment.dict(\n        shallow=True,\n        exclude_unset=True,\n        exclude={\"work_pool_name\"},\n    )\n    if deployment.work_pool_name and deployment.work_queue_name:\n        # If a specific pool name/queue name combination was provided, get the\n        # ID for that work pool queue.\n        update_data[\n            \"work_queue_id\"\n        ] = await WorkerLookups()._get_work_queue_id_from_name(\n            session=session,\n            work_pool_name=deployment.work_pool_name,\n            work_queue_name=deployment.work_queue_name,\n            create_queue_if_not_found=True,\n        )\n    elif deployment.work_pool_name:\n        # If just a pool name was provided, get the ID for its default\n        # work pool queue.\n        update_data[\n            \"work_queue_id\"\n        ] = await WorkerLookups()._get_default_work_queue_id_from_work_pool_name(\n            session=session,\n            work_pool_name=deployment.work_pool_name,\n        )\n    elif deployment.work_queue_name:\n        # If just a queue name was provided, ensure the queue exists and\n        # get its ID.\n        work_queue = await models.work_queues._ensure_work_queue_exists(\n            session=session, name=update_data[\"work_queue_name\"], db=db\n        )\n        update_data[\"work_queue_id\"] = work_queue.id\n\n    update_stmt = (\n        sa.update(db.Deployment)\n        .where(db.Deployment.id == deployment_id)\n        .values(**update_data)\n    )\n    result = await session.execute(update_stmt)\n\n    # delete any auto scheduled runs that would have reflected the old deployment config\n    await _delete_scheduled_runs(\n        session=session, deployment_id=deployment_id, db=db, auto_scheduled_only=True\n    )\n\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flow_run_states/","title":"server.models.flow_run_states","text":""},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states","title":"prefect.server.models.flow_run_states","text":"

Functions for interacting with flow run state ORM objects. Intended for internal use by the Prefect REST API.

"},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states.delete_flow_run_state","title":"delete_flow_run_state async","text":"

Delete a flow run state by id.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required flow_run_state_id UUID

a flow run state id

required

Returns:

Name Type Description bool bool

whether or not the flow run state was deleted

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_run_states.py
@inject_db\nasync def delete_flow_run_state(\n    session: sa.orm.Session, flow_run_state_id: UUID, db: PrefectDBInterface\n) -> bool:\n\"\"\"\n    Delete a flow run state by id.\n\n    Args:\n        session: A database session\n        flow_run_state_id: a flow run state id\n\n    Returns:\n        bool: whether or not the flow run state was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.FlowRunState).where(db.FlowRunState.id == flow_run_state_id)\n    )\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states.read_flow_run_state","title":"read_flow_run_state async","text":"

Reads a flow run state by id.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required flow_run_state_id UUID

a flow run state id

required

Returns:

Type Description

db.FlowRunState: the flow state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_run_states.py
@inject_db\nasync def read_flow_run_state(\n    session: sa.orm.Session, flow_run_state_id: UUID, db: PrefectDBInterface\n):\n\"\"\"\n    Reads a flow run state by id.\n\n    Args:\n        session: A database session\n        flow_run_state_id: a flow run state id\n\n    Returns:\n        db.FlowRunState: the flow state\n    \"\"\"\n\n    return await session.get(db.FlowRunState, flow_run_state_id)\n
"},{"location":"api-ref/server/models/flow_run_states/#prefect.server.models.flow_run_states.read_flow_run_states","title":"read_flow_run_states async","text":"

Reads flow run states for a flow run.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required flow_run_id UUID

the flow run id

required

Returns:

Type Description

List[db.FlowRunState]: the flow run states

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_run_states.py
@inject_db\nasync def read_flow_run_states(\n    session: sa.orm.Session, flow_run_id: UUID, db: PrefectDBInterface\n):\n\"\"\"\n    Reads flow runs states for a flow run.\n\n    Args:\n        session: A database session\n        flow_run_id: the flow run id\n\n    Returns:\n        List[db.FlowRunState]: the flow run states\n    \"\"\"\n\n    query = (\n        select(db.FlowRunState)\n        .filter_by(flow_run_id=flow_run_id)\n        .order_by(db.FlowRunState.timestamp)\n    )\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/flow_runs/","title":"server.models.flow_runs","text":""},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs","title":"prefect.server.models.flow_runs","text":"

Functions for interacting with flow run ORM objects. Intended for internal use by the Prefect REST API.

"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.count_flow_runs","title":"count_flow_runs async","text":"

Count flow runs.

Parameters:

Name Type Description Default session AsyncSession

a database session

required flow_filter schemas.filters.FlowFilter

only count flow runs whose flows match these filters

None flow_run_filter schemas.filters.FlowRunFilter

only count flow runs that match these filters

None task_run_filter schemas.filters.TaskRunFilter

only count flow runs whose task runs match these filters

None deployment_filter schemas.filters.DeploymentFilter

only count flow runs whose deployments match these filters

None

Returns:

Name Type Description int int

count of flow runs

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_runs.py
@inject_db\nasync def count_flow_runs(\n    session: AsyncSession,\n    db: PrefectDBInterface,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n    work_queue_filter: schemas.filters.WorkQueueFilter = None,\n) -> int:\n\"\"\"\n    Count flow runs.\n\n    Args:\n        session: a database session\n        flow_filter: only count flow runs whose flows match these filters\n        flow_run_filter: only count flow runs that match these filters\n        task_run_filter: only count flow runs whose task runs match these filters\n        deployment_filter: only count flow runs whose deployments match these filters\n\n    Returns:\n        int: count of flow runs\n    \"\"\"\n\n    query = select(sa.func.count(sa.text(\"*\"))).select_from(db.FlowRun)\n\n    query = await _apply_flow_run_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        work_queue_filter=work_queue_filter,\n        db=db,\n    )\n\n    result = await session.execute(query)\n    return result.scalar()\n
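Usage sketch (illustrative, not part of the module source): counting flow runs narrowed by a FlowRunFilter, assuming an open AsyncSession named session and a list of flow run ids; omit the filter arguments to count every flow run.

from prefect.server import models, schemas


async def count_runs_by_id(session, flow_run_ids) -> int:
    # Count only the flow runs whose ids appear in flow_run_ids
    return await models.flow_runs.count_flow_runs(
        session=session,
        flow_run_filter=schemas.filters.FlowRunFilter(
            id=schemas.filters.FlowRunFilterId(any_=list(flow_run_ids))
        ),
    )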
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.create_flow_run","title":"create_flow_run async","text":"

Creates a new flow run.

If the provided flow run has a state attached, it will also be created.

Parameters:

Name Type Description Default session AsyncSession

a database session

required flow_run schemas.core.FlowRun

a flow run model

required

Returns:

Type Description

db.FlowRun: the newly-created flow run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_runs.py
@inject_db\nasync def create_flow_run(\n    session: AsyncSession,\n    flow_run: schemas.core.FlowRun,\n    db: PrefectDBInterface,\n    orchestration_parameters: dict = None,\n):\n\"\"\"Creates a new flow run.\n\n    If the provided flow run has a state attached, it will also be created.\n\n    Args:\n        session: a database session\n        flow_run: a flow run model\n\n    Returns:\n        db.FlowRun: the newly-created flow run\n    \"\"\"\n    now = pendulum.now(\"UTC\")\n\n    flow_run_dict = dict(\n        **flow_run.dict(\n            shallow=True,\n            exclude={\n                \"created\",\n                \"state\",\n                \"estimated_run_time\",\n                \"estimated_start_time_delta\",\n            },\n            exclude_unset=True,\n        ),\n        created=now,\n    )\n\n    # if no idempotency key was provided, create the run directly\n    if not flow_run.idempotency_key:\n        model = db.FlowRun(**flow_run_dict)\n        session.add(model)\n        await session.flush()\n\n    # otherwise let the database take care of enforcing idempotency\n    else:\n        insert_stmt = (\n            (await db.insert(db.FlowRun))\n            .values(**flow_run_dict)\n            .on_conflict_do_nothing(\n                index_elements=db.flow_run_unique_upsert_columns,\n            )\n        )\n        await session.execute(insert_stmt)\n\n        # read the run to see if idempotency was applied or not\n        query = (\n            sa.select(db.FlowRun)\n            .where(\n                sa.and_(\n                    db.FlowRun.flow_id == flow_run.flow_id,\n                    db.FlowRun.idempotency_key == flow_run.idempotency_key,\n                )\n            )\n            .limit(1)\n            .execution_options(populate_existing=True)\n            .options(\n                selectinload(db.FlowRun.work_queue).selectinload(db.WorkQueue.work_pool)\n            )\n        )\n        result = await session.execute(query)\n        model = result.scalar()\n\n    # if the flow run was created in this function call then we need to set the\n    # state. If it was created idempotently, the created time won't match.\n    if model.created == now and flow_run.state:\n        await models.flow_runs.set_flow_run_state(\n            session=session,\n            flow_run_id=model.id,\n            state=flow_run.state,\n            force=True,\n            orchestration_parameters=orchestration_parameters,\n        )\n    return model\n
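Usage sketch (illustrative, not part of the module source): creating a flow run with an initial state and an idempotency key, assuming an open AsyncSession named session and the id of an existing flow; the key value is a placeholder.

from prefect.server import models, schemas


async def create_run_idempotently(session, flow_id):
    # Two calls with the same idempotency key resolve to the same run:
    # the second insert is a no-op and the existing row is read back.
    run_1 = await models.flow_runs.create_flow_run(
        session=session,
        flow_run=schemas.core.FlowRun(
            flow_id=flow_id,
            state=schemas.states.Pending(),
            idempotency_key="nightly-2023-01-01",
        ),
    )
    run_2 = await models.flow_runs.create_flow_run(
        session=session,
        flow_run=schemas.core.FlowRun(
            flow_id=flow_id, idempotency_key="nightly-2023-01-01"
        ),
    )
    assert run_1.id == run_2.id
    return run_1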
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.delete_flow_run","title":"delete_flow_run async","text":"

Delete a flow run by flow_run_id.

Parameters:

Name Type Description Default session AsyncSession

A database session

required flow_run_id UUID

a flow run id

required

Returns:

Name Type Description bool bool

whether or not the flow run was deleted

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_runs.py
@inject_db\nasync def delete_flow_run(\n    session: AsyncSession, flow_run_id: UUID, db: PrefectDBInterface\n) -> bool:\n\"\"\"\n    Delete a flow run by flow_run_id.\n\n    Args:\n        session: A database session\n        flow_run_id: a flow run id\n\n    Returns:\n        bool: whether or not the flow run was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.FlowRun).where(db.FlowRun.id == flow_run_id)\n    )\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_flow_run","title":"read_flow_run async","text":"

Reads a flow run by id.

Parameters:

Name Type Description Default session AsyncSession

A database session

required flow_run_id UUID

a flow run id

required

Returns:

Type Description

db.FlowRun: the flow run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_runs.py
@inject_db\nasync def read_flow_run(\n    session: AsyncSession,\n    flow_run_id: UUID,\n    db: PrefectDBInterface,\n    for_update: bool = False,\n):\n\"\"\"\n    Reads a flow run by id.\n\n    Args:\n        session: A database session\n        flow_run_id: a flow run id\n\n    Returns:\n        db.FlowRun: the flow run\n    \"\"\"\n    select = (\n        sa.select(db.FlowRun)\n        .where(db.FlowRun.id == flow_run_id)\n        .options(\n            selectinload(db.FlowRun.work_queue).selectinload(db.WorkQueue.work_pool)\n        )\n    )\n\n    if for_update:\n        select = select.with_for_update()\n\n    result = await session.execute(select)\n    return result.scalar()\n
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_flow_runs","title":"read_flow_runs async","text":"

Read flow runs.

Parameters:

Name Type Description Default session AsyncSession

a database session

required columns List

a list of the flow run ORM columns to load, for performance

None flow_filter schemas.filters.FlowFilter

only select flow runs whose flows match these filters

None flow_run_filter schemas.filters.FlowRunFilter

only select flow runs that match these filters

None task_run_filter schemas.filters.TaskRunFilter

only select flow runs whose task runs match these filters

None deployment_filter schemas.filters.DeploymentFilter

only select flow runs whose deployments match these filters

None offset int

Query offset

None limit int

Query limit

None sort schemas.sorting.FlowRunSort

Query sort

schemas.sorting.FlowRunSort.ID_DESC

Returns:

Type Description

List[db.FlowRun]: flow runs

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_runs.py
@inject_db\nasync def read_flow_runs(\n    session: AsyncSession,\n    db: PrefectDBInterface,\n    columns: List = None,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n    work_queue_filter: schemas.filters.WorkQueueFilter = None,\n    offset: int = None,\n    limit: int = None,\n    sort: schemas.sorting.FlowRunSort = schemas.sorting.FlowRunSort.ID_DESC,\n):\n\"\"\"\n    Read flow runs.\n\n    Args:\n        session: a database session\n        columns: a list of the flow run ORM columns to load, for performance\n        flow_filter: only select flow runs whose flows match these filters\n        flow_run_filter: only select flow runs match these filters\n        task_run_filter: only select flow runs whose task runs match these filters\n        deployment_filter: only sleect flow runs whose deployments match these filters\n        offset: Query offset\n        limit: Query limit\n        sort: Query sort\n\n    Returns:\n        List[db.FlowRun]: flow runs\n    \"\"\"\n    query = (\n        select(db.FlowRun)\n        .order_by(sort.as_sql_sort(db))\n        .options(\n            selectinload(db.FlowRun.work_queue).selectinload(db.WorkQueue.work_pool)\n        )\n    )\n\n    if columns:\n        query = query.options(load_only(*columns))\n\n    query = await _apply_flow_run_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        work_queue_filter=work_queue_filter,\n        db=db,\n    )\n\n    if offset is not None:\n        query = query.offset(offset)\n\n    if limit is not None:\n        query = query.limit(limit)\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
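Usage sketch (illustrative, not part of the module source): paging through flow runs with the documented offset, limit, and sort arguments, assuming an open AsyncSession named session.

from prefect.server import models, schemas


async def most_recent_flow_runs(session, page_size: int = 10, page: int = 0):
    # ID_DESC is the default sort shown above; offset/limit implement paging
    return await models.flow_runs.read_flow_runs(
        session=session,
        sort=schemas.sorting.FlowRunSort.ID_DESC,
        offset=page * page_size,
        limit=page_size,
    )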
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.read_task_run_dependencies","title":"read_task_run_dependencies async","text":"

Get a task run dependency map for a given flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_runs.py
async def read_task_run_dependencies(\n    session: AsyncSession,\n    flow_run_id: UUID,\n) -> List[DependencyResult]:\n\"\"\"\n    Get a task run dependency map for a given flow run.\n    \"\"\"\n    flow_run = await models.flow_runs.read_flow_run(\n        session=session, flow_run_id=flow_run_id\n    )\n    if not flow_run:\n        raise ObjectNotFoundError(f\"Flow run with id {flow_run_id} not found\")\n\n    task_runs = await models.task_runs.read_task_runs(\n        session=session,\n        flow_run_filter=schemas.filters.FlowRunFilter(\n            id=schemas.filters.FlowRunFilterId(any_=[flow_run_id])\n        ),\n    )\n\n    dependency_graph = []\n\n    for task_run in task_runs:\n        inputs = list(set(chain(*task_run.task_inputs.values())))\n        untrackable_result_status = (\n            False\n            if task_run.state is None\n            else task_run.state.state_details.untrackable_result\n        )\n        dependency_graph.append(\n            {\n                \"id\": task_run.id,\n                \"upstream_dependencies\": inputs,\n                \"state\": task_run.state,\n                \"expected_start_time\": task_run.expected_start_time,\n                \"name\": task_run.name,\n                \"start_time\": task_run.start_time,\n                \"end_time\": task_run.end_time,\n                \"total_run_time\": task_run.total_run_time,\n                \"estimated_run_time\": task_run.estimated_run_time,\n                \"untrackable_result\": untrackable_result_status,\n            }\n        )\n\n    return dependency_graph\n
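Usage sketch (illustrative, not part of the module source): printing the upstream dependencies of each task run in a flow run, assuming an open AsyncSession named session.

from prefect.server import models


async def print_dependency_map(session, flow_run_id):
    graph = await models.flow_runs.read_task_run_dependencies(
        session=session, flow_run_id=flow_run_id
    )
    for node in graph:
        # Each entry pairs a task run with the inputs it depends on
        print(node["name"], "<-", node["upstream_dependencies"])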
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.set_flow_run_state","title":"set_flow_run_state async","text":"

Creates a new orchestrated flow run state.

Setting a new state on a run is one of the principal actions governed by Prefect's orchestration logic. Setting a new run state will not guarantee creation, but will instead trigger orchestration rules to govern the proposed state input. If the state is considered valid, it will be written to the database. Otherwise, it's possible a different state, or no state, will be created. A force flag is supplied to bypass a subset of orchestration logic.

Parameters:

Name Type Description Default session AsyncSession

a database session

required flow_run_id UUID

the flow run id

required state schemas.states.State

a flow run state model

required force bool

if False, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied.

False

Returns:

Type Description OrchestrationResult

OrchestrationResult object

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_runs.py
async def set_flow_run_state(\n    session: AsyncSession,\n    flow_run_id: UUID,\n    state: schemas.states.State,\n    force: bool = False,\n    flow_policy: BaseOrchestrationPolicy = None,\n    orchestration_parameters: dict = None,\n) -> OrchestrationResult:\n\"\"\"\n    Creates a new orchestrated flow run state.\n\n    Setting a new state on a run is the one of the principal actions that is governed by\n    Prefect's orchestration logic. Setting a new run state will not guarantee creation,\n    but instead trigger orchestration rules to govern the proposed `state` input. If\n    the state is considered valid, it will be written to the database. Otherwise, a\n    it's possible a different state, or no state, will be created. A `force` flag is\n    supplied to bypass a subset of orchestration logic.\n\n    Args:\n        session: a database session\n        flow_run_id: the flow run id\n        state: a flow run state model\n        force: if False, orchestration rules will be applied that may alter or prevent\n            the state transition. If True, orchestration rules are not applied.\n\n    Returns:\n        OrchestrationResult object\n    \"\"\"\n\n    # load the flow run\n    run = await models.flow_runs.read_flow_run(\n        session=session,\n        flow_run_id=flow_run_id,\n        # Lock the row to prevent orchestration race conditions\n        for_update=True,\n    )\n\n    if not run:\n        raise ObjectNotFoundError(f\"Flow run with id {flow_run_id} not found\")\n\n    initial_state = run.state.as_state() if run.state else None\n    initial_state_type = initial_state.type if initial_state else None\n    proposed_state_type = state.type if state else None\n    intended_transition = (initial_state_type, proposed_state_type)\n\n    if force or flow_policy is None:\n        flow_policy = MinimalFlowPolicy\n\n    orchestration_rules = flow_policy.compile_transition_rules(*intended_transition)\n    global_rules = GlobalFlowPolicy.compile_transition_rules(*intended_transition)\n\n    context = FlowOrchestrationContext(\n        session=session,\n        run=run,\n        initial_state=initial_state,\n        proposed_state=state,\n    )\n\n    if orchestration_parameters is not None:\n        context.parameters = orchestration_parameters\n\n    # apply orchestration rules and create the new flow run state\n    async with contextlib.AsyncExitStack() as stack:\n        for rule in orchestration_rules:\n            context = await stack.enter_async_context(\n                rule(context, *intended_transition)\n            )\n\n        for rule in global_rules:\n            context = await stack.enter_async_context(\n                rule(context, *intended_transition)\n            )\n\n        await context.validate_proposed_state()\n\n    if context.orchestration_error is not None:\n        raise context.orchestration_error\n\n    result = OrchestrationResult(\n        state=context.validated_state,\n        status=context.response_status,\n        details=context.response_details,\n    )\n\n    # if a new state is being set (either ACCEPTED from user or REJECTED\n    # and set by the server), check for any notification policies\n    if result.status in (SetStateStatus.ACCEPT, SetStateStatus.REJECT):\n        await models.flow_run_notification_policies.queue_flow_run_notifications(\n            session=session, flow_run=run\n        )\n\n    return result\n
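Usage sketch (illustrative, not part of the module source): proposing a transition to Completed and inspecting the OrchestrationResult, assuming an open AsyncSession named session. Note that, per the source above, omitting flow_policy applies only the minimal policy plus global rules.

from prefect.server import models, schemas


async def propose_completed(session, flow_run_id):
    # The result reports whether the transition was accepted, rejected,
    # delayed, or aborted, along with the state that was actually written.
    result = await models.flow_runs.set_flow_run_state(
        session=session,
        flow_run_id=flow_run_id,
        state=schemas.states.Completed(),
    )
    print(result.status, result.state.type if result.state else None)
    return result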
"},{"location":"api-ref/server/models/flow_runs/#prefect.server.models.flow_runs.update_flow_run","title":"update_flow_run async","text":"

Updates a flow run.

Parameters:

Name Type Description Default session AsyncSession

a database session

required flow_run_id UUID

the flow run id to update

required flow_run schemas.actions.FlowRunUpdate

a flow run model

required

Returns:

Name Type Description bool bool

whether or not matching rows were found to update

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flow_runs.py
@inject_db\nasync def update_flow_run(\n    session: AsyncSession,\n    flow_run_id: UUID,\n    flow_run: schemas.actions.FlowRunUpdate,\n    db: PrefectDBInterface,\n) -> bool:\n\"\"\"\n    Updates a flow run.\n\n    Args:\n        session: a database session\n        flow_run_id: the flow run id to update\n        flow_run: a flow run model\n\n    Returns:\n        bool: whether or not matching rows were found to update\n    \"\"\"\n    update_stmt = (\n        sa.update(db.FlowRun).where(db.FlowRun.id == flow_run_id)\n        # exclude_unset=True allows us to only update values provided by\n        # the user, ignoring any defaults on the model\n        .values(**flow_run.dict(shallow=True, exclude_unset=True))\n    )\n    result = await session.execute(update_stmt)\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flows/","title":"server.models.flows","text":""},{"location":"api-ref/server/models/flows/#prefect.server.models.flows","title":"prefect.server.models.flows","text":"

Functions for interacting with flow ORM objects. Intended for internal use by the Prefect REST API.

"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.count_flows","title":"count_flows async","text":"

Count flows.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required flow_filter schemas.filters.FlowFilter

only count flows that match these filters

None flow_run_filter schemas.filters.FlowRunFilter

only count flows whose flow runs match these filters

None task_run_filter schemas.filters.TaskRunFilter

only count flows whose task runs match these filters

None deployment_filter schemas.filters.DeploymentFilter

only count flows whose deployments match these filters

None work_pool_filter schemas.filters.WorkPoolFilter

only count flows whose work pools match these filters

None

Returns:

Name Type Description int int

count of flows

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flows.py
@inject_db\nasync def count_flows(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n) -> int:\n\"\"\"\n    Count flows.\n\n    Args:\n        session: A database session\n        flow_filter: only count flows that match these filters\n        flow_run_filter: only count flows whose flow runs match these filters\n        task_run_filter: only count flows whose task runs match these filters\n        deployment_filter: only count flows whose deployments match these filters\n        work_pool_filter: only count flows whose work pools match these filters\n\n    Returns:\n        int: count of flows\n    \"\"\"\n\n    query = select(sa.func.count(sa.text(\"*\"))).select_from(db.Flow)\n\n    query = await _apply_flow_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        db=db,\n    )\n\n    result = await session.execute(query)\n    return result.scalar()\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.create_flow","title":"create_flow async","text":"

Creates a new flow.

If a flow with the same name already exists, the existing flow is returned.

Parameters:

Name Type Description Default session sa.orm.Session

a database session

required flow schemas.core.Flow

a flow model

required

Returns:

Type Description

db.Flow: the newly-created or existing flow

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flows.py
@inject_db\nasync def create_flow(\n    session: sa.orm.Session, flow: schemas.core.Flow, db: PrefectDBInterface\n):\n\"\"\"\n    Creates a new flow.\n\n    If a flow with the same name already exists, the existing flow is returned.\n\n    Args:\n        session: a database session\n        flow: a flow model\n\n    Returns:\n        db.Flow: the newly-created or existing flow\n    \"\"\"\n\n    insert_stmt = (\n        (await db.insert(db.Flow))\n        .values(**flow.dict(shallow=True, exclude_unset=True))\n        .on_conflict_do_nothing(\n            index_elements=db.flow_unique_upsert_columns,\n        )\n    )\n    await session.execute(insert_stmt)\n\n    query = (\n        sa.select(db.Flow)\n        .where(\n            db.Flow.name == flow.name,\n        )\n        .limit(1)\n        .execution_options(populate_existing=True)\n    )\n    result = await session.execute(query)\n    model = result.scalar()\n    return model\n
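Usage sketch (illustrative, not part of the module source): because flows are unique by name, repeated calls with the same name return the same row, assuming an open AsyncSession named session.

from prefect.server import models, schemas


async def get_or_create_flow(session, name: str = "my-flow"):
    first = await models.flows.create_flow(
        session=session, flow=schemas.core.Flow(name=name)
    )
    # A second call is an upsert no-op that reads back the existing flow
    again = await models.flows.create_flow(
        session=session, flow=schemas.core.Flow(name=name)
    )
    assert first.id == again.id
    return first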
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.delete_flow","title":"delete_flow async","text":"

Delete a flow by id.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required flow_id UUID

a flow id

required

Returns:

Name Type Description bool bool

whether or not the flow was deleted

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flows.py
@inject_db\nasync def delete_flow(\n    session: sa.orm.Session, flow_id: UUID, db: PrefectDBInterface\n) -> bool:\n\"\"\"\n    Delete a flow by id.\n\n    Args:\n        session: A database session\n        flow_id: a flow id\n\n    Returns:\n        bool: whether or not the flow was deleted\n    \"\"\"\n\n    result = await session.execute(delete(db.Flow).where(db.Flow.id == flow_id))\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.read_flow","title":"read_flow async","text":"

Reads a flow by id.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required flow_id UUID

a flow id

required

Returns:

Type Description

db.Flow: the flow

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flows.py
@inject_db\nasync def read_flow(session: sa.orm.Session, flow_id: UUID, db: PrefectDBInterface):\n\"\"\"\n    Reads a flow by id.\n\n    Args:\n        session: A database session\n        flow_id: a flow id\n\n    Returns:\n        db.Flow: the flow\n    \"\"\"\n    return await session.get(db.Flow, flow_id)\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.read_flow_by_name","title":"read_flow_by_name async","text":"

Reads a flow by name.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required name str

a flow name

required

Returns:

Type Description

db.Flow: the flow

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flows.py
@inject_db\nasync def read_flow_by_name(session: sa.orm.Session, name: str, db: PrefectDBInterface):\n\"\"\"\n    Reads a flow by name.\n\n    Args:\n        session: A database session\n        name: a flow name\n\n    Returns:\n        db.Flow: the flow\n    \"\"\"\n\n    result = await session.execute(select(db.Flow).filter_by(name=name))\n    return result.scalar()\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.read_flows","title":"read_flows async","text":"

Read multiple flows.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required flow_filter schemas.filters.FlowFilter

only select flows that match these filters

None flow_run_filter schemas.filters.FlowRunFilter

only select flows whose flow runs match these filters

None task_run_filter schemas.filters.TaskRunFilter

only select flows whose task runs match these filters

None deployment_filter schemas.filters.DeploymentFilter

only select flows whose deployments match these filters

None work_pool_filter schemas.filters.WorkPoolFilter

only select flows whose work pools match these filters

None offset int

Query offset

None limit int

Query limit

None

Returns:

Type Description

List[db.Flow]: flows

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flows.py
@inject_db\nasync def read_flows(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    work_pool_filter: schemas.filters.WorkPoolFilter = None,\n    sort: schemas.sorting.FlowSort = schemas.sorting.FlowSort.NAME_ASC,\n    offset: int = None,\n    limit: int = None,\n):\n\"\"\"\n    Read multiple flows.\n\n    Args:\n        session: A database session\n        flow_filter: only select flows that match these filters\n        flow_run_filter: only select flows whose flow runs match these filters\n        task_run_filter: only select flows whose task runs match these filters\n        deployment_filter: only select flows whose deployments match these filters\n        work_pool_filter: only select flows whose work pools match these filters\n        offset: Query offset\n        limit: Query limit\n\n    Returns:\n        List[db.Flow]: flows\n    \"\"\"\n\n    query = select(db.Flow).order_by(sort.as_sql_sort(db=db))\n\n    query = await _apply_flow_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        work_pool_filter=work_pool_filter,\n        db=db,\n    )\n\n    if offset is not None:\n        query = query.offset(offset)\n\n    if limit is not None:\n        query = query.limit(limit)\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/flows/#prefect.server.models.flows.update_flow","title":"update_flow async","text":"

Updates a flow.

Parameters:

Name Type Description Default session sa.orm.Session

a database session

required flow_id UUID

the flow id to update

required flow schemas.actions.FlowUpdate

a flow update model

required

Returns:

Name Type Description bool

whether or not matching rows were found to update

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/flows.py
@inject_db\nasync def update_flow(\n    session: sa.orm.Session,\n    flow_id: UUID,\n    flow: schemas.actions.FlowUpdate,\n    db: PrefectDBInterface,\n):\n\"\"\"\n    Updates a flow.\n\n    Args:\n        session: a database session\n        flow_id: the flow id to update\n        flow: a flow update model\n\n    Returns:\n        bool: whether or not matching rows were found to update\n    \"\"\"\n    update_stmt = (\n        sa.update(db.Flow).where(db.Flow.id == flow_id)\n        # exclude_unset=True allows us to only update values provided by\n        # the user, ignoring any defaults on the model\n        .values(**flow.dict(shallow=True, exclude_unset=True))\n    )\n    result = await session.execute(update_stmt)\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/saved_searches/","title":"server.models.saved_searches","text":""},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches","title":"prefect.server.models.saved_searches","text":"

Functions for interacting with saved search ORM objects. Intended for internal use by the Prefect REST API.

"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.create_saved_search","title":"create_saved_search async","text":"

Upserts a SavedSearch.

If a SavedSearch with the same name exists, all properties will be updated.

Parameters:

Name Type Description Default session sa.orm.Session

a database session

required saved_search schemas.core.SavedSearch

a SavedSearch model

required

Returns:

Type Description

db.SavedSearch: the newly-created or updated SavedSearch

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/saved_searches.py
@inject_db\nasync def create_saved_search(\n    session: sa.orm.Session,\n    saved_search: schemas.core.SavedSearch,\n    db: PrefectDBInterface,\n):\n\"\"\"\n    Upserts a SavedSearch.\n\n    If a SavedSearch with the same name exists, all properties will be updated.\n\n    Args:\n        session (sa.orm.Session): a database session\n        saved_search (schemas.core.SavedSearch): a SavedSearch model\n\n    Returns:\n        db.SavedSearch: the newly-created or updated SavedSearch\n\n    \"\"\"\n\n    insert_stmt = (\n        (await db.insert(db.SavedSearch))\n        .values(**saved_search.dict(shallow=True, exclude_unset=True))\n        .on_conflict_do_update(\n            index_elements=db.saved_search_unique_upsert_columns,\n            set_=saved_search.dict(shallow=True, include={\"filters\"}),\n        )\n    )\n\n    await session.execute(insert_stmt)\n\n    query = (\n        sa.select(db.SavedSearch)\n        .where(\n            db.SavedSearch.name == saved_search.name,\n        )\n        .execution_options(populate_existing=True)\n    )\n    result = await session.execute(query)\n    model = result.scalar()\n\n    return model\n
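Usage sketch (illustrative, not part of the module source): upserting a SavedSearch by name, assuming an open AsyncSession named session; the filters field is omitted for brevity, but a later call with the same name would replace it per the conflict clause above.

from prefect.server import models, schemas


async def upsert_saved_search(session, name: str = "my-search"):
    # Creates the search on first call; subsequent calls update it in place
    return await models.saved_searches.create_saved_search(
        session=session,
        saved_search=schemas.core.SavedSearch(name=name),
    )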
"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.delete_saved_search","title":"delete_saved_search async","text":"

Delete a SavedSearch by id.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required saved_search_id str

a SavedSearch id

required

Returns:

Name Type Description bool bool

whether or not the SavedSearch was deleted

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/saved_searches.py
@inject_db\nasync def delete_saved_search(\n    session: sa.orm.Session, saved_search_id: UUID, db: PrefectDBInterface\n) -> bool:\n\"\"\"\n    Delete a SavedSearch by id.\n\n    Args:\n        session (sa.orm.Session): A database session\n        saved_search_id (str): a SavedSearch id\n\n    Returns:\n        bool: whether or not the SavedSearch was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.SavedSearch).where(db.SavedSearch.id == saved_search_id)\n    )\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.read_saved_search","title":"read_saved_search async","text":"

Reads a SavedSearch by id.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required saved_search_id str

a SavedSearch id

required

Returns:

Type Description

db.SavedSearch: the SavedSearch

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/saved_searches.py
@inject_db\nasync def read_saved_search(\n    session: sa.orm.Session, saved_search_id: UUID, db: PrefectDBInterface\n):\n\"\"\"\n    Reads a SavedSearch by id.\n\n    Args:\n        session (sa.orm.Session): A database session\n        saved_search_id (str): a SavedSearch id\n\n    Returns:\n        db.SavedSearch: the SavedSearch\n    \"\"\"\n\n    return await session.get(db.SavedSearch, saved_search_id)\n
"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.read_saved_search_by_name","title":"read_saved_search_by_name async","text":"

Reads a SavedSearch by name.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required name str

a SavedSearch name

required

Returns:

Type Description

db.SavedSearch: the SavedSearch

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/saved_searches.py
@inject_db\nasync def read_saved_search_by_name(\n    session: sa.orm.Session, name: str, db: PrefectDBInterface\n):\n\"\"\"\n    Reads a SavedSearch by name.\n\n    Args:\n        session (sa.orm.Session): A database session\n        name (str): a SavedSearch name\n\n    Returns:\n        db.SavedSearch: the SavedSearch\n    \"\"\"\n    result = await session.execute(\n        select(db.SavedSearch).where(db.SavedSearch.name == name).limit(1)\n    )\n    return result.scalar()\n
"},{"location":"api-ref/server/models/saved_searches/#prefect.server.models.saved_searches.read_saved_searches","title":"read_saved_searches async","text":"

Read SavedSearches.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required offset int

Query offset

None limit int

Query limit

None

Returns:

Type Description

List[db.SavedSearch]: SavedSearches

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/saved_searches.py
@inject_db\nasync def read_saved_searches(\n    db: PrefectDBInterface,\n    session: sa.orm.Session,\n    offset: int = None,\n    limit: int = None,\n):\n\"\"\"\n    Read SavedSearches.\n\n    Args:\n        session (sa.orm.Session): A database session\n        offset (int): Query offset\n        limit(int): Query limit\n\n    Returns:\n        List[db.SavedSearch]: SavedSearches\n    \"\"\"\n\n    query = select(db.SavedSearch).order_by(db.SavedSearch.name)\n\n    if offset is not None:\n        query = query.offset(offset)\n    if limit is not None:\n        query = query.limit(limit)\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/task_run_states/","title":"server.models.task_run_states","text":""},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states","title":"prefect.server.models.task_run_states","text":"

Functions for interacting with task run state ORM objects. Intended for internal use by the Prefect REST API.

"},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states.delete_task_run_state","title":"delete_task_run_state async","text":"

Delete a task run state by id.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required task_run_state_id UUID

a task run state id

required

Returns:

Name Type Description bool bool

whether or not the task run state was deleted

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_run_states.py
@inject_db\nasync def delete_task_run_state(\n    session: sa.orm.Session, task_run_state_id: UUID, db: PrefectDBInterface\n) -> bool:\n\"\"\"\n    Delete a task run state by id.\n\n    Args:\n        session: A database session\n        task_run_state_id: a task run state id\n\n    Returns:\n        bool: whether or not the task run state was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.TaskRunState).where(db.TaskRunState.id == task_run_state_id)\n    )\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states.read_task_run_state","title":"read_task_run_state async","text":"

Reads a task run state by id.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required task_run_state_id UUID

a task run state id

required

Returns:

Type Description

db.TaskRunState: the task state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_run_states.py
@inject_db\nasync def read_task_run_state(\n    session: sa.orm.Session, task_run_state_id: UUID, db: PrefectDBInterface\n):\n\"\"\"\n    Reads a task run state by id.\n\n    Args:\n        session: A database session\n        task_run_state_id: a task run state id\n\n    Returns:\n        db.TaskRunState: the task state\n    \"\"\"\n\n    return await session.get(db.TaskRunState, task_run_state_id)\n
"},{"location":"api-ref/server/models/task_run_states/#prefect.server.models.task_run_states.read_task_run_states","title":"read_task_run_states async","text":"

Reads task run states for a task run.

Parameters:

Name Type Description Default session sa.orm.Session

A database session

required task_run_id UUID

the task run id

required

Returns:

Type Description

List[db.TaskRunState]: the task run states

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_run_states.py
@inject_db\nasync def read_task_run_states(\n    session: sa.orm.Session, task_run_id: UUID, db: PrefectDBInterface\n):\n\"\"\"\n    Reads task runs states for a task run.\n\n    Args:\n        session: A database session\n        task_run_id: the task run id\n\n    Returns:\n        List[db.TaskRunState]: the task run states\n    \"\"\"\n\n    query = (\n        select(db.TaskRunState)\n        .filter_by(task_run_id=task_run_id)\n        .order_by(db.TaskRunState.timestamp)\n    )\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/task_runs/","title":"server.models.task_runs","text":""},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs","title":"prefect.server.models.task_runs","text":"

Functions for interacting with task run ORM objects. Intended for internal use by the Prefect REST API.

"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.count_task_runs","title":"count_task_runs async","text":"

Count task runs.

Parameters:

Name Type Description Default session sa.orm.Session

a database session

required flow_filter schemas.filters.FlowFilter

only count task runs whose flows match these filters

None flow_run_filter schemas.filters.FlowRunFilter

only count task runs whose flow runs match these filters

None task_run_filter schemas.filters.TaskRunFilter

only count task runs that match these filters

None deployment_filter schemas.filters.DeploymentFilter

only count task runs whose deployments match these filters

None

Returns:

Name Type Description int int

count of task runs

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_runs.py
@inject_db\nasync def count_task_runs(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n) -> int:\n\"\"\"\n    Count task runs.\n\n    Args:\n        session: a database session\n        flow_filter: only count task runs whose flows match these filters\n        flow_run_filter: only count task runs whose flow runs match these filters\n        task_run_filter: only count task runs that match these filters\n        deployment_filter: only count task runs whose deployments match these filters\n    Returns:\n        int: count of task runs\n    \"\"\"\n\n    query = select(sa.func.count(sa.text(\"*\"))).select_from(db.TaskRun)\n\n    query = await _apply_task_run_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        db=db,\n    )\n\n    result = await session.execute(query)\n    return result.scalar()\n
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.create_task_run","title":"create_task_run async","text":"

Creates a new task run.

If a task run with the same flow_run_id, task_key, and dynamic_key already exists, the existing task run will be returned. If the provided task run has a state attached, it will also be created.

Parameters:

Name Type Description Default session sa.orm.Session

a database session

required task_run schemas.core.TaskRun

a task run model

required

Returns:

Type Description

db.TaskRun: the newly-created or existing task run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_runs.py
@inject_db\nasync def create_task_run(\n    session: sa.orm.Session,\n    task_run: schemas.core.TaskRun,\n    db: PrefectDBInterface,\n    orchestration_parameters: dict = None,\n):\n\"\"\"\n    Creates a new task run.\n\n    If a task run with the same flow_run_id, task_key, and dynamic_key already exists,\n    the existing task run will be returned. If the provided task run has a state\n    attached, it will also be created.\n\n    Args:\n        session: a database session\n        task_run: a task run model\n\n    Returns:\n        db.TaskRun: the newly-created or existing task run\n    \"\"\"\n\n    now = pendulum.now(\"UTC\")\n\n    # if a dynamic key exists, we need to guard against conflicts\n    insert_stmt = (\n        (await db.insert(db.TaskRun))\n        .values(\n            created=now,\n            **task_run.dict(\n                shallow=True, exclude={\"state\", \"created\"}, exclude_unset=True\n            ),\n        )\n        .on_conflict_do_nothing(\n            index_elements=db.task_run_unique_upsert_columns,\n        )\n    )\n    await session.execute(insert_stmt)\n\n    query = (\n        sa.select(db.TaskRun)\n        .where(\n            sa.and_(\n                db.TaskRun.flow_run_id == task_run.flow_run_id,\n                db.TaskRun.task_key == task_run.task_key,\n                db.TaskRun.dynamic_key == task_run.dynamic_key,\n            )\n        )\n        .limit(1)\n        .execution_options(populate_existing=True)\n    )\n    result = await session.execute(query)\n    model = result.scalar()\n\n    if model.created == now and task_run.state:\n        await models.task_runs.set_task_run_state(\n            session=session,\n            task_run_id=model.id,\n            state=task_run.state,\n            force=True,\n            orchestration_parameters=orchestration_parameters,\n        )\n    return model\n
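Usage sketch (illustrative, not part of the module source): creating a task run, assuming an open AsyncSession named session and the id of an existing flow run; task_key and dynamic_key are placeholder values. Re-running the call with the same triple returns the existing row rather than inserting a duplicate.

from prefect.server import models, schemas


async def create_task_run_once(session, flow_run_id):
    return await models.task_runs.create_task_run(
        session=session,
        task_run=schemas.core.TaskRun(
            flow_run_id=flow_run_id,
            task_key="my_task",
            dynamic_key="0",
        ),
    )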
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.delete_task_run","title":"delete_task_run async","text":"

Delete a task run by id.

Parameters:

Name Type Description Default session sa.orm.Session

a database session

required task_run_id UUID

the task run id to delete

required

Returns:

Name Type Description bool bool

whether or not the task run was deleted

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_runs.py
@inject_db\nasync def delete_task_run(\n    session: sa.orm.Session, task_run_id: UUID, db: PrefectDBInterface\n) -> bool:\n\"\"\"\n    Delete a task run by id.\n\n    Args:\n        session: a database session\n        task_run_id: the task run id to delete\n\n    Returns:\n        bool: whether or not the task run was deleted\n    \"\"\"\n\n    result = await session.execute(\n        delete(db.TaskRun).where(db.TaskRun.id == task_run_id)\n    )\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.read_task_run","title":"read_task_run async","text":"

Read a task run by id.

Parameters:

Name Type Description Default session sa.orm.Session

a database session

required task_run_id UUID

the task run id

required

Returns:

Type Description

db.TaskRun: the task run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_runs.py
@inject_db\nasync def read_task_run(\n    session: sa.orm.Session, task_run_id: UUID, db: PrefectDBInterface\n):\n\"\"\"\n    Read a task run by id.\n\n    Args:\n        session: a database session\n        task_run_id: the task run id\n\n    Returns:\n        db.TaskRun: the task run\n    \"\"\"\n\n    model = await session.get(db.TaskRun, task_run_id)\n    return model\n
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.read_task_runs","title":"read_task_runs async","text":"

Read task runs.

Parameters:

Name Type Description Default session sa.orm.Session

a database session

required flow_filter schemas.filters.FlowFilter

only select task runs whose flows match these filters

None flow_run_filter schemas.filters.FlowRunFilter

only select task runs whose flow runs match these filters

None task_run_filter schemas.filters.TaskRunFilter

only select task runs that match these filters

None deployment_filter schemas.filters.DeploymentFilter

only select task runs whose deployments match these filters

None offset int

Query offset

None limit int

Query limit

None sort schemas.sorting.TaskRunSort

Query sort

schemas.sorting.TaskRunSort.ID_DESC

Returns:

Type Description

List[db.TaskRun]: the task runs

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_runs.py
@inject_db\nasync def read_task_runs(\n    session: sa.orm.Session,\n    db: PrefectDBInterface,\n    flow_filter: schemas.filters.FlowFilter = None,\n    flow_run_filter: schemas.filters.FlowRunFilter = None,\n    task_run_filter: schemas.filters.TaskRunFilter = None,\n    deployment_filter: schemas.filters.DeploymentFilter = None,\n    offset: int = None,\n    limit: int = None,\n    sort: schemas.sorting.TaskRunSort = schemas.sorting.TaskRunSort.ID_DESC,\n):\n\"\"\"\n    Read task runs.\n\n    Args:\n        session: a database session\n        flow_filter: only select task runs whose flows match these filters\n        flow_run_filter: only select task runs whose flow runs match these filters\n        task_run_filter: only select task runs that match these filters\n        deployment_filter: only select task runs whose deployments match these filters\n        offset: Query offset\n        limit: Query limit\n        sort: Query sort\n\n    Returns:\n        List[db.TaskRun]: the task runs\n    \"\"\"\n\n    query = select(db.TaskRun).order_by(sort.as_sql_sort(db))\n\n    query = await _apply_task_run_filters(\n        query,\n        flow_filter=flow_filter,\n        flow_run_filter=flow_run_filter,\n        task_run_filter=task_run_filter,\n        deployment_filter=deployment_filter,\n        db=db,\n    )\n\n    if offset is not None:\n        query = query.offset(offset)\n\n    if limit is not None:\n        query = query.limit(limit)\n\n    result = await session.execute(query)\n    return result.scalars().unique().all()\n
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.set_task_run_state","title":"set_task_run_state async","text":"

Creates a new orchestrated task run state.

Setting a new state on a run is one of the principal actions governed by Prefect's orchestration logic. Setting a new run state will not guarantee creation, but will instead trigger orchestration rules to govern the proposed state input. If the state is considered valid, it will be written to the database. Otherwise, it's possible a different state, or no state, will be created. A force flag is supplied to bypass a subset of orchestration logic.

Parameters:

Name Type Description Default session sa.orm.Session

a database session

required task_run_id UUID

the task run id

required state schemas.states.State

a task run state model

required force bool

if False, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied.

False

Returns:

Type Description OrchestrationResult

OrchestrationResult object

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_runs.py
async def set_task_run_state(\n    session: sa.orm.Session,\n    task_run_id: UUID,\n    state: schemas.states.State,\n    force: bool = False,\n    task_policy: BaseOrchestrationPolicy = None,\n    orchestration_parameters: dict = None,\n) -> OrchestrationResult:\n\"\"\"\n    Creates a new orchestrated task run state.\n\n    Setting a new state on a run is the one of the principal actions that is governed by\n    Prefect's orchestration logic. Setting a new run state will not guarantee creation,\n    but instead trigger orchestration rules to govern the proposed `state` input. If\n    the state is considered valid, it will be written to the database. Otherwise, a\n    it's possible a different state, or no state, will be created. A `force` flag is\n    supplied to bypass a subset of orchestration logic.\n\n    Args:\n        session: a database session\n        task_run_id: the task run id\n        state: a task run state model\n        force: if False, orchestration rules will be applied that may alter or prevent\n            the state transition. If True, orchestration rules are not applied.\n\n    Returns:\n        OrchestrationResult object\n    \"\"\"\n\n    # load the task run\n    run = await models.task_runs.read_task_run(session=session, task_run_id=task_run_id)\n\n    if not run:\n        raise ObjectNotFoundError(f\"Task run with id {task_run_id} not found\")\n\n    initial_state = run.state.as_state() if run.state else None\n    initial_state_type = initial_state.type if initial_state else None\n    proposed_state_type = state.type if state else None\n    intended_transition = (initial_state_type, proposed_state_type)\n\n    if force or task_policy is None:\n        task_policy = MinimalTaskPolicy\n\n    orchestration_rules = task_policy.compile_transition_rules(*intended_transition)\n    global_rules = GlobalTaskPolicy.compile_transition_rules(*intended_transition)\n\n    context = TaskOrchestrationContext(\n        session=session,\n        run=run,\n        initial_state=initial_state,\n        proposed_state=state,\n    )\n\n    if orchestration_parameters is not None:\n        context.parameters = orchestration_parameters\n\n    # apply orchestration rules and create the new task run state\n    async with contextlib.AsyncExitStack() as stack:\n        for rule in orchestration_rules:\n            context = await stack.enter_async_context(\n                rule(context, *intended_transition)\n            )\n\n        for rule in global_rules:\n            context = await stack.enter_async_context(\n                rule(context, *intended_transition)\n            )\n\n        await context.validate_proposed_state()\n\n    if context.orchestration_error is not None:\n        raise context.orchestration_error\n\n    result = OrchestrationResult(\n        state=context.validated_state,\n        status=context.response_status,\n        details=context.response_details,\n    )\n\n    return result\n
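Usage sketch (illustrative, not part of the module source): forcing a task run into a Failed state, assuming an open AsyncSession named session. As the source above shows, force=True swaps in the minimal policy, so most orchestration rules are skipped.

from prefect.server import models, schemas


async def force_fail_task_run(session, task_run_id):
    result = await models.task_runs.set_task_run_state(
        session=session,
        task_run_id=task_run_id,
        state=schemas.states.Failed(),
        force=True,
    )
    return result.state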
"},{"location":"api-ref/server/models/task_runs/#prefect.server.models.task_runs.update_task_run","title":"update_task_run async","text":"

Updates a task run.

Parameters:

Name Type Description Default session AsyncSession

a database session

required task_run_id UUID

the task run id to update

required task_run schemas.actions.TaskRunUpdate

a task run model

required

Returns:

Name Type Description bool bool

whether or not matching rows were found to update

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/models/task_runs.py
@inject_db\nasync def update_task_run(\n    session: AsyncSession,\n    task_run_id: UUID,\n    task_run: schemas.actions.TaskRunUpdate,\n    db: PrefectDBInterface,\n) -> bool:\n\"\"\"\n    Updates a task run.\n\n    Args:\n        session: a database session\n        task_run_id: the task run id to update\n        task_run: a task run model\n\n    Returns:\n        bool: whether or not matching rows were found to update\n    \"\"\"\n    update_stmt = (\n        sa.update(db.TaskRun).where(db.TaskRun.id == task_run_id)\n        # exclude_unset=True allows us to only update values provided by\n        # the user, ignoring any defaults on the model\n        .values(**task_run.dict(shallow=True, exclude_unset=True))\n    )\n    result = await session.execute(update_stmt)\n    return result.rowcount > 0\n
"},{"location":"api-ref/server/orchestration/core_policy/","title":"server.orchestration.core_policy","text":""},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy","title":"prefect.server.orchestration.core_policy","text":"

Orchestration logic that fires on state transitions.

CoreFlowPolicy and CoreTaskPolicy contain all default orchestration rules that Prefect enforces on a state transition.

"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CoreFlowPolicy","title":"CoreFlowPolicy","text":"

Bases: BaseOrchestrationPolicy

Orchestration rules that run against flow-run-state transitions in priority order.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class CoreFlowPolicy(BaseOrchestrationPolicy):\n\"\"\"\n    Orchestration rules that run against flow-run-state transitions in priority order.\n    \"\"\"\n\n    def priority():\n        return [\n            HandleFlowTerminalStateTransitions,\n            EnforceCancellingToCancelledTransition,\n            BypassCancellingScheduledFlowRuns,\n            PreventPendingTransitions,\n            HandlePausingFlows,\n            HandleResumingPausedFlows,\n            CopyScheduledTime,\n            WaitForScheduledTime,\n            RetryFailedFlows,\n        ]\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CoreTaskPolicy","title":"CoreTaskPolicy","text":"

Bases: BaseOrchestrationPolicy

Orchestration rules that run against task-run-state transitions in priority order.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class CoreTaskPolicy(BaseOrchestrationPolicy):\n\"\"\"\n    Orchestration rules that run against task-run-state transitions in priority order.\n    \"\"\"\n\n    def priority():\n        return [\n            CacheRetrieval,\n            HandleTaskTerminalStateTransitions,\n            PreventRunningTasksFromStoppedFlows,\n            SecureTaskConcurrencySlots,  # retrieve cached states even if slots are full\n            CopyScheduledTime,\n            WaitForScheduledTime,\n            RetryFailedTasks,\n            RenameReruns,\n            UpdateFlowRunTrackerOnTasks,\n            CacheInsertion,\n            ReleaseTaskConcurrencySlots,\n        ]\n
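Illustrative sketch (not part of the module source): a reduced custom policy assembled from the rule classes above, which could then be passed as the flow_policy argument of set_flow_run_state. The import paths below are assumptions based on the module layout shown in these pages; production code would normally keep the full CoreFlowPolicy rule set.

from prefect.server.orchestration.core_policy import (
    CopyScheduledTime,
    HandleFlowTerminalStateTransitions,
    RetryFailedFlows,
    WaitForScheduledTime,
)
from prefect.server.orchestration.policies import BaseOrchestrationPolicy


class LeanFlowPolicy(BaseOrchestrationPolicy):
    # Same structure as CoreFlowPolicy: priority() returns rules in order
    def priority():
        return [
            HandleFlowTerminalStateTransitions,
            CopyScheduledTime,
            WaitForScheduledTime,
            RetryFailedFlows,
        ]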
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.SecureTaskConcurrencySlots","title":"SecureTaskConcurrencySlots","text":"

Bases: BaseOrchestrationRule

Checks relevant concurrency slots are available before entering a Running state.

This rule checks if concurrency limits have been set on the tags associated with a TaskRun. If so, a concurrency slot will be secured against each concurrency limit before being allowed to transition into a running state. If a concurrency limit has been reached, the client will be instructed to delay the transition for 30 seconds before trying again. If the concurrency limit set on a tag is 0, the transition will be aborted to prevent deadlocks.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class SecureTaskConcurrencySlots(BaseOrchestrationRule):\n\"\"\"\n    Checks relevant concurrency slots are available before entering a Running state.\n\n    This rule checks if concurrency limits have been set on the tags associated with a\n    TaskRun. If so, a concurrency slot will be secured against each concurrency limit\n    before being allowed to transition into a running state. If a concurrency limit has\n    been reached, the client will be instructed to delay the transition for 30 seconds\n    before trying again. If the concurrency limit set on a tag is 0, the transition will\n    be aborted to prevent deadlocks.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        self._applied_limits = []\n        filtered_limits = (\n            await concurrency_limits.filter_concurrency_limits_for_orchestration(\n                context.session, tags=context.run.tags\n            )\n        )\n        run_limits = {limit.tag: limit for limit in filtered_limits}\n        for tag, cl in run_limits.items():\n            limit = cl.concurrency_limit\n            if limit == 0:\n                # limits of 0 will deadlock, and the transition needs to abort\n                for stale_tag in self._applied_limits:\n                    stale_limit = run_limits.get(stale_tag, None)\n                    active_slots = set(stale_limit.active_slots)\n                    active_slots.discard(str(context.run.id))\n                    stale_limit.active_slots = list(active_slots)\n\n                await self.abort_transition(\n                    reason=(\n                        f'The concurrency limit on tag \"{tag}\" is 0 and will deadlock'\n                        \" if the task tries to run again.\"\n                    ),\n                )\n            elif len(cl.active_slots) >= limit:\n                # if the limit has already been reached, delay the transition\n                for stale_tag in self._applied_limits:\n                    stale_limit = run_limits.get(stale_tag, None)\n                    active_slots = set(stale_limit.active_slots)\n                    active_slots.discard(str(context.run.id))\n                    stale_limit.active_slots = list(active_slots)\n\n                await self.delay_transition(\n                    30,\n                    f\"Concurrency limit for the {tag} tag has been reached\",\n                )\n            else:\n                # log the TaskRun ID to active_slots\n                self._applied_limits.append(tag)\n                active_slots = set(cl.active_slots)\n                active_slots.add(str(context.run.id))\n                cl.active_slots = list(active_slots)\n\n    async def cleanup(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        for tag in self._applied_limits:\n            cl = await concurrency_limits.read_concurrency_limit_by_tag(\n                context.session, tag\n            )\n            active_slots = set(cl.active_slots)\n            active_slots.discard(str(context.run.id))\n            cl.active_slots = list(active_slots)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.ReleaseTaskConcurrencySlots","title":"ReleaseTaskConcurrencySlots","text":"

Bases: BaseUniversalTransform

Releases any concurrency slots held by a run upon exiting a Running or Cancelling state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class ReleaseTaskConcurrencySlots(BaseUniversalTransform):\n\"\"\"\n    Releases any concurrency slots held by a run upon exiting a Running or\n    Cancelling state.\n    \"\"\"\n\n    async def after_transition(\n        self,\n        context: OrchestrationContext,\n    ):\n        if self.nullified_transition():\n            return\n\n        if context.validated_state and context.validated_state.type not in [\n            states.StateType.RUNNING,\n            states.StateType.CANCELLING,\n        ]:\n            filtered_limits = (\n                await concurrency_limits.filter_concurrency_limits_for_orchestration(\n                    context.session, tags=context.run.tags\n                )\n            )\n            run_limits = {limit.tag: limit for limit in filtered_limits}\n            for tag, cl in run_limits.items():\n                active_slots = set(cl.active_slots)\n                active_slots.discard(str(context.run.id))\n                cl.active_slots = list(active_slots)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CacheInsertion","title":"CacheInsertion","text":"

Bases: BaseOrchestrationRule

Caches completed states with cache keys after they are validated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class CacheInsertion(BaseOrchestrationRule):\n\"\"\"\n    Caches completed states with cache keys after they are validated.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.COMPLETED]\n\n    @inject_db\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n        db: PrefectDBInterface,\n    ) -> None:\n        if not validated_state or not context.session:\n            return\n\n        cache_key = validated_state.state_details.cache_key\n        if cache_key:\n            new_cache_item = db.TaskRunStateCache(\n                cache_key=cache_key,\n                cache_expiration=validated_state.state_details.cache_expiration,\n                task_run_state_id=validated_state.id,\n            )\n            context.session.add(new_cache_item)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CacheRetrieval","title":"CacheRetrieval","text":"

Bases: BaseOrchestrationRule

Rejects running states if a completed state has been cached.

This rule rejects transitions into a running state with a cache key if the key has already been associated with a completed state in the cache table. The client will be instructed to transition into the cached completed state instead.
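A minimal sketch of the client-side caching options that exercise this rule, using the standard cache_key_fn and cache_expiration task arguments; the inputs and expiration window are illustrative:

from datetime import timedelta\n\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\n\n# The second call with identical inputs inside the expiration window is expected to be\n# rejected into the cached Completed state by this rule instead of re-running.\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))\ndef add(x: int, y: int) -> int:\n    return x + y\n\n\n@flow\ndef demo():\n    first = add(1, 2)   # computed and cached\n    second = add(1, 2)  # served from the cache\n    return first, second\n\n\nif __name__ == \"__main__\":\n    demo()\n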

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class CacheRetrieval(BaseOrchestrationRule):\n\"\"\"\n    Rejects running states if a completed state has been cached.\n\n    This rule rejects transitions into a running state with a cache key if the key\n    has already been associated with a completed state in the cache table. The client\n    will be instructed to transition into the cached completed state instead.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    @inject_db\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n        db: PrefectDBInterface,\n    ) -> None:\n        cache_key = proposed_state.state_details.cache_key\n        if cache_key and not proposed_state.state_details.refresh_cache:\n            # Check for cached states matching the cache key\n            cached_state_id = (\n                select(db.TaskRunStateCache.task_run_state_id)\n                .where(\n                    sa.and_(\n                        db.TaskRunStateCache.cache_key == cache_key,\n                        sa.or_(\n                            db.TaskRunStateCache.cache_expiration.is_(None),\n                            db.TaskRunStateCache.cache_expiration > pendulum.now(\"utc\"),\n                        ),\n                    ),\n                )\n                .order_by(db.TaskRunStateCache.created.desc())\n                .limit(1)\n            ).scalar_subquery()\n            query = select(db.TaskRunState).where(db.TaskRunState.id == cached_state_id)\n            cached_state = (await context.session.execute(query)).scalar()\n            if cached_state:\n                new_state = cached_state.as_state().copy(reset_fields=True)\n                new_state.name = \"Cached\"\n                await self.reject_transition(\n                    state=new_state, reason=\"Retrieved state from cache\"\n                )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.RetryFailedFlows","title":"RetryFailedFlows","text":"

Bases: BaseOrchestrationRule

Rejects failed states and schedules a retry if the retry limit has not been reached.

This rule rejects transitions into a failed state if retries has been set and the run count has not reached the specified limit. The client will be instructed to transition into a scheduled state to retry flow execution.
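A minimal sketch of a flow whose failures would be governed by this rule; the retry count, delay, and failure condition are illustrative only:

from prefect import flow\n\nATTEMPTS = {\"count\": 0}\n\n\n# `retries` and `retry_delay_seconds` are standard flow options; the values are illustrative.\n@flow(retries=2, retry_delay_seconds=5)\ndef fragile_flow():\n    ATTEMPTS[\"count\"] += 1\n    if ATTEMPTS[\"count\"] < 3:\n        # each failure is rejected into an AwaitingRetry scheduled state\n        raise RuntimeError(\"transient failure\")\n    return \"succeeded on the final retry\"\n\n\nif __name__ == \"__main__\":\n    fragile_flow()\n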

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class RetryFailedFlows(BaseOrchestrationRule):\n\"\"\"\n    Rejects failed states and schedules a retry if the retry limit has not been reached.\n\n    This rule rejects transitions into a failed state if `retries` has been\n    set and the run count has not reached the specified limit. The client will be\n    instructed to transition into a scheduled state to retry flow execution.\n    \"\"\"\n\n    FROM_STATES = [StateType.RUNNING]\n    TO_STATES = [StateType.FAILED]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: FlowOrchestrationContext,\n    ) -> None:\n        run_settings = context.run_settings\n        run_count = context.run.run_count\n\n        if run_settings.retries is None or run_count > run_settings.retries:\n            return  # Retry count exceeded, allow transition to failed\n\n        scheduled_start_time = pendulum.now(\"UTC\").add(\n            seconds=run_settings.retry_delay or 0\n        )\n\n        # support old-style flow run retries for older clients\n        # older flow retries require us to loop over failed tasks to update their state\n        # this is not required after API version 0.8.3\n        api_version = context.parameters.get(\"api-version\", None)\n        if api_version and api_version < Version(\"0.8.3\"):\n            failed_task_runs = await models.task_runs.read_task_runs(\n                context.session,\n                flow_run_filter=filters.FlowRunFilter(id={\"any_\": [context.run.id]}),\n                task_run_filter=filters.TaskRunFilter(\n                    state={\"type\": {\"any_\": [\"FAILED\"]}}\n                ),\n            )\n            for run in failed_task_runs:\n                await models.task_runs.set_task_run_state(\n                    context.session,\n                    run.id,\n                    state=states.AwaitingRetry(scheduled_time=scheduled_start_time),\n                    force=True,\n                )\n                # Reset the run count so that the task run retries still work correctly\n                run.run_count = 0\n\n        # Reset pause metadata on retry\n        # Pauses as a concept only exist after API version 0.8.4\n        api_version = context.parameters.get(\"api-version\", None)\n        if api_version is None or api_version >= Version(\"0.8.4\"):\n            updated_policy = context.run.empirical_policy.dict()\n            updated_policy[\"resuming\"] = False\n            updated_policy[\"pause_keys\"] = set()\n            context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n\n        # Generate a new state for the flow\n        retry_state = states.AwaitingRetry(\n            scheduled_time=scheduled_start_time,\n            message=proposed_state.message,\n            data=proposed_state.data,\n        )\n        await self.reject_transition(state=retry_state, reason=\"Retrying\")\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.RetryFailedTasks","title":"RetryFailedTasks","text":"

Bases: BaseOrchestrationRule

Rejects failed states and schedules a retry if the retry limit has not been reached.

This rule rejects transitions into a failed state if retries has been set and the run count has not reached the specified limit. The client will be instructed to transition into a scheduled state to retry task execution.
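A minimal sketch of a task whose failures would be governed by this rule; the retry count, per-retry delays, and jitter factor are illustrative only:

from prefect import flow, task\n\n\n# `retry_delay_seconds` may be a list (one delay per retry) and `retry_jitter_factor`\n# adds random jitter around each delay; the values below are illustrative.\n@task(retries=3, retry_delay_seconds=[1, 10, 60], retry_jitter_factor=0.5)\ndef flaky_task():\n    raise RuntimeError(\"still failing\")\n\n\n@flow\ndef demo():\n    flaky_task()\n\n\nif __name__ == \"__main__\":\n    demo()\n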

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class RetryFailedTasks(BaseOrchestrationRule):\n\"\"\"\n    Rejects failed states and schedules a retry if the retry limit has not been reached.\n\n    This rule rejects transitions into a failed state if `retries` has been\n    set and the run count has not reached the specified limit. The client will be\n    instructed to transition into a scheduled state to retry task execution.\n    \"\"\"\n\n    FROM_STATES = [StateType.RUNNING]\n    TO_STATES = [StateType.FAILED]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        run_settings = context.run_settings\n        run_count = context.run.run_count\n        delay = run_settings.retry_delay\n\n        if isinstance(delay, list):\n            base_delay = delay[min(run_count - 1, len(delay) - 1)]\n        else:\n            base_delay = run_settings.retry_delay or 0\n\n        # guard against negative relative jitter inputs\n        if run_settings.retry_jitter_factor:\n            delay = clamped_poisson_interval(\n                base_delay, clamping_factor=run_settings.retry_jitter_factor\n            )\n        else:\n            delay = base_delay\n\n        if run_settings.retries is not None and run_count <= run_settings.retries:\n            retry_state = states.AwaitingRetry(\n                scheduled_time=pendulum.now(\"UTC\").add(seconds=delay),\n                message=proposed_state.message,\n                data=proposed_state.data,\n            )\n            await self.reject_transition(state=retry_state, reason=\"Retrying\")\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.RenameReruns","title":"RenameReruns","text":"

Bases: BaseOrchestrationRule

Name the states if they have run more than once.

In the special case where the initial state is an \"AwaitingRetry\" scheduled state, the proposed state will be renamed to \"Retrying\" instead.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class RenameReruns(BaseOrchestrationRule):\n\"\"\"\n    Name the states if they have run more than once.\n\n    In the special case where the initial state is an \"AwaitingRetry\" scheduled state,\n    the proposed state will be renamed to \"Retrying\" instead.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        run_count = context.run.run_count\n        if run_count > 0:\n            if initial_state.name == \"AwaitingRetry\":\n                await self.rename_state(\"Retrying\")\n            else:\n                await self.rename_state(\"Rerunning\")\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.CopyScheduledTime","title":"CopyScheduledTime","text":"

Bases: BaseOrchestrationRule

Ensures scheduled time is copied from scheduled states to pending states.

If a new scheduled time has been proposed on the pending state, the scheduled time on the scheduled state will be ignored.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class CopyScheduledTime(BaseOrchestrationRule):\n\"\"\"\n    Ensures scheduled time is copied from scheduled states to pending states.\n\n    If a new scheduled time has been proposed on the pending state, the scheduled time\n    on the scheduled state will be ignored.\n    \"\"\"\n\n    FROM_STATES = [StateType.SCHEDULED]\n    TO_STATES = [StateType.PENDING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        if not proposed_state.state_details.scheduled_time:\n            proposed_state.state_details.scheduled_time = (\n                initial_state.state_details.scheduled_time\n            )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.WaitForScheduledTime","title":"WaitForScheduledTime","text":"

Bases: BaseOrchestrationRule

Prevents transitions to running states from happening too early.

This rule enforces that scheduled states start only according to the machine clock used by the Prefect REST API instance. This rule will identify transitions from scheduled states that are too early and nullify them. Instead, no state will be written to the database and the client will be sent an instruction to wait for delay_seconds before attempting the transition again.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class WaitForScheduledTime(BaseOrchestrationRule):\n\"\"\"\n    Prevents transitions to running states from happening to early.\n\n    This rule enforces that all scheduled states will only start with the machine clock\n    used by the Prefect REST API instance. This rule will identify transitions from scheduled\n    states that are too early and nullify them. Instead, no state will be written to the\n    database and the client will be sent an instruction to wait for `delay_seconds`\n    before attempting the transition again.\n    \"\"\"\n\n    FROM_STATES = [StateType.SCHEDULED, StateType.PENDING]\n    TO_STATES = [StateType.RUNNING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        scheduled_time = initial_state.state_details.scheduled_time\n        if not scheduled_time:\n            return\n\n        # At this moment, we round delay to the nearest second as the API schema\n        # specifies an integer return value.\n        delay = scheduled_time - pendulum.now(\"UTC\")\n        delay_seconds = delay.in_seconds()\n        delay_seconds += round(delay.microseconds / 1e6)\n        if delay_seconds > 0:\n            await self.delay_transition(\n                delay_seconds, reason=\"Scheduled time is in the future\"\n            )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandlePausingFlows","title":"HandlePausingFlows","text":"

Bases: BaseOrchestrationRule

Governs runs attempting to enter a Paused state
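A minimal sketch of a flow that proposes a Paused state from inside the run; the timeout is illustrative, and resuming the run is assumed to happen elsewhere (for example from the UI or with resume_flow_run):

from prefect import flow\nfrom prefect.engine import pause_flow_run\n\n\n# Pausing from inside the flow proposes a Paused state; resuming the run later\n# (for example from the UI or with `resume_flow_run`) exercises\n# HandleResumingPausedFlows on the way out of the Paused state.\n@flow\ndef approval_flow():\n    print(\"waiting for approval...\")\n    pause_flow_run(timeout=300)  # fails the run if not resumed within 5 minutes\n    print(\"approved, continuing\")\n\n\nif __name__ == \"__main__\":\n    approval_flow()\n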

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class HandlePausingFlows(BaseOrchestrationRule):\n\"\"\"\n    Governs runs attempting to enter a Paused state\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.PAUSED]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        if initial_state is None:\n            await self.abort_transition(\"Cannot pause flows with no state.\")\n            return\n\n        if not initial_state.is_running():\n            await self.reject_transition(\n                state=None, reason=\"Cannot pause flows that are not currently running.\"\n            )\n            return\n\n        self.key = proposed_state.state_details.pause_key\n        if self.key is None:\n            # if no pause key is provided, default to a UUID\n            self.key = str(uuid4())\n\n        if self.key in context.run.empirical_policy.pause_keys:\n            await self.reject_transition(\n                state=None, reason=\"This pause has already fired.\"\n            )\n            return\n\n        if proposed_state.state_details.pause_reschedule:\n            if context.run.parent_task_run_id:\n                await self.abort_transition(\n                    reason=\"Cannot pause subflows with the reschedule option.\",\n                )\n                return\n\n            if context.run.deployment_id is None:\n                await self.abort_transition(\n                    reason=(\n                        \"Cannot pause flows without a deployment with the reschedule\"\n                        \" option.\"\n                    ),\n                )\n                return\n\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        updated_policy = context.run.empirical_policy.dict()\n        updated_policy[\"pause_keys\"].add(self.key)\n        context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandleResumingPausedFlows","title":"HandleResumingPausedFlows","text":"

Bases: BaseOrchestrationRule

Governs runs attempting to leave a Paused state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class HandleResumingPausedFlows(BaseOrchestrationRule):\n\"\"\"\n    Governs runs attempting to leave a Paused state\n    \"\"\"\n\n    FROM_STATES = [StateType.PAUSED]\n    TO_STATES = ALL_ORCHESTRATION_STATES\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        if not (\n            proposed_state.is_running()\n            or proposed_state.is_scheduled()\n            or proposed_state.is_final()\n        ):\n            await self.reject_transition(\n                state=None,\n                reason=(\n                    f\"This run cannot transition to the {proposed_state.type} state\"\n                    f\" from the {initial_state.type} state.\"\n                ),\n            )\n            return\n\n        if initial_state.state_details.pause_reschedule:\n            if not context.run.deployment_id:\n                await self.reject_transition(\n                    state=None,\n                    reason=\"Cannot reschedule a paused flow run without a deployment.\",\n                )\n                return\n        pause_timeout = initial_state.state_details.pause_timeout\n        if pause_timeout and pause_timeout < pendulum.now(\"UTC\"):\n            pause_timeout_failure = states.Failed(\n                message=\"The flow was paused and never resumed.\",\n            )\n            await self.reject_transition(\n                state=pause_timeout_failure,\n                reason=\"The flow run pause has timed out and can no longer resume.\",\n            )\n            return\n\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        updated_policy = context.run.empirical_policy.dict()\n        updated_policy[\"resuming\"] = True\n        context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.UpdateFlowRunTrackerOnTasks","title":"UpdateFlowRunTrackerOnTasks","text":"

Bases: BaseOrchestrationRule

Tracks the flow run attempt a task run state is associated with.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class UpdateFlowRunTrackerOnTasks(BaseOrchestrationRule):\n\"\"\"\n    Tracks the flow run attempt a task run state is associated with.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        self.flow_run = await context.flow_run()\n        if self.flow_run:\n            context.run.flow_run_run_count = self.flow_run.run_count\n        else:\n            raise ObjectNotFoundError(\n                (\n                    \"Unable to read flow run associated with task run:\"\n                    f\" {context.run.id}, this flow run might have been deleted\"\n                ),\n            )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandleTaskTerminalStateTransitions","title":"HandleTaskTerminalStateTransitions","text":"

Bases: BaseOrchestrationRule

We do not allow tasks to leave terminal states if:

- The task is completed and has a persisted result
- The task is going to CANCELLING / PAUSED / CRASHED

We reset the run count when a task leaves a terminal state for a non-terminal state which resets task run retries; this is particularly relevant for flow run retries.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class HandleTaskTerminalStateTransitions(BaseOrchestrationRule):\n\"\"\"\n    We do not allow tasks to leave terminal states if:\n    - The task is completed and has a persisted result\n    - The task is going to CANCELLING / PAUSED / CRASHED\n\n    We reset the run count when a task leaves a terminal state for a non-terminal state\n    which resets task run retries; this is particularly relevant for flow run retries.\n    \"\"\"\n\n    FROM_STATES = TERMINAL_STATES\n    TO_STATES = ALL_ORCHESTRATION_STATES\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        self.original_run_count = context.run.run_count\n\n        # Do not allow runs to be marked as crashed, paused, or cancelling if already terminal\n        if proposed_state.type in {\n            StateType.CANCELLING,\n            StateType.PAUSED,\n            StateType.CRASHED,\n        }:\n            await self.abort_transition(f\"Run is already {initial_state.type.value}.\")\n            return\n\n        # Only allow departure from a happily completed state if the result is not persisted\n        if (\n            initial_state.is_completed()\n            and initial_state.data\n            and initial_state.data.get(\"type\") != \"unpersisted\"\n        ):\n            await self.reject_transition(None, \"This run is already completed.\")\n            return\n\n        if not proposed_state.is_final():\n            # Reset run count to reset retries\n            context.run.run_count = 0\n\n        # Change the name of the state to retrying if its a flow run retry\n        if proposed_state.is_running():\n            self.flow_run = await context.flow_run()\n            flow_retrying = context.run.flow_run_run_count < self.flow_run.run_count\n            if flow_retrying:\n                await self.rename_state(\"Retrying\")\n\n    async def cleanup(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ):\n        # reset run count\n        context.run.run_count = self.original_run_count\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.HandleFlowTerminalStateTransitions","title":"HandleFlowTerminalStateTransitions","text":"

Bases: BaseOrchestrationRule

We do not allow flows to leave terminal states if:

- The flow is completed and has a persisted result
- The flow is going to CANCELLING / PAUSED / CRASHED
- The flow is going to scheduled and has no deployment

We reset the pause metadata when a flow leaves a terminal state for a non-terminal state. This resets pause behavior during manual flow run retries.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class HandleFlowTerminalStateTransitions(BaseOrchestrationRule):\n\"\"\"\n    We do not allow flows to leave terminal states if:\n    - The flow is completed and has a persisted result\n    - The flow is going to CANCELLING / PAUSED / CRASHED\n    - The flow is going to scheduled and has no deployment\n\n    We reset the pause metadata when a flow leaves a terminal state for a non-terminal\n    state. This resets pause behavior during manual flow run retries.\n    \"\"\"\n\n    FROM_STATES = TERMINAL_STATES\n    TO_STATES = ALL_ORCHESTRATION_STATES\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: FlowOrchestrationContext,\n    ) -> None:\n        self.original_flow_policy = context.run.empirical_policy.dict()\n\n        # Do not allow runs to be marked as crashed, paused, or cancelling if already terminal\n        if proposed_state.type in {\n            StateType.CANCELLING,\n            StateType.PAUSED,\n            StateType.CRASHED,\n        }:\n            await self.abort_transition(\n                f\"Run is already in terminal state {initial_state.type.value}.\"\n            )\n            return\n\n        # Only allow departure from a happily completed state if the result is not\n        # persisted and the a rerun is being proposed\n        if (\n            initial_state.is_completed()\n            and not proposed_state.is_final()\n            and initial_state.data\n            and initial_state.data.get(\"type\") != \"unpersisted\"\n        ):\n            await self.reject_transition(None, \"Run is already COMPLETED.\")\n            return\n\n        # Do not allows runs to be rescheduled without a deployment\n        if proposed_state.is_scheduled() and not context.run.deployment_id:\n            await self.abort_transition(\n                \"Cannot reschedule a run without an associated deployment.\"\n            )\n            return\n\n        if not proposed_state.is_final():\n            # Reset pause metadata when leaving a terminal state\n            api_version = context.parameters.get(\"api-version\", None)\n            if api_version is None or api_version >= Version(\"0.8.4\"):\n                updated_policy = context.run.empirical_policy.dict()\n                updated_policy[\"resuming\"] = False\n                updated_policy[\"pause_keys\"] = set()\n                context.run.empirical_policy = core.FlowRunPolicy(**updated_policy)\n\n    async def cleanup(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ):\n        context.run.empirical_policy = core.FlowRunPolicy(**self.original_flow_policy)\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.PreventPendingTransitions","title":"PreventPendingTransitions","text":"

Bases: BaseOrchestrationRule

Prevents transitions to PENDING.

This rule is only used for flow runs.

This is intended to prevent race conditions during duplicate submissions of runs. Before a run is submitted to its execution environment, it should be placed in a PENDING state. If two workers attempt to submit the same run, one of them should encounter a PENDING -> PENDING transition and abort orchestration of the run.

Similarly, if the execution environment starts quickly the run may be in a RUNNING state when the second worker attempts the PENDING transition. We deny these state changes as well to prevent duplicate submission. If a run has transitioned to a RUNNING state a worker should not attempt to submit it again unless it has moved into a terminal state.

CANCELLING and CANCELLED runs should not be allowed to transition to PENDING. For re-runs of deployed runs, they should transition to SCHEDULED first. For re-runs of ad-hoc runs, they should transition directly to RUNNING.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class PreventPendingTransitions(BaseOrchestrationRule):\n\"\"\"\n    Prevents transitions to PENDING.\n\n    This rule is only used for flow runs.\n\n    This is intended to prevent race conditions during duplicate submissions of runs.\n    Before a run is submitted to its execution environment, it should be placed in a\n    PENDING state. If two workers attempt to submit the same run, one of them should\n    encounter a PENDING -> PENDING transition and abort orchestration of the run.\n\n    Similarly, if the execution environment starts quickly the run may be in a RUNNING\n    state when the second worker attempts the PENDING transition. We deny these state\n    changes as well to prevent duplicate submission. If a run has transitioned to a\n    RUNNING state a worker should not attempt to submit it again unless it has moved\n    into a terminal state.\n\n    CANCELLING and CANCELLED runs should not be allowed to transition to PENDING.\n    For re-runs of deployed runs, they should transition to SCHEDULED first.\n    For re-runs of ad-hoc runs, they should transition directly to RUNNING.\n    \"\"\"\n\n    FROM_STATES = [\n        StateType.PENDING,\n        StateType.CANCELLING,\n        StateType.RUNNING,\n        StateType.CANCELLED,\n    ]\n    TO_STATES = [StateType.PENDING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n        await self.abort_transition(\n            reason=(\n                f\"This run is in a {initial_state.type.name} state and cannot\"\n                \" transition to a PENDING state.\"\n            )\n        )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.PreventRunningTasksFromStoppedFlows","title":"PreventRunningTasksFromStoppedFlows","text":"

Bases: BaseOrchestrationRule

Prevents running tasks from stopped flows.

A running state implies execution, but also the converse. This rule ensures that a flow's tasks cannot be run unless the flow is also running.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class PreventRunningTasksFromStoppedFlows(BaseOrchestrationRule):\n\"\"\"\n    Prevents running tasks from stopped flows.\n\n    A running state implies execution, but also the converse. This rule ensures that a\n    flow's tasks cannot be run unless the flow is also running.\n    \"\"\"\n\n    FROM_STATES = ALL_ORCHESTRATION_STATES\n    TO_STATES = [StateType.RUNNING]\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        flow_run = await context.flow_run()\n        if flow_run.state is None:\n            await self.abort_transition(\n                reason=\"The enclosing flow must be running to begin task execution.\"\n            )\n        elif flow_run.state.type == StateType.PAUSED:\n            await self.reject_transition(\n                state=states.Paused(name=\"NotReady\"),\n                reason=(\n                    \"The flow is paused, new tasks can execute after resuming flow\"\n                    f\" run: {flow_run.id}.\"\n                ),\n            )\n        elif not flow_run.state.type == StateType.RUNNING:\n            # task runners should abort task run execution\n            await self.abort_transition(\n                reason=\"The enclosing flow must be running to begin task execution.\",\n            )\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.EnforceCancellingToCancelledTransition","title":"EnforceCancellingToCancelledTransition","text":"

Bases: BaseOrchestrationRule

Rejects transitions from Cancelling to any terminal state except for Cancelled.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class EnforceCancellingToCancelledTransition(BaseOrchestrationRule):\n\"\"\"\n    Rejects transitions from Cancelling to any terminal state except for Cancelled.\n    \"\"\"\n\n    FROM_STATES = {StateType.CANCELLED, StateType.CANCELLING}\n    TO_STATES = ALL_ORCHESTRATION_STATES - {StateType.CANCELLED}\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: TaskOrchestrationContext,\n    ) -> None:\n        await self.reject_transition(\n            state=None,\n            reason=(\n                \"Cannot transition flows that are cancelling to a state other \"\n                \"than Cancelled.\"\n            ),\n        )\n        return\n
"},{"location":"api-ref/server/orchestration/core_policy/#prefect.server.orchestration.core_policy.BypassCancellingScheduledFlowRuns","title":"BypassCancellingScheduledFlowRuns","text":"

Bases: BaseOrchestrationRule

Rejects transitions from Scheduled to Cancelling, and instead sets the state to Cancelled, if the flow run has no associated infrastructure process ID.

The Cancelling state is used to clean up infrastructure. If there is no infrastructure to clean up, we can transition directly to Cancelled. Runs that are AwaitingRetry are in a Scheduled state that may have associated infrastructure.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/core_policy.py
class BypassCancellingScheduledFlowRuns(BaseOrchestrationRule):\n\"\"\"Rejects transitions from Scheduled to Cancelling, and instead sets the state to Cancelled,\n    if the flow run has no associated infrastructure process ID.\n\n    The `Cancelling` state is used to clean up infrastructure. If there is not infrastructure\n    to clean up, we can transition directly to `Cancelled`. Runs that are `AwaitingRetry` are\n    a `Scheduled` state that may have associated infrastructure.\n    \"\"\"\n\n    FROM_STATES = {StateType.SCHEDULED}\n    TO_STATES = {StateType.CANCELLING}\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: FlowOrchestrationContext,\n    ) -> None:\n        if not context.run.infrastructure_pid:\n            await self.reject_transition(\n                state=states.Cancelled(),\n                reason=\"Scheduled flow run has no infrastructure to terminate.\",\n            )\n
"},{"location":"api-ref/server/orchestration/global_policy/","title":"server.orchestration.global_policy","text":""},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy","title":"prefect.server.orchestration.global_policy","text":"

Bookkeeping logic that fires on every state transition.

For clarity, GlobalFlowPolicy and GlobalTaskPolicy contain all transition logic implemented using BaseUniversalTransform. None of these operations modify state, and regardless of what orchestration the Prefect REST API might enforce on a transition, the global policies contain Prefect's necessary bookkeeping. Because these transforms record information about the validated state committed to the state database, they should be the most deeply nested contexts in the orchestration loop.

"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.GlobalFlowPolicy","title":"GlobalFlowPolicy","text":"

Bases: BaseOrchestrationPolicy

Global transforms that run against flow-run-state transitions in priority order.

These transforms are intended to run immediately before and after a state transition is validated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class GlobalFlowPolicy(BaseOrchestrationPolicy):\n\"\"\"\n    Global transforms that run against flow-run-state transitions in priority order.\n\n    These transforms are intended to run immediately before and after a state transition\n    is validated.\n    \"\"\"\n\n    def priority():\n        return COMMON_GLOBAL_TRANSFORMS() + [\n            UpdateSubflowParentTask,\n            UpdateSubflowStateDetails,\n            IncrementFlowRunCount,\n            RemoveResumingIndicator,\n        ]\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.GlobalTaskPolicy","title":"GlobalTaskPolicy","text":"

Bases: BaseOrchestrationPolicy

Global transforms that run against task-run-state transitions in priority order.

These transforms are intended to run immediately before and after a state transition is validated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class GlobalTaskPolicy(BaseOrchestrationPolicy):\n\"\"\"\n    Global transforms that run against task-run-state transitions in priority order.\n\n    These transforms are intended to run immediately before and after a state transition\n    is validated.\n    \"\"\"\n\n    def priority():\n        return COMMON_GLOBAL_TRANSFORMS() + [\n            IncrementTaskRunCount,\n        ]\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetRunStateType","title":"SetRunStateType","text":"

Bases: BaseUniversalTransform

Updates the state type of a run on a state transition.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class SetRunStateType(BaseUniversalTransform):\n\"\"\"\n    Updates the state type of a run on a state transition.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # record the new state's type\n        context.run.state_type = context.proposed_state.type\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetRunStateName","title":"SetRunStateName","text":"

Bases: BaseUniversalTransform

Updates the state name of a run on a state transition.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class SetRunStateName(BaseUniversalTransform):\n\"\"\"\n    Updates the state name of a run on a state transition.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # record the new state's name\n        context.run.state_name = context.proposed_state.name\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetStartTime","title":"SetStartTime","text":"

Bases: BaseUniversalTransform

Records the time a run enters a running state for the first time.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class SetStartTime(BaseUniversalTransform):\n\"\"\"\n    Records the time a run enters a running state for the first time.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if entering a running state and no start time is set...\n        if context.proposed_state.is_running() and context.run.start_time is None:\n            # set the start time\n            context.run.start_time = context.proposed_state.timestamp\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetRunStateTimestamp","title":"SetRunStateTimestamp","text":"

Bases: BaseUniversalTransform

Records the time a run changes states.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class SetRunStateTimestamp(BaseUniversalTransform):\n\"\"\"\n    Records the time a run changes states.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # record the new state's timestamp\n        context.run.state_timestamp = context.proposed_state.timestamp\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetEndTime","title":"SetEndTime","text":"

Bases: BaseUniversalTransform

Records the time a run enters a terminal state.

With normal client usage, a run will not transition out of a terminal state. However, it's possible to force these transitions manually via the API. When leaving a terminal state, the end time will be unset.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class SetEndTime(BaseUniversalTransform):\n\"\"\"\n    Records the time a run enters a terminal state.\n\n    With normal client usage, a run will not transition out of a terminal state.\n    However, it's possible to force these transitions manually via the API. While\n    leaving a terminal state, the end time will be unset.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if exiting a final state for a non-final state...\n        if (\n            context.initial_state\n            and context.initial_state.is_final()\n            and not context.proposed_state.is_final()\n        ):\n            # clear the end time\n            context.run.end_time = None\n\n        # if entering a final state...\n        if context.proposed_state.is_final():\n            # if the run has a start time and no end time, give it one\n            if context.run.start_time and not context.run.end_time:\n                context.run.end_time = context.proposed_state.timestamp\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.IncrementRunTime","title":"IncrementRunTime","text":"

Bases: BaseUniversalTransform

Records the amount of time a run spends in the running state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class IncrementRunTime(BaseUniversalTransform):\n\"\"\"\n    Records the amount of time a run spends in the running state.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if exiting a running state...\n        if context.initial_state and context.initial_state.is_running():\n            # increment the run time by the time spent in the previous state\n            context.run.total_run_time += (\n                context.proposed_state.timestamp - context.initial_state.timestamp\n            )\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.IncrementFlowRunCount","title":"IncrementFlowRunCount","text":"

Bases: BaseUniversalTransform

Records the number of times a run enters a running state. For use with retries.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class IncrementFlowRunCount(BaseUniversalTransform):\n\"\"\"\n    Records the number of times a run enters a running state. For use with retries.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if entering a running state...\n        if context.proposed_state.is_running():\n            # do not increment the run count if resuming a paused flow\n            api_version = context.parameters.get(\"api-version\", None)\n            if api_version is None or api_version >= Version(\"0.8.4\"):\n                if context.run.empirical_policy.resuming:\n                    return\n\n            # increment the run count\n            context.run.run_count += 1\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.RemoveResumingIndicator","title":"RemoveResumingIndicator","text":"

Bases: BaseUniversalTransform

Removes the indicator on a flow run that marks it as resuming.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class RemoveResumingIndicator(BaseUniversalTransform):\n\"\"\"\n    Removes the indicator on a flow run that marks it as resuming.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        proposed_state = context.proposed_state\n\n        api_version = context.parameters.get(\"api-version\", None)\n        if api_version is None or api_version >= Version(\"0.8.4\"):\n            if proposed_state.is_running() or proposed_state.is_final():\n                if context.run.empirical_policy.resuming:\n                    updated_policy = context.run.empirical_policy.dict()\n                    updated_policy[\"resuming\"] = False\n                    context.run.empirical_policy = FlowRunPolicy(**updated_policy)\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.IncrementTaskRunCount","title":"IncrementTaskRunCount","text":"

Bases: BaseUniversalTransform

Records the number of times a run enters a running state. For use with retries.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class IncrementTaskRunCount(BaseUniversalTransform):\n\"\"\"\n    Records the number of times a run enters a running state. For use with retries.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # if entering a running state...\n        if context.proposed_state.is_running():\n            # increment the run count\n            context.run.run_count += 1\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetExpectedStartTime","title":"SetExpectedStartTime","text":"

Bases: BaseUniversalTransform

Estimates the time a state is expected to start running if not set.

For scheduled states, this estimate is simply the scheduled time. For other states, this is set to the time the proposed state was created by Prefect.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class SetExpectedStartTime(BaseUniversalTransform):\n\"\"\"\n    Estimates the time a state is expected to start running if not set.\n\n    For scheduled states, this estimate is simply the scheduled time. For other states,\n    this is set to the time the proposed state was created by Prefect.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # set expected start time if this is the first state\n        if not context.run.expected_start_time:\n            if context.proposed_state.is_scheduled():\n                context.run.expected_start_time = (\n                    context.proposed_state.state_details.scheduled_time\n                )\n            else:\n                context.run.expected_start_time = context.proposed_state.timestamp\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.SetNextScheduledStartTime","title":"SetNextScheduledStartTime","text":"

Bases: BaseUniversalTransform

Records the scheduled time on a run.

When a run enters a scheduled state, run.next_scheduled_start_time is set to the state's scheduled time. When leaving a scheduled state, run.next_scheduled_start_time is unset.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class SetNextScheduledStartTime(BaseUniversalTransform):\n\"\"\"\n    Records the scheduled time on a run.\n\n    When a run enters a scheduled state, `run.next_scheduled_start_time` is set to\n    the state's scheduled time. When leaving a scheduled state,\n    `run.next_scheduled_start_time` is unset.\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # remove the next scheduled start time if exiting a scheduled state\n        if context.initial_state and context.initial_state.is_scheduled():\n            context.run.next_scheduled_start_time = None\n\n        # set next scheduled start time if entering a scheduled state\n        if context.proposed_state.is_scheduled():\n            context.run.next_scheduled_start_time = (\n                context.proposed_state.state_details.scheduled_time\n            )\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.UpdateSubflowParentTask","title":"UpdateSubflowParentTask","text":"

Bases: BaseUniversalTransform

Whenever a subflow changes state, it must update its parent task run's state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class UpdateSubflowParentTask(BaseUniversalTransform):\n\"\"\"\n    Whenever a subflow changes state, it must update its parent task run's state.\n    \"\"\"\n\n    async def after_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # only applies to flow runs with a parent task run id\n        if context.run.parent_task_run_id is not None:\n            # avoid mutation of the flow run state\n            subflow_parent_task_state = context.validated_state.copy(\n                reset_fields=True,\n                include={\n                    \"type\",\n                    \"timestamp\",\n                    \"name\",\n                    \"message\",\n                    \"state_details\",\n                    \"data\",\n                },\n            )\n\n            # set the task's \"child flow run id\" to be the subflow run id\n            subflow_parent_task_state.state_details.child_flow_run_id = context.run.id\n\n            await models.task_runs.set_task_run_state(\n                session=context.session,\n                task_run_id=context.run.parent_task_run_id,\n                state=subflow_parent_task_state,\n                force=True,\n            )\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.UpdateSubflowStateDetails","title":"UpdateSubflowStateDetails","text":"

Bases: BaseUniversalTransform

Update a child subflow state's references to the corresponding tracking task run id in the parent flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class UpdateSubflowStateDetails(BaseUniversalTransform):\n\"\"\"\n    Update a child subflow state's references to a corresponding tracking task run id\n    in the parent flow run\n    \"\"\"\n\n    async def before_transition(self, context: OrchestrationContext) -> None:\n        if self.nullified_transition():\n            return\n\n        # only applies to flow runs with a parent task run id\n        if context.run.parent_task_run_id is not None:\n            context.proposed_state.state_details.task_run_id = (\n                context.run.parent_task_run_id\n            )\n
"},{"location":"api-ref/server/orchestration/global_policy/#prefect.server.orchestration.global_policy.UpdateStateDetails","title":"UpdateStateDetails","text":"

Bases: BaseUniversalTransform

Update a state's references to the corresponding flow or task run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/global_policy.py
class UpdateStateDetails(BaseUniversalTransform):\n\"\"\"\n    Update a state's references to a corresponding flow- or task- run.\n    \"\"\"\n\n    async def before_transition(\n        self,\n        context: OrchestrationContext,\n    ) -> None:\n        if self.nullified_transition():\n            return\n\n        if isinstance(context, FlowOrchestrationContext):\n            flow_run = await context.flow_run()\n            context.proposed_state.state_details.flow_run_id = flow_run.id\n\n        elif isinstance(context, TaskOrchestrationContext):\n            task_run = await context.task_run()\n            context.proposed_state.state_details.flow_run_id = task_run.flow_run_id\n            context.proposed_state.state_details.task_run_id = task_run.id\n
"},{"location":"api-ref/server/orchestration/policies/","title":"server.orchestration.policies","text":""},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies","title":"prefect.server.orchestration.policies","text":"

Policies are collections of orchestration rules and transforms.

Prefect implements (most) orchestration with logic that governs a Prefect flow or task changing state. Policies organize orchestration logic both to provide an ordering mechanism and to provide observability into the orchestration process.

While Prefect's orchestration rules can gracefully run independently of one another, ordering can still have an impact on the observed behavior of the system. For example, it makes no sense to secure a concurrency slot for a run if a cached state exists. Furthermore, policies provide a mechanism to configure and observe exactly what logic will fire against a transition.

"},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies.BaseOrchestrationPolicy","title":"BaseOrchestrationPolicy","text":"

Bases: ABC

An abstract base class used to organize orchestration rules in priority order.

Different collections of orchestration rules might be used to govern various kinds of transitions. For example, flow-run states and task-run states might require different orchestration logic.
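As a hedged sketch of how a policy is assembled (the policy class below is hypothetical, not part of Prefect), a subclass returns rules in priority order from priority(), and compile_transition_rules filters them for a given transition:

from prefect.server.orchestration.core_policy import (\n    CacheRetrieval,\n    RetryFailedTasks,\n)\nfrom prefect.server.orchestration.policies import BaseOrchestrationPolicy\nfrom prefect.server.schemas.states import StateType\n\n\nclass SketchTaskPolicy(BaseOrchestrationPolicy):\n    @staticmethod\n    def priority():\n        # earlier rules form the outermost orchestration contexts\n        return [CacheRetrieval, RetryFailedTasks]\n\n\n# only rules whose FROM_STATES/TO_STATES match the transition are returned;\n# for RUNNING -> FAILED this is [RetryFailedTasks]\nrules = SketchTaskPolicy.compile_transition_rules(\n    from_state=StateType.RUNNING, to_state=StateType.FAILED\n)\n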

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/policies.py
class BaseOrchestrationPolicy(ABC):\n\"\"\"\n    An abstract base class used to organize orchestration rules in priority order.\n\n    Different collections of orchestration rules might be used to govern various kinds\n    of transitions. For example, flow-run states and task-run states might require\n    different orchestration logic.\n    \"\"\"\n\n    @staticmethod\n    @abstractmethod\n    def priority():\n\"\"\"\n        A list of orchestration rules in priority order.\n        \"\"\"\n\n        return []\n\n    @classmethod\n    def compile_transition_rules(cls, from_state=None, to_state=None):\n\"\"\"\n        Returns rules in policy that are valid for the specified state transition.\n        \"\"\"\n\n        transition_rules = []\n        for rule in cls.priority():\n            if from_state in rule.FROM_STATES and to_state in rule.TO_STATES:\n                transition_rules.append(rule)\n        return transition_rules\n
"},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies.BaseOrchestrationPolicy.priority","title":"priority abstractmethod staticmethod","text":"

A list of orchestration rules in priority order.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/policies.py
@staticmethod\n@abstractmethod\ndef priority():\n\"\"\"\n    A list of orchestration rules in priority order.\n    \"\"\"\n\n    return []\n
"},{"location":"api-ref/server/orchestration/policies/#prefect.server.orchestration.policies.BaseOrchestrationPolicy.compile_transition_rules","title":"compile_transition_rules classmethod","text":"

Returns rules in policy that are valid for the specified state transition.
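
For example, given the hypothetical MyFlowPolicy sketched above, selecting the rules that apply to a PENDING -> RUNNING transition might look like:

from prefect.server.schemas import states

# Only rules whose FROM_STATES and TO_STATES contain this transition are kept,
# preserving the order returned by MyFlowPolicy.priority().
rules = MyFlowPolicy.compile_transition_rules(
    from_state=states.StateType.PENDING,
    to_state=states.StateType.RUNNING,
)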

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/policies.py
@classmethod\ndef compile_transition_rules(cls, from_state=None, to_state=None):\n\"\"\"\n    Returns rules in policy that are valid for the specified state transition.\n    \"\"\"\n\n    transition_rules = []\n    for rule in cls.priority():\n        if from_state in rule.FROM_STATES and to_state in rule.TO_STATES:\n            transition_rules.append(rule)\n    return transition_rules\n
"},{"location":"api-ref/server/orchestration/rules/","title":"server.orchestration.rules","text":""},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules","title":"prefect.server.orchestration.rules","text":"

Prefect's flow and task-run orchestration machinery.

This module contains all the core concepts necessary to implement Prefect's state orchestration engine. These states correspond to intuitive descriptions of all the points at which Prefect can observe a flow or task executing user code and intervene, if necessary. A detailed description of states can be found in our concept documentation.

Prefect's orchestration engine operates under the assumption that no governed user code will execute without first requesting that the Prefect REST API validate a change in state and record metadata about the run. With all attempts to run user code being checked against a Prefect instance, the Prefect REST API database becomes the unambiguous source of truth for managing the execution of complex interacting workflows. Orchestration rules can be implemented as discrete units of logic that operate against each state transition and can be fully observable, extensible, and customizable -- all without needing to store or parse a single line of user code.

"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext","title":"OrchestrationContext","text":"

Bases: PrefectBaseModel

A container for a state transition, governed by orchestration rules.

Note

An OrchestrationContext should not be instantiated directly; instead, use the flow- or task-specific subclasses, FlowOrchestrationContext and TaskOrchestrationContext.

When a flow- or task- run attempts to change state, Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an OrchestrationContext, which is subsequently governed by nested orchestration rules implemented using the BaseOrchestrationRule ABC.

OrchestrationContext introduces the concept of a state being None in the context of an intended state transition. An initial state can be None if a run is attempting to set a state for the first time. The proposed state might be None if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.
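
As a rough sketch of how a rule hook might inspect these possibly-None states, the hypothetical, observation-only rule below relies only on the hook signature and the convenience properties documented on this page.

from typing import Optional

from prefect.server.orchestration.rules import (
    BaseOrchestrationRule,
    OrchestrationContext,
)
from prefect.server.schemas import states


class LogTransition(BaseOrchestrationRule):
    # hypothetical rule that only observes the transition
    FROM_STATES = [states.StateType.PENDING]
    TO_STATES = [states.StateType.RUNNING]

    async def before_transition(
        self,
        initial_state: Optional[states.State],
        proposed_state: Optional[states.State],
        context: OrchestrationContext,
    ) -> None:
        # initial_state is None the first time a run sets a state;
        # proposed_state is None if an earlier rule decided nothing should be written
        if proposed_state is None:
            return
        print(f"{context.initial_state_type} -> {context.proposed_state_type}")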

Attributes:

- session (Optional[Union[sa.orm.Session, AsyncSession]]): a SQLAlchemy database session
- initial_state (Optional[states.State]): the initial state of a run
- proposed_state (Optional[states.State]): the proposed state a run is transitioning into
- validated_state (Optional[states.State]): a proposed state that has been committed to the database
- rule_signature (List[str]): a record of rules that have fired on entry into a managed context, currently only used for debugging purposes
- finalization_signature (List[str]): a record of rules that have fired on exit from a managed context, currently only used for debugging purposes
- response_status (SetStateStatus): a SetStateStatus object used to build the API response
- response_details (StateResponseDetails): a StateResponseDetails object used to build the API response

Parameters:

- session (required): a SQLAlchemy database session
- initial_state (required): the initial state of a run
- proposed_state (required): the proposed state a run is transitioning into

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
class OrchestrationContext(PrefectBaseModel):\n\"\"\"\n    A container for a state transition, governed by orchestration rules.\n\n    Note:\n        An `OrchestrationContext` should not be instantiated directly, instead\n        use the flow- or task- specific subclasses, `FlowOrchestrationContext` and\n        `TaskOrchestrationContext`.\n\n    When a flow- or task- run attempts to change state, Prefect REST API has an opportunity\n    to decide whether this transition can proceed. All the relevant information\n    associated with the state transition is stored in an `OrchestrationContext`,\n    which is subsequently governed by nested orchestration rules implemented using\n    the `BaseOrchestrationRule` ABC.\n\n    `OrchestrationContext` introduces the concept of a state being `None` in the\n    context of an intended state transition. An initial state can be `None` if a run\n    is is attempting to set a state for the first time. The proposed state might be\n    `None` if a rule governing the transition determines that no state change\n    should occur at all and nothing is written to the database.\n\n    Attributes:\n        session: a SQLAlchemy database session\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n        validated_state: a proposed state that has committed to the database\n        rule_signature: a record of rules that have fired on entry into a\n            managed context, currently only used for debugging purposes\n        finalization_signature: a record of rules that have fired on exit from a\n            managed context, currently only used for debugging purposes\n        response_status: a SetStateStatus object used to build the API response\n        response_details:a StateResponseDetails object use to build the API response\n\n    Args:\n        session: a SQLAlchemy database session\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n    \"\"\"\n\n    class Config:\n        arbitrary_types_allowed = True\n\n    session: Optional[Union[sa.orm.Session, AsyncSession]] = ...\n    initial_state: Optional[states.State] = ...\n    proposed_state: Optional[states.State] = ...\n    validated_state: Optional[states.State]\n    rule_signature: List[str] = Field(default_factory=list)\n    finalization_signature: List[str] = Field(default_factory=list)\n    response_status: SetStateStatus = Field(default=SetStateStatus.ACCEPT)\n    response_details: StateResponseDetails = Field(default_factory=StateAcceptDetails)\n    orchestration_error: Optional[Exception] = Field(default=None)\n    parameters: Dict[Any, Any] = Field(default_factory=dict)\n\n    @property\n    def initial_state_type(self) -> Optional[states.StateType]:\n\"\"\"The state type of `self.initial_state` if it exists.\"\"\"\n\n        return self.initial_state.type if self.initial_state else None\n\n    @property\n    def proposed_state_type(self) -> Optional[states.StateType]:\n\"\"\"The state type of `self.proposed_state` if it exists.\"\"\"\n\n        return self.proposed_state.type if self.proposed_state else None\n\n    @property\n    def validated_state_type(self) -> Optional[states.StateType]:\n\"\"\"The state type of `self.validated_state` if it exists.\"\"\"\n        return self.validated_state.type if self.validated_state else None\n\n    def safe_copy(self):\n\"\"\"\n        Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n        
Orchestration rules govern state transitions using information stored in\n        an `OrchestrationContext`. However, mutating objects stored on the context\n        directly can have unintended side-effects. To guard against this,\n        `self.safe_copy` can be used to pass information to orchestration rules\n        without risking mutation.\n\n        Returns:\n            A mutation-safe copy of the `OrchestrationContext`\n        \"\"\"\n\n        safe_copy = self.copy()\n\n        safe_copy.initial_state = (\n            self.initial_state.copy() if self.initial_state else None\n        )\n        safe_copy.proposed_state = (\n            self.proposed_state.copy() if self.proposed_state else None\n        )\n        safe_copy.validated_state = (\n            self.validated_state.copy() if self.validated_state else None\n        )\n        safe_copy.parameters = self.parameters.copy()\n        return safe_copy\n\n    def entry_context(self):\n\"\"\"\n        A convenience method that generates input parameters for orchestration rules.\n\n        An `OrchestrationContext` defines a state transition that is managed by\n        orchestration rules which can fire hooks before a transition has been committed\n        to the database. These hooks have a consistent interface which can be generated\n        with this method.\n        \"\"\"\n\n        safe_context = self.safe_copy()\n        return safe_context.initial_state, safe_context.proposed_state, safe_context\n\n    def exit_context(self):\n\"\"\"\n        A convenience method that generates input parameters for orchestration rules.\n\n        An `OrchestrationContext` defines a state transition that is managed by\n        orchestration rules which can fire hooks after a transition has been committed\n        to the database. These hooks have a consistent interface which can be generated\n        with this method.\n        \"\"\"\n\n        safe_context = self.safe_copy()\n        return safe_context.initial_state, safe_context.validated_state, safe_context\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.initial_state_type","title":"initial_state_type: Optional[states.StateType] property","text":"

The state type of self.initial_state if it exists.

"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.proposed_state_type","title":"proposed_state_type: Optional[states.StateType] property","text":"

The state type of self.proposed_state if it exists.

"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.validated_state_type","title":"validated_state_type: Optional[states.StateType] property","text":"

The state type of self.validated_state if it exists.

"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.safe_copy","title":"safe_copy","text":"

Creates a mostly-mutation-safe copy for use in orchestration rules.

Orchestration rules govern state transitions using information stored in an OrchestrationContext. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, self.safe_copy can be used to pass information to orchestration rules without risking mutation.

Returns:

A mutation-safe copy of the OrchestrationContext
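
A minimal sketch of the intended use, assuming some code already holds an OrchestrationContext named context: the copy can be inspected or annotated without affecting what other rules see.

snapshot = context.safe_copy()

# mutating the copy leaves the original context untouched
snapshot.parameters["inspected_by"] = "my-debugging-hook"  # hypothetical key
assert "inspected_by" not in context.parameters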

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
def safe_copy(self):\n\"\"\"\n    Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n    Orchestration rules govern state transitions using information stored in\n    an `OrchestrationContext`. However, mutating objects stored on the context\n    directly can have unintended side-effects. To guard against this,\n    `self.safe_copy` can be used to pass information to orchestration rules\n    without risking mutation.\n\n    Returns:\n        A mutation-safe copy of the `OrchestrationContext`\n    \"\"\"\n\n    safe_copy = self.copy()\n\n    safe_copy.initial_state = (\n        self.initial_state.copy() if self.initial_state else None\n    )\n    safe_copy.proposed_state = (\n        self.proposed_state.copy() if self.proposed_state else None\n    )\n    safe_copy.validated_state = (\n        self.validated_state.copy() if self.validated_state else None\n    )\n    safe_copy.parameters = self.parameters.copy()\n    return safe_copy\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.entry_context","title":"entry_context","text":"

A convenience method that generates input parameters for orchestration rules.

An OrchestrationContext defines a state transition that is managed by orchestration rules which can fire hooks before a transition has been committed to the database. These hooks have a consistent interface which can be generated with this method.
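
Assuming a variable context holding an OrchestrationContext, the generated inputs look like the sketch below; exit_context (documented next) is analogous but pairs the initial state with the validated state.

# the triple that `before_transition` hooks receive
initial_state, proposed_state, safe_context = context.entry_context()

# the triple that `after_transition` and `cleanup` hooks receive
initial_state, validated_state, safe_context = context.exit_context()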

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
def entry_context(self):\n\"\"\"\n    A convenience method that generates input parameters for orchestration rules.\n\n    An `OrchestrationContext` defines a state transition that is managed by\n    orchestration rules which can fire hooks before a transition has been committed\n    to the database. These hooks have a consistent interface which can be generated\n    with this method.\n    \"\"\"\n\n    safe_context = self.safe_copy()\n    return safe_context.initial_state, safe_context.proposed_state, safe_context\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.OrchestrationContext.exit_context","title":"exit_context","text":"

A convenience method that generates input parameters for orchestration rules.

An OrchestrationContext defines a state transition that is managed by orchestration rules which can fire hooks after a transition has been committed to the database. These hooks have a consistent interface which can be generated with this method.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
def exit_context(self):\n\"\"\"\n    A convenience method that generates input parameters for orchestration rules.\n\n    An `OrchestrationContext` defines a state transition that is managed by\n    orchestration rules which can fire hooks after a transition has been committed\n    to the database. These hooks have a consistent interface which can be generated\n    with this method.\n    \"\"\"\n\n    safe_context = self.safe_copy()\n    return safe_context.initial_state, safe_context.validated_state, safe_context\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext","title":"FlowOrchestrationContext","text":"

Bases: OrchestrationContext

A container for a flow run state transition, governed by orchestration rules.

When a flow run attempts to change state, Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an OrchestrationContext, which is subsequently governed by nested orchestration rules implemented using the BaseOrchestrationRule ABC.

FlowOrchestrationContext introduces the concept of a state being None in the context of an intended state transition. An initial state can be None if a run is attempting to set a state for the first time. The proposed state might be None if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.

Attributes:

- session: a SQLAlchemy database session
- run (Any): the flow run attempting to change state
- initial_state (Any): the initial state of the run
- proposed_state (Any): the proposed state the run is transitioning into
- validated_state (Any): a proposed state that has been committed to the database
- rule_signature (Any): a record of rules that have fired on entry into a managed context, currently only used for debugging purposes
- finalization_signature (Any): a record of rules that have fired on exit from a managed context, currently only used for debugging purposes
- response_status (Any): a SetStateStatus object used to build the API response
- response_details (Any): a StateResponseDetails object used to build the API response

Parameters:

- session (required): a SQLAlchemy database session
- run (required): the flow run attempting to change state
- initial_state (required): the initial state of a run
- proposed_state (required): the proposed state a run is transitioning into

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
class FlowOrchestrationContext(OrchestrationContext):\n\"\"\"\n    A container for a flow run state transition, governed by orchestration rules.\n\n    When a flow- run attempts to change state, Prefect REST API has an opportunity\n    to decide whether this transition can proceed. All the relevant information\n    associated with the state transition is stored in an `OrchestrationContext`,\n    which is subsequently governed by nested orchestration rules implemented using\n    the `BaseOrchestrationRule` ABC.\n\n    `FlowOrchestrationContext` introduces the concept of a state being `None` in the\n    context of an intended state transition. An initial state can be `None` if a run\n    is is attempting to set a state for the first time. The proposed state might be\n    `None` if a rule governing the transition determines that no state change\n    should occur at all and nothing is written to the database.\n\n    Attributes:\n        session: a SQLAlchemy database session\n        run: the flow run attempting to change state\n        initial_state: the initial state of the run\n        proposed_state: the proposed state the run is transitioning into\n        validated_state: a proposed state that has committed to the database\n        rule_signature: a record of rules that have fired on entry into a\n            managed context, currently only used for debugging purposes\n        finalization_signature: a record of rules that have fired on exit from a\n            managed context, currently only used for debugging purposes\n        response_status: a SetStateStatus object used to build the API response\n        response_details:a StateResponseDetails object use to build the API response\n\n    Args:\n        session: a SQLAlchemy database session\n        run: the flow run attempting to change state\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n    \"\"\"\n\n    # run: db.FlowRun = ...\n    run: Any = ...\n\n    @inject_db\n    async def validate_proposed_state(\n        self,\n        db: PrefectDBInterface,\n    ):\n\"\"\"\n        Validates a proposed state by committing it to the database.\n\n        After the `FlowOrchestrationContext` is governed by orchestration rules, the\n        proposed state can be validated: the proposed state is added to the current\n        SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n        state. 
The state on the run is set to the validated state as well.\n\n        If the proposed state is `None` when this method is called, no state will be\n        written and `self.validated_state` will be set to the run's current state.\n\n        Returns:\n            None\n        \"\"\"\n        # (circular import)\n        from prefect.server.api.server import is_client_retryable_exception\n\n        try:\n            await self._validate_proposed_state()\n            return\n        except Exception as exc:\n            logger.exception(\"Encountered error during state validation\")\n            self.proposed_state = None\n\n            if is_client_retryable_exception(exc):\n                # Do not capture retryable database exceptions, this exception will be\n                # raised as a 503 in the API layer\n                raise\n\n            reason = f\"Error validating state: {exc!r}\"\n            self.response_status = SetStateStatus.ABORT\n            self.response_details = StateAbortDetails(reason=reason)\n\n    @inject_db\n    async def _validate_proposed_state(\n        self,\n        db: PrefectDBInterface,\n    ):\n        if self.proposed_state is None:\n            validated_orm_state = self.run.state\n            # We cannot access `self.run.state.data` directly for unknown reasons\n            state_data = (\n                (\n                    await artifacts.read_artifact(\n                        self.session, self.run.state.result_artifact_id\n                    )\n                ).data\n                if self.run.state.result_artifact_id\n                else None\n            )\n        else:\n            state_payload = self.proposed_state.dict(shallow=True)\n            state_data = state_payload.pop(\"data\", None)\n\n            if state_data is not None:\n                state_result_artifact = core.Artifact.from_result(state_data)\n                state_result_artifact.flow_run_id = self.run.id\n                await artifacts.create_artifact(self.session, state_result_artifact)\n                state_payload[\"result_artifact_id\"] = state_result_artifact.id\n\n            validated_orm_state = db.FlowRunState(\n                flow_run_id=self.run.id,\n                **state_payload,\n            )\n\n        self.session.add(validated_orm_state)\n        self.run.set_state(validated_orm_state)\n\n        await self.session.flush()\n        if validated_orm_state:\n            self.validated_state = states.State.from_orm_without_result(\n                validated_orm_state, with_data=state_data\n            )\n        else:\n            self.validated_state = None\n\n    def safe_copy(self):\n\"\"\"\n        Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n        Orchestration rules govern state transitions using information stored in\n        an `OrchestrationContext`. However, mutating objects stored on the context\n        directly can have unintended side-effects. 
To guard against this,\n        `self.safe_copy` can be used to pass information to orchestration rules\n        without risking mutation.\n\n        Note:\n            `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n        Returns:\n            A mutation-safe copy of `FlowOrchestrationContext`\n        \"\"\"\n\n        return super().safe_copy()\n\n    @property\n    def run_settings(self) -> Dict:\n\"\"\"Run-level settings used to orchestrate the state transition.\"\"\"\n\n        return self.run.empirical_policy\n\n    async def task_run(self):\n        return None\n\n    async def flow_run(self):\n        return self.run\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext.run_settings","title":"run_settings: Dict property","text":"

Run-level settings used to orchestrate the state transition.

"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext.validate_proposed_state","title":"validate_proposed_state async","text":"

Validates a proposed state by committing it to the database.

After the FlowOrchestrationContext is governed by orchestration rules, the proposed state can be validated: the proposed state is added to the current SQLAlchemy session and is flushed. self.validated_state is set to the flushed state. The state on the run is set to the validated state as well.

If the proposed state is None when this method is called, no state will be written and self.validated_state will be set to the run's current state.

Returns:

None
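
Pieced together from the rule examples elsewhere on this page, the overall shape of governing and then validating a flow run transition might look roughly like the sketch below; this is not Prefect's actual orchestration loop. The policy (MyFlowPolicy) is hypothetical, and the FlowOrchestrationContext is assumed to have been constructed already with a database session.

import contextlib


async def orchestrate(context):
    # context: a FlowOrchestrationContext built for the intended transition
    transition = (context.initial_state_type, context.proposed_state_type)
    rules = MyFlowPolicy.compile_transition_rules(*transition)

    async with contextlib.AsyncExitStack() as stack:
        for rule in rules:
            context = await stack.enter_async_context(rule(context, *transition))

        # once every rule has governed the proposed state, commit it;
        # the database session is injected by the @inject_db decorator
        await context.validate_proposed_state()

    # the state written to the database, or the run's current state if nothing was written
    return context.validated_state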

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
@inject_db\nasync def validate_proposed_state(\n    self,\n    db: PrefectDBInterface,\n):\n\"\"\"\n    Validates a proposed state by committing it to the database.\n\n    After the `FlowOrchestrationContext` is governed by orchestration rules, the\n    proposed state can be validated: the proposed state is added to the current\n    SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n    state. The state on the run is set to the validated state as well.\n\n    If the proposed state is `None` when this method is called, no state will be\n    written and `self.validated_state` will be set to the run's current state.\n\n    Returns:\n        None\n    \"\"\"\n    # (circular import)\n    from prefect.server.api.server import is_client_retryable_exception\n\n    try:\n        await self._validate_proposed_state()\n        return\n    except Exception as exc:\n        logger.exception(\"Encountered error during state validation\")\n        self.proposed_state = None\n\n        if is_client_retryable_exception(exc):\n            # Do not capture retryable database exceptions, this exception will be\n            # raised as a 503 in the API layer\n            raise\n\n        reason = f\"Error validating state: {exc!r}\"\n        self.response_status = SetStateStatus.ABORT\n        self.response_details = StateAbortDetails(reason=reason)\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.FlowOrchestrationContext.safe_copy","title":"safe_copy","text":"

Creates a mostly-mutation-safe copy for use in orchestration rules.

Orchestration rules govern state transitions using information stored in an OrchestrationContext. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, self.safe_copy can be used to pass information to orchestration rules without risking mutation.

Note

self.run is an ORM model and is unsafe to mutate even when copied

Returns:

A mutation-safe copy of FlowOrchestrationContext

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
def safe_copy(self):\n\"\"\"\n    Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n    Orchestration rules govern state transitions using information stored in\n    an `OrchestrationContext`. However, mutating objects stored on the context\n    directly can have unintended side-effects. To guard against this,\n    `self.safe_copy` can be used to pass information to orchestration rules\n    without risking mutation.\n\n    Note:\n        `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n    Returns:\n        A mutation-safe copy of `FlowOrchestrationContext`\n    \"\"\"\n\n    return super().safe_copy()\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext","title":"TaskOrchestrationContext","text":"

Bases: OrchestrationContext

A container for a task run state transition, governed by orchestration rules.

When a task run attempts to change state, Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an OrchestrationContext, which is subsequently governed by nested orchestration rules implemented using the BaseOrchestrationRule ABC.

TaskOrchestrationContext introduces the concept of a state being None in the context of an intended state transition. An initial state can be None if a run is attempting to set a state for the first time. The proposed state might be None if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.

Attributes:

- session: a SQLAlchemy database session
- run (Any): the task run attempting to change state
- initial_state (Any): the initial state of the run
- proposed_state (Any): the proposed state the run is transitioning into
- validated_state (Any): a proposed state that has been committed to the database
- rule_signature (Any): a record of rules that have fired on entry into a managed context, currently only used for debugging purposes
- finalization_signature (Any): a record of rules that have fired on exit from a managed context, currently only used for debugging purposes
- response_status (Any): a SetStateStatus object used to build the API response
- response_details (Any): a StateResponseDetails object used to build the API response

Parameters:

- session (required): a SQLAlchemy database session
- run (required): the task run attempting to change state
- initial_state (required): the initial state of a run
- proposed_state (required): the proposed state a run is transitioning into

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
class TaskOrchestrationContext(OrchestrationContext):\n\"\"\"\n    A container for a task run state transition, governed by orchestration rules.\n\n    When a task- run attempts to change state, Prefect REST API has an opportunity\n    to decide whether this transition can proceed. All the relevant information\n    associated with the state transition is stored in an `OrchestrationContext`,\n    which is subsequently governed by nested orchestration rules implemented using\n    the `BaseOrchestrationRule` ABC.\n\n    `TaskOrchestrationContext` introduces the concept of a state being `None` in the\n    context of an intended state transition. An initial state can be `None` if a run\n    is is attempting to set a state for the first time. The proposed state might be\n    `None` if a rule governing the transition determines that no state change\n    should occur at all and nothing is written to the database.\n\n    Attributes:\n        session: a SQLAlchemy database session\n        run: the task run attempting to change state\n        initial_state: the initial state of the run\n        proposed_state: the proposed state the run is transitioning into\n        validated_state: a proposed state that has committed to the database\n        rule_signature: a record of rules that have fired on entry into a\n            managed context, currently only used for debugging purposes\n        finalization_signature: a record of rules that have fired on exit from a\n            managed context, currently only used for debugging purposes\n        response_status: a SetStateStatus object used to build the API response\n        response_details:a StateResponseDetails object use to build the API response\n\n    Args:\n        session: a SQLAlchemy database session\n        run: the task run attempting to change state\n        initial_state: the initial state of a run\n        proposed_state: the proposed state a run is transitioning into\n    \"\"\"\n\n    # run: db.TaskRun = ...\n    run: Any = ...\n\n    @inject_db\n    async def validate_proposed_state(\n        self,\n        db: PrefectDBInterface,\n    ):\n\"\"\"\n        Validates a proposed state by committing it to the database.\n\n        After the `TaskOrchestrationContext` is governed by orchestration rules, the\n        proposed state can be validated: the proposed state is added to the current\n        SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n        state. 
The state on the run is set to the validated state as well.\n\n        If the proposed state is `None` when this method is called, no state will be\n        written and `self.validated_state` will be set to the run's current state.\n\n        Returns:\n            None\n        \"\"\"\n        # (circular import)\n        from prefect.server.api.server import is_client_retryable_exception\n\n        try:\n            await self._validate_proposed_state()\n            return\n        except Exception as exc:\n            logger.exception(\"Encountered error during state validation\")\n            self.proposed_state = None\n\n            if is_client_retryable_exception(exc):\n                # Do not capture retryable database exceptions, this exception will be\n                # raised as a 503 in the API layer\n                raise\n\n            reason = f\"Error validating state: {exc!r}\"\n            self.response_status = SetStateStatus.ABORT\n            self.response_details = StateAbortDetails(reason=reason)\n\n    @inject_db\n    async def _validate_proposed_state(\n        self,\n        db: PrefectDBInterface,\n    ):\n        if self.proposed_state is None:\n            validated_orm_state = self.run.state\n            # We cannot access `self.run.state.data` directly for unknown reasons\n            state_data = (\n                (\n                    await artifacts.read_artifact(\n                        self.session, self.run.state.result_artifact_id\n                    )\n                ).data\n                if self.run.state.result_artifact_id\n                else None\n            )\n        else:\n            state_payload = self.proposed_state.dict(shallow=True)\n            state_data = state_payload.pop(\"data\", None)\n\n            if state_data is not None:\n                state_result_artifact = core.Artifact.from_result(state_data)\n                state_result_artifact.task_run_id = self.run.id\n\n                flow_run = await self.flow_run()\n                state_result_artifact.flow_run_id = flow_run.id\n\n                await artifacts.create_artifact(self.session, state_result_artifact)\n                state_payload[\"result_artifact_id\"] = state_result_artifact.id\n\n            validated_orm_state = db.TaskRunState(\n                task_run_id=self.run.id,\n                **state_payload,\n            )\n\n        self.session.add(validated_orm_state)\n        self.run.set_state(validated_orm_state)\n\n        await self.session.flush()\n        if validated_orm_state:\n            self.validated_state = states.State.from_orm_without_result(\n                validated_orm_state, with_data=state_data\n            )\n        else:\n            self.validated_state = None\n\n    def safe_copy(self):\n\"\"\"\n        Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n        Orchestration rules govern state transitions using information stored in\n        an `OrchestrationContext`. However, mutating objects stored on the context\n        directly can have unintended side-effects. 
To guard against this,\n        `self.safe_copy` can be used to pass information to orchestration rules\n        without risking mutation.\n\n        Note:\n            `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n        Returns:\n            A mutation-safe copy of `TaskOrchestrationContext`\n        \"\"\"\n\n        return super().safe_copy()\n\n    @property\n    def run_settings(self) -> Dict:\n\"\"\"Run-level settings used to orchestrate the state transition.\"\"\"\n\n        return self.run.empirical_policy\n\n    async def task_run(self):\n        return self.run\n\n    async def flow_run(self):\n        return await flow_runs.read_flow_run(\n            session=self.session,\n            flow_run_id=self.run.flow_run_id,\n        )\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext.run_settings","title":"run_settings: Dict property","text":"

Run-level settings used to orchestrate the state transition.

"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext.validate_proposed_state","title":"validate_proposed_state async","text":"

Validates a proposed state by committing it to the database.

After the TaskOrchestrationContext is governed by orchestration rules, the proposed state can be validated: the proposed state is added to the current SQLAlchemy session and is flushed. self.validated_state is set to the flushed state. The state on the run is set to the validated state as well.

If the proposed state is None when this method is called, no state will be written and self.validated_state will be set to the run's current state.

Returns:

None

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
@inject_db\nasync def validate_proposed_state(\n    self,\n    db: PrefectDBInterface,\n):\n\"\"\"\n    Validates a proposed state by committing it to the database.\n\n    After the `TaskOrchestrationContext` is governed by orchestration rules, the\n    proposed state can be validated: the proposed state is added to the current\n    SQLAlchemy session and is flushed. `self.validated_state` set to the flushed\n    state. The state on the run is set to the validated state as well.\n\n    If the proposed state is `None` when this method is called, no state will be\n    written and `self.validated_state` will be set to the run's current state.\n\n    Returns:\n        None\n    \"\"\"\n    # (circular import)\n    from prefect.server.api.server import is_client_retryable_exception\n\n    try:\n        await self._validate_proposed_state()\n        return\n    except Exception as exc:\n        logger.exception(\"Encountered error during state validation\")\n        self.proposed_state = None\n\n        if is_client_retryable_exception(exc):\n            # Do not capture retryable database exceptions, this exception will be\n            # raised as a 503 in the API layer\n            raise\n\n        reason = f\"Error validating state: {exc!r}\"\n        self.response_status = SetStateStatus.ABORT\n        self.response_details = StateAbortDetails(reason=reason)\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.TaskOrchestrationContext.safe_copy","title":"safe_copy","text":"

Creates a mostly-mutation-safe copy for use in orchestration rules.

Orchestration rules govern state transitions using information stored in an OrchestrationContext. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, self.safe_copy can be used to pass information to orchestration rules without risking mutation.

Note

self.run is an ORM model and is unsafe to mutate even when copied

Returns:

A mutation-safe copy of TaskOrchestrationContext

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
def safe_copy(self):\n\"\"\"\n    Creates a mostly-mutation-safe copy for use in orchestration rules.\n\n    Orchestration rules govern state transitions using information stored in\n    an `OrchestrationContext`. However, mutating objects stored on the context\n    directly can have unintended side-effects. To guard against this,\n    `self.safe_copy` can be used to pass information to orchestration rules\n    without risking mutation.\n\n    Note:\n        `self.run` is an ORM model, and even when copied is unsafe to mutate\n\n    Returns:\n        A mutation-safe copy of `TaskOrchestrationContext`\n    \"\"\"\n\n    return super().safe_copy()\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule","title":"BaseOrchestrationRule","text":"

Bases: contextlib.AbstractAsyncContextManager

An abstract base class used to implement a discrete piece of orchestration logic.

An OrchestrationRule is a stateful context manager that directly governs a state transition. Complex orchestration is achieved by nesting multiple rules. Each rule runs against an OrchestrationContext that contains the transition details; this context is then passed to subsequent rules. The context can be modified by hooks that fire before and after a new state is validated and committed to the database. These hooks will fire as long as the state transition is considered \"valid\" and govern a transition by either modifying the proposed state before it is validated or by producing a side-effect.

A state transition occurs whenever a flow- or task- run changes state, prompting Prefect REST API to decide whether or not this transition can proceed. The current state of the run is referred to as the \"initial state\", and the state a run is attempting to transition into is the \"proposed state\". Together, the initial state transitioning into the proposed state is the intended transition that is governed by these orchestration rules. After using rules to enter a runtime context, the OrchestrationContext will contain a proposed state that has been governed by each rule, and at that point can validate the proposed state and commit it to the database. The validated state will be set on the context as context.validated_state, and rules will call the self.after_transition hook upon exiting the managed context.

Examples:

Create a rule:

>>> class BasicRule(BaseOrchestrationRule):\n>>>     # allowed initial state types\n>>>     FROM_STATES = [StateType.RUNNING]\n>>>     # allowed proposed state types\n>>>     TO_STATES = [StateType.COMPLETED, StateType.FAILED]\n>>>\n>>>     async def before_transition(self, initial_state, proposed_state, ctx):\n>>>         # side effects and proposed state mutation can happen here\n>>>         ...\n>>>\n>>>     async def after_transition(self, initial_state, validated_state, ctx):\n>>>         # operations on states that have been validated can happen here\n>>>         ...\n>>>\n>>>     async def cleanup(self, initial_state, validated_state, ctx):\n>>>         # reverts side effects generated by `before_transition` if necessary\n>>>         ...\n

Use a rule:

>>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n>>> async with BasicRule(context, *intended_transition):\n>>>     # context.proposed_state has been governed by BasicRule\n>>>     ...\n

Use multiple rules:

>>> rules = [BasicRule, BasicRule]\n>>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n>>> async with contextlib.AsyncExitStack() as stack:\n>>>     for rule in rules:\n>>>         await stack.enter_async_context(rule(context, *intended_transition))\n>>>\n>>>     # context.proposed_state has been governed by all rules\n>>>     ...\n

Attributes:

- FROM_STATES (Iterable): list of valid initial state types this rule governs
- TO_STATES (Iterable): list of valid proposed state types this rule governs
- context: the orchestration context
- from_state_type: the state type a run is currently in
- to_state_type: the intended proposed state type prior to any orchestration

Parameters:

- context (OrchestrationContext, required): a FlowOrchestrationContext or TaskOrchestrationContext that is passed between rules
- from_state_type (Optional[states.StateType], required): the state type of the initial state of a run; if this state type is not contained in FROM_STATES, no hooks will fire
- to_state_type (Optional[states.StateType], required): the state type of the proposed state before orchestration; if this state type is not contained in TO_STATES, no hooks will fire

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
class BaseOrchestrationRule(contextlib.AbstractAsyncContextManager):\n\"\"\"\n    An abstract base class used to implement a discrete piece of orchestration logic.\n\n    An `OrchestrationRule` is a stateful context manager that directly governs a state\n    transition. Complex orchestration is achieved by nesting multiple rules.\n    Each rule runs against an `OrchestrationContext` that contains the transition\n    details; this context is then passed to subsequent rules. The context can be\n    modified by hooks that fire before and after a new state is validated and committed\n    to the database. These hooks will fire as long as the state transition is\n    considered \"valid\" and govern a transition by either modifying the proposed state\n    before it is validated or by producing a side-effect.\n\n    A state transition occurs whenever a flow- or task- run changes state, prompting\n    Prefect REST API to decide whether or not this transition can proceed. The current state of\n    the run is referred to as the \"initial state\", and the state a run is\n    attempting to transition into is the \"proposed state\". Together, the initial state\n    transitioning into the proposed state is the intended transition that is governed\n    by these orchestration rules. After using rules to enter a runtime context, the\n    `OrchestrationContext` will contain a proposed state that has been governed by\n    each rule, and at that point can validate the proposed state and commit it to\n    the database. The validated state will be set on the context as\n    `context.validated_state`, and rules will call the `self.after_transition` hook\n    upon exiting the managed context.\n\n    Examples:\n\n        Create a rule:\n\n        >>> class BasicRule(BaseOrchestrationRule):\n        >>>     # allowed initial state types\n        >>>     FROM_STATES = [StateType.RUNNING]\n        >>>     # allowed proposed state types\n        >>>     TO_STATES = [StateType.COMPLETED, StateType.FAILED]\n        >>>\n        >>>     async def before_transition(initial_state, proposed_state, ctx):\n        >>>         # side effects and proposed state mutation can happen here\n        >>>         ...\n        >>>\n        >>>     async def after_transition(initial_state, validated_state, ctx):\n        >>>         # operations on states that have been validated can happen here\n        >>>         ...\n        >>>\n        >>>     async def cleanup(intitial_state, validated_state, ctx):\n        >>>         # reverts side effects generated by `before_transition` if necessary\n        >>>         ...\n\n        Use a rule:\n\n        >>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n        >>> async with BasicRule(context, *intended_transition):\n        >>>     # context.proposed_state has been governed by BasicRule\n        >>>     ...\n\n        Use multiple rules:\n\n        >>> rules = [BasicRule, BasicRule]\n        >>> intended_transition = (StateType.RUNNING, StateType.COMPLETED)\n        >>> async with contextlib.AsyncExitStack() as stack:\n        >>>     for rule in rules:\n        >>>         stack.enter_async_context(rule(context, *intended_transition))\n        >>>\n        >>>     # context.proposed_state has been governed by all rules\n        >>>     ...\n\n    Attributes:\n        FROM_STATES: list of valid initial state types this rule governs\n        TO_STATES: list of valid proposed state types this rule governs\n        context: the orchestration context\n        
from_state_type: the state type a run is currently in\n        to_state_type: the intended proposed state type prior to any orchestration\n\n    Args:\n        context: A `FlowOrchestrationContext` or `TaskOrchestrationContext` that is\n            passed between rules\n        from_state_type: The state type of the initial state of a run, if this\n            state type is not contained in `FROM_STATES`, no hooks will fire\n        to_state_type: The state type of the proposed state before orchestration, if\n            this state type is not contained in `TO_STATES`, no hooks will fire\n    \"\"\"\n\n    FROM_STATES: Iterable = []\n    TO_STATES: Iterable = []\n\n    def __init__(\n        self,\n        context: OrchestrationContext,\n        from_state_type: Optional[states.StateType],\n        to_state_type: Optional[states.StateType],\n    ):\n        self.context = context\n        self.from_state_type = from_state_type\n        self.to_state_type = to_state_type\n        self._invalid_on_entry = None\n\n    async def __aenter__(self) -> OrchestrationContext:\n\"\"\"\n        Enter an async runtime context governed by this rule.\n\n        The `with` statement will bind a governed `OrchestrationContext` to the target\n        specified by the `as` clause. If the transition proposed by the\n        `OrchestrationContext` is considered invalid on entry, entering this context\n        will do nothing. Otherwise, `self.before_transition` will fire.\n        \"\"\"\n\n        if await self.invalid():\n            pass\n        else:\n            try:\n                entry_context = self.context.entry_context()\n                await self.before_transition(*entry_context)\n                self.context.rule_signature.append(str(self.__class__))\n            except Exception as before_transition_error:\n                reason = (\n                    f\"Aborting orchestration due to error in {self.__class__!r}:\"\n                    f\" !{before_transition_error!r}\"\n                )\n                logger.exception(\n                    f\"Error running before-transition hook in rule {self.__class__!r}:\"\n                    f\" !{before_transition_error!r}\"\n                )\n\n                self.context.proposed_state = None\n                self.context.response_status = SetStateStatus.ABORT\n                self.context.response_details = StateAbortDetails(reason=reason)\n                self.context.orchestration_error = before_transition_error\n\n        return self.context\n\n    async def __aexit__(\n        self,\n        exc_type: Optional[Type[BaseException]],\n        exc_val: Optional[BaseException],\n        exc_tb: Optional[TracebackType],\n    ) -> None:\n\"\"\"\n        Exit the async runtime context governed by this rule.\n\n        One of three outcomes can happen upon exiting this rule's context depending on\n        the state of the rule. If the rule was found to be invalid on entry, nothing\n        happens. If the rule was valid on entry and continues to be valid on exit,\n        `self.after_transition` will fire. 
If the rule was valid on entry but invalid\n        on exit, the rule will \"fizzle\" and `self.cleanup` will fire in order to revert\n        any side-effects produced by `self.before_transition`.\n        \"\"\"\n\n        exit_context = self.context.exit_context()\n        if await self.invalid():\n            pass\n        elif await self.fizzled():\n            await self.cleanup(*exit_context)\n        else:\n            await self.after_transition(*exit_context)\n            self.context.finalization_signature.append(str(self.__class__))\n\n    async def before_transition(\n        self,\n        initial_state: Optional[states.State],\n        proposed_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n\"\"\"\n        Implements a hook that can fire before a state is committed to the database.\n\n        This hook may produce side-effects or mutate the proposed state of a\n        transition using one of four methods: `self.reject_transition`,\n        `self.delay_transition`, `self.abort_transition`, and `self.rename_state`.\n\n        Note:\n            As currently implemented, the `before_transition` hook is not\n            perfectly isolated from mutating the transition. It is a standard instance\n            method that has access to `self`, and therefore `self.context`. This should\n            never be modified directly. Furthermore, `context.run` is an ORM model, and\n            mutating the run can also cause unintended writes to the database.\n\n        Args:\n            initial_state: The initial state of a transtion\n            proposed_state: The proposed state of a transition\n            context: A safe copy of the `OrchestrationContext`, with the exception of\n                `context.run`, mutating this context will have no effect on the broader\n                orchestration environment.\n\n        Returns:\n            None\n        \"\"\"\n\n    async def after_transition(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n\"\"\"\n        Implements a hook that can fire after a state is committed to the database.\n\n        Args:\n            initial_state: The initial state of a transtion\n            validated_state: The governed state that has been committed to the database\n            context: A safe copy of the `OrchestrationContext`, with the exception of\n                `context.run`, mutating this context will have no effect on the broader\n                orchestration environment.\n\n        Returns:\n            None\n        \"\"\"\n\n    async def cleanup(\n        self,\n        initial_state: Optional[states.State],\n        validated_state: Optional[states.State],\n        context: OrchestrationContext,\n    ) -> None:\n\"\"\"\n        Implements a hook that can fire after a state is committed to the database.\n\n        The intended use of this method is to revert side-effects produced by\n        `self.before_transition` when the transition is found to be invalid on exit.\n        This allows multiple rules to be gracefully run in sequence, without logic that\n        keeps track of all other rules that might govern a transition.\n\n        Args:\n            initial_state: The initial state of a transtion\n            validated_state: The governed state that has been committed to the database\n            context: A safe copy of the `OrchestrationContext`, with the exception of\n      
          `context.run`, mutating this context will have no effect on the broader\n                orchestration environment.\n\n        Returns:\n            None\n        \"\"\"\n\n    async def invalid(self) -> bool:\n\"\"\"\n        Determines if a rule is invalid.\n\n        Invalid rules do nothing and no hooks fire upon entering or exiting a governed\n        context. Rules are invalid if the transition states types are not contained in\n        `self.FROM_STATES` and `self.TO_STATES`, or if the context is proposing\n        a transition that differs from the transition the rule was instantiated with.\n\n        Returns:\n            True if the rules in invalid, False otherwise.\n        \"\"\"\n        # invalid and fizzled states are mutually exclusive,\n        # `_invalid_on_entry` holds this statefulness\n        if self.from_state_type not in self.FROM_STATES:\n            self._invalid_on_entry = True\n        if self.to_state_type not in self.TO_STATES:\n            self._invalid_on_entry = True\n\n        if self._invalid_on_entry is None:\n            self._invalid_on_entry = await self.invalid_transition()\n        return self._invalid_on_entry\n\n    async def fizzled(self) -> bool:\n\"\"\"\n        Determines if a rule is fizzled and side-effects need to be reverted.\n\n        Rules are fizzled if the transitions were valid on entry (thus firing\n        `self.before_transition`) but are invalid upon exiting the governed context,\n        most likely caused by another rule mutating the transition.\n\n        Returns:\n            True if the rule is fizzled, False otherwise.\n        \"\"\"\n\n        if self._invalid_on_entry:\n            return False\n        return await self.invalid_transition()\n\n    async def invalid_transition(self) -> bool:\n\"\"\"\n        Determines if the transition proposed by the `OrchestrationContext` is invalid.\n\n        If the `OrchestrationContext` is attempting to manage a transition with this\n        rule that differs from the transition the rule was instantiated with, the\n        transition is considered to be invalid. Depending on the context, a rule with an\n        invalid transition is either \"invalid\" or \"fizzled\".\n\n        Returns:\n            True if the transition is invalid, False otherwise.\n        \"\"\"\n\n        initial_state_type = self.context.initial_state_type\n        proposed_state_type = self.context.proposed_state_type\n        return (self.from_state_type != initial_state_type) or (\n            self.to_state_type != proposed_state_type\n        )\n\n    async def reject_transition(self, state: Optional[states.State], reason: str):\n\"\"\"\n        Rejects a proposed transition before the transition is validated.\n\n        This method will reject a proposed transition, mutating the proposed state to\n        the provided `state`. A reason for rejecting the transition is also passed on\n        to the `OrchestrationContext`. Rules that reject the transition will not fizzle,\n        despite the proposed state type changing.\n\n        Args:\n            state: The new proposed state. 
If `None`, the current run state will be\n                returned in the result instead.\n            reason: The reason for rejecting the transition\n        \"\"\"\n\n        # don't run if the transition is already validated\n        if self.context.validated_state:\n            raise RuntimeError(\"The transition is already validated\")\n\n        # the current state will be used if a new one is not provided\n        if state is None:\n            if self.from_state_type is None:\n                raise OrchestrationError(\n                    \"The current run has no state; this transition cannot be \"\n                    \"rejected without providing a new state.\"\n                )\n            self.to_state_type = None\n            self.context.proposed_state = None\n        else:\n            # a rule that mutates state should not fizzle itself\n            self.to_state_type = state.type\n            self.context.proposed_state = state\n\n        self.context.response_status = SetStateStatus.REJECT\n        self.context.response_details = StateRejectDetails(reason=reason)\n\n    async def delay_transition(\n        self,\n        delay_seconds: int,\n        reason: str,\n    ):\n\"\"\"\n        Delays a proposed transition before the transition is validated.\n\n        This method will delay a proposed transition, setting the proposed state to\n        `None`, signaling to the `OrchestrationContext` that no state should be\n        written to the database. The number of seconds a transition should be delayed is\n        passed to the `OrchestrationContext`. A reason for delaying the transition is\n        also provided. Rules that delay the transition will not fizzle, despite the\n        proposed state type changing.\n\n        Args:\n            delay_seconds: The number of seconds the transition should be delayed\n            reason: The reason for delaying the transition\n        \"\"\"\n\n        # don't run if the transition is already validated\n        if self.context.validated_state:\n            raise RuntimeError(\"The transition is already validated\")\n\n        # a rule that mutates state should not fizzle itself\n        self.to_state_type = None\n        self.context.proposed_state = None\n        self.context.response_status = SetStateStatus.WAIT\n        self.context.response_details = StateWaitDetails(\n            delay_seconds=delay_seconds, reason=reason\n        )\n\n    async def abort_transition(self, reason: str):\n\"\"\"\n        Aborts a proposed transition before the transition is validated.\n\n        This method will abort a proposed transition, expecting no further action to\n        occur for this run. The proposed state is set to `None`, signaling to the\n        `OrchestrationContext` that no state should be written to the database. A\n        reason for aborting the transition is also provided. 
Rules that abort the\n        transition will not fizzle, despite the proposed state type changing.\n\n        Args:\n            reason: The reason for aborting the transition\n        \"\"\"\n\n        # don't run if the transition is already validated\n        if self.context.validated_state:\n            raise RuntimeError(\"The transition is already validated\")\n\n        # a rule that mutates state should not fizzle itself\n        self.to_state_type = None\n        self.context.proposed_state = None\n        self.context.response_status = SetStateStatus.ABORT\n        self.context.response_details = StateAbortDetails(reason=reason)\n\n    async def rename_state(self, state_name):\n\"\"\"\n        Sets the \"name\" attribute on a proposed state.\n\n        The name of a state is an annotation intended to provide rich, human-readable\n        context for how a run is progressing. This method only updates the name and not\n        the canonical state TYPE, and will not fizzle or invalidate any other rules\n        that might govern this state transition.\n        \"\"\"\n        if self.context.proposed_state is not None:\n            self.context.proposed_state.name = state_name\n\n    async def update_context_parameters(self, key, value):\n\"\"\"\n        Updates the \"parameters\" dictionary attribute with the specified key-value pair.\n\n        This mechanism streamlines the process of passing messages and information\n        between orchestration rules if necessary and is simpler and more ephemeral than\n        message-passing via the database or some other side-effect. This mechanism can\n        be used to break up large rules for ease of testing or comprehension, but note\n        that any rules coupled this way (or any other way) are no longer independent and\n        the order in which they appear in the orchestration policy priority will matter.\n        \"\"\"\n\n        self.context.parameters.update({key: value})\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.before_transition","title":"before_transition async","text":"

Implements a hook that can fire before a state is committed to the database.

This hook may produce side-effects or mutate the proposed state of a transition using one of four methods: self.reject_transition, self.delay_transition, self.abort_transition, and self.rename_state.

Note

As currently implemented, the before_transition hook is not perfectly isolated from mutating the transition. It is a standard instance method that has access to self, and therefore self.context. This should never be modified directly. Furthermore, context.run is an ORM model, and mutating the run can also cause unintended writes to the database.

Parameters:

Name Type Description Default initial_state Optional[states.State]

The initial state of a transition

required proposed_state Optional[states.State]

The proposed state of a transition

required context OrchestrationContext

A safe copy of the OrchestrationContext; with the exception of context.run, mutating this context will have no effect on the broader orchestration environment.

required

Returns:

Type Description None

None

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def before_transition(\n    self,\n    initial_state: Optional[states.State],\n    proposed_state: Optional[states.State],\n    context: OrchestrationContext,\n) -> None:\n\"\"\"\n    Implements a hook that can fire before a state is committed to the database.\n\n    This hook may produce side-effects or mutate the proposed state of a\n    transition using one of four methods: `self.reject_transition`,\n    `self.delay_transition`, `self.abort_transition`, and `self.rename_state`.\n\n    Note:\n        As currently implemented, the `before_transition` hook is not\n        perfectly isolated from mutating the transition. It is a standard instance\n        method that has access to `self`, and therefore `self.context`. This should\n        never be modified directly. Furthermore, `context.run` is an ORM model, and\n        mutating the run can also cause unintended writes to the database.\n\n    Args:\n        initial_state: The initial state of a transtion\n        proposed_state: The proposed state of a transition\n        context: A safe copy of the `OrchestrationContext`, with the exception of\n            `context.run`, mutating this context will have no effect on the broader\n            orchestration environment.\n\n    Returns:\n        None\n    \"\"\"\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.after_transition","title":"after_transition async","text":"

Implements a hook that can fire after a state is committed to the database.

Parameters:

Name Type Description Default initial_state Optional[states.State]

The initial state of a transition

required validated_state Optional[states.State]

The governed state that has been committed to the database

required context OrchestrationContext

A safe copy of the OrchestrationContext; with the exception of context.run, mutating this context will have no effect on the broader orchestration environment.

required

Returns:

Type Description None

None

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def after_transition(\n    self,\n    initial_state: Optional[states.State],\n    validated_state: Optional[states.State],\n    context: OrchestrationContext,\n) -> None:\n\"\"\"\n    Implements a hook that can fire after a state is committed to the database.\n\n    Args:\n        initial_state: The initial state of a transtion\n        validated_state: The governed state that has been committed to the database\n        context: A safe copy of the `OrchestrationContext`, with the exception of\n            `context.run`, mutating this context will have no effect on the broader\n            orchestration environment.\n\n    Returns:\n        None\n    \"\"\"\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.cleanup","title":"cleanup async","text":"

Implements a hook that can fire after a state is committed to the database.

The intended use of this method is to revert side-effects produced by self.before_transition when the transition is found to be invalid on exit. This allows multiple rules to be gracefully run in sequence, without logic that keeps track of all other rules that might govern a transition.

Parameters:

Name Type Description Default initial_state Optional[states.State]

The initial state of a transition

required validated_state Optional[states.State]

The governed state that has been committed to the database

required context OrchestrationContext

A safe copy of the OrchestrationContext; with the exception of context.run, mutating this context will have no effect on the broader orchestration environment.

required

Returns:

Type Description None

None

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def cleanup(\n    self,\n    initial_state: Optional[states.State],\n    validated_state: Optional[states.State],\n    context: OrchestrationContext,\n) -> None:\n\"\"\"\n    Implements a hook that can fire after a state is committed to the database.\n\n    The intended use of this method is to revert side-effects produced by\n    `self.before_transition` when the transition is found to be invalid on exit.\n    This allows multiple rules to be gracefully run in sequence, without logic that\n    keeps track of all other rules that might govern a transition.\n\n    Args:\n        initial_state: The initial state of a transtion\n        validated_state: The governed state that has been committed to the database\n        context: A safe copy of the `OrchestrationContext`, with the exception of\n            `context.run`, mutating this context will have no effect on the broader\n            orchestration environment.\n\n    Returns:\n        None\n    \"\"\"\n
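
A minimal, hypothetical sketch (not part of the Prefect source) of how before_transition, after_transition, and cleanup fit together in a custom rule; the rule name, state types, and the external limiter it refers to are illustrative assumptions only.

class ReserveSlotRule(BaseOrchestrationRule):\n    # fire only when a PENDING run proposes a transition to RUNNING\n    FROM_STATES = [states.StateType.PENDING]\n    TO_STATES = [states.StateType.RUNNING]\n\n    async def before_transition(self, initial_state, proposed_state, context):\n        # produce a side-effect, e.g. reserve a slot in a hypothetical external limiter\n        ...\n\n    async def after_transition(self, initial_state, validated_state, context):\n        # the transition was committed to the database; nothing to revert\n        ...\n\n    async def cleanup(self, initial_state, validated_state, context):\n        # the rule fizzled: another rule mutated the transition after\n        # before_transition fired, so release the reserved slot here\n        ...\n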
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.invalid","title":"invalid async","text":"

Determines if a rule is invalid.

Invalid rules do nothing and no hooks fire upon entering or exiting a governed context. Rules are invalid if the transition's state types are not contained in self.FROM_STATES and self.TO_STATES, or if the context is proposing a transition that differs from the transition the rule was instantiated with.

Returns:

Type Description bool

True if the rule is invalid, False otherwise.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def invalid(self) -> bool:\n\"\"\"\n    Determines if a rule is invalid.\n\n    Invalid rules do nothing and no hooks fire upon entering or exiting a governed\n    context. Rules are invalid if the transition states types are not contained in\n    `self.FROM_STATES` and `self.TO_STATES`, or if the context is proposing\n    a transition that differs from the transition the rule was instantiated with.\n\n    Returns:\n        True if the rules in invalid, False otherwise.\n    \"\"\"\n    # invalid and fizzled states are mutually exclusive,\n    # `_invalid_on_entry` holds this statefulness\n    if self.from_state_type not in self.FROM_STATES:\n        self._invalid_on_entry = True\n    if self.to_state_type not in self.TO_STATES:\n        self._invalid_on_entry = True\n\n    if self._invalid_on_entry is None:\n        self._invalid_on_entry = await self.invalid_transition()\n    return self._invalid_on_entry\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.fizzled","title":"fizzled async","text":"

Determines if a rule is fizzled and side-effects need to be reverted.

Rules are fizzled if the transition was valid on entry (thus firing self.before_transition) but is invalid upon exiting the governed context, most likely because another rule mutated the transition.

Returns:

Type Description bool

True if the rule is fizzled, False otherwise.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def fizzled(self) -> bool:\n\"\"\"\n    Determines if a rule is fizzled and side-effects need to be reverted.\n\n    Rules are fizzled if the transitions were valid on entry (thus firing\n    `self.before_transition`) but are invalid upon exiting the governed context,\n    most likely caused by another rule mutating the transition.\n\n    Returns:\n        True if the rule is fizzled, False otherwise.\n    \"\"\"\n\n    if self._invalid_on_entry:\n        return False\n    return await self.invalid_transition()\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.invalid_transition","title":"invalid_transition async","text":"

Determines if the transition proposed by the OrchestrationContext is invalid.

If the OrchestrationContext is attempting to manage a transition with this rule that differs from the transition the rule was instantiated with, the transition is considered to be invalid. Depending on the context, a rule with an invalid transition is either \"invalid\" or \"fizzled\".

Returns:

Type Description bool

True if the transition is invalid, False otherwise.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def invalid_transition(self) -> bool:\n\"\"\"\n    Determines if the transition proposed by the `OrchestrationContext` is invalid.\n\n    If the `OrchestrationContext` is attempting to manage a transition with this\n    rule that differs from the transition the rule was instantiated with, the\n    transition is considered to be invalid. Depending on the context, a rule with an\n    invalid transition is either \"invalid\" or \"fizzled\".\n\n    Returns:\n        True if the transition is invalid, False otherwise.\n    \"\"\"\n\n    initial_state_type = self.context.initial_state_type\n    proposed_state_type = self.context.proposed_state_type\n    return (self.from_state_type != initial_state_type) or (\n        self.to_state_type != proposed_state_type\n    )\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.reject_transition","title":"reject_transition async","text":"

Rejects a proposed transition before the transition is validated.

This method will reject a proposed transition, mutating the proposed state to the provided state. A reason for rejecting the transition is also passed on to the OrchestrationContext. Rules that reject the transition will not fizzle, despite the proposed state type changing.

Parameters:

Name Type Description Default state Optional[states.State]

The new proposed state. If None, the current run state will be returned in the result instead.

required reason str

The reason for rejecting the transition

required Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def reject_transition(self, state: Optional[states.State], reason: str):\n\"\"\"\n    Rejects a proposed transition before the transition is validated.\n\n    This method will reject a proposed transition, mutating the proposed state to\n    the provided `state`. A reason for rejecting the transition is also passed on\n    to the `OrchestrationContext`. Rules that reject the transition will not fizzle,\n    despite the proposed state type changing.\n\n    Args:\n        state: The new proposed state. If `None`, the current run state will be\n            returned in the result instead.\n        reason: The reason for rejecting the transition\n    \"\"\"\n\n    # don't run if the transition is already validated\n    if self.context.validated_state:\n        raise RuntimeError(\"The transition is already validated\")\n\n    # the current state will be used if a new one is not provided\n    if state is None:\n        if self.from_state_type is None:\n            raise OrchestrationError(\n                \"The current run has no state; this transition cannot be \"\n                \"rejected without providing a new state.\"\n            )\n        self.to_state_type = None\n        self.context.proposed_state = None\n    else:\n        # a rule that mutates state should not fizzle itself\n        self.to_state_type = state.type\n        self.context.proposed_state = state\n\n    self.context.response_status = SetStateStatus.REJECT\n    self.context.response_details = StateRejectDetails(reason=reason)\n
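
A hedged, minimal sketch of calling this method from a rule's before_transition hook; the some_condition guard is a hypothetical helper, and the states.Scheduled() constructor is assumed from prefect.server.schemas.states.

async def before_transition(self, initial_state, proposed_state, context):\n    # hypothetical guard: send the run back to SCHEDULED instead of letting it proceed\n    if some_condition(context):\n        await self.reject_transition(\n            state=states.Scheduled(),\n            reason=\"not eligible to run yet\",\n        )\n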
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.delay_transition","title":"delay_transition async","text":"

Delays a proposed transition before the transition is validated.

This method will delay a proposed transition, setting the proposed state to None, signaling to the OrchestrationContext that no state should be written to the database. The number of seconds a transition should be delayed is passed to the OrchestrationContext. A reason for delaying the transition is also provided. Rules that delay the transition will not fizzle, despite the proposed state type changing.

Parameters:

Name Type Description Default delay_seconds int

The number of seconds the transition should be delayed

required reason str

The reason for delaying the transition

required Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def delay_transition(\n    self,\n    delay_seconds: int,\n    reason: str,\n):\n\"\"\"\n    Delays a proposed transition before the transition is validated.\n\n    This method will delay a proposed transition, setting the proposed state to\n    `None`, signaling to the `OrchestrationContext` that no state should be\n    written to the database. The number of seconds a transition should be delayed is\n    passed to the `OrchestrationContext`. A reason for delaying the transition is\n    also provided. Rules that delay the transition will not fizzle, despite the\n    proposed state type changing.\n\n    Args:\n        delay_seconds: The number of seconds the transition should be delayed\n        reason: The reason for delaying the transition\n    \"\"\"\n\n    # don't run if the transition is already validated\n    if self.context.validated_state:\n        raise RuntimeError(\"The transition is already validated\")\n\n    # a rule that mutates state should not fizzle itself\n    self.to_state_type = None\n    self.context.proposed_state = None\n    self.context.response_status = SetStateStatus.WAIT\n    self.context.response_details = StateWaitDetails(\n        delay_seconds=delay_seconds, reason=reason\n    )\n
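
A hedged sketch of delaying a transition from before_transition; slots_available is a hypothetical helper used only for illustration.

async def before_transition(self, initial_state, proposed_state, context):\n    # hypothetical back-off: ask the client to propose the transition again later\n    if not slots_available(context):\n        await self.delay_transition(\n            delay_seconds=30,\n            reason=\"concurrency limit reached\",\n        )\n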
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.abort_transition","title":"abort_transition async","text":"

Aborts a proposed transition before the transition is validated.

This method will abort a proposed transition, expecting no further action to occur for this run. The proposed state is set to None, signaling to the OrchestrationContext that no state should be written to the database. A reason for aborting the transition is also provided. Rules that abort the transition will not fizzle, despite the proposed state type changing.

Parameters:

Name Type Description Default reason str

The reason for aborting the transition

required Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def abort_transition(self, reason: str):\n\"\"\"\n    Aborts a proposed transition before the transition is validated.\n\n    This method will abort a proposed transition, expecting no further action to\n    occur for this run. The proposed state is set to `None`, signaling to the\n    `OrchestrationContext` that no state should be written to the database. A\n    reason for aborting the transition is also provided. Rules that abort the\n    transition will not fizzle, despite the proposed state type changing.\n\n    Args:\n        reason: The reason for aborting the transition\n    \"\"\"\n\n    # don't run if the transition is already validated\n    if self.context.validated_state:\n        raise RuntimeError(\"The transition is already validated\")\n\n    # a rule that mutates state should not fizzle itself\n    self.to_state_type = None\n    self.context.proposed_state = None\n    self.context.response_status = SetStateStatus.ABORT\n    self.context.response_details = StateAbortDetails(reason=reason)\n
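
A hedged sketch of aborting a transition from before_transition; run_is_orphaned is a hypothetical predicate used only for illustration.

async def before_transition(self, initial_state, proposed_state, context):\n    # hypothetical hard stop: no further action should be taken for this run\n    if run_is_orphaned(context):\n        await self.abort_transition(reason=\"the run no longer exists\")\n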
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.rename_state","title":"rename_state async","text":"

Sets the \"name\" attribute on a proposed state.

The name of a state is an annotation intended to provide rich, human-readable context for how a run is progressing. This method only updates the name and not the canonical state TYPE, and will not fizzle or invalidate any other rules that might govern this state transition.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def rename_state(self, state_name):\n\"\"\"\n    Sets the \"name\" attribute on a proposed state.\n\n    The name of a state is an annotation intended to provide rich, human-readable\n    context for how a run is progressing. This method only updates the name and not\n    the canonical state TYPE, and will not fizzle or invalidate any other rules\n    that might govern this state transition.\n    \"\"\"\n    if self.context.proposed_state is not None:\n        self.context.proposed_state.name = state_name\n
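
A hedged sketch of annotating a proposed state without changing its canonical type; the AwaitingRetry name and the is_retry helper are illustrative assumptions.

async def before_transition(self, initial_state, proposed_state, context):\n    # hypothetical: label a retrying run while keeping the proposed state type\n    if is_retry(context):\n        await self.rename_state(\"AwaitingRetry\")\n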
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseOrchestrationRule.update_context_parameters","title":"update_context_parameters async","text":"

Updates the \"parameters\" dictionary attribute with the specified key-value pair.

This mechanism streamlines passing messages and information between orchestration rules when necessary, and is simpler and more ephemeral than message-passing via the database or some other side-effect. It can be used to break up large rules for ease of testing or comprehension, but note that any rules coupled this way (or any other way) are no longer independent, and the order in which they appear in the orchestration policy will matter.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def update_context_parameters(self, key, value):\n\"\"\"\n    Updates the \"parameters\" dictionary attribute with the specified key-value pair.\n\n    This mechanism streamlines the process of passing messages and information\n    between orchestration rules if necessary and is simpler and more ephemeral than\n    message-passing via the database or some other side-effect. This mechanism can\n    be used to break up large rules for ease of testing or comprehension, but note\n    that any rules coupled this way (or any other way) are no longer independent and\n    the order in which they appear in the orchestration policy priority will matter.\n    \"\"\"\n\n    self.context.parameters.update({key: value})\n
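
A hedged sketch of stashing a value for a later rule in the same policy to read; the key name and value are illustrative assumptions.

async def before_transition(self, initial_state, proposed_state, context):\n    # pass a value to downstream rules via the shared parameters dictionary\n    await self.update_context_parameters(\"retry-count\", 3)\n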
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform","title":"BaseUniversalTransform","text":"

Bases: contextlib.AbstractAsyncContextManager

An abstract base class used to implement privileged bookkeeping logic.

Warning

In almost all cases, use the BaseOrchestrationRule base class instead.

Unlike the orchestration rules implemented with the BaseOrchestrationRule ABC, universal transforms are not stateful, and they fire their before- and after-transition hooks on every state transition unless the proposed state is None, indicating that no state should be written to the database. Because there are no guardrails in place to prevent directly mutating state or other parts of the orchestration context, universal transforms should be used with care.

Attributes:

Name Type Description FROM_STATES Iterable

for compatibility with BaseOrchestrationPolicy

TO_STATES Iterable

for compatibility with BaseOrchestrationPolicy

context

the orchestration context

from_state_type

the state type a run is currently in

to_state_type

the intended proposed state type prior to any orchestration

Parameters:

Name Type Description Default context OrchestrationContext

A FlowOrchestrationContext or TaskOrchestrationContext that is passed between transforms

required Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
class BaseUniversalTransform(contextlib.AbstractAsyncContextManager):\n\"\"\"\n    An abstract base class used to implement privileged bookkeeping logic.\n\n    Warning:\n        In almost all cases, use the `BaseOrchestrationRule` base class instead.\n\n    Beyond the orchestration rules implemented with the `BaseOrchestrationRule` ABC,\n    Universal transforms are not stateful, and fire their before- and after- transition\n    hooks on every state transition unless the proposed state is `None`, indicating that\n    no state should be written to the database. Because there are no guardrails in place\n    to prevent directly mutating state or other parts of the orchestration context,\n    universal transforms should only be used with care.\n\n    Attributes:\n        FROM_STATES: for compatibility with `BaseOrchestrationPolicy`\n        TO_STATES: for compatibility with `BaseOrchestrationPolicy`\n        context: the orchestration context\n        from_state_type: the state type a run is currently in\n        to_state_type: the intended proposed state type prior to any orchestration\n\n    Args:\n        context: A `FlowOrchestrationContext` or `TaskOrchestrationContext` that is\n            passed between transforms\n    \"\"\"\n\n    # `BaseUniversalTransform` will always fire on non-null transitions\n    FROM_STATES: Iterable = ALL_ORCHESTRATION_STATES\n    TO_STATES: Iterable = ALL_ORCHESTRATION_STATES\n\n    def __init__(\n        self,\n        context: OrchestrationContext,\n        from_state_type: Optional[states.StateType],\n        to_state_type: Optional[states.StateType],\n    ):\n        self.context = context\n        self.from_state_type = from_state_type\n        self.to_state_type = to_state_type\n\n    async def __aenter__(self):\n\"\"\"\n        Enter an async runtime context governed by this transform.\n\n        The `with` statement will bind a governed `OrchestrationContext` to the target\n        specified by the `as` clause. If the transition proposed by the\n        `OrchestrationContext` has been nullified on entry and `context.proposed_state`\n        is `None`, entering this context will do nothing. Otherwise\n        `self.before_transition` will fire.\n        \"\"\"\n\n        await self.before_transition(self.context)\n        self.context.rule_signature.append(str(self.__class__))\n        return self.context\n\n    async def __aexit__(\n        self,\n        exc_type: Optional[Type[BaseException]],\n        exc_val: Optional[BaseException],\n        exc_tb: Optional[TracebackType],\n    ) -> None:\n\"\"\"\n        Exit the async runtime context governed by this transform.\n\n        If the transition has been nullified or errorred upon exiting this transforms's context,\n        nothing happens. 
Otherwise, `self.after_transition` will fire on every non-null\n        proposed state.\n        \"\"\"\n\n        if not self.exception_in_transition():\n            await self.after_transition(self.context)\n            self.context.finalization_signature.append(str(self.__class__))\n\n    async def before_transition(self, context) -> None:\n\"\"\"\n        Implements a hook that fires before a state is committed to the database.\n\n        Args:\n            context: the `OrchestrationContext` that contains transition details\n\n        Returns:\n            None\n        \"\"\"\n\n    async def after_transition(self, context) -> None:\n\"\"\"\n        Implements a hook that can fire after a state is committed to the database.\n\n        Args:\n            context: the `OrchestrationContext` that contains transition details\n\n        Returns:\n            None\n        \"\"\"\n\n    def nullified_transition(self) -> bool:\n\"\"\"\n        Determines if the transition has been nullified.\n\n        Transitions are nullified if the proposed state is `None`, indicating that\n        nothing should be written to the database.\n\n        Returns:\n            True if the transition is nullified, False otherwise.\n        \"\"\"\n\n        return self.context.proposed_state is None\n\n    def exception_in_transition(self) -> bool:\n\"\"\"\n        Determines if the transition has encountered an exception.\n\n        Returns:\n            True if the transition is encountered an exception, False otherwise.\n        \"\"\"\n\n        return self.context.orchestration_error is not None\n
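
A hedged, minimal sketch of a universal transform subclass; the class name and the bookkeeping it performs are illustrative assumptions only.

class RecordTransitionMetrics(BaseUniversalTransform):\n    async def before_transition(self, context) -> None:\n        if self.nullified_transition():\n            # nothing will be written to the database, so skip the bookkeeping\n            return\n        # hypothetical bookkeeping performed before the state is committed\n        ...\n\n    async def after_transition(self, context) -> None:\n        if self.nullified_transition() or self.exception_in_transition():\n            return\n        # hypothetical bookkeeping performed after the state is committed\n        ...\n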
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.before_transition","title":"before_transition async","text":"

Implements a hook that fires before a state is committed to the database.

Parameters:

Name Type Description Default context

the OrchestrationContext that contains transition details

required

Returns:

Type Description None

None

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def before_transition(self, context) -> None:\n\"\"\"\n    Implements a hook that fires before a state is committed to the database.\n\n    Args:\n        context: the `OrchestrationContext` that contains transition details\n\n    Returns:\n        None\n    \"\"\"\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.after_transition","title":"after_transition async","text":"

Implements a hook that can fire after a state is committed to the database.

Parameters:

Name Type Description Default context

the OrchestrationContext that contains transition details

required

Returns:

Type Description None

None

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
async def after_transition(self, context) -> None:\n\"\"\"\n    Implements a hook that can fire after a state is committed to the database.\n\n    Args:\n        context: the `OrchestrationContext` that contains transition details\n\n    Returns:\n        None\n    \"\"\"\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.nullified_transition","title":"nullified_transition","text":"

Determines if the transition has been nullified.

Transitions are nullified if the proposed state is None, indicating that nothing should be written to the database.

Returns:

Type Description bool

True if the transition is nullified, False otherwise.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
def nullified_transition(self) -> bool:\n\"\"\"\n    Determines if the transition has been nullified.\n\n    Transitions are nullified if the proposed state is `None`, indicating that\n    nothing should be written to the database.\n\n    Returns:\n        True if the transition is nullified, False otherwise.\n    \"\"\"\n\n    return self.context.proposed_state is None\n
"},{"location":"api-ref/server/orchestration/rules/#prefect.server.orchestration.rules.BaseUniversalTransform.exception_in_transition","title":"exception_in_transition","text":"

Determines if the transition has encountered an exception.

Returns:

Type Description bool

True if the transition encountered an exception, False otherwise.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/orchestration/rules.py
def exception_in_transition(self) -> bool:\n\"\"\"\n    Determines if the transition has encountered an exception.\n\n    Returns:\n        True if the transition is encountered an exception, False otherwise.\n    \"\"\"\n\n    return self.context.orchestration_error is not None\n
"},{"location":"api-ref/server/schemas/actions/","title":"server.schemas.actions","text":""},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions","title":"prefect.server.schemas.actions","text":"

Reduced schemas for accepting API actions.

"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactCreate","title":"ArtifactCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create an artifact.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass ArtifactCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create an artifact.\"\"\"\n\n    key: Optional[str] = FieldFrom(schemas.core.Artifact)\n    type: Optional[str] = FieldFrom(schemas.core.Artifact)\n    description: Optional[str] = FieldFrom(schemas.core.Artifact)\n    data: Optional[Union[Dict[str, Any], Any]] = FieldFrom(schemas.core.Artifact)\n    metadata_: Optional[Dict[str, str]] = FieldFrom(schemas.core.Artifact)\n    flow_run_id: Optional[UUID] = FieldFrom(schemas.core.Artifact)\n    task_run_id: Optional[UUID] = FieldFrom(schemas.core.Artifact)\n\n    _validate_artifact_format = validator(\"key\", allow_reuse=True)(\n        validate_artifact_key\n    )\n
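
A hedged example of constructing this action model with the fields shown above; the key, description, and data values are illustrative assumptions.

artifact = ArtifactCreate(\n    key=\"daily-report\",\n    type=\"markdown\",\n    description=\"summary of the daily run\",\n    data=\"all rows processed\",\n)\n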
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
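
A hedged usage note: for a hypothetical model instance whose fields include SecretStr or SecretBytes values, passing include_secrets=True reveals them in the serialized output.

redacted = model.json()                      # secret fields are obfuscated\nrevealed = model.json(include_secrets=True)  # secret fields are fully revealed\n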
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
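
A hedged sketch of using this classmethod to derive a narrower model; the subclass name and field selection are illustrative assumptions.

# hypothetical: keep only the key and type fields from ArtifactCreate\nArtifactKeyAndType = ArtifactCreate.subclass(\n    name=\"ArtifactKeyAndType\",\n    include_fields=[\"key\", \"type\"],\n)\n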
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactUpdate","title":"ArtifactUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update an artifact.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass ArtifactUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update an artifact.\"\"\"\n\n    data: Optional[Union[Dict[str, Any], Any]] = FieldFrom(schemas.core.Artifact)\n    description: Optional[str] = FieldFrom(schemas.core.Artifact)\n    metadata_: Optional[Dict[str, str]] = FieldFrom(schemas.core.Artifact)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ArtifactUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentCreate","title":"BlockDocumentCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a block document.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass BlockDocumentCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a block document.\"\"\"\n\n    name: Optional[str] = FieldFrom(schemas.core.BlockDocument)\n    data: dict = FieldFrom(schemas.core.BlockDocument)\n    block_schema_id: UUID = FieldFrom(schemas.core.BlockDocument)\n    block_type_id: UUID = FieldFrom(schemas.core.BlockDocument)\n    is_anonymous: bool = FieldFrom(schemas.core.BlockDocument)\n\n    _validate_name_format = validator(\"name\", allow_reuse=True)(\n        validate_block_document_name\n    )\n\n    @root_validator\n    def validate_name_is_present_if_not_anonymous(cls, values):\n        # TODO: We should find an elegant way to reuse this logic from the origin model\n        if not values.get(\"is_anonymous\") and not values.get(\"name\"):\n            raise ValueError(\"Names must be provided for block documents.\")\n        return values\n
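
A hedged example showing the validator above in action: a non-anonymous block document must be given a name (the schema and type IDs are hypothetical placeholders).

doc = BlockDocumentCreate(\n    name=\"my-credentials\",\n    data={\"token\": \"...\"},\n    block_schema_id=block_schema_id,  # hypothetical UUID\n    block_type_id=block_type_id,      # hypothetical UUID\n    is_anonymous=False,\n)\n# omitting name while is_anonymous=False raises a ValueError\n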
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentReferenceCreate","title":"BlockDocumentReferenceCreate","text":"

Bases: ActionBaseModel

Data used to create a block document reference.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass BlockDocumentReferenceCreate(ActionBaseModel):\n\"\"\"Data used to create block document reference.\"\"\"\n\n    id: UUID = FieldFrom(schemas.core.BlockDocumentReference)\n    parent_block_document_id: UUID = FieldFrom(schemas.core.BlockDocumentReference)\n    reference_block_document_id: UUID = FieldFrom(schemas.core.BlockDocumentReference)\n    name: str = FieldFrom(schemas.core.BlockDocumentReference)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentReferenceCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentReferenceCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentUpdate","title":"BlockDocumentUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a block document.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass BlockDocumentUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a block document.\"\"\"\n\n    block_schema_id: Optional[UUID] = Field(\n        default=None, description=\"A block schema ID\"\n    )\n    data: dict = FieldFrom(schemas.core.BlockDocument)\n    merge_existing_data: bool = True\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockDocumentUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockSchemaCreate","title":"BlockSchemaCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a block schema.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass BlockSchemaCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a block schema.\"\"\"\n\n    fields: dict = FieldFrom(schemas.core.BlockSchema)\n    block_type_id: Optional[UUID] = FieldFrom(schemas.core.BlockSchema)\n    capabilities: List[str] = FieldFrom(schemas.core.BlockSchema)\n    version: str = FieldFrom(schemas.core.BlockSchema)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockSchemaCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockSchemaCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeCreate","title":"BlockTypeCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a block type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass BlockTypeCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a block type.\"\"\"\n\n    name: str = FieldFrom(schemas.core.BlockType)\n    slug: str = FieldFrom(schemas.core.BlockType)\n    logo_url: Optional[schemas.core.HttpUrl] = FieldFrom(schemas.core.BlockType)\n    documentation_url: Optional[schemas.core.HttpUrl] = FieldFrom(\n        schemas.core.BlockType\n    )\n    description: Optional[str] = FieldFrom(schemas.core.BlockType)\n    code_example: Optional[str] = FieldFrom(schemas.core.BlockType)\n\n    # validators\n    _validate_slug_format = validator(\"slug\", allow_reuse=True)(\n        validate_block_type_slug\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeUpdate","title":"BlockTypeUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a block type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass BlockTypeUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a block type.\"\"\"\n\n    logo_url: Optional[schemas.core.HttpUrl] = FieldFrom(schemas.core.BlockType)\n    documentation_url: Optional[schemas.core.HttpUrl] = FieldFrom(\n        schemas.core.BlockType\n    )\n    description: Optional[str] = FieldFrom(schemas.core.BlockType)\n    code_example: Optional[str] = FieldFrom(schemas.core.BlockType)\n\n    @classmethod\n    def updatable_fields(cls) -> set:\n        return get_class_fields_only(cls)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.BlockTypeUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

    name (str, optional): a name for the subclass. Default: None.
    include_fields (List[str], optional): fields to include. Default: None.
    exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

    BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitCreate","title":"ConcurrencyLimitCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a concurrency limit.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass ConcurrencyLimitCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a concurrency limit.\"\"\"\n\n    tag: str = FieldFrom(schemas.core.ConcurrencyLimit)\n    concurrency_limit: int = FieldFrom(schemas.core.ConcurrencyLimit)\n
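A minimal sketch, assuming the import path in this reference; the tag and limit are illustrative values.

from prefect.server.schemas.actions import ConcurrencyLimitCreate

# Allow at most 10 concurrent task runs that carry the "database" tag.
limit = ConcurrencyLimitCreate(tag="database", concurrency_limit=10)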
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.ConcurrencyLimitCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

    name (str, optional): a name for the subclass. Default: None.
    include_fields (List[str], optional): fields to include. Default: None.
    exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

    BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate","title":"DeploymentCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a deployment.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@experimental_field(\n    \"work_pool_name\",\n    group=\"work_pools\",\n    when=lambda x: x is not None,\n)\n@copy_model_fields\nclass DeploymentCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a deployment.\"\"\"\n\n    @root_validator(pre=True)\n    def remove_old_fields(cls, values):\n        # 2.7.7 removed worker_pool_queue_id in lieu of worker_pool_name and\n        # worker_pool_queue_name. Those fields were later renamed to work_pool_name\n        # and work_queue_name. This validator removes old fields provided\n        # by older clients to avoid 422 errors.\n        values_copy = copy(values)\n        worker_pool_queue_id = values_copy.pop(\"worker_pool_queue_id\", None)\n        worker_pool_name = values_copy.pop(\"worker_pool_name\", None)\n        worker_pool_queue_name = values_copy.pop(\"worker_pool_queue_name\", None)\n        work_pool_queue_name = values_copy.pop(\"work_pool_queue_name\", None)\n        if worker_pool_queue_id:\n            warnings.warn(\n                (\n                    \"`worker_pool_queue_id` is no longer supported for creating \"\n                    \"deployments. Please use `work_pool_name` and \"\n                    \"`work_queue_name` instead.\"\n                ),\n                UserWarning,\n            )\n        if worker_pool_name or worker_pool_queue_name or work_pool_queue_name:\n            warnings.warn(\n                (\n                    \"`worker_pool_name`, `worker_pool_queue_name`, and \"\n                    \"`work_pool_name` are\"\n                    \"no longer supported for creating \"\n                    \"deployments. Please use `work_pool_name` and \"\n                    \"`work_queue_name` instead.\"\n                ),\n                UserWarning,\n            )\n        return values_copy\n\n    name: str = FieldFrom(schemas.core.Deployment)\n    flow_id: UUID = FieldFrom(schemas.core.Deployment)\n    is_schedule_active: Optional[bool] = FieldFrom(schemas.core.Deployment)\n    parameters: Dict[str, Any] = FieldFrom(schemas.core.Deployment)\n    tags: List[str] = FieldFrom(schemas.core.Deployment)\n    pull_steps: Optional[List[dict]] = FieldFrom(schemas.core.Deployment)\n\n    manifest_path: Optional[str] = FieldFrom(schemas.core.Deployment)\n    work_queue_name: Optional[str] = FieldFrom(schemas.core.Deployment)\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the deployment's work pool.\",\n        example=\"my-work-pool\",\n    )\n    storage_document_id: Optional[UUID] = FieldFrom(schemas.core.Deployment)\n    infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.Deployment)\n    schedule: Optional[schemas.schedules.SCHEDULE_TYPES] = FieldFrom(\n        schemas.core.Deployment\n    )\n    description: Optional[str] = FieldFrom(schemas.core.Deployment)\n    parameter_openapi_schema: Optional[Dict[str, Any]] = FieldFrom(\n        schemas.core.Deployment\n    )\n    path: Optional[str] = FieldFrom(schemas.core.Deployment)\n    version: Optional[str] = FieldFrom(schemas.core.Deployment)\n    entrypoint: Optional[str] = FieldFrom(schemas.core.Deployment)\n    infra_overrides: Optional[Dict[str, Any]] = FieldFrom(schemas.core.Deployment)\n\n    def check_valid_configuration(self, base_job_template: dict):\n\"\"\"Check that the combination of base_job_template defaults\n        and infra_overrides conforms to the specified schema.\n        \"\"\"\n        variables_schema = 
deepcopy(base_job_template.get(\"variables\"))\n\n        if variables_schema is not None:\n            # jsonschema considers required fields, even if that field has a default,\n            # to still be required. To get around this we remove the fields from\n            # required if there is a default present.\n            required = variables_schema.get(\"required\")\n            properties = variables_schema.get(\"properties\")\n            if required is not None and properties is not None:\n                for k, v in properties.items():\n                    if \"default\" in v and k in required:\n                        required.remove(k)\n\n            jsonschema.validate(self.infra_overrides, variables_schema)\n
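A minimal sketch of building the request body, assuming the import path in this reference; the flow id is a hypothetical placeholder, and work_pool_name is gated by the work_pools experimental flag at this release, so setting it may emit a warning.

from uuid import uuid4
from prefect.server.schemas.actions import DeploymentCreate

deployment = DeploymentCreate(
    name="nightly-etl",
    flow_id=uuid4(),                 # hypothetical flow id for illustration
    work_pool_name="my-work-pool",   # experimental at this release
    work_queue_name="default",
    tags=["etl"],
)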
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate.check_valid_configuration","title":"check_valid_configuration","text":"

Check that the combination of base_job_template defaults and infra_overrides conforms to the specified schema.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
def check_valid_configuration(self, base_job_template: dict):\n\"\"\"Check that the combination of base_job_template defaults\n    and infra_overrides conforms to the specified schema.\n    \"\"\"\n    variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n    if variables_schema is not None:\n        # jsonschema considers required fields, even if that field has a default,\n        # to still be required. To get around this we remove the fields from\n        # required if there is a default present.\n        required = variables_schema.get(\"required\")\n        properties = variables_schema.get(\"properties\")\n        if required is not None and properties is not None:\n            for k, v in properties.items():\n                if \"default\" in v and k in required:\n                    required.remove(k)\n\n        jsonschema.validate(self.infra_overrides, variables_schema)\n
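A sketch of how the check behaves, assuming the import path in this reference; the base job template below is hypothetical, and because its single variable has a default it is dropped from the required list before jsonschema validation runs against infra_overrides.

from uuid import uuid4
from prefect.server.schemas.actions import DeploymentCreate

deployment = DeploymentCreate(
    name="nightly-etl",
    flow_id=uuid4(),                              # hypothetical flow id
    infra_overrides={"image": "python:3.11"},
)

# Hypothetical base job template with one variable that has a default.
base_job_template = {
    "variables": {
        "type": "object",
        "required": ["image"],
        "properties": {"image": {"type": "string", "default": "python:3.10"}},
    }
}

deployment.check_valid_configuration(base_job_template)  # raises jsonschema.ValidationError on a mismatch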
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

    name (str, optional): a name for the subclass. Default: None.
    include_fields (List[str], optional): fields to include. Default: None.
    exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

    BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentFlowRunCreate","title":"DeploymentFlowRunCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a flow run from a deployment.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass DeploymentFlowRunCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a flow run from a deployment.\"\"\"\n\n    # FlowRunCreate states must be provided as StateCreate objects\n    state: Optional[StateCreate] = Field(\n        default=None, description=\"The state of the flow run to create\"\n    )\n\n    name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    parameters: dict = FieldFrom(schemas.core.FlowRun)\n    context: dict = FieldFrom(schemas.core.FlowRun)\n    infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n    empirical_policy: schemas.core.FlowRunPolicy = FieldFrom(schemas.core.FlowRun)\n    tags: List[str] = FieldFrom(schemas.core.FlowRun)\n    idempotency_key: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    parent_task_run_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n    work_queue_name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n
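A minimal sketch, assuming the import paths in this reference; the initial state must be supplied as a StateCreate object, and the parameter values here are illustrative.

from prefect.server.schemas.actions import DeploymentFlowRunCreate, StateCreate
from prefect.server.schemas.states import StateType

run = DeploymentFlowRunCreate(
    name="ad-hoc-run",
    parameters={"greeting": "hello"},             # hypothetical deployment parameters
    state=StateCreate(type=StateType.SCHEDULED),
)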
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentFlowRunCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentFlowRunCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

    name (str, optional): a name for the subclass. Default: None.
    include_fields (List[str], optional): fields to include. Default: None.
    exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

    BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate","title":"DeploymentUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a deployment.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@experimental_field(\n    \"work_pool_name\",\n    group=\"work_pools\",\n    when=lambda x: x is not None,\n)\n@copy_model_fields\nclass DeploymentUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a deployment.\"\"\"\n\n    @root_validator(pre=True)\n    def remove_old_fields(cls, values):\n        # 2.7.7 removed worker_pool_queue_id in lieu of worker_pool_name and\n        # worker_pool_queue_name. Those fields were later renamed to work_pool_name\n        # and work_queue_name. This validator removes old fields provided\n        # by older clients to avoid 422 errors.\n        values_copy = copy(values)\n        worker_pool_queue_id = values_copy.pop(\"worker_pool_queue_id\", None)\n        worker_pool_name = values_copy.pop(\"worker_pool_name\", None)\n        worker_pool_queue_name = values_copy.pop(\"worker_pool_queue_name\", None)\n        work_pool_queue_name = values_copy.pop(\"work_pool_queue_name\", None)\n        if worker_pool_queue_id:\n            warnings.warn(\n                (\n                    \"`worker_pool_queue_id` is no longer supported for updating \"\n                    \"deployments. Please use `work_pool_name` and \"\n                    \"`work_queue_name` instead.\"\n                ),\n                UserWarning,\n            )\n        if worker_pool_name or worker_pool_queue_name or work_pool_queue_name:\n            warnings.warn(\n                (\n                    \"`worker_pool_name`, `worker_pool_queue_name`, and \"\n                    \"`work_pool_name` are\"\n                    \"no longer supported for creating \"\n                    \"deployments. Please use `work_pool_name` and \"\n                    \"`work_queue_name` instead.\"\n                ),\n                UserWarning,\n            )\n        return values_copy\n\n    version: Optional[str] = FieldFrom(schemas.core.Deployment)\n    schedule: Optional[schemas.schedules.SCHEDULE_TYPES] = FieldFrom(\n        schemas.core.Deployment\n    )\n    description: Optional[str] = FieldFrom(schemas.core.Deployment)\n    is_schedule_active: bool = FieldFrom(schemas.core.Deployment)\n    parameters: Dict[str, Any] = FieldFrom(schemas.core.Deployment)\n    tags: List[str] = FieldFrom(schemas.core.Deployment)\n    work_queue_name: Optional[str] = FieldFrom(schemas.core.Deployment)\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the deployment's work pool.\",\n        example=\"my-work-pool\",\n    )\n    path: Optional[str] = FieldFrom(schemas.core.Deployment)\n    infra_overrides: Optional[Dict[str, Any]] = FieldFrom(schemas.core.Deployment)\n    entrypoint: Optional[str] = FieldFrom(schemas.core.Deployment)\n    manifest_path: Optional[str] = FieldFrom(schemas.core.Deployment)\n    storage_document_id: Optional[UUID] = FieldFrom(schemas.core.Deployment)\n    infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.Deployment)\n\n    def check_valid_configuration(self, base_job_template: dict):\n\"\"\"Check that the combination of base_job_template defaults\n        and infra_overrides conforms to the specified schema.\n        \"\"\"\n        variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n        if variables_schema is not None:\n            # jsonschema considers required fields, even if that field has a default,\n            # to still be required. 
To get around this we remove the fields from\n            # required if there is a default present.\n            required = variables_schema.get(\"required\")\n            properties = variables_schema.get(\"properties\")\n            if required is not None and properties is not None:\n                for k, v in properties.items():\n                    if \"default\" in v and k in required:\n                        required.remove(k)\n\n        if variables_schema is not None:\n            jsonschema.validate(self.infra_overrides, variables_schema)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate.check_valid_configuration","title":"check_valid_configuration","text":"

Check that the combination of base_job_template defaults and infra_overrides conforms to the specified schema.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
def check_valid_configuration(self, base_job_template: dict):\n\"\"\"Check that the combination of base_job_template defaults\n    and infra_overrides conforms to the specified schema.\n    \"\"\"\n    variables_schema = deepcopy(base_job_template.get(\"variables\"))\n\n    if variables_schema is not None:\n        # jsonschema considers required fields, even if that field has a default,\n        # to still be required. To get around this we remove the fields from\n        # required if there is a default present.\n        required = variables_schema.get(\"required\")\n        properties = variables_schema.get(\"properties\")\n        if required is not None and properties is not None:\n            for k, v in properties.items():\n                if \"default\" in v and k in required:\n                    required.remove(k)\n\n    if variables_schema is not None:\n        jsonschema.validate(self.infra_overrides, variables_schema)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.DeploymentUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

    name (str, optional): a name for the subclass. Default: None.
    include_fields (List[str], optional): fields to include. Default: None.
    exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

    BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowCreate","title":"FlowCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a flow.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass FlowCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a flow.\"\"\"\n\n    name: str = FieldFrom(schemas.core.Flow)\n    tags: List[str] = FieldFrom(schemas.core.Flow)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

    name (str, optional): a name for the subclass. Default: None.
    include_fields (List[str], optional): fields to include. Default: None.
    exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

    BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunCreate","title":"FlowRunCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass FlowRunCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a flow run.\"\"\"\n\n    # FlowRunCreate states must be provided as StateCreate objects\n    state: Optional[StateCreate] = Field(\n        default=None, description=\"The state of the flow run to create\"\n    )\n\n    name: str = FieldFrom(schemas.core.FlowRun)\n    flow_id: UUID = FieldFrom(schemas.core.FlowRun)\n    flow_version: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    parameters: dict = FieldFrom(schemas.core.FlowRun)\n    context: dict = FieldFrom(schemas.core.FlowRun)\n    parent_task_run_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n    infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n    empirical_policy: schemas.core.FlowRunPolicy = FieldFrom(schemas.core.FlowRun)\n    tags: List[str] = FieldFrom(schemas.core.FlowRun)\n    idempotency_key: Optional[str] = FieldFrom(schemas.core.FlowRun)\n\n    # DEPRECATED\n\n    deployment_id: Optional[UUID] = Field(\n        None,\n        description=(\n            \"DEPRECATED: The id of the deployment associated with this flow run, if\"\n            \" available.\"\n        ),\n        deprecated=True,\n    )\n\n    class Config(ActionBaseModel.Config):\n        json_dumps = orjson_dumps_extra_compatible\n
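A minimal sketch, assuming the import paths in this reference; the flow id is a hypothetical placeholder and, as the listing above notes, the initial state is a StateCreate object rather than a full State.

from uuid import uuid4
from prefect.server.schemas.actions import FlowRunCreate, StateCreate
from prefect.server.schemas.states import StateType

flow_run = FlowRunCreate(
    name="manual-run",
    flow_id=uuid4(),                            # hypothetical flow id
    state=StateCreate(type=StateType.PENDING),
    tags=["manual"],
)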
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

    name (str, optional): a name for the subclass. Default: None.
    include_fields (List[str], optional): fields to include. Default: None.
    exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

    BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyCreate","title":"FlowRunNotificationPolicyCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a flow run notification policy.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass FlowRunNotificationPolicyCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a flow run notification policy.\"\"\"\n\n    is_active: bool = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n    state_names: List[str] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n    tags: List[str] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n    block_document_id: UUID = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n    message_template: Optional[str] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n
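A minimal sketch, assuming the import path in this reference; the block document id is a hypothetical placeholder for an existing notification block document.

from uuid import uuid4
from prefect.server.schemas.actions import FlowRunNotificationPolicyCreate

policy = FlowRunNotificationPolicyCreate(
    is_active=True,
    state_names=["Failed", "Crashed"],
    tags=[],
    block_document_id=uuid4(),   # hypothetical id of a notification block document
)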
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

    name (str, optional): a name for the subclass. Default: None.
    include_fields (List[str], optional): fields to include. Default: None.
    exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

    BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyUpdate","title":"FlowRunNotificationPolicyUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a flow run notification policy.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass FlowRunNotificationPolicyUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a flow run notification policy.\"\"\"\n\n    is_active: Optional[bool] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n    state_names: Optional[List[str]] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n    tags: Optional[List[str]] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n    block_document_id: Optional[UUID] = FieldFrom(\n        schemas.core.FlowRunNotificationPolicy\n    )\n    message_template: Optional[str] = FieldFrom(schemas.core.FlowRunNotificationPolicy)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunNotificationPolicyUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

    name (str, optional): a name for the subclass. Default: None.
    include_fields (List[str], optional): fields to include. Default: None.
    exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

    BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunUpdate","title":"FlowRunUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a flow run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass FlowRunUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a flow run.\"\"\"\n\n    name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    flow_version: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    parameters: dict = FieldFrom(schemas.core.FlowRun)\n    empirical_policy: schemas.core.FlowRunPolicy = FieldFrom(schemas.core.FlowRun)\n    tags: List[str] = FieldFrom(schemas.core.FlowRun)\n    infrastructure_pid: Optional[str] = FieldFrom(schemas.core.FlowRun)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowRunUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

    name (str, optional): a name for the subclass. Default: None.
    include_fields (List[str], optional): fields to include. Default: None.
    exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

    BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowUpdate","title":"FlowUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a flow.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass FlowUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a flow.\"\"\"\n\n    tags: List[str] = FieldFrom(schemas.core.Flow)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.FlowUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

    name (str, optional): a name for the subclass. Default: None.
    include_fields (List[str], optional): fields to include. Default: None.
    exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

    BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.LogCreate","title":"LogCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a log.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass LogCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a log.\"\"\"\n\n    name: str = FieldFrom(schemas.core.Log)\n    level: int = FieldFrom(schemas.core.Log)\n    message: str = FieldFrom(schemas.core.Log)\n    timestamp: schemas.core.DateTimeTZ = FieldFrom(schemas.core.Log)\n    flow_run_id: UUID = FieldFrom(schemas.core.Log)\n    task_run_id: Optional[UUID] = FieldFrom(schemas.core.Log)\n
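A minimal sketch, assuming the import path in this reference; the flow run id is a hypothetical placeholder and the level uses the standard-library logging scale.

from datetime import datetime, timezone
from uuid import uuid4
from prefect.server.schemas.actions import LogCreate

log = LogCreate(
    name="prefect.flow_runs",
    level=20,                              # stdlib logging level for INFO
    message="Flow run started",
    timestamp=datetime.now(timezone.utc),  # timestamps must be timezone-aware
    flow_run_id=uuid4(),                   # hypothetical flow run id
)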
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.LogCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.LogCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

    name (str, optional): a name for the subclass. Default: None.
    include_fields (List[str], optional): fields to include. Default: None.
    exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

    BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.SavedSearchCreate","title":"SavedSearchCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a saved search.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass SavedSearchCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a saved search.\"\"\"\n\n    name: str = FieldFrom(schemas.core.SavedSearch)\n    filters: List[schemas.core.SavedSearchFilter] = FieldFrom(schemas.core.SavedSearch)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.SavedSearchCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.SavedSearchCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

    name (str, optional): a name for the subclass. Default: None.
    include_fields (List[str], optional): fields to include. Default: None.
    exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

    BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.StateCreate","title":"StateCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a new state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass StateCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a new state.\"\"\"\n\n    type: schemas.states.StateType = FieldFrom(schemas.states.State)\n    name: Optional[str] = FieldFrom(schemas.states.State)\n    message: Optional[str] = FieldFrom(schemas.states.State)\n    data: Optional[Any] = FieldFrom(schemas.states.State)\n    state_details: schemas.states.StateDetails = FieldFrom(schemas.states.State)\n\n    # DEPRECATED\n\n    timestamp: Optional[schemas.core.DateTimeTZ] = Field(\n        default=None,\n        repr=False,\n        ignored=True,\n    )\n    id: Optional[UUID] = Field(default=None, repr=False, ignored=True)\n
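A minimal sketch, assuming the import paths in this reference; only the state type is required, and the timestamp and id fields are deprecated and marked ignored.

from prefect.server.schemas.actions import StateCreate
from prefect.server.schemas.states import StateType

state = StateCreate(type=StateType.COMPLETED, message="Finished successfully")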
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.StateCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.StateCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

    name (str, optional): a name for the subclass. Default: None.
    include_fields (List[str], optional): fields to include. Default: None.
    exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

    BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunCreate","title":"TaskRunCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a task run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass TaskRunCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a task run\"\"\"\n\n    # TaskRunCreate states must be provided as StateCreate objects\n    state: Optional[StateCreate] = Field(\n        default=None, description=\"The state of the task run to create\"\n    )\n\n    name: str = FieldFrom(schemas.core.TaskRun)\n    flow_run_id: UUID = FieldFrom(schemas.core.TaskRun)\n    task_key: str = FieldFrom(schemas.core.TaskRun)\n    dynamic_key: str = FieldFrom(schemas.core.TaskRun)\n    cache_key: Optional[str] = FieldFrom(schemas.core.TaskRun)\n    cache_expiration: Optional[schemas.core.DateTimeTZ] = FieldFrom(\n        schemas.core.TaskRun\n    )\n    task_version: Optional[str] = FieldFrom(schemas.core.TaskRun)\n    empirical_policy: schemas.core.TaskRunPolicy = FieldFrom(schemas.core.TaskRun)\n    tags: List[str] = FieldFrom(schemas.core.TaskRun)\n    task_inputs: Dict[\n        str,\n        List[\n            Union[\n                schemas.core.TaskRunResult,\n                schemas.core.Parameter,\n                schemas.core.Constant,\n            ]\n        ],\n    ] = FieldFrom(schemas.core.TaskRun)\n
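A minimal sketch, assuming the import paths in this reference; the ids are hypothetical placeholders, and TaskRunResult (from prefect.server.schemas.core) is used here to record an upstream task-run dependency in task_inputs.

from uuid import uuid4
from prefect.server.schemas.actions import TaskRunCreate
from prefect.server.schemas.core import TaskRunResult

task_run = TaskRunCreate(
    name="transform-0",
    flow_run_id=uuid4(),                                # hypothetical parent flow run id
    task_key="transform",
    dynamic_key="0",
    task_inputs={"data": [TaskRunResult(id=uuid4())]},  # hypothetical upstream dependency
)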
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunUpdate","title":"TaskRunUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a task run

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass TaskRunUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a task run\"\"\"\n\n    name: str = FieldFrom(schemas.core.TaskRun)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.TaskRunUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableCreate","title":"VariableCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a Variable.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass VariableCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a Variable.\"\"\"\n\n    name: str = FieldFrom(schemas.core.Variable)\n    value: str = FieldFrom(schemas.core.Variable)\n    tags: Optional[List[str]] = FieldFrom(schemas.core.Variable)\n\n    # validators\n    _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
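An illustrative use of VariableCreate; the name below simply follows the lowercase-with-underscores style that the name validator expects, and the values mirror the examples used elsewhere in this schema module.

from prefect.server.schemas.actions import VariableCreate

# The name is checked by validate_variable_name; this one follows the
# typical lowercase/underscore convention.
variable = VariableCreate(
    name="my_variable",
    value="my-value",
    tags=["example"],
)
print(variable.json())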
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableUpdate","title":"VariableUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a Variable.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass VariableUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a Variable.\"\"\"\n\n    name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the variable\",\n        example=\"my_variable\",\n        max_length=schemas.core.MAX_VARIABLE_NAME_LENGTH,\n    )\n    value: Optional[str] = Field(\n        default=None,\n        description=\"The value of the variable\",\n        example=\"my-value\",\n        max_length=schemas.core.MAX_VARIABLE_VALUE_LENGTH,\n    )\n    tags: Optional[List[str]] = FieldFrom(schemas.core.Variable)\n\n    # validators\n    _validate_name_format = validator(\"name\", allow_reuse=True)(validate_variable_name)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.VariableUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolCreate","title":"WorkPoolCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a work pool.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass WorkPoolCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a work pool.\"\"\"\n\n    name: str = FieldFrom(schemas.core.WorkPool)\n    description: Optional[str] = FieldFrom(schemas.core.WorkPool)\n    type: str = Field(description=\"The work pool type.\", default=\"prefect-agent\")\n    base_job_template: Dict[str, Any] = FieldFrom(schemas.core.WorkPool)\n    is_paused: bool = FieldFrom(schemas.core.WorkPool)\n    concurrency_limit: Optional[int] = FieldFrom(schemas.core.WorkPool)\n
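A short, illustrative WorkPoolCreate construction with placeholder values; note that type falls back to "prefect-agent" when not supplied, per the field definition above.

from prefect.server.schemas.actions import WorkPoolCreate

# Name, description, and limit are placeholder values.
pool = WorkPoolCreate(
    name="my-work-pool",
    description="Example work pool",
    concurrency_limit=5,
)
# The default work pool type per the schema above.
print(pool.type)  # prefect-agent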
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolUpdate","title":"WorkPoolUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a work pool.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass WorkPoolUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a work pool.\"\"\"\n\n    description: Optional[str] = FieldFrom(schemas.core.WorkPool)\n    is_paused: Optional[bool] = FieldFrom(schemas.core.WorkPool)\n    base_job_template: Optional[Dict[str, Any]] = FieldFrom(schemas.core.WorkPool)\n    concurrency_limit: Optional[int] = FieldFrom(schemas.core.WorkPool)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkPoolUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueCreate","title":"WorkQueueCreate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to create a work queue.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass WorkQueueCreate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to create a work queue.\"\"\"\n\n    name: str = FieldFrom(schemas.core.WorkQueue)\n    description: Optional[str] = FieldFrom(schemas.core.WorkQueue)\n    is_paused: bool = FieldFrom(schemas.core.WorkQueue)\n    concurrency_limit: Optional[int] = FieldFrom(schemas.core.WorkQueue)\n    priority: Optional[int] = Field(\n        default=None,\n        description=(\n            \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n        ),\n    )\n\n    # DEPRECATED\n\n    filter: Optional[schemas.core.QueueFilter] = Field(\n        None,\n        description=\"DEPRECATED: Filter criteria for the work queue.\",\n        deprecated=True,\n    )\n
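An illustrative sketch of the priority semantics on work queues (lower values are higher priority, 1 being the highest), using placeholder queue names.

from prefect.server.schemas.actions import WorkQueueCreate

# Lower priority values are served first; 1 is the highest priority.
urgent = WorkQueueCreate(name="urgent", priority=1)
batch = WorkQueueCreate(name="batch", priority=10)
print(urgent.priority < batch.priority)  # True: "urgent" outranks "batch"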
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueCreate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueCreate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueUpdate","title":"WorkQueueUpdate","text":"

Bases: ActionBaseModel

Data used by the Prefect REST API to update a work queue.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/actions.py
@copy_model_fields\nclass WorkQueueUpdate(ActionBaseModel):\n\"\"\"Data used by the Prefect REST API to update a work queue.\"\"\"\n\n    name: str = FieldFrom(schemas.core.WorkQueue)\n    description: Optional[str] = FieldFrom(schemas.core.WorkQueue)\n    is_paused: bool = FieldFrom(schemas.core.WorkQueue)\n    concurrency_limit: Optional[int] = FieldFrom(schemas.core.WorkQueue)\n    priority: Optional[int] = FieldFrom(schemas.core.WorkQueue)\n    last_polled: Optional[DateTimeTZ] = FieldFrom(schemas.core.WorkQueue)\n\n    # DEPRECATED\n\n    filter: Optional[schemas.core.QueueFilter] = Field(\n        None,\n        description=\"DEPRECATED: Filter criteria for the work queue.\",\n        deprecated=True,\n    )\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueUpdate.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/actions/#prefect.server.schemas.actions.WorkQueueUpdate.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

- name (str): a name for the subclass. Default: None
- include_fields (List[str]): fields to include. Default: None
- exclude_fields (List[str]): fields to exclude. Default: None

Returns:

- BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/core/","title":"server.schemas.core","text":""},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core","title":"prefect.server.schemas.core","text":"

Full schemas of Prefect REST API objects.

"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Agent","title":"Agent","text":"

Bases: ORMBaseModel

An ORM representation of an agent

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class Agent(ORMBaseModel):\n\"\"\"An ORM representation of an agent\"\"\"\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the agent. If a name is not provided, it will be\"\n            \" auto-generated.\"\n        ),\n    )\n    work_queue_id: UUID = Field(\n        default=..., description=\"The work queue with which the agent is associated.\"\n    )\n    last_activity_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The last time this agent polled for work.\"\n    )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockDocument","title":"BlockDocument","text":"

Bases: ORMBaseModel

An ORM representation of a block document.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class BlockDocument(ORMBaseModel):\n\"\"\"An ORM representation of a block document.\"\"\"\n\n    name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The block document's name. Not required for anonymous block documents.\"\n        ),\n    )\n    data: dict = Field(default_factory=dict, description=\"The block document's data\")\n    block_schema_id: UUID = Field(default=..., description=\"A block schema ID\")\n    block_schema: Optional[BlockSchema] = Field(\n        default=None, description=\"The associated block schema\"\n    )\n    block_type_id: UUID = Field(default=..., description=\"A block type ID\")\n    block_type: Optional[BlockType] = Field(\n        default=None, description=\"The associated block type\"\n    )\n    block_document_references: Dict[str, Dict[str, Any]] = Field(\n        default_factory=dict, description=\"Record of the block document's references\"\n    )\n    is_anonymous: bool = Field(\n        default=False,\n        description=(\n            \"Whether the block is anonymous (anonymous blocks are usually created by\"\n            \" Prefect automatically)\"\n        ),\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        # the BlockDocumentCreate subclass allows name=None\n        # and will inherit this validator\n        if v is not None:\n            raise_on_name_with_banned_characters(v)\n        return v\n\n    @root_validator\n    def validate_name_is_present_if_not_anonymous(cls, values):\n        # anonymous blocks may have no name prior to actually being\n        # stored in the database\n        if not values.get(\"is_anonymous\") and not values.get(\"name\"):\n            raise ValueError(\"Names must be provided for block documents.\")\n        return values\n\n    @classmethod\n    async def from_orm_model(\n        cls,\n        session,\n        orm_block_document: \"prefect.server.database.orm_models.ORMBlockDocument\",\n        include_secrets: bool = False,\n    ):\n        data = await orm_block_document.decrypt_data(session=session)\n        # if secrets are not included, obfuscate them based on the schema's\n        # `secret_fields`. Note this walks any nested blocks as well. If the\n        # nested blocks were recovered from named blocks, they will already\n        # be obfuscated, but if nested fields were hardcoded into the parent\n        # blocks data, this is the only opportunity to obfuscate them.\n        if not include_secrets:\n            flat_data = dict_to_flatdict(data)\n            # iterate over the (possibly nested) secret fields\n            # and obfuscate their data\n            for secret_field in orm_block_document.block_schema.fields.get(\n                \"secret_fields\", []\n            ):\n                secret_key = tuple(secret_field.split(\".\"))\n                if flat_data.get(secret_key) is not None:\n                    flat_data[secret_key] = obfuscate_string(flat_data[secret_key])\n                # If a wildcard (*) is in the current secret key path, we take the portion\n                # of the path before the wildcard and compare it to the same level of each\n                # key. 
A match means that the field is nested under the secret key and should\n                # be obfuscated.\n                elif \"*\" in secret_key:\n                    wildcard_index = secret_key.index(\"*\")\n                    for data_key in flat_data.keys():\n                        if secret_key[0:wildcard_index] == data_key[0:wildcard_index]:\n                            flat_data[data_key] = obfuscate(flat_data[data_key])\n            data = flatdict_to_dict(flat_data)\n\n        return cls(\n            id=orm_block_document.id,\n            created=orm_block_document.created,\n            updated=orm_block_document.updated,\n            name=orm_block_document.name,\n            data=data,\n            block_schema_id=orm_block_document.block_schema_id,\n            block_schema=orm_block_document.block_schema,\n            block_type_id=orm_block_document.block_type_id,\n            block_type=orm_block_document.block_type,\n            is_anonymous=orm_block_document.is_anonymous,\n        )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockDocumentReference","title":"BlockDocumentReference","text":"

Bases: ORMBaseModel

An ORM representation of a block document reference.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class BlockDocumentReference(ORMBaseModel):\n\"\"\"An ORM representation of a block document reference.\"\"\"\n\n    parent_block_document_id: UUID = Field(\n        default=..., description=\"ID of block document the reference is nested within\"\n    )\n    parent_block_document: Optional[BlockDocument] = Field(\n        default=None, description=\"The block document the reference is nested within\"\n    )\n    reference_block_document_id: UUID = Field(\n        default=..., description=\"ID of the nested block document\"\n    )\n    reference_block_document: Optional[BlockDocument] = Field(\n        default=None, description=\"The nested block document\"\n    )\n    name: str = Field(\n        default=..., description=\"The name that the reference is nested under\"\n    )\n\n    @root_validator\n    def validate_parent_and_ref_are_different(cls, values):\n        parent_id = values.get(\"parent_block_document_id\")\n        ref_id = values.get(\"reference_block_document_id\")\n        if parent_id and ref_id and parent_id == ref_id:\n            raise ValueError(\n                \"`parent_block_document_id` and `reference_block_document_id` cannot be\"\n                \" the same\"\n            )\n        return values\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockSchema","title":"BlockSchema","text":"

Bases: ORMBaseModel

An ORM representation of a block schema.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class BlockSchema(ORMBaseModel):\n\"\"\"An ORM representation of a block schema.\"\"\"\n\n    checksum: str = Field(default=..., description=\"The block schema's unique checksum\")\n    fields: dict = Field(\n        default_factory=dict, description=\"The block schema's field schema\"\n    )\n    block_type_id: Optional[UUID] = Field(default=..., description=\"A block type ID\")\n    block_type: Optional[BlockType] = Field(\n        default=None, description=\"The associated block type\"\n    )\n    capabilities: List[str] = Field(\n        default_factory=list,\n        description=\"A list of Block capabilities\",\n    )\n    version: str = Field(\n        default=DEFAULT_BLOCK_SCHEMA_VERSION,\n        description=\"Human readable identifier for the block schema\",\n    )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockSchemaReference","title":"BlockSchemaReference","text":"

Bases: ORMBaseModel

An ORM representation of a block schema reference.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class BlockSchemaReference(ORMBaseModel):\n\"\"\"An ORM representation of a block schema reference.\"\"\"\n\n    parent_block_schema_id: UUID = Field(\n        default=..., description=\"ID of block schema the reference is nested within\"\n    )\n    parent_block_schema: Optional[BlockSchema] = Field(\n        default=None, description=\"The block schema the reference is nested within\"\n    )\n    reference_block_schema_id: UUID = Field(\n        default=..., description=\"ID of the nested block schema\"\n    )\n    reference_block_schema: Optional[BlockSchema] = Field(\n        default=None, description=\"The nested block schema\"\n    )\n    name: str = Field(\n        default=..., description=\"The name that the reference is nested under\"\n    )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.BlockType","title":"BlockType","text":"

Bases: ORMBaseModel

An ORM representation of a block type

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class BlockType(ORMBaseModel):\n\"\"\"An ORM representation of a block type\"\"\"\n\n    name: str = Field(default=..., description=\"A block type's name\")\n    slug: str = Field(default=..., description=\"A block type's slug\")\n    logo_url: Optional[HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's logo\"\n    )\n    documentation_url: Optional[HttpUrl] = Field(\n        default=None, description=\"Web URL for the block type's documentation\"\n    )\n    description: Optional[str] = Field(\n        default=None,\n        description=\"A short blurb about the corresponding block's intended use\",\n    )\n    code_example: Optional[str] = Field(\n        default=None,\n        description=\"A code snippet demonstrating use of the corresponding block\",\n    )\n    is_protected: bool = Field(\n        default=False, description=\"Protected block types cannot be modified via API.\"\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        raise_on_name_with_banned_characters(v)\n        return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.ConcurrencyLimit","title":"ConcurrencyLimit","text":"

Bases: ORMBaseModel

An ORM representation of a concurrency limit.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class ConcurrencyLimit(ORMBaseModel):\n\"\"\"An ORM representation of a concurrency limit.\"\"\"\n\n    tag: str = Field(\n        default=..., description=\"A tag the concurrency limit is applied to.\"\n    )\n    concurrency_limit: int = Field(default=..., description=\"The concurrency limit.\")\n    active_slots: List[UUID] = Field(\n        default_factory=list,\n        description=\"A list of active run ids using a concurrency slot\",\n    )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Configuration","title":"Configuration","text":"

Bases: ORMBaseModel

An ORM representation of account info.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class Configuration(ORMBaseModel):\n\"\"\"An ORM representation of account info.\"\"\"\n\n    key: str = Field(default=..., description=\"Account info key\")\n    value: dict = Field(default=..., description=\"Account info\")\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Deployment","title":"Deployment","text":"

Bases: ORMBaseModel

An ORM representation of deployment data.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class Deployment(ORMBaseModel):\n\"\"\"An ORM representation of deployment data.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the deployment.\")\n    version: Optional[str] = Field(\n        default=None, description=\"An optional version for the deployment.\"\n    )\n    description: Optional[str] = Field(\n        default=None, description=\"A description for the deployment.\"\n    )\n    flow_id: UUID = Field(\n        default=..., description=\"The flow id associated with the deployment.\"\n    )\n    schedule: Optional[schedules.SCHEDULE_TYPES] = Field(\n        default=None, description=\"A schedule for the deployment.\"\n    )\n    is_schedule_active: bool = Field(\n        default=True, description=\"Whether or not the deployment schedule is active.\"\n    )\n    infra_overrides: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Overrides to apply to the base infrastructure block at runtime.\",\n    )\n    parameters: Dict[str, Any] = Field(\n        default_factory=dict,\n        description=\"Parameters for flow runs scheduled by the deployment.\",\n    )\n    pull_steps: Optional[List[dict]] = Field(\n        default=None,\n        description=\"Pull steps for cloning and running this deployment.\",\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags for the deployment\",\n        example=[\"tag-1\", \"tag-2\"],\n    )\n    work_queue_name: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The work queue for the deployment. If no work queue is set, work will not\"\n            \" be scheduled.\"\n        ),\n    )\n    parameter_openapi_schema: Optional[Dict[str, Any]] = Field(\n        default=None,\n        description=\"The parameter schema of the flow, including defaults.\",\n    )\n    path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the working directory for the workflow, relative to remote\"\n            \" storage or an absolute path.\"\n        ),\n    )\n    entrypoint: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the entrypoint for the workflow, relative to the `path`.\"\n        ),\n    )\n    manifest_path: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The path to the flow's manifest file, relative to the chosen storage.\"\n        ),\n    )\n    storage_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining storage used for this flow.\",\n    )\n    infrastructure_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining infrastructure to use for flow runs.\",\n    )\n    created_by: Optional[CreatedBy] = Field(\n        default=None,\n        description=\"Optional information about the creator of this deployment.\",\n    )\n    updated_by: Optional[UpdatedBy] = Field(\n        default=None,\n        description=\"Optional information about the updater of this deployment.\",\n    )\n    work_queue_id: UUID = Field(\n        default=None,\n        description=(\n            \"The id of the work pool queue to which this deployment is assigned.\"\n        ),\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        raise_on_name_with_banned_characters(v)\n        return v\n
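A minimal, illustrative Deployment construction with placeholder values; only name and flow_id are required, and the other arguments shown mirror optional fields from the schema above.

from uuid import uuid4

from prefect.server.schemas.core import Deployment

# All values are placeholders for illustration.
deployment = Deployment(
    name="daily-etl",
    flow_id=uuid4(),
    tags=["tag-1", "tag-2"],
    work_queue_name="default",
)
print(deployment.is_schedule_active)  # True by default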
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Flow","title":"Flow","text":"

Bases: ORMBaseModel

An ORM representation of flow data.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class Flow(ORMBaseModel):\n\"\"\"An ORM representation of flow data.\"\"\"\n\n    name: str = Field(\n        default=..., description=\"The name of the flow\", example=\"my-flow\"\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of flow tags\",\n        example=[\"tag-1\", \"tag-2\"],\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        raise_on_name_with_banned_characters(v)\n        return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRun","title":"FlowRun","text":"

Bases: ORMBaseModel

An ORM representation of flow run data.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class FlowRun(ORMBaseModel):\n\"\"\"An ORM representation of flow run data.\"\"\"\n\n    name: str = Field(\n        default_factory=lambda: generate_slug(2),\n        description=(\n            \"The name of the flow run. Defaults to a random slug if not specified.\"\n        ),\n        example=\"my-flow-run\",\n    )\n    flow_id: UUID = Field(default=..., description=\"The id of the flow being run.\")\n    state_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the flow run's current state.\"\n    )\n    deployment_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"The id of the deployment associated with this flow run, if available.\"\n        ),\n    )\n    work_queue_name: Optional[str] = Field(\n        default=None, description=\"The work queue that handled this flow run.\"\n    )\n    flow_version: Optional[str] = Field(\n        default=None,\n        description=\"The version of the flow executed in this flow run.\",\n        example=\"1.0\",\n    )\n    parameters: dict = Field(\n        default_factory=dict, description=\"Parameters for the flow run.\"\n    )\n    idempotency_key: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An optional idempotency key for the flow run. Used to ensure the same flow\"\n            \" run is not created multiple times.\"\n        ),\n    )\n    context: dict = Field(\n        default_factory=dict,\n        description=\"Additional context for the flow run.\",\n        example={\"my_var\": \"my_val\"},\n    )\n    empirical_policy: FlowRunPolicy = Field(\n        default_factory=FlowRunPolicy,\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags on the flow run\",\n        example=[\"tag-1\", \"tag-2\"],\n    )\n    parent_task_run_id: Optional[UUID] = Field(\n        default=None,\n        description=(\n            \"If the flow run is a subflow, the id of the 'dummy' task in the parent\"\n            \" flow used to track subflow state.\"\n        ),\n    )\n\n    state_type: Optional[states.StateType] = Field(\n        default=None, description=\"The type of the current flow run state.\"\n    )\n    state_name: Optional[str] = Field(\n        default=None, description=\"The name of the current flow run state.\"\n    )\n    run_count: int = Field(\n        default=0, description=\"The number of times the flow run was executed.\"\n    )\n    expected_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The flow run's expected start time.\",\n    )\n    next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The next time the flow run is scheduled to start.\",\n    )\n    start_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual start time.\"\n    )\n    end_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual end time.\"\n    )\n    total_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=(\n            \"Total run time. 
If the flow run was executed multiple times, the time of\"\n            \" each run will be summed.\"\n        ),\n    )\n    estimated_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"A real-time estimate of the total run time.\",\n    )\n    estimated_start_time_delta: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"The difference between actual and expected start time.\",\n    )\n    auto_scheduled: bool = Field(\n        default=False,\n        description=\"Whether or not the flow run was automatically scheduled.\",\n    )\n    infrastructure_document_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The block document defining infrastructure to use this flow run.\",\n    )\n    infrastructure_pid: Optional[str] = Field(\n        default=None,\n        description=\"The id of the flow run as returned by an infrastructure block.\",\n    )\n    created_by: Optional[CreatedBy] = Field(\n        default=None,\n        description=\"Optional information about the creator of this flow run.\",\n    )\n    work_queue_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the run's work pool queue.\"\n    )\n\n    # relationships\n    # flow: Flow = None\n    # task_runs: List[\"TaskRun\"] = Field(default_factory=list)\n    state: Optional[states.State] = Field(\n        default=None, description=\"The current state of the flow run.\"\n    )\n    # parent_task_run: \"TaskRun\" = None\n\n    @validator(\"name\", pre=True)\n    def set_name(cls, name):\n        return name or generate_slug(2)\n\n    def __eq__(self, other: Any) -> bool:\n\"\"\"\n        Check for \"equality\" to another flow run schema\n\n        Estimates times are rolling and will always change with repeated queries for\n        a flow run so we ignore them during equality checks.\n        \"\"\"\n        if isinstance(other, FlowRun):\n            exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n            return self.dict(exclude=exclude_fields) == other.dict(\n                exclude=exclude_fields\n            )\n        return super().__eq__(other)\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunNotificationPolicy","title":"FlowRunNotificationPolicy","text":"

Bases: ORMBaseModel

An ORM representation of a flow run notification.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class FlowRunNotificationPolicy(ORMBaseModel):\n\"\"\"An ORM representation of a flow run notification.\"\"\"\n\n    is_active: bool = Field(\n        default=True, description=\"Whether the policy is currently active\"\n    )\n    state_names: List[str] = Field(\n        default=..., description=\"The flow run states that trigger notifications\"\n    )\n    tags: List[str] = Field(\n        default=...,\n        description=\"The flow run tags that trigger notifications (set [] to disable)\",\n    )\n    block_document_id: UUID = Field(\n        default=..., description=\"The block document ID used for sending notifications\"\n    )\n    message_template: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A templatable notification message. Use {braces} to add variables.\"\n            \" Valid variables include:\"\n            f\" {listrepr(sorted(FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS), sep=', ')}\"\n        ),\n        example=(\n            \"Flow run {flow_run_name} with id {flow_run_id} entered state\"\n            \" {flow_run_state_name}.\"\n        ),\n    )\n\n    @validator(\"message_template\")\n    def validate_message_template_variables(cls, v):\n        if v is not None:\n            try:\n                v.format(**{k: \"test\" for k in FLOW_RUN_NOTIFICATION_TEMPLATE_KWARGS})\n            except KeyError as exc:\n                raise ValueError(f\"Invalid template variable provided: '{exc.args[0]}'\")\n        return v\n
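An illustrative policy using the message template example shown in the schema; the state names and the block document id are placeholders, and a template referencing an unknown variable would be rejected by validate_message_template_variables.

from uuid import uuid4

from prefect.server.schemas.core import FlowRunNotificationPolicy

# State names and the block document id are placeholder values.
policy = FlowRunNotificationPolicy(
    state_names=["Failed", "Crashed"],
    tags=[],  # set [] to disable tag filtering, per the field description
    block_document_id=uuid4(),
    message_template=(
        "Flow run {flow_run_name} with id {flow_run_id} entered state"
        " {flow_run_state_name}."
    ),
)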
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunPolicy","title":"FlowRunPolicy","text":"

Bases: PrefectBaseModel

Defines how a flow run should retry.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class FlowRunPolicy(PrefectBaseModel):\n\"\"\"Defines of how a flow run should retry.\"\"\"\n\n    # TODO: Determine how to separate between infrastructure and within-process level\n    #       retries\n    max_retries: int = Field(\n        default=0,\n        description=(\n            \"The maximum number of retries. Field is not used. Please use `retries`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retry_delay_seconds: float = Field(\n        default=0,\n        description=(\n            \"The delay between retries. Field is not used. Please use `retry_delay`\"\n            \" instead.\"\n        ),\n        deprecated=True,\n    )\n    retries: Optional[int] = Field(default=None, description=\"The number of retries.\")\n    retry_delay: Optional[int] = Field(\n        default=None, description=\"The delay time between retries, in seconds.\"\n    )\n    pause_keys: Optional[set] = Field(\n        default_factory=set, description=\"Tracks pauses this run has observed.\"\n    )\n    resuming: Optional[bool] = Field(\n        default=False, description=\"Indicates if this run is resuming from a pause.\"\n    )\n\n    @root_validator\n    def populate_deprecated_fields(cls, values):\n\"\"\"\n        If deprecated fields are provided, populate the corresponding new fields\n        to preserve orchestration behavior.\n        \"\"\"\n        if not values.get(\"retries\", None) and values.get(\"max_retries\", 0) != 0:\n            values[\"retries\"] = values[\"max_retries\"]\n        if (\n            not values.get(\"retry_delay\", None)\n            and values.get(\"retry_delay_seconds\", 0) != 0\n        ):\n            values[\"retry_delay\"] = values[\"retry_delay_seconds\"]\n        return values\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunPolicy.populate_deprecated_fields","title":"populate_deprecated_fields","text":"

If deprecated fields are provided, populate the corresponding new fields to preserve orchestration behavior.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
@root_validator\ndef populate_deprecated_fields(cls, values):\n\"\"\"\n    If deprecated fields are provided, populate the corresponding new fields\n    to preserve orchestration behavior.\n    \"\"\"\n    if not values.get(\"retries\", None) and values.get(\"max_retries\", 0) != 0:\n        values[\"retries\"] = values[\"max_retries\"]\n    if (\n        not values.get(\"retry_delay\", None)\n        and values.get(\"retry_delay_seconds\", 0) != 0\n    ):\n        values[\"retry_delay\"] = values[\"retry_delay_seconds\"]\n    return values\n
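A small sketch of the behavior described above: supplying only the deprecated fields still populates the corresponding new ones.

from prefect.server.schemas.core import FlowRunPolicy

# Only deprecated fields are supplied; the validator copies them over.
policy = FlowRunPolicy(max_retries=3, retry_delay_seconds=10)
print(policy.retries)      # populated from max_retries
print(policy.retry_delay)  # populated from retry_delay_seconds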
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.FlowRunnerSettings","title":"FlowRunnerSettings","text":"

Bases: PrefectBaseModel

An API schema for passing details about the flow runner.

This schema is agnostic to the types and configuration provided by clients.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class FlowRunnerSettings(PrefectBaseModel):\n\"\"\"\n    An API schema for passing details about the flow runner.\n\n    This schema is agnostic to the types and configuration provided by clients\n    \"\"\"\n\n    type: Optional[str] = Field(\n        default=None,\n        description=(\n            \"The type of the flow runner which can be used by the client for\"\n            \" dispatching.\"\n        ),\n    )\n    config: Optional[dict] = Field(\n        default=None, description=\"The configuration for the given flow runner type.\"\n    )\n\n    # The following is required for composite compatibility in the ORM\n\n    def __init__(self, type: str = None, config: dict = None, **kwargs) -> None:\n        # Pydantic does not support positional arguments so they must be converted to\n        # keyword arguments\n        super().__init__(type=type, config=config, **kwargs)\n\n    def __composite_values__(self):\n        return self.type, self.config\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Log","title":"Log","text":"

Bases: ORMBaseModel

An ORM representation of log data.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class Log(ORMBaseModel):\n\"\"\"An ORM representation of log data.\"\"\"\n\n    name: str = Field(default=..., description=\"The logger name.\")\n    level: int = Field(default=..., description=\"The log level.\")\n    message: str = Field(default=..., description=\"The log message.\")\n    timestamp: DateTimeTZ = Field(default=..., description=\"The log timestamp.\")\n    flow_run_id: UUID = Field(\n        default=..., description=\"The flow run ID associated with the log.\"\n    )\n    task_run_id: Optional[UUID] = Field(\n        default=None, description=\"The task run ID associated with the log.\"\n    )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.QueueFilter","title":"QueueFilter","text":"

Bases: PrefectBaseModel

Filter criteria definition for a work queue.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class QueueFilter(PrefectBaseModel):\n\"\"\"Filter criteria definition for a work queue.\"\"\"\n\n    tags: Optional[List[str]] = Field(\n        default=None,\n        description=\"Only include flow runs with these tags in the work queue.\",\n    )\n    deployment_ids: Optional[List[UUID]] = Field(\n        default=None,\n        description=\"Only include flow runs from these deployments in the work queue.\",\n    )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.SavedSearch","title":"SavedSearch","text":"

Bases: ORMBaseModel

An ORM representation of saved search data. Represents a set of filter criteria.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class SavedSearch(ORMBaseModel):\n\"\"\"An ORM representation of saved search data. Represents a set of filter criteria.\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the saved search.\")\n    filters: List[SavedSearchFilter] = Field(\n        default_factory=list, description=\"The filter set for the saved search.\"\n    )\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.SavedSearchFilter","title":"SavedSearchFilter","text":"

Bases: PrefectBaseModel

A filter for a saved search model. Intended for use by the Prefect UI.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class SavedSearchFilter(PrefectBaseModel):\n\"\"\"A filter for a saved search model. Intended for use by the Prefect UI.\"\"\"\n\n    object: str = Field(default=..., description=\"The object over which to filter.\")\n    property: str = Field(\n        default=..., description=\"The property of the object on which to filter.\"\n    )\n    type: str = Field(default=..., description=\"The type of the property.\")\n    operation: str = Field(\n        default=...,\n        description=\"The operator to apply to the object. For example, `equals`.\",\n    )\n    value: Any = Field(\n        default=..., description=\"A JSON-compatible value for the filter.\"\n    )\n
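An illustrative saved search; the object, property, and operation vocabulary is defined by the Prefect UI, so the specific values below are assumptions rather than documented constants.

from prefect.server.schemas.core import SavedSearch, SavedSearchFilter

# The filter vocabulary (object/property/operation values) is UI-defined;
# these values are illustrative only.
search = SavedSearch(
    name="failed-runs",
    filters=[
        SavedSearchFilter(
            object="flow_run",
            property="state",
            type="string",
            operation="equals",
            value="Failed",
        )
    ],
)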
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.TaskRun","title":"TaskRun","text":"

Bases: ORMBaseModel

An ORM representation of task run data.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class TaskRun(ORMBaseModel):\n\"\"\"An ORM representation of task run data.\"\"\"\n\n    name: str = Field(default_factory=lambda: generate_slug(2), example=\"my-task-run\")\n    flow_run_id: UUID = Field(\n        default=..., description=\"The flow run id of the task run.\"\n    )\n    task_key: str = Field(\n        default=..., description=\"A unique identifier for the task being run.\"\n    )\n    dynamic_key: str = Field(\n        default=...,\n        description=(\n            \"A dynamic key used to differentiate between multiple runs of the same task\"\n            \" within the same flow run.\"\n        ),\n    )\n    cache_key: Optional[str] = Field(\n        default=None,\n        description=(\n            \"An optional cache key. If a COMPLETED state associated with this cache key\"\n            \" is found, the cached COMPLETED state will be used instead of executing\"\n            \" the task run.\"\n        ),\n    )\n    cache_expiration: Optional[DateTimeTZ] = Field(\n        default=None, description=\"Specifies when the cached state should expire.\"\n    )\n    task_version: Optional[str] = Field(\n        default=None, description=\"The version of the task being run.\"\n    )\n    empirical_policy: TaskRunPolicy = Field(\n        default_factory=TaskRunPolicy,\n    )\n    tags: List[str] = Field(\n        default_factory=list,\n        description=\"A list of tags for the task run.\",\n        example=[\"tag-1\", \"tag-2\"],\n    )\n    state_id: Optional[UUID] = Field(\n        default=None, description=\"The id of the current task run state.\"\n    )\n    task_inputs: Dict[str, List[Union[TaskRunResult, Parameter, Constant]]] = Field(\n        default_factory=dict,\n        description=(\n            \"Tracks the source of inputs to a task run. Used for internal bookkeeping.\"\n        ),\n    )\n    state_type: Optional[states.StateType] = Field(\n        default=None, description=\"The type of the current task run state.\"\n    )\n    state_name: Optional[str] = Field(\n        default=None, description=\"The name of the current task run state.\"\n    )\n    run_count: int = Field(\n        default=0, description=\"The number of times the task run has been executed.\"\n    )\n    flow_run_run_count: int = Field(\n        default=0,\n        description=(\n            \"If the parent flow has retried, this indicates the flow retry this run is\"\n            \" associated with.\"\n        ),\n    )\n    expected_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The task run's expected start time.\",\n    )\n\n    # the next scheduled start time will be populated\n    # whenever the run is in a scheduled state\n    next_scheduled_start_time: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"The next time the task run is scheduled to start.\",\n    )\n    start_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual start time.\"\n    )\n    end_time: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The actual end time.\"\n    )\n    total_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=(\n            \"Total run time. 
If the task run was executed multiple times, the time of\"\n            \" each run will be summed.\"\n        ),\n    )\n    estimated_run_time: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"A real-time estimate of total run time.\",\n    )\n    estimated_start_time_delta: datetime.timedelta = Field(\n        default=datetime.timedelta(0),\n        description=\"The difference between actual and expected start time.\",\n    )\n\n    # relationships\n    # flow_run: FlowRun = None\n    # subflow_runs: List[FlowRun] = Field(default_factory=list)\n    state: Optional[states.State] = Field(\n        default=None, description=\"The current task run state.\"\n    )\n\n    @validator(\"name\", pre=True)\n    def set_name(cls, name):\n        return name or generate_slug(2)\n\n    @validator(\"cache_key\")\n    def validate_cache_key_length(cls, cache_key):\n        if cache_key and len(cache_key) > PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH.value():\n            raise ValueError(\n                \"Cache key exceeded maximum allowed length of\"\n                f\" {PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH.value()} characters.\"\n            )\n        return cache_key\n
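The `cache_key` and `cache_expiration` fields above describe caching behavior that is normally configured on the client side. A minimal sketch of the client-side counterpart, assuming Prefect's `cache_key_fn` / `cache_expiration` task options (not part of this server schema):

# Illustrative sketch: client-side settings that ultimately populate the
# `cache_key` and `cache_expiration` fields of a TaskRun record.
from datetime import timedelta

from prefect import task
from prefect.tasks import task_input_hash


@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))
def add(x: int, y: int) -> int:
    # Re-running with the same inputs within an hour reuses the cached
    # COMPLETED state instead of executing the task again.
    return x + y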
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkPool","title":"WorkPool","text":"

Bases: ORMBaseModel

An ORM representation of a work pool

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class WorkPool(ORMBaseModel):\n\"\"\"An ORM representation of a work pool\"\"\"\n\n    name: str = Field(\n        description=\"The name of the work pool.\",\n    )\n    description: Optional[str] = Field(\n        default=None, description=\"A description of the work pool.\"\n    )\n    type: str = Field(description=\"The work pool type.\")\n    base_job_template: Dict[str, Any] = Field(\n        default_factory=dict, description=\"The work pool's base job template.\"\n    )\n    is_paused: bool = Field(\n        default=False,\n        description=\"Pausing the work pool stops the delivery of all work.\",\n    )\n    concurrency_limit: Optional[conint(ge=0)] = Field(\n        default=None, description=\"A concurrency limit for the work pool.\"\n    )\n\n    # this required field has a default of None so that the custom validator\n    # below will be called and produce a more helpful error message\n    default_queue_id: UUID = Field(\n        None, description=\"The id of the pool's default queue.\"\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        raise_on_name_with_banned_characters(v)\n        return v\n\n    @validator(\"default_queue_id\", always=True)\n    def helpful_error_for_missing_default_queue_id(cls, v):\n\"\"\"\n        Default queue ID is required because all pools must have a default queue\n        ID, but it represents a circular foreign key relationship to a\n        WorkQueue (which can't be created until the work pool exists).\n        Therefore, while this field can *technically* be null, it shouldn't be.\n        This should only be an issue when creating new pools, as reading\n        existing ones will always have this field populated. This custom error\n        message will help users understand that they should use the\n        `actions.WorkPoolCreate` model in that case.\n        \"\"\"\n        if v is None:\n            raise ValueError(\n                \"`default_queue_id` is a required field. If you are \"\n                \"creating a new WorkPool and don't have a queue \"\n                \"ID yet, use the `actions.WorkPoolCreate` model instead.\"\n            )\n        return v\n\n    @validator(\"base_job_template\")\n    def validate_base_job_template(cls, v):\n        if v == dict():\n            return v\n\n        job_config = v.get(\"job_configuration\")\n        variables = v.get(\"variables\")\n        if not (job_config and variables):\n            raise ValueError(\n                \"The `base_job_template` must contain both a `job_configuration` key\"\n                \" and a `variables` key.\"\n            )\n        template_variables = set()\n        for template in job_config.values():\n            # find any variables inside of double curly braces, minus any whitespace\n            # e.g. \"{{ var1 }}.{{var2}}\" -> [\"var1\", \"var2\"]\n            # convert to json string to handle nested objects and lists\n            found_variables = find_placeholders(json.dumps(template))\n            template_variables = {placeholder.name for placeholder in found_variables}\n\n        provided_variables = set(variables[\"properties\"].keys())\n        if not template_variables.issubset(provided_variables):\n            missing_variables = template_variables - provided_variables\n            raise ValueError(\n                \"The variables specified in the job configuration template must be \"\n                \"present as properties in the variables schema. 
\"\n                \"Your job configuration uses the following undeclared \"\n                f\"variable(s): {' ,'.join(missing_variables)}.\"\n            )\n        return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkPool.helpful_error_for_missing_default_queue_id","title":"helpful_error_for_missing_default_queue_id","text":"

Default queue ID is required because all pools must have a default queue ID, but it represents a circular foreign key relationship to a WorkQueue (which can't be created until the work pool exists). Therefore, while this field can technically be null, it shouldn't be. This should only be an issue when creating new pools, as reading existing ones will always have this field populated. This custom error message will help users understand that they should use the actions.WorkPoolCreate model in that case.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
@validator(\"default_queue_id\", always=True)\ndef helpful_error_for_missing_default_queue_id(cls, v):\n\"\"\"\n    Default queue ID is required because all pools must have a default queue\n    ID, but it represents a circular foreign key relationship to a\n    WorkQueue (which can't be created until the work pool exists).\n    Therefore, while this field can *technically* be null, it shouldn't be.\n    This should only be an issue when creating new pools, as reading\n    existing ones will always have this field populated. This custom error\n    message will help users understand that they should use the\n    `actions.WorkPoolCreate` model in that case.\n    \"\"\"\n    if v is None:\n        raise ValueError(\n            \"`default_queue_id` is a required field. If you are \"\n            \"creating a new WorkPool and don't have a queue \"\n            \"ID yet, use the `actions.WorkPoolCreate` model instead.\"\n        )\n    return v\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkQueue","title":"WorkQueue","text":"

Bases: ORMBaseModel

An ORM representation of a work queue

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class WorkQueue(ORMBaseModel):\n\"\"\"An ORM representation of a work queue\"\"\"\n\n    name: str = Field(default=..., description=\"The name of the work queue.\")\n    description: Optional[str] = Field(\n        default=\"\", description=\"An optional description for the work queue.\"\n    )\n    is_paused: bool = Field(\n        default=False, description=\"Whether or not the work queue is paused.\"\n    )\n    concurrency_limit: Optional[conint(ge=0)] = Field(\n        default=None, description=\"An optional concurrency limit for the work queue.\"\n    )\n    priority: conint(ge=1) = Field(\n        default=1,\n        description=(\n            \"The queue's priority. Lower values are higher priority (1 is the highest).\"\n        ),\n    )\n    # Will be required after a future migration\n    work_pool_id: Optional[UUID] = Field(\n        default=None, description=\"The work pool with which the queue is associated.\"\n    )\n    filter: Optional[QueueFilter] = Field(\n        default=None,\n        description=\"DEPRECATED: Filter criteria for the work queue.\",\n        deprecated=True,\n    )\n    last_polled: Optional[DateTimeTZ] = Field(\n        default=None, description=\"The last time an agent polled this queue for work.\"\n    )\n\n    @validator(\"name\", check_fields=False)\n    def validate_name_characters(cls, v):\n        raise_on_name_with_banned_characters(v)\n        return v\n

"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkQueueHealthPolicy","title":"WorkQueueHealthPolicy","text":"

Bases: PrefectBaseModel

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class WorkQueueHealthPolicy(PrefectBaseModel):\n    maximum_late_runs: Optional[int] = Field(\n        default=0,\n        description=(\n            \"The maximum number of late runs in the work queue before it is deemed\"\n            \" unhealthy. Defaults to `0`.\"\n        ),\n    )\n    maximum_seconds_since_last_polled: Optional[int] = Field(\n        default=60,\n        description=(\n            \"The maximum number of time in seconds elapsed since work queue has been\"\n            \" polled before it is deemed unhealthy. Defaults to `60`.\"\n        ),\n    )\n\n    def evaluate_health_status(\n        self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n    ) -> bool:\n\"\"\"\n        Given empirical information about the state of the work queue, evaulate its health status.\n\n        Args:\n            late_runs: the count of late runs for the work queue.\n            last_polled: the last time the work queue was polled, if available.\n\n        Returns:\n            bool: whether or not the work queue is healthy.\n        \"\"\"\n        healthy = True\n        if (\n            self.maximum_late_runs is not None\n            and late_runs_count > self.maximum_late_runs\n        ):\n            healthy = False\n\n        if self.maximum_seconds_since_last_polled is not None:\n            if (\n                last_polled is None\n                or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n                > self.maximum_seconds_since_last_polled\n            ):\n                healthy = False\n\n        return healthy\n
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.WorkQueueHealthPolicy.evaluate_health_status","title":"evaluate_health_status","text":"

Given empirical information about the state of the work queue, evaluate its health status.

Parameters:

- late_runs_count (int, required): the count of late runs for the work queue.
- last_polled (Optional[DateTimeTZ], default None): the last time the work queue was polled, if available.

Returns:

- bool: whether or not the work queue is healthy.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
def evaluate_health_status(\n    self, late_runs_count: int, last_polled: Optional[DateTimeTZ] = None\n) -> bool:\n\"\"\"\n    Given empirical information about the state of the work queue, evaulate its health status.\n\n    Args:\n        late_runs: the count of late runs for the work queue.\n        last_polled: the last time the work queue was polled, if available.\n\n    Returns:\n        bool: whether or not the work queue is healthy.\n    \"\"\"\n    healthy = True\n    if (\n        self.maximum_late_runs is not None\n        and late_runs_count > self.maximum_late_runs\n    ):\n        healthy = False\n\n    if self.maximum_seconds_since_last_polled is not None:\n        if (\n            last_polled is None\n            or pendulum.now(\"UTC\").diff(last_polled).in_seconds()\n            > self.maximum_seconds_since_last_polled\n        ):\n            healthy = False\n\n    return healthy\n
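A short usage sketch of the policy above, evaluating a queue with two late runs that was last polled five minutes ago (both checks fail against the configured thresholds):

import pendulum

from prefect.server.schemas.core import WorkQueueHealthPolicy

policy = WorkQueueHealthPolicy(
    maximum_late_runs=0,
    maximum_seconds_since_last_polled=60,
)

# Two late runs and a stale poll timestamp -> both checks fail.
is_healthy = policy.evaluate_health_status(
    late_runs_count=2,
    last_polled=pendulum.now("UTC").subtract(minutes=5),
)
print(is_healthy)  # False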
"},{"location":"api-ref/server/schemas/core/#prefect.server.schemas.core.Worker","title":"Worker","text":"

Bases: ORMBaseModel

An ORM representation of a worker

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/core.py
class Worker(ORMBaseModel):\n\"\"\"An ORM representation of a worker\"\"\"\n\n    name: str = Field(description=\"The name of the worker.\")\n    work_pool_id: UUID = Field(\n        description=\"The work pool with which the queue is associated.\"\n    )\n    last_heartbeat_time: datetime.datetime = Field(\n        None, description=\"The last time the worker process sent a heartbeat.\"\n    )\n
"},{"location":"api-ref/server/schemas/filters/","title":"server.schemas.filters","text":""},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters","title":"prefect.server.schemas.filters","text":"

Schemas that define Prefect REST API filtering operations.

Each filter schema includes logic for transforming itself into a SQL where clause.

"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilter","title":"ArtifactCollectionFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter artifact collections. Only artifact collections matching all criteria will be returned

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactCollectionFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter artifact collections. Only artifact collections matching all criteria will be returned\"\"\"\n\n    latest_id: Optional[ArtifactCollectionFilterLatestId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.id`\"\n    )\n    key: Optional[ArtifactCollectionFilterKey] = Field(\n        default=None, description=\"Filter criteria for `Artifact.key`\"\n    )\n    flow_run_id: Optional[ArtifactCollectionFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n    )\n    task_run_id: Optional[ArtifactCollectionFilterTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n    )\n    type: Optional[ArtifactCollectionFilterType] = Field(\n        default=None, description=\"Filter criteria for `Artifact.type`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.latest_id is not None:\n            filters.append(self.latest_id.as_sql_filter(db))\n        if self.key is not None:\n            filters.append(self.key.as_sql_filter(db))\n        if self.flow_run_id is not None:\n            filters.append(self.flow_run_id.as_sql_filter(db))\n        if self.task_run_id is not None:\n            filters.append(self.task_run_id.as_sql_filter(db))\n        if self.type is not None:\n            filters.append(self.type.as_sql_filter(db))\n\n        return filters\n
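A hedged sketch of composing the filter above from the nested field filters documented later on this page; each populated criterion contributes one SQL WHERE clause, and unset criteria are skipped:

from prefect.server.schemas.filters import (
    ArtifactCollectionFilter,
    ArtifactCollectionFilterKey,
    ArtifactCollectionFilterType,
)

# Match latest artifacts whose key starts with "my-report" and whose type is
# "table"; the key pattern uses SQL-style wildcards.
artifact_filter = ArtifactCollectionFilter(
    key=ArtifactCollectionFilterKey(like_="my-report-%"),
    type=ArtifactCollectionFilterType(any_=["table"]),
)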
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
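A minimal sketch of the behavior described above, using a hypothetical model with a SecretStr field; the model and the import path for PrefectBaseModel are assumptions used only for illustration:

from pydantic import SecretStr

from prefect.server.utilities.schemas import PrefectBaseModel


class Credentials(PrefectBaseModel):
    # Hypothetical model, defined only to illustrate `json(include_secrets=...)`.
    token: SecretStr


creds = Credentials(token=SecretStr("abc123"))
print(creds.json())                      # token is obfuscated, e.g. "**********"
print(creds.json(include_secrets=True))  # token is revealed: "abc123"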
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
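A hedged usage sketch of the classmethod above, deriving a narrower filter model that keeps only the `key` criterion (the subclass name is arbitrary):

from prefect.server.schemas.filters import ArtifactCollectionFilter

# Derive a model that only exposes the `key` filter criterion.
KeyOnlyFilter = ArtifactCollectionFilter.subclass(
    name="KeyOnlyFilter",
    include_fields=["key"],
)

print(KeyOnlyFilter.__fields__.keys())  # only "key" remains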
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterFlowRunId","title":"ArtifactCollectionFilterFlowRunId","text":"

Bases: PrefectFilterBaseModel

Filter by ArtifactCollection.flow_run_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactCollectionFilterFlowRunId(PrefectFilterBaseModel):\n\"\"\"Filter by `ArtifactCollection.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.flow_run_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterFlowRunId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterFlowRunId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterKey","title":"ArtifactCollectionFilterKey","text":"

Bases: PrefectFilterBaseModel

Filter by ArtifactCollection.key.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactCollectionFilterKey(PrefectFilterBaseModel):\n\"\"\"Filter by `ArtifactCollection.key`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact keys to include\"\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match artifact keys against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        example=\"my-artifact-%\",\n    )\n\n    exists_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If `true`, only include artifacts with a non-null key. If `false`, \"\n            \"only include artifacts with a null key. Should return all rows in \"\n            \"the ArtifactCollection table if specified.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.key.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.ArtifactCollection.key.ilike(f\"%{self.like_}%\"))\n        if self.exists_ is not None:\n            filters.append(\n                db.ArtifactCollection.key.isnot(None)\n                if self.exists_\n                else db.ArtifactCollection.key.is_(None)\n            )\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterKey.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterKey.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterLatestId","title":"ArtifactCollectionFilterLatestId","text":"

Bases: PrefectFilterBaseModel

Filter by ArtifactCollection.latest_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactCollectionFilterLatestId(PrefectFilterBaseModel):\n\"\"\"Filter by `ArtifactCollection.latest_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of artifact ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.latest_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterLatestId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterLatestId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterTaskRunId","title":"ArtifactCollectionFilterTaskRunId","text":"

Bases: PrefectFilterBaseModel

Filter by ArtifactCollection.task_run_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactCollectionFilterTaskRunId(PrefectFilterBaseModel):\n\"\"\"Filter by `ArtifactCollection.task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.task_run_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterTaskRunId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterTaskRunId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterType","title":"ArtifactCollectionFilterType","text":"

Bases: PrefectFilterBaseModel

Filter by ArtifactCollection.type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactCollectionFilterType(PrefectFilterBaseModel):\n\"\"\"Filter by `ArtifactCollection.type`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to include\"\n    )\n    not_any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to exclude\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.ArtifactCollection.type.in_(self.any_))\n        if self.not_any_ is not None:\n            filters.append(db.ArtifactCollection.type.notin_(self.not_any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterType.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactCollectionFilterType.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilter","title":"ArtifactFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter artifacts. Only artifacts matching all criteria will be returned

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter artifacts. Only artifacts matching all criteria will be returned\"\"\"\n\n    id: Optional[ArtifactFilterId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.id`\"\n    )\n    key: Optional[ArtifactFilterKey] = Field(\n        default=None, description=\"Filter criteria for `Artifact.key`\"\n    )\n    flow_run_id: Optional[ArtifactFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.flow_run_id`\"\n    )\n    task_run_id: Optional[ArtifactFilterTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `Artifact.task_run_id`\"\n    )\n    type: Optional[ArtifactFilterType] = Field(\n        default=None, description=\"Filter criteria for `Artifact.type`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.key is not None:\n            filters.append(self.key.as_sql_filter(db))\n        if self.flow_run_id is not None:\n            filters.append(self.flow_run_id.as_sql_filter(db))\n        if self.task_run_id is not None:\n            filters.append(self.task_run_id.as_sql_filter(db))\n        if self.type is not None:\n            filters.append(self.type.as_sql_filter(db))\n\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterFlowRunId","title":"ArtifactFilterFlowRunId","text":"

Bases: PrefectFilterBaseModel

Filter by Artifact.flow_run_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactFilterFlowRunId(PrefectFilterBaseModel):\n\"\"\"Filter by `Artifact.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.flow_run_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterFlowRunId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterFlowRunId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterId","title":"ArtifactFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by Artifact.id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by `Artifact.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of artifact ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterKey","title":"ArtifactFilterKey","text":"

Bases: PrefectFilterBaseModel

Filter by Artifact.key.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactFilterKey(PrefectFilterBaseModel):\n\"\"\"Filter by `Artifact.key`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact keys to include\"\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match artifact keys against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        example=\"my-artifact-%\",\n    )\n\n    exists_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If `true`, only include artifacts with a non-null key. If `false`, \"\n            \"only include artifacts with a null key.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.key.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Artifact.key.ilike(f\"%{self.like_}%\"))\n        if self.exists_ is not None:\n            filters.append(\n                db.Artifact.key.isnot(None)\n                if self.exists_\n                else db.Artifact.key.is_(None)\n            )\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterKey.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterKey.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterTaskRunId","title":"ArtifactFilterTaskRunId","text":"

Bases: PrefectFilterBaseModel

Filter by Artifact.task_run_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactFilterTaskRunId(PrefectFilterBaseModel):\n\"\"\"Filter by `Artifact.task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.task_run_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterTaskRunId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterTaskRunId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterType","title":"ArtifactFilterType","text":"

Bases: PrefectFilterBaseModel

Filter by Artifact.type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class ArtifactFilterType(PrefectFilterBaseModel):\n\"\"\"Filter by `Artifact.type`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to include\"\n    )\n    not_any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of artifact types to exclude\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Artifact.type.in_(self.any_))\n        if self.not_any_ is not None:\n            filters.append(db.Artifact.type.notin_(self.not_any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterType.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.ArtifactFilterType.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilter","title":"BlockDocumentFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockDocumentFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned\"\"\"\n\n    id: Optional[BlockDocumentFilterId] = Field(\n        default=None, description=\"Filter criteria for `BlockDocument.id`\"\n    )\n    is_anonymous: Optional[BlockDocumentFilterIsAnonymous] = Field(\n        # default is to exclude anonymous blocks\n        BlockDocumentFilterIsAnonymous(eq_=False),\n        description=(\n            \"Filter criteria for `BlockDocument.is_anonymous`. \"\n            \"Defaults to excluding anonymous blocks.\"\n        ),\n    )\n    block_type_id: Optional[BlockDocumentFilterBlockTypeId] = Field(\n        default=None, description=\"Filter criteria for `BlockDocument.block_type_id`\"\n    )\n    name: Optional[BlockDocumentFilterName] = Field(\n        default=None, description=\"Filter criteria for `BlockDocument.name`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.is_anonymous is not None:\n            filters.append(self.is_anonymous.as_sql_filter(db))\n        if self.block_type_id is not None:\n            filters.append(self.block_type_id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        return filters\n
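The `is_anonymous` criterion above defaults to excluding anonymous block documents. A hedged sketch of overriding that default so both anonymous and named documents match, based on the skip-when-None behavior visible in `_get_filter_list`:

from prefect.server.schemas.filters import BlockDocumentFilter

# Passing `is_anonymous=None` removes the default criterion, so anonymous and
# named block documents are both returned.
all_documents_filter = BlockDocumentFilter(is_anonymous=None)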
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
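As a hedged sketch of how this serialization method is called (the filter value is illustrative; filter models carry no secret fields, so `include_secrets` is shown purely to demonstrate the flag):

from prefect.server.schemas.filters import BlockDocumentFilter, BlockDocumentFilterName

payload = BlockDocumentFilter(
    name=BlockDocumentFilterName(any_=["my-block"])
).json()
print(payload)  # JSON string, e.g. suitable for an API request body

# On models that do contain SecretStr/SecretBytes fields, json(include_secrets=True)
# reveals the secret values; passing a custom `encoder` at the same time raises ValueError.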
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.
include_fields (List[str], optional): fields to include. Default: None.
exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
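A hypothetical sketch of this classmethod: deriving a narrower model that keeps only two of the fields defined on BlockDocumentFilter (the subclass name below is made up):

from prefect.server.schemas.filters import BlockDocumentFilter

NameAndTypeFilter = BlockDocumentFilter.subclass(
    name="NameAndTypeFilter",
    include_fields=["name", "block_type_id"],
)
# The derived model exposes only the included fields.
print(NameAndTypeFilter.__fields__.keys())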
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterBlockTypeId","title":"BlockDocumentFilterBlockTypeId","text":"

Bases: PrefectFilterBaseModel

Filter by BlockDocument.block_type_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockDocumentFilterBlockTypeId(PrefectFilterBaseModel):\n\"\"\"Filter by `BlockDocument.block_type_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of block type ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockDocument.block_type_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterBlockTypeId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterBlockTypeId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.
include_fields (List[str], optional): fields to include. Default: None.
exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterId","title":"BlockDocumentFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by BlockDocument.id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockDocumentFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by `BlockDocument.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of block ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockDocument.id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.
include_fields (List[str], optional): fields to include. Default: None.
exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterIsAnonymous","title":"BlockDocumentFilterIsAnonymous","text":"

Bases: PrefectFilterBaseModel

Filter by BlockDocument.is_anonymous.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockDocumentFilterIsAnonymous(PrefectFilterBaseModel):\n\"\"\"Filter by `BlockDocument.is_anonymous`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Filter block documents for only those that are or are not anonymous.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.eq_ is not None:\n            filters.append(db.BlockDocument.is_anonymous.is_(self.eq_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterIsAnonymous.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterIsAnonymous.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.
include_fields (List[str], optional): fields to include. Default: None.
exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterName","title":"BlockDocumentFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by BlockDocument.name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockDocumentFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `BlockDocument.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of block names to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockDocument.name.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockDocumentFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.
include_fields (List[str], optional): fields to include. Default: None.
exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilter","title":"BlockSchemaFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter BlockSchemas

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockSchemaFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter BlockSchemas\"\"\"\n\n    block_type_id: Optional[BlockSchemaFilterBlockTypeId] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.block_type_id`\"\n    )\n    block_capabilities: Optional[BlockSchemaFilterCapabilities] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.capabilities`\"\n    )\n    id: Optional[BlockSchemaFilterId] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.id`\"\n    )\n    version: Optional[BlockSchemaFilterVersion] = Field(\n        default=None, description=\"Filter criteria for `BlockSchema.version`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.block_type_id is not None:\n            filters.append(self.block_type_id.as_sql_filter(db))\n        if self.block_capabilities is not None:\n            filters.append(self.block_capabilities.as_sql_filter(db))\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.version is not None:\n            filters.append(self.version.as_sql_filter(db))\n\n        return filters\n
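For illustration, a minimal sketch of combining the capability and version criteria (values taken from the examples documented above):

from prefect.server.schemas.filters import (
    BlockSchemaFilter,
    BlockSchemaFilterCapabilities,
    BlockSchemaFilterVersion,
)

# Schemas whose capabilities are a superset of the requested list,
# restricted to the given schema versions.
schema_filter = BlockSchemaFilter(
    block_capabilities=BlockSchemaFilterCapabilities(
        all_=["read-storage", "write-storage"]
    ),
    version=BlockSchemaFilterVersion(any_=["2.0.0", "2.1.0"]),
)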
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.
include_fields (List[str], optional): fields to include. Default: None.
exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterBlockTypeId","title":"BlockSchemaFilterBlockTypeId","text":"

Bases: PrefectFilterBaseModel

Filter by BlockSchema.block_type_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockSchemaFilterBlockTypeId(PrefectFilterBaseModel):\n\"\"\"Filter by `BlockSchema.block_type_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of block type ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockSchema.block_type_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterBlockTypeId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterBlockTypeId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.
include_fields (List[str], optional): fields to include. Default: None.
exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterCapabilities","title":"BlockSchemaFilterCapabilities","text":"

Bases: PrefectFilterBaseModel

Filter by BlockSchema.capabilities

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockSchemaFilterCapabilities(PrefectFilterBaseModel):\n\"\"\"Filter by `BlockSchema.capabilities`\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        example=[\"write-storage\", \"read-storage\"],\n        description=(\n            \"A list of block capabilities. Block entities will be returned only if an\"\n            \" associated block schema has a superset of the defined capabilities.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.BlockSchema.capabilities, self.all_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterCapabilities.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterCapabilities.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.
include_fields (List[str], optional): fields to include. Default: None.
exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterId","title":"BlockSchemaFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by BlockSchema.id

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockSchemaFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by BlockSchema.id\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockSchema.id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.
include_fields (List[str], optional): fields to include. Default: None.
exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterVersion","title":"BlockSchemaFilterVersion","text":"

Bases: PrefectFilterBaseModel

Filter by BlockSchema.version

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockSchemaFilterVersion(PrefectFilterBaseModel):\n\"\"\"Filter by `BlockSchema.version`\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        example=[\"2.0.0\", \"2.1.0\"],\n        description=\"A list of block schema versions.\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockSchema.version.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterVersion.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockSchemaFilterVersion.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.
include_fields (List[str], optional): fields to include. Default: None.
exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilter","title":"BlockTypeFilter","text":"

Bases: PrefectFilterBaseModel

Filter BlockTypes

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockTypeFilter(PrefectFilterBaseModel):\n\"\"\"Filter BlockTypes\"\"\"\n\n    name: Optional[BlockTypeFilterName] = Field(\n        default=None, description=\"Filter criteria for `BlockType.name`\"\n    )\n\n    slug: Optional[BlockTypeFilterSlug] = Field(\n        default=None, description=\"Filter criteria for `BlockType.slug`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.slug is not None:\n            filters.append(self.slug.as_sql_filter(db))\n\n        return filters\n
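As a sketch of using this filter (the name pattern comes from the documented example below; the slug is purely illustrative), both criteria must match:

from prefect.server.schemas.filters import (
    BlockTypeFilter,
    BlockTypeFilterName,
    BlockTypeFilterSlug,
)

# Case-insensitive partial match on the block type name, plus an exact slug match.
type_filter = BlockTypeFilter(
    name=BlockTypeFilterName(like_="marvin"),
    slug=BlockTypeFilterSlug(any_=["marvin-block"]),
)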
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.
include_fields (List[str], optional): fields to include. Default: None.
exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterName","title":"BlockTypeFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by BlockType.name

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockTypeFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `BlockType.name`\"\"\"\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        example=\"marvin\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.like_ is not None:\n            filters.append(db.BlockType.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.
include_fields (List[str], optional): fields to include. Default: None.
exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterSlug","title":"BlockTypeFilterSlug","text":"

Bases: PrefectFilterBaseModel

Filter by BlockType.slug

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class BlockTypeFilterSlug(PrefectFilterBaseModel):\n\"\"\"Filter by `BlockType.slug`\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of slugs to match\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.BlockType.slug.in_(self.any_))\n\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterSlug.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.BlockTypeFilterSlug.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.
include_fields (List[str], optional): fields to include. Default: None.
exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilter","title":"DeploymentFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter for deployments. Only deployments matching all criteria will be returned.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class DeploymentFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter for deployments. Only deployments matching all criteria will be returned.\"\"\"\n\n    id: Optional[DeploymentFilterId] = Field(\n        default=None, description=\"Filter criteria for `Deployment.id`\"\n    )\n    name: Optional[DeploymentFilterName] = Field(\n        default=None, description=\"Filter criteria for `Deployment.name`\"\n    )\n    is_schedule_active: Optional[DeploymentFilterIsScheduleActive] = Field(\n        default=None, description=\"Filter criteria for `Deployment.is_schedule_active`\"\n    )\n    tags: Optional[DeploymentFilterTags] = Field(\n        default=None, description=\"Filter criteria for `Deployment.tags`\"\n    )\n    work_queue_name: Optional[DeploymentFilterWorkQueueName] = Field(\n        default=None, description=\"Filter criteria for `Deployment.work_queue_name`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.is_schedule_active is not None:\n            filters.append(self.is_schedule_active.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n        if self.work_queue_name is not None:\n            filters.append(self.work_queue_name.as_sql_filter(db))\n\n        return filters\n
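For illustration, a minimal sketch that combines several of the criteria above (tag and work queue values are made up):

from prefect.server.schemas.filters import (
    DeploymentFilter,
    DeploymentFilterIsScheduleActive,
    DeploymentFilterTags,
    DeploymentFilterWorkQueueName,
)

# Deployments with an active schedule, tagged with both "prod" and "etl",
# and assigned to the "default" work queue.
deployment_filter = DeploymentFilter(
    is_schedule_active=DeploymentFilterIsScheduleActive(eq_=True),
    tags=DeploymentFilterTags(all_=["prod", "etl"]),
    work_queue_name=DeploymentFilterWorkQueueName(any_=["default"]),
)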
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.
include_fields (List[str], optional): fields to include. Default: None.
exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterId","title":"DeploymentFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by Deployment.id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class DeploymentFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by `Deployment.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of deployment ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Deployment.id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.
include_fields (List[str], optional): fields to include. Default: None.
exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterIsScheduleActive","title":"DeploymentFilterIsScheduleActive","text":"

Bases: PrefectFilterBaseModel

Filter by Deployment.is_schedule_active.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class DeploymentFilterIsScheduleActive(PrefectFilterBaseModel):\n\"\"\"Filter by `Deployment.is_schedule_active`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=\"Only returns where deployment schedule is/is not active\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.eq_ is not None:\n            filters.append(db.Deployment.is_schedule_active.is_(self.eq_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterIsScheduleActive.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterIsScheduleActive.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.
include_fields (List[str], optional): fields to include. Default: None.
exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterName","title":"DeploymentFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by Deployment.name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class DeploymentFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `Deployment.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of deployment names to include\",\n        example=[\"my-deployment-1\", \"my-deployment-2\"],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        example=\"marvin\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Deployment.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Deployment.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
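A brief sketch contrasting the two matching modes (values are taken from the documented examples above):

from prefect.server.schemas.filters import DeploymentFilterName

# Exact membership match on deployment names...
by_name = DeploymentFilterName(any_=["my-deployment-1", "my-deployment-2"])

# ...or a case-insensitive partial match: "marvin" matches "marvin",
# "sad-Marvin", and "marvin-robot".
by_pattern = DeploymentFilterName(like_="marvin")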
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None.
include_fields (List[str], optional): fields to include. Default: None.
exclude_fields (List[str], optional): fields to exclude. Default: None.

Returns:

BaseModel (Type[B]): a subclass of this class.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterTags","title":"DeploymentFilterTags","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by Deployment.tags.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class DeploymentFilterTags(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `Deployment.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        example=[\"tag-1\", \"tag-2\"],\n        description=(\n            \"A list of tags. Deployments will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include deployments without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.Deployment.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.Deployment.tags == [] if self.is_null_ else db.Deployment.tags != []\n            )\n        return filters\n
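For illustration, the two tag-filtering behaviors documented above (tag names are the documented example values):

from prefect.server.schemas.filters import DeploymentFilterTags

# Deployments whose tags are a superset of the listed tags...
tagged = DeploymentFilterTags(all_=["tag-1", "tag-2"])

# ...or only deployments that carry no tags at all.
untagged = DeploymentFilterTags(is_null_=True)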
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterTags.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterTags.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional, default None): a name for the subclass
include_fields (List[str], optional, default None): fields to include
exclude_fields (List[str], optional, default None): fields to exclude

Returns:

Type[B]: a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterWorkQueueName","title":"DeploymentFilterWorkQueueName","text":"

Bases: PrefectFilterBaseModel

Filter by Deployment.work_queue_name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class DeploymentFilterWorkQueueName(PrefectFilterBaseModel):\n\"\"\"Filter by `Deployment.work_queue_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of work queue names to include\",\n        example=[\"work_queue_1\", \"work_queue_2\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Deployment.work_queue_name.in_(self.any_))\n        return filters\n
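A one-line construction sketch with illustrative queue names; any_ acts as an allow list of work queue names.

from prefect.server.schemas.filters import DeploymentFilterWorkQueueName

# Include only deployments routed to either of these work queues.
queue_filter = DeploymentFilterWorkQueueName(any_=["work_queue_1", "work_queue_2"])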
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterWorkQueueName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.DeploymentFilterWorkQueueName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional, default None): a name for the subclass
include_fields (List[str], optional, default None): fields to include
exclude_fields (List[str], optional, default None): fields to exclude

Returns:

Type[B]: a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FilterSet","title":"FilterSet","text":"

Bases: PrefectBaseModel

A collection of filters for common objects

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FilterSet(PrefectBaseModel):\n\"\"\"A collection of filters for common objects\"\"\"\n\n    flows: FlowFilter = Field(\n        default_factory=FlowFilter, description=\"Filters that apply to flows\"\n    )\n    flow_runs: FlowRunFilter = Field(\n        default_factory=FlowRunFilter, description=\"Filters that apply to flow runs\"\n    )\n    task_runs: TaskRunFilter = Field(\n        default_factory=TaskRunFilter, description=\"Filters that apply to task runs\"\n    )\n    deployments: DeploymentFilter = Field(\n        default_factory=DeploymentFilter,\n        description=\"Filters that apply to deployments\",\n    )\n
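A hedged sketch of bundling several filters into one FilterSet; the tag and name values are illustrative. Groups left unset keep their empty defaults (via default_factory), so they impose no criteria.

from prefect.server.schemas.filters import (
    FilterSet,
    FlowFilter,
    FlowFilterTags,
    FlowRunFilter,
    FlowRunFilterName,
)

# Bundle flow and flow-run criteria together; task_runs and deployments are
# left at their empty defaults and impose no constraints.
filters = FilterSet(
    flows=FlowFilter(tags=FlowFilterTags(all_=["prod"])),
    flow_runs=FlowRunFilter(name=FlowRunFilterName(like_="nightly")),
)

# Serialize the whole set, e.g. to store it or send it in a request body.
payload = filters.json()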
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FilterSet.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FilterSet.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional, default None): a name for the subclass
include_fields (List[str], optional, default None): fields to include
exclude_fields (List[str], optional, default None): fields to exclude

Returns:

Type[B]: a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilter","title":"FlowFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter for flows. Only flows matching all criteria will be returned.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter for flows. Only flows matching all criteria will be returned.\"\"\"\n\n    id: Optional[FlowFilterId] = Field(\n        default=None, description=\"Filter criteria for `Flow.id`\"\n    )\n    name: Optional[FlowFilterName] = Field(\n        default=None, description=\"Filter criteria for `Flow.name`\"\n    )\n    tags: Optional[FlowFilterTags] = Field(\n        default=None, description=\"Filter criteria for `Flow.tags`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n\n        return filters\n
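A construction sketch with illustrative values; as _get_filter_list shows, only the criteria set to a non-None value contribute SQL clauses, and all supplied criteria must match.

from uuid import uuid4
from prefect.server.schemas.filters import (
    FlowFilter,
    FlowFilterId,
    FlowFilterName,
    FlowFilterTags,
)

flow_filter = FlowFilter(
    id=FlowFilterId(any_=[uuid4()]),        # illustrative flow ids
    name=FlowFilterName(like_="marvin"),    # case-insensitive partial match
    tags=FlowFilterTags(all_=["prod"]),     # tags must be a superset of this list
)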
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional, default None): a name for the subclass
include_fields (List[str], optional, default None): fields to include
exclude_fields (List[str], optional, default None): fields to exclude

Returns:

Type[B]: a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterId","title":"FlowFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by Flow.id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by `Flow.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Flow.id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional, default None): a name for the subclass
include_fields (List[str], optional, default None): fields to include
exclude_fields (List[str], optional, default None): fields to exclude

Returns:

Type[B]: a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterName","title":"FlowFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by Flow.name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `Flow.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of flow names to include\",\n        example=[\"my-flow-1\", \"my-flow-2\"],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        example=\"marvin\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Flow.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Flow.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
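Per the field descriptions above, any_ is an exact-name allow list and like_ is a case-insensitive partial match (rendered as an ILIKE %value% clause); the names below are illustrative.

from prefect.server.schemas.filters import FlowFilterName

# Exact names to include...
exact = FlowFilterName(any_=["my-flow-1", "my-flow-2"])

# ...or a case-insensitive partial match: 'marvin' matches 'marvin',
# 'sad-Marvin', and 'marvin-robot'.
fuzzy = FlowFilterName(like_="marvin")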
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional, default None): a name for the subclass
include_fields (List[str], optional, default None): fields to include
exclude_fields (List[str], optional, default None): fields to exclude

Returns:

Type[B]: a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterTags","title":"FlowFilterTags","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by Flow.tags.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowFilterTags(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `Flow.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        example=[\"tag-1\", \"tag-2\"],\n        description=(\n            \"A list of tags. Flows will be returned only if their tags are a superset\"\n            \" of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include flows without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.Flow.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(db.Flow.tags == [] if self.is_null_ else db.Flow.tags != [])\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterTags.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowFilterTags.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional, default None): a name for the subclass
include_fields (List[str], optional, default None): fields to include
exclude_fields (List[str], optional, default None): fields to exclude

Returns:

Type[B]: a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilter","title":"FlowRunFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter flow runs. Only flow runs matching all criteria will be returned

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter flow runs. Only flow runs matching all criteria will be returned\"\"\"\n\n    id: Optional[FlowRunFilterId] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.id`\"\n    )\n    name: Optional[FlowRunFilterName] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.name`\"\n    )\n    tags: Optional[FlowRunFilterTags] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.tags`\"\n    )\n    deployment_id: Optional[FlowRunFilterDeploymentId] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.deployment_id`\"\n    )\n    work_queue_name: Optional[FlowRunFilterWorkQueueName] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.work_queue_name\"\n    )\n    state: Optional[FlowRunFilterState] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.state`\"\n    )\n    flow_version: Optional[FlowRunFilterFlowVersion] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.flow_version`\"\n    )\n    start_time: Optional[FlowRunFilterStartTime] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.start_time`\"\n    )\n    expected_start_time: Optional[FlowRunFilterExpectedStartTime] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.expected_start_time`\"\n    )\n    next_scheduled_start_time: Optional[FlowRunFilterNextScheduledStartTime] = Field(\n        default=None,\n        description=\"Filter criteria for `FlowRun.next_scheduled_start_time`\",\n    )\n    parent_task_run_id: Optional[FlowRunFilterParentTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.parent_task_run_id`\"\n    )\n    idempotency_key: Optional[FlowRunFilterIdempotencyKey] = Field(\n        default=None, description=\"Filter criteria for `FlowRun.idempotency_key`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n        if self.deployment_id is not None:\n            filters.append(self.deployment_id.as_sql_filter(db))\n        if self.work_queue_name is not None:\n            filters.append(self.work_queue_name.as_sql_filter(db))\n        if self.flow_version is not None:\n            filters.append(self.flow_version.as_sql_filter(db))\n        if self.state is not None:\n            filters.append(self.state.as_sql_filter(db))\n        if self.start_time is not None:\n            filters.append(self.start_time.as_sql_filter(db))\n        if self.expected_start_time is not None:\n            filters.append(self.expected_start_time.as_sql_filter(db))\n        if self.next_scheduled_start_time is not None:\n            filters.append(self.next_scheduled_start_time.as_sql_filter(db))\n        if self.parent_task_run_id is not None:\n            filters.append(self.parent_task_run_id.as_sql_filter(db))\n        if self.idempotency_key is not None:\n            filters.append(self.idempotency_key.as_sql_filter(db))\n\n        return filters\n
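A sketch combining several of the flow-run criteria documented below; the ids, names, and dates are illustrative, and, as with FlowFilter, only the criteria that are set are applied.

from datetime import datetime, timezone
from uuid import uuid4
from prefect.server.schemas.filters import (
    FlowRunFilter,
    FlowRunFilterDeploymentId,
    FlowRunFilterName,
    FlowRunFilterStartTime,
)

# Runs of one deployment, named like "nightly", that started in 2023 or later.
run_filter = FlowRunFilter(
    deployment_id=FlowRunFilterDeploymentId(any_=[uuid4()]),
    name=FlowRunFilterName(like_="nightly"),
    start_time=FlowRunFilterStartTime(
        after_=datetime(2023, 1, 1, tzinfo=timezone.utc)
    ),
)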
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional, default None): a name for the subclass
include_fields (List[str], optional, default None): fields to include
exclude_fields (List[str], optional, default None): fields to exclude

Returns:

Type[B]: a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterDeploymentId","title":"FlowRunFilterDeploymentId","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by FlowRun.deployment_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterDeploymentId(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `FlowRun.deployment_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run deployment ids to include\"\n    )\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flow runs without deployment ids\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.deployment_id.in_(self.any_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.deployment_id.is_(None)\n                if self.is_null_\n                else db.FlowRun.deployment_id.is_not(None)\n            )\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterDeploymentId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterDeploymentId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional, default None): a name for the subclass
include_fields (List[str], optional, default None): fields to include
exclude_fields (List[str], optional, default None): fields to exclude

Returns:

Type[B]: a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterExpectedStartTime","title":"FlowRunFilterExpectedStartTime","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRun.expected_start_time.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterExpectedStartTime(PrefectFilterBaseModel):\n\"\"\"Filter by `FlowRun.expected_start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs scheduled to start at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs scheduled to start at or after this time\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.FlowRun.expected_start_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.FlowRun.expected_start_time >= self.after_)\n        return filters\n
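A small sketch of a scheduling-window filter with illustrative times; the comparisons in the source above are <= and >=, so both bounds are inclusive.

from datetime import datetime, timedelta, timezone
from prefect.server.schemas.filters import FlowRunFilterExpectedStartTime

now = datetime.now(timezone.utc)

# Runs expected to start within the next six hours (bounds inclusive).
upcoming = FlowRunFilterExpectedStartTime(
    after_=now,
    before_=now + timedelta(hours=6),
)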
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterExpectedStartTime.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterExpectedStartTime.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional, default None): a name for the subclass
include_fields (List[str], optional, default None): fields to include
exclude_fields (List[str], optional, default None): fields to exclude

Returns:

Type[B]: a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterFlowVersion","title":"FlowRunFilterFlowVersion","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRun.flow_version.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterFlowVersion(PrefectFilterBaseModel):\n\"\"\"Filter by `FlowRun.flow_version`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run flow_versions to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.flow_version.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterFlowVersion.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterFlowVersion.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional, default None): a name for the subclass
include_fields (List[str], optional, default None): fields to include
exclude_fields (List[str], optional, default None): fields to exclude

Returns:

Type[B]: a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterId","title":"FlowRunFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRun.id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by `FlowRun.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run ids to include\"\n    )\n    not_any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run ids to exclude\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.id.in_(self.any_))\n        if self.not_any_ is not None:\n            filters.append(db.FlowRun.id.not_in(self.not_any_))\n        return filters\n
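A construction sketch with randomly generated, illustrative ids; any_ is an allow list, not_any_ an exclusion list, and the two can be combined.

from uuid import uuid4
from prefect.server.schemas.filters import FlowRunFilterId

keep = [uuid4(), uuid4()]   # illustrative run ids to include
drop = [uuid4()]            # illustrative run ids to exclude

id_filter = FlowRunFilterId(any_=keep, not_any_=drop)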
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional, default None): a name for the subclass
include_fields (List[str], optional, default None): fields to include
exclude_fields (List[str], optional, default None): fields to exclude

Returns:

Type[B]: a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterIdempotencyKey","title":"FlowRunFilterIdempotencyKey","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRun.idempotency_key.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterIdempotencyKey(PrefectFilterBaseModel):\n\"\"\"Filter by FlowRun.idempotency_key.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run idempotency keys to include\"\n    )\n    not_any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run idempotency keys to exclude\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.idempotency_key.in_(self.any_))\n        if self.not_any_ is not None:\n            filters.append(db.FlowRun.idempotency_key.not_in(self.not_any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterIdempotencyKey.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterIdempotencyKey.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional, default None): a name for the subclass
include_fields (List[str], optional, default None): fields to include
exclude_fields (List[str], optional, default None): fields to exclude

Returns:

Type[B]: a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterName","title":"FlowRunFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRun.name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `FlowRun.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of flow run names to include\",\n        example=[\"my-flow-run-1\", \"my-flow-run-2\"],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        example=\"marvin\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.FlowRun.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional, default None): a name for the subclass
include_fields (List[str], optional, default None): fields to include
exclude_fields (List[str], optional, default None): fields to exclude

Returns:

Type[B]: a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterNextScheduledStartTime","title":"FlowRunFilterNextScheduledStartTime","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRun.next_scheduled_start_time.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterNextScheduledStartTime(PrefectFilterBaseModel):\n\"\"\"Filter by `FlowRun.next_scheduled_start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include flow runs with a next_scheduled_start_time or before this\"\n            \" time\"\n        ),\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include flow runs with a next_scheduled_start_time at or after this\"\n            \" time\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.FlowRun.next_scheduled_start_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.FlowRun.next_scheduled_start_time >= self.after_)\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterNextScheduledStartTime.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterNextScheduledStartTime.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional, default None): a name for the subclass
include_fields (List[str], optional, default None): fields to include
exclude_fields (List[str], optional, default None): fields to exclude

Returns:

Type[B]: a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentTaskRunId","title":"FlowRunFilterParentTaskRunId","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by FlowRun.parent_task_run_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterParentTaskRunId(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `FlowRun.parent_task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run parent_task_run_ids to include\"\n    )\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flow runs without parent_task_run_id\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.parent_task_run_id.in_(self.any_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.parent_task_run_id.is_(None)\n                if self.is_null_\n                else db.FlowRun.parent_task_run_id.is_not(None)\n            )\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentTaskRunId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterParentTaskRunId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional, default None): a name for the subclass
include_fields (List[str], optional, default None): fields to include
exclude_fields (List[str], optional, default None): fields to exclude

Returns:

Type[B]: a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStartTime","title":"FlowRunFilterStartTime","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRun.start_time.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterStartTime(PrefectFilterBaseModel):\n\"\"\"Filter by `FlowRun.start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs starting at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include flow runs starting at or after this time\",\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only return flow runs without a start time\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.FlowRun.start_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.FlowRun.start_time >= self.after_)\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.start_time.is_(None)\n                if self.is_null_\n                else db.FlowRun.start_time.is_not(None)\n            )\n        return filters\n
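
A small sketch of a time-window filter (assumes pendulum, which Prefect uses for its DateTimeTZ fields):

import pendulum\nfrom prefect.server.schemas.filters import FlowRunFilterStartTime\n\n# flow runs that started within the last 24 hours\nstart_time_filter = FlowRunFilterStartTime(\n    after_=pendulum.now(\"UTC\").subtract(hours=24),\n    is_null_=False,\n)\n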
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStartTime.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStartTime.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterState","title":"FlowRunFilterState","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by FlowRun.state_type and FlowRun.state_name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterState(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `FlowRun.state_type` and `FlowRun.state_name`.\"\"\"\n\n    type: Optional[FlowRunFilterStateType]\n    name: Optional[FlowRunFilterStateName]\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.type is not None:\n            filters.extend(self.type._get_filter_list(db))\n        if self.name is not None:\n            filters.extend(self.name._get_filter_list(db))\n        return filters\n
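
A sketch combining the nested state-type and state-name criteria (values are illustrative):

from prefect.server.schemas.filters import (\n    FlowRunFilterState,\n    FlowRunFilterStateName,\n    FlowRunFilterStateType,\n)\nfrom prefect.server.schemas.states import StateType\n\n# match flow runs that are completed or failed\nstate_filter = FlowRunFilterState(\n    type=FlowRunFilterStateType(any_=[StateType.COMPLETED, StateType.FAILED]),\n    name=FlowRunFilterStateName(any_=[\"Completed\", \"Failed\"]),\n)\n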
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterState.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterState.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateName","title":"FlowRunFilterStateName","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRun.state_name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterStateName(PrefectFilterBaseModel):\n\"\"\"Filter by `FlowRun.state_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of flow run state names to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.state_name.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateType","title":"FlowRunFilterStateType","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRun.state_type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterStateType(PrefectFilterBaseModel):\n\"\"\"Filter by `FlowRun.state_type`.\"\"\"\n\n    any_: Optional[List[schemas.states.StateType]] = Field(\n        default=None, description=\"A list of flow run state types to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.state_type.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateType.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterStateType.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterTags","title":"FlowRunFilterTags","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by FlowRun.tags.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterTags(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `FlowRun.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        example=[\"tag-1\", \"tag-2\"],\n        description=(\n            \"A list of tags. Flow runs will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include flow runs without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.FlowRun.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.tags == [] if self.is_null_ else db.FlowRun.tags != []\n            )\n        return filters\n
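
A sketch of a tag filter; note that all_ is a superset match, so a run must carry every listed tag (tags below are placeholders):

from prefect.server.schemas.filters import FlowRunFilterTags\n\n# only flow runs tagged with both \"etl\" and \"prod\"\ntag_filter = FlowRunFilterTags(all_=[\"etl\", \"prod\"])\n\n# or: only flow runs with no tags at all\nuntagged_filter = FlowRunFilterTags(is_null_=True)\n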
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterTags.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterTags.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterWorkQueueName","title":"FlowRunFilterWorkQueueName","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by FlowRun.work_queue_name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunFilterWorkQueueName(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `FlowRun.work_queue_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of work queue names to include\",\n        example=[\"work_queue_1\", \"work_queue_2\"],\n    )\n    is_null_: Optional[bool] = Field(\n        default=None,\n        description=\"If true, only include flow runs without work queue names\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.FlowRun.work_queue_name.in_(self.any_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.FlowRun.work_queue_name.is_(None)\n                if self.is_null_\n                else db.FlowRun.work_queue_name.is_not(None)\n            )\n        return filters\n
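
A sketch using the example queue names from the field description:

from prefect.server.schemas.filters import FlowRunFilterWorkQueueName\n\n# limit results to two named work queues\nqueue_filter = FlowRunFilterWorkQueueName(any_=[\"work_queue_1\", \"work_queue_2\"])\n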
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterWorkQueueName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunFilterWorkQueueName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilter","title":"FlowRunNotificationPolicyFilter","text":"

Bases: PrefectFilterBaseModel

Filter FlowRunNotificationPolicies.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunNotificationPolicyFilter(PrefectFilterBaseModel):\n\"\"\"Filter FlowRunNotificationPolicies.\"\"\"\n\n    is_active: Optional[FlowRunNotificationPolicyFilterIsActive] = Field(\n        default=FlowRunNotificationPolicyFilterIsActive(eq_=False),\n        description=\"Filter criteria for `FlowRunNotificationPolicy.is_active`. \",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.is_active is not None:\n            filters.append(self.is_active.as_sql_filter(db))\n\n        return filters\n
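
A sketch showing the nested is_active criterion; note that this filter defaults to inactive policies (eq_=False) unless overridden:

from prefect.server.schemas.filters import (\n    FlowRunNotificationPolicyFilter,\n    FlowRunNotificationPolicyFilterIsActive,\n)\n\n# select only active notification policies\npolicy_filter = FlowRunNotificationPolicyFilter(\n    is_active=FlowRunNotificationPolicyFilterIsActive(eq_=True)\n)\n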
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilterIsActive","title":"FlowRunNotificationPolicyFilterIsActive","text":"

Bases: PrefectFilterBaseModel

Filter by FlowRunNotificationPolicy.is_active.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class FlowRunNotificationPolicyFilterIsActive(PrefectFilterBaseModel):\n\"\"\"Filter by `FlowRunNotificationPolicy.is_active`.\"\"\"\n\n    eq_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"Filter notification policies for only those that are or are not active.\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.eq_ is not None:\n            filters.append(db.FlowRunNotificationPolicy.is_active.is_(self.eq_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilterIsActive.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.FlowRunNotificationPolicyFilterIsActive.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilter","title":"LogFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter logs. Only logs matching all criteria will be returned.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class LogFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter logs. Only logs matching all criteria will be returned\"\"\"\n\n    level: Optional[LogFilterLevel] = Field(\n        default=None, description=\"Filter criteria for `Log.level`\"\n    )\n    timestamp: Optional[LogFilterTimestamp] = Field(\n        default=None, description=\"Filter criteria for `Log.timestamp`\"\n    )\n    flow_run_id: Optional[LogFilterFlowRunId] = Field(\n        default=None, description=\"Filter criteria for `Log.flow_run_id`\"\n    )\n    task_run_id: Optional[LogFilterTaskRunId] = Field(\n        default=None, description=\"Filter criteria for `Log.task_run_id`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.level is not None:\n            filters.append(self.level.as_sql_filter(db))\n        if self.timestamp is not None:\n            filters.append(self.timestamp.as_sql_filter(db))\n        if self.flow_run_id is not None:\n            filters.append(self.flow_run_id.as_sql_filter(db))\n        if self.task_run_id is not None:\n            filters.append(self.task_run_id.as_sql_filter(db))\n\n        return filters\n
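
A composite sketch matching warning-or-higher logs for specific flow runs (IDs are placeholders):

import logging\nfrom uuid import uuid4\nfrom prefect.server.schemas.filters import (\n    LogFilter,\n    LogFilterFlowRunId,\n    LogFilterLevel,\n)\n\nlog_filter = LogFilter(\n    level=LogFilterLevel(ge_=logging.WARNING),\n    flow_run_id=LogFilterFlowRunId(any_=[uuid4()]),\n)\n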
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterFlowRunId","title":"LogFilterFlowRunId","text":"

Bases: PrefectFilterBaseModel

Filter by Log.flow_run_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class LogFilterFlowRunId(PrefectFilterBaseModel):\n\"\"\"Filter by `Log.flow_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of flow run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Log.flow_run_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterFlowRunId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterFlowRunId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterLevel","title":"LogFilterLevel","text":"

Bases: PrefectFilterBaseModel

Filter by Log.level.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class LogFilterLevel(PrefectFilterBaseModel):\n\"\"\"Filter by `Log.level`.\"\"\"\n\n    ge_: Optional[int] = Field(\n        default=None,\n        description=\"Include logs with a level greater than or equal to this level\",\n        example=20,\n    )\n\n    le_: Optional[int] = Field(\n        default=None,\n        description=\"Include logs with a level less than or equal to this level\",\n        example=50,\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.ge_ is not None:\n            filters.append(db.Log.level >= self.ge_)\n        if self.le_ is not None:\n            filters.append(db.Log.level <= self.le_)\n        return filters\n
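
A sketch using the standard logging levels; the example values in the field descriptions (20 and 50) correspond to logging.INFO and logging.CRITICAL:

import logging\nfrom prefect.server.schemas.filters import LogFilterLevel\n\n# logs between INFO (20) and CRITICAL (50), inclusive\nlevel_filter = LogFilterLevel(ge_=logging.INFO, le_=logging.CRITICAL)\n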
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterLevel.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterLevel.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterName","title":"LogFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by Log.name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class LogFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `Log.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of log names to include\",\n        example=[\"prefect.logger.flow_runs\", \"prefect.logger.task_runs\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Log.name.in_(self.any_))\n        return filters\n
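
A sketch using the logger names given in the field's example:

from prefect.server.schemas.filters import LogFilterName\n\n# only logs emitted by the flow-run and task-run loggers\nname_filter = LogFilterName(\n    any_=[\"prefect.logger.flow_runs\", \"prefect.logger.task_runs\"]\n)\n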
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTaskRunId","title":"LogFilterTaskRunId","text":"

Bases: PrefectFilterBaseModel

Filter by Log.task_run_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class LogFilterTaskRunId(PrefectFilterBaseModel):\n\"\"\"Filter by `Log.task_run_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run IDs to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Log.task_run_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTaskRunId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTaskRunId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTimestamp","title":"LogFilterTimestamp","text":"

Bases: PrefectFilterBaseModel

Filter by Log.timestamp.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class LogFilterTimestamp(PrefectFilterBaseModel):\n\"\"\"Filter by `Log.timestamp`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include logs with a timestamp at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include logs with a timestamp at or after this time\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.Log.timestamp <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.Log.timestamp >= self.after_)\n        return filters\n
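
A sketch of a bounded time window (assumes pendulum for timezone-aware datetimes):

import pendulum\nfrom prefect.server.schemas.filters import LogFilterTimestamp\n\n# logs from the previous hour\nnow = pendulum.now(\"UTC\")\ntimestamp_filter = LogFilterTimestamp(after_=now.subtract(hours=1), before_=now)\n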
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTimestamp.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.LogFilterTimestamp.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.Operator","title":"Operator","text":"

Bases: AutoEnum

Operators for combining filter criteria.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class Operator(AutoEnum):\n\"\"\"Operators for combining filter criteria.\"\"\"\n\n    and_ = AutoEnum.auto()\n    or_ = AutoEnum.auto()\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.Operator.auto","title":"auto staticmethod","text":"

Exposes enum.auto() to avoid requiring a second import to use AutoEnum

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
@staticmethod\ndef auto():\n\"\"\"\n    Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n    \"\"\"\n    return auto()\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectFilterBaseModel","title":"PrefectFilterBaseModel","text":"

Bases: PrefectBaseModel

Base model for Prefect filters

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class PrefectFilterBaseModel(PrefectBaseModel):\n\"\"\"Base model for Prefect filters\"\"\"\n\n    class Config:\n        extra = \"forbid\"\n\n    def as_sql_filter(self, db: \"PrefectDBInterface\") -> \"BooleanClauseList\":\n\"\"\"Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter.\"\"\"\n        filters = self._get_filter_list(db)\n        if not filters:\n            return True\n        return sa.and_(*filters)\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n\"\"\"Return a list of boolean filter statements based on filter parameters\"\"\"\n        raise NotImplementedError(\"_get_filter_list must be implemented\")\n
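
A sketch of the subclassing contract: concrete filters only implement _get_filter_list and inherit as_sql_filter. The model and column below are hypothetical, not part of Prefect:

from typing import List, Optional\nfrom pydantic import Field\nfrom prefect.server.schemas.filters import PrefectFilterBaseModel\n\nclass MyWidgetFilterName(PrefectFilterBaseModel):\n    \"\"\"Hypothetical filter for an imaginary Widget.name column.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"Widget names to include\"\n    )\n\n    def _get_filter_list(self, db) -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Widget.name.in_(self.any_))  # db.Widget is hypothetical\n        return filters\n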
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectFilterBaseModel.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectFilterBaseModel.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectOperatorFilterBaseModel","title":"PrefectOperatorFilterBaseModel","text":"

Bases: PrefectFilterBaseModel

Base model for Prefect filters that combines criteria with a user-provided operator

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class PrefectOperatorFilterBaseModel(PrefectFilterBaseModel):\n\"\"\"Base model for Prefect filters that combines criteria with a user-provided operator\"\"\"\n\n    operator: Operator = Field(\n        default=Operator.and_,\n        description=\"Operator for combining filter criteria. Defaults to 'and_'.\",\n    )\n\n    def as_sql_filter(self, db: \"PrefectDBInterface\") -> \"BooleanClauseList\":\n        filters = self._get_filter_list(db)\n        if not filters:\n            return True\n        return sa.and_(*filters) if self.operator == Operator.and_ else sa.or_(*filters)\n
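
A sketch showing how the operator field changes the way criteria combine, using a concrete subclass documented on this page (tags are placeholders):

from prefect.server.schemas.filters import FlowRunFilterTags, Operator\n\n# with or_, a flow run matches if it has both tags OR has no tags at all\ntag_filter = FlowRunFilterTags(\n    operator=Operator.or_,\n    all_=[\"etl\", \"prod\"],\n    is_null_=True,\n)\n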
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectOperatorFilterBaseModel.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.PrefectOperatorFilterBaseModel.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

name (str, optional): a name for the subclass. Default: None

include_fields (List[str], optional): fields to include. Default: None

exclude_fields (List[str], optional): fields to exclude. Default: None

Returns:

BaseModel (Type[B]): a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilter","title":"TaskRunFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter task runs. Only task runs matching all criteria will be returned.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class TaskRunFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter task runs. Only task runs matching all criteria will be returned\"\"\"\n\n    id: Optional[TaskRunFilterId] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.id`\"\n    )\n    name: Optional[TaskRunFilterName] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.name`\"\n    )\n    tags: Optional[TaskRunFilterTags] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.tags`\"\n    )\n    state: Optional[TaskRunFilterState] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.state`\"\n    )\n    start_time: Optional[TaskRunFilterStartTime] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.start_time`\"\n    )\n    subflow_runs: Optional[TaskRunFilterSubFlowRuns] = Field(\n        default=None, description=\"Filter criteria for `TaskRun.subflow_run`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n        if self.state is not None:\n            filters.append(self.state.as_sql_filter(db))\n        if self.start_time is not None:\n            filters.append(self.start_time.as_sql_filter(db))\n        if self.subflow_runs is not None:\n            filters.append(self.subflow_runs.as_sql_filter(db))\n\n        return filters\n
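
A composite sketch restricting task runs to specific ids (placeholders), using the nested TaskRunFilterId criterion documented below:

from uuid import uuid4\nfrom prefect.server.schemas.filters import TaskRunFilter, TaskRunFilterId\n\n# only these specific task runs\ntask_run_filter = TaskRunFilter(id=TaskRunFilterId(any_=[uuid4(), uuid4()]))\n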
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | a name for the subclass | `None` |
| `include_fields` | `List[str]` | fields to include | `None` |
| `exclude_fields` | `List[str]` | fields to exclude | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseModel` | `Type[B]` | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterId","title":"TaskRunFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by TaskRun.id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class TaskRunFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by `TaskRun.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of task run ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.TaskRun.id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | a name for the subclass | `None` |
| `include_fields` | `List[str]` | fields to include | `None` |
| `exclude_fields` | `List[str]` | fields to exclude | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseModel` | `Type[B]` | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterName","title":"TaskRunFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by TaskRun.name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class TaskRunFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `TaskRun.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of task run names to include\",\n        example=[\"my-task-run-1\", \"my-task-run-2\"],\n    )\n\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A case-insensitive partial match. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', 'sad-Marvin', and 'marvin-robot'.\"\n        ),\n        example=\"marvin\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.TaskRun.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.TaskRun.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
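A short sketch of the two matching modes this filter offers (the values are illustrative):

```python
from prefect.server.schemas.filters import TaskRunFilterName

# Case-insensitive partial match: "marvin" also matches "sad-Marvin".
partial = TaskRunFilterName(like_="marvin")

# Exact membership match against a list of names.
exact = TaskRunFilterName(any_=["my-task-run-1", "my-task-run-2"])
```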
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | a name for the subclass | `None` |
| `include_fields` | `List[str]` | fields to include | `None` |
| `exclude_fields` | `List[str]` | fields to exclude | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseModel` | `Type[B]` | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStartTime","title":"TaskRunFilterStartTime","text":"

Bases: PrefectFilterBaseModel

Filter by TaskRun.start_time.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class TaskRunFilterStartTime(PrefectFilterBaseModel):\n\"\"\"Filter by `TaskRun.start_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include task runs starting at or before this time\",\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=\"Only include task runs starting at or after this time\",\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only return task runs without a start time\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.TaskRun.start_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.TaskRun.start_time >= self.after_)\n        if self.is_null_ is not None:\n            filters.append(\n                db.TaskRun.start_time.is_(None)\n                if self.is_null_\n                else db.TaskRun.start_time.is_not(None)\n            )\n        return filters\n
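A sketch of the three time criteria, assuming `pendulum` (which Prefect uses for timezone-aware datetimes) is available in your environment:

```python
import pendulum

from prefect.server.schemas.filters import TaskRunFilterStartTime

now = pendulum.now("UTC")

# Task runs that started within the last hour.
recent = TaskRunFilterStartTime(after_=now.subtract(hours=1), before_=now)

# Task runs with no start time recorded at all.
never_started = TaskRunFilterStartTime(is_null_=True)
```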
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStartTime.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStartTime.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | a name for the subclass | `None` |
| `include_fields` | `List[str]` | fields to include | `None` |
| `exclude_fields` | `List[str]` | fields to exclude | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseModel` | `Type[B]` | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterState","title":"TaskRunFilterState","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by `TaskRun.state_type` and `TaskRun.state_name`.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class TaskRunFilterState(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `TaskRun.type` and `TaskRun.name`.\"\"\"\n\n    type: Optional[TaskRunFilterStateType]\n    name: Optional[TaskRunFilterStateName]\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.type is not None:\n            filters.extend(self.type._get_filter_list(db))\n        if self.name is not None:\n            filters.extend(self.name._get_filter_list(db))\n        return filters\n
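A sketch of filtering on the nested state criteria (the state name "Retrying" is an illustrative value):

```python
from prefect.server.schemas.filters import (
    TaskRunFilterState,
    TaskRunFilterStateName,
    TaskRunFilterStateType,
)
from prefect.server.schemas.states import StateType

# Only completed or failed task runs, matched by state type.
by_type = TaskRunFilterState(
    type=TaskRunFilterStateType(any_=[StateType.COMPLETED, StateType.FAILED]),
)

# Only task runs whose state name is "Retrying".
by_name = TaskRunFilterState(name=TaskRunFilterStateName(any_=["Retrying"]))
```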
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterState.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterState.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | a name for the subclass | `None` |
| `include_fields` | `List[str]` | fields to include | `None` |
| `exclude_fields` | `List[str]` | fields to exclude | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseModel` | `Type[B]` | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateName","title":"TaskRunFilterStateName","text":"

Bases: PrefectFilterBaseModel

Filter by TaskRun.state_name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class TaskRunFilterStateName(PrefectFilterBaseModel):\n\"\"\"Filter by `TaskRun.state_name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of task run state names to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.TaskRun.state_name.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | a name for the subclass | `None` |
| `include_fields` | `List[str]` | fields to include | `None` |
| `exclude_fields` | `List[str]` | fields to exclude | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseModel` | `Type[B]` | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateType","title":"TaskRunFilterStateType","text":"

Bases: PrefectFilterBaseModel

Filter by TaskRun.state_type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class TaskRunFilterStateType(PrefectFilterBaseModel):\n\"\"\"Filter by `TaskRun.state_type`.\"\"\"\n\n    any_: Optional[List[schemas.states.StateType]] = Field(\n        default=None, description=\"A list of task run state types to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.TaskRun.state_type.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateType.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterStateType.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | a name for the subclass | `None` |
| `include_fields` | `List[str]` | fields to include | `None` |
| `exclude_fields` | `List[str]` | fields to exclude | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseModel` | `Type[B]` | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterSubFlowRuns","title":"TaskRunFilterSubFlowRuns","text":"

Bases: PrefectFilterBaseModel

Filter by TaskRun.subflow_run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class TaskRunFilterSubFlowRuns(PrefectFilterBaseModel):\n\"\"\"Filter by `TaskRun.subflow_run`.\"\"\"\n\n    exists_: Optional[bool] = Field(\n        default=None,\n        description=(\n            \"If true, only include task runs that are subflow run parents; if false,\"\n            \" exclude parent task runs\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.exists_ is True:\n            filters.append(db.TaskRun.subflow_run.has())\n        elif self.exists_ is False:\n            filters.append(sa.not_(db.TaskRun.subflow_run.has()))\n        return filters\n
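A sketch of the `exists_` flag in both directions, nested inside a `TaskRunFilter`:

```python
from prefect.server.schemas.filters import TaskRunFilter, TaskRunFilterSubFlowRuns

# Only task runs that are subflow-run parents.
only_parents = TaskRunFilter(subflow_runs=TaskRunFilterSubFlowRuns(exists_=True))

# Exclude those parent task runs instead.
no_parents = TaskRunFilter(subflow_runs=TaskRunFilterSubFlowRuns(exists_=False))
```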
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterSubFlowRuns.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterSubFlowRuns.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | a name for the subclass | `None` |
| `include_fields` | `List[str]` | fields to include | `None` |
| `exclude_fields` | `List[str]` | fields to exclude | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseModel` | `Type[B]` | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterTags","title":"TaskRunFilterTags","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by TaskRun.tags.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class TaskRunFilterTags(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `TaskRun.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        example=[\"tag-1\", \"tag-2\"],\n        description=(\n            \"A list of tags. Task runs will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include task runs without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.TaskRun.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.TaskRun.tags == [] if self.is_null_ else db.TaskRun.tags != []\n            )\n        return filters\n
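A sketch of both tag criteria (the tag values are illustrative):

```python
from prefect.server.schemas.filters import TaskRunFilterTags

# Task runs whose tags are a superset of ["prod", "daily"].
tagged = TaskRunFilterTags(all_=["prod", "daily"])

# Task runs that carry no tags at all.
untagged = TaskRunFilterTags(is_null_=True)
```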
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterTags.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.TaskRunFilterTags.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | a name for the subclass | `None` |
| `include_fields` | `List[str]` | fields to include | `None` |
| `exclude_fields` | `List[str]` | fields to exclude | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseModel` | `Type[B]` | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilter","title":"VariableFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter variables. Only variables matching all criteria will be returned

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class VariableFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter variables. Only variables matching all criteria will be returned\"\"\"\n\n    id: Optional[VariableFilterId] = Field(\n        default=None, description=\"Filter criteria for `Variable.id`\"\n    )\n    name: Optional[VariableFilterName] = Field(\n        default=None, description=\"Filter criteria for `Variable.name`\"\n    )\n    value: Optional[VariableFilterValue] = Field(\n        default=None, description=\"Filter criteria for `Variable.value`\"\n    )\n    tags: Optional[VariableFilterTags] = Field(\n        default=None, description=\"Filter criteria for `Variable.tags`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.value is not None:\n            filters.append(self.value.as_sql_filter(db))\n        if self.tags is not None:\n            filters.append(self.tags.as_sql_filter(db))\n        return filters\n
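A sketch of composing the variable filter from its sub-filters (the names, value, and tag are illustrative; `like_` accepts SQL wildcards such as `%` and `_`):

```python
from prefect.server.schemas.filters import (
    VariableFilter,
    VariableFilterName,
    VariableFilterTags,
    VariableFilterValue,
)

# Variables named like "db_%", valued like "postgres", and tagged "prod".
variable_filter = VariableFilter(
    name=VariableFilterName(like_="db_%"),
    value=VariableFilterValue(like_="postgres"),
    tags=VariableFilterTags(all_=["prod"]),
)
```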
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | a name for the subclass | `None` |
| `include_fields` | `List[str]` | fields to include | `None` |
| `exclude_fields` | `List[str]` | fields to exclude | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseModel` | `Type[B]` | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterId","title":"VariableFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by Variable.id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class VariableFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by `Variable.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of variable ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Variable.id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | a name for the subclass | `None` |
| `include_fields` | `List[str]` | fields to include | `None` |
| `exclude_fields` | `List[str]` | fields to exclude | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseModel` | `Type[B]` | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterName","title":"VariableFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by Variable.name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class VariableFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `Variable.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of variables names to include\"\n    )\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match variable names against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        example=\"my_variable_%\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Variable.name.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Variable.name.ilike(f\"%{self.like_}%\"))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | a name for the subclass | `None` |
| `include_fields` | `List[str]` | fields to include | `None` |
| `exclude_fields` | `List[str]` | fields to exclude | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseModel` | `Type[B]` | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterTags","title":"VariableFilterTags","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by Variable.tags.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class VariableFilterTags(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `Variable.tags`.\"\"\"\n\n    all_: Optional[List[str]] = Field(\n        default=None,\n        example=[\"tag-1\", \"tag-2\"],\n        description=(\n            \"A list of tags. Variables will be returned only if their tags are a\"\n            \" superset of the list\"\n        ),\n    )\n    is_null_: Optional[bool] = Field(\n        default=None, description=\"If true, only include Variables without tags\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        from prefect.server.utilities.database import json_has_all_keys\n\n        filters = []\n        if self.all_ is not None:\n            filters.append(json_has_all_keys(db.Variable.tags, self.all_))\n        if self.is_null_ is not None:\n            filters.append(\n                db.Variable.tags == [] if self.is_null_ else db.Variable.tags != []\n            )\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterTags.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterTags.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | a name for the subclass | `None` |
| `include_fields` | `List[str]` | fields to include | `None` |
| `exclude_fields` | `List[str]` | fields to exclude | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseModel` | `Type[B]` | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterValue","title":"VariableFilterValue","text":"

Bases: PrefectFilterBaseModel

Filter by Variable.value.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class VariableFilterValue(PrefectFilterBaseModel):\n\"\"\"Filter by `Variable.value`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of variables value to include\"\n    )\n    like_: Optional[str] = Field(\n        default=None,\n        description=(\n            \"A string to match variable value against. This can include \"\n            \"SQL wildcard characters like `%` and `_`.\"\n        ),\n        example=\"my-value-%\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Variable.value.in_(self.any_))\n        if self.like_ is not None:\n            filters.append(db.Variable.value.ilike(f\"%{self.like_}%\"))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterValue.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.VariableFilterValue.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | a name for the subclass | `None` |
| `include_fields` | `List[str]` | fields to include | `None` |
| `exclude_fields` | `List[str]` | fields to exclude | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseModel` | `Type[B]` | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilter","title":"WorkPoolFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter work pools. Only work pools matching all criteria will be returned

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkPoolFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter work pools. Only work pools matching all criteria will be returned\"\"\"\n\n    id: Optional[WorkPoolFilterId] = Field(\n        default=None, description=\"Filter criteria for `WorkPool.id`\"\n    )\n    name: Optional[WorkPoolFilterName] = Field(\n        default=None, description=\"Filter criteria for `WorkPool.name`\"\n    )\n    type: Optional[WorkPoolFilterType] = Field(\n        default=None, description=\"Filter criteria for `WorkPool.type`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n        if self.type is not None:\n            filters.append(self.type.as_sql_filter(db))\n\n        return filters\n
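A sketch of the work pool filter (the pool names and pool type are illustrative values):

```python
from prefect.server.schemas.filters import (
    WorkPoolFilter,
    WorkPoolFilterName,
    WorkPoolFilterType,
)

# Only the named pools, and only if they are of the "kubernetes" type.
work_pool_filter = WorkPoolFilter(
    name=WorkPoolFilterName(any_=["kubernetes-prod", "kubernetes-staging"]),
    type=WorkPoolFilterType(any_=["kubernetes"]),
)
```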
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | a name for the subclass | `None` |
| `include_fields` | `List[str]` | fields to include | `None` |
| `exclude_fields` | `List[str]` | fields to exclude | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseModel` | `Type[B]` | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterId","title":"WorkPoolFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by WorkPool.id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkPoolFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by `WorkPool.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of work pool ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkPool.id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | a name for the subclass | `None` |
| `include_fields` | `List[str]` | fields to include | `None` |
| `exclude_fields` | `List[str]` | fields to exclude | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseModel` | `Type[B]` | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterName","title":"WorkPoolFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by WorkPool.name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkPoolFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `WorkPool.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of work pool names to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkPool.name.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | a name for the subclass | `None` |
| `include_fields` | `List[str]` | fields to include | `None` |
| `exclude_fields` | `List[str]` | fields to exclude | `None` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `BaseModel` | `Type[B]` | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterType","title":"WorkPoolFilterType","text":"

Bases: PrefectFilterBaseModel

Filter by WorkPool.type.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkPoolFilterType(PrefectFilterBaseModel):\n\"\"\"Filter by `WorkPool.type`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None, description=\"A list of work pool types to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkPool.type.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterType.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkPoolFilterType.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilter","title":"WorkQueueFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter work queues. Only work queues matching all criteria will be returned.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkQueueFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter work queues. Only work queues matching all criteria will be\n    returned\"\"\"\n\n    id: Optional[WorkQueueFilterId] = Field(\n        default=None, description=\"Filter criteria for `WorkQueue.id`\"\n    )\n\n    name: Optional[WorkQueueFilterName] = Field(\n        default=None, description=\"Filter criteria for `WorkQueue.name`\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.id is not None:\n            filters.append(self.id.as_sql_filter(db))\n        if self.name is not None:\n            filters.append(self.name.as_sql_filter(db))\n\n        return filters\n
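
A minimal sketch of composing this filter from its sub-filters; the id and name values are placeholders:

from uuid import uuid4

from prefect.server.schemas.filters import (
    WorkQueueFilter,
    WorkQueueFilterId,
    WorkQueueFilterName,
)

# both criteria must match for a work queue to be returned
wq_filter = WorkQueueFilter(
    id=WorkQueueFilterId(any_=[uuid4()]),
    name=WorkQueueFilterName(any_=["default"]),
)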
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterId","title":"WorkQueueFilterId","text":"

Bases: PrefectFilterBaseModel

Filter by WorkQueue.id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkQueueFilterId(PrefectFilterBaseModel):\n\"\"\"Filter by `WorkQueue.id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None,\n        description=\"A list of work queue ids to include\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkQueue.id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterName","title":"WorkQueueFilterName","text":"

Bases: PrefectFilterBaseModel

Filter by WorkQueue.name.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkQueueFilterName(PrefectFilterBaseModel):\n\"\"\"Filter by `WorkQueue.name`.\"\"\"\n\n    any_: Optional[List[str]] = Field(\n        default=None,\n        description=\"A list of work queue names to include\",\n        example=[\"wq-1\", \"wq-2\"],\n    )\n\n    startswith_: Optional[List[str]] = Field(\n        default=None,\n        description=(\n            \"A list of case-insensitive starts-with matches. For example, \"\n            \" passing 'marvin' will match \"\n            \"'marvin', and 'Marvin-robot', but not 'sad-marvin'.\"\n        ),\n        example=[\"marvin\", \"Marvin-robot\"],\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.WorkQueue.name.in_(self.any_))\n        if self.startswith_ is not None:\n            filters.append(\n                sa.or_(\n                    *[db.WorkQueue.name.ilike(f\"{item}%\") for item in self.startswith_]\n                )\n            )\n        return filters\n
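
A minimal sketch of prefix matching with this filter; the prefix is illustrative:

from prefect.server.schemas.filters import WorkQueueFilterName

# case-insensitive starts-with matching: "prod" matches "prod-critical" and
# "Prod-batch", but not "my-prod", because the match is anchored to the start
prefix_filter = WorkQueueFilterName(startswith_=["prod"])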
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterName.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkQueueFilterName.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilter","title":"WorkerFilter","text":"

Bases: PrefectOperatorFilterBaseModel

Filter by Worker.last_heartbeat_time.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkerFilter(PrefectOperatorFilterBaseModel):\n\"\"\"Filter by `Worker.last_heartbeat_time`.\"\"\"\n\n    # worker_config_id: Optional[WorkerFilterWorkPoolId] = Field(\n    #     default=None, description=\"Filter criteria for `Worker.worker_config_id`\"\n    # )\n\n    last_heartbeat_time: Optional[WorkerFilterLastHeartbeatTime] = Field(\n        default=None,\n        description=\"Filter criteria for `Worker.last_heartbeat_time`\",\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n\n        if self.last_heartbeat_time is not None:\n            filters.append(self.last_heartbeat_time.as_sql_filter(db))\n\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilter.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilter.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterLastHeartbeatTime","title":"WorkerFilterLastHeartbeatTime","text":"

Bases: PrefectFilterBaseModel

Filter by Worker.last_heartbeat_time.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkerFilterLastHeartbeatTime(PrefectFilterBaseModel):\n\"\"\"Filter by `Worker.last_heartbeat_time`.\"\"\"\n\n    before_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include processes whose last heartbeat was at or before this time\"\n        ),\n    )\n    after_: Optional[DateTimeTZ] = Field(\n        default=None,\n        description=(\n            \"Only include processes whose last heartbeat was at or after this time\"\n        ),\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.before_ is not None:\n            filters.append(db.Worker.last_heartbeat_time <= self.before_)\n        if self.after_ is not None:\n            filters.append(db.Worker.last_heartbeat_time >= self.after_)\n        return filters\n
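
A minimal sketch of filtering workers by heartbeat recency; the five-minute window is an arbitrary choice:

import pendulum

from prefect.server.schemas.filters import (
    WorkerFilter,
    WorkerFilterLastHeartbeatTime,
)

# workers whose last heartbeat was at or after five minutes ago
recently_active = WorkerFilter(
    last_heartbeat_time=WorkerFilterLastHeartbeatTime(
        after_=pendulum.now("UTC").subtract(minutes=5)
    )
)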
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterLastHeartbeatTime.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterLastHeartbeatTime.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterWorkPoolId","title":"WorkerFilterWorkPoolId","text":"

Bases: PrefectFilterBaseModel

Filter by Worker.worker_config_id.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/filters.py
class WorkerFilterWorkPoolId(PrefectFilterBaseModel):\n\"\"\"Filter by `Worker.worker_config_id`.\"\"\"\n\n    any_: Optional[List[UUID]] = Field(\n        default=None, description=\"A list of work pool ids to include\"\n    )\n\n    def _get_filter_list(self, db: \"PrefectDBInterface\") -> List:\n        filters = []\n        if self.any_ is not None:\n            filters.append(db.Worker.worker_config_id.in_(self.any_))\n        return filters\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterWorkPoolId.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/filters/#prefect.server.schemas.filters.WorkerFilterWorkPoolId.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | a name for the subclass | None |
| include_fields | List[str] | fields to include | None |
| exclude_fields | List[str] | fields to exclude | None |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| BaseModel | Type[B] | a subclass of this class |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/responses/","title":"server.schemas.responses","text":""},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses","title":"prefect.server.schemas.responses","text":"

Schemas for special responses from the Prefect REST API.

"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.FlowRunResponse","title":"FlowRunResponse","text":"

Bases: ORMBaseModel

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/responses.py
@copy_model_fields\nclass FlowRunResponse(ORMBaseModel):\n    name: str = FieldFrom(schemas.core.FlowRun)\n    flow_id: UUID = FieldFrom(schemas.core.FlowRun)\n    state_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n    deployment_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n    work_queue_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n    work_queue_name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    flow_version: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    parameters: dict = FieldFrom(schemas.core.FlowRun)\n    idempotency_key: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    context: dict = FieldFrom(schemas.core.FlowRun)\n    empirical_policy: FlowRunPolicy = FieldFrom(schemas.core.FlowRun)\n    tags: List[str] = FieldFrom(schemas.core.FlowRun)\n    parent_task_run_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n    state_type: Optional[schemas.states.StateType] = FieldFrom(schemas.core.FlowRun)\n    state_name: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    run_count: int = FieldFrom(schemas.core.FlowRun)\n    expected_start_time: Optional[DateTimeTZ] = FieldFrom(schemas.core.FlowRun)\n    next_scheduled_start_time: Optional[DateTimeTZ] = FieldFrom(schemas.core.FlowRun)\n    start_time: Optional[DateTimeTZ] = FieldFrom(schemas.core.FlowRun)\n    end_time: Optional[DateTimeTZ] = FieldFrom(schemas.core.FlowRun)\n    total_run_time: datetime.timedelta = FieldFrom(schemas.core.FlowRun)\n    estimated_run_time: datetime.timedelta = FieldFrom(schemas.core.FlowRun)\n    estimated_start_time_delta: datetime.timedelta = FieldFrom(schemas.core.FlowRun)\n    auto_scheduled: bool = FieldFrom(schemas.core.FlowRun)\n    infrastructure_document_id: Optional[UUID] = FieldFrom(schemas.core.FlowRun)\n    infrastructure_pid: Optional[str] = FieldFrom(schemas.core.FlowRun)\n    created_by: Optional[CreatedBy] = FieldFrom(schemas.core.FlowRun)\n    work_pool_id: Optional[UUID] = Field(\n        default=None,\n        description=\"The id of the flow run's work pool.\",\n    )\n    work_pool_name: Optional[str] = Field(\n        default=None,\n        description=\"The name of the flow run's work pool.\",\n        example=\"my-work-pool\",\n    )\n    state: Optional[schemas.states.State] = FieldFrom(schemas.core.FlowRun)\n\n    @classmethod\n    def from_orm(cls, orm_flow_run: \"prefect.server.database.orm_models.ORMFlowRun\"):\n        response = super().from_orm(orm_flow_run)\n        if orm_flow_run.work_queue:\n            response.work_queue_id = orm_flow_run.work_queue.id\n            response.work_queue_name = orm_flow_run.work_queue.name\n            if orm_flow_run.work_queue.work_pool:\n                response.work_pool_id = orm_flow_run.work_queue.work_pool.id\n                response.work_pool_name = orm_flow_run.work_queue.work_pool.name\n\n        return response\n\n    def __eq__(self, other: Any) -> bool:\n\"\"\"\n        Check for \"equality\" to another flow run schema\n\n        Estimates times are rolling and will always change with repeated queries for\n        a flow run so we ignore them during equality checks.\n        \"\"\"\n        if isinstance(other, FlowRunResponse):\n            exclude_fields = {\"estimated_run_time\", \"estimated_start_time_delta\"}\n            return self.dict(exclude=exclude_fields) == other.dict(\n                exclude=exclude_fields\n            )\n        return super().__eq__(other)\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.HistoryResponse","title":"HistoryResponse","text":"

Bases: PrefectBaseModel

Represents a history of aggregated states over an interval.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/responses.py
class HistoryResponse(PrefectBaseModel):\n\"\"\"Represents a history of aggregation states over an interval\"\"\"\n\n    interval_start: DateTimeTZ = Field(\n        default=..., description=\"The start date of the interval.\"\n    )\n    interval_end: DateTimeTZ = Field(\n        default=..., description=\"The end date of the interval.\"\n    )\n    states: List[HistoryResponseState] = Field(\n        default=..., description=\"A list of state histories during the interval.\"\n    )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.HistoryResponseState","title":"HistoryResponseState","text":"

Bases: PrefectBaseModel

Represents a single state's history over an interval.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/responses.py
class HistoryResponseState(PrefectBaseModel):\n\"\"\"Represents a single state's history over an interval.\"\"\"\n\n    state_type: schemas.states.StateType = Field(\n        default=..., description=\"The state type.\"\n    )\n    state_name: str = Field(default=..., description=\"The state name.\")\n    count_runs: int = Field(\n        default=...,\n        description=\"The number of runs in the specified state during the interval.\",\n    )\n    sum_estimated_run_time: datetime.timedelta = Field(\n        default=...,\n        description=\"The total estimated run time of all runs during the interval.\",\n    )\n    sum_estimated_lateness: datetime.timedelta = Field(\n        default=...,\n        description=(\n            \"The sum of differences between actual and expected start time during the\"\n            \" interval.\"\n        ),\n    )\n
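
A minimal sketch of how the two history models above nest; every value is illustrative:

import datetime

import pendulum

from prefect.server.schemas.responses import HistoryResponse, HistoryResponseState
from prefect.server.schemas.states import StateType

start = pendulum.datetime(2023, 1, 1, tz="UTC")

# one aggregated bucket covering a single hour
history = HistoryResponse(
    interval_start=start,
    interval_end=start.add(hours=1),
    states=[
        HistoryResponseState(
            state_type=StateType.COMPLETED,
            state_name="Completed",
            count_runs=12,
            sum_estimated_run_time=datetime.timedelta(minutes=9),
            sum_estimated_lateness=datetime.timedelta(seconds=45),
        )
    ],
)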
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.OrchestrationResult","title":"OrchestrationResult","text":"

Bases: PrefectBaseModel

A container for the output of state orchestration.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/responses.py
class OrchestrationResult(PrefectBaseModel):\n\"\"\"\n    A container for the output of state orchestration.\n    \"\"\"\n\n    state: Optional[schemas.states.State]\n    status: SetStateStatus\n    details: StateResponseDetails\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.SetStateStatus","title":"SetStateStatus","text":"

Bases: AutoEnum

Enumerates return statuses for setting run states.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/responses.py
class SetStateStatus(AutoEnum):\n\"\"\"Enumerates return statuses for setting run states.\"\"\"\n\n    ACCEPT = AutoEnum.auto()\n    REJECT = AutoEnum.auto()\n    ABORT = AutoEnum.auto()\n    WAIT = AutoEnum.auto()\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateAbortDetails","title":"StateAbortDetails","text":"

Bases: PrefectBaseModel

Details associated with an ABORT state transition.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/responses.py
class StateAbortDetails(PrefectBaseModel):\n\"\"\"Details associated with an ABORT state transition.\"\"\"\n\n    type: Literal[\"abort_details\"] = Field(\n        default=\"abort_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n    reason: Optional[str] = Field(\n        default=None, description=\"The reason why the state transition was aborted.\"\n    )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateAcceptDetails","title":"StateAcceptDetails","text":"

Bases: PrefectBaseModel

Details associated with an ACCEPT state transition.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/responses.py
class StateAcceptDetails(PrefectBaseModel):\n\"\"\"Details associated with an ACCEPT state transition.\"\"\"\n\n    type: Literal[\"accept_details\"] = Field(\n        default=\"accept_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateRejectDetails","title":"StateRejectDetails","text":"

Bases: PrefectBaseModel

Details associated with a REJECT state transition.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/responses.py
class StateRejectDetails(PrefectBaseModel):\n\"\"\"Details associated with a REJECT state transition.\"\"\"\n\n    type: Literal[\"reject_details\"] = Field(\n        default=\"reject_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n    reason: Optional[str] = Field(\n        default=None, description=\"The reason why the state transition was rejected.\"\n    )\n
"},{"location":"api-ref/server/schemas/responses/#prefect.server.schemas.responses.StateWaitDetails","title":"StateWaitDetails","text":"

Bases: PrefectBaseModel

Details associated with a WAIT state transition.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/responses.py
class StateWaitDetails(PrefectBaseModel):\n\"\"\"Details associated with a WAIT state transition.\"\"\"\n\n    type: Literal[\"wait_details\"] = Field(\n        default=\"wait_details\",\n        description=(\n            \"The type of state transition detail. Used to ensure pydantic does not\"\n            \" coerce into a different type.\"\n        ),\n    )\n    delay_seconds: int = Field(\n        default=...,\n        description=(\n            \"The length of time in seconds the client should wait before transitioning\"\n            \" states.\"\n        ),\n    )\n    reason: Optional[str] = Field(\n        default=None, description=\"The reason why the state transition should wait.\"\n    )\n
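
A sketch of how a client might branch on an OrchestrationResult using the status enum and detail models above; the handler and its print statements are illustrative:

from prefect.server.schemas.responses import OrchestrationResult, SetStateStatus

def handle_orchestration_result(result: OrchestrationResult) -> None:
    if result.status == SetStateStatus.ACCEPT:
        print(f"Transition accepted; new state: {result.state}")
    elif result.status == SetStateStatus.WAIT:
        # WAIT responses carry StateWaitDetails with a client-side delay
        print(f"Wait {result.details.delay_seconds}s before retrying: {result.details.reason}")
    elif result.status == SetStateStatus.REJECT:
        print(f"Transition rejected: {result.details.reason}")
    elif result.status == SetStateStatus.ABORT:
        print(f"Transition aborted: {result.details.reason}")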
"},{"location":"api-ref/server/schemas/schedules/","title":"server.schemas.schedules","text":""},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules","title":"prefect.server.schemas.schedules","text":"

Schedule schemas

"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.CronSchedule","title":"CronSchedule","text":"

Bases: PrefectBaseModel

Cron schedule

NOTE: If the timezone is a DST-observing one, then the schedule will adjust itself appropriately. Cron's rules for DST are based on schedule times, not intervals. This means that an hourly cron schedule will fire on every new schedule hour, not every elapsed hour; for example, when clocks are set back this will result in a two-hour pause as the schedule will fire the first time 1am is reached and the first time 2am is reached, 120 minutes later. Longer schedules, such as one that fires at 9am every morning, will automatically adjust for DST.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| cron | str | a valid cron string | required |
| timezone | str | a valid timezone string in IANA tzdata format (for example, America/New_York). | required |
| day_or | bool | Control how croniter handles day and day_of_week entries. Defaults to True, matching cron which connects those values using OR. If the switch is set to False, the values are connected using AND. This behaves like fcron and enables you to e.g. define a job that executes each 2nd friday of a month by setting the days of month and the weekday. | required |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/schedules.py
class CronSchedule(PrefectBaseModel):\n\"\"\"\n    Cron schedule\n\n    NOTE: If the timezone is a DST-observing one, then the schedule will adjust\n    itself appropriately. Cron's rules for DST are based on schedule times, not\n    intervals. This means that an hourly cron schedule will fire on every new\n    schedule hour, not every elapsed hour; for example, when clocks are set back\n    this will result in a two-hour pause as the schedule will fire *the first\n    time* 1am is reached and *the first time* 2am is reached, 120 minutes later.\n    Longer schedules, such as one that fires at 9am every morning, will\n    automatically adjust for DST.\n\n    Args:\n        cron (str): a valid cron string\n        timezone (str): a valid timezone string in IANA tzdata format (for example,\n            America/New_York).\n        day_or (bool, optional): Control how croniter handles `day` and `day_of_week`\n            entries. Defaults to True, matching cron which connects those values using\n            OR. If the switch is set to False, the values are connected using AND. This\n            behaves like fcron and enables you to e.g. define a job that executes each\n            2nd friday of a month by setting the days of month and the weekday.\n\n    \"\"\"\n\n    class Config:\n        extra = \"forbid\"\n\n    cron: str = Field(default=..., example=\"0 0 * * *\")\n    timezone: Optional[str] = Field(default=None, example=\"America/New_York\")\n    day_or: bool = Field(\n        default=True,\n        description=(\n            \"Control croniter behavior for handling day and day_of_week entries.\"\n        ),\n    )\n\n    @validator(\"timezone\")\n    def valid_timezone(cls, v):\n        if v and v not in pendulum.tz.timezones:\n            raise ValueError(\n                f'Invalid timezone: \"{v}\" (specify in IANA tzdata format, for example,'\n                \" America/New_York)\"\n            )\n        return v\n\n    @validator(\"cron\")\n    def valid_cron_string(cls, v):\n        # croniter allows \"random\" and \"hashed\" expressions\n        # which we do not support https://github.com/kiorky/croniter\n        if not croniter.is_valid(v):\n            raise ValueError(f'Invalid cron string: \"{v}\"')\n        elif any(c for c in v.split() if c.casefold() in [\"R\", \"H\", \"r\", \"h\"]):\n            raise ValueError(\n                f'Random and Hashed expressions are unsupported, received: \"{v}\"'\n            )\n        return v\n\n    async def get_dates(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> List[pendulum.DateTime]:\n\"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to None.  If a timezone-naive datetime is\n                provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): The maximum scheduled date to return. 
If\n                a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: A list of dates\n        \"\"\"\n        return sorted(self._get_dates_generator(n=n, start=start, end=end))\n\n    def _get_dates_generator(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> Generator[pendulum.DateTime, None, None]:\n\"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to the current date. If a timezone-naive\n                datetime is provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): No returned date will exceed this date.\n                If a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: a list of dates\n        \"\"\"\n        if start is None:\n            start = pendulum.now(\"UTC\")\n\n        start, end = _prepare_scheduling_start_and_end(start, end, self.timezone)\n\n        if n is None:\n            # if an end was supplied, we do our best to supply all matching dates (up to\n            # MAX_ITERATIONS)\n            if end is not None:\n                n = MAX_ITERATIONS\n            else:\n                n = 1\n\n        elif self.timezone:\n            start = start.in_tz(self.timezone)\n\n        # subtract one second from the start date, so that croniter returns it\n        # as an event (if it meets the cron criteria)\n        start = start.subtract(seconds=1)\n\n        # croniter's DST logic interferes with all other datetime libraries except pytz\n        start_localized = pytz.timezone(start.tz.name).localize(\n            datetime.datetime(\n                year=start.year,\n                month=start.month,\n                day=start.day,\n                hour=start.hour,\n                minute=start.minute,\n                second=start.second,\n                microsecond=start.microsecond,\n            )\n        )\n\n        # Respect microseconds by rounding up\n        if start_localized.microsecond > 0:\n            start_localized += datetime.timedelta(seconds=1)\n\n        cron = croniter(self.cron, start_localized, day_or=self.day_or)  # type: ignore\n        dates = set()\n        counter = 0\n\n        while True:\n            next_date = pendulum.instance(cron.get_next(datetime.datetime))\n            # if the end date was exceeded, exit\n            if end and next_date > end:\n                break\n            # ensure no duplicates; weird things can happen with DST\n            if next_date not in dates:\n                dates.add(next_date)\n                yield next_date\n\n            # if enough dates have been collected or enough attempts were made, exit\n            if len(dates) >= n or counter > MAX_ITERATIONS:\n                break\n\n            counter += 1\n
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.CronSchedule.get_dates","title":"get_dates async","text":"

Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| n | int | The number of dates to generate | None |
| start | datetime.datetime | The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. | None |
| end | datetime.datetime | The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. | None |

Returns:

| Type | Description |
| --- | --- |
| List[pendulum.DateTime] | A list of dates |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/schedules.py
async def get_dates(\n    self,\n    n: int = None,\n    start: datetime.datetime = None,\n    end: datetime.datetime = None,\n) -> List[pendulum.DateTime]:\n\"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n    following the start date.\n\n    Args:\n        n (int): The number of dates to generate\n        start (datetime.datetime, optional): The first returned date will be on or\n            after this date. Defaults to None.  If a timezone-naive datetime is\n            provided, it is assumed to be in the schedule's timezone.\n        end (datetime.datetime, optional): The maximum scheduled date to return. If\n            a timezone-naive datetime is provided, it is assumed to be in the\n            schedule's timezone.\n\n    Returns:\n        List[pendulum.DateTime]: A list of dates\n    \"\"\"\n    return sorted(self._get_dates_generator(n=n, start=start, end=end))\n
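
A minimal sketch of evaluating a cron schedule; note that get_dates is a coroutine, and the cron string, timezone, and start date below are illustrative:

import asyncio

import pendulum

from prefect.server.schemas.schedules import CronSchedule

# every day at 09:00 Eastern
schedule = CronSchedule(cron="0 9 * * *", timezone="America/New_York")

# drive the coroutine with asyncio when calling it outside an event loop
dates = asyncio.run(
    schedule.get_dates(n=3, start=pendulum.datetime(2023, 6, 1, tz="America/New_York"))
)
for d in dates:
    print(d)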
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.IntervalSchedule","title":"IntervalSchedule","text":"

Bases: PrefectBaseModel

A schedule formed by adding interval increments to an anchor_date. If no anchor_date is supplied, the current UTC time is used. If a timezone-naive datetime is provided for anchor_date, it is assumed to be in the schedule's timezone (or UTC). Even if supplied with an IANA timezone, anchor dates are always stored as UTC offsets, so a timezone can be provided to determine localization behaviors like DST boundary handling. If none is provided it will be inferred from the anchor date.

NOTE: If the IntervalSchedule anchor_date or timezone is provided in a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals. For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that appear to both be scheduled for 1am local time, even though they are an hour apart in UTC time. For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| interval | datetime.timedelta | an interval to schedule on | required |
| anchor_date | DateTimeTZ | an anchor date to schedule increments against; if not provided, the current timestamp will be used | required |
| timezone | str | a valid timezone string | required |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/schedules.py
class IntervalSchedule(PrefectBaseModel):\n\"\"\"\n    A schedule formed by adding `interval` increments to an `anchor_date`. If no\n    `anchor_date` is supplied, the current UTC time is used.  If a\n    timezone-naive datetime is provided for `anchor_date`, it is assumed to be\n    in the schedule's timezone (or UTC). Even if supplied with an IANA timezone,\n    anchor dates are always stored as UTC offsets, so a `timezone` can be\n    provided to determine localization behaviors like DST boundary handling. If\n    none is provided it will be inferred from the anchor date.\n\n    NOTE: If the `IntervalSchedule` `anchor_date` or `timezone` is provided in a\n    DST-observing timezone, then the schedule will adjust itself appropriately.\n    Intervals greater than 24 hours will follow DST conventions, while intervals\n    of less than 24 hours will follow UTC intervals. For example, an hourly\n    schedule will fire every UTC hour, even across DST boundaries. When clocks\n    are set back, this will result in two runs that *appear* to both be\n    scheduled for 1am local time, even though they are an hour apart in UTC\n    time. For longer intervals, like a daily schedule, the interval schedule\n    will adjust for DST boundaries so that the clock-hour remains constant. This\n    means that a daily schedule that always fires at 9am will observe DST and\n    continue to fire at 9am in the local time zone.\n\n    Args:\n        interval (datetime.timedelta): an interval to schedule on\n        anchor_date (DateTimeTZ, optional): an anchor date to schedule increments against;\n            if not provided, the current timestamp will be used\n        timezone (str, optional): a valid timezone string\n    \"\"\"\n\n    class Config:\n        extra = \"forbid\"\n        exclude_none = True\n\n    interval: datetime.timedelta\n    anchor_date: DateTimeTZ = None\n    timezone: Optional[str] = Field(default=None, example=\"America/New_York\")\n\n    @validator(\"interval\")\n    def interval_must_be_positive(cls, v):\n        if v.total_seconds() <= 0:\n            raise ValueError(\"The interval must be positive\")\n        return v\n\n    @validator(\"anchor_date\", always=True)\n    def default_anchor_date(cls, v):\n        if v is None:\n            return pendulum.now(\"UTC\")\n        return pendulum.instance(v)\n\n    @validator(\"timezone\", always=True)\n    def default_timezone(cls, v, *, values, **kwargs):\n        # if was provided, make sure its a valid IANA string\n        if v and v not in pendulum.tz.timezones:\n            raise ValueError(f'Invalid timezone: \"{v}\"')\n\n        # otherwise infer the timezone from the anchor date\n        elif v is None and values.get(\"anchor_date\"):\n            tz = values[\"anchor_date\"].tz.name\n            if tz in pendulum.tz.timezones:\n                return tz\n            # sometimes anchor dates have \"timezones\" that are UTC offsets\n            # like \"-04:00\". This happens when parsing ISO8601 strings.\n            # In this case we, the correct inferred localization is \"UTC\".\n            else:\n                return \"UTC\"\n\n        return v\n\n    async def get_dates(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> List[pendulum.DateTime]:\n\"\"\"Retrieves dates from the schedule. 
Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to None.  If a timezone-naive datetime is\n                provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): The maximum scheduled date to return. If\n                a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: A list of dates\n        \"\"\"\n        return sorted(self._get_dates_generator(n=n, start=start, end=end))\n\n    def _get_dates_generator(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> Generator[pendulum.DateTime, None, None]:\n\"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to None.  If a timezone-naive datetime is\n                provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): The maximum scheduled date to return. If\n                a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: a list of dates\n        \"\"\"\n        if n is None:\n            # if an end was supplied, we do our best to supply all matching dates (up to\n            # MAX_ITERATIONS)\n            if end is not None:\n                n = MAX_ITERATIONS\n            else:\n                n = 1\n\n        if start is None:\n            start = pendulum.now(\"UTC\")\n\n        anchor_tz = self.anchor_date.in_tz(self.timezone)\n        start, end = _prepare_scheduling_start_and_end(start, end, self.timezone)\n\n        # compute the offset between the anchor date and the start date to jump to the\n        # next date\n        offset = (start - anchor_tz).total_seconds() / self.interval.total_seconds()\n        next_date = anchor_tz.add(seconds=self.interval.total_seconds() * int(offset))\n\n        # break the interval into `days` and `seconds` because pendulum\n        # will handle DST boundaries properly if days are provided, but not\n        # if we add `total seconds`. 
Therefore, `next_date + self.interval`\n        # fails while `next_date.add(days=days, seconds=seconds)` works.\n        interval_days = self.interval.days\n        interval_seconds = self.interval.total_seconds() - (\n            interval_days * 24 * 60 * 60\n        )\n\n        # daylight saving time boundaries can create a situation where the next date is\n        # before the start date, so we advance it if necessary\n        while next_date < start:\n            next_date = next_date.add(days=interval_days, seconds=interval_seconds)\n\n        counter = 0\n        dates = set()\n\n        while True:\n            # if the end date was exceeded, exit\n            if end and next_date > end:\n                break\n\n            # ensure no duplicates; weird things can happen with DST\n            if next_date not in dates:\n                dates.add(next_date)\n                yield next_date\n\n            # if enough dates have been collected or enough attempts were made, exit\n            if len(dates) >= n or counter > MAX_ITERATIONS:\n                break\n\n            counter += 1\n\n            next_date = next_date.add(days=interval_days, seconds=interval_seconds)\n
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.IntervalSchedule.get_dates","title":"get_dates async","text":"

Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| n | int | The number of dates to generate | None |
| start | datetime.datetime | The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. | None |
| end | datetime.datetime | The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. | None |

Returns:

| Type | Description |
| --- | --- |
| List[pendulum.DateTime] | A list of dates |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/schedules.py
async def get_dates(\n    self,\n    n: int = None,\n    start: datetime.datetime = None,\n    end: datetime.datetime = None,\n) -> List[pendulum.DateTime]:\n\"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n    following the start date.\n\n    Args:\n        n (int): The number of dates to generate\n        start (datetime.datetime, optional): The first returned date will be on or\n            after this date. Defaults to None.  If a timezone-naive datetime is\n            provided, it is assumed to be in the schedule's timezone.\n        end (datetime.datetime, optional): The maximum scheduled date to return. If\n            a timezone-naive datetime is provided, it is assumed to be in the\n            schedule's timezone.\n\n    Returns:\n        List[pendulum.DateTime]: A list of dates\n    \"\"\"\n    return sorted(self._get_dates_generator(n=n, start=start, end=end))\n
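
A minimal sketch of an hourly schedule anchored to a DST-observing timezone; the anchor date and interval are illustrative:

import asyncio
import datetime

import pendulum

from prefect.server.schemas.schedules import IntervalSchedule

# fire every hour, anchored to 09:00 Eastern on an arbitrary date
schedule = IntervalSchedule(
    interval=datetime.timedelta(hours=1),
    anchor_date=pendulum.datetime(2023, 1, 1, 9, 0, tz="America/New_York"),
)

dates = asyncio.run(schedule.get_dates(n=3))
for d in dates:
    print(d)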
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.RRuleSchedule","title":"RRuleSchedule","text":"

Bases: PrefectBaseModel

RRule schedule, based on the iCalendar standard (RFC 5545) as implemented in dateutil.rrule.

RRules are appropriate for any kind of calendar-date manipulation, including irregular intervals, repetition, exclusions, week day or day-of-month adjustments, and more.

Note that as a calendar-oriented standard, RRuleSchedules are sensitive to the initial timezone provided. A 9am daily schedule with a daylight saving time-aware start date will maintain a local 9am time through DST boundaries; a 9am daily schedule with a UTC start date will maintain a 9am UTC time.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| rrule | str | a valid RRule string | required |
| timezone | str | a valid timezone string | required |

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/schedules.py
class RRuleSchedule(PrefectBaseModel):\n\"\"\"\n    RRule schedule, based on the iCalendar standard\n    ([RFC 5545](https://datatracker.ietf.org/doc/html/rfc5545)) as\n    implemented in `dateutils.rrule`.\n\n    RRules are appropriate for any kind of calendar-date manipulation, including\n    irregular intervals, repetition, exclusions, week day or day-of-month\n    adjustments, and more.\n\n    Note that as a calendar-oriented standard, `RRuleSchedules` are sensitive to\n    to the initial timezone provided. A 9am daily schedule with a daylight saving\n    time-aware start date will maintain a local 9am time through DST boundaries;\n    a 9am daily schedule with a UTC start date will maintain a 9am UTC time.\n\n    Args:\n        rrule (str): a valid RRule string\n        timezone (str, optional): a valid timezone string\n    \"\"\"\n\n    class Config:\n        extra = \"forbid\"\n\n    rrule: str\n    timezone: Optional[str] = Field(default=None, example=\"America/New_York\")\n\n    @validator(\"rrule\")\n    def validate_rrule_str(cls, v):\n        # attempt to parse the rrule string as an rrule object\n        # this will error if the string is invalid\n        try:\n            dateutil.rrule.rrulestr(v, cache=True)\n        except ValueError as exc:\n            # rrules errors are a mix of cryptic and informative\n            # so reraise to be clear that the string was invalid\n            raise ValueError(f'Invalid RRule string \"{v}\": {exc}')\n        if len(v) > MAX_RRULE_LENGTH:\n            raise ValueError(\n                f'Invalid RRule string \"{v[:40]}...\"\\n'\n                f\"Max length is {MAX_RRULE_LENGTH}, got {len(v)}\"\n            )\n        return v\n\n    @classmethod\n    def from_rrule(cls, rrule: dateutil.rrule.rrule):\n        if isinstance(rrule, dateutil.rrule.rrule):\n            if rrule._dtstart.tzinfo is not None:\n                timezone = rrule._dtstart.tzinfo.name\n            else:\n                timezone = \"UTC\"\n            return RRuleSchedule(rrule=str(rrule), timezone=timezone)\n        elif isinstance(rrule, dateutil.rrule.rruleset):\n            dtstarts = [rr._dtstart for rr in rrule._rrule if rr._dtstart is not None]\n            unique_dstarts = set(pendulum.instance(d).in_tz(\"UTC\") for d in dtstarts)\n            unique_timezones = set(d.tzinfo for d in dtstarts if d.tzinfo is not None)\n\n            if len(unique_timezones) > 1:\n                raise ValueError(\n                    f\"rruleset has too many dtstart timezones: {unique_timezones}\"\n                )\n\n            if len(unique_dstarts) > 1:\n                raise ValueError(f\"rruleset has too many dtstarts: {unique_dstarts}\")\n\n            if unique_dstarts and unique_timezones:\n                timezone = dtstarts[0].tzinfo.name\n            else:\n                timezone = \"UTC\"\n\n            rruleset_string = \"\"\n            if rrule._rrule:\n                rruleset_string += \"\\n\".join(str(r) for r in rrule._rrule)\n            if rrule._exrule:\n                rruleset_string += \"\\n\" if rruleset_string else \"\"\n                rruleset_string += \"\\n\".join(str(r) for r in rrule._exrule).replace(\n                    \"RRULE\", \"EXRULE\"\n                )\n            if rrule._rdate:\n                rruleset_string += \"\\n\" if rruleset_string else \"\"\n                rruleset_string += \"RDATE:\" + \",\".join(\n                    rd.strftime(\"%Y%m%dT%H%M%SZ\") for rd in rrule._rdate\n                )\n            if 
rrule._exdate:\n                rruleset_string += \"\\n\" if rruleset_string else \"\"\n                rruleset_string += \"EXDATE:\" + \",\".join(\n                    exd.strftime(\"%Y%m%dT%H%M%SZ\") for exd in rrule._exdate\n                )\n            return RRuleSchedule(rrule=rruleset_string, timezone=timezone)\n        else:\n            raise ValueError(f\"Invalid RRule object: {rrule}\")\n\n    def to_rrule(self) -> dateutil.rrule.rrule:\n\"\"\"\n        Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n        here\n        \"\"\"\n        rrule = dateutil.rrule.rrulestr(\n            self.rrule,\n            dtstart=DEFAULT_ANCHOR_DATE,\n            cache=True,\n        )\n        timezone = dateutil.tz.gettz(self.timezone)\n        if isinstance(rrule, dateutil.rrule.rrule):\n            kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n            if rrule._until:\n                kwargs.update(\n                    until=rrule._until.replace(tzinfo=timezone),\n                )\n            return rrule.replace(**kwargs)\n        elif isinstance(rrule, dateutil.rrule.rruleset):\n            # update rrules\n            localized_rrules = []\n            for rr in rrule._rrule:\n                kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n                if rr._until:\n                    kwargs.update(\n                        until=rr._until.replace(tzinfo=timezone),\n                    )\n                localized_rrules.append(rr.replace(**kwargs))\n            rrule._rrule = localized_rrules\n\n            # update exrules\n            localized_exrules = []\n            for exr in rrule._exrule:\n                kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n                if exr._until:\n                    kwargs.update(\n                        until=exr._until.replace(tzinfo=timezone),\n                    )\n                localized_exrules.append(exr.replace(**kwargs))\n            rrule._exrule = localized_exrules\n\n            # update rdates\n            localized_rdates = []\n            for rd in rrule._rdate:\n                localized_rdates.append(rd.replace(tzinfo=timezone))\n            rrule._rdate = localized_rdates\n\n            # update exdates\n            localized_exdates = []\n            for exd in rrule._exdate:\n                localized_exdates.append(exd.replace(tzinfo=timezone))\n            rrule._exdate = localized_exdates\n\n            return rrule\n\n    @validator(\"timezone\", always=True)\n    def valid_timezone(cls, v):\n        if v and v not in pytz.all_timezones_set:\n            raise ValueError(f'Invalid timezone: \"{v}\"')\n        elif v is None:\n            return \"UTC\"\n        return v\n\n    async def get_dates(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> List[pendulum.DateTime]:\n\"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to None.  If a timezone-naive datetime is\n                provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): The maximum scheduled date to return. 
If\n                a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: A list of dates\n        \"\"\"\n        return sorted(self._get_dates_generator(n=n, start=start, end=end))\n\n    def _get_dates_generator(\n        self,\n        n: int = None,\n        start: datetime.datetime = None,\n        end: datetime.datetime = None,\n    ) -> Generator[pendulum.DateTime, None, None]:\n\"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n        following the start date.\n\n        Args:\n            n (int): The number of dates to generate\n            start (datetime.datetime, optional): The first returned date will be on or\n                after this date. Defaults to the current date. If a timezone-naive\n                datetime is provided, it is assumed to be in the schedule's timezone.\n            end (datetime.datetime, optional): No returned date will exceed this date.\n                If a timezone-naive datetime is provided, it is assumed to be in the\n                schedule's timezone.\n\n        Returns:\n            List[pendulum.DateTime]: a list of dates\n        \"\"\"\n        if start is None:\n            start = pendulum.now(\"UTC\")\n\n        start, end = _prepare_scheduling_start_and_end(start, end, self.timezone)\n\n        if n is None:\n            # if an end was supplied, we do our best to supply all matching dates (up\n            # to MAX_ITERATIONS)\n            if end is not None:\n                n = MAX_ITERATIONS\n            else:\n                n = 1\n\n        dates = set()\n        counter = 0\n\n        # pass count = None to account for discrepancies with duplicates around DST\n        # boundaries\n        for next_date in self.to_rrule().xafter(start, count=None, inc=True):\n            next_date = pendulum.instance(next_date).in_tz(self.timezone)\n\n            # if the end date was exceeded, exit\n            if end and next_date > end:\n                break\n\n            # ensure no duplicates; weird things can happen with DST\n            if next_date not in dates:\n                dates.add(next_date)\n                yield next_date\n\n            # if enough dates have been collected or enough attempts were made, exit\n            if len(dates) >= n or counter > MAX_ITERATIONS:\n                break\n\n            counter += 1\n
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.RRuleSchedule.get_dates","title":"get_dates async","text":"

Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date.

Parameters:

Name Type Description Default n int

The number of dates to generate

None start datetime.datetime

The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.

None end datetime.datetime

The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone.

None

Returns:

Type Description List[pendulum.DateTime]

List[pendulum.DateTime]: A list of dates

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/schedules.py
async def get_dates(\n    self,\n    n: int = None,\n    start: datetime.datetime = None,\n    end: datetime.datetime = None,\n) -> List[pendulum.DateTime]:\n\"\"\"Retrieves dates from the schedule. Up to 1,000 candidate dates are checked\n    following the start date.\n\n    Args:\n        n (int): The number of dates to generate\n        start (datetime.datetime, optional): The first returned date will be on or\n            after this date. Defaults to None.  If a timezone-naive datetime is\n            provided, it is assumed to be in the schedule's timezone.\n        end (datetime.datetime, optional): The maximum scheduled date to return. If\n            a timezone-naive datetime is provided, it is assumed to be in the\n            schedule's timezone.\n\n    Returns:\n        List[pendulum.DateTime]: A list of dates\n    \"\"\"\n    return sorted(self._get_dates_generator(n=n, start=start, end=end))\n
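A minimal usage sketch (not part of the source): the example below builds a daily 9am schedule anchored in America/New_York and previews the next three occurrences. The rrule string, timezone, and use of asyncio.run are illustrative assumptions; get_dates is a coroutine and must be awaited.

import asyncio

from prefect.server.schemas.schedules import RRuleSchedule

# DTSTART is timezone-naive, so the schedule keeps a local 9am through DST changes
schedule = RRuleSchedule(
    rrule="DTSTART:20240101T090000\nRRULE:FREQ=DAILY;INTERVAL=1",
    timezone="America/New_York",
)

# get_dates is async; drive it with an event loop
dates = asyncio.run(schedule.get_dates(n=3))
for d in dates:
    print(d.isoformat())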
"},{"location":"api-ref/server/schemas/schedules/#prefect.server.schemas.schedules.RRuleSchedule.to_rrule","title":"to_rrule","text":"

Since rrule doesn't properly serialize/deserialize timezones, we localize dates here

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/schedules.py
def to_rrule(self) -> dateutil.rrule.rrule:\n\"\"\"\n    Since rrule doesn't properly serialize/deserialize timezones, we localize dates\n    here\n    \"\"\"\n    rrule = dateutil.rrule.rrulestr(\n        self.rrule,\n        dtstart=DEFAULT_ANCHOR_DATE,\n        cache=True,\n    )\n    timezone = dateutil.tz.gettz(self.timezone)\n    if isinstance(rrule, dateutil.rrule.rrule):\n        kwargs = dict(dtstart=rrule._dtstart.replace(tzinfo=timezone))\n        if rrule._until:\n            kwargs.update(\n                until=rrule._until.replace(tzinfo=timezone),\n            )\n        return rrule.replace(**kwargs)\n    elif isinstance(rrule, dateutil.rrule.rruleset):\n        # update rrules\n        localized_rrules = []\n        for rr in rrule._rrule:\n            kwargs = dict(dtstart=rr._dtstart.replace(tzinfo=timezone))\n            if rr._until:\n                kwargs.update(\n                    until=rr._until.replace(tzinfo=timezone),\n                )\n            localized_rrules.append(rr.replace(**kwargs))\n        rrule._rrule = localized_rrules\n\n        # update exrules\n        localized_exrules = []\n        for exr in rrule._exrule:\n            kwargs = dict(dtstart=exr._dtstart.replace(tzinfo=timezone))\n            if exr._until:\n                kwargs.update(\n                    until=exr._until.replace(tzinfo=timezone),\n                )\n            localized_exrules.append(exr.replace(**kwargs))\n        rrule._exrule = localized_exrules\n\n        # update rdates\n        localized_rdates = []\n        for rd in rrule._rdate:\n            localized_rdates.append(rd.replace(tzinfo=timezone))\n        rrule._rdate = localized_rdates\n\n        # update exdates\n        localized_exdates = []\n        for exd in rrule._exdate:\n            localized_exdates.append(exd.replace(tzinfo=timezone))\n        rrule._exdate = localized_exdates\n\n        return rrule\n
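A short hedged sketch of the localization behavior described above (the rrule string and timezone are illustrative, and the private _dtstart attribute is inspected only to show the effect): to_rrule returns a dateutil rrule whose dtstart carries the schedule's timezone rather than a naive datetime.

from prefect.server.schemas.schedules import RRuleSchedule

schedule = RRuleSchedule(rrule="RRULE:FREQ=HOURLY", timezone="Europe/London")
rr = schedule.to_rrule()      # a dateutil.rrule.rrule instance
print(rr._dtstart.tzinfo)     # a dateutil tz object for Europe/London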
"},{"location":"api-ref/server/schemas/sorting/","title":"server.schemas.sorting","text":""},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting","title":"prefect.server.schemas.sorting","text":"

Schemas for sorting Prefect REST API objects.

"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactCollectionSort","title":"ArtifactCollectionSort","text":"

Bases: AutoEnum

Defines artifact collection sorting options.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
class ArtifactCollectionSort(AutoEnum):\n\"\"\"Defines artifact collection sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    ID_DESC = AutoEnum.auto()\n    KEY_DESC = AutoEnum.auto()\n    KEY_ASC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort artifact collections\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.ArtifactCollection.created.desc(),\n            \"UPDATED_DESC\": db.ArtifactCollection.updated.desc(),\n            \"ID_DESC\": db.ArtifactCollection.id.desc(),\n            \"KEY_DESC\": db.ArtifactCollection.key.desc(),\n            \"KEY_ASC\": db.ArtifactCollection.key.asc(),\n        }\n        return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactCollectionSort.as_sql_sort","title":"as_sql_sort","text":"

Return an expression used to sort artifact collections

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort artifact collections\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.ArtifactCollection.created.desc(),\n        \"UPDATED_DESC\": db.ArtifactCollection.updated.desc(),\n        \"ID_DESC\": db.ArtifactCollection.id.desc(),\n        \"KEY_DESC\": db.ArtifactCollection.key.desc(),\n        \"KEY_ASC\": db.ArtifactCollection.key.asc(),\n    }\n    return sort_mapping[self.value]\n
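A hedged sketch of how a sort option becomes an ORDER BY clause. It assumes a PrefectDBInterface obtained via provide_database_interface from prefect.server.database.dependencies; the query itself is illustrative, not taken from the source.

import sqlalchemy as sa

from prefect.server.database.dependencies import provide_database_interface
from prefect.server.schemas.sorting import ArtifactCollectionSort

db = provide_database_interface()

# as_sql_sort maps the enum value to a SQLAlchemy column expression
order_by = ArtifactCollectionSort.KEY_ASC.as_sql_sort(db=db)
query = sa.select(db.ArtifactCollection).order_by(order_by)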
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactSort","title":"ArtifactSort","text":"

Bases: AutoEnum

Defines artifact sorting options.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
class ArtifactSort(AutoEnum):\n\"\"\"Defines artifact sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    ID_DESC = AutoEnum.auto()\n    KEY_DESC = AutoEnum.auto()\n    KEY_ASC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort artifacts\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.Artifact.created.desc(),\n            \"UPDATED_DESC\": db.Artifact.updated.desc(),\n            \"ID_DESC\": db.Artifact.id.desc(),\n            \"KEY_DESC\": db.Artifact.key.desc(),\n            \"KEY_ASC\": db.Artifact.key.asc(),\n        }\n        return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.ArtifactSort.as_sql_sort","title":"as_sql_sort","text":"

Return an expression used to sort artifacts

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort artifacts\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.Artifact.created.desc(),\n        \"UPDATED_DESC\": db.Artifact.updated.desc(),\n        \"ID_DESC\": db.Artifact.id.desc(),\n        \"KEY_DESC\": db.Artifact.key.desc(),\n        \"KEY_ASC\": db.Artifact.key.asc(),\n    }\n    return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.DeploymentSort","title":"DeploymentSort","text":"

Bases: AutoEnum

Defines deployment sorting options.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
class DeploymentSort(AutoEnum):\n\"\"\"Defines deployment sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort deployments\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.Deployment.created.desc(),\n            \"UPDATED_DESC\": db.Deployment.updated.desc(),\n            \"NAME_ASC\": db.Deployment.name.asc(),\n            \"NAME_DESC\": db.Deployment.name.desc(),\n        }\n        return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.DeploymentSort.as_sql_sort","title":"as_sql_sort","text":"

Return an expression used to sort deployments

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort deployments\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.Deployment.created.desc(),\n        \"UPDATED_DESC\": db.Deployment.updated.desc(),\n        \"NAME_ASC\": db.Deployment.name.asc(),\n        \"NAME_DESC\": db.Deployment.name.desc(),\n    }\n    return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.FlowRunSort","title":"FlowRunSort","text":"

Bases: AutoEnum

Defines flow run sorting options.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
class FlowRunSort(AutoEnum):\n\"\"\"Defines flow run sorting options.\"\"\"\n\n    ID_DESC = AutoEnum.auto()\n    START_TIME_ASC = AutoEnum.auto()\n    START_TIME_DESC = AutoEnum.auto()\n    EXPECTED_START_TIME_ASC = AutoEnum.auto()\n    EXPECTED_START_TIME_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n    NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n    END_TIME_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n        from sqlalchemy.sql.functions import coalesce\n\n\"\"\"Return an expression used to sort flow runs\"\"\"\n        sort_mapping = {\n            \"ID_DESC\": db.FlowRun.id.desc(),\n            \"START_TIME_ASC\": coalesce(\n                db.FlowRun.start_time, db.FlowRun.expected_start_time\n            ).asc(),\n            \"START_TIME_DESC\": coalesce(\n                db.FlowRun.start_time, db.FlowRun.expected_start_time\n            ).desc(),\n            \"EXPECTED_START_TIME_ASC\": db.FlowRun.expected_start_time.asc(),\n            \"EXPECTED_START_TIME_DESC\": db.FlowRun.expected_start_time.desc(),\n            \"NAME_ASC\": db.FlowRun.name.asc(),\n            \"NAME_DESC\": db.FlowRun.name.desc(),\n            \"NEXT_SCHEDULED_START_TIME_ASC\": db.FlowRun.next_scheduled_start_time.asc(),\n            \"END_TIME_DESC\": db.FlowRun.end_time.desc(),\n        }\n        return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.FlowSort","title":"FlowSort","text":"

Bases: AutoEnum

Defines flow sorting options.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
class FlowSort(AutoEnum):\n\"\"\"Defines flow sorting options.\"\"\"\n\n    CREATED_DESC = AutoEnum.auto()\n    UPDATED_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort flows\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.Flow.created.desc(),\n            \"UPDATED_DESC\": db.Flow.updated.desc(),\n            \"NAME_ASC\": db.Flow.name.asc(),\n            \"NAME_DESC\": db.Flow.name.desc(),\n        }\n        return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.FlowSort.as_sql_sort","title":"as_sql_sort","text":"

Return an expression used to sort flows

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort flows\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.Flow.created.desc(),\n        \"UPDATED_DESC\": db.Flow.updated.desc(),\n        \"NAME_ASC\": db.Flow.name.asc(),\n        \"NAME_DESC\": db.Flow.name.desc(),\n    }\n    return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.LogSort","title":"LogSort","text":"

Bases: AutoEnum

Defines log sorting options.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
class LogSort(AutoEnum):\n\"\"\"Defines log sorting options.\"\"\"\n\n    TIMESTAMP_ASC = AutoEnum.auto()\n    TIMESTAMP_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort task runs\"\"\"\n        sort_mapping = {\n            \"TIMESTAMP_ASC\": db.Log.timestamp.asc(),\n            \"TIMESTAMP_DESC\": db.Log.timestamp.desc(),\n        }\n        return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.LogSort.as_sql_sort","title":"as_sql_sort","text":"

Return an expression used to sort logs

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort task runs\"\"\"\n    sort_mapping = {\n        \"TIMESTAMP_ASC\": db.Log.timestamp.asc(),\n        \"TIMESTAMP_DESC\": db.Log.timestamp.desc(),\n    }\n    return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.TaskRunSort","title":"TaskRunSort","text":"

Bases: AutoEnum

Defines task run sorting options.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
class TaskRunSort(AutoEnum):\n\"\"\"Defines task run sorting options.\"\"\"\n\n    ID_DESC = AutoEnum.auto()\n    EXPECTED_START_TIME_ASC = AutoEnum.auto()\n    EXPECTED_START_TIME_DESC = AutoEnum.auto()\n    NAME_ASC = AutoEnum.auto()\n    NAME_DESC = AutoEnum.auto()\n    NEXT_SCHEDULED_START_TIME_ASC = AutoEnum.auto()\n    END_TIME_DESC = AutoEnum.auto()\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort task runs\"\"\"\n        sort_mapping = {\n            \"ID_DESC\": db.TaskRun.id.desc(),\n            \"EXPECTED_START_TIME_ASC\": db.TaskRun.expected_start_time.asc(),\n            \"EXPECTED_START_TIME_DESC\": db.TaskRun.expected_start_time.desc(),\n            \"NAME_ASC\": db.TaskRun.name.asc(),\n            \"NAME_DESC\": db.TaskRun.name.desc(),\n            \"NEXT_SCHEDULED_START_TIME_ASC\": db.TaskRun.next_scheduled_start_time.asc(),\n            \"END_TIME_DESC\": db.TaskRun.end_time.desc(),\n        }\n        return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.TaskRunSort.as_sql_sort","title":"as_sql_sort","text":"

Return an expression used to sort task runs

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort task runs\"\"\"\n    sort_mapping = {\n        \"ID_DESC\": db.TaskRun.id.desc(),\n        \"EXPECTED_START_TIME_ASC\": db.TaskRun.expected_start_time.asc(),\n        \"EXPECTED_START_TIME_DESC\": db.TaskRun.expected_start_time.desc(),\n        \"NAME_ASC\": db.TaskRun.name.asc(),\n        \"NAME_DESC\": db.TaskRun.name.desc(),\n        \"NEXT_SCHEDULED_START_TIME_ASC\": db.TaskRun.next_scheduled_start_time.asc(),\n        \"END_TIME_DESC\": db.TaskRun.end_time.desc(),\n    }\n    return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.VariableSort","title":"VariableSort","text":"

Bases: AutoEnum

Defines variables sorting options.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
class VariableSort(AutoEnum):\n\"\"\"Defines variables sorting options.\"\"\"\n\n    CREATED_DESC = \"CREATED_DESC\"\n    UPDATED_DESC = \"UPDATED_DESC\"\n    NAME_DESC = \"NAME_DESC\"\n    NAME_ASC = \"NAME_ASC\"\n\n    def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort variables\"\"\"\n        sort_mapping = {\n            \"CREATED_DESC\": db.Variable.created.desc(),\n            \"UPDATED_DESC\": db.Variable.updated.desc(),\n            \"NAME_DESC\": db.Variable.name.desc(),\n            \"NAME_ASC\": db.Variable.name.asc(),\n        }\n        return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/sorting/#prefect.server.schemas.sorting.VariableSort.as_sql_sort","title":"as_sql_sort","text":"

Return an expression used to sort variables

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/sorting.py
def as_sql_sort(self, db: \"PrefectDBInterface\") -> \"ColumnElement\":\n\"\"\"Return an expression used to sort variables\"\"\"\n    sort_mapping = {\n        \"CREATED_DESC\": db.Variable.created.desc(),\n        \"UPDATED_DESC\": db.Variable.updated.desc(),\n        \"NAME_DESC\": db.Variable.name.desc(),\n        \"NAME_ASC\": db.Variable.name.asc(),\n    }\n    return sort_mapping[self.value]\n
"},{"location":"api-ref/server/schemas/states/","title":"server.schemas.states","text":""},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states","title":"prefect.server.schemas.states","text":"

State schemas.

"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State","title":"State","text":"

Bases: StateBaseModel, Generic[R]

Represents the state of a run.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
class State(StateBaseModel, Generic[R]):\n\"\"\"Represents the state of a run.\"\"\"\n\n    class Config:\n        orm_mode = True\n\n    type: StateType\n    name: Optional[str] = Field(default=None)\n    timestamp: DateTimeTZ = Field(default_factory=lambda: pendulum.now(\"UTC\"))\n    message: Optional[str] = Field(default=None, example=\"Run started\")\n    data: Optional[Any] = Field(\n        default=None,\n        description=(\n            \"Data associated with the state, e.g. a result. \"\n            \"Content must be storable as JSON.\"\n        ),\n    )\n    state_details: StateDetails = Field(default_factory=StateDetails)\n\n    @classmethod\n    def from_orm_without_result(\n        cls,\n        orm_state: Union[\n            \"prefect.server.database.orm_models.ORMFlowRunState\",\n            \"prefect.server.database.orm_models.ORMTaskRunState\",\n        ],\n        with_data: Optional[Any] = None,\n    ):\n\"\"\"\n        During orchestration, ORM states can be instantiated prior to inserting results\n        into the artifact table and the `data` field will not be eagerly loaded. In\n        these cases, sqlalchemy will attept to lazily load the the relationship, which\n        will fail when called within a synchronous pydantic method.\n\n        This method will construct a `State` object from an ORM model without a loaded\n        artifact and attach data passed using the `with_data` argument to the `data`\n        field.\n        \"\"\"\n\n        field_keys = cls.schema()[\"properties\"].keys()\n        state_data = {\n            field: getattr(orm_state, field, None)\n            for field in field_keys\n            if field != \"data\"\n        }\n        state_data[\"data\"] = with_data\n        return cls(**state_data)\n\n    @validator(\"name\", always=True)\n    def default_name_from_type(cls, v, *, values, **kwargs):\n\"\"\"If a name is not provided, use the type\"\"\"\n\n        # if `type` is not in `values` it means the `type` didn't pass its own\n        # validation check and an error will be raised after this function is called\n        if v is None and values.get(\"type\"):\n            v = \" \".join([v.capitalize() for v in values.get(\"type\").value.split(\"_\")])\n        return v\n\n    @root_validator\n    def default_scheduled_start_time(cls, values):\n\"\"\"\n        TODO: This should throw an error instead of setting a default but is out of\n              scope for https://github.com/PrefectHQ/orion/pull/174/ and can be rolled\n              into work refactoring state initialization\n        \"\"\"\n        if values.get(\"type\") == StateType.SCHEDULED:\n            state_details = values.setdefault(\n                \"state_details\", cls.__fields__[\"state_details\"].get_default()\n            )\n            if not state_details.scheduled_time:\n                state_details.scheduled_time = pendulum.now(\"utc\")\n        return values\n\n    def is_scheduled(self) -> bool:\n        return self.type == StateType.SCHEDULED\n\n    def is_pending(self) -> bool:\n        return self.type == StateType.PENDING\n\n    def is_running(self) -> bool:\n        return self.type == StateType.RUNNING\n\n    def is_completed(self) -> bool:\n        return self.type == StateType.COMPLETED\n\n    def is_failed(self) -> bool:\n        return self.type == StateType.FAILED\n\n    def is_crashed(self) -> bool:\n        return self.type == StateType.CRASHED\n\n    def is_cancelled(self) -> bool:\n        return self.type == StateType.CANCELLED\n\n    def 
is_cancelling(self) -> bool:\n        return self.type == StateType.CANCELLING\n\n    def is_final(self) -> bool:\n        return self.type in TERMINAL_STATES\n\n    def is_paused(self) -> bool:\n        return self.type == StateType.PAUSED\n\n    def copy(self, *, update: dict = None, reset_fields: bool = False, **kwargs):\n\"\"\"\n        Copying API models should return an object that could be inserted into the\n        database again. The 'timestamp' is reset using the default factory.\n        \"\"\"\n        update = update or {}\n        update.setdefault(\"timestamp\", self.__fields__[\"timestamp\"].get_default())\n        return super().copy(reset_fields=reset_fields, update=update, **kwargs)\n\n    def result(self, raise_on_failure: bool = True, fetch: Optional[bool] = None):\n        # Backwards compatible `result` handling on the server-side schema\n        from prefect.states import State\n\n        warnings.warn(\n            (\n                \"`result` is no longer supported by\"\n                \" `prefect.server.schemas.states.State` and will be removed in a future\"\n                \" release. When result retrieval is needed, use `prefect.states.State`.\"\n            ),\n            DeprecationWarning,\n            stacklevel=2,\n        )\n\n        state = State.parse_obj(self)\n        return state.result(raise_on_failure=raise_on_failure, fetch=fetch)\n\n    def to_state_create(self):\n        # Backwards compatibility for `to_state_create`\n        from prefect.client.schemas import State\n\n        warnings.warn(\n            (\n                \"Use of `prefect.server.schemas.states.State` from the client is\"\n                \" deprecated and support will be removed in a future release. Use\"\n                \" `prefect.states.State` instead.\"\n            ),\n            DeprecationWarning,\n            stacklevel=2,\n        )\n\n        state = State.parse_obj(self)\n        return state.to_state_create()\n\n    def __repr__(self) -> str:\n\"\"\"\n        Generates a complete state representation appropriate for introspection\n        and debugging, including the result:\n\n        `MyCompletedState(message=\"my message\", type=COMPLETED, result=...)`\n        \"\"\"\n        from prefect.deprecated.data_documents import DataDocument\n\n        if isinstance(self.data, DataDocument):\n            result = self.data.decode()\n        else:\n            result = self.data\n\n        display = dict(\n            message=repr(self.message),\n            type=str(self.type.value),\n            result=repr(result),\n        )\n\n        return f\"{self.name}({', '.join(f'{k}={v}' for k, v in display.items())})\"\n\n    def __str__(self) -> str:\n\"\"\"\n        Generates a simple state representation appropriate for logging:\n\n        `MyCompletedState(\"my message\", type=COMPLETED)`\n        \"\"\"\n\n        display = []\n\n        if self.message:\n            display.append(repr(self.message))\n\n        if self.type.value.lower() != self.name.lower():\n            display.append(f\"type={self.type.value}\")\n\n        return f\"{self.name}({', '.join(display)})\"\n\n    def __hash__(self) -> int:\n        return hash(\n            (\n                getattr(self.state_details, \"flow_run_id\", None),\n                getattr(self.state_details, \"task_run_id\", None),\n                self.timestamp,\n                self.type,\n            )\n        )\n
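A brief illustrative sketch (not from the source): constructing a State directly shows the name being derived from the type and the helper predicates in action.

from prefect.server.schemas.states import State, StateType

state = State(type=StateType.COMPLETED, message="done")
print(state.name)            # "Completed", derived from the type
print(state.is_completed())  # True
print(state.is_final())      # True, COMPLETED is a terminal state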
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State.from_orm_without_result","title":"from_orm_without_result classmethod","text":"

During orchestration, ORM states can be instantiated prior to inserting results into the artifact table, and the data field will not be eagerly loaded. In these cases, SQLAlchemy will attempt to lazily load the relationship, which will fail when called within a synchronous pydantic method.

This method will construct a State object from an ORM model without a loaded artifact and attach data passed using the with_data argument to the data field.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
@classmethod\ndef from_orm_without_result(\n    cls,\n    orm_state: Union[\n        \"prefect.server.database.orm_models.ORMFlowRunState\",\n        \"prefect.server.database.orm_models.ORMTaskRunState\",\n    ],\n    with_data: Optional[Any] = None,\n):\n\"\"\"\n    During orchestration, ORM states can be instantiated prior to inserting results\n    into the artifact table and the `data` field will not be eagerly loaded. In\n    these cases, sqlalchemy will attept to lazily load the the relationship, which\n    will fail when called within a synchronous pydantic method.\n\n    This method will construct a `State` object from an ORM model without a loaded\n    artifact and attach data passed using the `with_data` argument to the `data`\n    field.\n    \"\"\"\n\n    field_keys = cls.schema()[\"properties\"].keys()\n    state_data = {\n        field: getattr(orm_state, field, None)\n        for field in field_keys\n        if field != \"data\"\n    }\n    state_data[\"data\"] = with_data\n    return cls(**state_data)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.State.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateBaseModel","title":"StateBaseModel","text":"

Bases: IDBaseModel

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
class StateBaseModel(IDBaseModel):\n    def orm_dict(\n        self, *args, shallow: bool = False, json_compatible: bool = False, **kwargs\n    ) -> dict:\n\"\"\"\n        This method is used as a convenience method for constructing fixtues by first\n        building a `State` schema object and converting it into an ORM-compatible\n        format. Because the `data` field is not writable on ORM states, this method\n        omits the `data` field entirely for the purposes of constructing an ORM model.\n        If state data is required, an artifact must be created separately.\n        \"\"\"\n\n        schema_dict = self.dict(\n            *args, shallow=shallow, json_compatible=json_compatible, **kwargs\n        )\n        # remove the data field in order to construct a state ORM model\n        schema_dict.pop(\"data\", None)\n        return schema_dict\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateBaseModel.json","title":"json","text":"

Returns a representation of the model as JSON.

If include_secrets=True, then SecretStr and SecretBytes objects are fully revealed. Otherwise they are obfuscated.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/_internal/schemas/bases.py
def json(self, *args, include_secrets: bool = False, **kwargs) -> str:\n\"\"\"\n    Returns a representation of the model as JSON.\n\n    If `include_secrets=True`, then `SecretStr` and `SecretBytes` objects are\n    fully revealed. Otherwise they are obfuscated.\n\n    \"\"\"\n    if include_secrets:\n        if \"encoder\" in kwargs:\n            raise ValueError(\n                \"Alternative encoder provided; can not set encoder for\"\n                \" SecretFields.\"\n            )\n        kwargs[\"encoder\"] = partial(\n            custom_pydantic_encoder,\n            {SecretField: lambda v: v.get_secret_value() if v else None},\n        )\n    return super().json(*args, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateBaseModel.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateType","title":"StateType","text":"

Bases: AutoEnum

Enumeration of state types.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
class StateType(AutoEnum):\n\"\"\"Enumeration of state types.\"\"\"\n\n    SCHEDULED = AutoEnum.auto()\n    PENDING = AutoEnum.auto()\n    RUNNING = AutoEnum.auto()\n    COMPLETED = AutoEnum.auto()\n    FAILED = AutoEnum.auto()\n    CANCELLED = AutoEnum.auto()\n    CRASHED = AutoEnum.auto()\n    PAUSED = AutoEnum.auto()\n    CANCELLING = AutoEnum.auto()\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.StateType.auto","title":"auto staticmethod","text":"

Exposes enum.auto() to avoid requiring a second import to use AutoEnum

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/utilities/collections.py
@staticmethod\ndef auto():\n\"\"\"\n    Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`\n    \"\"\"\n    return auto()\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.AwaitingRetry","title":"AwaitingRetry","text":"

Convenience function for creating AwaitingRetry states.

Returns:

Name Type Description State State

an AwaitingRetry state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def AwaitingRetry(\n    scheduled_time: datetime.datetime = None, cls: Type[State] = State, **kwargs\n) -> State:\n\"\"\"Convenience function for creating `AwaitingRetry` states.\n\n    Returns:\n        State: a AwaitingRetry state\n    \"\"\"\n    return Scheduled(\n        cls=cls, scheduled_time=scheduled_time, name=\"AwaitingRetry\", **kwargs\n    )\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Cancelled","title":"Cancelled","text":"

Convenience function for creating Cancelled states.

Returns:

Name Type Description State State

a Cancelled state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Cancelled(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Cancelled` states.\n\n    Returns:\n        State: a Cancelled state\n    \"\"\"\n    return cls(type=StateType.CANCELLED, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Cancelling","title":"Cancelling","text":"

Convenience function for creating Cancelling states.

Returns:

Name Type Description State State

a Cancelling state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Cancelling(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Cancelling` states.\n\n    Returns:\n        State: a Cancelling state\n    \"\"\"\n    return cls(type=StateType.CANCELLING, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Completed","title":"Completed","text":"

Convenience function for creating Completed states.

Returns:

Name Type Description State State

a Completed state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Completed(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Completed` states.\n\n    Returns:\n        State: a Completed state\n    \"\"\"\n    return cls(type=StateType.COMPLETED, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Crashed","title":"Crashed","text":"

Convenience function for creating Crashed states.

Returns:

Name Type Description State State

a Crashed state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Crashed(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Crashed` states.\n\n    Returns:\n        State: a Crashed state\n    \"\"\"\n    return cls(type=StateType.CRASHED, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Failed","title":"Failed","text":"

Convenience function for creating Failed states.

Returns:

Name Type Description State State

a Failed state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Failed(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Failed` states.\n\n    Returns:\n        State: a Failed state\n    \"\"\"\n    return cls(type=StateType.FAILED, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Late","title":"Late","text":"

Convenience function for creating Late states.

Returns:

Name Type Description State State

a Late state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Late(\n    scheduled_time: datetime.datetime = None, cls: Type[State] = State, **kwargs\n) -> State:\n\"\"\"Convenience function for creating `Late` states.\n\n    Returns:\n        State: a Late state\n    \"\"\"\n    return Scheduled(cls=cls, scheduled_time=scheduled_time, name=\"Late\", **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Paused","title":"Paused","text":"

Convenience function for creating Paused states.

Returns:

Name Type Description State State

a Paused state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Paused(\n    cls: Type[State] = State,\n    timeout_seconds: int = None,\n    pause_expiration_time: datetime.datetime = None,\n    reschedule: bool = False,\n    pause_key: str = None,\n    **kwargs,\n) -> State:\n\"\"\"Convenience function for creating `Paused` states.\n\n    Returns:\n        State: a Paused state\n    \"\"\"\n    state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n\n    if state_details.pause_timeout:\n        raise ValueError(\"An extra pause timeout was provided in state_details\")\n\n    if pause_expiration_time is not None and timeout_seconds is not None:\n        raise ValueError(\n            \"Cannot supply both a pause_expiration_time and timeout_seconds\"\n        )\n\n    if pause_expiration_time is None and timeout_seconds is None:\n        pass\n    else:\n        state_details.pause_timeout = pause_expiration_time or (\n            pendulum.now(\"UTC\") + pendulum.Duration(seconds=timeout_seconds)\n        )\n\n    state_details.pause_reschedule = reschedule\n    state_details.pause_key = pause_key\n\n    return cls(type=StateType.PAUSED, state_details=state_details, **kwargs)\n
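An illustrative sketch of the mutually exclusive pause options (the durations are arbitrary): either a relative timeout or an explicit expiration time may be given, but not both.

import pendulum

from prefect.server.schemas.states import Paused

p1 = Paused(timeout_seconds=300)  # pause expires five minutes from now
p2 = Paused(pause_expiration_time=pendulum.now("UTC").add(hours=1))
# Paused(timeout_seconds=300, pause_expiration_time=...) would raise ValueError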
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Pending","title":"Pending","text":"

Convenience function for creating Pending states.

Returns:

Name Type Description State State

a Pending state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Pending(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Pending` states.\n\n    Returns:\n        State: a Pending state\n    \"\"\"\n    return cls(type=StateType.PENDING, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Retrying","title":"Retrying","text":"

Convenience function for creating Retrying states.

Returns:

Name Type Description State State

a Retrying state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Retrying(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Retrying` states.\n\n    Returns:\n        State: a Retrying state\n    \"\"\"\n    return cls(type=StateType.RUNNING, name=\"Retrying\", **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Running","title":"Running","text":"

Convenience function for creating Running states.

Returns:

Name Type Description State State

a Running state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Running(cls: Type[State] = State, **kwargs) -> State:\n\"\"\"Convenience function for creating `Running` states.\n\n    Returns:\n        State: a Running state\n    \"\"\"\n    return cls(type=StateType.RUNNING, **kwargs)\n
"},{"location":"api-ref/server/schemas/states/#prefect.server.schemas.states.Scheduled","title":"Scheduled","text":"

Convenience function for creating Scheduled states.

Returns:

Name Type Description State State

a Scheduled state

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/schemas/states.py
def Scheduled(\n    scheduled_time: datetime.datetime = None, cls: Type[State] = State, **kwargs\n) -> State:\n\"\"\"Convenience function for creating `Scheduled` states.\n\n    Returns:\n        State: a Scheduled state\n    \"\"\"\n    # NOTE: `scheduled_time` must come first for backwards compatibility\n\n    state_details = StateDetails.parse_obj(kwargs.pop(\"state_details\", {}))\n    if scheduled_time is None:\n        scheduled_time = pendulum.now(\"UTC\")\n    elif state_details.scheduled_time:\n        raise ValueError(\"An extra scheduled_time was provided in state_details\")\n    state_details.scheduled_time = scheduled_time\n\n    return cls(type=StateType.SCHEDULED, state_details=state_details, **kwargs)\n
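A hedged sketch using the convenience constructors above (messages and times are illustrative): each returns a fully formed State of the corresponding type.

import pendulum

from prefect.server.schemas.states import Completed, Failed, Scheduled

done = Completed(message="all rows processed")
oops = Failed(message="upstream API returned 500")
soon = Scheduled(scheduled_time=pendulum.now("UTC").add(minutes=10))

print(done.type, oops.type)               # both terminal state types
print(soon.state_details.scheduled_time)  # the scheduled_time stored on state_details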
"},{"location":"api-ref/server/services/late_runs/","title":"server.services.late_runs","text":""},{"location":"api-ref/server/services/late_runs/#prefect.server.services.late_runs","title":"prefect.server.services.late_runs","text":"

The MarkLateRuns service. Responsible for putting flow runs in a Late state if they are not started on time. The threshold for a late run can be configured by changing PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS.

"},{"location":"api-ref/server/services/late_runs/#prefect.server.services.late_runs.MarkLateRuns","title":"MarkLateRuns","text":"

Bases: LoopService

A simple loop service responsible for identifying flow runs that are \"late\".

A flow run is defined as \"late\" if it has not started within a certain amount of time after its scheduled start time. The exact amount is configurable in Prefect REST API Settings.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/late_runs.py
class MarkLateRuns(LoopService):\n\"\"\"\n    A simple loop service responsible for identifying flow runs that are \"late\".\n\n    A flow run is defined as \"late\" if has not scheduled within a certain amount\n    of time after its scheduled start time. The exact amount is configurable in\n    Prefect REST API Settings.\n    \"\"\"\n\n    def __init__(self, loop_seconds: float = None, **kwargs):\n        super().__init__(\n            loop_seconds=loop_seconds\n            or PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS.value(),\n            **kwargs,\n        )\n\n        # mark runs late if they are this far past their expected start time\n        self.mark_late_after: datetime.timedelta = (\n            PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS.value()\n        )\n\n        # query for this many runs to mark as late at once\n        self.batch_size = 400\n\n    @inject_db\n    async def run_once(self, db: PrefectDBInterface):\n\"\"\"\n        Mark flow runs as late by:\n\n        - Querying for flow runs in a scheduled state that are Scheduled to start in the past\n        - For any runs past the \"late\" threshold, setting the flow run state to a new `Late` state\n        \"\"\"\n        scheduled_to_start_before = pendulum.now(\"UTC\").subtract(\n            seconds=self.mark_late_after.total_seconds()\n        )\n\n        while True:\n            async with db.session_context(begin_transaction=True) as session:\n                query = self._get_select_late_flow_runs_query(\n                    scheduled_to_start_before=scheduled_to_start_before, db=db\n                )\n\n                result = await session.execute(query)\n                runs = result.all()\n\n                # mark each run as late\n                for run in runs:\n                    await self._mark_flow_run_as_late(session=session, flow_run=run)\n\n                # if no runs were found, exit the loop\n                if len(runs) < self.batch_size:\n                    break\n\n        self.logger.info(\"Finished monitoring for late runs.\")\n\n    @inject_db\n    def _get_select_late_flow_runs_query(\n        self, scheduled_to_start_before: datetime.datetime, db: PrefectDBInterface\n    ):\n\"\"\"\n        Returns a sqlalchemy query for late flow runs.\n\n        Args:\n            scheduled_to_start_before: the maximum next scheduled start time of\n                scheduled flow runs to consider in the returned query\n        \"\"\"\n        query = (\n            sa.select(\n                db.FlowRun.id,\n                db.FlowRun.next_scheduled_start_time,\n            )\n            .where(\n                # The next scheduled start time is in the past, including the mark late\n                # after buffer\n                (db.FlowRun.next_scheduled_start_time <= scheduled_to_start_before),\n                db.FlowRun.state_type == states.StateType.SCHEDULED,\n                db.FlowRun.state_name == \"Scheduled\",\n            )\n            .limit(self.batch_size)\n        )\n        return query\n\n    async def _mark_flow_run_as_late(\n        self, session: AsyncSession, flow_run: PrefectDBInterface.FlowRun\n    ) -> None:\n\"\"\"\n        Mark a flow run as late.\n\n        Pass-through method for overrides.\n        \"\"\"\n        await models.flow_runs.set_flow_run_state(\n            session=session,\n            flow_run_id=flow_run.id,\n            state=states.Late(scheduled_time=flow_run.next_scheduled_start_time),\n            force=True,\n        )\n
"},{"location":"api-ref/server/services/late_runs/#prefect.server.services.late_runs.MarkLateRuns.run_once","title":"run_once async","text":"

Mark flow runs as late by querying for flow runs in a scheduled state that are scheduled to start in the past and, for any runs past the \"late\" threshold, setting the flow run state to a new Late state.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/late_runs.py
@inject_db\nasync def run_once(self, db: PrefectDBInterface):\n\"\"\"\n    Mark flow runs as late by:\n\n    - Querying for flow runs in a scheduled state that are Scheduled to start in the past\n    - For any runs past the \"late\" threshold, setting the flow run state to a new `Late` state\n    \"\"\"\n    scheduled_to_start_before = pendulum.now(\"UTC\").subtract(\n        seconds=self.mark_late_after.total_seconds()\n    )\n\n    while True:\n        async with db.session_context(begin_transaction=True) as session:\n            query = self._get_select_late_flow_runs_query(\n                scheduled_to_start_before=scheduled_to_start_before, db=db\n            )\n\n            result = await session.execute(query)\n            runs = result.all()\n\n            # mark each run as late\n            for run in runs:\n                await self._mark_flow_run_as_late(session=session, flow_run=run)\n\n            # if no runs were found, exit the loop\n            if len(runs) < self.batch_size:\n                break\n\n    self.logger.info(\"Finished monitoring for late runs.\")\n
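A hedged way to exercise the service manually (this assumes a reachable Prefect database configured via the current settings): like any LoopService, it can be run for exactly one iteration with start(loops=1).

import asyncio

from prefect.server.services.late_runs import MarkLateRuns

asyncio.run(MarkLateRuns(handle_signals=False).start(loops=1))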
"},{"location":"api-ref/server/services/loop_service/","title":"server.services.loop_service","text":""},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service","title":"prefect.server.services.loop_service","text":"

The base class for all Prefect REST API loop services.

"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService","title":"LoopService","text":"

Loop services are relatively lightweight maintenance routines that need to run periodically.

This class makes it straightforward to design and integrate them. Users only need to define the run_once coroutine to describe the behavior of the service on each loop.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/loop_service.py
class LoopService:\n\"\"\"\n    Loop services are relatively lightweight maintenance routines that need to run periodically.\n\n    This class makes it straightforward to design and integrate them. Users only need to\n    define the `run_once` coroutine to describe the behavior of the service on each loop.\n    \"\"\"\n\n    loop_seconds = 60\n\n    def __init__(self, loop_seconds: float = None, handle_signals: bool = True):\n\"\"\"\n        Args:\n            loop_seconds (float): if provided, overrides the loop interval\n                otherwise specified as a class variable\n            handle_signals (bool): if True (default), SIGINT and SIGTERM are\n                gracefully intercepted and shut down the running service.\n        \"\"\"\n        if loop_seconds:\n            self.loop_seconds = loop_seconds  # seconds between runs\n        self._should_stop = False  # flag for whether the service should stop running\n        self._is_running = False  # flag for whether the service is running\n        self.name = type(self).__name__\n        self.logger = get_logger(f\"server.services.{self.name.lower()}\")\n\n        if handle_signals:\n            signal.signal(signal.SIGINT, self._stop)\n            signal.signal(signal.SIGTERM, self._stop)\n\n    @inject_db\n    async def _on_start(self, db: PrefectDBInterface) -> None:\n\"\"\"\n        Called prior to running the service\n        \"\"\"\n        # reset the _should_stop flag\n        self._should_stop = False\n        # set the _is_running flag\n        self._is_running = True\n\n    async def _on_stop(self) -> None:\n\"\"\"\n        Called after running the service\n        \"\"\"\n        # reset the _is_running flag\n        self._is_running = False\n\n    async def start(self, loops=None) -> None:\n\"\"\"\n        Run the service `loops` time. 
Pass loops=None to run forever.\n\n        Args:\n            loops (int, optional): the number of loops to run before exiting.\n        \"\"\"\n\n        await self._on_start()\n\n        i = 0\n        while not self._should_stop:\n            start_time = pendulum.now(\"UTC\")\n\n            try:\n                self.logger.debug(f\"About to run {self.name}...\")\n                await self.run_once()\n\n            except NotImplementedError as exc:\n                raise exc from None\n\n            # if an error is raised, log and continue\n            except Exception as exc:\n                self.logger.error(f\"Unexpected error in: {repr(exc)}\", exc_info=True)\n\n            end_time = pendulum.now(\"UTC\")\n\n            # if service took longer than its loop interval, log a warning\n            # that the interval might be too short\n            if (end_time - start_time).total_seconds() > self.loop_seconds:\n                self.logger.warning(\n                    f\"{self.name} took {(end_time-start_time).total_seconds()} seconds\"\n                    \" to run, which is longer than its loop interval of\"\n                    f\" {self.loop_seconds} seconds.\"\n                )\n\n            # check if early stopping was requested\n            i += 1\n            if loops is not None and i == loops:\n                self.logger.debug(f\"{self.name} exiting after {loops} loop(s).\")\n                await self.stop(block=False)\n\n            # next run is every \"loop seconds\" after each previous run *started*.\n            # note that if the loop took unexpectedly long, the \"next_run\" time\n            # might be in the past, which will result in an instant start\n            next_run = max(\n                start_time.add(seconds=self.loop_seconds), pendulum.now(\"UTC\")\n            )\n            self.logger.debug(f\"Finished running {self.name}. Next run at {next_run}\")\n\n            # check the `_should_stop` flag every 1 seconds until the next run time is reached\n            while pendulum.now(\"UTC\") < next_run and not self._should_stop:\n                await asyncio.sleep(\n                    min(1, (next_run - pendulum.now(\"UTC\")).total_seconds())\n                )\n\n        await self._on_stop()\n\n    async def stop(self, block=True) -> None:\n\"\"\"\n        Gracefully stops a running LoopService and optionally blocks until the\n        service stops.\n\n        Args:\n            block (bool): if True, blocks until the service is\n                finished running. Otherwise it requests a stop and returns but\n                the service may still be running a final loop.\n\n        \"\"\"\n        self._stop()\n\n        if block:\n            # if block=True, sleep until the service stops running,\n            # but no more than `loop_seconds` to avoid a deadlock\n            with anyio.move_on_after(self.loop_seconds):\n                while self._is_running:\n                    await asyncio.sleep(0.1)\n\n            # if the service is still running after `loop_seconds`, something's wrong\n            if self._is_running:\n                self.logger.warning(\n                    f\"`stop(block=True)` was called on {self.name} but more than one\"\n                    f\" loop interval ({self.loop_seconds} seconds) has passed. This\"\n                    \" usually means something is wrong. 
If `stop()` was called from\"\n                    \" inside the loop service, use `stop(block=False)` instead.\"\n                )\n\n    def _stop(self, *_) -> None:\n\"\"\"\n        Private, synchronous method for setting the `_should_stop` flag. Takes arbitrary\n        arguments so it can be used as a signal handler.\n        \"\"\"\n        self._should_stop = True\n\n    async def run_once(self) -> None:\n\"\"\"\n        Represents one loop of the service.\n\n        Users should override this method.\n\n        To actually run the service once, call `LoopService().start(loops=1)`\n        instead of `LoopService().run_once()`, because this method will not invoke setup\n        and teardown methods properly.\n        \"\"\"\n        raise NotImplementedError(\"LoopService subclasses must implement this method.\")\n
"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService.run_once","title":"run_once async","text":"

Represents one loop of the service.

Users should override this method.

To actually run the service once, call LoopService().start(loops=1) instead of LoopService().run_once(), because this method will not invoke setup and teardown methods properly.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/loop_service.py
async def run_once(self) -> None:\n\"\"\"\n    Represents one loop of the service.\n\n    Users should override this method.\n\n    To actually run the service once, call `LoopService().start(loops=1)`\n    instead of `LoopService().run_once()`, because this method will not invoke setup\n    and teardown methods properly.\n    \"\"\"\n    raise NotImplementedError(\"LoopService subclasses must implement this method.\")\n
"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService.start","title":"start async","text":"

Run the service loops times. Pass loops=None to run forever.

Parameters:

Name Type Description Default loops int

the number of loops to run before exiting.

None Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/loop_service.py
async def start(self, loops=None) -> None:\n\"\"\"\n    Run the service `loops` time. Pass loops=None to run forever.\n\n    Args:\n        loops (int, optional): the number of loops to run before exiting.\n    \"\"\"\n\n    await self._on_start()\n\n    i = 0\n    while not self._should_stop:\n        start_time = pendulum.now(\"UTC\")\n\n        try:\n            self.logger.debug(f\"About to run {self.name}...\")\n            await self.run_once()\n\n        except NotImplementedError as exc:\n            raise exc from None\n\n        # if an error is raised, log and continue\n        except Exception as exc:\n            self.logger.error(f\"Unexpected error in: {repr(exc)}\", exc_info=True)\n\n        end_time = pendulum.now(\"UTC\")\n\n        # if service took longer than its loop interval, log a warning\n        # that the interval might be too short\n        if (end_time - start_time).total_seconds() > self.loop_seconds:\n            self.logger.warning(\n                f\"{self.name} took {(end_time-start_time).total_seconds()} seconds\"\n                \" to run, which is longer than its loop interval of\"\n                f\" {self.loop_seconds} seconds.\"\n            )\n\n        # check if early stopping was requested\n        i += 1\n        if loops is not None and i == loops:\n            self.logger.debug(f\"{self.name} exiting after {loops} loop(s).\")\n            await self.stop(block=False)\n\n        # next run is every \"loop seconds\" after each previous run *started*.\n        # note that if the loop took unexpectedly long, the \"next_run\" time\n        # might be in the past, which will result in an instant start\n        next_run = max(\n            start_time.add(seconds=self.loop_seconds), pendulum.now(\"UTC\")\n        )\n        self.logger.debug(f\"Finished running {self.name}. Next run at {next_run}\")\n\n        # check the `_should_stop` flag every 1 seconds until the next run time is reached\n        while pendulum.now(\"UTC\") < next_run and not self._should_stop:\n            await asyncio.sleep(\n                min(1, (next_run - pendulum.now(\"UTC\")).total_seconds())\n            )\n\n    await self._on_stop()\n
"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.LoopService.stop","title":"stop async","text":"

Gracefully stops a running LoopService and optionally blocks until the service stops.

Parameters:

Name Type Description Default block bool

if True, blocks until the service is finished running. Otherwise it requests a stop and returns but the service may still be running a final loop.

True Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/loop_service.py
async def stop(self, block=True) -> None:\n\"\"\"\n    Gracefully stops a running LoopService and optionally blocks until the\n    service stops.\n\n    Args:\n        block (bool): if True, blocks until the service is\n            finished running. Otherwise it requests a stop and returns but\n            the service may still be running a final loop.\n\n    \"\"\"\n    self._stop()\n\n    if block:\n        # if block=True, sleep until the service stops running,\n        # but no more than `loop_seconds` to avoid a deadlock\n        with anyio.move_on_after(self.loop_seconds):\n            while self._is_running:\n                await asyncio.sleep(0.1)\n\n        # if the service is still running after `loop_seconds`, something's wrong\n        if self._is_running:\n            self.logger.warning(\n                f\"`stop(block=True)` was called on {self.name} but more than one\"\n                f\" loop interval ({self.loop_seconds} seconds) has passed. This\"\n                \" usually means something is wrong. If `stop()` was called from\"\n                \" inside the loop service, use `stop(block=False)` instead.\"\n            )\n
"},{"location":"api-ref/server/services/loop_service/#prefect.server.services.loop_service.run_multiple_services","title":"run_multiple_services async","text":"

Only one signal handler can be active at a time, so this function takes a list of loop services and runs all of them with a global signal handler.
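A minimal sketch of running two of the services documented in this reference under one global signal handler; it assumes a reachable Prefect database configured through the usual settings:

import asyncio\n\nfrom prefect.server.services.late_runs import MarkLateRuns\nfrom prefect.server.services.loop_service import run_multiple_services\nfrom prefect.server.services.scheduler import Scheduler\n\nif __name__ == \"__main__\":\n    # both services loop until SIGINT or SIGTERM is received\n    asyncio.run(run_multiple_services([Scheduler(), MarkLateRuns()]))\n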

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/loop_service.py
async def run_multiple_services(loop_services: List[LoopService]):\n\"\"\"\n    Only one signal handler can be active at a time, so this function takes a list\n    of loop services and runs all of them with a global signal handler.\n    \"\"\"\n\n    def stop_all_services(self, *_):\n        for service in loop_services:\n            service._stop()\n\n    signal.signal(signal.SIGINT, stop_all_services)\n    signal.signal(signal.SIGTERM, stop_all_services)\n    await asyncio.gather(*[service.start() for service in loop_services])\n
"},{"location":"api-ref/server/services/scheduler/","title":"server.services.scheduler","text":""},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler","title":"prefect.server.services.scheduler","text":"

The Scheduler service.

"},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.RecentDeploymentsScheduler","title":"RecentDeploymentsScheduler","text":"

Bases: Scheduler

A scheduler that only schedules deployments that were updated very recently. This scheduler can run on a tight loop and ensure that runs from newly-created or updated deployments are rapidly scheduled without having to wait for the \"main\" scheduler to complete its loop.

Note that scheduling is idempotent, so it's ok for this scheduler to attempt to schedule the same deployments as the main scheduler. Its purpose is to accelerate scheduling for any deployments that users are interacting with.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/scheduler.py
class RecentDeploymentsScheduler(Scheduler):\n\"\"\"\n    A scheduler that only schedules deployments that were updated very recently.\n    This scheduler can run on a tight loop and ensure that runs from\n    newly-created or updated deployments are rapidly scheduled without having to\n    wait for the \"main\" scheduler to complete its loop.\n\n    Note that scheduling is idempotent, so its ok for this scheduler to attempt\n    to schedule the same deployments as the main scheduler. It's purpose is to\n    accelerate scheduling for any deployments that users are interacting with.\n    \"\"\"\n\n    # this scheduler runs on a tight loop\n    loop_seconds = 5\n\n    @inject_db\n    def _get_select_deployments_to_schedule_query(self, db: PrefectDBInterface):\n\"\"\"\n        Returns a sqlalchemy query for selecting deployments to schedule\n        \"\"\"\n        query = (\n            sa.select(db.Deployment.id)\n            .where(\n                db.Deployment.is_schedule_active.is_(True),\n                db.Deployment.schedule.is_not(None),\n                # use a slightly larger window than the loop interval to pick up\n                # any deployments that were created *while* the scheduler was\n                # last running (assuming the scheduler takes less than one\n                # second to run). Scheduling is idempotent so picking up schedules\n                # multiple times is not a concern.\n                db.Deployment.updated\n                >= pendulum.now(\"UTC\").subtract(seconds=self.loop_seconds + 1),\n            )\n            .order_by(db.Deployment.id)\n            .limit(self.deployment_batch_size)\n        )\n        return query\n
"},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.Scheduler","title":"Scheduler","text":"

Bases: LoopService

A loop service that schedules flow runs from deployments.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/scheduler.py
class Scheduler(LoopService):\n\"\"\"\n    A loop service that schedules flow runs from deployments.\n    \"\"\"\n\n    # the main scheduler takes its loop interval from\n    # PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS\n    loop_seconds = None\n\n    def __init__(self, loop_seconds: float = None, **kwargs):\n        super().__init__(\n            loop_seconds=(\n                loop_seconds\n                or self.loop_seconds\n                or PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS.value()\n            ),\n            **kwargs,\n        )\n        self.deployment_batch_size: int = (\n            PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE.value()\n        )\n        self.max_runs: int = PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS.value()\n        self.min_runs: int = PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS.value()\n        self.max_scheduled_time: datetime.timedelta = (\n            PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME.value()\n        )\n        self.min_scheduled_time: datetime.timedelta = (\n            PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME.value()\n        )\n        self.insert_batch_size = (\n            PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE.value()\n        )\n\n    @inject_db\n    async def run_once(self, db: PrefectDBInterface):\n\"\"\"\n        Schedule flow runs by:\n\n        - Querying for deployments with active schedules\n        - Generating the next set of flow runs based on each deployments schedule\n        - Inserting all scheduled flow runs into the database\n\n        All inserted flow runs are committed to the database at the termination of the\n        loop.\n        \"\"\"\n        total_inserted_runs = 0\n\n        last_id = None\n        while True:\n            async with db.session_context(begin_transaction=False) as session:\n                query = self._get_select_deployments_to_schedule_query()\n\n                # use cursor based pagination\n                if last_id:\n                    query = query.where(db.Deployment.id > last_id)\n\n                result = await session.execute(query)\n                deployment_ids = result.scalars().unique().all()\n\n                # collect runs across all deployments\n                try:\n                    runs_to_insert = await self._collect_flow_runs(\n                        session=session, deployment_ids=deployment_ids\n                    )\n                except TryAgain:\n                    continue\n\n            # bulk insert the runs based on batch size setting\n            for batch in batched_iterable(runs_to_insert, self.insert_batch_size):\n                async with db.session_context(begin_transaction=True) as session:\n                    inserted_runs = await self._insert_scheduled_flow_runs(\n                        session=session, runs=batch\n                    )\n                    total_inserted_runs += len(inserted_runs)\n\n            # if this is the last page of deployments, exit the loop\n            if len(deployment_ids) < self.deployment_batch_size:\n                break\n            else:\n                # record the last deployment ID\n                last_id = deployment_ids[-1]\n\n        self.logger.info(f\"Scheduled {total_inserted_runs} runs.\")\n\n    @inject_db\n    def _get_select_deployments_to_schedule_query(self, db: PrefectDBInterface):\n\"\"\"\n        Returns a sqlalchemy query for selecting deployments to schedule.\n\n        The query gets the IDs of any deployments with:\n\n            - an active 
schedule\n            - EITHER:\n                - fewer than `min_runs` auto-scheduled runs\n                - OR the max scheduled time is less than `max_scheduled_time` in the future\n        \"\"\"\n        now = pendulum.now(\"UTC\")\n        query = (\n            sa.select(db.Deployment.id)\n            .select_from(db.Deployment)\n            # TODO: on Postgres, this could be replaced with a lateral join that\n            # sorts by `next_scheduled_start_time desc` and limits by\n            # `self.min_runs` for a ~ 50% speedup. At the time of writing,\n            # performance of this universal query appears to be fast enough that\n            # this optimization is not worth maintaining db-specific queries\n            .join(\n                db.FlowRun,\n                # join on matching deployments, only picking up future scheduled runs\n                sa.and_(\n                    db.Deployment.id == db.FlowRun.deployment_id,\n                    db.FlowRun.state_type == StateType.SCHEDULED,\n                    db.FlowRun.next_scheduled_start_time >= now,\n                    db.FlowRun.auto_scheduled.is_(True),\n                ),\n                isouter=True,\n            )\n            .where(\n                db.Deployment.is_schedule_active.is_(True),\n                db.Deployment.schedule.is_not(None),\n            )\n            .group_by(db.Deployment.id)\n            # having EITHER fewer than three runs OR runs not scheduled far enough out\n            .having(\n                sa.or_(\n                    sa.func.count(db.FlowRun.next_scheduled_start_time) < self.min_runs,\n                    sa.func.max(db.FlowRun.next_scheduled_start_time)\n                    < now + self.min_scheduled_time,\n                )\n            )\n            .order_by(db.Deployment.id)\n            .limit(self.deployment_batch_size)\n        )\n        return query\n\n    async def _collect_flow_runs(\n        self,\n        session: sa.orm.Session,\n        deployment_ids: List[UUID],\n    ) -> List[Dict]:\n        runs_to_insert = []\n        for deployment_id in deployment_ids:\n            now = pendulum.now(\"UTC\")\n            # guard against erroneously configured schedules\n            try:\n                runs_to_insert.extend(\n                    await self._generate_scheduled_flow_runs(\n                        session=session,\n                        deployment_id=deployment_id,\n                        start_time=now,\n                        end_time=now + self.max_scheduled_time,\n                        min_time=self.min_scheduled_time,\n                        min_runs=self.min_runs,\n                        max_runs=self.max_runs,\n                    )\n                )\n            except Exception:\n                self.logger.exception(\n                    f\"Error scheduling deployment {deployment_id!r}.\",\n                )\n            finally:\n                connection = await session.connection()\n                if connection.invalidated:\n                    # If the error we handled above was the kind of database error that\n                    # causes underlying transaction to rollback and the connection to\n                    # become invalidated, rollback this session.  
Errors that may cause\n                    # this are connection drops, database restarts, and things of the\n                    # sort.\n                    #\n                    # This rollback _does not rollback a transaction_, since that has\n                    # actually already happened due to the error above.  It brings the\n                    # Python session in sync with underlying connection so that when we\n                    # exec the outer with block, the context manager will not attempt to\n                    # commit the session.\n                    #\n                    # Then, raise TryAgain to break out of these nested loops, back to\n                    # the outer loop, where we'll begin a new transaction with\n                    # session.begin() in the next loop iteration.\n                    await session.rollback()\n                    raise TryAgain()\n        return runs_to_insert\n\n    @inject_db\n    async def _generate_scheduled_flow_runs(\n        self,\n        session: sa.orm.Session,\n        deployment_id: UUID,\n        start_time: datetime.datetime,\n        end_time: datetime.datetime,\n        min_time: datetime.timedelta,\n        min_runs: int,\n        max_runs: int,\n        db: PrefectDBInterface,\n    ) -> List[Dict]:\n\"\"\"\n        Given a `deployment_id` and schedule params, generates a list of flow run\n        objects and associated scheduled states that represent scheduled flow runs.\n\n        Pass-through method for overrides.\n\n\n        Args:\n            session: a database session\n            deployment_id: the id of the deployment to schedule\n            start_time: the time from which to start scheduling runs\n            end_time: runs will be scheduled until at most this time\n            min_time: runs will be scheduled until at least this far in the future\n            min_runs: a minimum amount of runs to schedule\n            max_runs: a maximum amount of runs to schedule\n\n        This function will generate the minimum number of runs that satisfy the min\n        and max times, and the min and max counts. Specifically, the following order\n        will be respected:\n\n            - Runs will be generated starting on or after the `start_time`\n            - No more than `max_runs` runs will be generated\n            - No runs will be generated after `end_time` is reached\n            - At least `min_runs` runs will be generated\n            - Runs will be generated until at least `start_time + min_time` is reached\n\n        \"\"\"\n        return await models.deployments._generate_scheduled_flow_runs(\n            session=session,\n            deployment_id=deployment_id,\n            start_time=start_time,\n            end_time=end_time,\n            min_time=min_time,\n            min_runs=min_runs,\n            max_runs=max_runs,\n        )\n\n    @inject_db\n    async def _insert_scheduled_flow_runs(\n        self,\n        session: sa.orm.Session,\n        runs: List[Dict],\n        db: PrefectDBInterface,\n    ) -> List[UUID]:\n\"\"\"\n        Given a list of flow runs to schedule, as generated by\n        `_generate_scheduled_flow_runs`, inserts them into the database. Note this is a\n        separate method to facilitate batch operations on many scheduled runs.\n\n        Pass-through method for overrides.\n        \"\"\"\n        return await models.deployments._insert_scheduled_flow_runs(\n            session=session, runs=runs\n        )\n
"},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.Scheduler.run_once","title":"run_once async","text":"

Schedule flow runs by querying for deployments with active schedules, generating the next set of flow runs based on each deployment's schedule, and inserting all scheduled flow runs into the database.

All inserted flow runs are committed to the database at the termination of the loop.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/scheduler.py
@inject_db\nasync def run_once(self, db: PrefectDBInterface):\n\"\"\"\n    Schedule flow runs by:\n\n    - Querying for deployments with active schedules\n    - Generating the next set of flow runs based on each deployments schedule\n    - Inserting all scheduled flow runs into the database\n\n    All inserted flow runs are committed to the database at the termination of the\n    loop.\n    \"\"\"\n    total_inserted_runs = 0\n\n    last_id = None\n    while True:\n        async with db.session_context(begin_transaction=False) as session:\n            query = self._get_select_deployments_to_schedule_query()\n\n            # use cursor based pagination\n            if last_id:\n                query = query.where(db.Deployment.id > last_id)\n\n            result = await session.execute(query)\n            deployment_ids = result.scalars().unique().all()\n\n            # collect runs across all deployments\n            try:\n                runs_to_insert = await self._collect_flow_runs(\n                    session=session, deployment_ids=deployment_ids\n                )\n            except TryAgain:\n                continue\n\n        # bulk insert the runs based on batch size setting\n        for batch in batched_iterable(runs_to_insert, self.insert_batch_size):\n            async with db.session_context(begin_transaction=True) as session:\n                inserted_runs = await self._insert_scheduled_flow_runs(\n                    session=session, runs=batch\n                )\n                total_inserted_runs += len(inserted_runs)\n\n        # if this is the last page of deployments, exit the loop\n        if len(deployment_ids) < self.deployment_batch_size:\n            break\n        else:\n            # record the last deployment ID\n            last_id = deployment_ids[-1]\n\n    self.logger.info(f\"Scheduled {total_inserted_runs} runs.\")\n
"},{"location":"api-ref/server/services/scheduler/#prefect.server.services.scheduler.TryAgain","title":"TryAgain","text":"

Bases: Exception

Internal control-flow exception used to retry the Scheduler's main loop

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/services/scheduler.py
class TryAgain(Exception):\n\"\"\"Internal control-flow exception used to retry the Scheduler's main loop\"\"\"\n
"},{"location":"api-ref/server/utilities/database/","title":"server.utilities.database","text":""},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database","title":"prefect.server.utilities.database","text":"

Utilities for interacting with Prefect REST API database and ORM layer.

Prefect supports both SQLite and Postgres. Many of these utilities allow the Prefect REST API to seamlessly switch between the two.

"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.GenerateUUID","title":"GenerateUUID","text":"

Bases: FunctionElement

Platform-independent UUID default generator. Note that the actual functionality for this class is specified in the compiles-decorated functions below.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class GenerateUUID(FunctionElement):\n\"\"\"\n    Platform-independent UUID default generator.\n    Note the actual functionality for this class is specified in the\n    `compiles`-decorated functions below\n    \"\"\"\n\n    name = \"uuid_default\"\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.JSON","title":"JSON","text":"

Bases: TypeDecorator

JSON type that returns SQLAlchemy's dialect-specific JSON types, where possible. Uses generic JSON otherwise.

The \"base\" type is postgresql.JSONB to expose useful methods prior to SQL compilation

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class JSON(TypeDecorator):\n\"\"\"\n    JSON type that returns SQLAlchemy's dialect-specific JSON types, where\n    possible. Uses generic JSON otherwise.\n\n    The \"base\" type is postgresql.JSONB to expose useful methods prior\n    to SQL compilation\n    \"\"\"\n\n    impl = postgresql.JSONB\n    cache_ok = True\n\n    def load_dialect_impl(self, dialect):\n        if dialect.name == \"postgresql\":\n            return dialect.type_descriptor(postgresql.JSONB(none_as_null=True))\n        elif dialect.name == \"sqlite\":\n            return dialect.type_descriptor(sqlite.JSON(none_as_null=True))\n        else:\n            return dialect.type_descriptor(sa.JSON(none_as_null=True))\n\n    def process_bind_param(self, value, dialect):\n\"\"\"Prepares the given value to be used as a JSON field in a parameter binding\"\"\"\n        if not value:\n            return value\n\n        # PostgreSQL does not support the floating point extrema values `NaN`,\n        # `-Infinity`, or `Infinity`\n        # https://www.postgresql.org/docs/current/datatype-json.html#JSON-TYPE-MAPPING-TABLE\n        #\n        # SQLite supports storing and retrieving full JSON values that include\n        # `NaN`, `-Infinity`, or `Infinity`, but any query that requires SQLite to parse\n        # the value (like `json_extract`) will fail.\n        #\n        # Replace any `NaN`, `-Infinity`, or `Infinity` values with `None` in the\n        # returned value.  See more about `parse_constant` at\n        # https://docs.python.org/3/library/json.html#json.load.\n        return json.loads(json.dumps(value), parse_constant=lambda c: None)\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.Pydantic","title":"Pydantic","text":"

Bases: TypeDecorator

A pydantic type that converts inserted parameters to json and converts read values to the pydantic type.
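For example, a column declared with this type round-trips a typed Pydantic model; the model and table below are hypothetical:

import pydantic\nimport sqlalchemy as sa\nfrom sqlalchemy.orm import declarative_base\n\nfrom prefect.server.utilities.database import Pydantic\n\nBase = declarative_base()\n\n\nclass Settings(pydantic.BaseModel):\n    retries: int = 0\n    timeout_seconds: float = 10.0\n\n\nclass Job(Base):\n    # hypothetical table; reads return a fully hydrated Settings instance\n    __tablename__ = \"job\"\n\n    id = sa.Column(sa.Integer, primary_key=True)\n    settings = sa.Column(Pydantic(Settings), default=lambda: Settings())\n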

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class Pydantic(TypeDecorator):\n\"\"\"\n    A pydantic type that converts inserted parameters to\n    json and converts read values to the pydantic type.\n    \"\"\"\n\n    impl = JSON\n    cache_ok = True\n\n    def __init__(self, pydantic_type, sa_column_type=None):\n        super().__init__()\n        self._pydantic_type = pydantic_type\n        if sa_column_type is not None:\n            self.impl = sa_column_type\n\n    def process_bind_param(self, value, dialect):\n        if value is None:\n            return None\n        # parse the value to ensure it complies with the schema\n        # (this will raise validation errors if not)\n        value = pydantic.parse_obj_as(self._pydantic_type, value)\n        # sqlalchemy requires the bind parameter's value to be a python-native\n        # collection of JSON-compatible objects. we achieve that by dumping the\n        # value to a json string using the pydantic JSON encoder and re-parsing\n        # it into a python-native form.\n        return json.loads(json.dumps(value, default=pydantic.json.pydantic_encoder))\n\n    def process_result_value(self, value, dialect):\n        if value is not None:\n            # load the json object into a fully hydrated typed object\n            return pydantic.parse_obj_as(self._pydantic_type, value)\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.Timestamp","title":"Timestamp","text":"

Bases: TypeDecorator

TypeDecorator that ensures that timestamps have a timezone.

For SQLite, all timestamps are converted to UTC (since they are stored as naive timestamps without timezones) and recovered as UTC.
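In practice this means values bound to a Timestamp column must be timezone-aware; a naive datetime raises a ValueError at bind time. A hypothetical sketch:

import pendulum\nimport sqlalchemy as sa\nfrom sqlalchemy.orm import declarative_base\n\nfrom prefect.server.utilities.database import Timestamp\n\nBase = declarative_base()\n\n\nclass Event(Base):\n    # hypothetical table with a timezone-aware timestamp column\n    __tablename__ = \"event\"\n\n    id = sa.Column(sa.Integer, primary_key=True)\n    occurred = sa.Column(Timestamp, nullable=False)\n\n\naware = pendulum.now(\"UTC\")  # acceptable: carries a timezone\nnaive = pendulum.naive(2023, 1, 1)  # binding this raises ValueError(\"Timestamps must have a timezone.\")\n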

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class Timestamp(TypeDecorator):\n\"\"\"TypeDecorator that ensures that timestamps have a timezone.\n\n    For SQLite, all timestamps are converted to UTC (since they are stored\n    as naive timestamps without timezones) and recovered as UTC.\n    \"\"\"\n\n    impl = sa.TIMESTAMP(timezone=True)\n    cache_ok = True\n\n    def load_dialect_impl(self, dialect):\n        if dialect.name == \"postgresql\":\n            return dialect.type_descriptor(postgresql.TIMESTAMP(timezone=True))\n        elif dialect.name == \"sqlite\":\n            return dialect.type_descriptor(\n                sqlite.DATETIME(\n                    # SQLite is very particular about datetimes, and performs all comparisons\n                    # as alphanumeric comparisons without regard for actual timestamp\n                    # semantics or timezones. Therefore, it's important to have uniform\n                    # and sortable datetime representations. The default is an ISO8601-compatible\n                    # string with NO time zone and a space (\" \") delimeter between the date\n                    # and the time. The below settings can be used to add a \"T\" delimiter but\n                    # will require all other sqlite datetimes to be set similarly, including\n                    # the custom default value for datetime columns and any handwritten SQL\n                    # formed with `strftime()`.\n                    #\n                    # store with \"T\" separator for time\n                    # storage_format=(\n                    #     \"%(year)04d-%(month)02d-%(day)02d\"\n                    #     \"T%(hour)02d:%(minute)02d:%(second)02d.%(microsecond)06d\"\n                    # ),\n                    # handle ISO 8601 with \"T\" or \" \" as the time separator\n                    # regexp=r\"(\\d+)-(\\d+)-(\\d+)[T ](\\d+):(\\d+):(\\d+).(\\d+)\",\n                )\n            )\n        else:\n            return dialect.type_descriptor(sa.TIMESTAMP(timezone=True))\n\n    def process_bind_param(self, value, dialect):\n        if value is None:\n            return None\n        else:\n            if value.tzinfo is None:\n                raise ValueError(\"Timestamps must have a timezone.\")\n            elif dialect.name == \"sqlite\":\n                return pendulum.instance(value).in_timezone(\"UTC\")\n            else:\n                return value\n\n    def process_result_value(self, value, dialect):\n        # retrieve timestamps in their native timezone (or UTC)\n        if value is not None:\n            return pendulum.instance(value).in_timezone(\"utc\")\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.UUID","title":"UUID","text":"

Bases: TypeDecorator

Platform-independent UUID type.

Uses PostgreSQL's UUID type, otherwise uses CHAR(36), storing as stringified hex values with hyphens.
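The UUID type is typically paired with GenerateUUID as a server-side default, as in this hypothetical table:

import sqlalchemy as sa\nfrom sqlalchemy.orm import declarative_base\n\nfrom prefect.server.utilities.database import UUID, GenerateUUID\n\nBase = declarative_base()\n\n\nclass Widget(Base):\n    # hypothetical table: native UUID on Postgres, CHAR(36) elsewhere,\n    # with a database-generated default value\n    __tablename__ = \"widget\"\n\n    id = sa.Column(UUID(), primary_key=True, server_default=GenerateUUID())\n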

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class UUID(TypeDecorator):\n\"\"\"\n    Platform-independent UUID type.\n\n    Uses PostgreSQL's UUID type, otherwise uses\n    CHAR(36), storing as stringified hex values with\n    hyphens.\n    \"\"\"\n\n    impl = TypeEngine\n    cache_ok = True\n\n    def load_dialect_impl(self, dialect):\n        if dialect.name == \"postgresql\":\n            return dialect.type_descriptor(postgresql.UUID())\n        else:\n            return dialect.type_descriptor(CHAR(36))\n\n    def process_bind_param(self, value, dialect):\n        if value is None:\n            return None\n        elif dialect.name == \"postgresql\":\n            return str(value)\n        elif isinstance(value, uuid.UUID):\n            return str(value)\n        else:\n            return str(uuid.UUID(value))\n\n    def process_result_value(self, value, dialect):\n        if value is None:\n            return value\n        else:\n            if not isinstance(value, uuid.UUID):\n                value = uuid.UUID(value)\n            return value\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.date_add","title":"date_add","text":"

Bases: FunctionElement

Platform-independent way to add a date and an interval.
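A short sketch of building a dialect-independent one-hour-from-now expression that can be compared against any Timestamp column in a query filter:

import datetime\n\nimport pendulum\n\nfrom prefect.server.utilities.database import date_add\n\n# a dialect-independent SQL expression for one hour after the current time;\n# it can be used in query filters against Timestamp columns\none_hour_from_now = date_add(pendulum.now(\"UTC\"), datetime.timedelta(hours=1))\n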

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class date_add(FunctionElement):\n\"\"\"\n    Platform-independent way to add a date and an interval.\n    \"\"\"\n\n    type = Timestamp()\n    name = \"date_add\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, dt, interval):\n        self.dt = dt\n        self.interval = interval\n        super().__init__()\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.date_diff","title":"date_diff","text":"

Bases: FunctionElement

Platform-independent difference of dates. Computes d1 - d2.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class date_diff(FunctionElement):\n\"\"\"\n    Platform-independent difference of dates. Computes d1 - d2.\n    \"\"\"\n\n    type = sa.Interval()\n    name = \"date_diff\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, d1, d2):\n        self.d1 = d1\n        self.d2 = d2\n        super().__init__()\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.interval_add","title":"interval_add","text":"

Bases: FunctionElement

Platform-independent way to add two intervals.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class interval_add(FunctionElement):\n\"\"\"\n    Platform-independent way to add two intervals.\n    \"\"\"\n\n    type = sa.Interval()\n    name = \"interval_add\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, i1, i2):\n        self.i1 = i1\n        self.i2 = i2\n        super().__init__()\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_contains","title":"json_contains","text":"

Bases: FunctionElement

Platform-independent json_contains operator that tests whether the left expression contains the right expression.

On postgres this is equivalent to the @> containment operator. https://www.postgresql.org/docs/current/functions-json.html
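A sketch of filtering on JSON containment; the table and column are hypothetical:

import sqlalchemy as sa\n\nfrom prefect.server.utilities.database import JSON, json_contains\n\n# hypothetical table with a JSON column\nexample = sa.Table(\n    \"example\",\n    sa.MetaData(),\n    sa.Column(\"id\", sa.Integer, primary_key=True),\n    sa.Column(\"labels\", JSON),\n)\n\n# rows whose labels JSON contains {\"env\": \"prod\"}; compiles to @> on Postgres\nquery = sa.select(example).where(json_contains(example.c.labels, {\"env\": \"prod\"}))\n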

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class json_contains(FunctionElement):\n\"\"\"\n    Platform independent json_contains operator, tests if the\n    `left` expression contains the `right` expression.\n\n    On postgres this is equivalent to the @> containment operator.\n    https://www.postgresql.org/docs/current/functions-json.html\n    \"\"\"\n\n    type = BOOLEAN\n    name = \"json_contains\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, left, right):\n        self.left = left\n        self.right = right\n        super().__init__()\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_has_all_keys","title":"json_has_all_keys","text":"

Bases: FunctionElement

Platform independent json_has_all_keys operator.

On postgres this is equivalent to the ?& existence operator. https://www.postgresql.org/docs/current/functions-json.html

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class json_has_all_keys(FunctionElement):\n\"\"\"Platform independent json_has_all_keys operator.\n\n    On postgres this is equivalent to the ?& existence operator.\n    https://www.postgresql.org/docs/current/functions-json.html\n    \"\"\"\n\n    type = BOOLEAN\n    name = \"json_has_all_keys\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, json_expr, values: List):\n        self.json_expr = json_expr\n        if isinstance(values, list) and not all(isinstance(v, str) for v in values):\n            raise ValueError(\n                \"json_has_all_key values must be strings if provided as a literal list\"\n            )\n        self.values = values\n        super().__init__()\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.json_has_any_key","title":"json_has_any_key","text":"

Bases: FunctionElement

Platform independent json_has_any_key operator.

On postgres this is equivalent to the ?| existence operator. https://www.postgresql.org/docs/current/functions-json.html
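Both key-existence helpers take a JSON expression and a list of string keys; json_has_any_key is shown in the hypothetical sketch below, and json_has_all_keys is used the same way:

import sqlalchemy as sa\n\nfrom prefect.server.utilities.database import JSON, json_has_any_key\n\n# hypothetical table with a JSON column of tags\ntagged = sa.Table(\n    \"tagged\",\n    sa.MetaData(),\n    sa.Column(\"id\", sa.Integer, primary_key=True),\n    sa.Column(\"tags\", JSON),\n)\n\n# rows whose tags include at least one of the given keys; compiles to ?| on Postgres\nquery = sa.select(tagged).where(json_has_any_key(tagged.c.tags, [\"prod\", \"staging\"]))\n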

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class json_has_any_key(FunctionElement):\n\"\"\"\n    Platform independent json_has_any_key operator.\n\n    On postgres this is equivalent to the ?| existence operator.\n    https://www.postgresql.org/docs/current/functions-json.html\n    \"\"\"\n\n    type = BOOLEAN\n    name = \"json_has_any_key\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = False\n\n    def __init__(self, json_expr, values: List):\n        self.json_expr = json_expr\n        if not all(isinstance(v, str) for v in values):\n            raise ValueError(\"json_has_any_key values must be strings\")\n        self.values = values\n        super().__init__()\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.now","title":"now","text":"

Bases: FunctionElement

Platform-independent \"now\" generator.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
class now(FunctionElement):\n\"\"\"\n    Platform-independent \"now\" generator.\n    \"\"\"\n\n    type = Timestamp()\n    name = \"now\"\n    # see https://docs.sqlalchemy.org/en/14/core/compiler.html#enabling-caching-support-for-custom-constructs\n    inherit_cache = True\n
"},{"location":"api-ref/server/utilities/database/#prefect.server.utilities.database.get_dialect","title":"get_dialect","text":"

Get the dialect of a session, engine, or connection url.

Primary use case is figuring out whether the Prefect REST API is communicating with SQLite or Postgres.

Example
import prefect.settings\nfrom prefect.server.utilities.database import get_dialect\n\ndialect = get_dialect(PREFECT_API_DATABASE_CONNECTION_URL.value())\nif dialect == \"sqlite\":\n    print(\"Using SQLite!\")\nelse:\n    print(\"Using Postgres!\")\n
Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/database.py
def get_dialect(\n    obj: Union[str, sa.orm.Session, sa.engine.Engine],\n) -> sa.engine.Dialect:\n\"\"\"\n    Get the dialect of a session, engine, or connection url.\n\n    Primary use case is figuring out whether the Prefect REST API is communicating with\n    SQLite or Postgres.\n\n    Example:\n        ```python\n        import prefect.settings\n        from prefect.server.utilities.database import get_dialect\n\n        dialect = get_dialect(PREFECT_API_DATABASE_CONNECTION_URL.value())\n        if dialect == \"sqlite\":\n            print(\"Using SQLite!\")\n        else:\n            print(\"Using Postgres!\")\n        ```\n    \"\"\"\n    if isinstance(obj, sa.orm.Session):\n        url = obj.bind.url\n    elif isinstance(obj, sa.engine.Engine):\n        url = obj.url\n    else:\n        url = sa.engine.url.make_url(obj)\n\n    return url.get_dialect()\n
"},{"location":"api-ref/server/utilities/schemas/","title":"server.utilities.schemas","text":""},{"location":"api-ref/server/utilities/schemas/#prefect.server.utilities.schemas","title":"prefect.server.utilities.schemas","text":"

Utilities for creating and working with Prefect REST API schemas.

"},{"location":"api-ref/server/utilities/schemas/#prefect.server.utilities.schemas.SubclassMethodMixin","title":"SubclassMethodMixin","text":"Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
class SubclassMethodMixin:\n    @classmethod\n    def subclass(\n        cls: Type[B],\n        name: str = None,\n        include_fields: List[str] = None,\n        exclude_fields: List[str] = None,\n    ) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n        See `pydantic_subclass()`.\n\n        Args:\n            name (str, optional): a name for the subclass\n            include_fields (List[str], optional): fields to include\n            exclude_fields (List[str], optional): fields to exclude\n\n        Returns:\n            BaseModel: a subclass of this class\n        \"\"\"\n        return pydantic_subclass(\n            base=cls,\n            name=name,\n            include_fields=include_fields,\n            exclude_fields=exclude_fields,\n        )\n
"},{"location":"api-ref/server/utilities/schemas/#prefect.server.utilities.schemas.SubclassMethodMixin.subclass","title":"subclass classmethod","text":"

Creates a subclass of this model containing only the specified fields.

See pydantic_subclass().

Parameters:

Name Type Description Default name str

a name for the subclass

None include_fields List[str]

fields to include

None exclude_fields List[str]

fields to exclude

None

Returns:

Name Type Description BaseModel Type[B]

a subclass of this class
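For example, a model that mixes in this class can derive a narrowed schema directly; the model below is hypothetical:

import pydantic\n\nfrom prefect.server.utilities.schemas import SubclassMethodMixin\n\n\nclass Parent(pydantic.BaseModel, SubclassMethodMixin):\n    # hypothetical model combining the mixin with a pydantic base\n    x: int = 1\n    y: int = 2\n\n\nChild = Parent.subclass(name=\"Child\", exclude_fields=[\"y\"])\nassert hasattr(Child(), \"x\")\nassert not hasattr(Child(), \"y\")\n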

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
@classmethod\ndef subclass(\n    cls: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of this model containing only the specified fields.\n\n    See `pydantic_subclass()`.\n\n    Args:\n        name (str, optional): a name for the subclass\n        include_fields (List[str], optional): fields to include\n        exclude_fields (List[str], optional): fields to exclude\n\n    Returns:\n        BaseModel: a subclass of this class\n    \"\"\"\n    return pydantic_subclass(\n        base=cls,\n        name=name,\n        include_fields=include_fields,\n        exclude_fields=exclude_fields,\n    )\n
"},{"location":"api-ref/server/utilities/schemas/#prefect.server.utilities.schemas.pydantic_subclass","title":"pydantic_subclass","text":"

Creates a subclass of a Pydantic model that excludes certain fields. Pydantic models use the __fields__ attribute of their parent class to determine inherited fields, so to create a subclass without certain fields, we temporarily remove those fields from the parent __fields__ and use create_model to dynamically generate a new subclass.

Parameters:

Name Type Description Default base pydantic.BaseModel

a Pydantic BaseModel

required name str

a name for the subclass. If not provided it will have the same name as the base class.

None include_fields List[str]

a set of field names to include. If None, all fields are included.

None exclude_fields List[str]

a list of field names to exclude. If None, no fields are excluded.

None

Returns:

Type Description Type[B]

pydantic.BaseModel: a new model subclass that contains only the specified fields.

Example

To subclass a model with a subset of fields:

class Parent(pydantic.BaseModel):\n    x: int = 1\n    y: int = 2\n\nChild = pydantic_subclass(Parent, 'Child', exclude_fields=['y'])\nassert hasattr(Child(), 'x')\nassert not hasattr(Child(), 'y')\n

To subclass a model with a subset of fields but include a new field:

class Child(pydantic_subclass(Parent, exclude_fields=['y'])):\n    z: int = 3\n\nassert hasattr(Child(), 'x')\nassert not hasattr(Child(), 'y')\nassert hasattr(Child(), 'z')\n

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/schemas.py
def pydantic_subclass(\n    base: Type[B],\n    name: str = None,\n    include_fields: List[str] = None,\n    exclude_fields: List[str] = None,\n) -> Type[B]:\n\"\"\"Creates a subclass of a Pydantic model that excludes certain fields.\n    Pydantic models use the __fields__ attribute of their parent class to\n    determine inherited fields, so to create a subclass without fields, we\n    temporarily remove those fields from the parent __fields__ and use\n    `create_model` to dynamically generate a new subclass.\n\n    Args:\n        base (pydantic.BaseModel): a Pydantic BaseModel\n        name (str): a name for the subclass. If not provided\n            it will have the same name as the base class.\n        include_fields (List[str]): a set of field names to include.\n            If `None`, all fields are included.\n        exclude_fields (List[str]): a list of field names to exclude.\n            If `None`, no fields are excluded.\n\n    Returns:\n        pydantic.BaseModel: a new model subclass that contains only the specified fields.\n\n    Example:\n        To subclass a model with a subset of fields:\n        ```python\n        class Parent(pydantic.BaseModel):\n            x: int = 1\n            y: int = 2\n\n        Child = pydantic_subclass(Parent, 'Child', exclude_fields=['y'])\n        assert hasattr(Child(), 'x')\n        assert not hasattr(Child(), 'y')\n        ```\n\n        To subclass a model with a subset of fields but include a new field:\n        ```python\n        class Child(pydantic_subclass(Parent, exclude_fields=['y'])):\n            z: int = 3\n\n        assert hasattr(Child(), 'x')\n        assert not hasattr(Child(), 'y')\n        assert hasattr(Child(), 'z')\n        ```\n    \"\"\"\n\n    # collect field names\n    field_names = set(include_fields or base.__fields__)\n    excluded_fields = set(exclude_fields or [])\n    if field_names.difference(base.__fields__):\n        raise ValueError(\n            \"Included fields not found on base class: \"\n            f\"{field_names.difference(base.__fields__)}\"\n        )\n    elif excluded_fields.difference(base.__fields__):\n        raise ValueError(\n            \"Excluded fields not found on base class: \"\n            f\"{excluded_fields.difference(base.__fields__)}\"\n        )\n    field_names.difference_update(excluded_fields)\n\n    # create a new class that inherits from `base` but only contains the specified\n    # pydantic __fields__\n    new_cls = type(\n        name or base.__name__,\n        (base,),\n        {\n            \"__fields__\": {\n                k: copy.copy(v) for k, v in base.__fields__.items() if k in field_names\n            }\n        },\n    )\n\n    return new_cls\n
"},{"location":"api-ref/server/utilities/server/","title":"server.utilities.server","text":""},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server","title":"prefect.server.utilities.server","text":"

Utilities for the Prefect REST API server.

"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.OrionAPIRoute","title":"OrionAPIRoute","text":"

Bases: PrefectAPIRoute

Deprecated. Use PrefectAPIRoute instead.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/server.py
@deprecated_callable(start_date=\"Feb 2023\", help=\"Use `PrefectAPIRoute` instead.\")\nclass OrionAPIRoute(PrefectAPIRoute):\n\"\"\"\n    Deprecated. Use `PrefectAPIRoute` instead.\n    \"\"\"\n
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.OrionRouter","title":"OrionRouter","text":"

Bases: PrefectRouter

Deprecated. Use PrefectRouter instead.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/server.py
@deprecated_callable(start_date=\"Feb 2023\", help=\"Use `PrefectRouter` instead.\")\nclass OrionRouter(PrefectRouter):\n\"\"\"\n    Deprecated. Use `PrefectRouter` instead.\n    \"\"\"\n
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.PrefectAPIRoute","title":"PrefectAPIRoute","text":"

Bases: APIRoute

A FastAPI APIRoute subclass that attaches to each request an async stack which exits before a response is returned.

Requests already have request.scope['fastapi_astack'] which is an async stack for the full scope of the request. This stack is used for managing contexts of FastAPI dependencies. If we want to close a dependency before the request is complete (i.e. before returning a response to the user), we need a stack with a different scope. This extension adds this stack at request.state.response_scoped_stack.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/server.py
class PrefectAPIRoute(APIRoute):\n\"\"\"\n    A FastAPIRoute class which attaches an async stack to requests that exits before\n    a response is returned.\n\n    Requests already have `request.scope['fastapi_astack']` which is an async stack for\n    the full scope of the request. This stack is used for managing contexts of FastAPI\n    dependencies. If we want to close a dependency before the request is complete\n    (i.e. before returning a response to the user), we need a stack with a different\n    scope. This extension adds this stack at `request.state.response_scoped_stack`.\n    \"\"\"\n\n    def get_route_handler(self) -> Callable[[Request], Coroutine[Any, Any, Response]]:\n        default_handler = super().get_route_handler()\n\n        async def handle_response_scoped_depends(request: Request) -> Response:\n            # Create a new stack scoped to exit before the response is returned\n            async with AsyncExitStack() as stack:\n                request.state.response_scoped_stack = stack\n                response = await default_handler(request)\n\n            return response\n\n        return handle_response_scoped_depends\n
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.PrefectRouter","title":"PrefectRouter","text":"

Bases: APIRouter

A base class for Prefect REST API routers.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/server.py
class PrefectRouter(APIRouter):\n\"\"\"\n    A base class for Prefect REST API routers.\n    \"\"\"\n\n    def __init__(self, **kwargs: Any) -> None:\n        kwargs.setdefault(\"route_class\", PrefectAPIRoute)\n        super().__init__(**kwargs)\n\n    def add_api_route(\n        self, path: str, endpoint: Callable[..., Any], **kwargs: Any\n    ) -> None:\n\"\"\"\n        Add an API route.\n\n        For routes that return content and have not specified a `response_model`,\n        use return type annotation to infer the response model.\n\n        For routes that return No-Content status codes, explicitly set\n        a `response_class` to ensure nothing is returned in the response body.\n        \"\"\"\n        if kwargs.get(\"status_code\") == status.HTTP_204_NO_CONTENT:\n            # any routes that return No-Content status codes must\n            # explicitly set a response_class that will handle status codes\n            # and not return anything in the body\n            kwargs[\"response_class\"] = Response\n        if kwargs.get(\"response_model\") is None:\n            kwargs[\"response_model\"] = get_type_hints(endpoint).get(\"return\")\n        return super().add_api_route(path, endpoint, **kwargs)\n
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.PrefectRouter.add_api_route","title":"add_api_route","text":"

Add an API route.

For routes that return content and have not specified a response_model, use return type annotation to infer the response model.

For routes that return No-Content status codes, explicitly set a response_class to ensure nothing is returned in the response body.
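For instance, with the hypothetical router below, the first route infers its response model from the return annotation and the second returns an empty 204 body:

from fastapi import status\nfrom pydantic import BaseModel\n\nfrom prefect.server.utilities.server import PrefectRouter\n\nrouter = PrefectRouter(prefix=\"/widgets\")\n\n\nclass Widget(BaseModel):\n    name: str\n\n\n@router.get(\"/{name}\")\nasync def read_widget(name: str) -> Widget:\n    # response_model is inferred from the return annotation\n    return Widget(name=name)\n\n\n@router.delete(\"/{name}\", status_code=status.HTTP_204_NO_CONTENT)\nasync def delete_widget(name: str):\n    # a bare Response class is used, so nothing is returned in the body\n    ...\n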

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/server.py
def add_api_route(\n    self, path: str, endpoint: Callable[..., Any], **kwargs: Any\n) -> None:\n\"\"\"\n    Add an API route.\n\n    For routes that return content and have not specified a `response_model`,\n    use return type annotation to infer the response model.\n\n    For routes that return No-Content status codes, explicitly set\n    a `response_class` to ensure nothing is returned in the response body.\n    \"\"\"\n    if kwargs.get(\"status_code\") == status.HTTP_204_NO_CONTENT:\n        # any routes that return No-Content status codes must\n        # explicitly set a response_class that will handle status codes\n        # and not return anything in the body\n        kwargs[\"response_class\"] = Response\n    if kwargs.get(\"response_model\") is None:\n        kwargs[\"response_model\"] = get_type_hints(endpoint).get(\"return\")\n    return super().add_api_route(path, endpoint, **kwargs)\n
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.method_paths_from_routes","title":"method_paths_from_routes","text":"

Generate a set of strings describing the given routes in the format: <method> <path>

For example, \"GET /logs/\"

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/server.py
def method_paths_from_routes(routes: Iterable[APIRoute]) -> Set[str]:\n\"\"\"\n    Generate a set of strings describing the given routes in the format: <method> <path>\n\n    For example, \"GET /logs/\"\n    \"\"\"\n    method_paths = set()\n    for route in routes:\n        for method in route.methods:\n            method_paths.add(f\"{method} {route.path}\")\n\n    return method_paths\n
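
A small, hypothetical usage sketch; the /logs/ endpoint below is illustrative only:

from fastapi import APIRouter\n\nfrom prefect.server.utilities.server import method_paths_from_routes\n\nrouter = APIRouter()\n\n@router.get(\"/logs/\")\nasync def read_logs() -> list:\n    return []\n\nprint(method_paths_from_routes(router.routes))  # {'GET /logs/'}\n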
"},{"location":"api-ref/server/utilities/server/#prefect.server.utilities.server.response_scoped_dependency","title":"response_scoped_dependency","text":"

Ensure that this dependency closes before the response is returned to the client. By default, FastAPI closes dependencies after sending the response.

Uses an async stack that is exited before the response is returned. This is particularly useful for database sessions which must be committed before the client can do more work.

Do not use a response-scoped dependency within a FastAPI background task.

Background tasks run after FastAPI sends the response, so a response-scoped dependency will already be closed. Use a normal FastAPI dependency instead.

Parameters:

dependency (Callable, required): An async callable. FastAPI dependencies may still be used.

Returns:

A wrapped dependency which will push the dependency context manager onto a stack when called.

Source code in /home/runner/work/docs/docs/prefect_source/src/prefect/server/utilities/server.py
def response_scoped_dependency(dependency: Callable):\n\"\"\"\n    Ensure that this dependency closes before the response is returned to the client. By\n    default, FastAPI closes dependencies after sending the response.\n\n    Uses an async stack that is exited before the response is returned. This is\n    particularly useful for database sesssions which must be committed before the client\n    can do more work.\n\n    NOTE: Do not use a response-scoped dependency within a FastAPI background task.\n          Background tasks run after FastAPI sends the response, so a response-scoped\n          dependency will already be closed. Use a normal FastAPI dependency instead.\n\n    Args:\n        dependency: An async callable. FastAPI dependencies may still be used.\n\n    Returns:\n        A wrapped `dependency` which will push the `dependency` context manager onto\n        a stack when called.\n    \"\"\"\n    signature = inspect.signature(dependency)\n\n    async def wrapper(*args, request: Request, **kwargs):\n        # Replicate FastAPI behavior of auto-creating a context manager\n        if inspect.isasyncgenfunction(dependency):\n            context_manager = asynccontextmanager(dependency)\n        else:\n            context_manager = dependency\n\n        # Ensure request is provided if requested\n        if \"request\" in signature.parameters:\n            kwargs[\"request\"] = request\n\n        # Enter the route handler provided stack that is closed before responding,\n        # return the value yielded by the wrapped dependency\n        return await request.state.response_scoped_stack.enter_async_context(\n            context_manager(*args, **kwargs)\n        )\n\n    # Ensure that the signature includes `request: Request` to ensure that FastAPI will\n    # inject the request as a dependency; maintain the old signature so those depends\n    # work\n    request_parameter = inspect.signature(wrapper).parameters[\"request\"]\n    functools.update_wrapper(wrapper, dependency)\n\n    if \"request\" not in signature.parameters:\n        new_parameters = signature.parameters.copy()\n        new_parameters[\"request\"] = request_parameter\n        wrapper.__signature__ = signature.replace(\n            parameters=tuple(new_parameters.values())\n        )\n\n    return wrapper\n
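
A minimal sketch of declaring a response-scoped dependency, assuming routes are served by PrefectAPIRoute (which PrefectRouter sets by default); the get_session helper and /things/ endpoint below are hypothetical, not Prefect's actual dependencies:

from fastapi import Depends\n\nfrom prefect.server.utilities.server import PrefectRouter, response_scoped_dependency\n\n@response_scoped_dependency\nasync def get_session():\n    session = object()  # stand-in for opening e.g. a database session\n    try:\n        yield session\n    finally:\n        pass  # commit/close here; this runs before the response is sent\n\nrouter = PrefectRouter()\n\n@router.post(\"/things/\")\nasync def create_thing(session=Depends(get_session)) -> dict:\n    return {\"ok\": True}\n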
"},{"location":"cloud/","title":"Welcome to Prefect Cloud","text":"

Prefect Cloud is a workflow orchestration platform. Prefect Cloud provides all the capabilities of the Prefect server and UI in a hosted environment, plus additional features such as automations, workspaces, and organizations.

Getting Started with Prefect Cloud

Ready to jump right in and start running with Prefect Cloud? See the Cloud Getting Started Guide to create a workspace, configure a local execution environment, and write your first Prefect Cloud-monitored flow run.

Prefect Cloud includes all the features in the open-source Prefect server plus the following:

Prefect Cloud features

Features only available on Prefect Cloud include:

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#user-accounts","title":"User accounts","text":"

When you sign up for Prefect Cloud, a personal account is automatically provisioned for you. A personal account gives you access to profile settings where you can view and administer your:

As a personal account owner, you can create a workspace and invite collaborators to your workspace.

Organizations in Prefect Cloud enable you to invite users to collaborate in your workspaces with the ability to set role-based access controls (RBAC) for organization members. Organizations may also configure service accounts with API keys for non-user access to the Prefect Cloud API.

Prefect Cloud plans for teams of every size

See the Prefect Cloud plans for details on options for individual users and teams.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#workspaces","title":"Workspaces","text":"

A workspace is an isolated environment within Prefect Cloud for your flows, deployments, and block configuration. See the Workspaces documentation for more information about configuring and using workspaces.

Each workspace keeps track of its own:

When you first log into Prefect Cloud and create your workspace, it will most likely be empty. Don't Panic \u2014 you just haven't run any flows tracked by this workspace yet. See the Prefect Cloud Quickstart to configure a local execution environment and start tracking flow runs in Prefect Cloud.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#events","title":"Events","text":"

Prefect Cloud provides an event feed where you can observe, search, and filter the events generated across your workspace.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#automations","title":"Automations","text":"

Prefect Cloud automations provide notification capabilities beyond those available in the open-source Prefect server. Automations enable you to configure triggers and actions that can kick off flow runs, pause deployments, or send custom notifications in response to real-time monitoring events.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#organizations","title":"Organizations","text":"

A Prefect Cloud organization is a type of account available on Prefect Cloud that enables more extensive and granular control over workspace collaboration. Within an organization account you can:

See the Organizations documentation for more information about managing users, service accounts, and workspaces in a Prefect Cloud organization.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#service-accounts","title":"Service accounts","text":"

Service accounts enable you to create a Prefect Cloud API key that is not associated with a user account. Service accounts are typically used to configure API access for running agents or executing flow runs on remote infrastructure.

See the service accounts documentation for more information about creating and managing service accounts in a Prefect Cloud organization.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#roles-and-custom-permissions","title":"Roles and custom permissions","text":"

Role-based access control (RBAC) functionality in Prefect Cloud enables you to assign users granular permissions to perform certain activities within an organization or a workspace.

See the role-based access controls (RBAC) documentation for more information about managing user roles in a Prefect Cloud organization.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#single-sign-on-sso","title":"Single Sign-on (SSO)","text":"

Prefect Cloud's Organization and Enterprise plans offer single sign-on (SSO) authentication integration with your team\u2019s identity provider. SSO integration can be set up with identity providers that support OIDC and SAML.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#audit-log","title":"Audit Log","text":"

Prefect Cloud's Organization and Enterprise plans offer Audit Log compliance and transparency tools. Audit logs provide a chronological record of activities performed by users in your organization, allowing you to monitor detailed actions for security and compliance purposes.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#prefect-cloud-rest-api","title":"Prefect Cloud REST API","text":"

The Prefect REST API is used for communicating data from Prefect clients to Prefect Cloud or a local Prefect server so that orchestration can be performed. This API is mainly consumed by Prefect clients like the Prefect Python Client or the Prefect UI.

Prefect Cloud REST API interactive documentation

Prefect Cloud REST API documentation is available at https://app.prefect.cloud/api/docs.

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/#start-using-prefect-cloud","title":"Start using Prefect Cloud","text":"

To create an account or sign in with an existing Prefect Cloud account, go to https://app.prefect.cloud/.

Then follow the steps in our Prefect Cloud Quickstart to create a workspace, configure a local execution environment, and start running workflows with Prefect Cloud.

Need help?

Get your questions answered with a Prefect product advocate by Booking A Rubber Duck!

","tags":["UI","dashboard","orchestration","Prefect Cloud","accounts","teams","workspaces","PaaS"],"boost":2},{"location":"cloud/automations/","title":"Automations","text":"

Automations in Prefect Cloud enable you to configure actions that Prefect executes automatically based on trigger conditions related to your flows and work pools.

Using triggers and actions you can automatically kick off flow runs, pause deployments, or send custom notifications in response to real-time monitoring events.

Automations are only available in Prefect Cloud

Notifications in an open-source Prefect server provide a subset of the notification message-sending features available in Automations.

","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/automations/#automations-overview","title":"Automations overview","text":"

The Automations page provides an overview of all configured automations for your workspace.

Selecting the toggle next to an automation pauses execution of the automation.

The button next to the toggle provides commands to copy the automation ID, edit the automation, or delete the automation.

Select the name of an automation to view Details about it and relevant Events.

","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/automations/#create-an-automation","title":"Create an automation","text":"

On the Automations page, select the + icon to create a new automation. You'll be prompted to configure:

","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/automations/#triggers","title":"Triggers","text":"

Triggers specify the conditions under which your action should be performed. Triggers can be of several types, including triggers based on:

Automations API

The automations API enables further programmatic customization of trigger and action policies based on arbitrary events.

Importantly, triggers can be configured not only in reaction to events, but also proactively: to trigger in the absence of an event you expect to see.

For example, in the case of flow run state change triggers, you might expect production flows to finish in no longer than thirty minutes. But transient infrastructure or network issues could cause your flow to get \u201cstuck\u201d in a running state. A trigger could kick off an action if the flow stays in a running state for more than 30 minutes. This action could be on the flow itself, such as canceling or restarting it, or it could take the form of a notification so someone can take manual remediation steps.

Work queue health

A work queue is \"unhealthy\" if it has not been polled in over 60 seconds OR if it has one or more late runs.
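
As an illustrative sketch (not Prefect's actual implementation), that health rule could be expressed as:

from datetime import datetime, timedelta, timezone\n\ndef is_work_queue_healthy(last_polled: datetime, late_run_count: int) -> bool:\n    # healthy only if polled within the last 60 seconds and there are no late runs\n    recently_polled = datetime.now(timezone.utc) - last_polled <= timedelta(seconds=60)\n    return recently_polled and late_run_count == 0\n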

Custom Triggers

Custom triggers allow advanced configuration of the conditions on which a trigger executes its actions.

For example, if you would only like a trigger to execute an action if it receives 2 flow run failure events of a specific deployment within 10 seconds, you could paste in the following trigger configuration:

{\n\"match\": {\n\"prefect.resource.id\": \"prefect.flow-run.*\"\n},\n\"match_related\": {\n\"prefect.resource.id\": \"prefect.deployment.70cb25fe-e33d-4f96-b1bc-74aa4e50b761\",\n\"prefect.resource.role\": \"deployment\"\n},\n\"for_each\": [\n\"prefect.resource.id\"\n],\n\"after\": [],\n\"expect\": [\n\"prefect.flow-run.Failed\"\n],\n\"posture\": \"Reactive\",\n\"threshold\": 2,\n\"within\": 10\n}\n

Matching on multiple resources

Each key in match and match_related can accept a list of multiple values that are OR'd together. For example, to match on multiple deployments:

\"match_related\": {\n\"prefect.resource.id\": [\n\"prefect.deployment.70cb25fe-e33d-4f96-b1bc-74aa4e50b761\",\n\"prefect.deployment.c33b8eaa-1ba7-43c4-ac43-7904a9550611\"\n],\n\"prefect.resource.role\": \"deployment\"\n},\n

Or, if your work queue enters an unhealthy state and you want your trigger to execute an action if it doesn't recover within 30 minutes, you could paste in the following trigger configuration:

{\n\"match\": {\n\"prefect.resource.id\": \"prefect.work-queue.70cb25fe-e33d-4f96-b1bc-74aa4e50b761\"\n},\n\"match_related\": {},\n\"for_each\": [\n\"prefect.resource.id\"\n],\n\"after\": [\n\"prefect.work-queue.unhealthy\"\n],\n\"expect\": [\n\"prefect.work-queue.healthy\"\n],\n\"posture\": \"Proactive\",\n\"threshold\": 1,\n\"within\": 1800\n}\n
","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/automations/#actions","title":"Actions","text":"

Actions specify what your automation does when its trigger criteria are met. Current action types include:

","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/automations/#selected-and-inferred-action-targets","title":"Selected and inferred action targets","text":"

Some actions require you to either select the target of the action, or specify that the target of the action should be inferred.

Selected targets are simple, and useful for when you know exactly what object your action should act on \u2014 for example, the case of a cleanup flow you want to run or a specific notification you\u2019d like to send.

Inferred targets are deduced from the trigger itself.

For example, if a trigger fires on a flow run that is stuck in a running state, and the action is to cancel an inferred flow run, the flow run to cancel is inferred as the stuck run that caused the trigger to fire.

Similarly, if a trigger fires on work queue health and the action is to pause an inferred work queue, the work queue to pause is inferred as the unhealthy work queue that caused the trigger to fire.

Prefect tries to infer the relevant event whenever possible, but sometimes one does not exist.

","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/automations/#details","title":"Details","text":"

Specify a name and, optionally, a description for the automation.

","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/automations/#create-an-automation-via-deployment-triggers","title":"Create an automation via deployment triggers","text":"

To enable simple configuration of event-driven deployments, Prefect provides deployment triggers - a shorthand for creating automations that are linked to specific deployments to run them based on the presence or absence of events.

To add a trigger to a deployment, include a triggers section in the deployment's configuration, for example:

triggers:\n- enabled: true\nmatch:\nprefect.resource.id: my.external.resource\nexpect:\n- external.resource.pinged\nparameters:\nparam_1: \"{{ event }}\"\n

When applied, this will create a linked automation that responds to events from an external resource, and passes that event into the parameters of the executed flow run.
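
To exercise a trigger like the one above, the external system (or any Python process) could emit a matching event with the Prefect SDK; the event name and resource id here mirror the example configuration:

from prefect.events import emit_event\n\n# emit the event the example deployment trigger above expects\nemit_event(\n    event=\"external.resource.pinged\",\n    resource={\"prefect.resource.id\": \"my.external.resource\"},\n)\n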

","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/automations/#automation-notifications","title":"Automation notifications","text":"

Notifications enable you to set up automation actions that send a message.

Automation notifications support sending notifications via any predefined block that is capable of and configured to send a message. That includes, for example:

Notification blocks must be pre-configured

Notification blocks must be pre-configured prior to creating a notification action. Any existing blocks capable of sending messages will be shown in the block drop-down list.

The Add + button cancels the current automation creation process and enables you to configure a notification block.

","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/automations/#templating-notifications-with-jinja","title":"Templating notifications with Jinja","text":"

The notification body can include templated variables using Jinja syntax. Templated variables enable you to include details relevant to the automation trigger, such as a flow or queue name.

Jinja templated variable syntax wraps the variable name in double curly brackets, like {{ variable }}.

You can access properties of the underlying flow run objects including:

In addition to its native properties, each object includes an id along with created and updated timestamps.

The flow_run|ui_url token returns the URL for viewing the flow run in Prefect Cloud.

Here\u2019s an example for something that would be relevant to a flow run state-based notification:

Flow run {{ flow_run.name }} entered state {{ flow_run.state.name }}. \n\n    Timestamp: {{ flow_run.state.timestamp }}\n    Flow ID: {{ flow_run.flow_id }}\n    Flow Run ID: {{ flow_run.id }}\n    State message: {{ flow_run.state.message }}\n

The resulting Slack webhook notification would look something like this:

You could include flow and deployment properties.

Flow run {{ flow_run.name }} for flow {{ flow.name }}\nentered state {{ flow_run.state.name }}\nwith message {{ flow_run.state.message }}\n\nFlow tags: {{ flow_run.tags }}\nDeployment name: {{ deployment.name }}\nDeployment version: {{ deployment.version }}\nDeployment parameters: {{ deployment.parameters }}\n

An automation that reports on work queue health might include notifications using work_queue properties.

Work queue health alert!\n\nName: {{ work_queue.name }}\nLast polled: {{ work_queue.last_polled }}\n

In addition to those shortcuts for flows, deployments, and work pools, you have access to the automation and the event that triggered the automation. See the Automations API for additional details.

Automation: {{ automation.name }}\nDescription: {{ automation.description }}\n\nEvent: {{ event.id }}\nResource:\n{% for label, value in event.resource %}\n{{ label }}: {{ value }}\n{% endfor %}\nRelated Resources:\n{% for related in event.related %}\n    Role: {{ related.role }}\n    {% for label, value in event.resource %}\n    {{ label }}: {{ value }}\n    {% endfor %}\n{% endfor %}\n

Note that this example also illustrates the ability to use Jinja features such as iterator and for loop control structures when templating notifications.

","tags":["UI","states","flow runs","events","triggers","Prefect Cloud"],"boost":2},{"location":"cloud/cloud-quickstart/","title":"Getting Started with Prefect Cloud","text":"

Get started with Prefect Cloud in just a few steps:

  1. Sign in or register a Prefect Cloud account.
  2. Create a workspace for your account.
  3. Install Prefect in your local environment.
  4. Log into Prefect Cloud from a local terminal session.
  5. Run a flow locally and view flow run execution in Prefect Cloud.

Prefer to follow this tutorial in a video? We've got exactly what you need. Happy engineering!

","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#sign-in-or-register","title":"Sign in or register","text":"

To sign in with an existing account or register an account, go to https://app.prefect.cloud/.

You can create an account with:

","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#create-a-workspace","title":"Create a workspace","text":"

A workspace is an isolated environment within Prefect Cloud for your flows and deployments. You can use workspaces to organize or compartmentalize your workflows.

When you register a new account, you'll be prompted to create a workspace.

Select Create Workspace. You'll be prompted to provide a name and description for your workspace.

Note that the Owner setting applies only to users who are members of Prefect Cloud organizations and have permission to create workspaces within the organization.

Select Save to create the workspace. If you change your mind, select Edit from the options menu to modify the workspace details or to delete it.

The Workspace Settings page for your new workspace displays the commands that enable you to install Prefect and log into Prefect Cloud in a local execution environment.

","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#install-prefect","title":"Install Prefect","text":"

Configure a local execution environment to use Prefect Cloud as the API server for flow runs. In other words, \"log in\" to Prefect Cloud from a local environment where you want to run a flow.

Open a new terminal session.

Install Prefect in the environment in which you want to execute flow runs.

$ pip install -U prefect\n

Installation requirements

Prefect requires Python 3.8 or later. If you have any questions about Prefect installation requirements or dependencies in your preferred development environment, check out the Installation documentation.

","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#log-into-prefect-cloud-from-a-terminal","title":"Log into Prefect Cloud from a terminal","text":"

Use the prefect cloud login Prefect CLI command to log into Prefect Cloud from your environment.

$ prefect cloud login\n

The prefect cloud login command, used on its own, provides an interactive login experience. Using this command, you may log in with either an API key or through a browser.

$ prefect cloud login\n? How would you like to authenticate? [Use arrows to move; enter to select]\n> Log in with a web browser\n  Paste an API key\nOpening browser...\nWaiting for response...\n? Which workspace would you like to use? [Use arrows to move; enter to select]\n> prefect/terry-prefect-workspace\n  g-gadflow/g-workspace\nAuthenticated with Prefect Cloud! Using workspace 'prefect/terry-prefect-workspace'.\n

If you choose to log in via the browser, Prefect opens a new tab in your default browser and enables you to log in and authenticate the session. Use the same login information as you originally used to create your Prefect Cloud account.

","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#run-a-flow-with-prefect-cloud","title":"Run a flow with Prefect Cloud","text":"

Okay, you're all set to run a flow locally, orchestrated with Prefect Cloud. Notice that everything works just like running local flows. However, because you logged into Prefect Cloud, your local flow runs show up in Prefect Cloud!

In your local environment, where you configured the previous steps, create a file named quickstart_flow.py with the following contents:

from prefect import flow, get_run_logger\n\n@flow(name=\"Prefect Cloud Quickstart\")\ndef quickstart_flow():\n    logger = get_run_logger()\n    logger.warning(\"Local quickstart flow is running!\")\n\nif __name__ == \"__main__\":\n    quickstart_flow()\n

Now run quickstart_flow.py. You'll see the following log messages in the terminal, indicating that the flow is running correctly.

$ python quickstart_flow.py\n17:52:38.741 | INFO    | prefect.engine - Created flow run 'aquamarine-deer' for flow 'Prefect Cloud Quickstart'\n17:52:39.487 | WARNING | Flow run 'aquamarine-deer' - Local quickstart flow is running!\n17:52:39.592 | INFO    | Flow run 'aquamarine-deer' - Finished in state Completed()\n

Go to the Flow Runs pages in your workspace in Prefect Cloud. You'll see the flow run results right there in Prefect Cloud!

Prefect Cloud automatically tracks any flow runs in a local execution environment logged into Prefect Cloud.

Select the name of the flow run to see details about this run. In this example, the randomly generated flow run name is aquamarine-deer. Your flow run name is likely to be different.

Congratulations! You successfully ran a local flow and, because you're logged into Prefect Cloud, the local flow run results were captured by Prefect Cloud.

","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/cloud-quickstart/#next-steps","title":"Next steps","text":"

If you're new to Prefect, learn more about writing and running flows in the Prefect Flows First Steps tutorial. If you're already familiar with Prefect flows and want to try creating deployments and kicking off flow runs with Prefect Cloud, check out the Deployments tutorial.

Want to learn more about the features available in Prefect Cloud? Start with the Prefect Cloud Overview.

If you ran into any issues getting your first flow run with Prefect Cloud working, please join our community to ask questions or provide feedback:

","tags":["UI","dashboard","Prefect Cloud","quickstart","workspaces","tutorial","getting started"],"boost":2},{"location":"cloud/connecting/","title":"Connecting & Troubleshooting Prefect Cloud","text":"

To create flow runs in a local or remote execution environment and use either Prefect Cloud or a Prefect server as the backend API server, you need to install Prefect in that environment and configure it to communicate with the API, as described in the following sections.

","tags":["Prefect Cloud","API keys","configuration","agents","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#log-into-prefect-cloud-from-a-terminal","title":"Log into Prefect Cloud from a terminal","text":"

Configure a local execution environment to use Prefect Cloud as the API server for flow runs. In other words, \"log in\" to Prefect Cloud from a local environment where you want to run a flow.

  1. Open a new terminal session.
  2. Install Prefect in the environment in which you want to execute flow runs.
$ pip install -U prefect\n
  1. Use the prefect cloud login Prefect CLI command to log into Prefect Cloud from your environment.
$ prefect cloud login\n

The prefect cloud login command, used on its own, provides an interactive login experience. Using this command, you can log in with either an API key or through a browser.

$ prefect cloud login\n? How would you like to authenticate? [Use arrows to move; enter to select]\n> Log in with a web browser\n    Paste an API key\nPaste your authentication key:\n? Which workspace would you like to use? [Use arrows to move; enter to select]\n> prefect/terry-prefect-workspace\n    g-gadflow/g-workspace\nAuthenticated with Prefect Cloud! Using workspace 'prefect/terry-prefect-workspace'.\n

You can also log in by providing a Prefect Cloud API key that you create.

","tags":["Prefect Cloud","API keys","configuration","agents","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#change-workspaces","title":"Change workspaces","text":"

If you need to change which workspace you're syncing with, use the prefect cloud workspace set Prefect CLI command while logged in, passing the account handle and workspace name.

$ prefect cloud workspace set --workspace \"prefect/my-workspace\"\n

If no workspace is provided, you will be prompted to select one.

Workspace Settings also shows you the prefect cloud workspace set Prefect CLI command you can use to sync a local execution environment with a given workspace.

You may also use the prefect cloud login command with the --workspace or -w option to set the current workspace.

$ prefect cloud login --workspace \"prefect/my-workspace\"\n
","tags":["Prefect Cloud","API keys","configuration","agents","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#manually-configure-prefect-api-settings","title":"Manually configure Prefect API settings","text":"

You can also manually configure the PREFECT_API_URL setting to specify the Prefect Cloud API.

For Prefect Cloud, you can configure the PREFECT_API_URL and PREFECT_API_KEY settings to authenticate with Prefect Cloud by using an account ID, workspace ID, and API key.

$ prefect config set PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n$ prefect config set PREFECT_API_KEY=\"[API-KEY]\"\n

When you're in a Prefect Cloud workspace, you can copy the PREFECT_API_URL value directly from the page URL.

In this example, we configured PREFECT_API_URL and PREFECT_API_KEY in the default profile. You can use prefect profile CLI commands to create settings profiles for different configurations. For example, you could have a \"cloud\" profile configured to use the Prefect Cloud API URL and API key, and another \"local\" profile for local development using a local Prefect API server started with prefect server start. See Settings for details.

Environment variables

You can also set PREFECT_API_URL and PREFECT_API_KEY as you would any other environment variable. See Overriding defaults with environment variables for more information.

See the Flow orchestration with Prefect tutorial for examples.

","tags":["Prefect Cloud","API keys","configuration","agents","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#install-requirements-in-execution-environments","title":"Install requirements in execution environments","text":"

In local and remote execution environments \u2014 such as VMs and containers \u2014 you must make sure any flow requirements or dependencies have been installed before creating a flow run.

","tags":["Prefect Cloud","API keys","configuration","agents","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#troubleshooting-prefect-cloud","title":"Troubleshooting Prefect Cloud","text":"

This section provides tips that may be helpful if you run into problems using Prefect Cloud.

","tags":["Prefect Cloud","API keys","configuration","agents","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#prefect-cloud-and-proxies","title":"Prefect Cloud and proxies","text":"

Proxies intermediate network requests between a server and a client.

To communicate with Prefect Cloud, the Prefect client library makes HTTPS requests. These requests are made using the httpx Python library. httpx respects accepted proxy environment variables, so the Prefect client is able to communicate through proxies.

To enable communication via proxies, simply set the HTTPS_PROXY and SSL_CERT_FILE environment variables as appropriate in your execution environment and things should \u201cjust work.\u201d
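
For example, one way to set these variables from Python before the Prefect client makes any requests; the proxy URL and certificate path are placeholders, and setting them in your shell or container environment works just as well:

import os\n\n# httpx reads these variables when the Prefect client builds its HTTP client;\n# the proxy URL and certificate path below are placeholders\nos.environ[\"HTTPS_PROXY\"] = \"http://proxy.example.com:3128\"\nos.environ[\"SSL_CERT_FILE\"] = \"/path/to/ca-bundle.pem\"\n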

See the Using Prefect Cloud with proxies topic in Prefect Discourse for examples of proxy configuration.

","tags":["Prefect Cloud","API keys","configuration","agents","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#prefect-cloud-access-via-api","title":"Prefect Cloud access via API","text":"

If the Prefect Cloud API key, environment variable settings, or account login for your execution environment are not configured correctly, you may experience errors or unexpected flow run results when using Prefect CLI commands, running flows, or observing flow run results in Prefect Cloud.

Use the prefect config view CLI command to make sure your execution environment is correctly configured to access Prefect Cloud.

$ prefect config view\nPREFECT_PROFILE='cloud'\nPREFECT_API_KEY='pnu_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' (from profile)\nPREFECT_API_URL='https://api-beta.prefect.io/api/accounts/...' (from profile)\n

Make sure PREFECT_API_URL is configured to use https://api-beta.prefect.io/api/....

Make sure PREFECT_API_KEY is configured to use a valid API key.

You can use the prefect cloud workspace ls CLI command to view or set the active workspace.

$ prefect cloud workspace ls\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503   Available Workspaces: \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502   g-gadflow/g-workspace \u2502\n\u2502    * prefect/workinonit \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n    * active workspace\n

You can also check that the account and workspace IDs specified in the URL for PREFECT_API_URL match those shown in the URL bar for your Prefect Cloud workspace.

","tags":["Prefect Cloud","API keys","configuration","agents","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/connecting/#prefect-cloud-login-errors","title":"Prefect Cloud login errors","text":"

If you're having difficulty logging in to Prefect Cloud, the following troubleshooting steps may resolve the issue, or will provide more information you can share with the support channel.

Other tips to help with login difficulties:

In some cases, logging in to Prefect Cloud results in a \"404 Page Not Found\" page or fails with the error: \"Failed to load module script: Expected a JavaScript module script but the server responded with a MIME type of \u201ctext/html\u201d. Strict MIME type checking is enforced for module scripts per HTML spec.\"

This error may be caused by a bad service worker.

To resolve the problem, we recommend unregistering service workers.

In your browser, start by opening the developer console.

Once the developer console is open:

  1. Go to the Application tab in the developer console.
  2. Select Storage.
  3. Make sure Unregister service workers is selected.
  4. Select Clear site data, then hard refresh the page with CMD+Shift+R (CTRL+Shift+R on Windows).

See the Login to Prefect Cloud fails... topic in Prefect Discourse for a video demonstrating these steps.

None of this worked?

Email us at help@prefect.io and provide answers to the questions above in your email to make it faster to troubleshoot and unblock you. Make sure you add the email address with which you were trying to log in, your Prefect Cloud account name, and, if applicable, the organization to which it belongs.

","tags":["Prefect Cloud","API keys","configuration","agents","workers","troubleshooting","connecting"],"boost":2},{"location":"cloud/events/","title":"Events","text":"

An event is a notification of a change. Together, events form a feed of activity recording what's happening across your stack.

Events power several features in Prefect Cloud, including flow run logs, audit logs, and automations.

Events can represent API calls, state transitions, or changes in your execution environment or infrastructure.

Events enable observability into your data stack via the event feed, and the configuration of Prefect's reactivity via automations.

","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-specification","title":"Event Specification","text":"

Events adhere to a structured specification.

occurred (String, required): When the event happened
event (String, required): The name of the event that happened
resource (Object, required): The primary Resource this event concerns
related (Array, optional): A list of additional Resources involved in this event
payload (Object, optional): An open-ended set of data describing what happened
id (String, required): The client-provided identifier of this event
follows (String, optional): The ID of an event that is known to have occurred prior to this one.","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-grammar","title":"Event Grammar","text":"

Generally, events have a consistent and informative grammar - an event describes a resource and an action that the resource took or that was taken on that resource. For example, events emitted by Prefect objects take the form of:

prefect.block.write-method.called\nprefect-cloud.automation.action.executed\nprefect-cloud.user.logged-in\n
","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-sources","title":"Event Sources","text":"

Events are automatically emitted by all Prefect objects, including flows, tasks, deployments, work queues, and logs. Prefect-emitted events will contain the prefect or prefect-cloud resource prefix. Events can also be sent to the Prefect events API via authenticated http request.

The Prefect SDK provides a method that emits events, for use in arbitrary Python code that may not be a task or flow. Running the following code will emit events to Prefect Cloud, which will validate and ingest the event data.

from prefect.events import emit_event\n\ndef some_function(name: str=\"kiki\") -> None:\n    print(f\"hi {name}!\")\n    emit_event(event=f\"{name}.sent.event!\", resource={\"prefect.resource.id\": f\"coder.{name}\"})\n\nsome_function()\n

Prefect Cloud offers programmable webhooks to receive HTTP requests from other systems and translate them into events within your workspace. Webhooks can emit pre-defined static events, dynamic events that use portions of the incoming HTTP request, or events derived from CloudEvents.

Events emitted from any source will appear in the event feed, where you can visualize activity in context and configure automations to react to the presence or absence of it in the future.

","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#resources","title":"Resources","text":"

Every event has a primary resource, which describes the object that emitted an event. Resources are used as quasi-stable identifiers for sources of events, and are constructed as dot-delimited strings, for example:

prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3.action.0\nacme.user.kiki.elt_script_1\nprefect.flow-run.e3755d32-cec5-42ca-9bcd-af236e308ba6\n

Resources can optionally have additional arbitrary labels which can be used in event aggregation queries, such as:

\"resource\": {\n\"prefect.resource.id\": \"prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3\",\n\"prefect-cloud.action.type\": \"call-webhook\"\n}\n

Events can optionally contain related resources, used to associate the event with other resources, such as in the case that the primary resource acted on or with another resource:

\"resource\": {\n\"prefect.resource.id\": \"prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3.action.0\",\n\"prefect-cloud.action.type\": \"call-webhook\"\n},\n\"related\": [\n{\n\"prefect.resource.id\": \"prefect-cloud.automation.5b9c5c3d-6ca0-48d0-8331-79f4b65385b3\",\n\"prefect.resource.role\": \"automation\",\n\"prefect-cloud.name\": \"webhook_body_demo\",\n\"prefect-cloud.posture\": \"Reactive\"\n}\n]\n
","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#events-in-the-cloud-ui","title":"Events in the Cloud UI","text":"

Prefect Cloud provides an interactive dashboard to analyze and take action on events that occurred in your workspace on the event feed page.

","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-feed","title":"Event feed","text":"

The event feed is the primary place to view, search, and filter events to understand activity across your stack. Each entry displays data on the resource, related resource, and event that took place.

","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#event-details","title":"Event details","text":"

You can view more information about an event by clicking into it, where you can view the full details of an event's resource, related resources, and its payload.

","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/events/#reacting-to-events","title":"Reacting to events","text":"

From an event page, you can easily configure an automation to trigger on the observation of matching events or a lack of matching events by clicking the automate button in the overflow menu:

The default trigger configuration will fire every time it sees an event with a matching resource identifier. Advanced configuration is possible via custom triggers.

","tags":["UI","dashboard","Prefect Cloud","Observability","Events"],"boost":2},{"location":"cloud/organizations/","title":"Organizations","text":"

For larger teams or companies with more complex needs around user and access management, organizations in Prefect Cloud provide several features that enable you to collaborate securely at scale, including:

See the Prefect Cloud plans to learn more about options for supporting more users, service accounts, and workspaces.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/organizations/#organizations-overview","title":"Organizations overview","text":"

An organization is a type of account available on Prefect Cloud that enables more extensive and granular control over workspace collaboration. Organizations are only available on Prefect Cloud.

Within an organization account you can:

For example, you might create a workspace for a specific team. Within that workspace, give a developer member full Collaborator role access, and invite a data scientist with Read-only Collaborator permissions to monitor the status of scheduled and completed flow runs.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/organizations/#navigating-organizations","title":"Navigating organizations","text":"

You can see the organizations you're a member of, or create a new organization, by selecting the Organizations icon in the left navigation bar.

When you select an organization, the Profile page provides an overview of the organization.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/organizations/#organization-workspaces","title":"Organization workspaces","text":"

Workspaces shows you a list of workspaces you can access within the organization. If you have been given the organization Admin role, you can create and manage workspaces here.

You can also select the Prefect icon to see all workspaces you have been invited to access, across personal accounts and organizations.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/organizations/#organization-members","title":"Organization members","text":"

Members shows you a list of users who are members of the organization. If you have been given the organization Admin role, you can invite new members and set organization roles for users here.

You can control granular access to workspaces by setting default access for all organization members, inviting specific members to collaborate in an organization workspace, or adding service account permissions.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/organizations/#inviting-organization-members","title":"Inviting organization members","text":"

To invite new members to an organization in Prefect Cloud, select the + icon. Provide an email address and organization role. You may add multiple email addresses.

The user will receive an invite email with a confirmation link at the address provided. If the user does not already have a Prefect Cloud account, they must create an account before accessing the organization.

When the user accepts the invite, they become a member of your organization and are assigned the role from the invite. The member's role can be changed (or access removed entirely) by an organization Admin.

The maximum number of organization members varies. See the Prefect Cloud plans to learn more about options for supporting more users, service accounts, and workspaces.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/organizations/#service-accounts","title":"Service accounts","text":"

Service accounts enable you to create a Prefect Cloud API key that is not associated with a user account. Service accounts are typically used to configure API access for running agents or executing flow runs on remote infrastructure.

Select Service Accounts to view, create, or edit service accounts for your organization.

Service accounts are created at the organization level, but may be shared to individual workspaces within the organization. See workspace sharing for more information.

Service account credentials

When you create a service account, Prefect Cloud creates a new API key for the account and provides the API configuration command for the execution environment. Save these to a safe location for future use. If the access credentials are lost or compromised, you should regenerate the credentials from the service account page.

Service account roles

Service accounts are created at the organization level, and may then become members of workspaces within the organization.

A service account may only be a Member of an organization. It can never be an organization Admin. You may apply any valid workspace-level role to a service account.

See the service accounts documentation for more information.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/organizations/#workspace-sharing","title":"Workspace sharing","text":"

Workspace sharing enables you to give organization members and service accounts access to workspaces within an organization. Each workspace within an organization may have its own members and service accounts with roles and permissions specific to that workspace. Organization Admins have full access to all workspaces in an organization.

Within a workspace, select Workspace Sharing, then select the + icon to add new members or service accounts to the workspace. Only organization Admins and workspace Owners may add members or service accounts to a workspace.

Members and service accounts must already be configured for the organization. An Admin or Owner may configure a different role for the user or service account as needed.

Default workspace role

You may make a workspace available to any user in an organization by setting a default role for \"Anyone at...\". Users in the organization may access the workspace with the specified default role permissions. Default workspace roles do not apply to service accounts.

The role given to users specifically added to the workspace is the union of workspace scopes given by the default workspace role and that user's role in the workspace.

You may set this to \"No Access\" if you do not want organization members not specifically added to the workspace to access it (organization Admins excepted).

See the Workspace sharing documentation for more information.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/organizations/#organization-and-workspace-roles","title":"Organization and workspace roles","text":"

Prefect Cloud enables you to configure both organization and workspace roles for users.

Select Roles within an organization to see the configured workspace roles for your organization.

Prefect Cloud provides default workspace roles that cover most use cases. You may also create custom workspace roles to suit your specific organization needs.

See the Roles (RBAC) documentation for more information on default and custom role permissions.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/organizations/#single-sign-on-sso","title":"Single Sign-on (SSO)","text":"

Prefect Cloud's Organization and Enterprise plans offer single sign-on (SSO) authentication integration with your team\u2019s identity provider. SSO integration can be set up with identity providers that support OIDC and SAML.

See the Single Sign-on (SSO) documentation for more information on configuring SSO with your identity provider.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","service accounts"],"boost":2},{"location":"cloud/rate-limits/","title":"API Rate Limits & Retention Periods","text":"

API rate limits restrict the number of requests that a single client can make in a given time period. They ensure Prefect Cloud's stability, so that when you make an API call, you always get a response.

Prefect Cloud rate limits are subject to change

The following rate limits are in effect currently, but are subject to change. Contact Prefect support at help@prefect.io if you have questions about current rate limits.

Prefect Cloud enforces the following rate limits:

","tags":["API","Prefect Cloud","rate limits"],"boost":2},{"location":"cloud/rate-limits/#flow-flow-run-and-task-run-rate-limits","title":"Flow, flow run, and task run rate limits","text":"

Prefect Cloud limits the flow_runs, task_runs, and flows endpoints and their subroutes at the following levels:

The Prefect Cloud API will return a 429 response with an appropriate Retry-After header if these limits are triggered.

","tags":["API","Prefect Cloud","rate limits"],"boost":2},{"location":"cloud/rate-limits/#log-service-rate-limits","title":"Log service rate limits","text":"

Prefect Cloud limits the number of logs accepted:

The Prefect Cloud API will return a 429 response if these limits are triggered.

","tags":["API","Prefect Cloud","rate limits"],"boost":2},{"location":"cloud/rate-limits/#flow-run-retention","title":"Flow run retention","text":"

Prefect Cloud feature

The Flow Run Retention Policy setting is only applicable in Prefect Cloud.

Flow runs in Prefect Cloud are retained according to the Flow Run Retention Policy setting in your personal account or organization profile. The policy setting applies to all workspaces owned by the personal account or organization respectively.

The flow run retention policy represents the number of days each flow run is available in the Prefect Cloud UI, and via the Prefect CLI and API after it ends. Once a flow run reaches a terminal state (detailed in the chart here), it will be retained until the end of the flow run retention period.

Flow Run Retention Policy keys on terminal state

Note that, because Flow Run Retention Policy keys on terminal state, if two flows start at the same time, but reach a terminal state at different times, they will be removed at different times according to when they each reached their respective terminal states.

This retention policy applies to all details about a flow run, including its task runs. Subflow runs follow the retention policy independently from their parent flow runs, and are removed based on the time each subflow run reaches a terminal state.

If you or your organization have needs that require a tailored retention period, contact the Prefect Sales team.

","tags":["API","Prefect Cloud","rate limits"],"boost":2},{"location":"cloud/webhooks/","title":"Webhooks","text":"

Use webhooks in your Prefect Cloud workspace to receive, observe, and react to events from other systems in your ecosystem. Each webhook exposes a unique URL endpoint to receive events from other systems and transforms them into Prefect events for use in automations.

Webhooks are defined by two essential components: a unique URL and a template which translates incoming web requests to a Prefect event.

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#configuring-webhooks","title":"Configuring webhooks","text":"","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#via-the-prefect-cloud-api","title":"Via the Prefect Cloud API","text":"

Webhooks are managed via the Webhooks API endpoints. This is a Prefect Cloud-only feature. You authenticate API calls using the standard authentication methods you use with Prefect Cloud.

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#via-prefect-cloud","title":"Via Prefect Cloud","text":"

Webhooks can be created and managed from the Prefect Cloud UI.

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#via-the-prefect-cli","title":"Via the Prefect CLI","text":"

Webhooks can be managed and interacted with via the prefect cloud webhook command group.

prefect cloud webhook --help\n

You can create your first webhook by invoking create:

prefect cloud webhook create your-webhook-name \\\n--description \"Receives webhooks from your system\" \\\n--template '{ \"event\": \"your.event.name\", \"resource\": { \"prefect.resource.id\": \"your.resource.id\" } }'\n

Note the template string, which is discussed in greater detail below.

You can retrieve details for a specific webhook by ID using get, or optionally query all webhooks in your workspace via ls:

# get webhook by ID\nprefect cloud webhook get <webhook-id>\n\n# list all configured webhooks in your workspace\nprefect cloud webhook ls\n

If you ever need to disable an existing webhook without deleting it, use toggle:

prefect cloud webhook toggle <webhook-id>\nWebhook is now disabled\n\nprefect cloud webhook toggle <webhook-id>\nWebhook is now enabled\n

If you are concerned that your webhook endpoint may have been compromised, use rotate to generate a new, random endpoint

prefect cloud webhook rotate <webhook-url-slug>\n
","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#webhook-endpoints","title":"Webhook endpoints","text":"

The webhook endpoints have randomly generated opaque URLs that do not divulge any information about your Prefect Cloud workspace. They are rooted at https://api.prefect.cloud/hooks/. For example: https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ. Prefect Cloud assigns this URL when you create a webhook; it cannot be set via the API. You may rotate your webhook URL at any time without losing the associated configuration.

All webhooks may accept requests via the most common HTTP methods:

Prefect Cloud webhooks are deliberately quiet to the outside world, and will only return a 204 No Content response when they are successful, and a 400 Bad Request error when there is any error interpreting the request. For more visibility when your webhooks fail, see the Troubleshooting section below.

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#webhook-templates","title":"Webhook templates","text":"

The purpose of a webhook is to accept an HTTP request from another system and produce a Prefect event from it. You may find that you often have little influence or control over the format of those requests, so Prefect's webhook system gives you full control over how you turn those notifications from other systems into meaningful events in your Prefect Cloud workspace. The template you define for each webhook will determine how individual components of the incoming HTTP request become the event name and resource labels of the resulting Prefect event.

As with the templates available in Prefect Cloud Automation for defining notifications and other parameters, you will write templates in Jinja2. All of the built-in Jinja2 blocks and filters are available, as well as the filters from the jinja2-humanize-extensions package.

Your goal when defining your event template is to produce a valid JSON object that defines (at minimum) the event name and the resource[\"prefect.resource.id\"], which are required of all events. The simplest template is one in which these are statically defined.

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#static-webhook-events","title":"Static webhook events","text":"

Let's see a static webhook template example. Say you want to configure a webhook that will notify Prefect when your recommendations machine learning model has been updated, so you can then send a Slack notification to your team and run a few subsequent deployments. Those models are produced on a daily schedule by another team that is using cron for scheduling. They aren't able to use Prefect for their flows (yet!), but they are happy to add a curl to the end of their daily script to notify you. Because this webhook will only be used for a single event from a single resource, your template can be entirely static:

{\n\"event\": \"model.refreshed\",\n\"resource\": {\n\"prefect.resource.id\": \"product.models.recommendations\",\n\"prefect.resource.name\": \"Recommendations [Products]\",\n\"producing-team\": \"Data Science\"\n}\n}\n

Make sure to produce valid JSON

The output of your template, when rendered, should be a valid string that can be parsed, for example, with json.loads.
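
For instance, a quick local sanity check is to render a template with Jinja2 and parse the result with json.loads; this is a minimal sketch, and the template string and sample body below are illustrative only:

import json\nfrom jinja2 import Template\n\n# Illustrative template: substitutes the model name from the request body\ntemplate = Template(\n    '{\"event\": \"model.refreshed\", \"resource\": {\"prefect.resource.id\": \"product.models.{{ body.model }}\"}}'\n)\nrendered = template.render(body={\"model\": \"recommendations\"})\n\n# json.loads raises an error if the rendered output is not valid JSON\nevent = json.loads(rendered)\nprint(event[\"event\"], event[\"resource\"][\"prefect.resource.id\"])\n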

A webhook with this template may be invoked via any of the HTTP methods, including a GET request with no body, so the team you are integrating with can include this line at the end of their daily script:

curl https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\n

Each time the script hits the webhook, the webhook will produce a single Prefect event with that name and resource in your workspace.

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#event-fields-that-prefect-cloud-populates-for-you","title":"Event fields that Prefect Cloud populates for you","text":"

You may notice that you only had to provide the event and resource definition, which is not a completely fleshed out event. Prefect Cloud will set default values for any missing fields, such as occurred and id, so you don't need to set them in your template. Additionally, Prefect Cloud will add the webhook itself as a related resource on all of the events it produces.

If your template does not produce a payload field, the payload will default to a standard set of debugging information, including the HTTP method, headers, and body.

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#dynamic-webhook-events","title":"Dynamic webhook events","text":"

Now let's say that after a few days you and the Data Science team are getting a lot of value from the automations you have set up with the static webhook. You've agreed to upgrade this webhook to handle all of the various models that the team produces. It's time to add some dynamic information to your webhook template.

Your colleagues on the team have adjusted their daily cron scripts to POST a small body that includes the ID and name of the model that was updated:

curl \\\n-d \"model=recommendations\" \\\n-d \"friendly_name=Recommendations%20[Products]\" \\\n-X POST https://api.prefect.cloud/hooks/AERylZ_uewzpDx-8fcweHQ\n

This script sends a POST request whose body is a traditional URL-encoded form with two fields describing the model that was updated: model and friendly_name. Here's a webhook template that uses Jinja2 to receive these values and produce different events for the different models:

{\n    \"event\": \"model.refreshed\",\n    \"resource\": {\n        \"prefect.resource.id\": \"product.models.{{ body.model }}\",\n        \"prefect.resource.name\": \"{{ body.friendly_name }}\",\n        \"producing-team\": \"Data Science\"\n    }\n}\n

All subsequent POST requests will produce events with those variable resource IDs and names. The other statically-defined parts, such as event or the producing-team label you included earlier, will still be used.

Use Jinja2's default filter to handle missing values

Jinja2 has a helpful default filter that can compensate for missing values in the request. In this example, you may want to use the model's ID in place of the friendly name when the friendly name is not provided: {{ body.friendly_name|default(body.model) }}.

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#how-http-request-components-are-handled","title":"How HTTP request components are handled","text":"

The Jinja2 template context includes the three parts of the incoming HTTP request:

HTTP headers are available without alteration as a dict-like object, and you may access them using header names in any letter case. For example, these template expressions all return the value of the Content-Length header:

{{ headers['Content-Length'] }}\n\n{{ headers['content-length'] }}\n\n{{ headers['CoNtEnt-LeNgTh'] }}\n

The HTTP request body goes through some light preprocessing to make it more useful in templates. If the Content-Type of the request is application/json, the body will be parsed as a JSON object and made available to the webhook templates. If the Content-Type is application/x-www-form-urlencoded (as in our example above), the body is parsed into a flat dict-like object of key-value pairs. Jinja2 supports both index and attribute access to the fields of these objects, so the following two expressions are equivalent:

{{ body['friendly_name'] }}\n\n{{ body.friendly_name }}\n

Only for Python identifiers

Jinja2's syntax only allows attribute-like access if the key is a valid Python identifier, so body.friendly-name will not work. Use body['friendly-name'] in those cases.

You may not have much control over the client invoking your webhook, but you may still want bodies that look like JSON to be parsed as such. Prefect Cloud will first attempt to parse any other content type (like text/plain) as if it were JSON. If the body cannot be parsed as JSON, it will be made available to your templates as a Python str.
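
A rough sketch of that parsing behavior, for intuition only (this is not Prefect Cloud's actual implementation):

import json\nfrom urllib.parse import parse_qsl\n\ndef parse_body(body: str, content_type: str):\n    \"\"\"Rough approximation of how a request body becomes the body template variable.\"\"\"\n    if content_type == \"application/x-www-form-urlencoded\":\n        # form bodies become a flat dict of key-value pairs\n        return dict(parse_qsl(body))\n    try:\n        # JSON and JSON-looking bodies (even text/plain) are parsed when possible\n        return json.loads(body)\n    except ValueError:\n        # anything else is handed to the template as a plain string\n        return body\n\nprint(parse_body('{\"model\": \"recommendations\"}', \"text/plain\"))\nprint(parse_body(\"model=recommendations\", \"application/x-www-form-urlencoded\"))\n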

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#accepting-prefect-events-directly","title":"Accepting Prefect events directly","text":"

In cases where you have more control over the client, your webhook can accept Prefect events directly with a simple pass-through template:

{{ body|tojson }}\n

This template accepts the incoming body (assuming it was in JSON format) and just passes it through unmodified. This allows a POST of a partial Prefect event as in this example:

POST /hooks/AERylZ_uewzpDx-8fcweHQ HTTP/1.1\nHost: api.prefect.cloud\nContent-Type: application/json\nContent-Length: 228\n\n{\n    \"event\": \"model.refreshed\",\n    \"resource\": {\n        \"prefect.resource.id\": \"product.models.recommendations\",\n        \"prefect.resource.name\": \"Recommendations [Products]\",\n        \"producing-team\": \"Data Science\"\n    }\n}\n

The resulting event will be filled out with the default values for occurred, id, and other fields as described above.

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#accepting-cloudevents","title":"Accepting CloudEvents","text":"

The Cloud Native Computing Foundation has standardized CloudEvents for use by systems to exchange event information in a common format. These events are supported by major cloud providers and a growing number of cloud-native systems. Prefect Cloud can interpret a webhook containing a CloudEvent natively with the following template:

{{ body|from_cloud_event(headers) }}\n

The resulting event will use the CloudEvent's subject as the resource (or the source if no subject is available). The CloudEvent's data attribute will become the Prefect event's payload['data'], and the other CloudEvent metadata will be at payload['cloudevents']. If you would like to handle CloudEvents in a more specific way tailored to your use case, use a dynamic template to interpret the incoming body.

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/webhooks/#troubleshooting","title":"Troubleshooting","text":"

The initial configuration of your webhook may require some trial and error as you get the sender and your receiving webhook speaking a compatible language. While you are in this phase, you may find the Event Feed in the UI to be indispensable for seeing the events as they are happening.

When Prefect Cloud encounters an error during receipt of a webhook, it will produce a prefect-cloud.webhook.failed event in your workspace. This event will include critical information about the HTTP method, headers, and body it received, as well as what the template rendered. Keep an eye out for these events when something goes wrong.

","tags":["events","automations","triggers","webhooks","Prefect Cloud"]},{"location":"cloud/workspaces/","title":"Workspaces","text":"

A workspace is a discrete environment within Prefect Cloud for your flows and deployments. Workspaces are only available to Prefect Cloud accounts.

Workspaces can be used in any way you like to organize or compartmentalize your workflows. For example, you could use separate workspaces to isolate dev, staging, and prod environments, or to provide separation between different teams.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspaces-overview","title":"Workspaces overview","text":"

When you first log into Prefect Cloud, you will be prompted to create your own initial workspace. After creating your workspace, you'll be able to view flow runs, flows, deployments, and other workspace-specific features in the Prefect Cloud UI.

Select the Workspaces Icon to see all of the workspaces you can access.

Your list of available workspaces may include:

Workspace-specific features

Each workspace keeps track of its own:

Your user permissions within workspaces may vary. Organizations can assign roles and permissions at the workspace level.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#create-a-workspace","title":"Create a workspace","text":"

On the Workspaces page, select the + icon to create a new workspace. You'll be prompted to configure:

Select Create to create the new workspace. The number of available workspaces varies by Prefect Cloud plan. See Pricing if you need additional workspaces or users.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-settings","title":"Workspace settings","text":"

Within a workspace, select Workspace Settings to view or edit workspace details.

The options menu enables you to edit workspace details or delete the workspace.

Deleting a workspace

Deleting a workspace deletes all deployments, flow run history, work pools, and notifications configured in the workspace.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-collaborators","title":"Workspace collaborators","text":"

Personal account users may invite workspace collaborators, users who can join, view, and run flows in your workspaces.

In your workspace, select Workspace Collaborators. If you've previously invited collaborators, you'll see them listed.

To invite a user to become a workspace collaborator, select the + icon. You'll be prompted for the email address of the person you'd like to invite. Add the email address, then select Send to initiate the invitation.

If the user does not already have a Prefect Cloud account, they will be able to create one when accepting the workspace collaborator invitation.

To delete a workspace collaborator, select Remove from the menu on the left side of the user's information on this page.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-sharing","title":"Workspace sharing","text":"

Within a Prefect Cloud organization, Admins and workspace Owners may invite users and service accounts to work in an organization workspace. In addition to giving the user access to the workspace, the Admin or Owner assigns a workspace role to the user. The role specifies the scope of permissions for the user within the workspace.

In an organization workspace, select Workspace Sharing to manage users and service accounts for the workspace. If you've previously invited users and service accounts, you'll see them listed.

To invite a user to become a workspace collaborator, select the Members + icon. You can select from a list of existing organization members.

Select a Workspace Role for the user. This will be the initial role for the user within the workspace. A workspace Owner can change this role at any time.

Select Send to initiate the invitation.

To add a service account to a workspace, select the Service Accounts + icon. You can select from a list of existing service accounts configured for the organization. Select a Workspace Role for the service account. This will be the initial role for the service account within the workspace. A workspace Owner can change this role at any time. Select Share to finalize adding the service account.

To delete a workspace collaborator or service account, select Remove from the menu on the right side of the user or service account information on this page.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#workspace-transfer","title":"Workspace transfer","text":"

Workspace transfer enables you to move an existing workspace from one account to another. For example, you may transfer a workspace from a personal account to an organization.

Workspace transfer retains existing workspace configuration and flow run history, including blocks, deployments, notifications, work pools, and logs.

Workspace transfer permissions

Workspace transfer must be initiated or approved by a user with admin privileges for the workspace to be transferred.

For example, if you are transferring a personal workspace to an organization, the owner of the personal account is the default admin for that account and must initiate or approve the transfer.

To initiate a workspace transfer between personal accounts, contact support@prefect.io.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/workspaces/#transfer-a-workspace","title":"Transfer a workspace","text":"

To transfer a workspace, select Workspace Settings within the workspace. Then, from the options menu, select Transfer to initiate the workspace transfer process.

The Transfer Workspace page shows the workspace to be transferred on the left. Select the target account or organization for the workspace on the right. You may also change the handle of the workspace during the transfer process.

Select Transfer to transfer the workspace.

Workspace transfer impact on accounts

Workspace transfer may impact resource usage and costs for the source and target accounts or organizations.

When you transfer a workspace, users, API keys, and service accounts may lose access to the workspace. The audit log will no longer track activity on the workspace. Flow runs ending outside of the destination account\u2019s flow run retention period will be removed. You may also need to update Prefect CLI profiles and execution environment settings to access the workspace's new location.

You may also incur new charges in the target account to accommodate the transferred workspace.

The Transfer Workspace page outlines the impacts of transferring the selected workspace to the selected target. Please review these notes carefully before selecting Transfer to transfer the workspace.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/","title":"Users","text":"

Prefect Cloud users are individual accounts created by signing up at app.prefect.cloud. Users can also be added as Members of other organizations and workspaces.

","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#user-settings","title":"User Settings","text":"

Users can access their account in the profile menu, including:

","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/#roles","title":"Roles","text":"

Users can be granted roles with their respective permission sets at the Account and Workspace levels.

","tags":["Prefect Cloud","Users"],"boost":2},{"location":"cloud/users/api-keys/","title":"Manage Prefect Cloud API Keys","text":"

API keys enable you to authenticate a local environment to work with Prefect Cloud. See Configure execution environment for details on how API keys are configured in your execution environment.

See Configure Local Environments for details on setting up access to Prefect Cloud from a local development environment or remote execution environment.
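
As a quick illustration, one common pattern is to provide the PREFECT_API_URL and PREFECT_API_KEY settings to the process that runs your flows; this is a minimal sketch with placeholder values, and in practice you would typically set these with prefect config set or as environment variables in your execution environment rather than in code:

import os\n\n# Placeholder values: substitute your workspace API URL and an API key created in the UI\nos.environ[\"PREFECT_API_URL\"] = \"https://api.prefect.cloud/api/accounts/<account-id>/workspaces/<workspace-id>\"\nos.environ[\"PREFECT_API_KEY\"] = \"pnu_xxxxxxxxxxxx\"\n\n# Flow runs started in this process will now report to your Prefect Cloud workspace\n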

","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/api-keys/#create-an-api-key","title":"Create an API key","text":"

To create an API key, select the account icon at the bottom-left corner of the UI and select your account name. This displays your account profile.

Select the API Keys tab. This displays a list of previously generated keys and lets you create new API keys or delete keys.

Select the + button to create a new API key. You're prompted to provide a name for the key and, optionally, an expiration date. Select Create API Key to generate the key.

Note that API keys cannot be revealed again in the UI after you generate them, so copy the key to a secure location.

","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/api-keys/#log-into-prefect-cloud-with-an-api-key","title":"Log into Prefect Cloud with an API Key","text":"
prefect cloud login -k '<my-api-key>'\n
","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/api-keys/#service-account-api-keys","title":"Service account API keys","text":"

Service accounts are a feature of Prefect Cloud organizations that enable you to create a Prefect Cloud API key that is not associated with a user account.

Service accounts are typically used to configure API access for running agents or executing flow runs on remote infrastructure. Events and logs for flow runs in those environments are then associated with the service account rather than a user, and API access may be managed or revoked by configuring or removing the service account without disrupting user access.

See the service accounts documentation for more information about creating and managing service accounts in a Prefect Cloud organization.

","tags":["Prefect Cloud","API keys","configuration"],"boost":2},{"location":"cloud/users/audit-log/","title":"Audit Log","text":"

Prefect Cloud's Organization and Enterprise plans offer enhanced compliance and transparency tools with Audit Log. Audit logs provide a chronological record of activities performed by Prefect Cloud users in your organization, allowing you to monitor detailed Prefect Cloud actions for security and compliance purposes.

Audit logs enable you to identify who took what action, when, and using what resources within your Prefect Cloud organization. In conjunction with appropriate tools and procedures, audit logs can assist in detecting potential security violations and investigating application errors.

Audit logs can be used to identify changes in:

See the Prefect Cloud plans to learn more about options for supporting audit logs.

","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"cloud/users/audit-log/#viewing-audit-logs","title":"Viewing audit logs","text":"

Within your organization, select the Audit Log page to view audit logs.

Organization admins can view audit logs for:

Admins can filter audit logs on multiple dimensions to restrict the results they see by workspace, user, or event type. Available audit log events are displayed in the Events drop-down menu.

Audit logs may also be filtered by date range. Audit log retention period varies by Prefect Cloud plan. See your organization profile page for the current audit log retention period.

","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","audit logs","compliance"],"boost":2},{"location":"cloud/users/roles/","title":"User and Service Account Roles","text":"

Organizations in Prefect Cloud let you give members access to the appropriate Prefect functionality within your organization and within specific workspaces.

Role-based access control (RBAC) functionality in Prefect Cloud enables you to assign users granular permissions to perform certain activities within an organization or a workspace.

To give users access to functionality beyond the scope of Prefect\u2019s built-in workspace roles, you may also create custom roles for users.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#built-in-roles","title":"Built-in roles","text":"

Roles give users abilities at either the organization level or at the individual workspace level.

The following sections outline the abilities of the built-in, Prefect-defined organizational and workspace roles.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#organization-level-roles","title":"Organization-level roles","text":"

The following built-in roles have permissions across an organization in Prefect Cloud.

Role Abilities Admin \u2022 Set/change all organization profile settings allowed to be set/changed by a Prefect user. \u2022 Add and remove organization members, and their organization roles. \u2022 Create and delete service accounts in the organization. \u2022 Create workspaces in the organization. \u2022 Implicit workspace owner access on all workspaces in the organization. Member \u2022 View organization profile settings. \u2022 View workspaces I have access to in the organization. \u2022 View organization members and their roles. \u2022 View service accounts in the organization.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#workspace-level-roles","title":"Workspace-level roles","text":"

The following built-in roles have permissions within a given workspace in Prefect Cloud.

Role Abilities Viewer \u2022 View flow runs within a workspace. \u2022 View deployments within a workspace. \u2022 View all work pools within a workspace. \u2022 View all blocks within a workspace. \u2022 View all automations within a workspace. \u2022 View workspace handle and description. Runner All Viewer abilities, plus: \u2022 Run deployments within a workspace. Developer All Runner abilities, plus: \u2022 Run flows within a workspace. \u2022 Delete flow runs within a workspace. \u2022 Create, edit, and delete deployments within a workspace. \u2022 Create, edit, and delete work pools within a workspace. \u2022 Create, edit, and delete all blocks and their secrets within a workspace. \u2022 Create, edit, and delete automations within a workspace. \u2022 View all workspace settings. Owner All Developer abilities, plus: \u2022 Add and remove organization members, and set their role within a workspace. \u2022 Set the workspace\u2019s default workspace role for all users in the organization. \u2022 Set, view, edit workspace settings. Worker The minimum scopes required for a worker to poll for and submit work.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#custom-workspace-roles","title":"Custom workspace roles","text":"

The built-in roles will serve the needs of most users, but your team may need to configure custom roles, giving users access to specific permissions within a workspace.

Custom roles can inherit permissions from a built-in role. This enables tweaks to meet your organization\u2019s needs, while ensuring users can still benefit from Prefect\u2019s default workspace role permission curation as new functionality becomes available.

Custom workspace roles can also be created independent of Prefect\u2019s built-in roles. This option gives workspace admins full control of user access to workspace functionality. However, for non-inherited custom roles, the workspace admin takes on the responsibility for monitoring and setting permissions for new functionality as it is released.

See Role permissions for details of permissions you may set for custom roles.

After you create a new role, it becomes available on the organization Members page and the Workspace Sharing page for you to apply to users.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#inherited-roles","title":"Inherited roles","text":"

A custom role may be configured as an Inherited Role. Using an inherited role allows you to create a custom role using a set of initial permissions associated with a built-in Prefect role. Additional permissions can be added to the custom role. Permissions included in the inherited role cannot be removed.

Custom roles created using an inherited role will follow Prefect's default workspace role permission curation as new functionality becomes available.

To configure an inherited role when configuring a custom role, select the Inherit permission from a default role check box, then select the role from which the new role should inherit permissions.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#workspace-role-permissions","title":"Workspace role permissions","text":"

The following permissions are available for custom roles.

","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#automations","title":"Automations","text":"Permission Description View automations User can see configured automations within a workspace. Create, edit, and delete automations User can create, edit, and delete automations within a workspace. Includes permissions of View automations.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#blocks","title":"Blocks","text":"Permission Description View blocks User can see configured blocks within a workspace. View secret block data User can see configured blocks and their secrets within a workspace. Includes permissions of\u00a0View blocks. Create, edit, and delete blocks User can create, edit, and delete blocks within a workspace. Includes permissions of View blocks and View secret block data.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#deployments","title":"Deployments","text":"Permission Description View deployments User can see configured deployments within a workspace. Run deployments User can run deployments within a workspace. This does not give a user permission to execute the flow associated with the deployment. This only gives a user (via their key) the ability to run a deployment \u2014 another user/key must actually execute that flow, such as a service account with an appropriate role. Includes permissions of View deployments. Create and edit deployments User can create and edit deployments within a workspace. Includes permissions of View deployments and Run deployments. Delete deployments User can delete deployments within a workspace. Includes permissions of View deployments, Run deployments, and Create and edit deployments.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#flows","title":"Flows","text":"Permission Description View flows and flow runs User can see flows and flow runs within a workspace. Create, update, and delete saved search filters User can create, update, and delete saved flow run search filters configured within a workspace. Includes permissions of View flows and flow runs. Create, update, and run flows User can create, update, and run flows within a workspace. Includes permissions of View flows and flow runs. Delete flows User can delete flows within a workspace. Includes permissions of View flows and flow runs and Create, update, and run flows.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#notifications","title":"Notifications","text":"Permission Description View notification policies User can see notification policies configured within a workspace. Create and edit notification policies User can create and edit notification policies configured within a workspace. Includes permissions of View notification policies. Delete notification policies User can delete notification policies configured within a workspace. 
Includes permissions of View notification policies and Create and edit notification policies.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#task-run-concurrency","title":"Task run concurrency","text":"Permission Description View concurrency limits User can see configured task run concurrency limits within a workspace. Create, edit, and delete concurrency limits User can create, edit, and delete task run concurrency limits within a workspace. Includes permissions of View concurrency limits.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#work-pools","title":"Work pools","text":"Permission Description View work pools User can see work pools configured within a workspace. Create, edit, and pause work pools User can create, edit, and pause work pools configured within a workspace. Includes permissions of View work pools. Delete work pools User can delete work pools configured within a workspace. Includes permissions of View work pools and Create, edit, and pause work pools.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/roles/#workspace-management","title":"Workspace management","text":"Permission Description View information about workspace service accounts User can see service accounts configured within a workspace. View information about workspace users User can see user accounts for users invited to the workspace. View workspace settings User can see settings configured within a workspace. Edit workspace settings User can edit settings for a workspace. Includes permissions of View workspace settings. Delete the workspace User can delete a workspace. Includes permissions of View workspace settings and Edit workspace settings.","tags":["UI","dashboard","Prefect Cloud","accounts","teams","workspaces","organizations","custom roles","RBAC"],"boost":2},{"location":"cloud/users/service-accounts/","title":"Service Accounts","text":"

Service accounts enable you to create a Prefect Cloud API key that is not associated with a user account. Service accounts are typically used to configure API access for running agents or executing deployment flow runs on remote infrastructure.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/service-accounts/#service-accounts-overview","title":"Service accounts overview","text":"

Service accounts are non-user organization accounts that have the following:

Using service account credentials, you can configure an execution environment to interact with your Prefect Cloud organization workspaces without a user having to manually log in from that environment. Service accounts may be created, added to workspaces, have their roles changed, or deleted without affecting organization user accounts.

Select Service Accounts to view, create, or edit service accounts for your organization.

Service accounts are created at the organization level, but individual workspaces within the organization may be shared with the account. See workspace sharing for more information.

Service account credentials

When you create a service account, Prefect Cloud creates a new API key for the account and provides the API configuration command for the execution environment. Save these to a safe location for future use. If the access credentials are lost or compromised, you should regenerate the credentials from the service account page.

Service account roles

Service accounts are created at the organization level, and may then become members of workspaces within the organization.

A service account may only be a Member of an organization. It can never be an organization Admin. You may apply any valid workspace-level role to a service account.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/service-accounts/#create-a-service-account","title":"Create a service account","text":"

Within your organization, on the Service Accounts page, select the + icon to create a new service account. You'll be prompted to configure:

Service account roles

A service account may only be a Member of an organization. You may apply any valid workspace-level role to a service account when it is added to a workspace.

Select Create to create the new service account.

Note that API keys cannot be revealed again in the UI after you generate them, so copy the key to a secure location.

You can change the API key and expiration for a service account by rotating the API key. Select Rotate API Key from the menu on the left side of the service account's information on this page.

To delete a service account, select Remove from the menu on the left side of the service account's information.

","tags":["UI","Prefect Cloud","workspaces","deployments"],"boost":2},{"location":"cloud/users/sso/","title":"Single Sign-on (SSO)","text":"

Prefect Cloud's Organization and Enterprise plans offer single sign-on (SSO) integration with your team\u2019s identity provider. SSO integration can be set up with any identity provider that supports:

When using SSO, Prefect Cloud won't store passwords for any accounts managed by your identity provider. Members of your Prefect Cloud organization will instead log in to the organization and authenticate using your identity provider.

Once your SSO integration has been set up, non-admins will be required to authenticate through the SSO provider when accessing organization resources.

See the Prefect Cloud plans to learn more about options for supporting more users and workspaces, service accounts, and SSO.

","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/sso/#configuring-sso","title":"Configuring SSO","text":"

Within your organization, select the SSO page to enable SSO for users.

If you haven't enabled SSO for a domain yet, enter the email domains for which you want to configure SSO in Prefect Cloud. Select Save to accept the domains.

Under Enabled Domains, select the domains from the Domains list, then select Generate Link. This step creates a link you can use to configure SSO with your identity provider.

Using the provided link, navigate to the Identity Provider Configuration dashboard and select your identity provider to continue configuration. If your provider isn't listed, you can continue with the SAML or OpenID Connect options instead.

Once you complete SSO configuration, your users will be required to authenticate via your identity provider when accessing organization resources, giving you full control over application access.

","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"cloud/users/sso/#directory-sync","title":"Directory sync","text":"

Directory sync automatically provisions and de-provisions users for your organization.

Provisioned users are given basic \u201cMember\u201d roles and will have access to any resources that role entails.

When a user is unassigned from the Prefect Cloud application in your identity provider, they will automatically lose access to Prefect Cloud resources, allowing your IT team to control access to Prefect Cloud without ever signing into the app.

","tags":["UI","dashboard","Prefect Cloud","enterprise","teams","workspaces","organizations","single sign-on","SSO","authentication"],"boost":2},{"location":"community/","title":"Community","text":"

There are many ways to get involved with the Prefect community.

","tags":["community","Slack","Discourse"],"boost":2},{"location":"concepts/","title":"Explore Prefect concepts","text":"","tags":["concepts","features","overview"],"boost":2},{"location":"concepts/#develop","title":"Develop","text":"Keyword Description Flows A Prefect workflow, modeled as a Python function. Tasks Discrete units of work in a Prefect workflow. Results The data returned by a flow or a task. Artifacts Formatted outputs rendered in the Prefect UI, such as markdown, tables, or links. States The status of a particular task run or flow run. Task Runners Enable you to engage specific executors for Prefect tasks, such as concurrent, parallel, or distributed execution of tasks. Runtime Context Information about the current flow or task run that you can refer to in your code. Profiles & Configuration Settings you can use to interact with Prefect Cloud and a Prefect server. Blocks Prefect primitives that enable the storage of configuration and provide a UI interface. Variables Named, mutable string values, much like environment variables.","tags":["concepts","features","overview"],"boost":2},{"location":"concepts/#deploy","title":"Deploy","text":"Keyword Description Deployments A server-side concept that encapsulates a flow, allowing it to be scheduled and triggered via API. Deployment Management A minimally opinionated set of files that describe how to prepare one or more flow deployments. Work Pools, Workers & Agents Bridge the Prefect orchestration environment with your execution environment. Storage Lets you configure how flow code for deployments is persisted and retrieved by Prefect agents. Filesystems Blocks that allow you to read and write data from paths. Infrastructure Blocks that specify infrastructure for flow runs created by the deployment. Schedules Tell the Prefect API how to create new flow runs for you automatically on a specified cadence. Logging Log a variety of useful information about your flow and task runs on the server.

Features specific to Prefect Cloud are in their own subheading.

","tags":["concepts","features","overview"],"boost":2},{"location":"concepts/artifacts/","title":"Artifacts","text":"

Artifacts are persisted outputs such as tables, files, or links. They can be published via the Prefect SDK or REST API. They are stored on Prefect Cloud or a Prefect server instance and rendered in the Prefect UI. Artifacts make it easy to track and monitor the objects that your flows produce and update over time.

Published artifacts may be associated with a particular task run, flow run, or outside a flow run context. Artifacts provide a richer way to present information relative to typical logging practices \u2014 including the ability to display tables, Markdown, and links to external data.

","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#artifacts-overview","title":"Artifacts Overview","text":"

Whether you're publishing links, markdown, or tables, artifacts provide a powerful and flexible way to showcase data within your workflow. With artifacts, you can easily manage and share information with your team, providing valuable insights and context.

Common use cases for artifacts include:

","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#creating-artifacts","title":"Creating Artifacts","text":"

Creating artifacts allows you to publish data from task and flow runs or outside of a flow run context. Currently, you can render three artifact types: links, markdown, and tables.

Artifacts render individually

Please note that every artifact created within a task will be displayed as an individual artifact in the Prefect UI. This means that each call to create_link_artifact() or create_markdown_artifact() generates a distinct artifact.

Unlike the print() command, where you can combine the output of multiple calls into a single report, each call to one of these artifact functions creates a separate artifact; to produce several artifacts from a task, call the function once for each.

To create artifacts like reports or summaries using create_markdown_artifact(), compile your message string separately and then pass it to create_markdown_artifact() to create the complete artifact.

","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#creating-link-artifacts","title":"Creating Link Artifacts","text":"

To create a link artifact, use the create_link_artifact() function. To create multiple versions of the same artifact and/or view them on the Artifacts page of the Prefect UI, provide a key argument to the create_link_artifact() function to track an artifact's history over time. Without a key, the artifact will only be visible in the artifacts tab of the associated flow run or task run.

from prefect import flow, task\nfrom prefect.artifacts import create_link_artifact\n\n@task\ndef my_first_task():\n        create_link_artifact(\n            key=\"irregular-data\",\n            link=\"https://nyc3.digitaloceanspaces.com/my-bucket-name/highly_variable_data.csv\",\n            description=\"## Highly variable data\",\n        )\n\n@task\ndef my_second_task():\n        create_link_artifact(\n            key=\"irregular-data\",\n            link=\"https://nyc3.digitaloceanspaces.com/my-bucket-name/low_pred_data.csv\",\n            description=\"# Low prediction accuracy\",\n        )\n\n@flow\ndef my_flow():\n    my_first_task()\n    my_second_task()\n\nif __name__ == \"__main__\":\n    my_flow()\n

Tip

You can specify multiple artifacts with the same key to more easily track something very specific that you care about, such as irregularities in your data pipeline.

After running the above flow, you can find your new artifacts on the Artifacts page of the UI. You can click into the \"irregular-data\" artifact and see all versions of it, along with custom descriptions and links to the relevant data.

Here, you'll also be able to view information about your artifact such as its associated flow run or task run id, previous and future versions of the artifact (multiple artifacts can have the same key in order to show lineage), the data you've stored (in this case a markdown-rendered link), an optional markdown description, and when the artifact was created or updated.

To make the links more readable for you and your collaborators, you can pass in a link_text argument for your link artifacts:

from prefect import flow\nfrom prefect.artifacts import create_link_artifact\n\n@flow\ndef my_flow():\n    create_link_artifact(\n        key=\"my-important-link\",\n        link=\"https://www.prefect.io/\",\n        link_text=\"Prefect\",\n    )\n\nif __name__ == \"__main__\":\n    my_flow()\n
In the above example, the create_link_artifact method is used within a flow to create a link artifact with a key of my-important-link. The link parameter is used to specify the external resource to be linked to, and link_text is used to specify the text to be displayed for the link. An optional description could also be added for context.

","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#creating-markdown-artifacts","title":"Creating Markdown Artifacts","text":"

To create a markdown artifact, you can use the create_markdown_artifact() function. To create multiple versions of the same artifact and/or view them on the Artifacts page of the Prefect UI, provide a key argument to the create_markdown_artifact() function to track an artifact's history over time. Without a key, the artifact will only be visible in the artifacts tab of the associated flow run or task run.

from prefect import flow, task\nfrom prefect.artifacts import create_markdown_artifact\n\n@task\ndef markdown_task():\n    na_revenue = 500000\n    markdown_report = f\"\"\"# Sales Report\n\n## Summary\n\nIn the past quarter, our company saw a significant increase in sales, with a total revenue of $1,000,000. \nThis represents a 20% increase over the same period last year.\n\n## Sales by Region\n\n| Region        | Revenue |\n|:--------------|-------:|\n| North America | ${na_revenue:,} |\n| Europe        | $250,000 |\n| Asia          | $150,000 |\n| South America | $75,000 |\n| Africa        | $25,000 |\n\n## Top Products\n\n1. Product A - $300,000 in revenue\n2. Product B - $200,000 in revenue\n3. Product C - $150,000 in revenue\n\n## Conclusion\n\nOverall, these results are very encouraging and demonstrate the success of our sales team in increasing revenue \nacross all regions. However, we still have room for improvement and should focus on further increasing sales in \nthe coming quarter.\n\"\"\"\n    create_markdown_artifact(\n        key=\"gtm-report\",\n        markdown=markdown_report,\n        description=\"Quarterly Sales Report\",\n    )\n\n@flow()\ndef my_flow():\n    markdown_task()\n\n\nif __name__ == \"__main__\":\n    my_flow()\n

After running the above flow, you should see your \"gtm-report\" artifact in the Artifacts page of the UI. As with all artifacts, you'll be able to view the associated flow run or task run id, previous and future versions of the artifact, your rendered markdown data, and your optional markdown description.

","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#create-table-artifacts","title":"Create Table Artifacts","text":"

You can create a table artifact by calling create_table_artifact(). To create multiple versions of the same artifact and/or view them on the Artifacts page of the Prefect UI, provide a key argument to the create_table_artifact() function to track an artifact's history over time. Without a key, the artifact will only be visible in the artifacts tab of the associated flow run or task run.

Note

The create_table_artifact() function accepts a table argument, which can be provided as either a list of lists, a list of dictionaries, or a dictionary of lists.

from prefect.artifacts import create_table_artifact\n\ndef my_fn():\n    highest_churn_possibility = [\n       {'customer_id':'12345', 'name': 'John Smith', 'churn_probability': 0.85 }, \n       {'customer_id':'56789', 'name': 'Jane Jones', 'churn_probability': 0.65 } \n    ]\n\n    create_table_artifact(\n        key=\"personalized-reachout\",\n        table=highest_churn_possibility,\n        description= \"# Marvin, please reach out to these customers today!\"\n    )\n\nif __name__ == \"__main__\":\n    my_fn()\n

As this example shows, you don't need to create an artifact within a flow run context; you can create artifacts wherever you like and still view them in the Prefect UI.

","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#managing-artifacts","title":"Managing Artifacts","text":"","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#reading-artifacts","title":"Reading Artifacts","text":"

In the Prefect UI, you can view all of the latest versions of your artifacts and click into a specific artifact to see its lineage over time. Additionally, you can inspect all versions of an artifact with a given key by running:

$ prefect artifact inspect <my-key>\n

or view all artifacts by running:

$ prefect artifact ls\n

You can also use the Prefect REST API to programmatically filter your results.

","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#deleting-artifacts","title":"Deleting Artifacts","text":"

You can delete a specific artifact with a given key or ID directly from the CLI:

$ prefect artifact delete <my-key>\n
$ prefect artifact delete --id <my-id>\n

Alternatively, you can delete artifacts using the Prefect REST API.
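
For example, here is a minimal sketch of deleting an artifact over the REST API with the requests library; the DELETE /artifacts/{id} path is an assumption for illustration, and the account, workspace, key, and artifact ID values below are placeholders:

import requests\n\nPREFECT_API_URL = \"https://api.prefect.cloud/api/accounts/abc/workspaces/xyz\"\nPREFECT_API_KEY = \"pnu_ghijk\"\nartifact_id = \"<artifact-id>\"\n\nheaders = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\n# Assumed endpoint: DELETE {PREFECT_API_URL}/artifacts/{artifact_id}\nresponse = requests.delete(f\"{PREFECT_API_URL}/artifacts/{artifact_id}\", headers=headers)\nassert response.status_code in (200, 204)\n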

","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/artifacts/#artifacts-api","title":"Artifacts API","text":"

Prefect provides the Prefect REST API to allow you to create, read, and delete artifacts programmatically. With the Artifacts API, you can automate the creation and management of artifacts as part of your workflow.

For example, to read the 5 most recently created markdown, table, and link artifacts, you can do the following:

import requests\n\nPREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/abc/workspaces/xyz\"\nPREFECT_API_KEY=\"pnu_ghijk\"\ndata = {\n    \"sort\": \"CREATED_DESC\",\n    \"limit\": 5,\n    \"artifacts\": {\n        \"key\": {\n            \"exists_\": True\n        }\n    }\n}\n\nheaders = {\"Authorization\": f\"Bearer {PREFECT_API_KEY}\"}\nendpoint = f\"{PREFECT_API_URL}/artifacts/filter\"\n\nresponse = requests.post(endpoint, headers=headers, json=data)\nassert response.status_code == 200\nfor artifact in response.json():\n    print(artifact)\n

If you don't specify a key, or don't require that a key exists, you will also return results, which are a type of key-less artifact.

See the rest of the Prefect REST API documentation on artifacts for more information!

","tags":["artifacts","UI","Markdown"],"boost":2},{"location":"concepts/blocks/","title":"Blocks","text":"

Blocks are a primitive within Prefect that enable the storage of configuration and provide an interface for interacting with external systems.

With blocks, you can securely store credentials for authenticating with services like AWS, GitHub, Slack, and any other system you'd like to orchestrate with Prefect.

Blocks expose methods that provide pre-built functionality for performing actions against an external system. They can be used to download data from or upload data to an S3 bucket, query data from or write data to a database, or send a message to a Slack channel.

You may configure blocks through code or via the Prefect Cloud and the Prefect server UI.

You can access blocks for both configuring flow deployments and directly from within your flow code.

Prefect provides some built-in block types that you can use right out of the box. Additional blocks are available through Prefect Integrations. To use these blocks, pip install the integration package, then register the blocks you want to use with Prefect Cloud or a Prefect server.

The Prefect Cloud and Prefect server UIs display a library of block types that you can configure for use in your flows.

Blocks and parameters

Blocks are useful for configuration that needs to be shared across flow runs and between flows.

For configuration that will change between flow runs, we recommend using parameters.
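
To make the distinction concrete, here is a minimal sketch in which configuration shared across runs is loaded from a saved block document while per-run configuration arrives as a flow parameter; the pipeline-settings block document name and the batch_size parameter are illustrative:

from prefect import flow\nfrom prefect.blocks.system import JSON\n\n@flow\ndef process(batch_size: int = 100):  # per-run configuration: a flow parameter\n    # shared configuration: loaded from a previously saved JSON block document\n    settings = JSON.load(\"pipeline-settings\")\n    print(settings.value, batch_size)\n\nif __name__ == \"__main__\":\n    process(batch_size=500)\n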

","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#prefect-built-in-blocks","title":"Prefect built-in blocks","text":"

Prefect provides a broad range of commonly used, built-in block types. These block types are available in Prefect Cloud and the Prefect server UI.

Block Slug Description Azure azure Store data as a file on Azure Datalake and Azure Blob Storage. Date Time date-time A block that represents a datetime. Docker Container docker-container Runs a command in a container. Docker Registry docker-registry Connects to a Docker registry. Requires a Docker Engine to be connectable. GCS gcs Store data as a file on Google Cloud Storage. GitHub github Interact with files stored on public GitHub repositories. JSON json A block that represents JSON. Kubernetes Cluster Config kubernetes-cluster-config Stores configuration for interaction with Kubernetes clusters. Kubernetes Job kubernetes-job Runs a command as a Kubernetes Job. Local File System local-file-system Store data as a file on a local file system. Microsoft Teams Webhook ms-teams-webhook Enables sending notifications via a provided Microsoft Teams webhook. Opsgenie Webhook opsgenie-webhook Enables sending notifications via a provided Opsgenie webhook. Pager Duty Webhook pager-duty-webhook Enables sending notifications via a provided PagerDuty webhook. Process process Run a command in a new process. Remote File System remote-file-system Store data as a file on a remote file system. Supports any remote file system supported by fsspec. S3 s3 Store data as a file on AWS S3. Secret secret A block that represents a secret value. The value stored in this block will be obfuscated when this block is logged or shown in the UI. Slack Webhook slack-webhook Enables sending notifications via a provided Slack webhook. SMB smb Store data as a file on a SMB share. String string A block that represents a string. Twilio SMS twilio-sms Enables sending notifications via Twilio SMS. Webhook webhook Block that enables calling webhooks.","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#blocks-in-prefect-integrations","title":"Blocks in Prefect Integrations","text":"

Blocks can also be created by anyone and shared with the community. You'll find blocks that are available for consumption in many of the published Prefect Integrations. The following table provides an overview of the blocks available from our most popular Prefect Integrations.

Integration Block Slug prefect-airbyte Airbyte Connection airbyte-connection prefect-airbyte Airbyte Server airbyte-server prefect-aws AWS Credentials aws-credentials prefect-aws ECS Task ecs-task prefect-aws MinIO Credentials minio-credentials prefect-aws S3 Bucket s3-bucket prefect-azure Azure Blob Storage Credentials azure-blob-storage-credentials prefect-azure Azure Container Instance Credentials azure-container-instance-credentials prefect-azure Azure Container Instance Job azure-container-instance-job prefect-azure Azure Cosmos DB Credentials azure-cosmos-db-credentials prefect-azure AzureML Credentials azureml-credentials prefect-bitbucket BitBucket Credentials bitbucket-credentials prefect-bitbucket BitBucket Repository bitbucket-repository prefect-census Census Credentials census-credentials prefect-census Census Sync census-sync prefect-databricks Databricks Credentials databricks-credentials prefect-dbt dbt CLI BigQuery Target Configs dbt-cli-bigquery-target-configs prefect-dbt dbt CLI Profile dbt-cli-profile prefect-dbt dbt Cloud Credentials dbt-cloud-credentials prefect-dbt dbt CLI Global Configs dbt-cli-global-configs prefect-dbt dbt CLI Postgres Target Configs dbt-cli-postgres-target-configs prefect-dbt dbt CLI Snowflake Target Configs dbt-cli-snowflake-target-configs prefect-dbt dbt CLI Target Configs dbt-cli-target-configs prefect-docker Docker Host docker-host prefect-docker Docker Registry Credentials docker-registry-credentials prefect-email Email Server Credentials email-server-credentials prefect-firebolt Firebolt Credentials firebolt-credentials prefect-firebolt Firebolt Database firebolt-database prefect-gcp BigQuery Warehouse bigquery-warehouse prefect-gcp GCP Cloud Run Job cloud-run-job prefect-gcp GCP Credentials gcp-credentials prefect-gcp GcpSecret gcpsecret prefect-gcp GCS Bucket gcs-bucket prefect-gcp Vertex AI Custom Training Job vertex-ai-custom-training-job prefect-github GitHub Credentials github-credentials prefect-github GitHub Repository github-repository prefect-gitlab GitLab Credentials gitlab-credentials prefect-gitlab GitLab Repository gitlab-repository prefect-hex Hex Credentials hex-credentials prefect-hightouch Hightouch Credentials hightouch-credentials prefect-kubernetes Kubernetes Credentials kubernetes-credentials prefect-monday Monday Credentials monday-credentials prefect-monte-carlo Monte Carlo Credentials monte-carlo-credentials prefect-openai OpenAI Completion Model openai-completion-model prefect-openai OpenAI Image Model openai-image-model prefect-openai OpenAI Credentials openai-credentials prefect-slack Slack Credentials slack-credentials prefect-slack Slack Incoming Webhook slack-incoming-webhook prefect-snowflake Snowflake Connector snowflake-connector prefect-snowflake Snowflake Credentials snowflake-credentials prefect-sqlalchemy Database Credentials database-credentials prefect-sqlalchemy SQLAlchemy Connector sqlalchemy-connector prefect-twitter Twitter Credentials twitter-credentials","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#using-existing-block-types","title":"Using existing block types","text":"

Blocks are classes that subclass the Block base class. They can be instantiated and used like normal classes.

","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#instantiating-blocks","title":"Instantiating blocks","text":"

For example, to instantiate a block that stores a JSON value, use the JSON block:

from prefect.blocks.system import JSON\n\njson_block = JSON(value={\"the_answer\": 42})\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#saving-blocks","title":"Saving blocks","text":"

If this JSON value needs to be retrieved later for use within a flow or task, we can use the .save() method on the block to store the value in a block document on the Prefect database:

json_block.save(name=\"life-the-universe-everything\")\n

Utilizing the UI

Block documents can also be created and updated via the Prefect UI.

","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#loading-blocks","title":"Loading blocks","text":"

The name given when saving the value stored in the JSON block can be used when retrieving the value during a flow or task run:

from prefect import flow\nfrom prefect.blocks.system import JSON\n\n@flow\ndef what_is_the_answer():\n    json_block = JSON.load(\"life-the-universe-everything\")\n    print(json_block.value[\"the_answer\"])\n\nwhat_is_the_answer() # 42\n

Blocks can also be loaded with a unique slug that is a combination of a block type slug and a block document name.

To load our JSON block document from before, we can run the following:

from prefect.blocks.core import Block\n\njson_block = Block.load(\"json/life-the-universe-everything\")\nprint(json_block.value[\"the_answer\"]) # 42\n

Sharing Blocks

Blocks can also be loaded by fellow Workspace Collaborators, available on Prefect Cloud.

","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#deleting-blocks","title":"Deleting blocks","text":"

You can delete a block by using the .delete() method on the block:

from prefect.blocks.core import Block\nBlock.delete(\"json/life-the-universe-everything\")\n

You can also use the CLI to delete specific blocks with a given slug or id:

prefect block delete json/life-the-universe-everything\n
prefect block delete --id <my-id>\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#creating-new-block-types","title":"Creating new block types","text":"

To create a custom block type, define a class that subclasses Block. The Block base class builds off of Pydantic's BaseModel, so custom blocks can be declared in the same manner as a Pydantic model.

Here's a block that represents a cube and holds information about the length of each edge in inches:

from prefect.blocks.core import Block\n\nclass Cube(Block):\n    edge_length_inches: float\n

You can also include methods on a block to provide useful functionality. Here's the same cube block with methods to calculate the volume and surface area of the cube:

from prefect.blocks.core import Block\n\nclass Cube(Block):\n    edge_length_inches: float\n\n    def get_volume(self):\n        return self.edge_length_inches**3\n\n    def get_surface_area(self):\n        return 6 * self.edge_length_inches**2\n

Now the Cube block can be used to store different cube configurations that can later be used in a flow:

from prefect import flow\n\nrubiks_cube = Cube(edge_length_inches=2.25)\nrubiks_cube.save(\"rubiks-cube\")\n\n@flow\ndef calculate_cube_surface_area(cube_name):\n    cube = Cube.load(cube_name)\n    print(cube.get_surface_area())\n\ncalculate_cube_surface_area(\"rubiks-cube\") # 30.375\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#secret-fields","title":"Secret fields","text":"

All block values are encrypted before being stored, but if you have values that you would not like visible in the UI or in logs, then you can use the SecretStr field type provided by Pydantic to automatically obfuscate those values. This can be useful for fields that are used to store credentials like passwords and API tokens.

Here's an example of an AWSCredentials block that uses SecretStr:

from typing import Optional\n\nfrom prefect.blocks.core import Block\nfrom pydantic import SecretStr\n\nclass AWSCredentials(Block):\n    aws_access_key_id: Optional[str] = None\n    aws_secret_access_key: Optional[SecretStr] = None\n    aws_session_token: Optional[str] = None\n    profile_name: Optional[str] = None\n    region_name: Optional[str] = None\n

Because aws_secret_access_key has the SecretStr type hint assigned to it, the value of that field will not be exposed if the object is logged:

aws_credentials_block = AWSCredentials(\n    aws_access_key_id=\"AKIAJKLJKLJKLJKLJKLJK\",\n    aws_secret_access_key=\"secret_access_key\"\n)\n\nprint(aws_credentials_block)\n# aws_access_key_id='AKIAJKLJKLJKLJKLJKLJK' aws_secret_access_key=SecretStr('**********') aws_session_token=None profile_name=None region_name=None\n
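
When your code does need the raw value (for example, to pass it to a client library), you can unwrap it with the get_secret_value() method that Pydantic provides on SecretStr fields. A minimal sketch using the block from above:

secret = aws_credentials_block.aws_secret_access_key\nif secret is not None:\n    # The raw value is only revealed when explicitly requested\n    raw_key = secret.get_secret_value()\n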

You can also use the SecretDict field type provided by Prefect. This type allows you to add a dictionary field to your block with values at all levels automatically obfuscated in the UI and in logs. This is useful for blocks where the typing or structure of secret fields is not known until configuration time.

Here's an example of a block that uses SecretDict:

from typing import Dict\n\nfrom prefect.blocks.core import Block\nfrom prefect.blocks.fields import SecretDict\n\n\nclass SystemConfiguration(Block):\n    system_secrets: SecretDict\n    system_variables: Dict\n\n\nsystem_configuration_block = SystemConfiguration(\n    system_secrets={\n        \"password\": \"p@ssw0rd\",\n        \"api_token\": \"token_123456789\",\n        \"private_key\": \"<private key here>\",\n    },\n    system_variables={\n        \"self_destruct_countdown_seconds\": 60,\n        \"self_destruct_countdown_stop_time\": 7,\n    },\n)\n
system_secrets will be obfuscated when system_configuration_block is displayed, but system_variables will be shown in plain text:

print(system_configuration_block)\n# SystemConfiguration(\n#   system_secrets=SecretDict('{'password': '**********', 'api_token': '**********', 'private_key': '**********'}'), \n#   system_variables={'self_destruct_countdown_seconds': 60, 'self_destruct_countdown_stop_time': 7}\n# )\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#blocks-metadata","title":"Blocks metadata","text":"

The way that a block is displayed can be controlled by metadata fields that can be set on a block subclass.
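
For example, a custom block might set a display name, slug, logo, and description like this (a sketch; the Cube class comes from the earlier example and the logo URL is a made-up placeholder):

from prefect.blocks.core import Block\n\nclass Cube(Block):\n    # Metadata fields controlling how this block type is displayed in the UI\n    _block_type_name = \"Cube\"\n    _block_type_slug = \"cube\"\n    _logo_url = \"https://example.com/cube-logo.png\"  # hypothetical URL\n    _description = \"Stores the edge length of a cube, in inches.\"\n\n    edge_length_inches: float\n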

Available metadata fields include:

Property Description _block_type_name Display name of the block in the UI. Defaults to the class name. _block_type_slug Unique slug used to reference the block type in the API. Defaults to a lowercase, dash-delimited version of the block type name. _logo_url URL pointing to an image that should be displayed for the block type in the UI. Defaults to None. _description Short description of block type. Defaults to docstring, if provided. _code_example Short code snippet shown in UI for how to load/use block type. Defaults to the first example provided in the docstring of the class, if provided.","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#nested-blocks","title":"Nested blocks","text":"

Blocks are composable. This means that you can create a block that uses functionality from another block by declaring it as an attribute on the block that you're creating. It also means that each block's configuration can be changed independently, so configuration that changes on different time frames is easy to manage and shared configuration can be reused across multiple use cases.

To illustrate, here's an expanded AWSCredentials block that includes the ability to get an authenticated session via the boto3 library:

from typing import Optional\n\nimport boto3\nfrom prefect.blocks.core import Block\nfrom pydantic import SecretStr\n\nclass AWSCredentials(Block):\n    aws_access_key_id: Optional[str] = None\n    aws_secret_access_key: Optional[SecretStr] = None\n    aws_session_token: Optional[str] = None\n    profile_name: Optional[str] = None\n    region_name: Optional[str] = None\n\n    def get_boto3_session(self):\n        # SecretStr values must be unwrapped before being passed to boto3\n        secret_access_key = (\n            self.aws_secret_access_key.get_secret_value()\n            if self.aws_secret_access_key\n            else None\n        )\n        return boto3.Session(\n            aws_access_key_id=self.aws_access_key_id,\n            aws_secret_access_key=secret_access_key,\n            aws_session_token=self.aws_session_token,\n            profile_name=self.profile_name,\n            region_name=self.region_name,\n        )\n

The AWSCredentials block can be used within an S3Bucket block to provide authentication when interacting with an S3 bucket:

import io\n\nclass S3Bucket(Block):\n    bucket_name: str\n    credentials: AWSCredentials\n\n    def read(self, key: str) -> bytes:\n        s3_client = self.credentials.get_boto3_session().client(\"s3\")\n\n        stream = io.BytesIO()\n        s3_client.download_fileobj(Bucket=self.bucket_name, Key=key, Fileobj=stream)\n\n        stream.seek(0)\n        output = stream.read()\n\n        return output\n\n    def write(self, key: str, data: bytes) -> None:\n        s3_client = self.credentials.get_boto3_session().client(\"s3\")\n        stream = io.BytesIO(data)\n        s3_client.upload_fileobj(stream, Bucket=self.bucket_name, Key=key)\n

You can use this S3Bucket block with previously saved AWSCredentials block values in order to interact with the configured S3 bucket:

my_s3_bucket = S3Bucket(\n    bucket_name=\"my_s3_bucket\",\n    credentials=AWSCredentials.load(\"my_aws_credentials\")\n)\n\nmy_s3_bucket.save(\"my_s3_bucket\")\n

Saving block values like this links the two blocks: any changes to the values stored for the AWSCredentials block named my_aws_credentials will be reflected the next time the block values for the S3Bucket block named my_s3_bucket are loaded.

Values for nested blocks can also be hard-coded by defining the child block inline rather than saving it first:

my_s3_bucket = S3Bucket(\n    bucket_name=\"my_s3_bucket\",\n    credentials=AWSCredentials(\n        aws_access_key_id=\"AKIAJKLJKLJKLJKLJKLJK\",\n        aws_secret_access_key=\"secret_access_key\"\n    )\n)\n\nmy_s3_bucket.save(\"my_s3_bucket\")\n

In the above example, the values for AWSCredentials are saved with my_s3_bucket and will not be usable with any other blocks.

","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#handling-updates-to-custom-block-types","title":"Handling updates to custom Block types","text":"

Let's say that you now want to add a bucket_folder field to your custom S3Bucket block that represents the default path to read and write objects from (this field exists on our implementation).

We can add the new field to the class definition:

class S3Bucket(Block):\n    bucket_name: str\n    credentials: AWSCredentials\n    bucket_folder: str = None\n    ...\n

Then register the updated block type with either Prefect Cloud or your self-hosted Prefect server.

If you have any existing blocks of this type that were created before the update and you'd prefer to not re-create them, you can migrate them to the new version of your block type by adding the missing values:

# Bypass Pydantic validation to allow your local Block class to load the old block version\nmy_s3_bucket_block = S3Bucket.load(\"my-s3-bucket\", validate=False)\n\n# Set the new field to an appropriate value\nmy_s3_bucket_block.bucket_folder = \"my-default-bucket-path\"\n\n# Overwrite the old block values and update the expected fields on the block\nmy_s3_bucket_block.save(\"my-s3-bucket\", overwrite=True)\n
","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/blocks/#registering-blocks-for-use-in-the-prefect-ui","title":"Registering blocks for use in the Prefect UI","text":"

Blocks can be registered from a Python module available in the current virtual environment with a CLI command like this:

$ prefect block register --module prefect_aws.credentials\n

This command is useful for registering all blocks found in the credentials module within Prefect Integrations.

Or, if a block has been created in a .py file, the block can also be registered with the CLI command:

$ prefect block register --file my_block.py\n

The registered block will then be available in the Prefect UI for configuration.

","tags":["blocks","storage","secrets","configuration","infrastructure","deployments"],"boost":2},{"location":"concepts/deployments-ux/","title":"Managing Deployments","text":"

You can manage your deployments with a prefect.yaml file that describes how to prepare one or more flow deployments. At a high level, you simply add this file to your working directory.

You can initialize your deployment configuration, which creates the prefect.yaml file, by running the CLI command prefect init in any directory or repository that stores your flow code.

Deployment configuration recipes

Prefect ships with many off-the-shelf \"recipes\" that allow you to get started with more structure within your prefect.yaml file; run prefect init to be prompted with available recipes in your installation. You can provide a recipe name in your initialization command with the --recipe flag, otherwise Prefect will attempt to guess an appropriate recipe based on the structure of your working directory (for example if you initialize within a git repository, Prefect will use the git recipe).

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#the-prefect-yaml-file","title":"The Prefect YAML file","text":"

The prefect.yaml file contains deployment configuration for deployments created from this file, default instructions for how to build and push any necessary code artifacts (such as Docker images), and default instructions for pulling a deployment in remote execution environments (e.g., cloning a GitHub repository).

Any deployment configuration can be overridden via options available on the prefect deploy CLI command when creating a deployment.

The base structure for prefect.yaml is as follows:

# generic metadata\nprefect-version: null\nname: null\n\n# preparation steps\nbuild: null\npush: null\n\n# runtime steps\npull: null\n\n# deployment configurations\ndeployments:\n- # base metadata\n  name: null\n  version: null\n  tags: []\n  description: null\n  schedule: null\n\n  # flow-specific fields\n  flow_name: null\n  entrypoint: null\n  parameters: {}\n\n  # infra-specific fields\n  work_pool:\n    name: null\n    work_queue_name: null\n    job_variables: {}\n

The metadata fields are always pre-populated for you and are currently for bookkeeping purposes only. The other sections are pre-populated based on recipe; if no recipe is provided, Prefect will attempt to guess an appropriate one based on local configuration.

You can create deployments via the CLI command prefect deploy without ever needing to alter the deployments section of your prefect.yaml file; the prefect deploy command will help create deployments via interactive prompts. However, the deployments section is useful for version-controlling your deployments and managing multiple deployments.

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#deployment-actions","title":"Deployment Actions","text":"

Deployment actions defined in your prefect.yaml file control the lifecycle of the creation and execution of your deployments. The three actions available are build, push, and pull. pull is the only required deployment action \u2014 it is used to define how Prefect will pull your deployment in remote execution environments.

Each action is defined as a list of steps that execute in sequence.

Each step has the following format:

section:\n- prefect_package.path.to.importable.step:\n    id: \"step-id\" # optional\n    requires: \"pip-installable-package-spec\" # optional\n    kwarg1: value\n    kwarg2: more-values\n

Every step can optionally provide a requires field that Prefect will use to auto-install the specified package in the event that the step cannot be found in the current environment. Each step can also specify an id, which is used when referencing the step's outputs in later steps. The additional fields map directly onto Python keyword arguments to the step function. Within a given section, steps always run in the order that they are provided within the prefect.yaml file.

Deployment Instruction Overrides

build, push, and pull sections can all be overridden on a per-deployment basis by defining build, push, and pull fields within a deployment definition in the prefect.yaml file.

The prefect deploy command will use any build, push, or pull instructions provided in a deployment's definition in the prefect.yaml file.

This capability is useful with multiple deployments that require different deployment instructions.
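
For example, a single deployment declaration can carry its own pull section that takes precedence over the top-level pull action (a sketch; the deployment name and repository URL are placeholders):

deployments:\n- name: custom-pull-deployment\n  entrypoint: flows/hello.py:my_flow\n  work_pool:\n    name: my-process-work-pool\n  # This pull section applies only to this deployment and overrides\n  # the top-level pull action\n  pull:\n  - prefect.deployments.steps.git_clone:\n      repository: https://github.com/org/repo.git\n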

For more information on the mechanics of steps, see below.

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#the-build-action","title":"The Build Action","text":"

The build section of prefect.yaml is where any necessary side effects for running your deployments are built - the most common type of side effect produced here is a Docker image. If you initialize with the docker recipe, you will be prompted to provide required information, such as image name and tag:

$ prefect init --recipe docker\n>> image_name: < insert image name here >\n>> tag: < insert image tag here >\n

Use --field to avoid the interactive experience

We recommend that you only initialize a recipe when you are first creating your deployment structure, and afterwards store your configuration files within version control; however, sometimes you may need to initialize programmatically and avoid the interactive prompts. To do so, provide all required fields for your recipe using the --field flag:

$ prefect init --recipe docker \\\n--field image_name=my-repo/my-image \\\n--field tag=my-tag\n

build:\n- prefect_docker.deployments.steps.build_docker_image:\n    requires: prefect-docker>=0.3.0\n    image_name: my-repo/my-image\n    tag: my-tag\n    dockerfile: auto\n    push: true\n

Once you've confirmed that these fields are set to their desired values, this step will automatically build a Docker image with the provided name and tag and push it to the repository referenced by the image name. As the documentation notes, this step produces a few fields that can optionally be used in future steps or within prefect.yaml as template values. It is best practice to use {{ image }} within prefect.yaml (specifically the work pool's job variables section) so that you don't risk having your build step and deployment specification get out of sync with hardcoded values. For a worked example, check out the deployments tutorial.

Note

Note that in the build step example above, we relied on the prefect-docker package; in cases that deal with external services, additional packages are often required and will be auto-installed for you.

Pass output to downstream steps

Each deployment action can be composed of multiple steps. For example, if you wanted to build a Docker image tagged with the current commit hash, you could use the run_shell_script step and feed the output into the build_docker_image step:

build:\n- prefect.deployments.steps.run_shell_script:\n    id: get-commit-hash\n    script: git rev-parse --short HEAD\n    stream_output: false\n- prefect_docker.deployments.steps.build_docker_image:\n    requires: prefect-docker\n    image_name: my-image\n    tag: \"{{ get-commit-hash.stdout }}\"\n    dockerfile: auto\n

Note that the id field is used in the run_shell_script step so that its output can be referenced in the next step.

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#the-push-action","title":"The Push Action","text":"

The push section is most critical for situations in which code is not stored on persistent filesystems or in version control. In this scenario, code is often pushed and pulled from a Cloud storage bucket of some kind (e.g., S3, GCS, Azure Blobs, etc.). The push section allows users to specify and customize the logic for pushing this code repository to arbitrary remote locations.

For example, a user wishing to store their code in an S3 bucket and rely on default worker settings for its runtime environment could use the s3 recipe:

$ prefect init --recipe s3\n>> bucket: < insert bucket name here >\n

Inspecting our newly created prefect.yaml file we find that the push and pull sections have been templated out for us as follows:

push:\n- prefect_aws.deployments.steps.push_to_s3:\n    id: push-code\n    requires: prefect-aws>=0.3.0\n    bucket: my-bucket\n    folder: project-name\n    credentials: null\n\npull:\n- prefect_aws.deployments.steps.pull_from_s3:\n    requires: prefect-aws>=0.3.0\n    bucket: my-bucket\n    folder: \"{{ push-code.folder }}\"\n    credentials: null\n

The bucket has been populated with our provided value (which also could have been provided with the --field flag); note that the folder property of the pull step is a template: the push_to_s3 step outputs both a bucket value and a folder value that can be used to template downstream steps. Doing this helps you keep your steps consistent across edits.

As discussed above, if you are using blocks, the credentials section can be templated with a block reference for secure and dynamic credentials access:

push:\n- prefect_aws.deployments.steps.push_to_s3:\n    requires: prefect-aws>=0.3.0\n    bucket: my-bucket\n    folder: project-name\n    credentials: \"{{ prefect.blocks.aws-credentials.dev-credentials }}\"\n

Anytime you run prefect deploy, this push section will be executed upon successful completion of your build section. For more information on the mechanics of steps, see below.

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#the-pull-action","title":"The Pull Action","text":"

The pull section is the most important section within the prefect.yaml file as it contains instructions for preparing your flows for a deployment run. These instructions will be executed each time a deployment created within this folder is run via a worker.

There are three main types of steps that typically show up in a pull section:

Use block and variable references

All block and variable references within your pull step will remain unresolved until runtime and will be pulled each time your deployment is run. This allows you to avoid storing sensitive information insecurely; it also allows you to manage certain types of configuration from the API and UI without having to rebuild your deployment every time.

Below is an example of how to use an existing GitHubCredentials block to clone a private GitHub repository:

pull:\n- prefect.deployments.steps.git_clone:\n    repository: https://github.com/org/repo.git\n    credentials: \"{{ prefect.blocks.github-credentials.my-credentials }}\"\n

Alternatively, you can specify a BitBucketCredentials or GitLabCredentials block to clone from Bitbucket or GitLab. In lieu of a credentials block, you can also provide a GitHub, GitLab, or Bitbucket token directly to the access_token field. You can use a Secret block to do this securely:

pull:\n- prefect.deployments.steps.git_clone:\n    repository: https://bitbucket.org/org/repo.git\n    access_token: \"{{ prefect.blocks.secret.bitbucket-token }}\"\n
","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#utility-steps","title":"Utility Steps","text":"

Utility steps can be used within a build, push, or pull action to assist in managing the deployment lifecycle:

Here is an example of retrieving the short Git commit hash of the current repository to use as a Docker image tag:

build:\n- prefect.deployments.steps.run_shell_script:\n    id: get-commit-hash\n    script: git rev-parse --short HEAD\n    stream_output: false\n- prefect_docker.deployments.steps.build_docker_image:\n    requires: prefect-docker>=0.3.0\n    image_name: my-image\n    tag: \"{{ get-commit-hash.stdout }}\"\n    dockerfile: auto\n

Below is an example of installing dependencies from a requirements.txt file after cloning:

pull:\n- prefect.deployments.steps.git_clone:\n    id: clone-step\n    repository: https://github.com/org/repo.git\n- prefect.deployments.steps.pip_install_requirements:\n    directory: \"{{ clone-step.directory }}\"\n    requirements_file: requirements.txt\n    stream_output: False\n

Below is an example that retrieves an access token from a 3rd party Key Vault and uses it in a private clone step:

pull:\n- prefect.deployments.steps.run_shell_script:\n    id: get-access-token\n    script: az keyvault secret show --name <secret name> --vault-name <secret vault> --query \"value\" --output tsv\n    stream_output: false\n- prefect.deployments.steps.git_clone:\n    repository: https://bitbucket.org/samples/deployments.git\n    branch: master\n    access_token: \"{{ get-access-token.stdout }}\"\n

You can also run custom steps by packaging them. In the example below, retrieve_secrets is a custom Python module that has been packaged into the default working directory of a Docker image (which is /opt/prefect by default). main is the entry point function, which returns an access token (e.g. return {\"access_token\": access_token}) like the preceding example, but uses the Azure Python SDK for retrieval.

- retrieve_secrets.main:\n    id: get-access-token\n- prefect.deployments.steps.git_clone:\n    repository: https://bitbucket.org/samples/deployments.git\n    branch: master\n    access_token: '{{ get-access-token.access_token }}'\n
","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#templating-options","title":"Templating Options","text":"

Values that you place within your prefect.yaml file can reference dynamic values in several different ways: the outputs of earlier steps (for example {{ build-image.tag }}), block references ({{ prefect.blocks.block-type-slug.block-name }}), variable references ({{ prefect.variables.variable-name }}), and environment variables ({{ $MY_ENV_VAR }}).

As an example, consider the following prefect.yaml file:

build:\n- prefect_docker.deployments.steps.build_docker_image:\n    id: build-image\n    requires: prefect-docker>=0.3.0\n    image_name: my-repo/my-image\n    tag: my-tag\n    dockerfile: auto\n    push: true\n\ndeployments:\n- # base metadata\n  name: null\n  version: \"{{ build-image.tag }}\"\n  tags:\n  - \"{{ $my_deployment_tag }}\"\n  - \"{{ prefect.variables.some_common_tag }}\"\n  description: null\n  schedule: null\n\n  # flow-specific fields\n  flow_name: null\n  entrypoint: null\n  parameters: {}\n\n  # infra-specific fields\n  work_pool:\n    name: \"my-k8s-work-pool\"\n    work_queue_name: null\n    job_variables:\n      image: \"{{ build-image.image }}\"\n      cluster_config: \"{{ prefect.blocks.kubernetes-cluster-config.my-favorite-config }}\"\n

So long as our build step produces the fields referenced above (such as image and tag), these values will be dynamically populated every time we deploy a new version of our deployment.

Docker step

The most commonly used build step is prefect_docker.deployments.steps.build_docker_image which produces both the image_name and tag fields.

For an example, check out the deployments tutorial.

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#deployment-configurations","title":"Deployment Configurations","text":"

Each prefect.yaml file can have multiple deployment configurations that control the behavior of created deployments. These deployments can be managed independently of one another, allowing you to deploy the same flow with different configurations in the same codebase.

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#working-with-multiple-deployments","title":"Working With Multiple Deployments","text":"

Prefect supports multiple deployment declarations within the prefect.yaml file. This method of declaring multiple deployments allows the configuration for all deployments to be version controlled and deployed with a single command.

New deployment declarations can be added to the prefect.yaml file by adding a new entry to the deployments list. Each deployment declaration must have a unique name field which is used to select deployment declarations when using the prefect deploy command.

For example, consider the following prefect.yaml file:

build: ...\npush: ...\npull: ...\n\ndeployments:\n- name: deployment-1\n  entrypoint: flows/hello.py:my_flow\n  parameters:\n    number: 42\n    message: Don't panic!\n  work_pool:\n    name: my-process-work-pool\n    work_queue_name: primary-queue\n\n- name: deployment-2\n  entrypoint: flows/goodbye.py:my_other_flow\n  work_pool:\n    name: my-process-work-pool\n    work_queue_name: secondary-queue\n\n- name: deployment-3\n  entrypoint: flows/hello.py:yet_another_flow\n  work_pool:\n    name: my-docker-work-pool\n    work_queue_name: tertiary-queue\n

This file has three deployment declarations, each referencing a different flow. Each deployment declaration has a unique name field and can be deployed individually by using the --name flag when deploying.

For example, to deploy deployment-1 we would run:

$ prefect deploy --name deployment-1\n

To deploy multiple deployments you can provide multiple --name flags:

$ prefect deploy --name deployment-1 --name deployment-2\n

To deploy multiple deployments with the same name, you can prefix the deployment name with its flow name:

$ prefect deploy --name my_flow/deployment-1 --name my_other_flow/deployment-1\n

To deploy all deployments you can use the --all flag:

$ prefect deploy --all\n

CLI Options When Deploying Multiple Deployments

When deploying more than one deployment with a single prefect deploy command, any additional attributes provided via the CLI will be ignored.

To provide overrides to a deployment via the CLI, you must deploy that deployment individually.

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#reusing-configuration-across-deployments","title":"Reusing Configuration Across Deployments","text":"

Because a prefect.yaml file is a standard YAML file, you can use YAML aliases to reuse configuration across deployments.

This functionality is useful when multiple deployments need to share the work pool configuration, deployment actions, or other configurations.

You can declare a YAML alias by using the &{alias_name} syntax and insert that alias elsewhere in the file with the *{alias_name} syntax. When aliasing YAML maps, you can also override specific fields of the aliased map by using the <<: *{alias_name} syntax and adding additional fields below.

We recommend adding a definitions section to your prefect.yaml file at the same level as the deployments section to store your aliases.

For example, consider the following prefect.yaml file:

build: ...\npush: ...\npull: ...\n\ndefinitions:\n  work_pools:\n    my_docker_work_pool: &my_docker_work_pool\n      name: my-docker-work-pool\n      work_queue_name: default\n      job_variables:\n        image: \"{{ build-image.image }}\"\n  schedules:\n    every_ten_minutes: &every_10_minutes\n      interval: 600\n  actions:\n    docker_build: &docker_build\n    - prefect_docker.deployments.steps.build_docker_image: &docker_build_config\n        id: build-image\n        requires: prefect-docker>=0.3.0\n        image_name: my-example-image\n        tag: dev\n        dockerfile: auto\n        push: true\n\ndeployments:\n- name: deployment-1\n  entrypoint: flows/hello.py:my_flow\n  schedule: *every_10_minutes\n  parameters:\n    number: 42\n    message: Don't panic!\n  work_pool: *my_docker_work_pool\n  build: *docker_build # Uses the full docker_build action with no overrides\n\n- name: deployment-2\n  entrypoint: flows/goodbye.py:my_other_flow\n  work_pool: *my_docker_work_pool\n  build:\n  - prefect_docker.deployments.steps.build_docker_image:\n      <<: *docker_build_config # Uses the docker_build_config alias and overrides the dockerfile field\n      dockerfile: Dockerfile.custom\n\n- name: deployment-3\n  entrypoint: flows/hello.py:yet_another_flow\n  schedule: *every_10_minutes\n  work_pool:\n    name: my-process-work-pool\n    work_queue_name: primary-queue\n

In the above example, we are using YAML aliases to reuse work pool, schedule, and build configuration across multiple deployments.

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#deployment-declaration-reference","title":"Deployment Declaration Reference","text":"","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#deployment-fields","title":"Deployment Fields","text":"

Below are fields that can be added to each deployment declaration.

Property Description name The name to give to the created deployment. Used with the prefect deploy command to create or update specific deployments. version An optional version for the deployment. tags A list of strings to assign to the deployment as tags. description An optional description for the deployment. schedule An optional schedule to assign to the deployment. Fields for this section are documented in the Schedule Fields section. triggers An optional array of triggers to assign to the deployment flow_name Deprecated: The name of a flow that has been registered in the .prefect directory. Either flow_name or entrypoint is required. entrypoint The path to the .py file containing the flow you want to deploy (relative to the root directory of your development folder) combined with the name of the flow function. Should be in the format path/to/file.py:flow_function_name. Either flow_name or entrypoint is required. parameters Optional default values to provide for the parameters of the deployed flow. Should be an object with key/value pairs. work_pool Information on where to schedule flow runs for the deployment. Fields for this section are documented in the Work Pool Fields section.","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#schedule-fields","title":"Schedule Fields","text":"

Below are fields that can be added to a deployment declaration's schedule section.

Property Description interval Number of seconds indicating the time between flow runs. Cannot be used in conjunction with cron or rrule. anchor_date Datetime string indicating the starting or \"anchor\" date to begin the schedule. If no anchor_date is supplied, the current UTC time is used. Can only be used with interval. timezone String name of a time zone, used to enforce localization behaviors like DST boundaries. See the IANA Time Zone Database for valid time zones. cron A valid cron string. Cannot be used in conjunction with interval or rrule. day_or Boolean indicating how croniter handles day and day_of_week entries. Must be used with cron. Defaults to True. rrule String representation of an RRule schedule. See the rrulestr examples for syntax. Cannot be used in conjunction with interval or cron.
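
For example, a deployment declaration might pair a cron string with a timezone like this (a sketch; the flow entrypoint and cron expression are placeholders):

deployments:\n- name: scheduled-deployment\n  entrypoint: flows/hello.py:my_flow\n  schedule:\n    # Run at 9 AM on weekdays, localized to US Eastern time\n    cron: \"0 9 * * 1-5\"\n    timezone: \"America/New_York\"\n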

For more information about schedules, see the Schedules concept doc.

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#work-pool-fields","title":"Work Pool Fields","text":"

Below are fields that can be added to a deployment declaration's work_pool section.

Property Description name The name of the work pool to schedule flow runs in for the deployment. work_queue_name The name of the work queue within the specified work pool to schedule flow runs in for the deployment. If not provided, the default queue for the specified work pool will be used. job_variables Values used to override the default values in the specified work pool's base job template. Maps directly to a created deployment's infra_overrides attribute.","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments-ux/#deployment-mechanics","title":"Deployment mechanics","text":"

Anytime you run prefect deploy, the following actions are taken in order:

Deployment Instruction Overrides

The build, push, and pull sections in deployment definitions take precedence over the corresponding sections above them in prefect.yaml.

Anytime a step is run, the following actions are taken in order:

","tags":["work pools","workers","orchestration","flow runs","deployments","projects","storage","infrastructure","blocks","tutorial"],"boost":2},{"location":"concepts/deployments/","title":"Deployments","text":"

A deployment is a server-side concept that encapsulates a flow, allowing it to be scheduled and triggered via API. The deployment stores metadata about where your flow's code is stored and how your flow should be run.

Each deployment references a single \"entrypoint\" flow (though that flow may, in turn, call any number of tasks and subflows). Any single flow, however, may be referenced by any number of deployments.

At a high level, you can think of a deployment as configuration for managing flows, whether you run them via the CLI, the UI, or the API.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#deployments-overview","title":"Deployments overview","text":"

All Prefect flow runs are tracked by the API. The API does not require prior registration of flows. With Prefect, you can call a flow locally or on a remote environment and it will be tracked.

Creating a deployment for a Prefect workflow means packaging workflow code, settings, and infrastructure configuration so that the workflow can be managed via the Prefect API and run remotely by a Prefect agent.

The following diagram provides a high-level overview of the conceptual elements involved in defining a deployment and executing a flow run based on that deployment.

%%{\n  init: {\n    'theme': 'base',\n    'themeVariables': {\n      'fontSize': '19px'\n    }\n  }\n}%%\n\nflowchart LR\n    F(\"<div style='margin: 5px 10px 5px 5px;'>Flow Code</div>\"):::yellow -.-> A(\"<div style='margin: 5px 10px 5px 5px;'>Deployment Definition</div>\"):::gold\n    subgraph Server [\"<div style='width: 150px; text-align: center; margin-top: 5px;'>Prefect API</div>\"]\n        D(\"<div style='margin: 5px 10px 5px 5px;'>Deployment</div>\"):::green\n    end\n    subgraph Remote Storage [\"<div style='width: 160px; text-align: center; margin-top: 5px;'>Remote Storage</div>\"]\n        B(\"<div style='margin: 5px 6px 5px 5px;'>Flow</div>\"):::yellow\n    end\n    subgraph Infrastructure [\"<div style='width: 150px; text-align: center; margin-top: 5px;'>Infrastructure</div>\"]\n        G(\"<div style='margin: 5px 10px 5px 5px;'>Flow Run</div>\"):::blue\n    end\n\n    A --> D\n    D --> E(\"<div style='margin: 5px 10px 5px 5px;'>Agent</div>\"):::red\n    B -.-> E\n    A -.-> B\n    E -.-> G\n\n    classDef gold fill:goldenrod,stroke:goldenrod,stroke-width:4px\n    classDef yellow fill:gold,stroke:gold,stroke-width:4px\n    classDef gray fill:lightgray,stroke:lightgray,stroke-width:4px\n    classDef blue fill:blue,stroke:blue,stroke-width:4px,color:white\n    classDef green fill:green,stroke:green,stroke-width:4px,color:white\n    classDef red fill:red,stroke:red,stroke-width:4px,color:white\n    classDef dkgray fill:darkgray,stroke:darkgray,stroke-width:4px,color:white

Your flow code and the Prefect hybrid model

In the diagram above, the dotted line indicates the path of your flow code in the lifecycle of a Prefect deployment, from creation to executing a flow run. Notice that your flow code stays within your storage and execution infrastructure and never lives on the Prefect server or database.

This is the heart of the Prefect hybrid model: there's always a boundary between your code, your private infrastructure, and the Prefect backend, such as Prefect Cloud. Even if you're using a self-hosted Prefect server, you only register the deployment metadata on the backend allowing for a clean separation of concerns.

When creating a deployment, a user must answer two basic questions:

A deployment additionally enables you to:

With remote storage blocks, you can package not only your flow code script but also any supporting files, including your custom modules, SQL scripts and any configuration files needed in your project.

To define how your flow execution environment should be configured, you may either reference pre-configured infrastructure blocks or let Prefect create those automatically for you as anonymous blocks (this happens when you specify the infrastructure type using --infra flag during the build process).

Work queue affinity improved starting from Prefect 2.0.5

Until Prefect 2.0.4, tags were used to associate flow runs with work queues. Starting in Prefect 2.0.5, tag-based work queues are deprecated. Instead, work queue names are used to explicitly direct flow runs from deployments into queues.

Note that backward compatibility is maintained and work queues that use tag-based matching can still be created and will continue to work. However, those work queues are now considered legacy and we encourage you to use the new behavior by specifying work queues explicitly on agents and deployments.

See Agents & Work Pools for details.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#deployments-and-flows","title":"Deployments and flows","text":"

Each deployment is associated with a single flow, but any given flow can be referenced by multiple deployments.

Deployments are uniquely identified by the combination of: flow_name/deployment_name.

graph LR\n    F(\"my_flow\"):::yellow -.-> A(\"Deployment 'daily'\"):::tan --> W(\"my_flow/daily\"):::fgreen\n    F -.-> B(\"Deployment 'weekly'\"):::gold  --> X(\"my_flow/weekly\"):::green\n    F -.-> C(\"Deployment 'ad-hoc'\"):::dgold --> Y(\"my_flow/ad-hoc\"):::dgreen\n    F -.-> D(\"Deployment 'trigger-based'\"):::dgold --> Z(\"my_flow/trigger-based\"):::dgreen\n\n    classDef gold fill:goldenrod,stroke:goldenrod,stroke-width:4px,color:white\n    classDef yellow fill:gold,stroke:gold,stroke-width:4px\n    classDef dgold fill:darkgoldenrod,stroke:darkgoldenrod,stroke-width:4px,color:white\n    classDef tan fill:tan,stroke:tan,stroke-width:4px,color:white\n    classDef fgreen fill:forestgreen,stroke:forestgreen,stroke-width:4px,color:white\n    classDef green fill:green,stroke:green,stroke-width:4px,color:white\n    classDef dgreen fill:darkgreen,stroke:darkgreen,stroke-width:4px,color:white

This enables you to run a single flow with different parameters, based on multiple schedules and triggers, and in different environments. This also enables you to run different versions of the same flow for testing and production purposes.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#deployment-definition","title":"Deployment definition","text":"

A deployment definition captures the settings for creating a deployment object on the Prefect API. You can create the deployment definition by:

The minimum required information to create a deployment includes:

You may provide additional settings for the deployment. Any settings you do not explicitly specify are inferred from defaults.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#create-a-deployment-on-the-cli","title":"Create a deployment on the CLI","text":"

To create a deployment on the CLI, there are two steps:

  1. Build the deployment definition file deployment.yaml. This step includes uploading your flow to its configured remote storage location, if one is specified.
  2. Create the deployment on the API.
","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#build-the-deployment","title":"Build the deployment","text":"

To build the deployment definition file deployment.yaml, run the prefect deployment build Prefect CLI command from the folder containing your flow script and any dependencies of the script.

$ prefect deployment build [OPTIONS] PATH\n

The path to the flow is specified in the format path-to-script:flow-function-name, that is, the path and filename of the flow script file, a colon, then the name of the entrypoint flow function.

For example:

$ prefect deployment build -n marvin -p default-agent-pool -q test flows/marvin.py:say_hi\n

When you run this command, Prefect:

Uploading files may require storage filesystem libraries

Note that the appropriate filesystem library supporting the storage location must be installed prior to building a deployment with a storage block. For example, the AWS S3 Storage block requires the s3fs library.

Ignore files or directories from a deployment

By default, Prefect uploads all files in the current folder to the configured storage location (local by default) when you build a deployment.

If you want to omit certain files or directories from your deployments, add a .prefectignore file to the root directory of your project.

Similar to other .ignore files, the syntax supports pattern matching, so an entry of *.pyc will ensure all .pyc files are ignored by the deployment call when uploading to remote storage.
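
A minimal .prefectignore might look like the following (the entries are illustrative; use whatever patterns fit your project):

# .prefectignore\n*.pyc\n__pycache__/\n.env\ndata/\n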

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#deployment-build-options","title":"Deployment build options","text":"

You may specify additional options to further customize your deployment.

Options Description PATH Path, filename, and flow name of the flow definition. (Required) --apply, -a When provided, automatically registers the resulting deployment with the API. --cron TEXT A cron string that will be used to set a CronSchedule on the deployment. For example, --cron \"*/1 * * * *\" to create flow runs from that deployment every minute. --help Display help for available commands and options. --infra-block TEXT, -ib The infrastructure block to use, in block-type/block-name format. --infra, -i The infrastructure type to use. (Default is Process) --interval INTEGER An integer specifying an interval (in seconds) that will be used to set an IntervalSchedule on the deployment. For example, --interval 60 to create flow runs from that deployment every minute. --name TEXT, -n The name of the deployment. --output TEXT, -o Optional location for the YAML manifest generated as a result of the build step. You can version-control that file, but it's not required since the CLI can generate everything you need to define a deployment. --override TEXT One or more optional infrastructure overrides provided as a dot delimited path. For example, specify an environment variable: env.env_key=env_value. For Kubernetes, specify customizations: customizations='[{\"op\": \"add\",\"path\": \"/spec/template/spec/containers/0/resources/limits\", \"value\": {\"memory\": \"8Gi\",\"cpu\": \"4000m\"}}]' (note the string format). --param An optional parameter override, values are parsed as JSON strings. For example, --param question=ultimate --param answer=42. --params An optional parameter override in a JSON string format. For example, --params=\\'{\"question\": \"ultimate\", \"answer\": 42}\\'. --path An optional path to specify a subdirectory of remote storage to upload to, or to point to a subdirectory of a locally stored flow. --pool TEXT, -p The work pool that will handle this deployment's runs. \u2502 --rrule TEXT An RRule that will be used to set an RRuleSchedule on the deployment. For example, --rrule 'FREQ=HOURLY;BYDAY=MO,TU,WE,TH,FR;BYHOUR=9,10,11,12,13,14,15,16,17' to create flow runs from that deployment every hour but only during business hours. --skip-upload When provided, skips uploading this deployment's files to remote storage. --storage-block TEXT, -sb The storage block to use, in block-type/block-name or block-type/block-name/path format. Note that the appropriate library supporting the storage filesystem must be installed. --tag TEXT, -t One or more optional tags to apply to the deployment. --version TEXT, -v An optional version for the deployment. This could be a git commit hash if you use this command from a CI/CD pipeline. --work-queue TEXT, -q The work queue that will handle this deployment's runs. It will be created if it doesn't already exist. Defaults to None. Note that if a work queue is not set, work will not be scheduled.","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#block-identifiers","title":"Block identifiers","text":"

When specifying a storage block with the -sb or --storage-block flag, you may specify the block by passing its slug. The storage block slug is formatted as block-type/block-name.

For example, s3/example-block is the slug for an S3 block named example-block.

In addition, when passing the storage block slug, you may pass just the block slug or the block slug and a path.
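
For example, both of the following commands reference an S3 block named example-block, with the second also pointing at a flows subdirectory within that storage location (a sketch reusing the marvin example from above):

$ prefect deployment build -n marvin -sb s3/example-block flows/marvin.py:say_hi\n$ prefect deployment build -n marvin -sb s3/example-block/flows flows/marvin.py:say_hi\n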

When specifying an infrastructure block with the -ib or --infra-block flag, you specify the block by passing its slug. The infrastructure block slug is formatted as block-type/block-name.

Block name Block class name Block type for a slug Azure Azure azure Docker Container DockerContainer docker-container GitHub GitHub github GCS GCS gcs Kubernetes Job KubernetesJob kubernetes-job Process Process process Remote File System RemoteFileSystem remote-file-system S3 S3 s3 SMB SMB smb GitLab Repository GitLabRepository gitlab-repository

Note that the appropriate library supporting the storage filesystem must be installed prior to building a deployment with a storage block. For example, the AWS S3 Storage block requires the s3fs library. See Storage for more information.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#deploymentyaml","title":"deployment.yaml","text":"

A deployment's YAML file configures additional settings needed to create a deployment on the server.

A single flow may have multiple deployments created for it, with different schedules, tags, and so on. This means a single flow definition may have multiple deployment YAML files referencing it, each specifying different settings. The only requirement is that each deployment must have a unique name.

The default {flow-name}-deployment.yaml filename may be edited as needed with the --output flag to prefect deployment build.

###\n### A complete description of a Prefect Deployment for flow 'Cat Facts'\n###\nname: catfact\ndescription: null\nversion: c0fc95308d8137c50d2da51af138aa23\n# The work queue that will handle this deployment's runs\nwork_queue_name: test\nwork_pool_name: null\ntags: []\nparameters: {}\nschedule: null\ninfra_overrides: {}\ninfrastructure:\n  type: process\n  env: {}\n  labels: {}\n  name: null\n  command:\n  - python\n  - -m\n  - prefect.engine\n  stream_output: true\n###\n### DO NOT EDIT BELOW THIS LINE\n###\nflow_name: Cat Facts\nmanifest_path: null\nstorage: null\npath: /Users/terry/test/testflows/catfact\nentrypoint: catfact.py:catfacts_flow\nparameter_openapi_schema:\n  title: Parameters\n  type: object\n  properties:\n    url:\n      title: url\n  required:\n  - url\n  definitions: null\n

Editing deployment.yaml

Note the big DO NOT EDIT comment in your deployment's YAML: In practice, anything above this block can be freely edited before running prefect deployment apply to create the deployment on the API.

We recommend editing most of these fields from the CLI or Prefect UI for convenience.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#parameters-in-deployments","title":"Parameters in deployments","text":"

You may provide default parameter values in the deployment.yaml configuration, and these parameter values will be used for flow runs based on the deployment.

To configure default parameter values, add them to the parameters: {} line of deployment.yaml as JSON key-value pairs. The parameter list configured in deployment.yaml must match the parameters expected by the entrypoint flow function.

parameters: {\"name\": \"Marvin\", \"num\": 42, \"url\": \"https://catfact.ninja/fact\"}\n

Passing **kwargs as flow parameters

You may pass **kwargs as a deployment parameter as a \"kwargs\":{} JSON object containing the key-value pairs of any passed keyword arguments.

parameters: {\"name\": \"Marvin\", \"kwargs\":{\"cattype\":\"tabby\",\"num\": 42}\n

You can edit default parameters for deployments in the Prefect UI, and you can override default parameter values when creating ad-hoc flow runs via the Prefect UI.

To edit parameters in the Prefect UI, go to the details page for a deployment, then select Edit from the commands menu. If you change parameter values, the new values are used for all future flow runs based on the deployment.

To create an ad-hoc flow run with different parameter values, go to the details page for a deployment, select Run, then select Custom. You will be able to provide custom values for any editable deployment fields. Under Parameters, select Custom. Provide the new values, then select Save. Select Run to begin the flow run with custom values.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#create-a-deployment","title":"Create a deployment","text":"

When you've configured deployment.yaml for a deployment, you can create the deployment on the API by running the prefect deployment apply Prefect CLI command.

$ prefect deployment apply catfacts_flow-deployment.yaml\n

For example:

$ prefect deployment apply ./catfacts_flow-deployment.yaml\nSuccessfully loaded 'catfact'\nDeployment '76a9f1ac-4d8c-4a92-8869-615bec502685' successfully created.\n

prefect deployment apply accepts an optional --upload flag that, when provided, uploads this deployment's files to remote storage.

Once the deployment has been created, you'll see it in the Prefect UI and can inspect it using the CLI.

$ prefect deployment ls\n                               Deployments\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name                           \u2503 ID                                   \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 Cat Facts/catfact              \u2502 76a9f1ac-4d8c-4a92-8869-615bec502685 \u2502\n\u2502 leonardo_dicapriflow/hello_leo \u2502 fb4681d7-aa5a-4617-bf6f-f67e6f964984 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
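
You can also inspect an individual deployment by its flow_name/deployment_name identifier; for example (output omitted here):

$ prefect deployment inspect 'Cat Facts/catfact'\n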

When you run a deployed flow with Prefect, the following happens:

Agents and work pools enable the Prefect orchestration engine and API to run deployments in your local execution environments. To execute deployed flow runs you need to configure at least one agent.

Scheduled flow runs

Scheduled flow runs will not be created unless the scheduler is running with either Prefect Cloud or a local Prefect server started with prefect server start.

Scheduled flow runs will not run unless an appropriate agent and work pool are configured.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#create-a-deployment-from-a-python-object","title":"Create a deployment from a Python object","text":"

You can also create deployments from Python scripts by using the prefect.deployments.Deployment class.

Create a new deployment using configuration defaults for an imported flow:

from my_project.flows import my_flow\nfrom prefect.deployments import Deployment\n\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"example-deployment\", \n    version=1, \n    work_queue_name=\"demo\",\n    work_pool_name=\"default-agent-pool\",\n)\ndeployment.apply()\n

Create a new deployment with a pre-defined storage block and an infrastructure override:

from my_project.flows import my_flow\nfrom prefect.deployments import Deployment\nfrom prefect.filesystems import S3\n\nstorage = S3.load(\"dev-bucket\") # load a pre-defined block\n\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"s3-example\",\n    version=2,\n    work_queue_name=\"aws\",\n    work_pool_name=\"default-agent-pool\",\n    storage=storage,\n    infra_overrides={\n        \"env\": {\n            \"ENV_VAR\": \"value\"\n        }\n    },\n)\n\ndeployment.apply()\n

If you have settings that you want to share from an existing deployment you can load those settings:

deployment = Deployment(\n    name=\"a-name-you-used\", \n    flow_name=\"name-of-flow\"\n)\ndeployment.load() # loads server-side settings\n

Once the existing deployment settings are loaded, you may update them as needed by changing deployment properties.
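
For example, a minimal sketch (using hypothetical names) that loads an existing deployment, updates a property, and re-applies it:

from prefect.deployments import Deployment\n\n# hypothetical deployment and flow names\ndeployment = Deployment(\n    name=\"a-name-you-used\",\n    flow_name=\"name-of-flow\"\n)\ndeployment.load()  # loads server-side settings\n\n# update properties as needed, then re-apply to the API\ndeployment.tags = [\"dev\"]\ndeployment.apply()\n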

View all of the parameters for the Deployment object in the Python API documentation.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#deployment-api-representation","title":"Deployment API representation","text":"

When you create a deployment, it is constructed from deployment definition data you provide and additional properties set by client-side utilities.

Deployment properties include:

Property Description id An auto-generated UUID ID value identifying the deployment. created A datetime timestamp indicating when the deployment was created. updated A datetime timestamp indicating when the deployment was last changed. name The name of the deployment. version The version of the deployment. description A description of the deployment. flow_id The id of the flow associated with the deployment. schedule An optional schedule for the deployment. is_schedule_active Boolean indicating whether the deployment schedule is active. Default is True. infra_overrides One or more optional infrastructure overrides. parameters An optional dictionary of parameters for flow runs scheduled by the deployment. tags An optional list of tags for the deployment. work_queue_name The optional work queue that will handle the deployment's runs. parameter_openapi_schema JSON schema for flow parameters. path The path to the deployment.yaml file. entrypoint The path to a flow entry point. storage_document_id Storage block configured for the deployment. infrastructure_document_id Infrastructure block configured for the deployment.

You can inspect a deployment using the CLI with the prefect deployment inspect command, referencing the deployment with <flow_name>/<deployment_name>.

$ prefect deployment inspect 'Cat Facts/catfact'\n{\n    'id': '76a9f1ac-4d8c-4a92-8869-615bec502685',\n    'created': '2022-07-26T03:48:14.723328+00:00',\n    'updated': '2022-07-26T03:50:02.043238+00:00',\n    'name': 'catfact',\n    'version': '899b136ebc356d58562f48d8ddce7c19',\n    'description': None,\n    'flow_id': '2c7b36d1-0bdb-462e-bb97-f6eb9fef6fd5',\n    'schedule': None,\n    'is_schedule_active': True,\n    'infra_overrides': {},\n    'parameters': {},\n    'tags': [],\n    'work_queue_name': 'test',\n    'parameter_openapi_schema': {\n        'title': 'Parameters',\n        'type': 'object',\n        'properties': {'url': {'title': 'url'}},\n        'required': ['url']\n    },\n    'path': '/Users/terry/test/testflows/catfact',\n    'entrypoint': 'catfact.py:catfacts_flow',\n    'manifest_path': None,\n    'storage_document_id': None,\n    'infrastructure_document_id': 'f958db1c-b143-4709-846c-321125247e07',\n    'infrastructure': {\n        'type': 'process',\n        'env': {},\n        'labels': {},\n        'name': None,\n        'command': ['python', '-m', 'prefect.engine'],\n        'stream_output': True\n    }\n}\n
","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#create-a-flow-run-from-a-deployment","title":"Create a flow run from a deployment","text":"","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#create-a-flow-run-with-a-schedule","title":"Create a flow run with a schedule","text":"

If you specify a schedule for a deployment, the deployment will execute its flow automatically on that schedule as long as a Prefect server and agent are running. Prefect Cloud creates scheduled flow runs automatically, and they will run on schedule if an agent is configured to pick up flow runs for the deployment.
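
As an illustrative sketch (assuming an importable flow and a configured agent), a schedule can be attached when building a deployment from a Python object:

from my_project.flows import my_flow  # hypothetical flow import\nfrom prefect.deployments import Deployment\nfrom prefect.server.schemas.schedules import CronSchedule\n\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"scheduled-deployment\",\n    work_queue_name=\"demo\",\n    # run every day at 08:00 UTC\n    schedule=CronSchedule(cron=\"0 8 * * *\", timezone=\"UTC\"),\n)\ndeployment.apply()\n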

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#create-a-flow-run-with-an-event-trigger","title":"Create a flow run with an event trigger","text":"

Deployment triggers are only available in Prefect Cloud

Deployments can optionally take a trigger specification, which will configure an automation to run the deployment based on the presence or absence of events, and optionally pass event data into the deployment run as parameters via Jinja templating.

triggers:\n  - enabled: true\n    match:\n      prefect.resource.id: prefect.flow-run.*\n    expect:\n      - prefect.flow-run.Completed\n    match_related:\n      prefect.resource.name: prefect.flow.etl-flow\n      prefect.resource.role: flow\n    parameters:\n      param_1: \"{{ event }}\"\n

When applied, this deployment will start a flow run upon the completion of the upstream flow specified in the match_related key, with the flow run passed in as a parameter. Triggers can be configured to respond to the presence or absence of arbitrary internal or external events. The trigger system and API are detailed in Automations.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#create-a-flow-run-with-prefect-ui","title":"Create a flow run with Prefect UI","text":"

In the Prefect UI, you can click the Run button next to any deployment to execute an ad hoc flow run for that deployment.

The prefect deployment CLI command provides commands for managing and running deployments locally.

Command Description apply Create or update a deployment from a YAML file. build Generate a deployment YAML from /path/to/file.py:flow_function. delete Delete a deployment. inspect View details about a deployment. ls View all deployments or deployments for specific flows. pause-schedule Pause schedule of a given deployment. resume-schedule Resume schedule of a given deployment. run Create a flow run for the given flow and deployment. set-schedule Set schedule for a given deployment.","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#create-a-flow-run-in-a-python-script","title":"Create a flow run in a Python script","text":"

You can create a flow run from a deployment in a Python script with the run_deployment function.

from prefect.deployments import run_deployment\n\n\ndef main():\n    response = run_deployment(name=\"flow-name/deployment-name\")\n    print(response)\n\n\nif __name__ == \"__main__\":\n   main()\n
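
run_deployment also accepts parameter overrides, and a timeout of 0 returns the flow run metadata immediately instead of waiting for the run to finish. A minimal sketch with hypothetical names:

from prefect.deployments import run_deployment\n\n# hypothetical deployment name; override default parameters for this run\nflow_run = run_deployment(\n    name=\"flow-name/deployment-name\",\n    parameters={\"name\": \"Marvin\"},\n    timeout=0,  # return immediately rather than waiting for completion\n)\nprint(flow_run.id)\n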

PREFECT_API_URL setting for agents

You'll need to configure agents and work pools that can create flow runs for deployments in remote environments. PREFECT_API_URL must be set for the environment in which your agent is running.

If you want the agent to communicate with Prefect Cloud from a remote execution environment such as a VM or Docker container, you must configure PREFECT_API_URL in that environment.

","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/deployments/#examples","title":"Examples","text":"","tags":["work queues","agents","orchestration","flow runs","deployments","schedules","triggers","automations","deployments.yaml","infrastructure","storage"],"boost":2},{"location":"concepts/filesystems/","title":"Filesystems","text":"

A filesystem block is an object that allows you to read and write data from paths. Prefect provides multiple built-in file system types that cover a wide range of use cases.

Additional file system types are available in Prefect Collections.

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#local-filesystem","title":"Local filesystem","text":"

The LocalFileSystem block enables interaction with the files in your current development environment.

LocalFileSystem properties include:

Property Description basepath String path to the location of files on the local filesystem. Access to files outside of the base path will not be allowed.
from prefect.filesystems import LocalFileSystem\n\nfs = LocalFileSystem(basepath=\"/foo/bar\")\n

Limited access to local file system

Be aware that LocalFileSystem access is limited to the exact path provided. This file system may not be ideal for some use cases. The execution environment for your workflows may not have the same file system as the environment you are writing and deploying your code on.

Use of this file system can limit the availability of results after a flow run has completed or prevent the code for a flow from being retrieved successfully at the start of a run.

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#remote-file-system","title":"Remote file system","text":"

The RemoteFileSystem block enables interaction with arbitrary remote file systems. Under the hood, RemoteFileSystem uses fsspec and supports any file system that fsspec supports.

RemoteFileSystem properties include:

Property Description basepath String path to the location of files on the remote filesystem. Access to files outside of the base path will not be allowed. settings Dictionary containing extra parameters required to access the remote file system.

The file system is specified using a protocol:

For example, to use it with Amazon S3:

from prefect.filesystems import RemoteFileSystem\n\nblock = RemoteFileSystem(basepath=\"s3://my-bucket/folder/\")\nblock.save(\"dev\")\n

You may need to install additional libraries to use some remote storage types.

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#remotefilesystem-examples","title":"RemoteFileSystem examples","text":"

How can we use RemoteFileSystem to store our flow code? The following is a use case where we use MinIO as a storage backend:

from prefect.filesystems import RemoteFileSystem\n\nminio_block = RemoteFileSystem(\n    basepath=\"s3://my-bucket\",\n    settings={\n        \"key\": \"MINIO_ROOT_USER\",\n        \"secret\": \"MINIO_ROOT_PASSWORD\",\n        \"client_kwargs\": {\"endpoint_url\": \"http://localhost:9000\"},\n    },\n)\nminio_block.save(\"minio\")\n
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#azure","title":"Azure","text":"

The Azure file system block enables interaction with Azure Datalake and Azure Blob Storage. Under the hood, the Azure block uses adlfs.

Azure properties include:

Property Description bucket_path String path to the location of files on the remote filesystem. Access to files outside of the bucket path will not be allowed. azure_storage_connection_string Azure storage connection string. azure_storage_account_name Azure storage account name. azure_storage_account_key Azure storage account key. azure_storage_tenant_id Azure storage tenant ID. azure_storage_client_id Azure storage client ID. azure_storage_client_secret Azure storage client secret. azure_storage_anon Anonymous authentication, disable to use DefaultAzureCredential.

To create a block:

from prefect.filesystems import Azure\n\nblock = Azure(bucket_path=\"my-bucket/folder/\")\nblock.save(\"dev\")\n

To use it in a deployment:

prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb az/dev\n

You need to install adlfs to use it.

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#github","title":"GitHub","text":"

The GitHub filesystem block enables interaction with GitHub repositories. This block is read-only and works with both public and private repositories.

GitHub properties include:

Property Description reference An optional reference to pin to, such as a branch name or tag. repository The URL of a GitHub repository to read from, in either HTTPS or SSH format. access_token A GitHub Personal Access Token (PAT) with repo scope.

To create a block:

from prefect.filesystems import GitHub\n\nblock = GitHub(\n    repository=\"https://github.com/my-repo/\",\n    access_token=<my_access_token> # only required for private repos\n)\nblock.get_directory(\"folder-in-repo\") # specify a subfolder of repo\nblock.save(\"dev\")\n

To use it in a deployment:

prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb github/dev -a\n
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#gitlabrepository","title":"GitLabRepository","text":"

The GitLabRepository block is read-only and works with private GitLab repositories.

GitLabRepository properties include:

Property Description reference An optional reference to pin to, such as a branch name or tag. repository The URL of a GitLab repository to read from, in either HTTPS or SSH format. credentials A GitLabCredentials block with Personal Access Token (PAT) with read_repository scope.

To create a block:

from prefect_gitlab.credentials import GitLabCredentials\nfrom prefect_gitlab.repositories import GitLabRepository\n\ngitlab_creds = GitLabCredentials(token=\"YOUR_GITLAB_ACCESS_TOKEN\")\ngitlab_repo = GitLabRepository(\n    repository=\"https://gitlab.com/yourorg/yourrepo.git\",\n    reference=\"main\",\n    credentials=gitlab_creds,\n)\ngitlab_repo.save(\"dev\")\n

To use it in a deployment (and apply):

prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb gitlab-repository/dev -a\n

Note that to use this block, you need to install the prefect-gitlab collection.

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#gcs","title":"GCS","text":"

The GCS file system block enables interaction with Google Cloud Storage. Under the hood, GCS uses gcsfs.

GCS properties include:

Property Description bucket_path A GCS bucket path. service_account_info The contents of a service account keyfile as a JSON string. project The project the GCS bucket resides in. If not provided, the project will be inferred from the credentials or environment.

To create a block:

from prefect.filesystems import GCS\n\nblock = GCS(bucket_path=\"my-bucket/folder/\")\nblock.save(\"dev\")\n

To use it in a deployment:

prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb gcs/dev\n

You need to install gcsfs to use it.

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#s3","title":"S3","text":"

The S3 file system block enables interaction with Amazon S3. Under the hood, S3 uses s3fs.

S3 properties include:

Property Description bucket_path An S3 bucket path. aws_access_key_id AWS Access Key ID. aws_secret_access_key AWS Secret Access Key.

To create a block:

from prefect.filesystems import S3\n\nblock = S3(bucket_path=\"my-bucket/folder/\")\nblock.save(\"dev\")\n

To use it in a deployment:

prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb s3/dev\n

You need to install s3fs to use this block.

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#smb","title":"SMB","text":"

The SMB file system block enables interaction with SMB shared network storage. Under the hood, SMB uses smbprotocol. It can be used to connect to Windows-based SMB shares from Linux-based Prefect flows. The SMB file system block is able to copy files, but cannot create directories.

SMB properties include:

Property Description basepath String path to the location of files on the remote filesystem. Access to files outside of the base path will not be allowed. smb_host Hostname or IP address where SMB network share is located. smb_port Port for SMB network share (defaults to 445). smb_username SMB username with read/write permissions. smb_password SMB password.

To create a block:

from prefect.filesystems import SMB\n\nblock = SMB(basepath=\"my-share/folder/\")\nblock.save(\"dev\")\n

To use it in a deployment:

prefect deployment build path/to/flow.py:flow_name --name deployment_name --tag dev -sb smb/dev\n

You need to install smbprotocol to use it.

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#handling-credentials-for-cloud-object-storage-services","title":"Handling credentials for cloud object storage services","text":"

If you leverage S3, GCS, or Azure storage blocks, and you don't explicitly configure credentials on the respective storage block, those credentials will be inferred from the environment. Make sure to set those either explicitly on the block or as environment variables, configuration files, or IAM roles within both the build and runtime environment for your deployments.
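
For example, a minimal sketch (with placeholder credential values) that sets credentials explicitly on an S3 block instead of relying on the environment:

from prefect.filesystems import S3\n\n# placeholder credential values; prefer loading these from a secret store\nblock = S3(\n    bucket_path=\"my-bucket/folder/\",\n    aws_access_key_id=\"AKIA...\",\n    aws_secret_access_key=\"...\",\n)\nblock.save(\"prod\")\n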

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#filesystem-package-dependencies","title":"Filesystem package dependencies","text":"

A Prefect installation doesn't include filesystem-specific package dependencies such as s3fs, gcsfs, or adlfs. This also applies to the Prefect base Docker images.

You must ensure that filesystem-specific libraries are installed in an execution environment where they will be used by flow runs.

In Dockerized deployments using the Prefect base image, you can leverage the EXTRA_PIP_PACKAGES environment variable. Those dependencies will be installed at runtime within your Docker container or Kubernetes Job before the flow starts running.

In Dockerized deployments using a custom image, you must include the filesystem-specific package dependency in your image.

Here is an example from a deployment YAML file showing how to specify the installation of s3fs into your image:

infrastructure:\n  type: docker-container\n  env:\n    EXTRA_PIP_PACKAGES: s3fs  # could be gcsfs, adlfs, etc.\n

You may specify multiple dependencies by providing a comma-delimited list.

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#saving-and-loading-file-systems","title":"Saving and loading file systems","text":"

Configuration for a file system can be saved to the Prefect API. For example:

fs = RemoteFileSystem(basepath=\"s3://my-bucket/folder/\")\nfs.write_path(\"foo\", b\"hello\")\nfs.save(\"dev-s3\")\n

This file system can be retrieved for later use with load.

fs = RemoteFileSystem.load(\"dev-s3\")\nfs.read_path(\"foo\")  # b'hello'\n
","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/filesystems/#readable-and-writable-file-systems","title":"Readable and writable file systems","text":"

Prefect provides two abstract file system types, ReadableFileSystem and WriteableFileSystem.

A file system may implement both of these types.

","tags":["filesystems","storage","deployments","LocalFileSystem","RemoteFileSystem"],"boost":2},{"location":"concepts/flows/","title":"Flows","text":"

Flows are the most basic Prefect object. Flows are the only Prefect abstraction that can be interacted with, displayed, and run without needing to reference any other aspect of the Prefect engine. A flow is a container for workflow logic and allows users to interact with and reason about the state of their workflows. It is represented in Python as a single function.

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#flows-overview","title":"Flows overview","text":"

Flows are like functions. They can take inputs, perform work, and return an output. In fact, you can turn any function into a Prefect flow by adding the @flow decorator. When a function becomes a flow, its behavior changes, giving it the following advantages:

Flows also take advantage of automatic Prefect logging to capture details about flow runs such as run time, task tags, and final state.

All workflows are defined within the context of a flow. Flows can include calls to tasks as well as to other flows, which we call \"subflows\" in this context. Flows may be defined within modules and imported for use as subflows in your flow definitions.

Flows are required for deployments \u2014 every deployment points to a specific flow as the entrypoint for a flow run.

Tasks must be called from flows

All tasks must be called from within a flow. Tasks may not be called from other tasks.

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#flow-runs","title":"Flow runs","text":"

A flow run represents a single execution of the flow.

You can create a flow run by calling the flow. For example, by running a Python script or importing the flow into an interactive session.

You can also create a flow run by:

However you run the flow, the Prefect API monitors the flow run, capturing flow run state for observability.

When you run a flow that contains tasks or additional flows, Prefect will track the relationship of each child run to the parent flow run.

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#writing-flows","title":"Writing flows","text":"

For most use cases, we recommend using the @flow decorator to designate a flow:

from prefect import flow\n\n@flow\ndef my_flow():\n    return\n

Flows are uniquely identified by name. You can provide a name parameter value for the flow. If you don't provide a name, Prefect uses the flow function name.

@flow(name=\"My Flow\")\ndef my_flow():\n    return\n

Flows can call tasks to do specific work:

from prefect import flow, task\n\n@task\ndef print_hello(name):\n    print(f\"Hello {name}!\")\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    print_hello(name)\n

Flows and tasks

There's nothing stopping you from putting all of your code in a single flow function \u2014 Prefect will happily run it!

However, organizing your workflow code into smaller flow and task units lets you take advantage of Prefect features like retries, more granular visibility into runtime state, the ability to determine final state regardless of individual task state, and more.

In addition, if you put all of your workflow logic in a single flow function and any line of code fails, the entire flow will fail and must be retried from the beginning. This can be avoided by breaking up the code into multiple tasks.

Each Prefect workflow must contain one primary, entrypoint @flow function. From that flow function, you may call any number of other tasks, subflows, and even regular Python functions. You can pass parameters to your entrypoint flow function that will be used elsewhere in the workflow, and the final state of that entrypoint flow function determines the final state of your workflow.

Prefect encourages \"small tasks\" \u2014 each one should represent a single logical step of your workflow. This allows Prefect to better contain task failures.

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#flow-settings","title":"Flow settings","text":"

Flows allow a great deal of configuration by passing arguments to the decorator. Flows accept the following optional settings.

Argument Description description An optional string description for the flow. If not provided, the description will be pulled from the docstring for the decorated function. name An optional name for the flow. If not provided, the name will be inferred from the function. retries An optional number of times to retry on flow run failure. retry_delay_seconds An optional number of seconds to wait before retrying the flow after failure. This is only applicable if retries is nonzero. flow_run_name An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables; this name can also be provided as a function that returns a string. task_runner An optional task runner to use for task execution within the flow when you .submit() tasks. If not provided and you .submit() tasks, the ConcurrentTaskRunner will be used. timeout_seconds An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called. validate_parameters Boolean indicating whether parameters passed to flows are validated by Pydantic. Default is True. version An optional version string for the flow. If not provided, we will attempt to create a version string as a hash of the file containing the wrapped function. If the file cannot be located, the version will be null.

For example, you can provide a name value for the flow. Here we've also used the optional description argument and specified a non-default task runner.

from prefect import flow\nfrom prefect.task_runners import SequentialTaskRunner\n\n@flow(name=\"My Flow\",\n      description=\"My flow using SequentialTaskRunner\",\n      task_runner=SequentialTaskRunner())\ndef my_flow():\n    return\n

You can also provide the description as the docstring on the flow function.

@flow(name=\"My Flow\",\n      task_runner=SequentialTaskRunner())\ndef my_flow():\n\"\"\"My flow using SequentialTaskRunner\"\"\"\n    return\n

You can distinguish runs of this flow by providing a flow_run_name; this setting accepts a string that can optionally contain templated references to the parameters of your flow. The name will be formatted using Python's standard string formatting syntax as can be seen here:

import datetime\nfrom prefect import flow\n\n@flow(flow_run_name=\"{name}-on-{date:%A}\")\ndef my_flow(name: str, date: datetime.datetime):\n    pass\n\n# creates a flow run called 'marvin-on-Thursday'\nmy_flow(name=\"marvin\", date=datetime.datetime.utcnow())\n

Additionally this setting also accepts a function that returns a string for the flow run name:

import datetime\nfrom prefect import flow\n\ndef generate_flow_run_name():\n    date = datetime.datetime.utcnow()\n\n    return f\"{date:%A}-is-a-nice-day\"\n\n@flow(flow_run_name=generate_flow_run_name)\ndef my_flow(name: str):\n    pass\n\n# creates a flow run called 'Thursday-is-a-nice-day'\nmy_flow(name=\"marvin\")\n

If you need access to information about the flow, use the prefect.runtime module. For example:

from prefect import flow\nfrom prefect.runtime import flow_run\n\ndef generate_flow_run_name():\n    flow_name = flow_run.flow_name\n\n    parameters = flow_run.parameters\n    name = parameters[\"name\"]\n    limit = parameters[\"limit\"]\n\n    return f\"{flow_name}-with-{name}-and-{limit}\"\n\n@flow(flow_run_name=generate_flow_run_name)\ndef my_flow(name: str, limit: int = 100):\n    pass\n\n# creates a flow run called 'my-flow-with-marvin-and-100'\nmy_flow(name=\"marvin\")\n

Note that validate_parameters will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type. For example, if a parameter is defined as x: int and \"5\" is passed, it will be resolved to 5. If set to False, no validation will be performed on flow parameters.
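
A minimal sketch of the coercion behavior described above (the flow names are illustrative):

from prefect import flow\n\n@flow\ndef double(x: int):\n    return x * 2\n\n# \"2\" is coerced to the annotated int type before the flow body runs\nprint(double(\"2\"))  # 4\n\n@flow(validate_parameters=False)\ndef double_unchecked(x: int):\n    return x * 2\n\n# with validation disabled, the string is passed through unchanged\nprint(double_unchecked(\"2\"))  # \"22\"\n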

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#separating-logic-into-tasks","title":"Separating logic into tasks","text":"

The simplest workflow is just a @flow function that does all the work of the workflow.

from prefect import flow\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    print(f\"Hello {name}!\")\n\nhello_world(\"Marvin\")\n

When you run this flow, you'll see the following output:

$ python hello.py\n15:11:23.594 | INFO    | prefect.engine - Created flow run 'benevolent-donkey' for flow 'hello-world'\n15:11:23.594 | INFO    | Flow run 'benevolent-donkey' - Using task runner 'ConcurrentTaskRunner'\nHello Marvin!\n15:11:24.447 | INFO    | Flow run 'benevolent-donkey' - Finished in state Completed()\n

A better practice is to create @task functions that do the specific work of your flow, and use your @flow function as the conductor that orchestrates the flow of your application:

from prefect import flow, task\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n    msg = f\"Hello {name}!\"\n    print(msg)\n    return msg\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    message = print_hello(name)\n\nhello_world(\"Marvin\")\n

When you run this flow, you'll see the following output, which illustrates how the work is encapsulated in a task run.

$ python hello.py\n15:15:58.673 | INFO    | prefect.engine - Created flow run 'loose-wolverine' for flow 'Hello Flow'\n15:15:58.674 | INFO    | Flow run 'loose-wolverine' - Using task runner 'ConcurrentTaskRunner'\n15:15:58.973 | INFO    | Flow run 'loose-wolverine' - Created task run 'Print Hello-84f0fe0e-0' for task 'Print Hello'\nHello Marvin!\n15:15:59.037 | INFO    | Task run 'Print Hello-84f0fe0e-0' - Finished in state Completed()\n15:15:59.568 | INFO    | Flow run 'loose-wolverine' - Finished in state Completed('All states completed.')\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#composing-flows","title":"Composing flows","text":"

A subflow run is created when a flow function is called inside the execution of another flow. The primary flow is the \"parent\" flow. The flow created within the parent is the \"child\" flow or \"subflow.\"

Subflow runs behave like normal flow runs. There is a full representation of the flow run in the backend as if it had been called separately. When a subflow starts, it will create a new task runner for tasks within the subflow. When the subflow completes, the task runner is shut down.

Subflows will block execution of the parent flow until completion. However, asynchronous subflows can be run in parallel by using AnyIO task groups or asyncio.gather.

Subflows differ from normal flows in that they will resolve any passed task futures into data. This allows data to be passed from the parent flow to the child easily.

The relationship between a child and parent flow is tracked by creating a special task run in the parent flow. This task run will mirror the state of the child flow run.

A task that represents a subflow will be annotated as such in its state_details via the presence of a child_flow_run_id field. A subflow can be identified via the presence of a parent_task_run_id on state_details.

You can define multiple flows within the same file. Whether running locally or via a deployment, you must indicate which flow is the entrypoint for a flow run.

Cancelling subflow runs

Inline subflow runs, i.e. those created without run_deployment, cannot be cancelled without cancelling their parent flow run. If you may need to cancel a subflow run independently of its parent flow run, we recommend deploying it separately and starting it using the run_deployment method.

from prefect import flow, task\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n    msg = f\"Hello {name}!\"\n    print(msg)\n    return msg\n\n@flow(name=\"Subflow\")\ndef my_subflow(msg):\n    print(f\"Subflow says: {msg}\")\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    message = print_hello(name)\n    my_subflow(message)\n\nhello_world(\"Marvin\")\n

You can also define flows or tasks in separate modules and import them for usage. For example, here's a simple subflow module:

from prefect import flow, task\n\n@flow(name=\"Subflow\")\ndef my_subflow(msg):\n    print(f\"Subflow says: {msg}\")\n

Here's a parent flow that imports and uses my_subflow() as a subflow:

from prefect import flow, task\nfrom subflow import my_subflow\n\n@task(name=\"Print Hello\")\ndef print_hello(name):\n    msg = f\"Hello {name}!\"\n    print(msg)\n    return msg\n\n@flow(name=\"Hello Flow\")\ndef hello_world(name=\"world\"):\n    message = print_hello(name)\n    my_subflow(message)\n\nhello_world(\"Marvin\")\n

Running the hello_world() flow (in this example from the file hello.py) creates a flow run like this:

$ python hello.py\n15:19:21.651 | INFO    | prefect.engine - Created flow run 'daft-cougar' for flow 'Hello Flow'\n15:19:21.651 | INFO    | Flow run 'daft-cougar' - Using task runner 'ConcurrentTaskRunner'\n15:19:21.945 | INFO    | Flow run 'daft-cougar' - Created task run 'Print Hello-84f0fe0e-0' for task 'Print Hello'\nHello Marvin!\n15:19:22.055 | INFO    | Task run 'Print Hello-84f0fe0e-0' - Finished in state Completed()\n15:19:22.107 | INFO    | Flow run 'daft-cougar' - Created subflow run 'ninja-duck' for flow 'Subflow'\nSubflow says: Hello Marvin!\n15:19:22.794 | INFO    | Flow run 'ninja-duck' - Finished in state Completed()\n15:19:23.215 | INFO    | Flow run 'daft-cougar' - Finished in state Completed('All states completed.')\n

Subflows or tasks?

In Prefect you can call tasks or subflows to do work within your workflow, including passing results from other tasks to your subflow. So a common question we hear is:

\"When should I use a subflow instead of a task?\"

We recommend writing tasks that do a discrete, specific piece of work in your workflow: calling an API, performing a database operation, analyzing or transforming a data point. Prefect tasks are well suited to parallel or distributed execution using distributed computation frameworks such as Dask or Ray. For troubleshooting, the more granular you create your tasks, the easier it is to find and fix issues should a task fail.

Subflows enable you to group related tasks within your workflow. Here are some scenarios where you might choose to use a subflow rather than calling tasks individually:

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#parameters","title":"Parameters","text":"

Flows can be called with both positional and keyword arguments. These arguments are resolved at runtime into a dictionary of parameters mapping name to value. These parameters are stored by the Prefect orchestration engine on the flow run object.

Prefect API requires keyword arguments

When creating flow runs from the Prefect API, parameter names must be specified when overriding defaults \u2014 they cannot be positional.

Type hints provide an easy way to enforce typing on your flow parameters via pydantic. This means any pydantic model used as a type hint within a flow will be coerced automatically into the relevant object type:

from prefect import flow\nfrom pydantic import BaseModel\n\nclass Model(BaseModel):\n    a: int\n    b: float\n    c: str\n\n@flow\ndef model_validator(model: Model):\n    print(model)\n

Note that parameter values can be provided to a flow via API using a deployment. Flow run parameters sent to the API on flow calls are coerced to a serializable form. Type hints on your flow functions provide you a way of automatically coercing JSON provided values to their appropriate Python representation.

For example, to automatically convert something to a datetime:

from prefect import flow\nfrom datetime import datetime\n\n@flow\ndef what_day_is_it(date: datetime = None):\n    if date is None:\n        date = datetime.utcnow()\n    print(f\"It was {date.strftime('%A')} on {date.isoformat()}\")\n\nwhat_day_is_it(\"2021-01-01T02:00:19.180906\")\n# It was Friday on 2021-01-01T02:00:19.180906\n

Parameters are validated before a flow is run. If a flow call receives invalid parameters, a flow run is created in a Failed state. If a flow run for a deployment receives invalid parameters, it will move from a Pending state to a Failed state without entering a Running state.
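
For illustration, a minimal sketch (the flow name is illustrative) that captures the resulting Failed state with return_state=True rather than raising:

from prefect import flow\n\n@flow\ndef validated(x: int):\n    return x\n\n# \"not an int\" cannot be coerced to int, so the flow run finishes in a Failed state\nstate = validated(x=\"not an int\", return_state=True)\nprint(state.is_failed())  # True\n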

Flow run parameters cannot exceed 512kb in size

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#final-state-determination","title":"Final state determination","text":"

Prerequisite

Read the documentation about states before proceeding with this section.

The final state of the flow is determined by its return value. The following rules apply:

The following examples illustrate each of these cases:

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#raise-an-exception","title":"Raise an exception","text":"

If an exception is raised within the flow function, the flow is immediately marked as failed.

from prefect import flow\n\n@flow\ndef always_fails_flow():\n    raise ValueError(\"This flow immediately fails\")\n\nalways_fails_flow()\n

Running this flow produces the following result:

22:22:36.864 | INFO    | prefect.engine - Created flow run 'acrid-tuatara' for flow 'always-fails-flow'\n22:22:36.864 | INFO    | Flow run 'acrid-tuatara' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n22:22:37.060 | ERROR   | Flow run 'acrid-tuatara' - Encountered exception during execution:\nTraceback (most recent call last):...\nValueError: This flow immediately fails\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-none","title":"Return None","text":"

The final state of a flow with no return statement is determined by the state of all of its task runs.

from prefect import flow, task\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I fail successfully\")\n\n@task\ndef always_succeeds_task():\n    print(\"I'm fail safe!\")\n    return \"success\"\n\n@flow\ndef always_fails_flow():\n    always_fails_task.submit().result(raise_on_failure=False)\n    always_succeeds_task()\n\nif __name__ == \"__main__\":\n    always_fails_flow()\n

Running this flow produces the following result:

18:32:05.345 | INFO    | prefect.engine - Created flow run 'auburn-lionfish' for flow 'always-fails-flow'\n18:32:05.346 | INFO    | Flow run 'auburn-lionfish' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n18:32:05.582 | INFO    | Flow run 'auburn-lionfish' - Created task run 'always_fails_task-96e4be14-0' for task 'always_fails_task'\n18:32:05.582 | INFO    | Flow run 'auburn-lionfish' - Submitted task run 'always_fails_task-96e4be14-0' for execution.\n18:32:05.610 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Encountered exception during execution:\nTraceback (most recent call last):\n  ...\nValueError: I fail successfully\n18:32:05.638 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Finished in state Failed('Task run encountered an exception.')\n18:32:05.658 | INFO    | Flow run 'auburn-lionfish' - Created task run 'always_succeeds_task-9c27db32-0' for task 'always_succeeds_task'\n18:32:05.659 | INFO    | Flow run 'auburn-lionfish' - Executing 'always_succeeds_task-9c27db32-0' immediately...\nI'm fail safe!\n18:32:05.703 | INFO    | Task run 'always_succeeds_task-9c27db32-0' - Finished in state Completed()\n18:32:05.730 | ERROR   | Flow run 'auburn-lionfish' - Finished in state Failed('1/2 states failed.')\nTraceback (most recent call last):\n  ...\nValueError: I fail successfully\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-a-future","title":"Return a future","text":"

If a flow returns one or more futures, the final state is determined based on the underlying states.

from prefect import flow, task\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I fail successfully\")\n\n@task\ndef always_succeeds_task():\n    print(\"I'm fail safe!\")\n    return \"success\"\n\n@flow\ndef always_succeeds_flow():\n    x = always_fails_task.submit().result(raise_on_failure=False)\n    y = always_succeeds_task.submit(wait_for=[x])\n    return y\n\nif __name__ == \"__main__\":\n    always_succeeds_flow()\n

Running this flow produces the following result \u2014 it succeeds because it returns the future of the task that succeeds:

18:35:24.965 | INFO    | prefect.engine - Created flow run 'whispering-guan' for flow 'always-succeeds-flow'\n18:35:24.965 | INFO    | Flow run 'whispering-guan' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n18:35:25.204 | INFO    | Flow run 'whispering-guan' - Created task run 'always_fails_task-96e4be14-0' for task 'always_fails_task'\n18:35:25.205 | INFO    | Flow run 'whispering-guan' - Submitted task run 'always_fails_task-96e4be14-0' for execution.\n18:35:25.232 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Encountered exception during execution:\nTraceback (most recent call last):\n  ...\nValueError: I fail successfully\n18:35:25.265 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Finished in state Failed('Task run encountered an exception.')\n18:35:25.289 | INFO    | Flow run 'whispering-guan' - Created task run 'always_succeeds_task-9c27db32-0' for task 'always_succeeds_task'\n18:35:25.289 | INFO    | Flow run 'whispering-guan' - Submitted task run 'always_succeeds_task-9c27db32-0' for execution.\nI'm fail safe!\n18:35:25.335 | INFO    | Task run 'always_succeeds_task-9c27db32-0' - Finished in state Completed()\n18:35:25.362 | INFO    | Flow run 'whispering-guan' - Finished in state Completed('All states completed.')\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-multiple-states-or-futures","title":"Return multiple states or futures","text":"

If a flow returns a mix of futures and states, the final state is determined by resolving all futures to states, then determining if any of the states are not COMPLETED.

from prefect import task, flow\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I am bad task\")\n\n@task\ndef always_succeeds_task():\n    return \"foo\"\n\n@flow\ndef always_succeeds_flow():\n    return \"bar\"\n\n@flow\ndef always_fails_flow():\n    x = always_fails_task.submit()\n    y = always_succeeds_task.submit()\n    z = always_succeeds_flow()\n    return x, y, z\n

Running this flow produces the following result. It fails because one of the three returned futures failed. Note that the final state is Failed, but the states of each of the returned futures are included in the flow state:

20:57:51.547 | INFO    | prefect.engine - Created flow run 'impartial-gorilla' for flow 'always-fails-flow'\n20:57:51.548 | INFO    | Flow run 'impartial-gorilla' - Using task runner 'ConcurrentTaskRunner'\n20:57:51.645 | INFO    | Flow run 'impartial-gorilla' - Created task run 'always_fails_task-58ea43a6-0' for task 'always_fails_task'\n20:57:51.686 | INFO    | Flow run 'impartial-gorilla' - Created task run 'always_succeeds_task-c9014725-0' for task 'always_succeeds_task'\n20:57:51.727 | ERROR   | Task run 'always_fails_task-58ea43a6-0' - Encountered exception during execution:\nTraceback (most recent call last):...\nValueError: I am bad task\n20:57:51.787 | INFO    | Task run 'always_succeeds_task-c9014725-0' - Finished in state Completed()\n20:57:51.808 | INFO    | Flow run 'impartial-gorilla' - Created subflow run 'unbiased-firefly' for flow 'always-succeeds-flow'\n20:57:51.884 | ERROR   | Task run 'always_fails_task-58ea43a6-0' - Finished in state Failed('Task run encountered an exception.')\n20:57:52.438 | INFO    | Flow run 'unbiased-firefly' - Finished in state Completed()\n20:57:52.811 | ERROR   | Flow run 'impartial-gorilla' - Finished in state Failed('1/3 states failed.')\nFailed(message='1/3 states failed.', type=FAILED, result=(Failed(message='Task run encountered an exception.', type=FAILED, result=ValueError('I am bad task'), task_run_id=5fd4c697-7c4c-440d-8ebc-dd9c5bbf2245), Completed(message=None, type=COMPLETED, result='foo', task_run_id=df9b6256-f8ac-457c-ba69-0638ac9b9367), Completed(message=None, type=COMPLETED, result='bar', task_run_id=cfdbf4f1-dccd-4816-8d0f-128750017d0c)), flow_run_id=6d2ec094-001a-4cb0-a24e-d2051db6318d)\n

Returning multiple states

When returning multiple states, they must be contained in a set, list, or tuple. If other collection types are used, the results of the contained states will not be checked.
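
A minimal sketch of returning multiple states, assuming return_state=True is used to capture each task's state directly:

from prefect import flow, task\n\n@task\ndef my_task():\n    return 1\n\n@flow\ndef my_flow():\n    # collect the task states in a list so they are all checked\n    state_1 = my_task(return_state=True)\n    state_2 = my_task(return_state=True)\n    return [state_1, state_2]\n\nif __name__ == \"__main__\":\n    my_flow()\n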

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-a-manual-state","title":"Return a manual state","text":"

If a flow returns a manually created state, the final state is determined based on the return value.

from prefect import task, flow\nfrom prefect.server.schemas.states import Completed, Failed\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I fail successfully\")\n\n@task\ndef always_succeeds_task():\n    print(\"I'm fail safe!\")\n    return \"success\"\n\n@flow\ndef always_succeeds_flow():\n    x = always_fails_task.submit()\n    y = always_succeeds_task.submit()\n    if y.result() == \"success\":\n        return Completed(message=\"I am happy with this result\")\n    else:\n        return Failed(message=\"How did this happen!?\")\n\nif __name__ == \"__main__\":\n    always_succeeds_flow()\n

Running this flow produces the following result.

18:37:42.844 | INFO    | prefect.engine - Created flow run 'lavender-elk' for flow 'always-succeeds-flow'\n18:37:42.845 | INFO    | Flow run 'lavender-elk' - Starting 'ConcurrentTaskRunner'; submitted tasks will be run concurrently...\n18:37:43.125 | INFO    | Flow run 'lavender-elk' - Created task run 'always_fails_task-96e4be14-0' for task 'always_fails_task'\n18:37:43.126 | INFO    | Flow run 'lavender-elk' - Submitted task run 'always_fails_task-96e4be14-0' for execution.\n18:37:43.162 | INFO    | Flow run 'lavender-elk' - Created task run 'always_succeeds_task-9c27db32-0' for task 'always_succeeds_task'\n18:37:43.163 | INFO    | Flow run 'lavender-elk' - Submitted task run 'always_succeeds_task-9c27db32-0' for execution.\n18:37:43.175 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Encountered exception during execution:\nTraceback (most recent call last):\n  ...\nValueError: I fail successfully\nI'm fail safe!\n18:37:43.217 | ERROR   | Task run 'always_fails_task-96e4be14-0' - Finished in state Failed('Task run encountered an exception.')\n18:37:43.236 | INFO    | Task run 'always_succeeds_task-9c27db32-0' - Finished in state Completed()\n18:37:43.264 | INFO    | Flow run 'lavender-elk' - Finished in state Completed('I am happy with this result')\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#return-an-object","title":"Return an object","text":"

If the flow run returns any other object, then it is marked as completed.

from prefect import task, flow\n\n@task\ndef always_fails_task():\n    raise ValueError(\"I fail successfully\")\n\n@flow\ndef always_succeeds_flow():\n    always_fails_task.submit()\n    return \"foo\"\n\nif __name__ == \"__main__\":\n    always_succeeds_flow()\n

Running this flow produces the following result.

21:02:45.715 | INFO    | prefect.engine - Created flow run 'sparkling-pony' for flow 'always-succeeds-flow'\n21:02:45.715 | INFO    | Flow run 'sparkling-pony' - Using task runner 'ConcurrentTaskRunner'\n21:02:45.816 | INFO    | Flow run 'sparkling-pony' - Created task run 'always_fails_task-58ea43a6-0' for task 'always_fails_task'\n21:02:45.853 | ERROR   | Task run 'always_fails_task-58ea43a6-0' - Encountered exception during execution:\nTraceback (most recent call last):...\nValueError: I am bad task\n21:02:45.879 | ERROR   | Task run 'always_fails_task-58ea43a6-0' - Finished in state Failed('Task run encountered an exception.')\n21:02:46.593 | INFO    | Flow run 'sparkling-pony' - Finished in state Completed()\nCompleted(message=None, type=COMPLETED, result='foo', flow_run_id=7240e6f5-f0a8-4e00-9440-a7b33fb51153)\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#pause-a-flow-run","title":"Pause a flow run","text":"

Prefect enables pausing an in-progress flow run for manual approval. Prefect exposes this functionality via the pause_flow_run and resume_flow_run functions, as well as the Prefect server or Prefect Cloud UI.

Most simply, pause_flow_run can be called inside a flow. A timeout option can be supplied as well \u2014 after the specified number of seconds, the flow will fail if it hasn't been resumed.

from prefect import task, flow, pause_flow_run, resume_flow_run\n\n@task\nasync def marvin_setup():\n    return \"a raft of ducks walk into a bar...\"\n@task\nasync def marvin_punchline():\n    return \"it's a wonder none of them ducked!\"\n@flow\nasync def inspiring_joke():\n    await marvin_setup()\n    await pause_flow_run(timeout=600)  # pauses for 10 minutes\n    await marvin_punchline()\n

Calling this flow will pause after the first task and wait for resumption.

await inspiring_joke()\n> \"a raft of ducks walk into a bar...\"\n

Paused flow runs can be resumed by clicking the Resume button in the Prefect UI or calling the resume_flow_run utility via client code.

resume_flow_run(FLOW_RUN_ID)\n

The paused flow run will then finish!

> \"it's a wonder none of them ducked!\"\n

Here is an example of a flow that does not block flow execution while paused. This flow will exit after one task, and will be rescheduled upon resuming. The stored result of the first task is retrieved instead of being rerun.

from prefect import flow, pause_flow_run, task\n\n@task(persist_result=True)\ndef foo():\n    return 42\n\n@flow(persist_result=True)\ndef noblock_pausing():\n    x = foo.submit()\n    pause_flow_run(timeout=30, reschedule=True)\n    y = foo.submit()\n    z = foo(wait_for=[x])\n    alpha = foo(wait_for=[y])\n    omega = foo(wait_for=[x, y])\n

This long-running flow can be paused out of process, either by calling pause_flow_run(flow_run_id=<ID>) or selecting the Pause button in the Prefect UI or Prefect Cloud.

from prefect import flow, task\nimport time\n\n@task(persist_result=True)\nasync def foo():\n    return 42\n\n@flow(persist_result=True)\nasync def longrunning():\n    res = 0\n    for ii in range(20):\n        time.sleep(5)\n        res += (await foo())\n    return res\n

Pausing flow runs is blocking by default

By default, pausing a flow run blocks the agent \u2014 the flow is still running inside the pause_flow_run function. However, you may pause any flow run in this fashion, including non-deployment local flow runs and subflows.

Alternatively, flow runs can be paused without blocking the flow run process. This is particularly useful when running the flow via an agent and you want the agent to be able to pick up other flows while this flow run is paused.

Non-blocking pause can be accomplished by setting the reschedule flag to True. In order to use this feature, flows that pause with the reschedule flag must have:

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#cancel-a-flow-run","title":"Cancel a flow run","text":"

You may cancel a scheduled or in-progress flow run from the CLI, UI, REST API, or Python client.

When cancellation is requested, the flow run is moved to a \"Cancelling\" state. The agent monitors the state of flow runs and detects that cancellation has been requested. The agent then sends a signal to the flow run infrastructure, requesting termination of the run. If the run does not terminate after a grace period (default of 30 seconds), the infrastructure will be killed, ensuring the flow run exits.

An agent is required

Flow run cancellation requires the flow run to be submitted by an agent or worker. The agent or worker must be running to enforce the cancellation. Inline subflow runs, i.e. those created without run_deployment, cannot be cancelled without cancelling the parent flow run. If you may need to cancel a subflow run independently of its parent flow run, we recommend deploying it separately and starting it using the run_deployment method.

Support for cancellation is included for all core library infrastructure types:

Cancellation is robust to restarts of the agent. To enable this, we attach metadata about the created infrastructure to the flow run. Internally, this is referred to as the infrastructure_pid or infrastructure identifier. Generally, this is composed of two parts:

  1. Scope: identifying where the infrastructure is running.
  2. ID: a unique identifier for the infrastructure within the scope.

The scope is used to ensure that Prefect does not kill the wrong infrastructure. For example, agents running on multiple machines may have overlapping process IDs but should not have a matching scope.

The identifiers for the primary infrastructure types are as follows:

While the cancellation process is robust, there are a few issues that can occur:

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#cancel-via-the-cli","title":"Cancel via the CLI","text":"

From the command line in your execution environment, you can cancel a flow run by using the prefect flow-run cancel CLI command, passing the ID of the flow run.

$ prefect flow-run cancel 'a55a4804-9e3c-4042-8b59-b3b6b7618736'\n
","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/flows/#cancel-via-the-ui","title":"Cancel via the UI","text":"

From the UI you can cancel a flow run by navigating to the flow run's detail page and clicking the Cancel button in the upper right corner.

","tags":["flows","subflows","workflows","scripts","parameters","states","final state"],"boost":2},{"location":"concepts/infrastructure/","title":"Infrastructure","text":"

Users may specify an infrastructure block when creating a deployment. This block will be used to specify infrastructure for flow runs created by the deployment at runtime.

Infrastructure can only be used with a deployment. When you run a flow directly by calling the flow yourself, you are responsible for the environment in which the flow executes.

","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#infrastructure-overview","title":"Infrastructure overview","text":"

Prefect uses infrastructure to create the environment for a user's flow to execute.

Infrastructure is attached to a deployment and is propagated to flow runs created for that deployment. Infrastructure is deserialized by the agent and it has two jobs:

The engine acquires and calls the flow. Infrastructure doesn't know anything about how the flow is stored; it simply passes a flow run ID to the engine.

Infrastructure is specific to the environments in which flows will run. Prefect currently provides the following infrastructure types:

What about tasks?

Flows and tasks can both use configuration objects to manage the environment in which code runs.

Flows use infrastructure.

Tasks use task runners. For more on how task runners work, see Task Runners.

","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#using-infrastructure","title":"Using infrastructure","text":"

You may create customized infrastructure blocks through the Prefect UI or Prefect Cloud Blocks page, or create them in code and save them to the API using the block's .save() method.

Once created, there are two distinct ways to use infrastructure in a deployment:

For example, when creating your deployment files, the supported Prefect infrastructure types are:

$ prefect deployment build ./my_flow.py:my_flow -n my-flow-deployment -t test -i docker-container -sb s3/my-bucket --override env.EXTRA_PIP_PACKAGES=s3fs\nFound flow 'my-flow'\nSuccessfully uploaded 2 files to s3://bucket-full-of-sunshine\nDeployment YAML created at '/Users/terry/test/flows/infra/deployment.yaml'.\n

In this example we specify the DockerContainer infrastructure in addition to a preconfigured AWS S3 bucket storage block.

The default deployment YAML filename may be edited as needed to add an infrastructure type or infrastructure settings.

###\n### A complete description of a Prefect Deployment for flow 'my-flow'\n###\nname: my-flow-deployment\ndescription: null\nversion: e29de5d01b06d61b4e321d40f34a480c\n# The work queue that will handle this deployment's runs\nwork_queue_name: default\nwork_pool_name: default-agent-pool\ntags:\n- test\nparameters: {}\nschedule: null\nis_schedule_active: true\ninfra_overrides:\n  env.EXTRA_PIP_PACKAGES: s3fs\ninfrastructure:\n  type: docker-container\n  env: {}\n  labels: {}\n  name: null\n  command:\n  - python\n  - -m\n  - prefect.engine\n  image: prefecthq/prefect:2-latest\n  image_pull_policy: null\n  networks: []\n  network_mode: null\n  auto_remove: false\n  volumes: []\n  stream_output: true\n  memswap_limit: null\n  mem_limit: null\n  privileged: false\n  block_type_slug: docker-container\n  _block_type_slug: docker-container\n\n###\n### DO NOT EDIT BELOW THIS LINE\n###\nflow_name: my-flow\nmanifest_path: my_flow-manifest.json\nstorage:\n  bucket_path: bucket-full-of-sunshine\n  aws_access_key_id: '**********'\n  aws_secret_access_key: '**********'\n  _is_anonymous: true\n  _block_document_name: anonymous-xxxxxxxx-f1ff-4265-b55c-6353a6d65333\n  _block_document_id: xxxxxxxx-06c2-4c3c-a505-4a8db0147011\n  block_type_slug: s3\n  _block_type_slug: s3\npath: ''\nentrypoint: my_flow.py:my-flow\nparameter_openapi_schema:\n  title: Parameters\n  type: object\n  properties: {}\n  required: null\n  definitions: null\ntimestamp: '2023-02-08T23:00:14.974642+00:00'\n

Editing deployment YAML

Note the big DO NOT EDIT comment in the deployment YAML: in practice, anything above this comment can be freely edited before running prefect deployment apply to create the deployment on the API.
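
Once you are happy with the edits, apply the YAML to create the deployment on the API (adjust the filename to wherever your deployment YAML was written):

$ prefect deployment apply deployment.yaml\n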

Once the deployment exists, any flow runs that this deployment starts will use DockerContainer infrastructure.

You can also create custom infrastructure blocks, either in the Prefect UI or in code via the API, and use the settings in the block to configure your infrastructure. For example, here we specify settings for Kubernetes infrastructure in a block named k8sdev.

from prefect.infrastructure import KubernetesJob, KubernetesImagePullPolicy\n\nk8s_job = KubernetesJob(\n    namespace=\"dev\",\n    image=\"prefecthq/prefect:2.0.0-python3.9\",\n    image_pull_policy=KubernetesImagePullPolicy.IF_NOT_PRESENT,\n)\nk8s_job.save(\"k8sdev\")\n

Now we can apply the infrastructure type and settings in the block by specifying the block slug kubernetes-job/k8sdev as the infrastructure type when building a deployment:

prefect deployment build flows/k8s_example.py:k8s_flow --name k8sdev --tag k8s -sb s3/dev -ib kubernetes-job/k8sdev\n

See Deployments for more information about deployment build options.

","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#configuring-infrastructure","title":"Configuring infrastructure","text":"

Every infrastructure type has type-specific options.

","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#process","title":"Process","text":"

Process infrastructure runs a command in a new process.

Current environment variables and Prefect settings will be included in the created process. Configured environment variables will override any current environment variables.
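
As a minimal sketch, a Process block can be created and saved in code much like any other block (the block name dev-process and the environment variable here are illustrative):

from prefect.infrastructure import Process\n\n# Run flow runs in a local subprocess with an extra environment variable set.\nprocess_block = Process(\n    env={\"MY_ENV_VAR\": \"value\"},\n    name=\"dev-process\",  # display name only\n)\nprocess_block.save(\"dev-process\", overwrite=True)\n

The saved block could then be referenced when building a deployment, e.g. with -ib process/dev-process (assuming process is the block type slug).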

Process supports the following settings:

Attributes Description command A list of strings specifying the command to start the flow run. In most cases you should not override this. env Environment variables to set for the new process. labels Labels for the process. Labels are for metadata purposes only and cannot be attached to the process itself. name A name for the process. For display purposes only.","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#dockercontainer","title":"DockerContainer","text":"

DockerContainer infrastructure executes flow runs in a container.

Requirements for DockerContainer:

DockerContainer supports the following settings:

Attributes Description auto_remove Bool indicating whether the container will be removed on completion. If False, the container will remain after exit for inspection. command A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this. env Environment variables to set for the container. image An optional string specifying the name of a Docker image to use. Defaults to the Prefect image. If the image is stored anywhere other than a public Docker Hub registry, use a corresponding registry block, e.g. DockerRegistry or ensure otherwise that your execution layer is authenticated to pull the image from the image registry. image_pull_policy Specifies if the image should be pulled. One of 'ALWAYS', 'NEVER', 'IF_NOT_PRESENT'. image_registry A DockerRegistry block containing credentials to use if image is stored in a private image registry. labels An optional dictionary of labels, mapping name to value. name An optional name for the container. networks An optional list of strings specifying Docker networks to connect the container to. network_mode Set the network mode for the created container. Defaults to 'host' if a local API url is detected, otherwise the Docker default of 'bridge' is used. If 'networks' is set, this cannot be set. stream_output Bool indicating whether to stream output from the subprocess to local standard output. volumes An optional list of volume mount strings in the format of \"local_path:container_path\".

Prefect automatically sets a Docker image matching the Python and Prefect version you're using at deployment time. You can see all available images at Docker Hub.
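
As a sketch, a DockerContainer block can be created and saved in code and then referenced when building a deployment (the image tag and block name here are illustrative):

from prefect.infrastructure import DockerContainer\n\ndocker_block = DockerContainer(\n    image=\"prefecthq/prefect:2-python3.10\",\n    image_pull_policy=\"IF_NOT_PRESENT\",\n    auto_remove=True,\n)\ndocker_block.save(\"docker-dev\", overwrite=True)\n

The saved block could then be passed to prefect deployment build with -ib docker-container/docker-dev.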

","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#kubernetesjob","title":"KubernetesJob","text":"

KubernetesJob infrastructure executes flow runs in a Kubernetes Job.

Requirements for KubernetesJob:

The Prefect CLI command prefect kubernetes manifest server automatically generates a Kubernetes manifest with default settings for Prefect deployments. By default, it simply prints out the YAML configuration for a manifest. You can pipe this output to a file of your choice and edit as necessary.
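
For example, to write the generated manifest to a file for editing (the filename is arbitrary):

$ prefect kubernetes manifest server > prefect-k8s-manifest.yaml\n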

KubernetesJob supports the following settings:

Attributes Description cluster_config An optional Kubernetes cluster config to use for this job. command A list of strings specifying the command to run in the container to start the flow run. In most cases you should not override this. customizations A list of JSON 6902 patches to apply to the base Job manifest. Alternatively, a valid JSON string is allowed (handy for deployments CLI). env Environment variables to set for the container. finished_job_ttl The number of seconds to retain jobs after completion. If set, finished jobs will be cleaned up by Kubernetes after the given delay. If None (default), jobs will need to be manually removed. image String specifying the tag of a Docker image to use for the Job. image_pull_policy The Kubernetes image pull policy to use for job containers. job The base manifest for the Kubernetes Job. job_watch_timeout_seconds Number of seconds to watch for job creation before timing out (defaults to None). labels Dictionary of labels to add to the Job. name An optional name for the job. namespace String signifying the Kubernetes namespace to use. pod_watch_timeout_seconds Number of seconds to watch for pod creation before timing out (default 60). service_account_name An optional string specifying which Kubernetes service account to use. stream_output Bool indicating whether to stream output from the subprocess to local standard output.","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#kubernetesjob-overrides-and-customizations","title":"KubernetesJob overrides and customizations","text":"

When creating deployments using KubernetesJob infrastructure, the infra_overrides parameter expects a dictionary. For a KubernetesJob, the customizations parameter expects a list.

Containers expect a list of objects, even if there is only one. For any patches applying to the container, the path value should index into the containers list, for example: /spec/template/spec/containers/0/resources

A KubernetesJob infrastructure block defined in Python:

from prefect.infrastructure import KubernetesJob, KubernetesImagePullPolicy\n\ncustomizations = [\n    {\n        \"op\": \"add\",\n        \"path\": \"/spec/template/spec/containers/0/resources\",\n        \"value\": {\n            \"requests\": {\n                \"cpu\": \"2000m\",\n                \"memory\": \"4Gi\"\n            },\n            \"limits\": {\n                \"cpu\": \"4000m\",\n                \"memory\": \"8Gi\",\n                \"nvidia.com/gpu\": \"1\"\n            }\n        },\n    }\n]\n\n# namespace and image_name are assumed to be defined elsewhere in your script\nk8s_job = KubernetesJob(\n    namespace=namespace,\n    image=image_name,\n    image_pull_policy=KubernetesImagePullPolicy.ALWAYS,\n    finished_job_ttl=300,\n    job_watch_timeout_seconds=600,\n    pod_watch_timeout_seconds=600,\n    service_account_name=\"prefect-server\",\n    customizations=customizations,\n)\nk8s_job.save(\"devk8s\")\n

A Deployment with infra_overrides defined in Python:

from prefect.deployments import Deployment\nfrom prefect.infrastructure import KubernetesJob\n\ninfra_overrides = {\n    \"customizations\": [\n        {\n            \"op\": \"add\",\n            \"path\": \"/spec/template/spec/containers/0/resources\",\n            \"value\": {\n                \"requests\": {\n                    \"cpu\": \"2000m\",\n                    \"memory\": \"4Gi\"\n                },\n                \"limits\": {\n                    \"cpu\": \"4000m\",\n                    \"memory\": \"8Gi\",\n                    \"nvidia.com/gpu\": \"1\"\n                }\n            },\n        }\n    ]\n}\n\n# Load the already created KubernetesJob block\nk8s_job = KubernetesJob.load(\"devk8s\")\n\n# my_flow and storage are assumed to be defined elsewhere in your script\ndeployment = Deployment.build_from_flow(\n    flow=my_flow,\n    name=\"s3-example\",\n    version=2,\n    work_queue_name=\"aws\",\n    infrastructure=k8s_job,\n    storage=storage,\n    infra_overrides=infra_overrides,\n)\n\ndeployment.apply()\n
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#ecstask","title":"ECSTask","text":"

ECSTask infrastructure runs your flow in an ECS Task.

Requirements for ECSTask:

# example base image\nFROM prefecthq/prefect:2-python3.9\nRUN pip install s3fs prefect-aws\n

To get started using Prefect with ECS, check out the repository template dataflow-ops demonstrating ECS agent setup and various deployment configurations for using ECSTask block.

Make sure to allocate enough CPU and memory to your agent, and consider adding retries

When you start a Prefect agent on AWS ECS Fargate, allocate as much CPU and memory as needed for your workloads. Your agent needs enough resources to appropriately provision infrastructure for your flow runs and to monitor their execution. Otherwise, your flow runs may get stuck in a Pending state. Alternatively, set a work-queue concurrency limit to ensure that the agent will not try to process all runs at the same time.

Some API calls to provision infrastructure may fail due to unexpected issues on the client side (for example, transient errors such as ConnectionError, HTTPClientError, or RequestTimeout), or due to server-side rate limiting from the AWS service. To mitigate those issues, we recommend adding environment variables such as AWS_MAX_ATTEMPTS (can be set to an integer value such as 10) and AWS_RETRY_MODE (can be set to a string value including standard or adaptive modes). Those environment variables must be added within the agent environment, e.g. on your ECS service running the agent, rather than on the ECSTask infrastructure block.
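
A sketch of what this might look like when starting the agent from a shell (the work queue name is illustrative; on ECS you would typically set these variables in the service's container definition instead):

export AWS_MAX_ATTEMPTS=10\nexport AWS_RETRY_MODE=adaptive\nprefect agent start -q default\n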

","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#docker-images","title":"Docker images","text":"

Every release of Prefect comes with a few built-in images. These images are all named prefecthq/prefect and their tags are used to identify differences in images.

Prefect agents rely on Docker images for executing flow runs using DockerContainer or KubernetesJob infrastructure. If you do not specify an image, we will use a Prefect image tag that matches your local Prefect and Python versions. If you are building your own image, you may find it useful to use one of the Prefect images as a base.

Choose image versions wisely

It's a good practice to use Docker images with specific Prefect versions in production.

Use care when employing images that automatically update to new versions (such as prefecthq/prefect:2-python3.9 or prefecthq/prefect:2-latest).

","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#image-tags","title":"Image tags","text":"

When a release is published, images are built for all of Prefect's supported Python versions. These images are tagged to identify the combination of Prefect and Python versions contained. Additionally, we have \"convenience\" tags which are updated with each release to facilitate automatic updates.

For example, when release 2.1.1 is published:

  1. Images with the release packaged are built for each supported Python version (3.8, 3.9, 3.10, 3.11) with both standard Python and Conda.
  2. These images are tagged with the full description, e.g. prefect:2.1.1-python3.10 and prefect:2.1.1-python3.10-conda.
  3. For users that want more specific pins, these images are also tagged with the SHA of the git commit of the release, e.g. sha-88a7ff17a3435ec33c95c0323b8f05d7b9f3f6d2-python3.10
  4. For users that want to be on the latest 2.1.x release, receiving patch updates, we update a tag without the patch version to this release, e.g. prefect:2.1-python3.10.
  5. For users that want to be on the latest 2.x.y release, receiving minor version updates, we update a tag without the minor or patch version to this release, e.g. prefect:2-python3.10.
  6. Finally, for users who want the latest 2.x.y release without specifying a Python version, we update 2-latest to the image for our highest supported Python version, which in this case would be equivalent to prefect:2.1.1-python3.10.
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#standard-python","title":"Standard Python","text":"

Standard Python images are based on the official Python slim images, e.g. python:3.10-slim.

Tag Prefect Version Python Version 2-latest most recent v2 PyPi version 3.10 2-python3.11 most recent v2 PyPi version 3.11 2-python3.10 most recent v2 PyPi version 3.10 2-python3.9 most recent v2 PyPi version 3.9 2-python3.8 most recent v2 PyPi version 3.8 2.X-python3.11 2.X 3.11 2.X-python3.10 2.X 3.10 2.X-python3.9 2.X 3.9 2.X-python3.8 2.X 3.8 sha-<hash>-python3.11 <hash> 3.11 sha-<hash>-python3.10 <hash> 3.10 sha-<hash>-python3.9 <hash> 3.9 sha-<hash>-python3.8 <hash> 3.8","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#conda-flavored-python","title":"Conda-flavored Python","text":"

Conda flavored images are based on continuumio/miniconda3. Prefect is installed into a conda environment named prefect.

Note that Conda support for Python 3.11 is not yet available, so we cannot build a Conda-flavored image for Python 3.11.

Tag Prefect Version Python Version 2-latest-conda most recent v2 PyPi version 3.10 2-python3.10-conda most recent v2 PyPi version 3.10 2-python3.9-conda most recent v2 PyPi version 3.9 2-python3.8-conda most recent v2 PyPi version 3.8 2.X-python3.10-conda 2.X 3.10 2.X-python3.9-conda 2.X 3.9 2.X-python3.8-conda 2.X 3.8 sha-<hash>-python3.10-conda <hash> 3.10 sha-<hash>-python3.9-conda <hash> 3.9 sha-<hash>-python3.8-conda <hash> 3.8","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#installing-extra-dependencies-at-runtime","title":"Installing Extra Dependencies at Runtime","text":"

If you're using the prefecthq/prefect image (or an image based on prefecthq/prefect), you can make use of the EXTRA_PIP_PACKAGES environment variable to install dependencies at runtime. If defined, pip install ${EXTRA_PIP_PACKAGES} is executed before the flow run starts.

For production deploys we recommend building a custom image (as described below). Installing dependencies during each flow run can be costly (since you're downloading from PyPI on each execution) and adds another opportunity for failure. Use of EXTRA_PIP_PACKAGES can be useful during development though, as it allows you to iterate on dependencies without building a new image each time.
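
For example, a sketch of setting EXTRA_PIP_PACKAGES on a DockerContainer block so the listed packages are installed when the container starts (the package list and block name are illustrative):

from prefect.infrastructure import DockerContainer\n\ndocker_block = DockerContainer(\n    env={\"EXTRA_PIP_PACKAGES\": \"s3fs pandas\"},  # installed via pip before the flow run starts\n)\ndocker_block.save(\"docker-extra-deps\", overwrite=True)\n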

","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#building-your-own-image","title":"Building your Own Image","text":"

If your flow relies on dependencies not found in the default prefecthq/prefect images, you'll want to build your own image. You can either base it off of one of the provided prefecthq/prefect images, or build your own from scratch.

Extending the prefecthq/prefect image

Here we provide an example Dockerfile for building an image based on prefecthq/prefect:2-latest, but with scikit-learn installed.

FROM prefecthq/prefect:2-latest\n\nRUN pip install scikit-learn\n
","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/infrastructure/#choosing-an-image-strategy","title":"Choosing an Image Strategy","text":"

The options described above have different complexity (and performance) characteristics. For choosing a strategy, we provide the following recommendations:

","tags":["orchestration","infrastructure","flow run infrastructure","deployments","Kubernetes","Docker","ECS","Cloud Run","Container Instances"],"boost":2},{"location":"concepts/logs/","title":"Logging","text":"

Prefect enables you to log a variety of useful information about your flow and task runs, capturing information about your workflows for purposes such as monitoring, troubleshooting, and auditing.

Prefect captures logs for your flow and task runs by default, even if you have not started a Prefect server with prefect server start.

You can view and filter logs in the Prefect UI or Prefect Cloud, or access log records via the API.

Prefect enables fine-grained customization of log levels for flows and tasks, including configuration for default levels and log message formatting.

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#logging-overview","title":"Logging overview","text":"

Whenever you run a flow, Prefect automatically logs events for flow runs and task runs, along with any custom log handlers you have configured. No configuration is needed to enable Prefect logging.

For example, say you created a simple flow in a file flow.py. If you create a local flow run with python flow.py, you'll see an example of the log messages created automatically by Prefect:

$ python flow.py\n16:45:44.534 | INFO    | prefect.engine - Created flow run 'gray-dingo' for flow 'hello-flow'\n16:45:44.534 | INFO    | Flow run 'gray-dingo' - Using task runner 'SequentialTaskRunner'\n16:45:44.598 | INFO    | Flow run 'gray-dingo' - Created task run 'hello-task-54135dc1-0' for task 'hello-task'\nHello world!\n16:45:44.650 | INFO    | Task run 'hello-task-54135dc1-0' - Finished in state \nCompleted(None)\n16:45:44.672 | INFO    | Flow run 'gray-dingo' - Finished in state \nCompleted('All states completed.')\n

You can see logs for the flow run in the Prefect UI by navigating to the Flow Runs page and selecting a specific flow run to inspect.

These log messages reflect the logging configuration for log levels and message formatters. You may customize the log levels captured and the default message format through configuration, and you can capture custom logging events by explicitly emitting log messages during flow and task runs.

Prefect supports the standard Python logging levels CRITICAL, ERROR, WARNING, INFO, and DEBUG. By default, Prefect displays INFO-level and above events. You can configure the root logging level as well as specific logging levels for flow and task runs.

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#logging-configuration","title":"Logging Configuration","text":"","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#logging-settings","title":"Logging settings","text":"

Prefect provides several settings for configuring logging level and loggers.

By default, Prefect displays INFO-level and above logging records. If you change this level to DEBUG, DEBUG-level logs created by Prefect will be shown as well. You may need to change the log level used by loggers from other libraries to see their log records.

You can override any logging configuration by setting an environment variable or Prefect Profile setting using the syntax PREFECT_LOGGING_[PATH]_[TO]_[KEY], with [PATH]_[TO]_[KEY] corresponding to the nested address of any setting.

For example, to change the default logging levels for Prefect to DEBUG, you can set the environment variable PREFECT_LOGGING_LEVEL=\"DEBUG\".
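
For example, from a shell, or persisted in the active Prefect profile with prefect config set:

export PREFECT_LOGGING_LEVEL=\"DEBUG\"\n# or, to persist the setting in the active profile:\nprefect config set PREFECT_LOGGING_LEVEL=\"DEBUG\"\n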

You may also configure the \"root\" Python logger. The root logger receives logs from all loggers unless they explicitly opt out by disabling propagation. By default, the root logger is configured to output WARNING level logs to the console. As with other logging settings, you can override this from the environment or in the logging configuration file. For example, you can change the level with the variable PREFECT_LOGGING_ROOT_LEVEL.

You may adjust the log level used by specific handlers. For example, you could set PREFECT_LOGGING_HANDLERS_API_LEVEL=ERROR to have only ERROR logs reported to the Prefect API. The console handlers will still default to level INFO.

There is a logging.yml file packaged with Prefect that defines the default logging configuration.

You can customize logging configuration by creating your own version of logging.yml with custom settings, either by creating the file at the default location (~/.prefect/logging.yml) or by specifying the path to the file with PREFECT_LOGGING_SETTINGS_PATH. (If the file does not exist at the specified location, Prefect ignores the setting and uses the default configuration.)

See the Python Logging configuration documentation for more information about the configuration options and syntax used by logging.yml.

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#prefect-loggers","title":"Prefect Loggers","text":"

To access the Prefect logger, import get_run_logger with from prefect import get_run_logger. You can send messages to the logger in both flows and tasks.

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#logging-in-flows","title":"Logging in flows","text":"

To log from a flow, retrieve a logger instance with get_run_logger(), then call the standard Python logging methods.

from prefect import flow, get_run_logger\n\n@flow(name=\"log-example-flow\")\ndef logger_flow():\n    logger = get_run_logger()\n    logger.info(\"INFO level log message.\")\n

Prefect automatically uses the flow run logger based on the flow context. If you run the above code, Prefect captures the following as a log event.

15:35:17.304 | INFO    | Flow run 'mottled-marten' - INFO level log message.\n

The default flow run log formatter uses the flow run name for log messages.

Note

Starting in 2.7.11, if you use a logger that sends logs to the API outside of a flow or task run, a warning will be displayed instead of an error. You can silence this warning by setting PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW=ignore or have the logger raise an error by setting the value to error.

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#logging-in-tasks","title":"Logging in tasks","text":"

Logging in tasks works much as logging in flows: retrieve a logger instance with get_run_logger(), then call the standard Python logging methods.

from prefect import flow, task, get_run_logger\n\n@task(name=\"log-example-task\")\ndef logger_task():\n    logger = get_run_logger()\n    logger.info(\"INFO level log message from a task.\")\n\n@flow(name=\"log-example-flow\")\ndef logger_flow():\n    logger_task()\n

Prefect automatically uses the task run logger based on the task context. The default task run log formatter uses the task run name for log messages.

15:33:47.179 | INFO   | Task run 'logger_task-80a1ffd1-0' - INFO level log message from a task.\n

The underlying log model for task runs captures the task name, task run ID, and parent flow run ID, which are persisted to the database for reporting and may also be used in custom message formatting.

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#logging-print-statements","title":"Logging print statements","text":"

Prefect provides the log_prints option to enable the logging of print statements at the task or flow level. When log_prints=True for a given task or flow, the Python builtin print will be patched to redirect to the Prefect logger for the scope of that task or flow.

By default, tasks and subflows will inherit the log_prints setting from their parent flow, unless opted out with their own explicit log_prints setting.

from prefect import task, flow\n\n@task\ndef my_task():\n    print(\"we're logging print statements from a task\")\n\n@flow(log_prints=True)\ndef my_flow():\n    print(\"we're logging print statements from a flow\")\n    my_task()\n

Will output:

15:52:11.244 | INFO    | prefect.engine - Created flow run 'emerald-gharial' for flow 'my-flow'\n15:52:11.812 | INFO    | Flow run 'emerald-gharial' - we're logging print statements from a flow\n15:52:11.926 | INFO    | Flow run 'emerald-gharial' - Created task run 'my_task-20c6ece6-0' for task 'my_task'\n15:52:11.927 | INFO    | Flow run 'emerald-gharial' - Executing 'my_task-20c6ece6-0' immediately...\n15:52:12.217 | INFO    | Task run 'my_task-20c6ece6-0' - we're logging print statements from a task\n
from prefect import task, flow\n\n@task(log_prints=False)\ndef my_task():\n    print(\"not logging print statements in this task\")\n\n@flow(log_prints=True)\ndef my_flow():\n    print(\"we're logging print statements from a flow\")\n    my_task()\n

Using log_prints=False at the task level will output:

15:52:11.244 | INFO    | prefect.engine - Created flow run 'emerald-gharial' for flow 'my-flow'\n15:52:11.812 | INFO    | Flow run 'emerald-gharial' - we're logging print statements from a flow\n15:52:11.926 | INFO    | Flow run 'emerald-gharial' - Created task run 'my_task-20c6ece6-0' for task 'my_task'\n15:52:11.927 | INFO    | Flow run 'emerald-gharial' - Executing 'my_task-20c6ece6-0' immediately...\nnot logging print statements in this task\n

You can also configure this behavior globally for all Prefect flows, tasks, and subflows.

prefect config set PREFECT_LOGGING_LOG_PRINTS=True\n
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#formatters","title":"Formatters","text":"

Prefect log formatters specify the format of log messages. You can see details of message formatting for different loggers in logging.yml. For example, the default formatting for task run log records is:

\"%(asctime)s.%(msecs)03d | %(levelname)-7s | Task run %(task_run_name)r - %(message)s\"\n

The variables available to interpolate in log messages vary by logger. In addition to the run context, message string, and any keyword arguments, flow and task run loggers have access to additional variables.

The flow run logger has the following:

The task run logger has the following:

You can specify custom formatting by setting an environment variable or by modifying the formatter in a logging.yml file as described earlier. For example, to change the formatting for the flow runs formatter:

PREFECT_LOGGING_FORMATTERS_STANDARD_FLOW_RUN_FMT=\"%(asctime)s.%(msecs)03d | %(levelname)-7s | %(flow_run_id)s - %(message)s\"\n

The resulting messages, using the flow run ID instead of name, would look like this:

10:40:01.211 | INFO    | e43a5a80-417a-41c4-a39e-2ef7421ee1fc - Created task run 'othertask-1c085beb-3' for task 'othertask'\n
","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#styles","title":"Styles","text":"

By default, Prefect highlights specific keywords in the console logs with a variety of colors.

Highlighting can be toggled on/off with the PREFECT_LOGGING_COLORS setting, e.g.

PREFECT_LOGGING_COLORS=False\n

You can change what gets highlighted and also adjust the colors by updating the styles in a logging.yml file. The following lists the specific keys built into the PrefectConsoleHighlighter.

URLs:

Log levels:

State types:

Flow (run) names:

Task (run) names:

You can also build your own handler with a custom highlighter. For example, to additionally highlight emails:

  1. Copy and paste the following into my_package_or_module.py (rename as needed) in the same directory as the flow run script, or, ideally, as part of a Python package so it's available in site-packages and can be accessed anywhere within your environment.
import logging\nfrom typing import Dict, Union\n\nfrom rich.highlighter import Highlighter\n\nfrom prefect.logging.handlers import PrefectConsoleHandler\nfrom prefect.logging.highlighters import PrefectConsoleHighlighter\n\nclass CustomConsoleHighlighter(PrefectConsoleHighlighter):\n    base_style = \"log.\"\n    highlights = PrefectConsoleHighlighter.highlights + [\n        # ?P<email> is naming this expression as `email`\n        r\"(?P<email>[\\w-]+@([\\w-]+\\.)+[\\w-]+)\",\n    ]\n\nclass CustomConsoleHandler(PrefectConsoleHandler):\n    def __init__(\n        self,\n        highlighter: Highlighter = CustomConsoleHighlighter,\n        styles: Dict[str, str] = None,\n        level: Union[int, str] = logging.NOTSET,\n   ):\n        super().__init__(highlighter=highlighter, styles=styles, level=level)\n
  2. Update ~/.prefect/logging.yml to use my_package_or_module.CustomConsoleHandler and additionally reference the base_style and named expression: log.email.

console_flow_runs:\n  level: 0\n  class: my_package_or_module.CustomConsoleHandler\n  formatter: flow_runs\n  styles:\n    log.email: magenta\n    # other styles can be appended here, e.g.\n    # log.completed_state: green\n

  3. Then, on your next flow run, text that looks like an email will be highlighted; for example, my@email.com is colored in magenta here.

from prefect import flow, get_run_logger\n\n@flow\ndef log_email_flow():\n    logger = get_run_logger()\n    logger.info(\"my@email.com\")\n\nlog_email_flow()\n

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#applying-markup-in-logs","title":"Applying markup in logs","text":"

To use Rich's markup in Prefect logs, first configure PREFECT_LOGGING_MARKUP.

PREFECT_LOGGING_MARKUP=True\n

Then, the following will highlight \"fancy\" in red.

from prefect import flow, get_run_logger\n\n@flow\ndef my_flow():\n    logger = get_run_logger()\n    logger.info(\"This is [bold red]fancy[/]\")\n\nmy_flow()\n

Inaccurate logs could result

Although this can be convenient, the downside is that, if enabled, strings that contain square brackets may be inaccurately interpreted and lead to incomplete output, e.g. DROP TABLE [dbo].[SomeTable]; outputs DROP TABLE .[SomeTable];.

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#log-database-schema","title":"Log database schema","text":"

Logged events are also persisted to the Prefect database. A log record includes the following data:

Column Description id Primary key ID of the log record. created Timestamp specifying when the record was created. updated Timestamp specifying when the record was updated. name String specifying the name of the logger. level Integer representation of the logging level. flow_run_id ID of the flow run associated with the log record. If the log record is for a task run, this is the parent flow of the task. task_run_id ID of the task run associated with the log record. Null if logging a flow run event. message Log message. timestamp The client-side timestamp of this logged statement.

For more information, see Log schema in the API documentation.

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/logs/#including-logs-from-other-libraries","title":"Including logs from other libraries","text":"

By default, Prefect won't capture log statements from libraries that your flows and tasks use. You can tell Prefect to include logs from these libraries with the PREFECT_LOGGING_EXTRA_LOGGERS setting.

To use this setting, specify one or more Python library names to include, separated by commas. For example, if you want to make sure Prefect captures Dask and SciPy logging statements with your flow and task run logs:

PREFECT_LOGGING_EXTRA_LOGGERS=dask,scipy\n

You can set this setting as an environment variable or in a profile. See Settings for more details about how to use settings.

","tags":["UI","dashboard","Prefect Cloud","flows","tasks","logging","log formatters","configuration","debug"],"boost":2},{"location":"concepts/results/","title":"Results","text":"

Results represent the data returned by a flow or a task.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#retrieving-results","title":"Retrieving results","text":"

When calling flows or tasks, the result is returned directly:

from prefect import flow, task\n\n@task\ndef my_task():\n    return 1\n\n@flow\ndef my_flow():\n    task_result = my_task()\n    return task_result + 1\n\nresult = my_flow()\nassert result == 2\n

When working with flow and task states, the result can be retrieved with the State.result() method:

from prefect import flow, task\n\n@task\ndef my_task():\n    return 1\n\n@flow\ndef my_flow():\n    state = my_task(return_state=True)\n    return state.result() + 1\n\nstate = my_flow(return_state=True)\nassert state.result() == 2\n

When submitting tasks to a runner, the result can be retrieved with the Future.result() method:

from prefect import flow, task\n\n@task\ndef my_task():\n    return 1\n\n@flow\ndef my_flow():\n    future = my_task.submit()\n    return future.result() + 1\n\nresult = my_flow()\nassert result == 2\n
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#handling-failures","title":"Handling failures","text":"

Sometimes your flows or tasks will encounter an exception. Prefect captures all exceptions in order to report states to the orchestrator, but we do not hide them from you (unless you ask us to) as your program needs to know if an unexpected error has occurred.

When calling flows or tasks, the exceptions are raised as in normal Python:

from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    try:\n        my_task()\n    except ValueError:\n        print(\"Oh no! The task failed.\")\n\n    return True\n\nmy_flow()\n

If you would prefer to check for a failed task without using try/except, you may ask Prefect to return the state:

from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    state = my_task(return_state=True)\n\n    if state.is_failed():\n        print(\"Oh no! The task failed. Falling back to '1'.\")\n        result = 1\n    else:\n        result = state.result()\n\n    return result + 1\n\nresult = my_flow()\nassert result == 2\n

If you retrieve the result from a failed state, the exception will be raised. For this reason, it's often best to check if the state is failed first.

from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    state = my_task(return_state=True)\n\n    try:\n        result = state.result()\n    except ValueError:\n        print(\"Oh no! The state raised the error!\")\n\n    return True\n\nmy_flow()\n

When retrieving the result from a state, you can ask Prefect not to raise exceptions:

from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    state = my_task(return_state=True)\n\n    maybe_result = state.result(raise_on_failure=False)\n    if isinstance(maybe_result, ValueError):\n        print(\"Oh no! The task failed. Falling back to '1'.\")\n        result = 1\n    else:\n        result = maybe_result\n\n    return result + 1\n\nresult = my_flow()\nassert result == 2\n

When submitting tasks to a runner, Future.result() works the same as State.result():

from prefect import flow, task\n\n@task\ndef my_task():\n    raise ValueError()\n\n@flow\ndef my_flow():\n    future = my_task.submit()\n\n    try:\n        future.result()\n    except ValueError:\n        print(\"Ah! Futures will raise the failure as well.\")\n\n    # You can ask it not to raise the exception too\n    maybe_result = future.result(raise_on_failure=False)\n    print(f\"Got {type(maybe_result)}\")\n\n    return True\n\nmy_flow()\n
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#working-with-async-results","title":"Working with async results","text":"

When calling flows or tasks, the result is returned directly:

import asyncio\nfrom prefect import flow, task\n\n@task\nasync def my_task():\n    return 1\n\n@flow\nasync def my_flow():\n    task_result = await my_task()\n    return task_result + 1\n\nresult = asyncio.run(my_flow())\nassert result == 2\n

When working with flow and task states, the result can be retrieved with the State.result() method:

import asyncio\nfrom prefect import flow, task\n\n@task\nasync def my_task():\n    return 1\n\n@flow\nasync def my_flow():\n    state = await my_task(return_state=True)\n    result = await state.result(fetch=True)\n    return result + 1\n\nasync def main():\n    state = await my_flow(return_state=True)\n    assert await state.result(fetch=True) == 2\n\nasyncio.run(main())\n

Resolving results

Prefect 2.6.0 added automatic retrieval of persisted results. Prior to this version, State.result() did not require an await. For backwards compatibility, when used from an asynchronous context, State.result() returns a raw result type.

You may opt-in to the new behavior by passing fetch=True as shown in the example above. If you would like this behavior to be used automatically, you may enable the PREFECT_ASYNC_FETCH_STATE_RESULT setting. If you do not opt-in to this behavior, you will see a warning.

You may also opt-out by setting fetch=False. This will silence the warning, but you will need to retrieve your result manually from the result type.

When submitting tasks to a runner, the result can be retrieved with the Future.result() method:

import asyncio\nfrom prefect import flow, task\n\n@task\nasync def my_task():\n    return 1\n\n@flow\nasync def my_flow():\n    future = await my_task.submit()\n    result = await future.result()\n    return result + 1\n\nresult = asyncio.run(my_flow())\nassert result == 2\n
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#persisting-results","title":"Persisting results","text":"

The Prefect API does not store your results except in special cases. Instead, the result is persisted to a storage location in your infrastructure and Prefect stores a reference to the result.

The following Prefect features require results to be persisted:

If results are not persisted, these features may not be usable.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#configuring-persistence-of-results","title":"Configuring persistence of results","text":"

Persistence of results requires a serializer and a storage location. Prefect sets defaults for these, and you should not need to adjust them until you want to customize behavior. You can configure results on the flow and task decorators with the following options:

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#toggling-persistence","title":"Toggling persistence","text":"

Persistence of the result of a task or flow can be configured with the persist_result option. The persist_result option defaults to a null value, which will automatically enable persistence if it is needed for a Prefect feature used by the flow or task. Otherwise, persistence is disabled by default.

For example, the following flow has retries enabled. Flow retries require that all task results are persisted, so the task's result will be persisted:

from prefect import flow, task\n\n@task\ndef my_task():\n    return \"hello world!\"\n\n@flow(retries=2)\ndef my_flow():\n    # This task does not have persistence toggled off and it is needed for the flow feature,\n    # so Prefect will persist its result at runtime\n    my_task()\n

Flow retries do not require the flow's result to be persisted, so it will not be.

In this next example, one task has caching enabled. Task caching requires that the given task's result is persisted:

from prefect import flow, task\nfrom datetime import timedelta\n\n@task(cache_key_fn=lambda: \"always\", cache_expiration=timedelta(seconds=20))\ndef my_task():\n    # This task uses caching so its result will be persisted by default\n    return \"hello world!\"\n\n\n@task\ndef my_other_task():\n    ...\n\n@flow\ndef my_flow():\n    # This task uses a feature that requires result persistence\n    my_task()\n\n    # This task does not use a feature that requires result persistence and the\n    # flow does not use any features that require task result persistence so its\n    # result will not be persisted by default\n    my_other_task()\n

Persistence of results can be manually toggled on or off:

from prefect import flow, task\n\n@flow(persist_result=True)\ndef my_flow():\n    # This flow will persist its result even if not necessary for a feature.\n    ...\n\n@task(persist_result=False)\ndef my_task():\n    # This task will never persist its result.\n    # If persistence needed for a feature, an error will be raised.\n    ...\n

Toggling persistence manually will always override any behavior that Prefect would infer.

You may also change Prefect's default persistence behavior with the PREFECT_RESULTS_PERSIST_BY_DEFAULT setting. To persist results by default, even if they are not needed for a feature, change the value to a truthy value:

$ prefect config set PREFECT_RESULTS_PERSIST_BY_DEFAULT=true\n

Tasks and flows with persist_result=False will not persist their results even if PREFECT_RESULTS_PERSIST_BY_DEFAULT is true.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-storage-location","title":"Result storage location","text":"

The result storage location can be configured with the result_storage option. The result_storage option defaults to a null value, which infers storage from the context. Generally, this means that tasks will use the result storage configured on the flow unless otherwise specified. If there is no context to load the storage from and results must be persisted, results will be stored in the path specified by the PREFECT_LOCAL_STORAGE_PATH setting (defaults to ~/.prefect/storage).

from prefect import flow, task\nfrom prefect.filesystems import LocalFileSystem, S3\n\n@flow(persist_result=True)\ndef my_flow():\n    my_task()  # This task will use the flow's result storage\n\n@task(persist_result=True)\ndef my_task():\n    ...\n\nmy_flow()  # The flow has no result storage configured and no parent, the local file system will be used.\n\n\n# Reconfigure the flow to use a different storage type\nnew_flow = my_flow.with_options(result_storage=S3(bucket_path=\"my-bucket\"))\n\nnew_flow()  # The flow and task within it will use S3 for result storage.\n

You can configure this to use a specific storage using one of the following:

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-storage-key","title":"Result storage key","text":"

The path of the result file in the result storage can be configured with the result_storage_key. The result_storage_key option defaults to a null value, which generates a unique identifier for each result.

from prefect import flow, task\nfrom prefect.filesystems import LocalFileSystem, S3\n\n@flow(result_storage=S3(bucket_path=\"my-bucket\"))\ndef my_flow():\n    my_task()\n\n@task(persist_result=True, result_storage_key=\"my_task.json\")\ndef my_task():\n    ...\n\nmy_flow()  # The task's result will be persisted to 's3://my-bucket/my_task.json'\n

Result storage keys are formatted with access to all of the modules in prefect.runtime and the run's parameters. In the following example, we will run a flow with three runs of the same task. Each task run will write its result to a unique file based on the name parameter.

from prefect import flow, task\n\n@flow()\ndef my_flow():\n    hello_world()\n    hello_world(name=\"foo\")\n    hello_world(name=\"bar\")\n\n@task(persist_result=True, result_storage_key=\"hello-{parameters[name]}.json\")\ndef hello_world(name: str = \"world\"):\n    return f\"hello {name}\"\n\nmy_flow()\n

After running the flow, we can see three persisted result files in our storage directory:

$ ls ~/.prefect/storage | grep \"hello-\"\nhello-bar.json\nhello-foo.json\nhello-world.json\n

In the next example, we include metadata about the flow run from the prefect.runtime.flow_run module:

from prefect import flow, task\n\n@flow\ndef my_flow():\n    hello_world()\n\n@task(persist_result=True, result_storage_key=\"{flow_run.flow_name}_{flow_run.name}_hello.json\")\ndef hello_world(name: str = \"world\"):\n    return f\"hello {name}\"\n\nmy_flow()\n

After running this flow, we can see a result file templated with the name of the flow and the flow run:

\u276f ls ~/.prefect/storage | grep \"my-flow\"    \nmy-flow_industrious-trout_hello.json\n

If a result exists at a given storage key in the storage location, it will be overwritten.

Result storage keys can only be configured on tasks at this time.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-serializer","title":"Result serializer","text":"

The result serializer can be configured with the result_serializer option. The result_serializer option defaults to a null value, which infers the serializer from the context. Generally, this means that tasks will use the result serializer configured on the flow unless otherwise specified. If there is no context to load the serializer from, the serializer defined by PREFECT_RESULTS_DEFAULT_SERIALIZER will be used. This setting defaults to Prefect's pickle serializer.

You may configure the result serializer using:
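
As a minimal sketch, assuming the string shorthand \"json\" resolves to Prefect's built-in JSON serializer, serializers can be set per flow or per task:

from prefect import flow, task\nfrom prefect.serializers import JSONSerializer\n\n@task(persist_result=True, result_serializer=JSONSerializer())  # explicit serializer instance\ndef my_task():\n    return {\"hello\": \"world\"}\n\n@flow(result_serializer=\"json\")  # string shorthand for the serializer type\ndef my_flow():\n    my_task()\n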

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#compressing-results","title":"Compressing results","text":"

Prefect provides a CompressedSerializer which can be used to wrap other serializers to provide compression over the bytes they generate. The compressed serializer uses lzma compression by default. We test other compression schemes provided in the Python standard library such as bz2 and zlib, but you should be able to use any compression library that provides compress and decompress methods.

You may configure compression of results using:

Note that the \"compressed/<serializer-type>\" shortcut will only work for serializers provided by Prefect. If you are using custom serializers, you must pass a full instance.
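
For example, a sketch using the shorthand string for one task and an explicit CompressedSerializer instance for another (both assume the built-in JSON serializer underneath):

from prefect import task\nfrom prefect.serializers import CompressedSerializer, JSONSerializer\n\n@task(persist_result=True, result_serializer=\"compressed/json\")\ndef task_a():\n    return [1, 2, 3]\n\n@task(\n    persist_result=True,\n    result_serializer=CompressedSerializer(serializer=JSONSerializer()),\n)\ndef task_b():\n    return [4, 5, 6]\n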

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#storage-of-results-in-prefect","title":"Storage of results in Prefect","text":"

The Prefect API does not store your results in most cases for the following reasons:

There are a few cases where Prefect will store your results directly in the database. This is an optimization to reduce the overhead of reading and writing to result storage.

The following data types will be stored by the API without persistence to storage:

If persist_result is set to False, these values will never be stored.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#tracking-results","title":"Tracking results","text":"

The Prefect API tracks metadata about your results. The value of your result is only stored in specific cases. Result metadata can be seen in the UI on the \"Results\" page for flows.

Prefect tracks the following result metadata:

- Data type
- Storage location (if persisted)

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#caching-of-results-in-memory","title":"Caching of results in memory","text":"

When running your workflows, Prefect will keep the results of all tasks and flows in memory so they can be passed downstream. In some cases, it is desirable to override this behavior. For example, if you are returning a large amount of data from a task, it can be costly to keep it in memory for the entire duration of the flow run.

Flows and tasks both include an option to drop the result from memory with cache_result_in_memory:

@flow(cache_result_in_memory=False)\ndef foo():\n    return \"pretend this is large data\"\n\n@task(cache_result_in_memory=False)\ndef bar():\n    return \"pretend this is biiiig data\"\n

When cache_result_in_memory is disabled, the result of your flow or task will be persisted by default. The result will then be pulled from storage when needed.

@flow\ndef foo():\n    result = bar()\n    state = bar(return_state=True)\n\n    # The result will be retrieved from storage here\n    state.result()\n\n    future = bar.submit()\n    # The result will be retrieved from storage here\n    future.result()\n\n@task(cache_result_in_memory=False)\ndef bar():\n    # This result will be persisted\n    return \"pretend this is biiiig data\"\n

If both cache_result_in_memory and persistence are disabled, your results will not be available downstream.

@task(persist_result=False, cache_result_in_memory=False)\ndef bar():\n    return \"pretend this is biiiig data\"\n\n@flow\ndef foo():\n    # Raises an error\n    result = bar()\n\n    # This is okay\n    state = bar(return_state=True)\n\n    # Raises an error\n    state.result()\n\n    # This is okay\n    future = bar.submit()\n\n    # Raises an error\n    future.result()\n
","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-storage-types","title":"Result storage types","text":"

Result storage is responsible for reading and writing serialized data to an external location. At this time, any file system block can be used for result storage.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-serializer-types","title":"Result serializer types","text":"

A result serializer is responsible for converting your Python object to and from bytes. This is necessary to store the object outside of Python and retrieve it later.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#pickle-serializer","title":"Pickle serializer","text":"

Pickle is a standard Python protocol for encoding arbitrary Python objects. We supply a custom pickle serializer at prefect.serializers.PickleSerializer. Prefect's pickle serializer uses the cloudpickle project by default to support more object types. Alternative pickle libraries can be specified:

from prefect.serializers import PickleSerializer\n\nPickleSerializer(picklelib=\"custompickle\")\n

Benefits of the pickle serializer:

Drawbacks of the pickle serializer:

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#json-serializer","title":"JSON serializer","text":"

We supply a custom JSON serializer at prefect.serializers.JSONSerializer. Prefect's JSON serializer uses custom hooks by default to support more object types. Specifically, we add support for all types supported by Pydantic.

By default, we use the standard Python json library. Alternative JSON libraries can be specified:

from prefect.serializers import JSONSerializer\n\nJSONSerializer(jsonlib=\"orjson\")\n

Benefits of the JSON serializer:

Drawbacks of the JSON serializer:

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#result-types","title":"Result types","text":"

Prefect uses internal result types to capture information about the result attached to a state. The following types are used:

All result types include a get() method that can be called to return the value of the result. This is done behind the scenes when the result() method is used on states or futures.
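
As a minimal sketch of that behavior:

from prefect import flow, task\n\n@task\ndef add_one(x):\n    return x + 1\n\n@flow\ndef my_flow():\n    state = add_one(41, return_state=True)\n    # state.result() calls get() on the underlying result object\n    print(state.result())  # 42\n\nmy_flow()\n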

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#unpersisted-results","title":"Unpersisted results","text":"

Unpersisted results are used to represent results that have not been and will not be persisted beyond the current flow run. The value associated with the result is stored in memory, but will not be available later. Result metadata is attached to this object for storage in the API and representation in the UI.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#literal-results","title":"Literal results","text":"

Literal results are used to represent results stored in the Prefect database. The values contained by these results must always be JSON serializable.

Example:

from prefect.results import LiteralResult\n\nresult = LiteralResult(value=None)\nresult.json()\n# {\"type\": \"result\", \"value\": \"null\"}\n

Literal results reduce the overhead required to persist simple results.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#persisted-results","title":"Persisted results","text":"

The persisted result type contains all of the information needed to retrieve the result from storage. This includes:

Persisted result types also contain metadata for inspection without retrieving the result:

The get() method on result references retrieves the data from storage, deserializes it, and returns the original object. The get() operation will cache the resolved object to reduce the overhead of subsequent calls.

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/results/#persisted-result-blob","title":"Persisted result blob","text":"

When results are persisted to storage, they are always written as a JSON document. The schema for this is described by the PersistedResultBlob type. The document contains:

","tags":["flows","subflows","tasks","states","results"],"boost":2},{"location":"concepts/runtime-context/","title":"Runtime context","text":"

Prefect tracks information about the current flow or task run with a run context. The run context can be thought of as a global variable that allows the Prefect engine to determine relationships between your runs, e.g. which flow your task is called from.

The run context itself contains many internal objects used by Prefect to manage execution of your run and is only available in specific situations. For this reason, we expose a simple interface that only includes the items you care about and dynamically retrieves additional information when necessary. We call this the \"runtime context\" as it contains information that can only be accessed when a run is happening.

Mock values via environment variable

Oftentimes, you may want to mock certain values for testing purposes, such as manually setting an ID or a scheduled start time, to ensure your code is functioning properly. Starting in version 2.10.3, you can mock runtime values via environment variables using the schema PREFECT__RUNTIME__{SUBMODULE}__{KEY_NAME}=value:

$ export PREFECT__RUNTIME__TASK_RUN__FAKE_KEY='foo'\n$ python -c 'from prefect.runtime import task_run; print(task_run.fake_key)' # \"foo\"\n
If the environment variable mocks an existing runtime attribute, the value is cast to the same type. This works for runtime attributes of basic types (bool, int, float and str) and pendulum.DateTime. For complex types like list or dict, we suggest mocking them using monkeypatch or a similar tool.
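
For example, here is a minimal sketch of that approach using pytest's monkeypatch fixture (the test name and patched value are illustrative only):

from prefect.runtime import flow_run\n\ndef test_my_flow_logic(monkeypatch):\n    # patch a complex runtime attribute directly on the submodule\n    monkeypatch.setattr(flow_run, \"parameters\", {\"x\": 1})\n    assert flow_run.parameters == {\"x\": 1}\n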

","tags":["flows","subflows","tasks","deployments"],"boost":2},{"location":"concepts/runtime-context/#accessing-runtime-information","title":"Accessing runtime information","text":"

The prefect.runtime module is the home for all runtime context access. Each major runtime concept has its own submodule:

For example:

from prefect import flow, task\nfrom prefect import runtime\n\n@flow(log_prints=True)\ndef my_flow(x):\n    print(\"My name is\", runtime.flow_run.name)\n    print(\"I belong to deployment\", runtime.deployment.name)\n    my_task(2)\n\n@task\ndef my_task(y):\n    print(\"My name is\", runtime.task_run.name)\n    print(\"Flow run parameters:\", runtime.flow_run.parameters)\n\nmy_flow(1)\n

Outputs:

$ python my_runtime_info.py\n10:08:02.948 | INFO    | prefect.engine - Created flow run 'solid-gibbon' for flow 'my-flow'\n10:08:03.555 | INFO    | Flow run 'solid-gibbon' - My name is solid-gibbon\n10:08:03.558 | INFO    | Flow run 'solid-gibbon' - I belong to deployment None\n10:08:03.703 | INFO    | Flow run 'solid-gibbon' - Created task run 'my_task-0' for task 'my_task'\n10:08:03.704 | INFO    | Flow run 'solid-gibbon' - Executing 'my_task-0' immediately...\n10:08:04.006 | INFO    | Task run 'my_task-0' - My name is my_task-0\n10:08:04.007 | INFO    | Task run 'my_task-0' - Flow run parameters: {'x': 1}\n10:08:04.105 | INFO    | Task run 'my_task-0' - Finished in state Completed()\n10:08:04.968 | INFO    | Flow run 'solid-gibbon' - Finished in state Completed('All states completed.')\n

Above, we demonstrate access to information about the current flow run, task run, and deployment. When run without a deployment (via python my_runtime_info.py), you should see \"I belong to deployment None\" logged. When information is not available, the runtime will always return an empty value. Because this flow was run without a deployment, there is no deployment data. If this flow were deployed and executed by an agent or worker, we'd see the name of the deployment instead.

See the runtime API reference for a full list of available attributes.

","tags":["flows","subflows","tasks","deployments"],"boost":2},{"location":"concepts/runtime-context/#accessing-the-run-context-directly","title":"Accessing the run context directly","text":"

The current run context can be accessed with prefect.context.get_run_context(). This will raise an exception if no run context is available, meaning you are not in a flow or task run. If a task run context is available, it will be returned even if a flow run context is available.
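
For example, a minimal sketch of retrieving the context from within a flow:

from prefect import flow\nfrom prefect.context import get_run_context\n\n@flow\ndef my_flow():\n    context = get_run_context()\n    # inside a flow run this is a FlowRunContext with a flow_run attribute\n    print(context.flow_run.id)\n\nmy_flow()\n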

Alternatively, you can access the flow run or task run context explicitly. This will, for example, allow you to access the flow run context from a task run. Note that we do not send the flow run context to distributed task workers because the context is costly to serialize and deserialize.

from prefect.context import FlowRunContext, TaskRunContext\n\nflow_run_ctx = FlowRunContext.get()\ntask_run_ctx = TaskRunContext.get()\n

Unlike get_run_context, this will not raise an error if the context is not available. Instead, it will return None.

","tags":["flows","subflows","tasks","deployments"],"boost":2},{"location":"concepts/schedules/","title":"Schedules","text":"

Schedules tell the Prefect API how to create new flow runs for you automatically on a specified cadence.

You can add a schedule to any flow deployment. The Prefect Scheduler service periodically reviews every deployment and creates new flow runs according to the schedule configured for the deployment.

There are four recommended ways to create a schedule for a deployment:

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules-through-the-ui","title":"Creating schedules through the UI","text":"

You can add, modify, and view schedules by selecting Edit under the three dot menu next to a Deployment in the Deployments tab of the Prefect UI.

To create a schedule from the UI, select Add.

Then select Interval or Cron to create a schedule.

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#schedule-types","title":"Schedule types","text":"

Prefect supports several types of schedules that cover a wide range of use cases and offer a large degree of customization:

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules-through-the-cli-with-the-deployment-build-command","title":"Creating schedules through the CLI with the deployment build command","text":"

When you build a deployment from the CLI you can use cron, interval, or rrule flags to set a schedule.

prefect deployment build demo.py:pipeline -n etl --cron \"0 0 * * *\"\n

This schedule will create flow runs for this deployment every day at midnight.

interval and rrule are the other two command line schedule flags.

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules-through-a-python-deployment-file","title":"Creating schedules through a Python deployment file","text":"

Here's how you create the equivalent schedule in a Python deployment file, with a timezone specified.

from prefect.deployments import Deployment\nfrom prefect.server.schemas.schedules import CronSchedule\n\n# `pipeline` is the flow defined in demo.py from the CLI example above\ncron_demo = Deployment.build_from_flow(\n    pipeline,\n    \"etl\",\n    schedule=(CronSchedule(cron=\"0 0 * * *\", timezone=\"America/Chicago\"))\n)\n

IntervalSchedule and RRuleSchedule are the other two Python class schedule options.

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#creating-schedules-through-a-deployment-yaml-files-schedule-section","title":"Creating schedules through a deployment YAML file's schedule section","text":"

Alternatively, you can edit the schedule section of the deployment YAML file.

schedule:\n  cron: 0 0 * * *\n  timezone: America/Chicago\n

Let's discuss the three schedule types in more detail.

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#cron","title":"Cron","text":"

A schedule may be specified with a cron pattern. Users may also provide a timezone to enforce DST behaviors.

Cron uses croniter to specify datetime iteration with a cron-like format.

Cron properties include:

Property Description cron A valid cron string. (Required) day_or Boolean indicating how croniter handles day and day_of_week entries. Default is True. timezone String name of a time zone. (See the IANA Time Zone Database for valid time zones.)

The day_or property defaults to True, matching cron, which connects those values using OR. If False, the values are connected using AND. This behaves like fcron and enables you to, for example, define a job that executes each 2nd Friday of a month by setting the days of month and the weekday.
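
As a minimal sketch of that second-Friday example using the Python schedule class (the 9am run time is illustrative):

from prefect.server.schemas.schedules import CronSchedule\n\n# day-of-month 8-14 AND weekday 5 (Friday) => the second Friday of each month\nsecond_friday = CronSchedule(cron=\"0 9 8-14 * 5\", day_or=False)\n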

Supported croniter features

While Prefect supports most features of croniter for creating cron-like schedules, we do not currently support \"R\" random or \"H\" hashed keyword expressions or the schedule jittering possible with those expressions.

Daylight saving time considerations

If the timezone is a DST-observing one, then the schedule will adjust itself appropriately.

The cron rules for DST are based on schedule times, not intervals. This means that an hourly cron schedule fires on every new schedule hour, not every elapsed hour. For example, when clocks are set back, this results in a two-hour pause as the schedule will fire the first time 1am is reached and the first time 2am is reached, 120 minutes later.

Longer schedules, such as one that fires at 9am every morning, will adjust for DST automatically.

See the Python CronSchedule class docs for more information.

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#interval","title":"Interval","text":"

An Interval schedule creates new flow runs on a regular interval measured in seconds. Intervals are computed from an optional anchor_date. For example, here's how you can create a schedule for every 10 minutes in the deployment YAML file.

schedule:\n  interval: 600\n  timezone: America/Chicago\n

Interval properties include:

Property Description interval datetime.timedelta indicating the time between flow runs. (Required) anchor_date datetime.datetime indicating the starting or \"anchor\" date to begin the schedule. If no anchor_date is supplied, the current UTC time is used. timezone String name of a time zone, used to enforce localization behaviors like DST boundaries. (See the IANA Time Zone Database for valid time zones.)

Note that the anchor_date does not indicate a \"start time\" for the schedule, but rather a fixed point in time from which to compute intervals. If the anchor date is in the future, then schedule dates are computed by subtracting the interval from it. In the example below, we import the Pendulum Python package for easy datetime manipulation. Pendulum isn\u2019t required, but it\u2019s a useful tool for specifying dates.
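
Here is a minimal sketch of such an interval schedule with a pendulum anchor date (the 10-minute interval and anchor date are illustrative):

from datetime import timedelta\n\nimport pendulum\nfrom prefect.server.schemas.schedules import IntervalSchedule\n\nschedule = IntervalSchedule(\n    interval=timedelta(minutes=10),\n    anchor_date=pendulum.datetime(2023, 1, 1, 0, 0, tz=\"America/Chicago\"),\n)\n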

Daylight saving time considerations

If the schedule's anchor_date or timezone are provided with a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals.

For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that appear to both be scheduled for 1am local time, even though they are an hour apart in UTC time.

For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone.

See the Python IntervalSchedule class docs for more information.

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#rrule","title":"RRule","text":"

An RRule schedule supports iCal recurrence rules (RRules), which provide convenient syntax for creating repetitive schedules. Schedules can repeat on a frequency from yearly down to every minute.

RRule uses the dateutil rrule module to specify iCal recurrence rules.

RRules are appropriate for any kind of calendar-date manipulation, including simple repetition, irregular intervals, exclusions, week day or day-of-month adjustments, and more. RRules can represent complex logic like:

RRule properties include:

Property Description rrule String representation of an RRule schedule. See the rrulestr examples for syntax. timezone String name of a time zone. See the IANA Time Zone Database for valid time zones.

You may find it useful to use an RRule string generator such as the iCalendar.org RRule Tool to help create valid RRules.

For example, the following RRule schedule creates flow runs on Monday, Wednesday, and Friday until July 30, 2024.

schedule:\n  rrule: 'FREQ=WEEKLY;BYDAY=MO,WE,FR;UNTIL=20240730T040000Z'\n
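
The equivalent schedule can also be expressed with the Python schedule class (a minimal sketch):

from prefect.server.schemas.schedules import RRuleSchedule\n\nschedule = RRuleSchedule(\n    rrule=\"FREQ=WEEKLY;BYDAY=MO,WE,FR;UNTIL=20240730T040000Z\"\n)\n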

Max RRule length

Note that the maximum supported character length of an rrulestr is 6,500 characters.

Daylight saving time considerations

Note that as a calendar-oriented standard, RRules are sensitive to the initial timezone provided. A 9am daily schedule with a DST-aware start date will maintain a local 9am time through DST boundaries. A 9am daily schedule with a UTC start date will maintain a 9am UTC time.

See the Python RRuleSchedule class docs for more information.

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/schedules/#the-scheduler-service","title":"The Scheduler service","text":"

The Scheduler service is started automatically when prefect server start is run and it is a built-in service of Prefect Cloud.

By default, the Scheduler service visits deployments on a 60-second loop, though recently-modified deployments will be visited more frequently. The Scheduler evaluates each deployment's schedule and creates new runs appropriately. For typical deployments, it will create the next three runs, though more runs will be scheduled if the next 3 would all start in the next hour.

More specifically, the Scheduler tries to create the smallest number of runs that satisfy the following constraints, in order:

These behaviors can all be adjusted through the relevant settings that can be viewed with the terminal command prefect config view --show-defaults:

PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE='100'\nPREFECT_API_SERVICES_SCHEDULER_ENABLED='True'\nPREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE='500'\nPREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS='60.0'\nPREFECT_API_SERVICES_SCHEDULER_MIN_RUNS='3'\nPREFECT_API_SERVICES_SCHEDULER_MAX_RUNS='100'\nPREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME='1:00:00'\nPREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME='100 days, 0:00:00'\n

See the Settings docs for more information on altering your settings.

These settings mean that if a deployment has an hourly schedule, the default settings will create runs for the next 4 days (or 100 hours). If it has a weekly schedule, the default settings will maintain the next 14 runs (up to 100 days in the future).

The Scheduler does not affect execution

The Prefect Scheduler service only creates new flow runs and places them in Scheduled states. It is not involved in flow or task execution.

If you change a schedule, previously scheduled flow runs that have not started are removed, and new scheduled flow runs are created to reflect the new schedule.

To remove all scheduled runs for a flow deployment, update the deployment YAML with no schedule value. Alternatively, remove the schedule via the UI.

","tags":["flows","flow runs","deployments","schedules","scheduling","cron","RRule","iCal"],"boost":2},{"location":"concepts/settings/","title":"Profiles & Configuration","text":"

Prefect's settings are documented and type-validated. By modifying the default settings, you can customize various aspects of the system.

Settings can be viewed from the CLI or the UI.

You can override the setting for a profile with environment variables.

Prefect profiles are groups of settings that you can persist on your machine. When you change profiles, all of the settings configured in the profile are applied. You can apply profiles to individual commands or set a profile for your environment.

","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#commonly-configured-settings","title":"Commonly configured settings","text":"

This section describes some commonly configured settings for Prefect installations. See Configuring settings for details on setting and unsetting configuration values.

","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#prefect_api_url","title":"PREFECT_API_URL","text":"

The PREFECT_API_URL value specifies the API endpoint of your Prefect Cloud workspace or a Prefect server instance.

For example, using a local Prefect server instance.

PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n

Using Prefect Cloud:

PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n

View your Account ID and Workspace ID in your browser URL when logged into Prefect Cloud. For example: https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here.

PREFECT_API_URL setting for agents

When using workers, agents, and work pools that can create flow runs for deployments in remote environments, PREFECT_API_URL must be set for the environment in which your worker or agent is running.

If you want the worker or agent to communicate with Prefect Cloud or a Prefect server instance from a remote execution environment such as a VM or Docker container, you must configure PREFECT_API_URL in that environment.

Running the Prefect UI behind a reverse proxy

When using a reverse proxy (such as Nginx or Traefik) to proxy traffic to a locally-hosted Prefect UI instance, the Prefect server also needs to be configured to know how to connect to the API. The PREFECT_UI_API_URL should be set to the external proxy URL (e.g. if your external URL is https://prefect-server.example.com/ then set PREFECT_UI_API_URL=https://prefect-server.example.com/api for the Prefect server process). You can also accomplish this by setting PREFECT_API_URL to the API URL, as this setting is used as a fallback if PREFECT_UI_API_URL is not set.
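
For example, a minimal sketch for the Prefect server process, using the example URL above:

prefect config set PREFECT_UI_API_URL=\"https://prefect-server.example.com/api\"\n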

","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#prefect_api_key","title":"PREFECT_API_KEY","text":"

The PREFECT_API_KEY value specifies the API key used to authenticate with your Prefect Cloud workspace.

PREFECT_API_KEY=\"[API-KEY]\"\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#prefect_home","title":"PREFECT_HOME","text":"

The PREFECT_HOME value specifies the local Prefect directory for configuration files, profiles, and the location of the default Prefect SQLite database.

PREFECT_HOME='~/.prefect'\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#prefect_local_storage_path","title":"PREFECT_LOCAL_STORAGE_PATH","text":"

The PREFECT_LOCAL_STORAGE_PATH value specifies the default location of local storage for flow runs.

PREFECT_LOCAL_STORAGE_PATH='${PREFECT_HOME}/storage'\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#database-settings","title":"Database settings","text":"

Prefect provides several self-hosting database configuration settings you can read about here.

","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#logging-settings","title":"Logging settings","text":"

Prefect provides several logging configuration settings that you can read about in the logging docs.

","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#configuring-settings","title":"Configuring settings","text":"

The prefect config CLI commands enable you to view, set, and unset settings.

Command Description set Change the value for a setting. unset Restore the default value for a setting. view Display the current settings.","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#viewing-settings-from-the-cli","title":"Viewing settings from the CLI","text":"

The prefect config view command will display settings that override default values.

$ prefect config view\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='DEBUG'\n

You can show the sources of values with --show-sources:

$ prefect config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='DEBUG' (from env)\n

You can also include default values with --show-defaults:

$ prefect config view --show-defaults\nPREFECT_PROFILE='default'\nPREFECT_AGENT_PREFETCH_SECONDS='10' (from defaults)\nPREFECT_AGENT_QUERY_INTERVAL='5.0' (from defaults)\nPREFECT_API_KEY='None' (from defaults)\nPREFECT_API_REQUEST_TIMEOUT='30.0' (from defaults)\nPREFECT_API_URL='None' (from defaults)\n...\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#setting-and-clearing-values","title":"Setting and clearing values","text":"

The prefect config set command lets you change the value of a default setting.

A commonly used example is setting the PREFECT_API_URL, which you may need to change when interacting with different Prefect server instances or Prefect Cloud.

# use a local Prefect server\nprefect config set PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n\n# use Prefect Cloud\nprefect config set PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]\"\n

If you want to configure a setting to use its default value, use the prefect config unset command.

prefect config unset PREFECT_API_URL\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#overriding-defaults-with-environment-variables","title":"Overriding defaults with environment variables","text":"

All settings have keys that match the environment variable that can be used to override them.

For example, configuring the home directory:

# environment variable\nexport PREFECT_HOME=\"/path/to/home\"\n
# python\nimport prefect.settings\nprefect.settings.PREFECT_HOME.value()  # PosixPath('/path/to/home')\n

Configuring the server's port:

# environment variable\nexport PREFECT_SERVER_API_PORT=4242\n
# python\nprefect.settings.PREFECT_SERVER_API_PORT.value()  # 4242\n

","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#configuration-profiles","title":"Configuration profiles","text":"

Prefect allows you to persist settings instead of setting an environment variable each time you open a new shell. Settings are persisted to profiles, which allow you to move between groups of settings quickly.

The prefect profile CLI commands enable you to create, review, and manage profiles.

Command Description create Create a new profile. delete Delete the given profile. inspect Display settings from a given profile; defaults to active. ls List profile names. rename Change the name of a profile. use Switch the active profile.

If you configured settings for a profile, prefect profile inspect displays those settings:

$ prefect profile inspect\nPREFECT_PROFILE = \"default\"\nPREFECT_API_KEY = \"pnu_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\"\nPREFECT_API_URL = \"http://127.0.0.1:4200/api\"\n

You can pass the name of a profile to view its settings:

$ prefect profile create test\n$ prefect profile inspect test\nPREFECT_PROFILE=\"test\"\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#creating-and-removing-profiles","title":"Creating and removing profiles","text":"

Create a new profile with no settings:

$ prefect profile create test\nCreated profile 'test' at /Users/terry/.prefect/profiles.toml.\n

Create a new profile foo with settings cloned from an existing default profile:

$ prefect profile create foo --from default\nCreated profile 'foo' matching 'default' at /Users/terry/.prefect/profiles.toml.\n

Rename a profile:

$ prefect profile rename temp test\nRenamed profile 'temp' to 'test'.\n

Remove a profile:

$ prefect profile delete test\nRemoved profile 'test'.\n

Removing the default profile resets it:

$ prefect profile delete default\nReset profile 'default'.\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#change-values-in-profiles","title":"Change values in profiles","text":"

Set a value in the current profile:

$ prefect config set VAR=X\nSet variable 'VAR' to 'X'\nUpdated profile 'default'\n

Set multiple values in the current profile:

$ prefect config set VAR2=Y VAR3=Z\nSet variable 'VAR2' to 'Y'\nSet variable 'VAR3' to 'Z'\nUpdated profile 'default'\n

You can set a value in another profile by passing the --profile NAME option to a CLI command:

$ prefect --profile \"foo\" config set VAR=Y\nSet variable 'VAR' to 'Y'\nUpdated profile 'foo'\n

Unset values in the current profile to restore the defaults:

$ prefect config unset VAR2 VAR3\nUnset variable 'VAR2'\nUnset variable 'VAR3'\nUpdated profile 'default'\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#inspecting-profiles","title":"Inspecting profiles","text":"

See a list of available profiles:

$ prefect profile ls\n* default\ncloud\ntest\nlocal\n

View all settings for a profile:

$ prefect profile inspect cloud\nPREFECT_API_URL='https://api.prefect.cloud/api/accounts/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/workspaces/xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'\nPREFECT_API_KEY='xxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#using-profiles","title":"Using profiles","text":"

The profile default is used by default. There are several methods to switch to another profile.

The recommended method is to use the prefect profile use command with the name of the profile:

$ prefect profile use foo\nProfile 'foo' now active.\n

Alternatively, you may set the environment variable PREFECT_PROFILE to the name of the profile:

$ export PREFECT_PROFILE=foo\n

Or, specify the profile in the CLI command for one-time usage:

$ prefect --profile \"foo\" ...\n

Note that this option must come before the subcommand. For example, to list flow runs using the profile foo:

$ prefect --profile \"foo\" flow-run ls\n

You may use the -p flag as well:

$ prefect -p \"foo\" flow-run ls\n

You may also create an 'alias' to automatically use your profile:

$ alias prefect-foo=\"prefect --profile 'foo' \"\n# uses our profile!\n$ prefect-foo config view  
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#conflicts-with-environment-variables","title":"Conflicts with environment variables","text":"

If setting the profile from the CLI with --profile, environment variables that conflict with settings in the profile will be ignored.

In all other cases, environment variables will take precedence over the value in the profile.

For example, a value set in a profile will be used by default:

$ prefect config set PREFECT_LOGGING_LEVEL=\"ERROR\"\n$ prefect config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='ERROR' (from profile)\n

But, setting an environment variable will override the profile setting:

$ export PREFECT_LOGGING_LEVEL=\"DEBUG\"\n$ prefect config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='DEBUG' (from env)\n

Unless the profile is explicitly requested when using the CLI:

$ prefect --profile default config view --show-sources\nPREFECT_PROFILE=\"default\"\nPREFECT_LOGGING_LEVEL='ERROR' (from profile)\n
","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/settings/#profile-files","title":"Profile files","text":"

Profiles are persisted to the PREFECT_PROFILES_PATH, which can be changed with an environment variable.

By default, it is stored in your PREFECT_HOME directory:

$ prefect config view --show-defaults | grep PROFILES_PATH\nPREFECT_PROFILES_PATH='~/.prefect/profiles.toml'\n

The TOML format is used to store profile data.

","tags":["configuration","settings","environment variables","profiles"],"boost":2},{"location":"concepts/states/","title":"States","text":"","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#overview","title":"Overview","text":"

States are rich objects that contain information about the status of a particular task run or flow run. While you don't need to know the details of the states to use Prefect, you can give your workflows superpowers by taking advantage of them.

At any moment, you can learn anything you need to know about a task or flow by examining its current state or the history of its states. For example, a state could tell you:

By manipulating a relatively small number of task states, Prefect flows can harness the complexity that emerges in workflows.

Only runs have states

Though we often refer to the \"state\" of a flow or a task, what we really mean is the state of a flow run or a task run. Flows and tasks are templates that describe what a system does; only when we run the system does it also take on a state. So while we might refer to a task as \"running\" or being \"successful\", we really mean that a specific instance of the task is in that state.

","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#state-types","title":"State Types","text":"

States have names and types. State types are canonical, with specific orchestration rules that apply to transitions into and out of each state type. A state's name is often, but not always, synonymous with its type. For example, a task run that is running for the first time has a state with the name Running and the type RUNNING. However, if the task retries, that same task run will have the name Retrying and the type RUNNING. Each time the task run transitions into the RUNNING state, the same orchestration rules are applied.

There are terminal state types from which there are no orchestrated transitions to any other state type.

The full complement of states and state types includes:

Name Type Terminal? Description Scheduled SCHEDULED No The run will begin at a particular time in the future. Late SCHEDULED No The run's scheduled start time has passed, but it has not transitioned to PENDING (5 seconds by default). AwaitingRetry SCHEDULED No The run did not complete successfully because of a code issue and had remaining retry attempts. Pending PENDING No The run has been submitted to run, but is waiting on necessary preconditions to be satisfied. Running RUNNING No The run code is currently executing. Retrying RUNNING No The run code is currently executing after previously not completing successfully. Paused PAUSED No The run code has stopped executing until it receives manual approval to proceed. Cancelling CANCELLING No The infrastructure on which the code was running is being cleaned up. Cancelled CANCELLED Yes The run did not complete because a user determined that it should not. Completed COMPLETED Yes The run completed successfully. Failed FAILED Yes The run did not complete because of a code issue and had no remaining retry attempts. Crashed CRASHED Yes The run did not complete because of an infrastructure issue.","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#returned-values","title":"Returned values","text":"

When calling a task or a flow, there are three types of returned values:

Returning data\u200a is the default behavior any time you call your_task().

Returning Prefect State occurs anytime you call your task or flow with the argument return_state=True.

Returning PrefectFuture is achieved by calling your_task.submit().

","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#return-data","title":"Return Data","text":"

By default, running a task will return data:

from prefect import flow, task\n\n@task\ndef add_one(x):\n    return x + 1\n\n@flow\ndef my_flow():\n    result = add_one(1) # return int\n

The same rule applies for a subflow:

@flow\ndef subflow():\n    return 42\n\n@flow\ndef my_flow():\n    result = subflow() # return data\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#return-prefect-state","title":"Return Prefect State","text":"

To return a State instead, add return_state=True as a parameter of your task call.

@flow\ndef my_flow():\n    state = add_one(1, return_state=True) # return State\n

To get data from a State, call .result().

@flow\ndef my_flow():\n    state = add_one(1, return_state=True) # return State\n    result = state.result() # return int\n

The same rule applies for a subflow:

@flow\ndef subflow():\n    return 42\n\n@flow\ndef my_flow():\n    state = subflow(return_state=True) # return State\n    result = state.result() # return int\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#return-a-prefectfuture","title":"Return a PrefectFuture","text":"

To get a PrefectFuture, add .submit() to your task call.

@flow\ndef my_flow():\n    future = add_one.submit(1) # return PrefectFuture\n

To get data from a PrefectFuture, call .result().

@flow\ndef my_flow():\n    future = add_one.submit(1) # return PrefectFuture\n    result = future.result() # return data\n

To get a State from a PrefectFuture, call .wait().

@flow\ndef my_flow():\n    future = add_one.submit(1) # return PrefectFuture\n    state = future.wait() # return State\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#final-state-determination","title":"Final state determination","text":"

The final state of a flow is determined by its return value. The following rules apply:

See the Final state determination section of the Flows documentation for further details and examples.

","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#state-change-hooks","title":"State Change Hooks","text":"

State change hooks execute code in response to changes in flow or task run states, enabling you to define actions for specific state transitions in a workflow.

","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#a-simple-example","title":"A simple example","text":"
from prefect import flow\n\ndef my_success_hook(flow, flow_run, state):\n    print(\"Flow run succeeded!\")\n\n@flow(on_completion=[my_success_hook])\ndef my_flow():\n    return 42\n\nmy_flow()\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#create-and-use-hooks","title":"Create and use hooks","text":"","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#available-state-change-hooks","title":"Available state change hooks","text":"Type Flow Task Description on_completion \u2713 \u2713 Executes when a flow or task run enters a Completed state. on_failure \u2713 \u2713 Executes when a flow or task run enters a Failed state. on_cancellation \u2713 - Executes when a flow run enters a Cancelling state. on_crashed \u2713 - Executes when a flow run enters a Crashed state.","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#create-flow-run-state-change-hooks","title":"Create flow run state change hooks","text":"
def my_flow_hook(flow: Flow, flow_run: FlowRun, state: State):\n    \"\"\"This is the required signature for a flow run state\n    change hook. This hook can only be passed into flows.\n    \"\"\"\n\n# pass hook as a list of callables\n@flow(on_completion=[my_flow_hook])\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#create-task-run-state-change-hooks","title":"Create task run state change hooks","text":"
def my_task_hook(task: Task, task_run: TaskRun, state: State):\n    \"\"\"This is the required signature for a task run state change\n    hook. This hook can only be passed into tasks.\n    \"\"\"\n\n# pass hook as a list of callables\n@task(on_failure=[my_task_hook])\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#use-multiple-state-change-hooks","title":"Use multiple state change hooks","text":"

State change hooks are versatile, allowing you to specify multiple state change hooks for the same state transition, or to use the same state change hook for different transitions:

def my_success_hook(task, task_run, state):\n    print(\"Task run succeeded!\")\n\ndef my_failure_hook(task, task_run, state):\n    print(\"Task run failed!\")\n\ndef my_succeed_or_fail_hook(task, task_run, state):\n    print(\"If the task run succeeds or fails, this hook runs.\")\n\n@task(\n    on_completion=[my_success_hook, my_succeed_or_fail_hook],\n    on_failure=[my_failure_hook, my_succeed_or_fail_hook]\n)\n
","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/states/#more-examples-of-state-change-hooks","title":"More examples of state change hooks","text":"","tags":["orchestration","flow runs","task runs","states","status","state change hooks","triggers"],"boost":2},{"location":"concepts/storage/","title":"Storage","text":"

Storage lets you configure how flow code for deployments is persisted and retrieved by Prefect agents. Anytime you build a deployment, a storage block is used to upload the entire directory containing your workflow code (along with supporting files) to its configured location. This helps ensure portability of your relative imports, configuration files, and more. Note that your environment dependencies (for example, external Python packages) still need to be managed separately.

If no storage is explicitly configured, Prefect will use LocalFileSystem storage by default. Local storage works fine for many local flow run scenarios, especially when testing and getting started. However, due to the inherent lack of portability, many use cases are better served by using remote storage such as S3 or Google Cloud Storage.

Prefect supports creating multiple storage configurations and switching between storage as needed.

Storage uses blocks

Blocks are the Prefect technology underlying storage, and they enable you to do much more.

In addition to creating storage blocks via the Prefect CLI, you can now create storage blocks and other kinds of block configuration objects via the Prefect UI and Prefect Cloud.

","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":2},{"location":"concepts/storage/#configuring-storage-for-a-deployment","title":"Configuring storage for a deployment","text":"

When building a deployment for a workflow, you have two options for configuring workflow storage:

","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":2},{"location":"concepts/storage/#using-the-default","title":"Using the default","text":"

Anytime you call prefect deployment build without providing the --storage-block flag, a default LocalFileSystem block will be used. Note that this block will always use your present working directory as its basepath (which is usually desirable). You can see the block's settings by inspecting the deployment.yaml file that Prefect creates after calling prefect deployment build.

While you generally can't run a deployment stored on a local file system on other machines, any agent running on the same machine will be able to successfully run your deployment.

","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":2},{"location":"concepts/storage/#supported-storage-blocks","title":"Supported storage blocks","text":"

Current options for deployment storage blocks include:

Storage Description Required Library Local File System Store code in a run's local file system. Remote File System Store code in any filesystem supported by fsspec. AWS S3 Storage Store code in an AWS S3 bucket. s3fs Azure Storage Store code in Azure Datalake and Azure Blob Storage. adlfs GitHub Storage Store code in a GitHub repository. Google Cloud Storage Store code in a Google Cloud Platform (GCP) Cloud Storage bucket. gcsfs SMB Store code in SMB shared network storage. smbprotocol GitLab Repository Store code in a GitLab repository. prefect-gitlab Bitbucket Repository Store code in a Bitbucket repository. prefect-bitbucket

Accessing files may require storage filesystem libraries

Note that the appropriate filesystem library supporting the storage location must be installed prior to building a deployment with a storage block or accessing the storage location from flow scripts.

For example, the AWS S3 Storage block requires the s3fs library.

See Filesystem package dependencies for more information about configuring filesystem libraries in your execution environment.

","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":2},{"location":"concepts/storage/#configuring-a-block","title":"Configuring a block","text":"

You can create these blocks either via the UI or via Python.

You can create, edit, and manage storage blocks in the Prefect UI and Prefect Cloud. On a Prefect server, blocks are created in the server's database. On Prefect Cloud, blocks are created on a workspace.

To create a new block, select the + button. Prefect displays a library of block types you can configure to create blocks to be used by your flows.

Select Add + to configure a new storage block based on a specific block type. Prefect displays a Create page that enables specifying storage settings.

You can also create blocks using the Prefect Python API:

from prefect.filesystems import S3\n\nblock = S3(bucket_path=\"my-bucket/a-sub-directory\", \n           aws_access_key_id=\"foo\", \n           aws_secret_access_key=\"bar\"\n)\nblock.save(\"example-block\")\n

This block configuration is now available to be used by anyone with appropriate access to your Prefect API. We can use this block to build a deployment by passing its slug to the prefect deployment build command. The storage block slug is formatted as block-type/block-name. In this case, s3/example-block for an AWS S3 Bucket block named example-block. See block identifiers for details.

prefect deployment build ./flows/my_flow.py:my_flow --name \"Example Deployment\" --storage-block s3/example-block\n

This command will upload the contents of your flow's directory to the designated storage location, then the full deployment specification will be persisted to a newly created deployment.yaml file. For more information, see Deployments.

","tags":["storage","databases","database configuration","configuration","settings","AWS S3","Azure Blob Storage","Google Cloud Storage","SMB"],"boost":2},{"location":"concepts/task-runners/","title":"Task runners","text":"

Task runners enable you to engage specific executors for Prefect tasks, such as for concurrent, parallel, or distributed execution of tasks.

Task runners are not required for task execution. If you call a task function directly, the task executes as a regular Python function, without a task runner, and produces whatever result is returned by the function.

","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#task-runner-overview","title":"Task runner overview","text":"

Calling a task function from within a flow, using the default task settings, executes the function sequentially. Execution of the task function blocks execution of the flow until the task completes. This means, by default, calling multiple tasks in a flow causes them to run in order.

However, that's not the only way to run tasks!

You can use the .submit() method on a task function to submit the task to a task runner. Using a task runner enables you to control whether tasks run sequentially, concurrently, or if you want to take advantage of a parallel or distributed execution library such as Dask or Ray.

Using the .submit() method to submit a task also causes the task run to return a PrefectFuture, a Prefect object that contains both any data returned by the task function and a State, a Prefect object indicating the state of the task run.

Prefect currently provides the following built-in task runners:

In addition, the following Prefect-developed task runners for parallel or distributed task execution may be installed as Prefect Integrations.

Concurrency versus parallelism

The words \"concurrency\" and \"parallelism\" may sound the same, but they mean different things in computing.

Concurrency refers to a system that can do more than one thing simultaneously, but not at the exact same time. It may be more accurate to think of concurrent execution as non-blocking: within the restrictions of resources available in the execution environment and data dependencies between tasks, execution of one task does not block execution of other tasks in a flow.

Parallelism refers to a system that can do more than one thing at the exact same time. Again, within the restrictions of resources available, parallel execution can run tasks at the same time, such as for operations mapped across a dataset.

","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-a-task-runner","title":"Using a task runner","text":"

You do not need to specify a task runner for a flow unless your tasks require a specific type of execution.

To configure your flow to use a specific task runner, import a task runner and assign it as an argument for the flow when the flow is defined.

Remember to call .submit() when using a task runner

Make sure you use .submit() to run your task with a task runner. Calling the task directly, without .submit(), from within a flow will run the task sequentially instead of using a specified task runner.

For example, you can use ConcurrentTaskRunner to allow tasks to switch when they would block.

from prefect import flow, task\nfrom prefect.task_runners import ConcurrentTaskRunner\nimport time\n\n@task\ndef stop_at_floor(floor):\n    print(f\"elevator moving to floor {floor}\")\n    time.sleep(floor)\n    print(f\"elevator stops on floor {floor}\")\n\n@flow(task_runner=ConcurrentTaskRunner())\ndef elevator():\n    for floor in range(10, 0, -1):\n        stop_at_floor.submit(floor)\n

If you specify an uninitialized task runner class, a task runner instance of that type is created with the default settings. You can also pass additional configuration parameters for task runners that accept parameters, such as DaskTaskRunner and RayTaskRunner.
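
For example, a minimal sketch of passing configuration to a Dask task runner (the cluster sizing values are illustrative and require the prefect-dask integration):

from prefect import flow\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@flow(task_runner=DaskTaskRunner(cluster_kwargs={\"n_workers\": 4, \"threads_per_worker\": 2}))\ndef my_parallel_flow():\n    ...\n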

Default task runner

If you don't specify a task runner for a flow and you call a task with .submit() within the flow, Prefect uses the default ConcurrentTaskRunner.

","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#running-tasks-sequentially","title":"Running tasks sequentially","text":"

Sometimes, it's useful to force tasks to run sequentially to make it easier to reason about the behavior of your program. Switching to the SequentialTaskRunner will force submitted tasks to run sequentially rather than concurrently.

Synchronous and asynchronous tasks

The SequentialTaskRunner works with both synchronous and asynchronous task functions. Asynchronous tasks are Python functions defined using async def rather than def.

The following example demonstrates using the SequentialTaskRunner to ensure that tasks run sequentially. In the example, the flow glass_tower runs the task stop_at_floor for floors one through 38, in that order.

from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\nimport random\n\n@task\ndef stop_at_floor(floor):\n    situation = random.choice([\"on fire\",\"clear\"])\n    print(f\"elevator stops on {floor} which is {situation}\")\n\n@flow(task_runner=SequentialTaskRunner(),\n      name=\"towering-infernflow\",\n      )\ndef glass_tower():\n    for floor in range(1, 39):\n        stop_at_floor.submit(floor)\n\nglass_tower()\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-multiple-task-runners","title":"Using multiple task runners","text":"

Each flow can only have a single task runner, but sometimes you may want a subset of your tasks to run using a specific task runner. In this case, you can create subflows for tasks that need to use a different task runner.

For example, you can have a flow (in the example below called sequential_flow) that runs its tasks locally using the SequentialTaskRunner. If you have some tasks that can run more efficiently in parallel on a Dask cluster, you could create a subflow (such as dask_subflow) to run those tasks using the DaskTaskRunner.

from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef hello_local():\n    print(\"Hello!\")\n\n@task\ndef hello_dask():\n    print(\"Hello from Dask!\")\n\n@flow(task_runner=SequentialTaskRunner())\ndef sequential_flow():\n    hello_local.submit()\n    dask_subflow()\n    hello_local.submit()\n\n@flow(task_runner=DaskTaskRunner())\ndef dask_subflow():\n    hello_dask.submit()\n\nif __name__ == \"__main__\":\n    sequential_flow()\n

Guarding main

Note that you should guard the main function by using if __name__ == \"__main__\" to avoid issues with parallel processing.

This script outputs the following logs demonstrating the use of the Dask task runner:

20:14:29.785 | INFO    | prefect.engine - Created flow run 'ivory-caiman' for flow 'sequential-flow'\n20:14:29.785 | INFO    | Flow run 'ivory-caiman' - Starting 'SequentialTaskRunner'; submitted tasks will be run sequentially...\n20:14:29.880 | INFO    | Flow run 'ivory-caiman' - Created task run 'hello_local-7633879f-0' for task 'hello_local'\n20:14:29.881 | INFO    | Flow run 'ivory-caiman' - Executing 'hello_local-7633879f-0' immediately...\nHello!\n20:14:29.904 | INFO    | Task run 'hello_local-7633879f-0' - Finished in state Completed()\n20:14:29.952 | INFO    | Flow run 'ivory-caiman' - Created subflow run 'nimble-sparrow' for flow 'dask-subflow'\n20:14:29.953 | INFO    | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n20:14:31.862 | INFO    | prefect.task_runner.dask - The Dask dashboard is available at http://127.0.0.1:8787/status\n20:14:31.901 | INFO    | Flow run 'nimble-sparrow' - Created task run 'hello_dask-2b96d711-0' for task 'hello_dask'\n20:14:32.370 | INFO    | Flow run 'nimble-sparrow' - Submitted task run 'hello_dask-2b96d711-0' for execution.\nHello from Dask!\n20:14:33.358 | INFO    | Flow run 'nimble-sparrow' - Finished in state Completed('All states completed.')\n20:14:33.368 | INFO    | Flow run 'ivory-caiman' - Created task run 'hello_local-7633879f-1' for task 'hello_local'\n20:14:33.368 | INFO    | Flow run 'ivory-caiman' - Executing 'hello_local-7633879f-1' immediately...\nHello!\n20:14:33.386 | INFO    | Task run 'hello_local-7633879f-1' - Finished in state Completed()\n20:14:33.399 | INFO    | Flow run 'ivory-caiman' - Finished in state Completed('All states completed.')\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-results-from-submitted-tasks","title":"Using results from submitted tasks","text":"

When you use .submit() to submit a task to a task runner, the task runner creates a PrefectFuture for access to the state and result of the task.

A PrefectFuture is an object that provides access to a computation happening in a task runner \u2014 even if that computation is happening on a remote system.

In the following example, we save the return value of calling .submit() on the task say_hello to the variable future, and then we print the type of the variable:

from prefect import flow, task\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@flow\ndef hello_world():\n    future = say_hello.submit(\"Marvin\")\n    print(f\"variable 'future' is type {type(future)}\")\n\nhello_world()\n

When you run this code, you'll see that the variable future is a PrefectFuture:

variable 'future' is type <class 'prefect.futures.PrefectFuture'>\n

When you pass a future into a task, Prefect waits for the \"upstream\" task \u2014 the one that the future references \u2014 to reach a final state before starting the downstream task.

This means that the downstream task won't receive the PrefectFuture you passed as an argument. Instead, the downstream task will receive the value that the upstream task returned.

Take a look at how this works in the following example, which prints the type and value that the downstream task receives:

from prefect import flow, task\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@task\ndef print_result(result):\n    print(type(result))\n    print(result)\n\n@flow(name=\"hello-flow\")\ndef hello_world():\n    future = say_hello.submit(\"Marvin\")\n    print_result.submit(future)\n\nhello_world()\n
<class 'str'>\nHello Marvin!\n

Futures have a few useful methods. For example, you can get the return value of the task run with .result():

from prefect import flow, task\n\n@task\ndef my_task():\n    return 42\n\n@flow\ndef my_flow():\n    future = my_task.submit()\n    result = future.result()\n    print(result)\n\nmy_flow()\n

The .result() method will wait for the task to complete before returning the result to the caller. If the task run fails, .result() will raise the task run's exception. You may disable this behavior with the raise_on_failure option:

from prefect import flow, task\n\n@task\ndef my_task():\n    return \"I'm a task!\"\n\n\n@flow\ndef my_flow():\n    future = my_task.submit()\n    result = future.result(raise_on_failure=False)\n    if future.get_state().is_failed():\n        # `result` is an exception! handle accordingly\n        ...\n    else:\n        # `result` is the expected return value of our task\n        ...\n

You can retrieve the current state of the task run associated with the PrefectFuture using .get_state():

@flow\ndef my_flow():\n    future = my_task.submit()\n    state = future.get_state()\n

You can also wait for a task to complete by using the .wait() method:

@flow\ndef my_flow():\n    future = my_task.submit()\n    final_state = future.wait()\n

You can include a timeout in the wait call to perform logic if the task has not finished in a given amount of time:

@flow\ndef my_flow():\n    future = my_task.submit()\n    final_state = future.wait(1)  # Wait one second max\n    if final_state:\n        # Take action if the task is done\n        result = final_state.result()\n    else:\n        ... # Take action if the task is still running\n

You may also use the wait_for=[] parameter when calling a task, specifying upstream task dependencies. This enables you to control task execution order for tasks that do not share data dependencies.

@task\ndef task_a():\n    pass\n\n@task\ndef task_b():\n    pass\n\n@task\ndef task_c():\n    pass\n\n@task\ndef task_d():\n    pass\n\n@flow\ndef my_flow():\n    a = task_a.submit()\n    b = task_b.submit()\n    # Wait for task_a and task_b to complete\n    c = task_c.submit(wait_for=[a, b])\n    # task_d will wait for task_c to complete\n    # Note: If waiting for one task it must still be in a list.\n    d = task_d(wait_for=[c])\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#when-to-use-result-in-flows","title":"When to use .result() in flows","text":"

The simplest pattern for writing a flow is either only using tasks or only using pure Python functions. When you need to mix the two, use .result().

Using only tasks:

from prefect import flow, task\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@task\ndef say_nice_to_meet_you(hello_greeting):\n    return f\"{hello_greeting} Nice to meet you :)\"\n\n@flow\ndef hello_world():\n    hello = say_hello.submit(\"Marvin\")\n    nice_to_meet_you = say_nice_to_meet_you.submit(hello)\n\nhello_world()\n

Using only Python functions:

from prefect import flow, task\n\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\ndef say_nice_to_meet_you(hello_greeting):\n    return f\"{hello_greeting} Nice to meet you :)\"\n\n@flow\ndef hello_world():\n    # because this is just a Python function, calls will not be tracked\n    hello = say_hello(\"Marvin\") \n    nice_to_meet_you = say_nice_to_meet_you(hello)\n\nhello_world()\n

Mixing tasks and Python functions:

from prefect import flow, task\n\ndef say_hello_extra_nicely_to_marvin(hello): # not a task or flow!\n    if hello == \"Hello Marvin!\":\n        return \"HI MARVIN!\"\n    return hello\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@task\ndef say_nice_to_meet_you(hello_greeting):\n    return f\"{hello_greeting} Nice to meet you :)\"\n\n@flow\ndef hello_world():\n    # run a task and get the result\n    hello = say_hello.submit(\"Marvin\").result()\n\n    # not calling a task or flow\n    special_greeting = say_hello_extra_nicely_to_marvin(hello)\n\n    # pass our modified greeting back into a task\n    nice_to_meet_you = say_nice_to_meet_you.submit(special_greeting)\n\n    print(nice_to_meet_you.result())\n\nhello_world()\n

Note that .result() also limits Prefect's ability to track task dependencies. In the \"mixed\" example above, Prefect will not be aware that say_hello is upstream of nice_to_meet_you.

Calling .result() is blocking

When calling .result(), be mindful your flow function will have to wait until the task run is completed before continuing.

from prefect import flow, task\n\n@task\ndef say_hello(name):\n    return f\"Hello {name}!\"\n\n@task\ndef do_important_stuff():\n    print(\"Doing lots of important stuff!\")\n\n@flow\ndef hello_world():\n    # blocks until `say_hello` has finished\n    result = say_hello.submit(\"Marvin\").result() \n    do_important_stuff.submit()\n\nhello_world()\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#running-tasks-on-dask","title":"Running tasks on Dask","text":"

The DaskTaskRunner is a parallel task runner that submits tasks to the dask.distributed scheduler. By default, a temporary Dask cluster is created for the duration of the flow run. If you already have a Dask cluster running, either local or cloud hosted, you can provide the connection URL via the address kwarg.

To configure your flow to use the DaskTaskRunner:

  1. Make sure the prefect-dask collection is installed: pip install prefect-dask.
  2. In your flow code, import DaskTaskRunner from prefect_dask.task_runners.
  3. Assign it as the task runner when the flow is defined using the task_runner=DaskTaskRunner argument.

For example, this flow uses the DaskTaskRunner configured to access an existing Dask cluster at http://my-dask-cluster.

from prefect import flow\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@flow(task_runner=DaskTaskRunner(address=\"http://my-dask-cluster\"))\ndef my_flow():\n    ...\n

DaskTaskRunner accepts the following optional parameters:

Parameter Description address Address of a currently running Dask scheduler. cluster_class The cluster class to use when creating a temporary Dask cluster. It can be either the full class name (for example, \"distributed.LocalCluster\"), or the class itself. cluster_kwargs Additional kwargs to pass to the cluster_class when creating a temporary Dask cluster. adapt_kwargs Additional kwargs to pass to cluster.adapt when creating a temporary Dask cluster. Note that adaptive scaling is only enabled if adapt_kwargs are provided. client_kwargs Additional kwargs to use when creating a dask.distributed.Client.

Multiprocessing safety

Note that, because the DaskTaskRunner uses multiprocessing, calls to flows in scripts must be guarded with if __name__ == \"__main__\": or you will encounter warnings and errors.

If you don't provide the address of a Dask scheduler, Prefect creates a temporary local cluster automatically. The number of workers used is based on the number of cores on your machine. The default provides a mix of processes and threads that should work well for most workloads. If you want to specify this explicitly, you can pass values for n_workers or threads_per_worker to cluster_kwargs.

# Use 4 worker processes, each with 2 threads\nDaskTaskRunner(\n    cluster_kwargs={\"n_workers\": 4, \"threads_per_worker\": 2}\n)\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#using-a-temporary-cluster","title":"Using a temporary cluster","text":"

The DaskTaskRunner is capable of creating a temporary cluster using any of Dask's cluster-manager options. This can be useful when you want each flow run to have its own Dask cluster, allowing for per-flow adaptive scaling.

To configure, you need to provide a cluster_class. This can be either the full import path of the cluster class as a string (for example, "dask_cloudprovider.aws.FargateCluster") or the cluster class itself.

You can also configure cluster_kwargs, which takes a dictionary of keyword arguments to pass to cluster_class when starting the flow run.

For example, to configure a flow to use a temporary dask_cloudprovider.aws.FargateCluster with 4 workers running with an image named my-prefect-image:

DaskTaskRunner(\n    cluster_class=\"dask_cloudprovider.aws.FargateCluster\",\n    cluster_kwargs={\"n_workers\": 4, \"image\": \"my-prefect-image\"},\n)\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#connecting-to-an-existing-cluster","title":"Connecting to an existing cluster","text":"

Multiple Prefect flow runs can all use the same existing Dask cluster. You might manage a single long-running Dask cluster (maybe using the Dask Helm Chart) and configure flows to connect to it during execution. This has a few downsides when compared to using a temporary cluster (as described above): all flow runs share the same cluster resources, and the cluster cannot scale adaptively per flow run.

That said, you may prefer managing a single long-running cluster.

To configure a DaskTaskRunner to connect to an existing cluster, pass in the address of the scheduler to the address argument:

# Connect to an existing cluster running at a specified address\nDaskTaskRunner(address=\"tcp://...\")\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#adaptive-scaling","title":"Adaptive scaling","text":"

One nice feature of the DaskTaskRunner is its ability to scale adaptively to the workload. Instead of specifying n_workers as a fixed number, this lets you specify a minimum and maximum number of workers to use, and the Dask cluster will scale up and down as needed.

To do this, you can pass adapt_kwargs to DaskTaskRunner. The most commonly used fields are minimum and maximum, which set the minimum and maximum number of workers the cluster may scale to.

For example, here we configure a flow to run on a FargateCluster scaling up to at most 10 workers.

DaskTaskRunner(\n    cluster_class=\"dask_cloudprovider.aws.FargateCluster\",\n    adapt_kwargs={\"maximum\": 10}\n)\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#dask-annotations","title":"Dask annotations","text":"

Dask annotations can be used to further control the behavior of tasks.

For example, we can set the priority of tasks in the Dask scheduler:

import dask\nfrom prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef show(x):\n    print(x)\n\n\n@flow(task_runner=DaskTaskRunner())\ndef my_flow():\n    with dask.annotate(priority=-10):\n        future = show.submit(1)  # low priority task\n\n    with dask.annotate(priority=10):\n        future = show.submit(2)  # high priority task\n

Another common use case is resource annotations:

import dask\nfrom prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef show(x):\n    print(x)\n\n# Create a `LocalCluster` with some resource annotations\n# Annotations are abstract in dask and not inferred from your system.\n# Here, we claim that our system has 1 GPU and 1 process available per worker\n@flow(\n    task_runner=DaskTaskRunner(\n        cluster_kwargs={"n_workers": 1, "resources": {"GPU": 1, "process": 1}}\n    )\n)\ndef my_flow():\n    with dask.annotate(resources={'GPU': 1}):\n        future = show.submit(0)  # this task requires 1 GPU resource on a worker\n\n    with dask.annotate(resources={'process': 1}):\n        # These tasks each require 1 process on a worker; because we've\n        # specified that our cluster has 1 process per worker and 1 worker,\n        # these tasks will run sequentially\n        future = show.submit(1)\n        future = show.submit(2)\n        future = show.submit(3)\n\n\nif __name__ == "__main__":\n    my_flow()\n
","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/task-runners/#running-tasks-on-ray","title":"Running tasks on Ray","text":"

The RayTaskRunner \u2014 installed separately as a Prefect Collection \u2014 is a parallel task runner that submits tasks to Ray. By default, a temporary Ray instance is created for the duration of the flow run. If you already have a Ray instance running, you can provide the connection URL via an address argument.

Remote storage and Ray tasks

We recommend configuring remote storage for task execution with the RayTaskRunner. This ensures tasks executing in Ray have access to task result storage, particularly when accessing a Ray instance outside of your execution environment.
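
For example, a minimal sketch of attaching remote result storage to a flow that uses the RayTaskRunner. This assumes an S3 filesystem block named my-result-storage has already been created; the block name and flow are illustrative, not part of the original example.

from prefect import flow, task\nfrom prefect.filesystems import S3\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@task(persist_result=True)\ndef double(x):\n    return x * 2\n\n# `my-result-storage` is a hypothetical, pre-created S3 block;\n# result_storage gives Ray workers a shared location to read and write task results\n@flow(\n    task_runner=RayTaskRunner(address="ray://192.0.2.255:8786"),\n    result_storage=S3.load("my-result-storage"),\n)\ndef my_flow():\n    double.submit(21)\n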

To configure your flow to use the RayTaskRunner:

  1. Make sure the prefect-ray collection is installed: pip install prefect-ray.
  2. In your flow code, import RayTaskRunner from prefect_ray.task_runners.
  3. Assign it as the task runner when the flow is defined using the task_runner=RayTaskRunner argument.

For example, this flow uses the RayTaskRunner configured to access an existing Ray instance at ray://192.0.2.255:8786.

from prefect import flow\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@flow(task_runner=RayTaskRunner(address=\"ray://192.0.2.255:8786\"))\ndef my_flow():\n    ... \n

RayTaskRunner accepts the following optional parameters:

Parameter Description address Address of a currently running Ray instance, starting with the ray:// URI. init_kwargs Additional kwargs to use when calling ray.init.

Note that Ray Client uses the ray:// URI to indicate the address of a Ray instance. If you don't provide the address of a Ray instance, Prefect creates a temporary instance automatically.
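
For example, a small sketch (the flow and task names here are illustrative) of a flow that lets Prefect create a temporary Ray instance and passes init_kwargs through to ray.init:

from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@task\ndef say_hi(name):\n    print(f"Hi {name} from Ray!")\n\n# No address is provided, so a temporary Ray instance is created for the flow run.\n# init_kwargs are forwarded to ray.init; num_cpus is a standard ray.init argument.\n@flow(task_runner=RayTaskRunner(init_kwargs={"num_cpus": 2}))\ndef greetings(names):\n    for name in names:\n        say_hi.submit(name)\n\nif __name__ == "__main__":\n    greetings(["Arthur", "Trillian"])\n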

Ray environment limitations

While we're excited about adding support for parallel task execution via Ray to Prefect, there are some inherent limitations with Ray you should be aware of:

Ray's support for Python 3.11 is experimental.

Ray does not support non-x86/64 architectures, such as ARM/M1 processors, with installation from pip alone, so Ray support will be skipped during installation of Prefect on those platforms. It is possible to manually install the blocking component with conda. See the Ray documentation for instructions.

See the Ray installation documentation for further compatibility information.

","tags":["tasks","task runners","executors","PrefectFuture","submit","concurrent execution","sequential execution","parallel execution","Dask","Ray"],"boost":2},{"location":"concepts/tasks/","title":"Tasks","text":"

A task is a function that represents a discrete unit of work in a Prefect workflow. Tasks are not required \u2014 you may define Prefect workflows that consist only of flows, using regular Python statements and functions. Tasks enable you to encapsulate elements of your workflow logic in observable units that can be reused across flows and subflows.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#tasks-overview","title":"Tasks overview","text":"

Tasks are functions: they can take inputs, perform work, and return an output. A Prefect task can do almost anything a Python function can do.

Tasks are special because they receive metadata about upstream dependencies and the state of those dependencies before they run, even if they don't receive any explicit data inputs from them. This gives you the opportunity to, for example, have a task wait on the completion of another task before executing.

Tasks also take advantage of automatic Prefect logging to capture details about task runs such as runtime, tags, and final state.

You can define your tasks within the same file as your flow definition, or you can define tasks within modules and import them for use in your flow definitions. All tasks must be called from within a flow. Tasks may not be called from other tasks.

Calling a task from a flow

Use the @task decorator to designate a function as a task. Calling the task from within a flow function creates a new task run:

from prefect import flow, task\n\n@task\ndef my_task():\n    print("Hello, I'm a task")\n\n@flow\ndef my_flow():\n    my_task()\n

Tasks are uniquely identified by a task key, which is a hash composed of the task name, the fully-qualified name of the function, and any tags. If the task does not have a name specified, the name is derived from the task function.

How big should a task be?

Prefect encourages \"small tasks\" \u2014 each one should represent a single logical step of your workflow. This allows Prefect to better contain task failures.

To be clear, there's nothing stopping you from putting all of your code in a single task \u2014 Prefect will happily run it! However, if any line of code fails, the entire task will fail and must be retried from the beginning. This can be avoided by splitting the code into multiple dependent tasks.

Calling a task's function from another task

Prefect does not allow triggering task runs from other tasks. If you want to call your task's function directly, you can use task.fn().

from prefect import flow, task\n\n@task\ndef my_first_task(msg):\n    print(f"Hello, {msg}")\n\n@task\ndef my_second_task(msg):\n    my_first_task.fn(msg)\n\n@flow\ndef my_flow():\n    my_second_task("Trillian")\n

Note that in the example above you are only calling the task's function without actually generating a task run. Prefect won't track task execution in your Prefect backend if you call the task function this way. You also won't be able to use features such as retries with this function call.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#task-arguments","title":"Task arguments","text":"

Tasks allow for customization through optional arguments:

Argument Description name An optional name for the task. If not provided, the name will be inferred from the function name. description An optional string description for the task. If not provided, the description will be pulled from the docstring for the decorated function. tags An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a prefect.tags context at task runtime. cache_key_fn An optional callable that, given the task run context and call parameters, generates a string key. If the key matches a previous completed state, that state result will be restored instead of running the task again. cache_expiration An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire. retries An optional number of times to retry on task run failure. retry_delay_seconds An optional number of seconds to wait before retrying the task after failure. This is only applicable if retries is nonzero. log_prints An optional boolean indicating whether to log print statements. persist_result An optional boolean indicating whether to persist the result of the task run to storage.

See all possible parameters in the Python SDK API docs.

For example, you can provide a name value for the task. Here we've used the optional description argument as well.

@task(name=\"hello-task\", \ndescription=\"This task says hello.\")\ndef my_task():\n    print(\"Hello, I'm a task\")\n

You can distinguish runs of this task by providing a task_run_name; this setting accepts a string that can optionally contain templated references to the keyword arguments of your task. The name will be formatted using Python's standard string formatting syntax as can be seen here:

import datetime\nfrom prefect import flow, task\n\n@task(name=\"My Example Task\", \n      description=\"An example task for a tutorial.\",\n      task_run_name=\"hello-{name}-on-{date:%A}\")\ndef my_task(name, date):\n    pass\n\n@flow\ndef my_flow():\n    # creates a run with a name like \"hello-marvin-on-Thursday\"\n    my_task(name=\"marvin\", date=datetime.datetime.utcnow())\n

Additionally, this setting accepts a function that returns a string to be used for the task run name:

import datetime\nfrom prefect import flow, task\n\ndef generate_task_name():\n    date = datetime.datetime.utcnow()\n    return f\"{date:%A}-is-a-lovely-day\"\n\n@task(name=\"My Example Task\",\n      description=\"An example task for a tutorial.\",\n      task_run_name=generate_task_name)\ndef my_task(name):\n    pass\n\n@flow\ndef my_flow():\n    # creates a run with a name like \"Thursday-is-a-lovely-day\"\n    my_task(name=\"marvin\")\n

If you need access to information about the task, use the prefect.runtime module. For example:

from prefect import flow\nfrom prefect.runtime import flow_run, task_run\n\ndef generate_task_name():\n    flow_name = flow_run.flow_name\n    task_name = task_run.task_name\n\n    parameters = task_run.parameters\n    name = parameters[\"name\"]\n    limit = parameters[\"limit\"]\n\n    return f\"{flow_name}-{task_name}-with-{name}-and-{limit}\"\n\n@task(name=\"my-example-task\",\n      description=\"An example task for a tutorial.\",\n      task_run_name=generate_task_name)\ndef my_task(name: str, limit: int = 100):\n    pass\n\n@flow\ndef my_flow(name: str):\n    # creates a run with a name like \"my-flow-my-example-task-with-marvin-and-100\"\n    my_task(name=\"marvin\")\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#tags","title":"Tags","text":"

Tags are optional string labels that enable you to identify and group tasks other than by name or flow. Tags are useful for filtering task runs in the Prefect UI and API, and for setting task run concurrency limits.

Tags may be specified as a keyword argument on the task decorator.

@task(name=\"hello-task\", tags=[\"test\"])\ndef my_task():\n    print(\"Hello, I'm a task\")\n

You can also provide tags as an argument with a tags context manager, specifying tags when the task is called rather than in its definition.

from prefect import flow, task\nfrom prefect import tags\n\n@task\ndef my_task():\n    print("Hello, I'm a task")\n\n@flow\ndef my_flow():\n    with tags("test"):\n        my_task()\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#retries","title":"Retries","text":"

Prefect can automatically retry tasks on failure. In Prefect, a task fails if its Python function raises an exception.

To enable retries, pass retries and retry_delay_seconds parameters to your task. If the task fails, Prefect will retry it up to retries times, waiting retry_delay_seconds seconds between each attempt. If the task still fails after the final retry, Prefect marks the task run as failed.

Retries don't create new task runs

A new task run is not created when a task is retried. A new state is added to the state history of the original task run.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#a-real-world-example-making-an-api-request","title":"A real-world example: making an API request","text":"

Consider the real-world problem of making an API request. In this example, we'll use the httpx library to make an HTTP request.

import httpx\n\nfrom prefect import flow, task\n\n\n@task(retries=2, retry_delay_seconds=5)\ndef get_data_task(\n    url: str = "https://api.brittle-service.com/endpoint"\n) -> dict:\n    response = httpx.get(url)\n\n    # If the response status code is anything but a 2xx, httpx will raise\n    # an exception. This task doesn't handle the exception, so Prefect will\n    # catch the exception and will consider the task run failed.\n    response.raise_for_status()\n\n    return response.json()\n\n\n@flow\ndef get_data_flow():\n    get_data_task()\n

In this task, if the HTTP request to the brittle API receives any status code other than a 2xx (200, 201, etc.), Prefect will retry the task a maximum of two times, waiting five seconds in between retries.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#custom-retry-behavior","title":"Custom retry behavior","text":"

The retry_delay_seconds option accepts a list of delays for more custom retry behavior. The following task will wait for successively increasing intervals of 1, 10, and 100 seconds, respectively, before the next attempt starts:

from prefect import task\n\n@task(retries=3, retry_delay_seconds=[1, 10, 100])\ndef some_task_with_manual_backoff_retries():\n   ...\n

Additionally, you can pass a callable that accepts the number of retries as an argument and returns a list. Prefect includes an exponential_backoff utility that will automatically generate a list of retry delays that correspond to an exponential backoff retry strategy. The following task will wait for 10, 20, then 40 seconds before each retry.

from prefect import task\nfrom prefect.tasks import exponential_backoff\n\n@task(retries=3, retry_delay_seconds=exponential_backoff(backoff_factor=10))\ndef some_task_with_exponential_backoff_retries():\n   ...\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#advanced-topic-adding-jitter","title":"Advanced topic: adding \"jitter\"","text":"

While using exponential backoff, you may also want to add jitter to the delay times. Jitter is a random amount of time added to retry periods that helps prevent \"thundering herd\" scenarios, which is when many tasks all retry at the exact same time, potentially overwhelming systems.

The retry_jitter_factor option can be used to add variance to the base delay. For example, a retry delay of 10 seconds with a retry_jitter_factor of 0.5 will be allowed to delay up to 15 seconds. Large values of retry_jitter_factor provide more protection against \"thundering herds,\" while keeping the average retry delay time constant. For example, the following task adds jitter to its exponential backoff so the retry delays will vary up to a maximum delay time of 20, 40, and 80 seconds respectively.

from prefect import task\nfrom prefect.tasks import exponential_backoff\n\n@task(\n    retries=3,\n    retry_delay_seconds=exponential_backoff(backoff_factor=10),\n    retry_jitter_factor=1,\n)\ndef some_task_with_exponential_backoff_retries():\n   ...\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#configuring-retry-behavior-globally-with-settings","title":"Configuring retry behavior globally with settings","text":"

You can also set retries and retry delays by using the following global settings. These settings will not override the retries or retry_delay_seconds that are set in the flow or task decorator.

prefect config set PREFECT_FLOW_DEFAULT_RETRIES=2\nprefect config set PREFECT_TASK_DEFAULT_RETRIES=2\nprefect config set PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS="[1, 10, 100]"\nprefect config set PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS="[1, 10, 100]"\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#caching","title":"Caching","text":"

Caching refers to the ability of a task run to reflect a finished state without actually running the code that defines the task. This allows you to efficiently reuse results of tasks that may be expensive to run with every flow run, or reuse cached results if the inputs to a task have not changed.

To determine whether a task run should retrieve a cached state, we use \"cache keys\". A cache key is a string value that indicates if one run should be considered identical to another. When a task run with a cache key finishes, we attach that cache key to the state. When each task run starts, Prefect checks for states with a matching cache key. If a state with an identical key is found, Prefect will use the cached state instead of running the task again.

To enable caching, specify a cache_key_fn \u2014 a function that returns a cache key \u2014 on your task. You may optionally provide a cache_expiration timedelta indicating when the cache expires. If you do not specify a cache_expiration, the cache key does not expire.

You can define a task that is cached based on its inputs by using the Prefect task_input_hash. This is a task cache key implementation that hashes all inputs to the task using a JSON or cloudpickle serializer. If the task inputs do not change, the cached results are used rather than running the task until the cache expires.

Note that, if any arguments are not JSON serializable, the pickle serializer is used as a fallback. If cloudpickle fails, task_input_hash returns a null key indicating that a cache key could not be generated for the given inputs.

In this example, until the cache_expiration time ends, as long as the input to hello_task() remains the same when it is called, the cached return value is returned. In this situation the task is not rerun. However, if the input argument value changes, hello_task() runs using the new input.

from datetime import timedelta\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\n\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1))\ndef hello_task(name_input):\n    # Doing some work\n    print("Saying hello")\n    return "hello " + name_input\n\n@flow\ndef hello_flow(name_input):\n    hello_task(name_input)\n

Alternatively, you can provide your own function or other callable that returns a string cache key. A generic cache_key_fn is a function that accepts two positional arguments: the task run context, which stores metadata about the current run, and a dictionary of the parameters passed to the task.

Note that the cache_key_fn is not defined as a @task.

Task cache keys

By default, a task cache key is limited to 2000 characters, specified by the PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH setting.

from prefect import task, flow\n\ndef static_cache_key(context, parameters):\n    # return a constant\n    return "static cache key"\n\n@task(cache_key_fn=static_cache_key)\ndef cached_task():\n    print('running an expensive operation')\n    return 42\n\n@flow\ndef test_caching():\n    cached_task()\n    cached_task()\n    cached_task()\n

In this case, there's no expiration for the cache key, and no logic to change the cache key, so cached_task() only runs once.

>>> test_caching()\nrunning an expensive operation\n>>> test_caching()\n>>> test_caching()\n

When each task run requested to enter a Running state, it provided its cache key computed from the cache_key_fn. The Prefect backend identified that there was a COMPLETED state associated with this key and instructed the run to immediately enter the same COMPLETED state, including the same return values.

A real-world example might include the flow run ID from the context in the cache key so only repeated calls in the same flow run are cached.

from prefect import task\nfrom prefect.tasks import task_input_hash\n\ndef cache_within_flow_run(context, parameters):\n    return f"{context.task_run.flow_run_id}-{task_input_hash(context, parameters)}"\n\n@task(cache_key_fn=cache_within_flow_run)\ndef cached_task():\n    print('running an expensive operation')\n    return 42\n

Task results, retries, and caching

Task results are cached in memory during a flow run and persisted to the location specified by the PREFECT_LOCAL_STORAGE_PATH setting. As a result, task caching between flow runs is currently limited to flow runs with access to that local storage path.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#refreshing-the-cache","title":"Refreshing the cache","text":"

Sometimes, you want a task to update the data associated with its cache key instead of using the cache. This is a cache \"refresh\".

The refresh_cache option can be used to enable this behavior for a specific task:

import random\n\nfrom prefect import task\n\n\ndef static_cache_key(context, parameters):\n    # return a constant\n    return "static cache key"\n\n\n@task(cache_key_fn=static_cache_key, refresh_cache=True)\ndef caching_task():\n    return random.random()\n

When this task runs, it will always update the cache key instead of using the cached value. This is particularly useful when you have a flow that is responsible for updating the cache.

If you want to refresh the cache for all tasks, you can use the PREFECT_TASKS_REFRESH_CACHE setting. Setting PREFECT_TASKS_REFRESH_CACHE=true will change the default behavior of all tasks to refresh. This is particularly useful if you want to rerun a flow without cached results.
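
For example, you can enable this behavior for all tasks via the Prefect CLI:

prefect config set PREFECT_TASKS_REFRESH_CACHE=true\n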

If you have tasks that should not refresh when this setting is enabled, you may explicitly set refresh_cache to False. These tasks will never refresh the cache \u2014 if a cache key exists it will be read, not updated. Note that, if a cache key does not exist yet, these tasks can still write to the cache.

@task(cache_key_fn=static_cache_key, refresh_cache=False)\ndef caching_task():\n    return random.random()\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#timeouts","title":"Timeouts","text":"

Task timeouts are used to prevent unintentional long-running tasks. When the duration of execution for a task exceeds the duration specified in the timeout, a timeout exception will be raised and the task will be marked as failed. In the UI, the task will be visibly designated as TimedOut. From the perspective of the flow, the timed-out task will be treated like any other failed task.

Timeout durations are specified using the timeout_seconds keyword argument.

from prefect import task, get_run_logger\nimport time\n\n@task(timeout_seconds=1)\ndef show_timeouts():\n    logger = get_run_logger()\n    logger.info(\"I will execute\")\n    time.sleep(5)\n    logger.info(\"I will not execute\")\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#task-results","title":"Task results","text":"

Depending on how you call tasks, they can return different types of results and optionally engage the use of a task runner.

Any task can return its function's return value (the default when the task is called directly), a PrefectFuture (when the task is called with .submit()), or a Prefect State.

To run your task with a task runner, you must call the task with .submit().

See state returned values for examples.

Task runners are optional

If you just need the result from a task, you can simply call the task from your flow. For most workflows, the default behavior of calling a task directly and receiving a result is all you'll need.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#wait-for","title":"Wait for","text":"

To create a dependency between two tasks that do not exchange data, but one needs to wait for the other to finish, use the special wait_for keyword argument:

@task\ndef task_1():\n    pass\n\n@task\ndef task_2():\n    pass\n\n@flow\ndef my_flow():\n    x = task_1()\n\n    # task 2 will wait for task_1 to complete\n    y = task_2(wait_for=[x])\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#map","title":"Map","text":"

Prefect provides a .map() implementation that automatically creates a task run for each element of its input data. Mapped tasks represent the computations of many individual child tasks.

The simplest Prefect map takes a task and applies it to each element of its inputs.

from prefect import flow, task\n\n@task\ndef print_nums(nums):\n    for n in nums:\n        print(n)\n\n@task\ndef square_num(num):\n    return num**2\n\n@flow\ndef map_flow(nums):\n    print_nums(nums)\n    squared_nums = square_num.map(nums) \n    print_nums(squared_nums)\n\nmap_flow([1,2,3,5,8,13])\n

Prefect also supports unmapped arguments, allowing you to pass static values that don't get mapped over.

from prefect import flow, task\n\n@task\ndef add_together(x, y):\n    return x + y\n\n@flow\ndef sum_it(numbers, static_value):\n    futures = add_together.map(numbers, static_value)\n    return futures\n\nsum_it([1, 2, 3], 5)\n

If your static argument is an iterable, you'll need to wrap it with unmapped to tell Prefect that it should be treated as a static value.

from prefect import flow, task, unmapped\n\n@task\ndef sum_plus(x, static_iterable):\n    return x + sum(static_iterable)\n\n@flow\ndef sum_it(numbers, static_iterable):\n    futures = sum_plus.map(numbers, static_iterable)\n    return futures\n\nsum_it([4, 5, 6], unmapped([1, 2, 3]))\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#async-tasks","title":"Async tasks","text":"

Prefect also supports asynchronous task and flow definitions by default. All of the standard rules of async apply:

import asyncio\n\nfrom prefect import task, flow\n\n@task\nasync def print_values(values):\n    for value in values:\n        await asyncio.sleep(1) # yield\n        print(value, end=\" \")\n\n@flow\nasync def async_flow():\n    await print_values([1, 2])  # runs immediately\n    coros = [print_values(\"abcd\"), print_values(\"6789\")]\n\n    # asynchronously gather the tasks\n    await asyncio.gather(*coros)\n\nasyncio.run(async_flow())\n

Note that if you are not using asyncio.gather, calling .submit() is required for asynchronous execution on the ConcurrentTaskRunner.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#task-run-concurrency-limits","title":"Task run concurrency limits","text":"

There are situations in which you want to actively prevent too many tasks from running simultaneously. For example, if many tasks across multiple flows are designed to interact with a database that only allows 10 connections, you want to make sure that no more than 10 tasks that connect to this database are running at any given time.

Prefect has built-in functionality for achieving this: task concurrency limits.

Task concurrency limits use task tags. You can specify an optional concurrency limit as the maximum number of concurrent task runs in a Running state for tasks with a given tag. The specified concurrency limit applies to any task to which the tag is applied.

If a task has multiple tags, it will run only if all tags have available concurrency.

Tags without explicit limits are considered to have unlimited concurrency.
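
For example, a minimal sketch of tagging every task that touches the shared database; the database tag name and the limit of 10 below are illustrative, not part of the original text:

from prefect import task\n\n# Every task carrying the "database" tag counts toward the same concurrency limit\n@task(tags=["database"])\ndef query_database():\n    ...\n

The limit itself is set on the tag, for example with prefect concurrency-limit create database 10, as shown in the CLI section below.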

0 concurrency limit aborts task runs

Currently, if the concurrency limit is set to 0 for a tag, any attempt to run a task with that tag will be aborted instead of delayed.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#execution-behavior","title":"Execution behavior","text":"

Task tag limits are checked whenever a task run attempts to enter a Running state.

If there are no concurrency slots available for any one of your task's tags, the transition to a Running state will be delayed and the client is instructed to try entering a Running state again in 30 seconds.

Concurrency limits in subflows

Using concurrency limits on task runs in subflows can cause deadlocks. As a best practice, configure your tags and concurrency limits to avoid setting limits on task runs in subflows.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#configuring-concurrency-limits","title":"Configuring concurrency limits","text":"

Flow run concurrency limits are set at a work pool and/or work queue level

While task run concurrency limits are configured via tags (as shown below), flow run concurrency limits are configured via work pools and/or work queues.

You can set concurrency limits on as few or as many tags as you wish. You can set limits through the Prefect CLI or the Prefect Python client, as described in the following sections.

","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#cli","title":"CLI","text":"

You can create, list, and remove concurrency limits by using Prefect CLI concurrency-limit commands.

$ prefect concurrency-limit [command] [arguments]\n
Command Description create Create a concurrency limit by specifying a tag and limit. delete Delete the concurrency limit set on the specified tag. inspect View details about a concurrency limit set on the specified tag. ls View all defined concurrency limits.

For example, to set a concurrency limit of 10 on the 'small_instance' tag:

$ prefect concurrency-limit create small_instance 10\n

To delete the concurrency limit on the 'small_instance' tag:

$ prefect concurrency-limit delete small_instance\n

To view details about the concurrency limit on the 'small_instance' tag:

$ prefect concurrency-limit inspect small_instance\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/tasks/#python-client","title":"Python client","text":"

To update your tag concurrency limits programmatically, use PrefectClient.orchestration.create_concurrency_limit.

create_concurrency_limit takes two arguments: tag specifies the tag on which to set a limit, and concurrency_limit specifies the maximum number of concurrent task runs allowed for that tag.

For example, to set a concurrency limit of 10 on the 'small_instance' tag:

from prefect import get_client\n\nasync with get_client() as client:\n    # set a concurrency limit of 10 on the 'small_instance' tag\n    limit_id = await client.create_concurrency_limit(\n        tag=\"small_instance\", \n        concurrency_limit=10\n        )\n

To remove all concurrency limits on a tag, use PrefectClient.delete_concurrency_limit_by_tag, passing the tag:

async with get_client() as client:\n    # remove a concurrency limit on the 'small_instance' tag\n    await client.delete_concurrency_limit_by_tag(tag=\"small_instance\")\n

If you wish to query for the currently set limit on a tag, use PrefectClient.read_concurrency_limit_by_tag, passing the tag:

To see all of your limits across all of your tags, use PrefectClient.read_concurrency_limits.

async with get_client() as client:\n    # query the concurrency limit on the 'small_instance' tag\n    limit = await client.read_concurrency_limit_by_tag(tag=\"small_instance\")\n
","tags":["tasks","task runs","functions","retries","caching","cache keys","cache key functions","tags","results","async","asynchronous execution","map","concurrency","concurrency limits","task concurrency"],"boost":2},{"location":"concepts/variables/","title":"Variables","text":"

Variables enable you to store and reuse non-sensitive bits of data, such as configuration information. Variables are named, mutable string values, much like environment variables. Variables are scoped to a Prefect server instance or a single workspace in Prefect Cloud.

Variables can be created or modified at any time, but are intended for values with infrequent writes and frequent reads. Variable values may be cached for quicker retrieval.

While variable values are most commonly loaded during flow runtime, they can be loaded in other contexts, at any time, such that they can be used to pass configuration information to Prefect configuration files, such as deployment steps.

Variables are not Encrypted

Using variables to store sensitive information, such as credentials, is not recommended. Instead, use Secret blocks to store and access sensitive information.

","tags":["variables","blocks"],"boost":2},{"location":"concepts/variables/#managing-variables","title":"Managing variables","text":"

You can create, read, edit and delete variables via the Prefect UI, API, and CLI. Variable names must adhere to traditional variable naming conventions, and variable values are limited in length.

Optionally, you can add tags to the variable.

","tags":["variables","blocks"],"boost":2},{"location":"concepts/variables/#via-the-prefect-ui","title":"Via the Prefect UI","text":"

You can see all the variables in your Prefect server instance or Prefect Cloud workspace on the Variables page of the Prefect UI. Both the name and value of all variables are visible to anyone with access to the server or workspace.

To create a new variable, select the + button next to the header of the Variables page. Enter the name and value of the variable.

","tags":["variables","blocks"],"boost":2},{"location":"concepts/variables/#via-the-rest-api","title":"Via the REST API","text":"

Variables can be created and deleted via the REST API. You can also set and get variables via the API with either the variable name or ID. See the REST reference for more information.

","tags":["variables","blocks"],"boost":2},{"location":"concepts/variables/#via-the-cli","title":"Via the CLI","text":"

You can list, inspect, and delete variables via the command line interface with the prefect variable ls, prefect variable inspect <name>, and prefect variable delete <name> commands, respectively.
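
For example, assuming a variable named the_answer exists (as in the Python example below):

$ prefect variable ls\n$ prefect variable inspect the_answer\n$ prefect variable delete the_answer\n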

","tags":["variables","blocks"],"boost":2},{"location":"concepts/variables/#accessing-variables","title":"Accessing variables","text":"

In addition to the UI and API, variables can be referenced in code and in certain Prefect configuration files.

","tags":["variables","blocks"],"boost":2},{"location":"concepts/variables/#in-python-code","title":"In Python code","text":"

You can access any variable from Python code using the variables.get() method. If you attempt to reference a variable that does not exist, the method will return None unless a default value is provided.

from prefect import variables\n\n# from a synchronous context\nanswer = variables.get('the_answer')\nprint(answer)\n# 42\n\n# from an asynchronous context\nanswer = await variables.get('the_answer')\nprint(answer)\n# 42\n\n# without a default value\nanswer = variables.get('not_the_answer')\nprint(answer)\n# None\n\n# with a default value\nanswer = variables.get('not_the_answer', default='42')\nprint(answer)\n# 42\n
","tags":["variables","blocks"],"boost":2},{"location":"concepts/variables/#in-project-steps","title":"In Project steps","text":"

In .yaml files, variables are denoted by quotes and double curly brackets, like so: \"{{ prefect.variables.my_variable }}\". You can use variables to templatize deployment steps by referencing them in the prefect.yaml file used to create deployments. For example, you could pass a variable in to specify a branch for a git repo in a deployment pull step:

pull:\n- prefect.deployments.steps.git_clone:\n    repository: https://github.com/PrefectHQ/hello-projects.git\n    branch: \"{{ prefect.variables.deployment_branch }}\"\n

The deployment_branch variable will be evaluated at runtime for the deployed flow, allowing changes to be made to variables used in a pull action without updating a deployment directly.

","tags":["variables","blocks"],"boost":2},{"location":"concepts/work-pools/","title":"Work Pools, Workers, and Agents","text":"

Work pools and workers bridge the Prefect orchestration environment with your execution environment. When a deployment creates a flow run, it is submitted to a specific work pool for scheduling. A worker running in the execution environment can poll its respective work pool for new runs to execute, or the work pool can submit flow runs to serverless infrastructure directly, depending on your configuration.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#work-pool-overview","title":"Work pool overview","text":"

Work pools organize work for execution. Work pools have types corresponding to the infrastructure that will execute the flow code, as well as the delivery method of work to that environment. Pull work pools require agents or workers to poll the work pool for flow runs to execute. Push work pools can submit runs directly to serverless infrastructure providers like Cloud Run, Azure Container Instances, and AWS ECS without the need for an agent or worker.

Work pools are like pub/sub topics

It's helpful to think of work pools as a way to coordinate (potentially many) deployments with (potentially many) agents through a known channel: the pool itself. This is similar to how \"topics\" are used to connect producers and consumers in a pub/sub or message-based system. By switching a deployment's work pool, users can quickly change the agent that will execute their runs, making it easy to promote runs through environments or even debug locally.

In addition, users can control aspects of work pool behavior, like how many runs the pool allows to be run concurrently or pausing delivery entirely. These options can be modified at any time, and any workers requesting work for a specific pool will only see matching flow runs.
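
For example, a hedged sketch of adjusting a pool from the command line, assuming the pause and set-concurrency-limit subcommands are available in your version of the Prefect CLI and that my-pool is an existing pool:

$ prefect work-pool pause 'my-pool'\n$ prefect work-pool set-concurrency-limit 'my-pool' 5\n$ prefect work-pool resume 'my-pool'\n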

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#work-pool-configuration","title":"Work pool configuration","text":"

You can configure work pools by using the Prefect UI or the Prefect CLI, as described below.

To manage work pools in the UI, click the Work Pools icon. This displays a list of currently configured work pools.

You can pause a work pool from this page by using the toggle.

Select the + button to create a new work pool. You'll be able to specify the details for work served by this work pool.

To configure a work pool via the Prefect CLI, use the prefect work-pool create command:

prefect work-pool create [OPTIONS] NAME\n

NAME is a required, unique name for the work pool.

Optional configuration parameters you can specify to filter work on the pool include:

Option Description --paused If provided, the work pool will be created in a paused state. --type The type of infrastructure that can execute runs from this work pool. [default: prefect-agent]

For example, to create a work pool called test-pool, you would run this command:

$ prefect work-pool create test-pool\n\nCreated work pool with properties:\n    name - 'test-pool'\nid - a51adf8c-58bb-4949-abe6-1b87af46eabd\n    concurrency limit - None\n\nStart an agent to pick up flows from the work pool:\n    prefect agent start -p 'test-pool'\n\nInspect the work pool:\n    prefect work-pool inspect 'test-pool'\n

On success, the command returns the details of the newly created work pool.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#base-job-template","title":"Base job template","text":"

Each work pool has a base job template that allows the customization of the behavior of the worker executing flow runs from the work pool.

The base job template acts as a contract defining the configuration passed to the worker for each flow run and the options available to deployment creators to customize worker behavior per deployment.

A base job template comprises a job_configuration section and a variables section.

The variables section defines the fields available to be customized per deployment. The variables section follows the OpenAPI specification, which allows work pool creators to place limits on provided values (type, minimum, maximum, etc.).

The job configuration section defines how values provided for fields in the variables section should be translated into the configuration given to a worker when executing a flow run.

The values in the job_configuration can use placeholders to reference values provided in the variables section. Placeholders are declared using double curly braces, e.g., {{ variable_name }}. job_configuration values can also be hard-coded if the value should not be customizable.
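
To make this concrete, here is a minimal, hypothetical sketch of a base job template; the field names under variables are illustrative and not the exact schema of any particular worker type:

{\n  "job_configuration": {\n    "command": "{{ command }}",\n    "env": "{{ env }}",\n    "stream_output": true\n  },\n  "variables": {\n    "type": "object",\n    "properties": {\n      "command": {"type": "string", "default": "python -m my_flow"},\n      "env": {"type": "object", "default": {}}\n    }\n  }\n}\n

Deployments can then override only the fields exposed under variables, while hard-coded values such as stream_output stay fixed for the pool.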

Each worker type is configured with a default base job template, making it easy to start with a work pool. The default base template defines fields that can be edited on a per-deployment basis or for the entire work pool via the Prefect API and UI.

For example, if we create a process work pool named 'above-ground' via the CLI:

$ prefect work-pool create --type process above-ground\n

We see these configuration options available in the Prefect UI:

For a process work pool with the default base job template, we can set environment variables for spawned processes, set the working directory to execute flows, and control whether the flow run output is streamed to workers' standard output. You can also see an example of JSON formatted base job template with the 'Advanced' tab.

You can override each of these attributes on a per-deployment basis. When deploying a flow, you can specify these overrides in the work_pool.job_variables section of a deployment.yaml.

If we wanted to turn off streaming output for a specific deployment, we could add the following to our deployment.yaml:

work_pool:\n  name: above-ground\n  job_variables:\n    stream_output: false\n

Advanced Customization of the Base Job Template

For advanced use cases, users can create work pools with fully customizable job templates. This customization is available when creating or editing a work pool on the 'Advanced' tab within the UI.

Advanced customization is useful anytime the underlying infrastructure supports a high degree of customization. In these scenarios a work pool job template allows you to expose a minimal and easy-to-digest set of options to deployment authors. Additionally, these options are the only customizable aspects for deployment infrastructure, which can be useful for restricting functionality in secure environments. For example, the kubernetes worker type allows users to specify a custom job template that can be used to configure the manifest that workers use to create jobs for flow execution.

For more information and advanced configuration examples, see the Kubernetes Worker documentation.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#viewing-work-pools","title":"Viewing work pools","text":"

At any time, users can see and edit configured work pools in the Prefect UI.

To view work pools with the Prefect CLI, you can:

prefect work-pool ls lists all configured work pools for the server.

$ prefect work-pool ls\nprefect work-pool ls\n                               Work pools\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name       \u2503    Type        \u2503                                   ID \u2503 Concurrency Limit \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 barbeque   \u2502 prefect-agent  \u2502 72c0a101-b3e2-4448-b5f8-a8c5184abd17 \u2502 None              \u2502\n\u2502 k8s-pool   \u2502  prefect-agent \u2502 7b6e3523-d35b-4882-84a7-7a107325bb3f \u2502 None              \u2502\n\u2502 test-pool  \u2502  prefect-agent \u2502 a51adf8c-58bb-4949-abe6-1b87af46eabd \u2502 None              \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n                       (**) denotes a paused pool\n

prefect work-pool inspect provides all configuration metadata for a specific work pool by ID.

$ prefect work-pool inspect 'test-pool'\nWorkpool(\nid='a51adf8c-58bb-4949-abe6-1b87af46eabd',\n    created='2 minutes ago',\n    updated='2 minutes ago',\n    name='test-pool',\n    filter=None,\n)\n

prefect work-pool preview displays scheduled flow runs for a specific work pool by ID for the upcoming hour. The optional --hours flag lets you specify the number of hours to look ahead.

$ prefect work-pool preview 'test-pool' --hours 12 \n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Scheduled Star\u2026 \u2503 Run ID                     \u2503 Name         \u2503 Deployment ID               \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 2022-02-26 06:\u2026 \u2502 741483d4-dc90-4913-b88d-0\u2026 \u2502 messy-petrel \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 05:\u2026 \u2502 14e23a19-a51b-4833-9322-5\u2026 \u2502 unselfish-g\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 04:\u2026 \u2502 deb44d4d-5fa2-4f70-a370-e\u2026 \u2502 solid-ostri\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 03:\u2026 \u2502 07374b5c-121f-4c8d-9105-b\u2026 \u2502 sophisticat\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 02:\u2026 \u2502 545bc975-b694-4ece-9def-8\u2026 \u2502 gorgeous-mo\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 01:\u2026 \u2502 704f2d67-9dfa-4fb8-9784-4\u2026 \u2502 sassy-hedge\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-26 00:\u2026 \u2502 691312f0-d142-4218-b617-a\u2026 \u2502 sincere-moo\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-25 23:\u2026 \u2502 7cb3ff96-606b-4d8c-8a33-4\u2026 \u2502 curious-cat\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-25 22:\u2026 \u2502 3ea559fe-cb34-43b0-8090-1\u2026 \u2502 primitive-f\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2502 2022-02-25 21:\u2026 \u2502 96212e80-426d-4bf4-9c49-e\u2026 \u2502 phenomenal-\u2026 \u2502 156edead-fe6a-4783-a618-21\u2026 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n                                   (**) denotes a late run\n
","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#pausing-and-deleting-work-pools","title":"Pausing and deleting work pools","text":"

A work pool can be paused at any time to stop the delivery of work to agents. Agents will not receive any work when polling a paused pool.

To pause a work pool through the Prefect CLI, use the prefect work-pool pause command:

$ prefect work-pool pause 'test-pool'\nPaused work pool 'test-pool'\n

To resume a work pool through the Prefect CLI, use the prefect work-pool resume command with the work pool name.

To delete a work pool through the Prefect CLI, use the prefect work-pool delete command with the work pool name.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#managing-concurrency","title":"Managing concurrency","text":"

Each work pool can optionally restrict concurrent runs of matching flows.

For example, a work pool with a concurrency limit of 5 will only release new work if fewer than 5 matching runs are currently in a Running or Pending state. If 3 runs are Running or Pending, polling the pool for work will only result in 2 new runs, even if there are many more available, to ensure that the concurrency limit is not exceeded.

When using the prefect work-pool Prefect CLI command to configure a work pool, the following subcommands set concurrency limits:
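For example, assuming the standard work pool subcommands in the Prefect 2 CLI (run prefect work-pool --help to confirm the exact names in your version):

$ prefect work-pool set-concurrency-limit 'test-pool' 5\n$ prefect work-pool clear-concurrency-limit 'test-pool'\n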

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#work-queues","title":"Work queues","text":"

Advanced topic

Work queues do not require manual creation or configuration, because Prefect will automatically create them whenever needed. Managing work queues offers advanced control over how runs are executed.

Each work pool has a \"default\" queue that all work will be sent to by default. Additional queues can be added to a work pool. Work queues enable greater control over work delivery through fine grained priority and concurrency. Each work queue has a priority indicated by a unique positive integer. Lower numbers take greater priority in the allocation of work. Accordingly, new queues can be added without changing the rank of the higher-priority queues (e.g. no matter how many queues you add, the queue with priority 1 will always be the highest priority).

Work queues can also have their own concurrency limits. Note that each queue is also subject to the global work pool concurrency limit, which cannot be exceeded.

Together work queue priority and concurrency enable precise control over work. For example, a pool may have three queues: A \"low\" queue with priority 10 and no concurrency limit, a \"high\" queue with priority 5 and a concurrency limit of 3, and a \"critical\" queue with priority 1 and a concurrency limit of 1. This arrangement would enable a pattern in which there are two levels of priority, \"high\" and \"low\" for regularly scheduled flow runs, with the remaining \"critical\" queue for unplanned, urgent work, such as a backfill.
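A sketch of how such queues might be created with the CLI, assuming the --pool and --limit options of prefect work-queue create (queue priority can then be adjusted when editing each queue in the UI):

$ prefect work-queue create \"low\" --pool \"my-pool\"\n$ prefect work-queue create \"high\" --pool \"my-pool\" --limit 3\n$ prefect work-queue create \"critical\" --pool \"my-pool\" --limit 1\n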

Priority is evaluated to determine the order in which flow runs are submitted for execution. If all flow runs are capable of being executed with no limitation due to concurrency or otherwise, priority is still used to determine order of submission, but there is no impact to execution. If not all flow runs can be executed, usually as a result of concurrency limits, priority is used to determine which queues receive precedence to submit runs for execution.

Priority for flow run submission proceeds from the highest priority to the lowest priority. In the preceding example, all work from the \"critical\" queue (priority 1) will be submitted, before any work is submitted from \"high\" (priority 5). Once all work has been submitted from priority queue \"critical\", work from the \"high\" queue will begin submission.

If new flow runs are received on the "critical" queue while flow runs are still scheduled on the "high" and "low" queues, flow run submission goes back to ensuring all scheduled work is first satisfied from the highest priority queue, until it is empty, in waterfall fashion.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#local-debugging","title":"Local debugging","text":"

As long as your deployment's infrastructure block supports it, you can use work pools to temporarily send runs to an agent running on your local machine for debugging. To do so, run prefect agent start -p my-local-machine and update the deployment's work pool to my-local-machine.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#worker-overview","title":"Worker overview","text":"

Workers are lightweight polling services that retrieve scheduled runs from a work pool and execute them.

Workers are similar to agents, but offer greater control over infrastructure configuration and the ability to route work to specific types of execution environments.

Workers each have a type corresponding to the execution environment to which they will submit flow runs. Workers are only able to join work pools that match their type. As a result, when deployments are assigned to a work pool, you know in which execution environment scheduled flow runs for that deployment will run.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#worker-types","title":"Worker types","text":"

Below is a list of available worker types. Note that most worker types will require installation of an additional package.

Worker Type | Description | Required Package
process | Executes flow runs in subprocesses |
kubernetes | Executes flow runs as Kubernetes jobs | prefect-kubernetes
docker | Executes flow runs within Docker containers | prefect-docker
ecs | Executes flow runs as ECS tasks | prefect-aws
cloud-run | Executes flow runs as Google Cloud Run jobs | prefect-gcp
azure-container-instance | Executes flow runs in ACI containers | prefect-azure

If you don\u2019t see a worker type that meets your needs, consider developing a new worker type!

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#worker-options","title":"Worker options","text":"

Workers poll for work from one or more queues within a work pool. If the worker references a work queue that doesn't exist, it will be created automatically. The worker CLI can infer the worker type from the work pool. Alternatively, you can specify the worker type explicitly. If you supply the worker type to the worker CLI, a work pool will be created automatically if it doesn't exist (using default job settings).

Configuration parameters you can specify when starting a worker include:

Option | Description
--name, -n | The name to give to the started worker. If not provided, a unique name will be generated.
--pool, -p | The work pool the started worker should poll.
--work-queue, -q | One or more work queue names for the worker to pull from. If not provided, the worker will pull from all work queues in the work pool.
--type, -t | The type of worker to start. If not provided, the worker type will be inferred from the work pool.
--prefetch-seconds | The amount of time before a flow run's scheduled start time to begin submission. Default is the value of PREFECT_WORKER_PREFETCH_SECONDS.
--run-once | Only run worker polling once. By default, the worker runs forever.
--limit, -l | The maximum number of flow runs to start simultaneously.
--with-healthcheck | Start a healthcheck server for the worker.
--install-policy | Install policy to use workers from Prefect integration packages.

You must start a worker within an environment that can access or create the infrastructure needed to execute flow runs. The worker will deploy flow runs to the infrastructure corresponding to the worker type. For example, if you start a worker with type kubernetes, the worker will deploy flow runs to a Kubernetes cluster.

Prefect must be installed in execution environments

Prefect must be installed in any environment (virtual environment, Docker container, etc.) where you intend to run the worker or execute a flow run.

PREFECT_API_URL and PREFECT_API_KEY settings for workers

PREFECT_API_URL must be set for the environment in which your worker is running. You must also have a user or service account with the Worker role, which can be configured by setting the PREFECT_API_KEY.
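For example, a sketch of configuring both settings with the Prefect CLI (the account ID, workspace ID, and API key are placeholders):

$ prefect config set PREFECT_API_URL=\"https://api.prefect.cloud/api/accounts/<ACCOUNT-ID>/workspaces/<WORKSPACE-ID>\"\n$ prefect config set PREFECT_API_KEY=\"<API-KEY>\"\n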

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#starting-a-worker","title":"Starting a worker","text":"

Use the prefect worker start CLI command to start a worker. You must pass at least the work pool name. If the work pool does not exist, it will be created if the --type flag is used.

$ prefect worker start -p [work pool name]\n

For example:

prefect worker start -p \"my-pool\"\nDiscovered worker type 'process' for work pool 'my-pool'.\nWorker 'ProcessWorker 65716280-96f8-420b-9300-7e94417f2673' started!\n

In this case, Prefect automatically discovered the worker type from the work pool. To create a work pool and start a worker in one command, use the --type flag:

prefect worker start -p \"my-pool\" --type \"process\"\nWorker 'ProcessWorker d24f3768-62a9-4141-9480-a056b9539a25' started!\n06:57:53.289 | INFO    | prefect.worker.process.processworker d24f3768-62a9-4141-9480-a056b9539a25 - Worker pool 'my-pool' created.\n

In addition, workers can limit the number of flow runs they will start simultaneously with the --limit flag. For example, to limit a worker to five concurrent flow runs:

prefect worker start --pool \"my-pool\" --limit 5\n
","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#configuring-prefetch","title":"Configuring prefetch","text":"

By default, the worker begins submitting flow runs a short time (10 seconds) before they are scheduled to run. This behavior allows time for the infrastructure to be created so that the flow run can start on time.

In some cases, infrastructure will take longer than 10 seconds to start the flow run. The prefetch can be increased using the --prefetch-seconds option or the PREFECT_WORKER_PREFETCH_SECONDS setting.

If this value is more than the amount of time it takes for the infrastructure to start, the flow run will wait until its scheduled start time.
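For example, to give slow-starting infrastructure a 60-second head start when starting a worker (a sketch using the --prefetch-seconds option described above):

$ prefect worker start --pool \"my-pool\" --prefetch-seconds 60\n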

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#polling-for-work","title":"Polling for work","text":"

Workers poll for work every 15 seconds by default. This interval is configurable in your profile settings with the PREFECT_WORKER_QUERY_SECONDS setting.
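For example, to poll every 30 seconds instead (a sketch using prefect config set):

$ prefect config set PREFECT_WORKER_QUERY_SECONDS=30\n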

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#install-policy","title":"Install policy","text":"

The Prefect CLI can install the required package for Prefect-maintained worker types automatically. You can configure this behavior with the --install-policy option. The following are valid install policies:

Install Policy | Description
always | Always install the required package. Will update the required package to the most recent version if already installed.
if-not-present | Install the required package if it is not already installed.
never | Never install the required package.
prompt | Prompt the user to choose whether to install the required package. This is the default install policy. If prefect worker start is run non-interactively, the prompt install policy will behave the same as never.","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#agent-overview","title":"Agent overview","text":"

Agent processes are lightweight polling services that get scheduled work from a work pool and deploy the corresponding flow runs.

Agents poll for work every 15 seconds by default. This interval is configurable in your profile settings with the PREFECT_AGENT_QUERY_INTERVAL setting.

It is possible for multiple agent processes to be started for a single work pool. Each agent process sends a unique ID to the server to help disambiguate it from other agents and let users know how many agents are active.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#agent-options","title":"Agent options","text":"

Agents are configured to pull work from one or more work pool queues. If the agent references a work queue that doesn't exist, it will be created automatically.

Configuration parameters you can specify when starting an agent include:

Option | Description
--api | The API URL for the Prefect server. Default is the value of PREFECT_API_URL.
--hide-welcome | Do not display the startup ASCII art for the agent process.
--limit | Maximum number of flow runs to start simultaneously. [default: None]
--match, -m | Dynamically matches work queue names with the specified prefix for the agent to pull from, for example dev- will match all work queues with a name that starts with dev-. [default: None]
--pool, -p | A work pool name for the agent to pull from. [default: None]
--prefetch-seconds | The amount of time before a flow run's scheduled start time to begin submission. Default is the value of PREFECT_AGENT_PREFETCH_SECONDS.
--run-once | Only run agent polling once. By default, the agent runs forever. [default: no-run-once]
--work-queue, -q | One or more work queue names for the agent to pull from. [default: None]

You must start an agent within an environment that can access or create the infrastructure needed to execute flow runs. Your agent will deploy flow runs to the infrastructure specified by the deployment.

Prefect must be installed in execution environments

Prefect must be installed in any environment in which you intend to run the agent or execute a flow run.

PREFECT_API_URL and PREFECT_API_KEY settings for agents

PREFECT_API_URL must be set for the environment in which your agent is running or specified when starting the agent with the --api flag. You must also have a user or service account with the Worker role, which can be configured by setting the PREFECT_API_KEY.

If you want an agent to communicate with Prefect Cloud or a Prefect server from a remote execution environment such as a VM or Docker container, you must configure PREFECT_API_URL in that environment.
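For example, a sketch of configuring a remote environment such as a Docker container or VM (the server hostname is a placeholder, and the API key is only needed when connecting to Prefect Cloud):

export PREFECT_API_URL=\"http://<SERVER-HOSTNAME>:4200/api\"\nexport PREFECT_API_KEY=\"<API-KEY>\"\n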

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#starting-an-agent","title":"Starting an agent","text":"

Use the prefect agent start CLI command to start an agent. You must pass at least one work pool name or a match string for the agent to poll for work. If the work pool does not exist, it will be created.

$ prefect agent start -p [work pool name]\n

For example:

$ prefect agent start -p \"my-pool\"\nStarting agent with ephemeral API...\n\u00a0 ___ ___ ___ ___ ___ ___ _____ \u00a0 \u00a0 _ \u00a0 ___ ___ _\u00a0 _ _____\n\u00a0| _ \\ _ \\ __| __| __/ __|_ \u00a0 _| \u00a0 /_\\ / __| __| \\| |_ \u00a0 _|\n|\u00a0 _/ \u00a0 / _|| _|| _| (__\u00a0 | |\u00a0 \u00a0 / _ \\ (_ | _|| .` | | |\n|_| |_|_\\___|_| |___\\___| |_| \u00a0 /_/ \\_\\___|___|_|\\_| |_|\n\nAgent started! Looking for work from work pool 'my-pool'...\n

In this case, Prefect automatically created the my-pool work pool, along with its default work queue, because it did not already exist.

By default, the agent polls the API specified by the PREFECT_API_URL environment variable. To configure the agent to poll from a different server location, use the --api flag, specifying the URL of the server.

In addition, agents can match multiple queues in a work pool by providing a --match string instead of specifying all of the queues. The agent will poll every queue with a name that starts with the given string. New queues matching this prefix will be found by the agent without needing to restart it.

For example:

$ prefect agent start --match \"foo-\"\n

This example will poll every work queue that starts with \"foo-\".

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#configuring-prefetch_1","title":"Configuring prefetch","text":"

By default, the agent begins submission of flow runs a short time (10 seconds) before they are scheduled to run. This allows time for the infrastructure to be created, so the flow run can start on time. In some cases, infrastructure will take longer than this to actually start the flow run. In these cases, the prefetch can be increased using the --prefetch-seconds option or the PREFECT_AGENT_PREFETCH_SECONDS setting. Submission can begin an arbitrary amount of time before the flow run is scheduled to start. If this value is larger than the amount of time it takes for the infrastructure to start, the flow run will wait until its scheduled start time. This allows flow runs to start exactly on time.

","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"concepts/work-pools/#additional-resources","title":"Additional resources","text":"","tags":["work pools","workers","agents","orchestration","flow runs","deployments","schedules","concurrency limits","priority"],"boost":2},{"location":"contributing/common-mistakes/","title":"Troubleshooting","text":"

This section provides tips for troubleshooting and resolving common development mistakes and error situations.

","tags":["contributing","development","troubleshooting","errors"],"boost":2},{"location":"contributing/common-mistakes/#api-tests-return-an-unexpected-307-redirected","title":"API tests return an unexpected 307 Redirected","text":"

Summary: Requests require a trailing / in the request URL.

If you write a test that does not include a trailing / when making a request to a specific endpoint:

async def test_example(client):\n    response = await client.post(\"/my_route\")\n    assert response.status_code == 201\n

You'll see a failure like:

E       assert 307 == 201\nE        +  where 307 = <Response [307 Temporary Redirect]>.status_code\n

To resolve this, include the trailing /:

async def test_example(client):\n    response = await client.post(\"/my_route/\")\n    assert response.status_code == 201\n

Note: requests to nested URLs may exhibit the opposite behavior and require no trailing slash:

async def test_nested_example(client):\n    response = await client.post(\"/my_route/filter/\")\n    assert response.status_code == 307\n\n    response = await client.post(\"/my_route/filter\")\n    assert response.status_code == 200\n

Reference: \"HTTPX disabled redirect following by default\" in 0.22.0.

","tags":["contributing","development","troubleshooting","errors"],"boost":2},{"location":"contributing/common-mistakes/#pytestpytestunraisableexceptionwarning-or-resourcewarning","title":"pytest.PytestUnraisableExceptionWarning or ResourceWarning","text":"

As you're working with one of the FlowRunner implementations, you may get a surprising error like:

E               pytest.PytestUnraisableExceptionWarning: Exception ignored in: <ssl.SSLSocket fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0>\nE\nE               Traceback (most recent call last):\nE                 File \".../pytest_asyncio/plugin.py\", line 306, in setup\nE                   res = await func(**_add_kwargs(func, kwargs, event_loop, request))\nE               ResourceWarning: unclosed <ssl.SSLSocket fd=10, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 60605), raddr=('127.0.0.1', 6443)>\n\n.../_pytest/unraisableexception.py:78: PytestUnraisableExceptionWarning\n

This is saying that your test suite (or the prefect library code) opened a connection to something (like a Docker daemon or a Kubernetes cluster) and didn't close it.

It may help to re-run the specific test with PYTHONTRACEMALLOC=25 pytest ... so that Python can display more of the stack trace where the connection was opened.

","tags":["contributing","development","troubleshooting","errors"],"boost":2},{"location":"contributing/overview/","title":"Contributing","text":"

Thanks for considering contributing to Prefect!

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#setting-up-a-development-environment","title":"Setting up a development environment","text":"

First, you'll need to download the source code and install an editable version of the Python package:

# Clone the repository\ngit clone https://github.com/PrefectHQ/prefect.git\ncd prefect\n\n# We recommend using a virtual environment\npython -m venv .venv\nsource .venv/bin/activate\n\n# Install the package with development dependencies\npip install -e \".[dev]\"\n\n# Setup pre-commit hooks for required formatting\npre-commit install\n

If you don't want to install the pre-commit hooks, you can manually install the formatting dependencies with:

pip install $(./scripts/precommit-versions.py)\n

You'll need to run black and ruff before a contribution can be accepted.

After installation, you can run the test suite with pytest:

# Run all the tests\npytest tests\n\n# Run a subset of tests\npytest tests/test_flows.py\n

Building the Prefect UI

If you intend to run a local Prefect server during development, you must first build the UI. See UI development for instructions.

Windows support is under development

Support for Prefect on Windows is a work in progress.

Right now, we're focused on your ability to develop and run flows and tasks on Windows, along with running the API server, orchestration engine, and UI.

Currently, we cannot guarantee that the tooling for developing Prefect itself in a Windows environment is fully functional.

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#prefect-code-of-conduct","title":"Prefect Code of Conduct","text":"","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#our-pledge","title":"Our Pledge","text":"

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#our-standards","title":"Our Standards","text":"

Examples of behavior that contributes to creating a positive environment include:

Examples of unacceptable behavior by participants include:

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#our-responsibilities","title":"Our Responsibilities","text":"

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#scope","title":"Scope","text":"

This Code of Conduct applies within all project spaces, and it also applies when an individual is representing the project or its community in public spaces. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#enforcement","title":"Enforcement","text":"

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting Chris White at chris@prefect.io. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#attribution","title":"Attribution","text":"

This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#developer-tooling","title":"Developer tooling","text":"

The Prefect CLI provides several helpful commands to aid development.

Start all services with hot-reloading on code changes (requires UI dependencies to be installed):

prefect dev start\n

Start a Prefect API that reloads on code changes:

prefect dev api\n

Start a Prefect agent that reloads on code changes:

prefect dev agent\n
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#ui-development","title":"UI development","text":"

Developing the Prefect UI requires that npm is installed.

Start a development UI that reloads on code changes:

prefect dev ui\n

Build the static UI (the UI served by prefect server start):

prefect dev build-ui\n
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#docs-development","title":"Docs Development","text":"

Prefect uses mkdocs for the docs website and the mkdocs-material theme. While we use mkdocs-material-insiders for production, builds can still happen without the extra plugins. Deploy previews are available on pull requests, so you'll be able to browse the final look of your changes before merging.

To build the docs:

mkdocs build\n

To serve the docs locally at http://127.0.0.1:8000/:

mkdocs serve\n

For additional mkdocs help and options:

mkdocs --help\n

We use the mkdocs-material theme. To add additional JavaScript or CSS to the docs, please see the theme documentation here.

Internal developers can install the production theme by running:

pip install -e git+https://github.com/PrefectHQ/mkdocs-material-insiders.git#egg=mkdocs-material\nmkdocs build # or mkdocs build --config-file mkdocs.insiders.yml if needed\n
","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#kubernetes-development","title":"Kubernetes development","text":"

Generate a manifest to deploy a development API to a local kubernetes cluster:

prefect dev kubernetes-manifest\n

To access the Prefect UI running in a Kubernetes cluster, use the kubectl port-forward command to forward a port on your local machine to an open port within the cluster. For example:

kubectl port-forward deployment/prefect-dev 4200:4200\n

This forwards port 4200 on the local loopback interface (localhost) to the Prefect server deployment.

To tell the local prefect command how to communicate with the Prefect API running in Kubernetes, set the PREFECT_API_URL environment variable:

export PREFECT_API_URL=http://localhost:4200/api\n

Since you previously configured port forwarding for the localhost port to the Kubernetes environment, you\u2019ll be able to interact with the Prefect API running in Kubernetes when using local Prefect CLI commands.

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/overview/#adding-database-migrations","title":"Adding Database Migrations","text":"

To make changes to a table, first update the SQLAlchemy model in src/prefect/server/database/orm_models.py. For example, if you wanted to add a new column to the flow_run table, you would add a new column to the FlowRun model:

# src/prefect/server/database/orm_models.py\n\n@declarative_mixin\nclass ORMFlowRun(ORMRun):\n\"\"\"SQLAlchemy model of a flow run.\"\"\"\n    ...\n    new_column = Column(String, nullable=True) # <-- add this line\n

Next, you will need to generate new migration files. You must generate a new migration file for each database type. Migrations will be generated for whatever database type PREFECT_API_DATABASE_CONNECTION_URL is set to. See here for how to set the database connection URL for each database type.
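For example, a sketch of pointing migration generation at a local PostgreSQL database (the connection details are placeholders):

prefect config set PREFECT_API_DATABASE_CONNECTION_URL=\"postgresql+asyncpg://postgres:<PASSWORD>@localhost:5432/prefect\"\n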

To generate a new migration file, run the following command:

prefect server database revision --autogenerate -m \"<migration name>\"\n

Try to make your migration name brief but descriptive. For example:

The --autogenerate flag will automatically generate a migration file based on the changes to the models.

Always inspect the output of --autogenerate

--autogenerate will generate a migration file based on the changes to the models. However, it is not perfect. Be sure to check the file to make sure it only includes the changes you want to make. Additionally, you may need to remove extra statements that were included and not related to your change.

The new migration can be found in the src/prefect/server/database/migrations/versions/ directory. Each database type has its own subdirectory. For example, the SQLite migrations are stored in src/prefect/server/database/migrations/versions/sqlite/.

After you have inspected the migration file, you can apply the migration to your database by running the following command:

prefect server database upgrade -y\n

Once you have successfully created and applied migrations for all database types, make sure to update MIGRATION-NOTES.md to document your additions.

","tags":["open source","contributing","development","standards","migrations"],"boost":2},{"location":"contributing/style/","title":"Code style and practices","text":"

Generally, we follow the Google Python Style Guide. This document covers sections where we differ or where additional clarification is necessary.

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#imports","title":"Imports","text":"

A brief collection of rules and guidelines for how imports should be handled in this repository.

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#imports-in-__init__-files","title":"Imports in __init__ files","text":"

Leave __init__ files empty unless exposing an interface. If you must expose objects to present a simpler API, please follow these rules.

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#exposing-objects-from-submodules","title":"Exposing objects from submodules","text":"

If importing objects from submodules, the __init__ file should use a relative import. This is required for type checkers to understand the exposed interface.

# Correct\nfrom .flows import flow\n
# Wrong\nfrom prefect.flows import flow\n
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#exposing-submodules","title":"Exposing submodules","text":"

Generally, submodules should not be imported in the __init__ file. Submodules should only be exposed when the module is designed to be imported and used as a namespaced object.

For example, we do this for our schema and model modules because it is important to know if you are working with an API schema or database model, both of which may have similar names.

import prefect.server.schemas as schemas\n\n# The full module is accessible now\nschemas.core.FlowRun\n

If exposing a submodule, use a relative import as you would when exposing an object.

# Correct\nfrom . import flows\n
# Wrong\nimport prefect.flows\n
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#importing-to-run-side-effects","title":"Importing to run side-effects","text":"

Another use case for importing submodules is to perform global side-effects that occur when they are imported.

Often, global side-effects on import are a dangerous pattern. Avoid them if feasible.

We currently have a couple of acceptable use cases for this:

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#imports-in-modules","title":"Imports in modules","text":"","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#importing-other-modules","title":"Importing other modules","text":"

The from syntax should be reserved for importing objects from modules. Modules should not be imported using the from syntax.

# Correct\nimport prefect.server.schemas  # use with the full name\nimport prefect.server.schemas as schemas  # use the shorter name\n
# Wrong\nfrom prefect.server import schemas\n

Unless in an __init__.py file, relative imports should not be used.

# Correct\nfrom prefect.utilities.foo import bar\n
# Wrong\nfrom .utilities.foo import bar\n

Imports that depend on file location should never be used without an explicit indication that they are relative. This avoids confusion about the source of a module.

# Correct\nfrom . import test\n
# Wrong\nimport test\n
","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#resolving-circular-dependencies","title":"Resolving circular dependencies","text":"

Sometimes, we must defer an import and perform it within a function to avoid a circular dependency.

## This function in `settings.py` requires a method from the global `context` but the context\n## uses settings\ndef from_context():\n    from prefect.context import get_profile_context\n\n    ...\n

Attempt to avoid circular dependencies. This often reveals overentanglement in the design.

When performing deferred imports, they should all be placed at the top of the function.

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#with-type-annotations","title":"With type annotations","text":"

If you are just using the imported object for a type signature, you should use the TYPE_CHECKING flag.

# Correct\nfrom typing import TYPE_CHECKING\n\nif TYPE_CHECKING:\n    from prefect.server.schemas.states import State\n\ndef foo(state: \"State\"):\n    pass\n

Note that usage of the type within the module will need quotes e.g. \"State\" since it is not available at runtime.

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#importing-optional-requirements","title":"Importing optional requirements","text":"

We do not have a best practice for this yet. See the kubernetes, docker, and distributed implementations for now.

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#delaying-expensive-imports","title":"Delaying expensive imports","text":"

Sometimes, imports are slow. We'd like to keep the prefect module import times fast. In these cases, we can lazily import the slow module by deferring import to the relevant function body. For modules that are consumed by many functions, the pattern used for optional requirements may be used instead.
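A minimal sketch of the pattern, assuming a hypothetical module with a slow import:

def render_report(data):\n    # Defer the slow import to the function body so that `import prefect`\n    # stays fast; the cost is only paid when this function is called.\n    import some_slow_plotting_library  # hypothetical expensive dependency\n\n    return some_slow_plotting_library.render(data)\n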

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#command-line-interface-cli-output-messages","title":"Command line interface (CLI) output messages","text":"

Upon executing a command that creates an object, the output message should offer:
- A short description of what the command just did.
- A bullet point list, rehashing user inputs, if possible.
- Next steps, like the next command to run, if applicable.
- Other relevant, pre-formatted commands that can be copied and pasted, if applicable.
- A new line before the first line and after the last line.

Output Example:

$ prefect work-queue create testing\n\nCreated work queue with properties:\n    name - 'abcde'\n    uuid - 940f9828-c820-4148-9526-ea8107082bda\n    tags - None\n    deployment_ids - None\n\nStart an agent to pick up flows from the created work queue:\n    prefect agent start -q 'abcde'\n\nInspect the created work queue:\n    prefect work-queue inspect 'abcde'\n

Additionally:

Placeholder Example:

Create a work queue with tags:\n    prefect work-queue create '<WORK QUEUE NAME>' -t '<OPTIONAL TAG 1>' -t '<OPTIONAL TAG 2>'\n

Dedent Example:

from textwrap import dedent\n...\noutput_msg = dedent(\n    f\"\"\"\n    Created work queue with properties:\n        name - {name!r}\n        uuid - {result}\n        tags - {tags or None}\n        deployment_ids - {deployment_ids or None}\n\n    Start an agent to pick up flows from the created work queue:\n        prefect agent start -q {name!r}\n\n    Inspect the created work queue:\n        prefect work-queue inspect {name!r}\n    \"\"\"\n)\n

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/style/#api-versioning","title":"API Versioning","text":"

The Prefect client can be run separately from the Prefect orchestration server and communicate entirely via an API. Among other things, the Prefect client includes anything that runs task or flow code (e.g. agents and the Python client) or any consumer of Prefect metadata (e.g. the Prefect UI and CLI). The Prefect server stores this metadata and serves it via the REST API.

Sometimes, we make breaking changes to the API (for good reasons). In order to check that a Prefect client is compatible with the API it's making requests to, every API call the client makes includes a three-component API_VERSION header with major, minor, and patch versions.

For example, a request with the X-PREFECT-API-VERSION=3.2.1 header has a major version of 3, minor version 2, and patch version 1.

This version header can be changed by modifying the API_VERSION constant in prefect.server.api.server.

When making a breaking change to the API, we should consider if the change might be backwards compatible for clients, meaning that the previous version of the client would still be able to make calls against the updated version of the server code. This might happen if the changes are purely additive, such as adding a non-critical API route. In these cases, we should make sure to bump the patch version.

In almost all other cases, we should bump the minor version, which denotes a non-backwards-compatible API change. We have reserved major version changes to denote backwards-compatible changes that are significant in some way, such as a major release milestone.

","tags":["standards","code style","coding practices","contributing"],"boost":2},{"location":"contributing/versioning/","title":"Versioning","text":"","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#understanding-version-numbers","title":"Understanding version numbers","text":"

Versions are composed of three parts: MAJOR.MINOR.PATCH. For example, the version 2.5.0 has a major version of 2, a minor version of 5, and a patch version of 0.

Occasionally, we will add a suffix to the version such as rc, a, or b. These indicate pre-release versions that users can opt into installing to test functionality before it is ready for release.

Each release will increase one of the version numbers. If we increase a number other than the patch version, the versions to the right of it will be reset to zero.

","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#prefects-versioning-scheme","title":"Prefect's versioning scheme","text":"

Prefect will increase the major version when significant and widespread changes are made to the core product. It is very unlikely that the major version will change without extensive warning.

Prefect will increase the minor version when:

Prefect will increase the patch version when:

","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#breaking-changes-and-deprecation","title":"Breaking changes and deprecation","text":"

A breaking change means that your code will need to change to use a new version of Prefect. We strive to avoid breaking changes in all releases.

At times, Prefect will deprecate a feature. This means that a feature has been marked for removal in the future. When you use it, you may see warnings that it will be removed. A feature is deprecated when it will no longer be maintained. Frequently, a deprecated feature will have a new and improved alternative. Deprecated features will be retained for at least 3 minor version increases or 6 months, whichever is longer. We may retain deprecated features longer than this time period.

Prefect will sometimes include changes to behavior to fix a bug. These changes are not categorized as breaking changes.

","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#client-compatibility-with-prefect","title":"Client compatibility with Prefect","text":"

When running a Prefect server, you are in charge of ensuring the version is compatible with those of the clients that are using the server. Prefect aims to maintain backwards compatibility with old clients for each server release. In contrast, sometimes new clients cannot be used with an old server. The new client may expect the server to support functionality that it does not yet include. For this reason, we recommend that all clients are the same version as the server or older.

For example, a client on 2.1.0 can be used with a server on 2.5.0. A client on 2.5.0 cannot be used with a server on 2.1.0.

","tags":["versioning","semver"],"boost":2},{"location":"contributing/versioning/#client-compatibility-with-cloud","title":"Client compatibility with Cloud","text":"

Prefect Cloud targets compatibility with all versions of Prefect clients. If you encounter a compatibility issue, please file a bug report.

","tags":["versioning","semver"],"boost":2},{"location":"getting-started/installation/","title":"Installation","text":"

Prefect requires Python 3.8 or newer.

We recommend installing Prefect using a Python virtual environment manager such as pipenv, conda, or virtualenv/venv.

Windows and Linux requirements

See Windows installation notes and Linux installation notes for details on additional installation requirements and considerations.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#install-prefect","title":"Install Prefect","text":"

The following sections describe how to install Prefect in your development or execution environment.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#installing-the-latest-version","title":"Installing the latest version","text":"

Prefect is published as a Python package. To install the latest release or upgrade an existing Prefect install, run the following command in your terminal:

pip install -U prefect\n

To install a specific version, specify the version number like this:

pip install -U \"prefect==2.10.4\"\n

See available release versions in the Prefect Release Notes.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#installing-the-bleeding-edge","title":"Installing the bleeding edge","text":"

If you'd like to test with the most up-to-date code, you can install directly off the main branch on GitHub:

pip install -U git+https://github.com/PrefectHQ/prefect\n

The main branch may not be stable

Please be aware that this method installs unreleased code and may not be stable.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#installing-for-development","title":"Installing for development","text":"

If you'd like to install a version of Prefect for development:

  1. Clone the Prefect repository.
  2. Install an editable version of the Python package with pip install -e.
  3. Install pre-commit hooks.
$ git clone https://github.com/PrefectHQ/prefect.git\n$ cd prefect\n$ pip install -e \".[dev]\"\n$ pre-commit install\n

See our Contributing guide for more details about standards and practices for contributing to Prefect.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#checking-your-installation","title":"Checking your installation","text":"

To confirm that Prefect was installed correctly, run the command prefect version to print the version and environment details to your console.

$ prefect version\n\nVersion:             2.10.21\nAPI version:         0.8.4\nPython version:      3.10.12\nGit commit:          da816542\nBuilt:               Thu, Jul 13, 2023 2:05 PM\nOS/Arch:             darwin/arm64\nProfile:              local\nServer type:         ephemeral\nServer:\n  Database:          sqlite\n  SQLite version:    3.42.0\n
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#windows-installation-notes","title":"Windows installation notes","text":"

You can install and run Prefect via Windows PowerShell, the Windows Command Prompt, or conda. After installation, you may need to manually add the Python local packages Scripts folder to your Path environment variable.

The Scripts folder path looks something like this (the username and Python version may be different on your system):

C:\\Users\\MyUserNameHere\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\Scripts\n

Watch the pip install output messages for the Scripts folder path on your system.

If you're using Windows Subsystem for Linux (WSL), see Linux installation notes.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#linux-installation-notes","title":"Linux installation notes","text":"

Linux is a popular operating system for running Prefect. You can use Prefect Cloud as your API server, or host your own Prefect server backed by PostgreSQL.

For development, you can use SQLite 3.24 or newer as your database. Note that certain Linux versions of SQLite can be problematic; compatible versions ship with Ubuntu 22.04 LTS and Ubuntu 20.04 LTS.

Alternatively, you can install SQLite on Red Hat Enterprise Linux (RHEL) or use the conda virtual environment manager and configure a compatible SQLite version.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#using-a-self-signed-ssl-certificate","title":"Using a self-signed SSL certificate","text":"

If you're using a self-signed SSL certificate, you need to configure your environment to trust the certificate. You can add the certificate to your system bundle and point your tools to that bundle by setting the SSL_CERT_FILE environment variable.

If the certificate is not part of your system bundle, you can set the PREFECT_API_TLS_INSECURE_SKIP_VERIFY setting to True to disable certificate verification altogether.

Note: Disabling certificate validation is insecure and only suggested as an option for testing!
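For example, a sketch of both approaches (the certificate path is a placeholder):

export SSL_CERT_FILE=/path/to/certificate-bundle.pem\n# Testing only: disables certificate verification entirely\nprefect config set PREFECT_API_TLS_INSECURE_SKIP_VERIFY=True\n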

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#proxies","title":"Proxies","text":"

Prefect supports communicating via proxies through environment variables. Simply set HTTPS_PROXY and SSL_CERT_FILE in your environment, and the underlying network libraries will route Prefect\u2019s requests appropriately. Read more about using Prefect Cloud with proxies here.
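For example (a sketch; the proxy address and certificate path are placeholders):

export HTTPS_PROXY=http://proxy.example.com:3128\nexport SSL_CERT_FILE=/path/to/ca-bundle.pem\n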

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#external-requirements","title":"External requirements","text":"","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#sqlite","title":"SQLite","text":"

You can use Prefect Cloud as your API server, or host your own Prefect server backed by PostgreSQL.

By default, a local Prefect server instance uses SQLite as the backing database. SQLite is not packaged with the Prefect installation. Most systems will already have SQLite installed, because it is typically bundled as a part of Python. Prefect requires SQLite version 3.24.0 or later.

You can check your SQLite version by executing the following command in a terminal:

$ sqlite3 --version\n

Or use the Prefect CLI command prefect version, which prints version and environment details to your console, including the server database and version. For example:

$ prefect version\nVersion:             2.10.21\nAPI version:         0.8.4\nPython version:      3.10.12\nGit commit:          a46cbebb\nBuilt:               Sat, Jul 15, 2023 7:59 AM\nOS/Arch:             darwin/arm64\nProfile:              default\nServer type:         cloud\n
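
You can also check the SQLite version bundled with your Python interpreter, which is typically the version a local Prefect server will use. A minimal sketch:

import sqlite3\n\n# Version of the SQLite library linked into this Python interpreter.\n# A local Prefect server requires 3.24.0 or newer.\nprint(sqlite3.sqlite_version)\n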
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#install-sqlite-on-rhel","title":"Install SQLite on RHEL","text":"

The following steps are needed to install an appropriate version of SQLite on Red Hat Enterprise Linux (RHEL). Note that some RHEL instances have no C compiler, so you may need to check for and install gcc first:

yum install gcc\n

Download and extract the tarball for SQLite.

wget https://www.sqlite.org/2022/sqlite-autoconf-3390200.tar.gz\ntar -xzf sqlite-autoconf-3390200.tar.gz\n

Move to the extracted SQLite directory, then build and install SQLite.

cd sqlite-autoconf-3390200/\n./configure\nmake\nmake install\n

Add LD_LIBRARY_PATH to your profile.

echo 'export LD_LIBRARY_PATH=\"/usr/local/lib\"' >> /etc/profile\n

Restart your shell to register these changes.

Now you can install Prefect using pip.

pip3 install prefect\n
","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#using-prefect-in-an-environment-with-http-proxies","title":"Using Prefect in an environment with HTTP proxies","text":"

If you are using Prefect Cloud or hosting your own Prefect server instance, the Prefect library will connect to the API via any proxies you have listed in the HTTP_PROXY, HTTPS_PROXY, or ALL_PROXY environment variables. You may also use the NO_PROXY environment variable to specify which hosts should not be sent through the proxy.
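
As a minimal sketch (the proxy addresses below are hypothetical), you could set these in Python before using the Prefect client, although they are more commonly exported in your shell or container environment:

import os\n\n# Hypothetical proxy addresses -- replace with the proxies used in your environment.\nos.environ[\"HTTPS_PROXY\"] = \"http://proxy.internal:3128\"\nos.environ[\"HTTP_PROXY\"] = \"http://proxy.internal:3128\"\n\n# Hosts that should bypass the proxy.\nos.environ[\"NO_PROXY\"] = \"localhost,127.0.0.1\"\n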

For more information about these environment variables, see the cURL documentation.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"getting-started/installation/#next-steps","title":"Next steps","text":"

Now that you have Prefect installed and your environment configured, you may want to check out the Tutorial to get more familiar with Prefect.

","tags":["installation","pip install","development","Linux","Windows","SQLite","upgrading"],"boost":2},{"location":"guides/","title":"Guides","text":"

Need help?

Get your questions answered with a Prefect product advocate by Booking A Rubber Duck!

","tags":["guides","how to"],"boost":2},{"location":"guides/dask-ray-task-runners/","title":"Dask and Ray task runners","text":"

Task runners provide an execution environment for tasks. In a flow decorator, you can specify a task runner to run the tasks called in that flow.

The default task runner is the ConcurrentTaskRunner.

Use .submit to run your tasks asynchronously

To run tasks asynchronously use the .submit method when you call them. If you call a task as you would normally in Python code, it will run synchronously, even if you are calling the task within a flow that uses the ConcurrentTaskRunner, DaskTaskRunner, or RayTaskRunner.
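
For example, in this small sketch (the task name is arbitrary), only the second call goes through the flow's task runner:

from prefect import flow, task\n\n@task\ndef add(x, y):\n    return x + y\n\n@flow\ndef my_flow():\n    a = add(1, 2)         # runs synchronously and returns 3 directly\n    b = add.submit(1, 2)  # submitted to the task runner, returns a PrefectFuture\n    return a, b.result()\n\nif __name__ == \"__main__\":\n    my_flow()\n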

Many real-world data workflows benefit from true parallel, distributed task execution. For these use cases, the following Prefect-developed task runners for parallel task execution may be installed as Prefect Integrations.

These task runners can spin up a local Dask cluster or Ray instance on the fly, or let you connect with a Dask or Ray environment you've set up separately. Then you can take advantage of massively parallel computing environments.

Use Dask or Ray in your flows to choose the execution environment that fits your particular needs.

To show you how they work, let's start small.

Remote storage

We recommend configuring remote file storage for task execution with DaskTaskRunner or RayTaskRunner. This ensures tasks executing in Dask or Ray have access to task result storage, particularly when accessing a Dask or Ray instance outside of your execution environment.
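
One way to do this, sketched below with a hypothetical block name and assuming the relevant filesystem requirements (such as s3fs) are installed, is to attach a pre-configured storage block as the flow's result storage so results land somewhere every Dask or Ray worker can reach:

from prefect import flow, task\nfrom prefect.filesystems import S3\n\n# Hypothetical block name -- create and save an S3 block first (for example in the UI).\nremote_storage = S3.load(\"my-result-storage\")\n\n@task(persist_result=True)\ndef compute():\n    return 42\n\n@flow(result_storage=remote_storage)\ndef my_flow():\n    return compute.submit().result()\n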

","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/dask-ray-task-runners/#configuring-a-task-runner","title":"Configuring a task runner","text":"

You may have seen this briefly in a previous tutorial, but let's look a bit more closely at how you can configure a specific task runner for a flow.

Let's start with the SequentialTaskRunner. This task runner runs all tasks synchronously and may be useful when used as a debugging tool in conjunction with async code.

Let's start with this simple flow. We import the SequentialTaskRunner, specify a task_runner on the flow, and call the tasks with .submit().

from prefect import flow, task\nfrom prefect.task_runners import SequentialTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n@flow(task_runner=SequentialTaskRunner())\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n\ngreetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n

Save this as sequential_flow.py and run it in a terminal. You'll see output similar to the following:

$ python sequential_flow.py\n16:51:17.967 | INFO    | prefect.engine - Created flow run 'humongous-mink' for flow 'greetings'\n16:51:17.967 | INFO    | Flow run 'humongous-mink' - Starting 'SequentialTaskRunner'; submitted tasks will be run sequentially...\n16:51:18.038 | INFO    | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-0' for task 'say_hello'\n16:51:18.038 | INFO    | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-0' immediately...\nhello arthur\n16:51:18.060 | INFO    | Task run 'say_hello-811087cd-0' - Finished in state Completed()\n16:51:18.107 | INFO    | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-0' for task 'say_goodbye'\n16:51:18.107 | INFO    | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-0' immediately...\ngoodbye arthur\n16:51:18.123 | INFO    | Task run 'say_goodbye-261e56a8-0' - Finished in state Completed()\n16:51:18.134 | INFO    | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-1' for task 'say_hello'\n16:51:18.134 | INFO    | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-1' immediately...\nhello trillian\n16:51:18.150 | INFO    | Task run 'say_hello-811087cd-1' - Finished in state Completed()\n16:51:18.159 | INFO    | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-1' for task 'say_goodbye'\n16:51:18.159 | INFO    | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-1' immediately...\ngoodbye trillian\n16:51:18.181 | INFO    | Task run 'say_goodbye-261e56a8-1' - Finished in state Completed()\n16:51:18.190 | INFO    | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-2' for task 'say_hello'\n16:51:18.190 | INFO    | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-2' immediately...\nhello ford\n16:51:18.210 | INFO    | Task run 'say_hello-811087cd-2' - Finished in state Completed()\n16:51:18.219 | INFO    | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-2' for task 'say_goodbye'\n16:51:18.219 | INFO    | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-2' immediately...\ngoodbye ford\n16:51:18.237 | INFO    | Task run 'say_goodbye-261e56a8-2' - Finished in state Completed()\n16:51:18.246 | INFO    | Flow run 'humongous-mink' - Created task run 'say_hello-811087cd-3' for task 'say_hello'\n16:51:18.246 | INFO    | Flow run 'humongous-mink' - Executing 'say_hello-811087cd-3' immediately...\nhello marvin\n16:51:18.264 | INFO    | Task run 'say_hello-811087cd-3' - Finished in state Completed()\n16:51:18.273 | INFO    | Flow run 'humongous-mink' - Created task run 'say_goodbye-261e56a8-3' for task 'say_goodbye'\n16:51:18.273 | INFO    | Flow run 'humongous-mink' - Executing 'say_goodbye-261e56a8-3' immediately...\ngoodbye marvin\n16:51:18.290 | INFO    | Task run 'say_goodbye-261e56a8-3' - Finished in state Completed()\n16:51:18.321 | INFO    | Flow run 'humongous-mink' - Finished in state Completed('All states completed.')\n

If we take out the log messages and just look at the printed output of the tasks, you see they're executed in sequential order:

$ python sequential_flow.py\nhello arthur\ngoodbye arthur\nhello trillian\ngoodbye trillian\nhello ford\ngoodbye ford\nhello marvin\ngoodbye marvin\n
","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/dask-ray-task-runners/#running-parallel-tasks-with-dask","title":"Running parallel tasks with Dask","text":"

You could argue that this simple flow gains nothing from parallel execution, but let's roll with it so you can see just how simple it is to take advantage of the DaskTaskRunner.

To configure your flow to use the DaskTaskRunner:

  1. Make sure the prefect-dask collection is installed by running pip install prefect-dask.
  2. In your flow code, import DaskTaskRunner from prefect_dask.task_runners.
  3. Assign it as the task runner when the flow is defined using the task_runner=DaskTaskRunner argument.
  4. Use the .submit method when calling functions.

This is the same flow as above, with a few minor changes to use DaskTaskRunner where we previously configured SequentialTaskRunner. Install prefect-dask, make these changes, then save the updated code as dask_flow.py.

from prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n@flow(task_runner=DaskTaskRunner())\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n

Note that, because you're using DaskTaskRunner in a script, you must use if __name__ == \"__main__\": or you'll see warnings and errors.

Now run dask_flow.py. If you get a warning about accepting incoming network connections, that's okay - everything is local in this example.

$ python dask_flow.py\n16:54:18.465 | INFO    | prefect.engine - Created flow run 'radical-finch' for flow 'greetings'\n16:54:18.465 | INFO    | Flow run 'radical-finch' - Starting 'DaskTaskRunner'; submitted tasks will be run concurrently...\n16:54:18.465 | INFO    | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n16:54:19.811 | INFO    | prefect.task_runner.dask - The Dask dashboard is available at http://127.0.0.1:8787/status\n16:54:19.881 | INFO    | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-0' for task 'say_hello'\n16:54:20.364 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-0' for execution.\n16:54:20.379 | INFO    | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-0' for task 'say_goodbye'\n16:54:20.386 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-0' for execution.\n16:54:20.397 | INFO    | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-1' for task 'say_hello'\n16:54:20.401 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-1' for execution.\n16:54:20.417 | INFO    | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-1' for task 'say_goodbye'\n16:54:20.423 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-1' for execution.\n16:54:20.443 | INFO    | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-2' for task 'say_hello'\n16:54:20.449 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-2' for execution.\n16:54:20.462 | INFO    | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-2' for task 'say_goodbye'\n16:54:20.474 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-2' for execution.\n16:54:20.500 | INFO    | Flow run 'radical-finch' - Created task run 'say_hello-811087cd-3' for task 'say_hello'\n16:54:20.511 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_hello-811087cd-3' for execution.\n16:54:20.544 | INFO    | Flow run 'radical-finch' - Created task run 'say_goodbye-261e56a8-3' for task 'say_goodbye'\n16:54:20.555 | INFO    | Flow run 'radical-finch' - Submitted task run 'say_goodbye-261e56a8-3' for execution.\nhello arthur\ngoodbye ford\ngoodbye arthur\nhello ford\ngoodbye marvin\ngoodbye trillian\nhello trillian\nhello marvin\n

DaskTaskRunner automatically creates a local Dask cluster, then starts executing all of the tasks in parallel. The results do not return in the same order as the sequential code above.
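
If you already have a Dask cluster running, you can point the task runner at it rather than creating one on the fly. A sketch, with a hypothetical scheduler address:

from prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n# Hypothetical address of an existing Dask scheduler.\n@flow(task_runner=DaskTaskRunner(address=\"tcp://dask-scheduler:8786\"))\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\"])\n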

Notice what happens if you do not use the submit method when calling tasks:

from prefect import flow, task\nfrom prefect_dask.task_runners import DaskTaskRunner\n\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n\n@flow(task_runner=DaskTaskRunner())\ndef greetings(names):\n    for name in names:\n        say_hello(name)\n        say_goodbye(name)\n\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n
$ python dask_flow.py\n\n16:57:34.534 | INFO    | prefect.engine - Created flow run 'papaya-honeybee' for flow 'greetings'\n16:57:34.534 | INFO    | Flow run 'papaya-honeybee' - Starting 'DaskTaskRunner'; submitted tasks will be run concurrently...\n16:57:34.535 | INFO    | prefect.task_runner.dask - Creating a new Dask cluster with `distributed.deploy.local.LocalCluster`\n16:57:35.715 | INFO    | prefect.task_runner.dask - The Dask dashboard is available at http://127.0.0.1:8787/status\n16:57:35.787 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-0' for task 'say_hello'\n16:57:35.788 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-0' immediately...\nhello arthur\n16:57:35.810 | INFO    | Task run 'say_hello-811087cd-0' - Finished in state Completed()\n16:57:35.820 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-0' for task 'say_goodbye'\n16:57:35.820 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-0' immediately...\ngoodbye arthur\n16:57:35.840 | INFO    | Task run 'say_goodbye-261e56a8-0' - Finished in state Completed()\n16:57:35.849 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-1' for task 'say_hello'\n16:57:35.849 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-1' immediately...\nhello trillian\n16:57:35.869 | INFO    | Task run 'say_hello-811087cd-1' - Finished in state Completed()\n16:57:35.878 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-1' for task 'say_goodbye'\n16:57:35.878 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-1' immediately...\ngoodbye trillian\n16:57:35.894 | INFO    | Task run 'say_goodbye-261e56a8-1' - Finished in state Completed()\n16:57:35.907 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-2' for task 'say_hello'\n16:57:35.907 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-2' immediately...\nhello ford\n16:57:35.924 | INFO    | Task run 'say_hello-811087cd-2' - Finished in state Completed()\n16:57:35.933 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-2' for task 'say_goodbye'\n16:57:35.933 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-2' immediately...\ngoodbye ford\n16:57:35.951 | INFO    | Task run 'say_goodbye-261e56a8-2' - Finished in state Completed()\n16:57:35.959 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_hello-811087cd-3' for task 'say_hello'\n16:57:35.959 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_hello-811087cd-3' immediately...\nhello marvin\n16:57:35.976 | INFO    | Task run 'say_hello-811087cd-3' - Finished in state Completed()\n16:57:35.985 | INFO    | Flow run 'papaya-honeybee' - Created task run 'say_goodbye-261e56a8-3' for task 'say_goodbye'\n16:57:35.985 | INFO    | Flow run 'papaya-honeybee' - Executing 'say_goodbye-261e56a8-3' immediately...\ngoodbye marvin\n16:57:36.004 | INFO    | Task run 'say_goodbye-261e56a8-3' - Finished in state Completed()\n16:57:36.289 | INFO    | Flow run 'papaya-honeybee' - Finished in state Completed('All states completed.')\n

The tasks are not submitted to the DaskTaskRunner and are run sequentially.

","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/dask-ray-task-runners/#running-parallel-tasks-with-ray","title":"Running parallel tasks with Ray","text":"

To demonstrate the ability to flexibly apply the task runner appropriate for your workflow, use the same flow as above, with a few minor changes to use the RayTaskRunner where we previously configured DaskTaskRunner.

To configure your flow to use the RayTaskRunner:

  1. Make sure the prefect-ray collection is installed by running pip install prefect-ray.
  2. In your flow code, import RayTaskRunner from prefect_ray.task_runners.
  3. Assign it as the task runner when the flow is defined using the task_runner=RayTaskRunner argument.

Ray environment limitations

While we're excited about adding parallel task execution via Ray to Prefect, there are some inherent limitations with Ray you should be aware of:

See the Ray installation documentation for further compatibility information.

Save this code in ray_flow.py.

from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n@flow(task_runner=RayTaskRunner())\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n

Now run ray_flow.py. RayTaskRunner automatically creates a local Ray instance, then immediately starts executing all of the tasks in parallel. If you have an existing Ray instance, you can provide the address as a parameter to run tasks in the instance. See Running tasks on Ray for details.
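
A sketch of connecting to an existing Ray cluster, with a hypothetical address:

from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n# Hypothetical address of an existing Ray cluster.\n@flow(task_runner=RayTaskRunner(address=\"ray://ray-head:10001\"))\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\"])\n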

","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/dask-ray-task-runners/#using-multiple-task-runners","title":"Using multiple task runners","text":"

Many workflows include a variety of tasks, and not all of them benefit from parallel execution. You'll most likely want to use the Dask or Ray task runners and spin up their respective resources only for those tasks that need them.

Because task runners are specified on flows, you can assign different task runners to tasks by using subflows to organize those tasks.

This example uses the same tasks as the previous examples, but on the parent flow greetings() we use the default ConcurrentTaskRunner. Then we call a ray_greetings() subflow that uses the RayTaskRunner to execute the same tasks in a Ray instance.

from prefect import flow, task\nfrom prefect_ray.task_runners import RayTaskRunner\n\n@task\ndef say_hello(name):\n    print(f\"hello {name}\")\n\n@task\ndef say_goodbye(name):\n    print(f\"goodbye {name}\")\n\n@flow(task_runner=RayTaskRunner())\ndef ray_greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n\n@flow()\ndef greetings(names):\n    for name in names:\n        say_hello.submit(name)\n        say_goodbye.submit(name)\n    ray_greetings(names)\n\nif __name__ == \"__main__\":\n    greetings([\"arthur\", \"trillian\", \"ford\", \"marvin\"])\n

If you save this as ray_subflow.py and run it, you'll see that the flow greetings runs as you'd expect for a concurrent flow, then flow ray-greetings spins up a Ray instance to run the tasks again.

","tags":["tasks","task runners","flow configuration","parallel execution","distributed execution","Dask","Ray"],"boost":2},{"location":"guides/migration-guide/","title":"Migrating from Prefect 1 to Prefect 2","text":"

This guide is designed to help you migrate your workflows from Prefect 1 to Prefect 2.

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-stayed-the-same","title":"What stayed the same","text":"

Prefect 2 still:

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-changed","title":"What changed","text":"

Prefect 2 requires modifications to your existing tasks, flows, and deployment patterns. We've organized this section into the following categories:

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#simplified-patterns","title":"Simplified patterns","text":"

Since Prefect 2 allows running native Python code within the flow function, some abstractions are no longer necessary:

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#conceptual-and-syntax-changes","title":"Conceptual and syntax changes","text":"

The changes listed below require you to modify your workflow code. The following table shows how Prefect 1 concepts have been implemented in Prefect 2. The last column contains references to additional resources that provide more details and examples.

| Concept | Prefect 1 | Prefect 2 | Reference links |\n| --- | --- | --- | --- |\n| Flow definition. | with Flow(\"flow_name\") as flow: | @flow(name=\"flow_name\") | How can I define a flow? |\n| Flow executor that determines how to execute your task runs. | Executor such as LocalExecutor. | Task runner such as ConcurrentTaskRunner. | What is the default TaskRunner (executor)? |\n| Configuration that determines how and where to execute your flow runs. | Run configuration such as flow.run_config = DockerRun(). | Create an infrastructure block such as a Docker Container and specify it as the infrastructure when creating a deployment. | How can I run my flow in a Docker container? |\n| Assignment of schedules and default parameter values. | Schedules are attached to the flow object and default parameter values are defined within the Parameter tasks. | Schedules and default parameters are assigned to a flow\u2019s Deployment, rather than to a Flow object. | How can I attach a schedule to a flow? |\n| Retries | @task(max_retries=2, retry_delay=timedelta(seconds=5)) | @task(retries=2, retry_delay_seconds=5) | How can I specify the retry behavior for a specific task? |\n| Logger syntax. | Logger is retrieved from prefect.context and can only be used within tasks. | In Prefect 2, you can log not only from tasks, but also within flows. To get the logger object, use: prefect.get_run_logger(). | How can I add logs to my flow? |\n| The syntax and contents of Prefect context. | Context is a thread-safe way of accessing variables related to the flow run and task run. The syntax to retrieve it: prefect.context. | Context is still available, but its content is much richer, allowing you to retrieve even more information about your flow runs and task runs. The syntax to retrieve it: prefect.context.get_run_context(). | How to access Prefect context values? |\n| Task library. | Included in the main Prefect Core repository. | Separated into individual repositories per system, cloud provider, or technology. | How to migrate Prefect 1 tasks to Prefect 2 integrations. |","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-changed-in-dataflow-orchestration","title":"What changed in dataflow orchestration?","text":"

Let\u2019s look at the differences in how Prefect 2 transitions your flow and task runs between various execution states.

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#what-changed-in-flow-deployment-patterns","title":"What changed in flow deployment patterns?","text":"

To deploy your Prefect 1 flows, you have to send flow metadata to the backend in a step called registration. Prefect 2 no longer requires flow pre-registration. Instead, you create a Deployment that specifies the entry point to your flow code and optionally specifies:

The API is now implemented as a REST API rather than GraphQL. This page illustrates how you can interact with the API.

In Prefect 1, the logical grouping of flows was based on projects. Prefect 2 provides a much more flexible way of organizing your flows, tasks, and deployments through customizable filters and\u00a0tags. This page provides more details on how to assign tags to various Prefect 2 objects.

The role of agents has changed:

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#new-features-introduced-in-prefect-2","title":"New features introduced in Prefect 2","text":"

The following new components and capabilities are enabled by Prefect 2.

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#orchestration-behind-the-api","title":"Orchestration behind the API","text":"

Apart from new features, Prefect 2 simplifies many usage patterns and provides a much more seamless onboarding experience.

Every time you run a flow, whether it is tracked by the API server or ad-hoc through a Python script, it is on the same UI page for easier debugging and observability.

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#code-as-workflows","title":"Code as workflows","text":"

With Prefect 2, your functions\u00a0are\u00a0your flows and tasks. Prefect 2 automatically detects your flows and tasks without the need to define a rigid DAG structure. While use of tasks is encouraged to provide you the maximum visibility into your workflows, they are no longer required. You can add a single @flow decorator to your main function to transform any Python script into a Prefect workflow.

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#incremental-adoption","title":"Incremental adoption","text":"

The built-in SQLite database automatically tracks all your locally executed flow runs. As soon as you start a Prefect server and open the Prefect UI in your browser (or authenticate your CLI with your Prefect Cloud workspace), you can see all your locally executed flow runs in the UI. You don't even need to start an agent.

Then, when you want to move toward scheduled, repeatable workflows, you can build a deployment and send it to the server by running a CLI command or a Python script.
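
As a rough sketch of the Python route (the module and flow names are hypothetical, and the flow must be importable from wherever you run this script):

from prefect.deployments import Deployment\n\nfrom my_flows import my_favorite_flow  # hypothetical module containing a @flow\n\ndeployment = Deployment.build_from_flow(\n    flow=my_favorite_flow,\n    name=\"my-first-deployment\",\n    work_queue_name=\"default\",\n)\n\nif __name__ == \"__main__\":\n    # Register the deployment with your server or Prefect Cloud workspace.\n    deployment.apply()\n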

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#fewer-ambiguities","title":"Fewer ambiguities","text":"

Prefect 2 eliminates ambiguities in many ways. For example, there is no more confusion between Prefect Core and Prefect Server \u2014 Prefect 2 unifies those into a single open source product. This product is also much easier to deploy with no requirement for Docker or docker-compose.

If you want to switch your backend to use Prefect Cloud for an easier production-level managed experience, Prefect profiles let you quickly connect to your workspace.

In Prefect 1, there are several confusing ways you could implement caching. Prefect 2 resolves those ambiguities by providing a single cache_key_fn function paired with cache_expiration, allowing you to define arbitrary caching mechanisms \u2014 no more confusion about whether you need to use cache_for, cache_validator, or file-based caching using targets.
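
For example, input-based caching can be as small as this sketch (the one-hour expiration is arbitrary):

from datetime import timedelta\n\nfrom prefect import flow, task\nfrom prefect.tasks import task_input_hash\n\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))\ndef slow_lookup(x):\n    # Re-running with the same input within the hour reuses the cached result.\n    return x * 2\n\n@flow\ndef my_flow():\n    return slow_lookup(21)\n\nif __name__ == \"__main__\":\n    my_flow()\n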

For more details on how to configure caching, check out the following resources:

A similarly confusing concept in Prefect 1 was distinguishing between the functional and imperative APIs. This distinction caused ambiguities with respect to how to define state dependencies between tasks. Prefect 1 users were often unsure whether they should use the functional upstream_tasks keyword argument or the imperative methods such as task.set_upstream(), task.set_downstream(), or flow.set_dependencies(). In Prefect 2, there is only the functional API.

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/migration-guide/#next-steps","title":"Next steps","text":"

We know migrations can be tough. We encourage you to take it step-by-step and experiment with the new features.

To make the migration process easier for you:

Happy Engineering!

","tags":["migration","upgrading","best practices"],"boost":2},{"location":"guides/push-work-pools/","title":"Push Work to Serverless Computing Infrastructure","text":"

Push work pools are a special type of work pool that allows Prefect Cloud to submit flow runs for execution to serverless computing infrastructure without running a worker. Push work pools currently support execution in GCP Cloud Run Jobs, Azure Container Instances, and AWS ECS Tasks.

In this guide you will:

","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/push-work-pools/#setup","title":"Setup","text":"Google Cloud RunAWS ECSAzure Container Instance

To push work to Cloud Run, a GCP service account and an API Key are required.

Create a service account by navigating to the service accounts page and clicking Create. Name and describe your service account, and click continue to configure permissions.

The service account must have two roles at a minimum, Cloud Run Developer, and Service Account User.

Once the Service account is created, navigate to its Keys page to add an API key. Create a JSON type key, download it, and store it somewhere safe for use in the next section.

To push work to ECS, AWS credentials are required.

Create a user and attach the AmazonECS_FullAccess permissions.

From that user's page create credentials and store them somewhere safe for use in the next section.

To push work to Azure, an Azure subscription, resource group, and tenant secret are required.

","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/push-work-pools/#create-subscription-and-resource-worker","title":"Create Subscription and Resource Worker","text":"
  1. In the Azure portal, create a subscription.
  2. Create a resource group within your subscription.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/push-work-pools/#create-app-registration","title":"Create App Registration","text":"
  1. In the Azure portal, create an app registration.
  2. In the app registration, create a client secret. Copy the value and store it somewhere safe.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/push-work-pools/#add-app-registration-to-subscription","title":"Add App Registration to Subscription","text":"
  1. Navigate to the resource group you created earlier.
  2. Click on \"Access control (IAM)\" and then \"Role assignments\".
  3. Search for the app registration and select it. Give it a role that has sufficient privileges to create, run, and delete ACI container groups.
","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/push-work-pools/#work-pool-configuration","title":"Work Pool Configuration","text":"

Our push work pool will store information about what type of infrastructure we're running on, what default values to provide to compute jobs, and other important execution environment parameters. Because our push work pool needs to integrate securely with your serverless infrastructure, we need to start by storing our credentials in Prefect Cloud, which we'll do by making a block.

","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/push-work-pools/#creating-a-credentials-block","title":"Creating a Credentials Block","text":"Google Cloud RunAWS ECSAzure Container Instance

Navigate to the blocks page, click create new block, and select GCP Credentials for the type.

For use in a push work pool, this block must have the contents of the JSON key stored in the Service Account Info field, as such:

Provide any other optional information and create your block.

Navigate to the blocks page, click create new block, and select AWS Credentials for the type.

For use in a push work pool, this block must have the region and cluster name filled out, in addition to the access key and access key secret.

Provide any other optional information and create your block.

Navigate to the blocks page, click create new block, and select Azure Container Instance Credentials for the type.

Locate the client ID and tenant ID on your app registration and use the client secret you saved earlier.

Provide any other optional information and create your block.

","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/push-work-pools/#create-push-work-pool","title":"Create Push Work Pool","text":"

Now navigate to work pools and click create to start configuring your push work pool by selecting a push option in the infrastructure type step.

Google Cloud RunAWS ECSAzure Container Instance

Each step has several optional fields that are detailed in the work pools documentation. For our purposes, select the block you created under the GCP Credentials field. This will allow Prefect Cloud to securely interact with your GCP project.

Each step has several optional fields that are detailed in the work pools documentation. For our purposes, select the block you created under the AWS Credentials field. This will allow Prefect Cloud to securely interact with your ECS cluster.

Fill in the subscription ID and resource group name from the resource group you created. Add the Azure Container Instance Credentials block you created in the step above.

Create your pool and you are ready to deploy flows to your Push work pool.

","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/push-work-pools/#deployment","title":"Deployment","text":"

Deployment details are described in the deployments concept section. Your deployment needs to be configured to send flow runs to our push work pool. For example, if you create a deployment through the interactive command line experience, choose the work pool you just created. If you are deploying an existing prefect.yaml file, the deployment would contain:

  work_pool:\n    name: my-push-pool\n

Deploying your flow to the my-push-pool work pool will ensure that runs that are ready for execution will be submitted immediately, without the need for a worker to poll for them.

","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/push-work-pools/#putting-it-all-together","title":"Putting it all Together","text":"

With your deployment created, navigate to its detail page and create a new flow run. You'll see the flow start running without ever having to poll the work pool, because Prefect Cloud securely connected to your serverless infrastructure, created a job, ran the job, and began reporting on its execution.

","tags":["work pools","deployments","Cloud Run","AWS ECS","Azure Container Instance"],"boost":2},{"location":"guides/state-change-hooks/","title":"State Change Hooks","text":"

You've read about how state change hooks execute code in response to changes in flow or task run states, enabling you to define actions for specific state transitions in a workflow. Now let's see some real-world use cases!

","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/state-change-hooks/#example-use-cases","title":"Example use cases","text":"","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/state-change-hooks/#send-a-notification-when-a-flow-run-fails","title":"Send a notification when a flow run fails","text":"

State change hooks enable you to customize messages sent when tasks transition between states, such as sending notifications containing sensitive information when tasks enter a Failed state. Let's run a client-side hook upon a flow run entering a Failed state.

from prefect import flow\nfrom prefect.blocks.core import Block\nfrom prefect.settings import PREFECT_API_URL\n\ndef notify_slack(flow, flow_run, state):\n    slack_webhook_block = Block.load(\n        \"slack-webhook/my-slack-webhook\"\n    )\n\n    slack_webhook_block.notify(\n        (\n            f\"Your job {flow_run.name} entered {state.name} \"\n            f\"with message:\\n\\n\"\n            f\"See <https://{PREFECT_API_URL.value()}/flow-runs/\"\n            f\"flow-run/{flow_run.id}|the flow run in the UI>\\n\\n\"\n            f\"Tags: {flow_run.tags}\\n\\n\"\n            f\"Scheduled start: {flow_run.expected_start_time}\"\n        )\n    )\n\n@flow(on_failure=[notify_slack], retries=1)\ndef failing_flow():\n    raise ValueError(\"oops!\")\n\nif __name__ == \"__main__\":\n    failing_flow()\n

Note that because we've configured retries here, the on_failure hook will not run until all retries have completed, when the flow run finally enters a Failed state.

","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/state-change-hooks/#delete-a-cloud-run-job-when-a-flow-crashes","title":"Delete a Cloud Run job when a flow crashes","text":"

State change hooks can aid in managing infrastructure cleanup in scenarios where tasks spin up individual infrastructure resources independently of Prefect. When a flow run crashes, tasks may exit abruptly, resulting in the potential omission of cleanup logic within the tasks. State change hooks can be used to ensure infrastructure is properly cleaned up even when a flow run enters a Crashed state.

Let's create a hook that deletes a Cloud Run job if the flow run crashes.

import os\nfrom prefect import flow, task\nfrom prefect.blocks.system import String\nfrom prefect.client import get_client\nimport prefect.runtime\n\nasync def delete_cloud_run_job(flow, flow_run, state):\n    \"\"\"Flow run state change hook that deletes a Cloud Run Job if\n    the flow run crashes.\"\"\"\n\n    # retrieve Cloud Run job name\n    cloud_run_job_name = await String.load(\n        name=\"crashing-flow-cloud-run-job\"\n    )\n\n    # delete Cloud Run job\n    delete_job_command = (\n        f\"yes | gcloud beta run jobs delete \"\n        f\"{cloud_run_job_name.value} --region us-central1\"\n    )\n    os.system(delete_job_command)\n\n    # clean up the Cloud Run job string block as well\n    async with get_client() as client:\n        block_document = await client.read_block_document_by_name(\n            \"crashing-flow-cloud-run-job\", block_type_slug=\"string\"\n        )\n        await client.delete_block_document(block_document.id)\n\n@task\ndef my_task_that_crashes():\n    raise SystemExit(\"Crashing on purpose!\")\n\n@flow(on_crashed=[delete_cloud_run_job])\ndef crashing_flow():\n    \"\"\"Save the flow run name (i.e. Cloud Run job name) as a\n    String block. It then executes a task that ends up crashing.\"\"\"\n    flow_run_name = prefect.runtime.flow_run.name\n    cloud_run_job_name = String(value=flow_run_name)\n    cloud_run_job_name.save(\n        name=\"crashing-flow-cloud-run-job\", overwrite=True\n    )\n\n    my_task_that_crashes()\n\nif __name__ == \"__main__\":\n    crashing_flow()\n
","tags":["state change","hooks","triggers"],"boost":2},{"location":"guides/testing/","title":"Testing","text":"

Once you have some awesome flows, you probably want to test them!

","tags":["testing","unit testing","development"],"boost":2},{"location":"guides/testing/#unit-testing-flows","title":"Unit testing flows","text":"

Prefect provides a simple context manager for unit tests that allows you to run flows and tasks against a temporary local SQLite database.

from prefect import flow\nfrom prefect.testing.utilities import prefect_test_harness\n\n@flow\ndef my_favorite_flow():\n    return 42\n\ndef test_my_favorite_flow():\n    with prefect_test_harness():\n        # run the flow against a temporary testing database\n        assert my_favorite_flow() == 42\n

For more extensive testing, you can leverage prefect_test_harness as a fixture in your unit testing framework. For example, when using pytest:

from prefect import flow\nimport pytest\nfrom prefect.testing.utilities import prefect_test_harness\n\n@pytest.fixture(autouse=True, scope=\"session\")\ndef prefect_test_fixture():\n    with prefect_test_harness():\n        yield\n\n@flow\ndef my_favorite_flow():\n    return 42\n\ndef test_my_favorite_flow():\n    assert my_favorite_flow() == 42\n

Note

In this example, the fixture is scoped to run once for the entire test session. In most cases, you will not need a clean database for each test and just want to isolate your test runs to a test database. Creating a new test database per test creates significant overhead, so we recommend scoping the fixture to the session. If you need to isolate some tests fully, you can use the test harness again to create a fresh database.
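
For example, a function-scoped fixture (sketched below with a hypothetical name) gives any test that requests it a completely fresh database, at the cost of extra setup time for that test:

import pytest\n\nfrom prefect import flow\nfrom prefect.testing.utilities import prefect_test_harness\n\n@pytest.fixture\ndef isolated_prefect_db():\n    # Each test that requests this fixture gets its own temporary database.\n    with prefect_test_harness():\n        yield\n\n@flow\ndef my_favorite_flow():\n    return 42\n\ndef test_fully_isolated(isolated_prefect_db):\n    assert my_favorite_flow() == 42\n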

","tags":["testing","unit testing","development"],"boost":2},{"location":"guides/testing/#unit-testing-tasks","title":"Unit testing tasks","text":"

To test an individual task, you can access the original function using .fn:

from prefect import flow, task\n\n@task\ndef my_favorite_task():\n    return 42\n\n@flow\ndef my_favorite_flow():\n    val = my_favorite_task()\n    return val\n\ndef test_my_favorite_task():\n    assert my_favorite_task.fn() == 42\n
","tags":["testing","unit testing","development"],"boost":2},{"location":"guides/troubleshooting/","title":"Troubleshooting","text":"

Don't Panic! If you experience an error with Prefect, there are many paths to understanding and resolving it. The first troubleshooting step is confirming that you are running the latest version of Prefect. If you are not, be sure to upgrade to the latest version, since the issue may have already been fixed. Beyond that, there are several categories of errors:

","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#upgrade","title":"Upgrade","text":"

Prefect is constantly evolving, adding new features and fixing bugs. Chances are that a patch has already been identified and released. Search existing issues for similar reports and check out the Release Notes. Upgrade to the newest version with the following command:

pip install --upgrade prefect\n

Different components may use different versions of Prefect:

Integration Versions

Keep in mind that integrations are versioned and released independently of the core Prefect library. They should be upgraded simultaneously with the core library, using the same method.

","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#logs","title":"Logs","text":"

In many cases, there will be an informative stack trace in Prefect's logs. Read it carefully, locate the source of the error, and try to identify the cause.

There are two types of logs:

If your flow and task logs are empty, there may have been an infrastructure issue that prevented your flow from starting. Check your agent logs for more details.

If there is no clear indication of what went wrong, try updating the logging level from the default INFO level to the DEBUG level. Settings such as the logging level are propagated from the agent or worker environment to the flow run environment and can be set via environment variables or the prefect config set CLI:

# Using the CLI\nprefect config set PREFECT_LOGGING_LEVEL=DEBUG\n\n# Using environment variables\nexport PREFECT_LOGGING_LEVEL=DEBUG\n

The DEBUG logging level produces a high volume of logs so consider setting it back to INFO once any issues are resolved.

","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#cloud","title":"Cloud","text":"

When using Prefect Cloud, there are the additional concerns of authentication and authorization. The Prefect API authenticates users and service accounts - collectively known as actors - with API keys. Missing, incorrect, or expired API keys will result in a 401 response with detail Invalid authentication credentials. Use the following command to check your authentication, replacing $PREFECT_API_KEY with your API key:

curl -s -H \"Authorization: Bearer $PREFECT_API_KEY\" \"https://api.prefect.cloud/api/me/\"\n

Users vs Service Accounts

Service accounts - sometimes referred to as bots - represent non-human actors that interact with Prefect such as agents, workers, and CI/CD systems. Each human that interacts with Prefect should be represented as a user. User API keys start with pnu_ and service account API keys start with pnb_.

Supposing the response succeeds, let's check our authorization. Actors can be members of workspaces. An actor attempting an action in a workspace they are not a member of will result in a 404 response. Use the following command to check your actor's workspace memberships:

curl -s -H \"Authorization: Bearer $PREFECT_API_KEY\" \"https://api.prefect.cloud/api/me/workspaces\"\n

Formatting JSON

Python comes with a helpful tool for formatting JSON. Append the following to the end of the command above to make the output more readable: | python -m json.tool

Make sure your actor is a member of the workspace you are working in. Within a workspace, an actor has a role which grants them certain permissions. Insufficient permissions will result in an error. For example, starting an agent or worker with the Viewer role will result in errors.

","tags":["troubleshooting","guides","how to"]},{"location":"guides/troubleshooting/#execution","title":"Execution","text":"

Prefect flows can be executed locally by the user, or remotely by an agent or worker. Local execution generally means that you - the user - run your flow directly with a command like python flow.py. Remote execution generally means that an agent or worker runs your flow via a deployment, optionally on different infrastructure.

With remote execution, the creation of your flow run happens separately from its execution. Flow runs are assigned to a work pool and a work queue. In order for flow runs to execute, an agent or worker must be subscribed to the work pool and work queue, otherwise the flow runs will go from Scheduled to Late. Ensure that your work pool and work queue have a subscribed agent or worker.

Local and remote execution can also differ in their treatment of relative imports. If switching from local to remote execution results in local import errors, try replicating the behavior by executing the flow locally with the -m flag (i.e. python -m flow instead of python flow.py). Read more about -m here.

","tags":["troubleshooting","guides","how to"]},{"location":"guides/deployment/aci/","title":"Run an Agent with Azure Container Instances","text":"

Microsoft Azure Container Instances (ACI) provides a convenient and simple service for quickly spinning up a Docker container that can host a Prefect Agent and execute flow runs.

","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#prerequisites","title":"Prerequisites","text":"

To follow this quickstart, you'll need the following:

","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#create-a-resource-group","title":"Create a resource group","text":"

Like most Azure resources, ACI applications must live in a resource group. If you don\u2019t already have a resource group you\u2019d like to use, create a new one by running the az group create command. For example, this command creates a resource group called prefect-agents in the eastus region:

az group create --name prefect-agents --location eastus\n

Feel free to change the group name or location to match your use case. You can also run az account list-locations -o table to see all available resource group locations for your account.

","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#create-the-container-instance","title":"Create the container instance","text":"

Prefect provides pre-configured Docker images you can use to quickly stand up a container instance. These Docker images include Python and Prefect. For example, the image prefecthq/prefect:2-python3.10 includes the latest release version of Prefect and Python 3.10.

To create the container instance, use the az container create command. This example shows the syntax, but you'll need to provide the correct values for [ACCOUNT-ID], [WORKSPACE-ID], [API-KEY], and any dependencies you need to pip install on the instance. These options are discussed below.

az container create \\\n--resource-group prefect-agents \\\n--name prefect-agent-example \\\n--image prefecthq/prefect:2-python3.10 \\\n--secure-environment-variables PREFECT_API_URL='https://api.prefect.cloud/api/accounts/[ACCOUNT-ID]/workspaces/[WORKSPACE-ID]' PREFECT_API_KEY='[API-KEY]' \\\n--command-line \"/bin/bash -c 'pip install adlfs s3fs requests pandas; prefect agent start -p default-agent-pool -q test'\"\n

When the container instance is running, go to Prefect Cloud and select the Work Pools page. Select default-agent-pool, then select the Queues tab to see work queues configured on this work pool. When the container instance is running and the agent has started, the test work queue displays \"Healthy\" status. This work queue and agent are ready to execute deployments configured to run on the test queue.

Agents and queues

The agent running in this container instance can now pick up and execute flow runs for any deployment configured to use the test queue on the default-agent-pool work pool.

","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#container-create-options","title":"Container create options","text":"

Let's break down the details of the az container create command used here.

The az container create command creates a new ACI container.

--resource-group prefect-agents tells Azure which resource group the new container is created in. Here, the example uses the prefect-agents resource group created earlier.

--name prefect-agent-example determines the container name you will see in the Azure Portal. You can set any name you\u2019d like here to suit your use case, but container instance names must be unique in your resource group.

--image prefecthq/prefect:2-python3.10 tells ACI which Docker image to run. The script above pulls a public Prefect image from Docker Hub. You can also build custom images and push them to a public container registry so ACI can access them. Or you can push your image to a private Azure Container Registry and use it to create a container instance.

--secure-environment-variables sets environment variables that are only visible from inside the container. They do not show up when viewing the container\u2019s metadata. You'll populate these environment variables with a few pieces of information to configure the execution environment of the container instance so it can communicate with your Prefect Cloud workspace:

--command-line lets you override the container\u2019s normal entry point and run a command instead. The script above uses this section to install the adlfs pip package so it can read flow code from Azure Blob Storage, along with s3fs, pandas, and requests. It then runs the Prefect agent, in this case using the default work pool and a test work queue. If you want to use a different work pool or queue, make sure to change these values appropriately.

","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#create-a-deployment","title":"Create a deployment","text":"

Following the example of the Flow deployments tutorial, let's create a deployment that can be executed by the agent on this container instance.

In an environment where you have installed Prefect, create a new folder called health_test, and within it create a new file called health_flow.py containing the following code.

import prefect\nfrom prefect import task, flow\nfrom prefect import get_run_logger\n\n\n@task\ndef say_hi():\n    logger = get_run_logger()\n    logger.info(\"Hello from the Health Check Flow! \ud83d\udc4b\")\n\n\n@task\ndef log_platform_info():\n    import platform\n    import sys\n    from prefect.server.api.server import SERVER_API_VERSION\n\n    logger = get_run_logger()\n    logger.info(\"Host's network name = %s\", platform.node())\n    logger.info(\"Python version = %s\", platform.python_version())\n    logger.info(\"Platform information (instance type) = %s \", platform.platform())\n    logger.info(\"OS/Arch = %s/%s\", sys.platform, platform.machine())\n    logger.info(\"Prefect Version = %s \ud83d\ude80\", prefect.__version__)\n    logger.info(\"Prefect API Version = %s\", SERVER_API_VERSION)\n\n\n@flow(name=\"Health Check Flow\")\ndef health_check_flow():\n    hi = say_hi()\n    log_platform_info(wait_for=[hi])\n

Now create a deployment for this flow script, making sure that it's configured to use the test queue on the default-agent-pool work pool.

prefect deployment build --infra process --storage-block azure/flowsville/health_test --name health-test --pool default-agent-pool --work-queue test --apply health_flow.py:health_check_flow\n

Once created, any flow runs for this deployment will be picked up by the agent running on this container instance.

Infrastructure and storage

This Prefect deployment example was built using the Process infrastructure type and Azure Blob Storage.

You might wonder why your deployment needs process infrastructure rather than DockerContainer infrastructure when you are deploying a Docker image to ACI.

A Prefect deployment\u2019s infrastructure type describes how you want Prefect agents to run flows for the deployment. With DockerContainer infrastructure, the agent will try to use Docker to spin up a new container for each flow run. Since you\u2019ll be starting your own container on ACI, you don\u2019t need Prefect to do it for you. Specifying process infrastructure on the deployment tells Prefect you want the agent to run flows by starting a process in your ACI container.

You can use any storage type as long as you've configured a block for it before creating the deployment.

","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/aci/#cleaning-up","title":"Cleaning up","text":"

Note that ACI instances may incur usage charges while running, but must be running for the agent to pick up and execute flow runs.

To stop a container, use the az container stop command:

az container stop --resource-group prefect-agents --name prefect-agent-example\n

To delete a container, use the az container delete command:

az container delete --resource-group prefect-agents --name prefect-agent-example\n
","tags":["Docker","containers","agents","cloud"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/","title":"Developing a New Worker Type","text":"

Advanced Topic

This tutorial is for users who want to extend the Prefect framework; completing it successfully requires a deep knowledge of Prefect concepts. For standard use cases, we recommend using one of the available workers instead.

Prefect workers are responsible for setting up execution infrastructure and starting flow runs on that infrastructure.

A list of available workers can be found in the Work Pools, Workers & Agents documentation. What if you want to execute your flow runs on infrastructure that doesn't have an available worker type? This tutorial will walk you through creating a custom worker that can run your flows on your chosen infrastructure.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-configuration","title":"Worker Configuration","text":"

When setting up an execution environment for a flow run, a worker receives configuration for the infrastructure it is designed to work with. Examples of configuration values include memory allocation, CPU allocation, credentials, image name, etc. The worker then uses this configuration to create the execution environment and start the flow run.

How are the configuration values populated?

The work pool that a worker polls for flow runs has a base job template associated with it. The template is the contract for how configuration values populate for each flow run.

The keys in the job_configuration section of this base job template match the worker's configuration class attributes. The values in the job_configuration section of the base job template are used to populate the attributes of the worker's configuration class.

The work pool creator gets to decide how they want to populate the values in the job_configuration section of the base job template. The values can be hard-coded, templated using placeholders, or a mix of these two approaches. Because you, as the worker developer, don't know how the work pool creator will populate the values, you should set sensible defaults for your configuration class attributes as a matter of best practice.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#implementing-a-basejobconfiguration-subclass","title":"Implementing a BaseJobConfiguration Subclass","text":"

A worker developer defines their worker's configuration with a class that extends BaseJobConfiguration.

BaseJobConfiguration has attributes that are common to all workers:

Attribute Description name The name to assign to the created execution environment. env Environment variables to set in the created execution environment. labels The labels assigned to the created execution environment for metadata purposes. command The command to use when starting a flow run.

Prefect sets values for each attribute before giving the configuration to the worker. If you want to customize the values of these attributes, use the prepare_for_flow_run method.

Here's an example prepare_for_flow_run method that adds a label to the execution environment:

def prepare_for_flow_run(\n    self, flow_run, deployment = None, flow = None,\n):  \n    super().prepare_for_flow_run(flow_run, deployment, flow)  \n    self.labels.append(\"my-custom-label\")\n

A worker configuration class is a Pydantic model, so you can add additional attributes to your configuration class as Pydantic fields. For example, if you want to allow memory and CPU requests for your worker, you can do so like this:

from pydantic import Field\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: int = Field(\n            default=1024,\n            description=\"Memory allocation for the execution environment.\"\n        )\n    cpu: int = Field(\n            default=500, \n            description=\"CPU allocation for the execution environment.\"\n        )\n

This configuration class will populate the job_configuration section of the resulting base job template.

For this example, the base job template would look like this:

job_configuration:\nname: \"{{ name }}\"\nenv: \"{{ env }}\"\nlabels: \"{{ labels }}\"\ncommand: \"{{ command }}\"\nmemory: \"{{ memory }}\"\ncpu: \"{{ cpu }}\"\nvariables:\ntype: object\nproperties:\nname:\ntitle: Name\ndescription: Name given to infrastructure created by a worker.\ntype: string\nenv:\ntitle: Environment Variables\ndescription: Environment variables to set when starting a flow run.\ntype: object\nadditionalProperties:\ntype: string\nlabels:\ntitle: Labels\ndescription: Labels applied to infrastructure created by a worker.\ntype: object\nadditionalProperties:\ntype: string\ncommand:\ntitle: Command\ndescription: The command to use when starting a flow run. In most cases,\nthis should be left blank and the command will be automatically generated\nby the worker.\ntype: string\nmemory:\ntitle: Memory\ndescription: Memory allocation for the execution environment.\ntype: integer\ndefault: 1024\ncpu:\ntitle: CPU\ndescription: CPU allocation for the execution environment.\ntype: integer\ndefault: 500\n

This base job template defines what values can be provided by deployment creators on a per-deployment basis and how those provided values will be translated into the configuration values that the worker will use to create the execution environment.

Notice that each attribute for the class was added in the job_configuration section with placeholders whose name matches the attribute name. The variables section was also populated with the OpenAPI schema for each attribute. If a configuration class is used without explicitly declaring any template variables, the template variables will be inferred from the configuration class attributes.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#customizing-configuration-attribute-templates","title":"Customizing Configuration Attribute Templates","text":"

You can customize the template for each attribute for situations where the configuration values should use more sophisticated templating. For example, if you want to add units for the memory attribute, you can do so like this:

from pydantic import Field\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: str = Field(\n            default=\"1024Mi\",\n            description=\"Memory allocation for the execution environment.\",\n            template=\"{{ memory_request }}Mi\"\n        )\n    cpu: str = Field(\n            default=\"500m\", \n            description=\"CPU allocation for the execution environment.\",\n            template=\"{{ cpu_request }}m\"\n        )\n

Notice that we changed the type of each attribute to str to accommodate the units, and we added a new template attribute to each attribute. The template attribute is used to populate the job_configuration section of the resulting base job template.

For this example, the job_configuration section of the resulting base job template would look like this:

job_configuration:\nname: \"{{ name }}\"\nenv: \"{{ env }}\"\nlabels: \"{{ labels }}\"\ncommand: \"{{ command }}\"\nmemory: \"{{ memory_request }}Mi\"\ncpu: \"{{ cpu_request }}m\"\n

Note that to use custom templates, you will need to declare the template variables used in the template because the names of those variables can no longer be inferred from the configuration class attributes. We will cover how to declare the default variable schema in the Worker Template Variables section.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#rules-for-template-variable-interpolation","title":"Rules for Template Variable Interpolation","text":"

When defining a job configuration model, it's useful to understand how template variables are interpolated into the job configuration. The templating engine follows a few simple rules:

  1. If a template variable is the only value for a key in the job_configuration section, the key's value will be replaced with the value provided for the template variable.
  2. If a template variable is part of a string (i.e., there is text before or after the template variable), the value of the template variable will be interpolated into the string.
  3. If a template variable is the only value for a key in the job_configuration section and no value is provided for the template variable, the key will be removed from the job_configuration section.

These rules allow worker developers and work pool maintainers to define template variables that can be complex types like dictionaries and lists. These rules also mean that worker developers should give reasonable default values to job configuration fields whenever possible because values are not guaranteed to be provided if template variables are unset.
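
To make the rules concrete, here is a toy sketch (not Prefect's actual templating engine) that applies them to a flat job_configuration dictionary:

def interpolate(job_configuration: dict, values: dict) -> dict:\n    \"\"\"Toy illustration of the three interpolation rules above.\"\"\"\n    result = {}\n    for key, template in job_configuration.items():\n        if template.startswith(\"{{ \") and template.endswith(\" }}\") and template.count(\"{{\") == 1:\n            variable = template[3:-3]\n            # Rule 1: a lone placeholder is replaced with the raw value provided...\n            if variable in values:\n                result[key] = values[variable]\n            # Rule 3: ...and the key is removed entirely when no value is provided.\n            continue\n        # Rule 2: placeholders embedded in a longer string are interpolated as text.\n        rendered = template\n        for variable, value in values.items():\n            rendered = rendered.replace(\"{{ \" + variable + \" }}\", str(value))\n        result[key] = rendered\n    return result\n\nprint(interpolate(\n    {\"env\": \"{{ env }}\", \"memory\": \"{{ memory_request }}Mi\"},\n    {\"memory_request\": 1024},\n))\n# {'memory': '1024Mi'} -- \"env\" is dropped because no value was provided\n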

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#template-variable-usage-strategies","title":"Template Variable Usage Strategies","text":"

Template variables define the interface that deployment creators interact with to configure the execution environments of their deployments. The complexity of this interface can be controlled via the template variables that are defined for a base job template. This control allows work pool maintainers to find a point along the spectrum of flexibility and simplicity appropriate for their organization.

There are two patterns that are represented in current worker implementations:

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#pass-through","title":"Pass-Through","text":"

In the pass-through pattern, template variables are passed through to the job configuration with little change. This pattern exposes complete control to deployment creators but also requires them to understand the details of the execution environment.

This pattern is useful when the execution environment is simple, and the deployment creators are expected to have high technical knowledge.

The Docker worker is an example of a worker that uses this pattern.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#infrastructure-as-code-templating","title":"Infrastructure as Code Templating","text":"

Depending on the infrastructure they interact with, workers can sometimes employ a declarative infrastructure syntax (i.e., infrastructure as code) to create execution environments (e.g., a Kubernetes manifest or an ECS task definition).

In the IaC pattern, it's often useful to use template variables to template portions of the declarative syntax which then can be used to generate the declarative syntax into a final form.

This approach allows work pool creators to provide a simpler interface to deployment creators while also controlling which portions of infrastructure are configurable by deployment creators.

The Kubernetes worker is an example of a worker that uses this pattern.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#configuring-credentials","title":"Configuring Credentials","text":"

When executing flow runs within cloud services, workers will often need credentials to authenticate with those services. For example, a worker that executes flow runs in AWS Fargate will need AWS credentials. As a worker developer, you can use blocks to accept credentials configuration from the user.

For example, if you want to allow the user to configure AWS credentials, you can do so like this:

from pydantic import Field\nfrom prefect_aws import AwsCredentials\n\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    aws_credentials: AwsCredentials = Field(\n        default=None,\n        description=\"AWS credentials to use when creating AWS resources.\"\n    )\n

Users can create and assign a block to the aws_credentials attribute in the UI and the worker will use these credentials when interacting with AWS resources.
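
For illustration only, a hypothetical helper showing how a worker might use the assigned block at run time (it assumes the MyWorkerConfiguration above and uses get_boto3_session from the prefect-aws AwsCredentials block):

def create_ecs_client(configuration: MyWorkerConfiguration):\n    # Build an authenticated boto3 session from the user-assigned credentials block\n    session = configuration.aws_credentials.get_boto3_session()\n    # \"ecs\" is only an example service; create whichever AWS client your worker needs\n    return session.client(\"ecs\")\n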

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-template-variables","title":"Worker Template Variables","text":"

Providing template variables for a base job template defines the fields that deployment creators can override per deployment. The work pool creator ultimately defines the template variables for a base job template, but the worker developer is able to define default template variables for the worker to make it easier to use.

Default template variables for a worker are defined by implementing the BaseVariables class. Like the BaseJobConfiguration class, the BaseVariables class has attributes that are common to all workers:

Attribute Description name The name to assign to the created execution environment. env Environment variables to set in the created execution environment. labels The labels assigned to the created execution environment for metadata purposes. command The command to use when starting a flow run.

Additional attributes can be added to the BaseVariables class to define additional template variables. For example, if you want to allow memory and CPU requests for your worker, you can do so like this:

from pydantic import Field\nfrom prefect.workers.base import BaseVariables\n\nclass MyWorkerTemplateVariables(BaseVariables):\n    memory_request: int = Field(\n            default=1024,\n            description=\"Memory allocation for the execution environment.\"\n        )\n    cpu_request: int = Field(\n            default=500, \n            description=\"CPU allocation for the execution environment.\"\n        )\n

When MyWorkerTemplateVariables is used in conjunction with MyWorkerConfiguration from the Customizing Configuration Attribute Templates section, the resulting base job template will look like this:

job_configuration:\nname: \"{{ name }}\"\nenv: \"{{ env }}\"\nlabels: \"{{ labels }}\"\ncommand: \"{{ command }}\"\nmemory: \"{{ memory_request }}Mi\"\ncpu: \"{{ cpu_request }}m\"\nvariables:\ntype: object\nproperties:\nname:\ntitle: Name\ndescription: Name given to infrastructure created by a worker.\ntype: string\nenv:\ntitle: Environment Variables\ndescription: Environment variables to set when starting a flow run.\ntype: object\nadditionalProperties:\ntype: string\nlabels:\ntitle: Labels\ndescription: Labels applied to infrastructure created by a worker.\ntype: object\nadditionalProperties:\ntype: string\ncommand:\ntitle: Command\ndescription: The command to use when starting a flow run. In most cases,\nthis should be left blank and the command will be automatically generated\nby the worker.\ntype: string\nmemory_request:\ntitle: Memory Request\ndescription: Memory allocation for the execution environment.\ntype: integer\ndefault: 1024\ncpu_request:\ntitle: CPU Request\ndescription: CPU allocation for the execution environment.\ntype: integer\ndefault: 500\n

Note that template variable classes are never used directly. Instead, they are used to generate a schema that is used to populate the variables section of a base job template and validate the template variables provided by the user.

We don't recommend using template variable classes within your worker implementation for validation purposes because the work pool creator ultimately defines the template variables. The configuration class should handle any necessary run-time validation.
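
For example, a minimal sketch of adding run-time validation directly on the configuration class with a standard Pydantic validator (the field name matches the earlier example):

from pydantic import Field, validator\nfrom prefect.workers.base import BaseJobConfiguration\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: str = Field(\n        default=\"1024Mi\",\n        description=\"Memory allocation for the execution environment.\"\n    )\n\n    @validator(\"memory\")\n    def memory_must_have_units(cls, value):\n        # Reject values that were templated without the expected \"Mi\" suffix\n        if not value.endswith(\"Mi\"):\n            raise ValueError(\"memory must be specified in Mi, for example '1024Mi'\")\n        return value\n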

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-implementation","title":"Worker Implementation","text":"

Workers set up execution environments using provided configuration. Workers also observe the execution environment as the flow run executes and report any crashes to the Prefect API.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#attributes","title":"Attributes","text":"

To implement a worker, you must extend the BaseWorker class and provide it with the following attributes:

Attribute Description Required type The type of the worker. Yes job_configuration The configuration class for the worker. Yes job_configuration_variables The template variables class for the worker. No _documentation_url Link to documentation for the worker. No _logo_url Link to a logo for the worker. No _description A description of the worker. No","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#methods","title":"Methods","text":"","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#run","title":"run","text":"

In addition to the attributes above, you must also implement a run method. The run method is called for each flow run the worker receives for execution from the work pool.

The run method has the following signature:

 async def run(\n        self, flow_run: FlowRun, configuration: BaseJobConfiguration, task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> BaseWorkerResult:\n        ...\n

The run method is passed: the flow run to execute, the execution environment configuration for the flow run, and a task status object that allows the worker to track whether the flow run was submitted successfully.

The run method must also return a BaseWorkerResult object. The BaseWorkerResult object returned contains information about the flow run execution. For the most part, you can implement the BaseWorkerResult with no modifications like so:

from prefect.workers.base import BaseWorkerResult\n\nclass MyWorkerResult(BaseWorkerResult):\n    \"\"\"Result returned by the MyWorker.\"\"\"\n

If you would like to return more information about a flow run, then additional attributes can be added to the BaseWorkerResult class.
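
For example, a sketch with one extra, hypothetical attribute (BaseWorkerResult is a Pydantic model, so additional fields are declared like any other field):

from prefect.workers.base import BaseWorkerResult\n\nclass MyWorkerResult(BaseWorkerResult):\n    \"\"\"Result returned by the MyWorker.\"\"\"\n\n    # Hypothetical extra field: where logs for the execution environment can be found\n    log_url: str = \"\"\n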

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#kill_infrastructure","title":"kill_infrastructure","text":"

Workers must implement a kill_infrastructure method to support flow run cancellation. The kill_infrastructure method is called when a flow run is canceled and is passed an identifier for the infrastructure to tear down and the execution environment configuration for the flow run.

The infrastructure_pid passed to the kill_infrastructure method is the same identifier used to mark a flow run execution as started in the run method. The infrastructure_pid must be a string, but it can take on any format you choose.

The infrastructure_pid should contain enough information to uniquely identify the infrastructure created for a flow run when used with the job_configuration passed to the kill_infrastructure method. Examples of useful information include: the cluster name, the hostname, the process ID, the container ID, etc.
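
For example, a worker that runs flows as Kubernetes jobs might encode the cluster, namespace, and job name into one string. The helpers below are purely illustrative of that idea:

def build_infrastructure_pid(cluster: str, namespace: str, job_name: str) -> str:\n    # Encode everything needed to find the job again into a single string\n    return f\"{cluster}:{namespace}:{job_name}\"\n\ndef parse_infrastructure_pid(infrastructure_pid: str) -> tuple:\n    # Invert the encoding inside kill_infrastructure to locate the job to tear down\n    cluster, namespace, job_name = infrastructure_pid.split(\":\")\n    return cluster, namespace, job_name\n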

If a worker cannot tear down infrastructure created for a flow run, the kill_infrastructure method should raise an InfrastructureNotFound or InfrastructureNotAvailable exception.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#worker-implementation-example","title":"Worker Implementation Example","text":"

Below is an example of a worker implementation. This example is not intended to be a complete implementation but to illustrate the aforementioned concepts.

from typing import Optional\n\nimport anyio.abc\nfrom pydantic import Field\n\nfrom prefect.client.schemas import FlowRun\nfrom prefect.workers.base import BaseWorker, BaseWorkerResult, BaseJobConfiguration, BaseVariables\n\nclass MyWorkerConfiguration(BaseJobConfiguration):\n    memory: str = Field(\n            default=\"1024Mi\",\n            description=\"Memory allocation for the execution environment.\",\n            template=\"{{ memory_request }}Mi\"\n        )\n    cpu: str = Field(\n            default=\"500m\", \n            description=\"CPU allocation for the execution environment.\",\n            template=\"{{ cpu_request }}m\"\n        )\n\nclass MyWorkerTemplateVariables(BaseVariables):\n    memory_request: int = Field(\n            default=1024,\n            description=\"Memory allocation for the execution environment.\"\n        )\n    cpu_request: int = Field(\n            default=500, \n            description=\"CPU allocation for the execution environment.\"\n        )\n\nclass MyWorkerResult(BaseWorkerResult):\n    \"\"\"Result returned by the MyWorker.\"\"\"\n\nclass MyWorker(BaseWorker):\n    type = \"my-worker\"\n    job_configuration = MyWorkerConfiguration\n    job_configuration_variables = MyWorkerTemplateVariables\n    _documentation_url = \"https://example.com/docs\"\n    _logo_url = \"https://example.com/logo\"\n    _description = \"My worker description.\"\n\n    async def run(\n        self, flow_run: FlowRun, configuration: BaseJobConfiguration, task_status: Optional[anyio.abc.TaskStatus] = None,\n    ) -> BaseWorkerResult:\n        # Create the execution environment and start execution\n        job = await self._create_and_start_job(configuration)\n\n        if task_status:\n            # Use a unique ID to mark the run as started. This ID is later used to tear down infrastructure\n            # if the flow run is cancelled.\n            task_status.started(job.id)\n\n        # Monitor the execution\n        job_status = await self._watch_job(job, configuration)\n\n        exit_code = job_status.exit_code if job_status else -1 # Get result of execution for reporting\n        return MyWorkerResult(\n            status_code=exit_code,\n            identifier=job.id,\n        )\n\n    async def kill_infrastructure(self, infrastructure_pid: str, configuration: BaseJobConfiguration) -> None:\n        # Tear down the execution environment\n        await self._kill_job(infrastructure_pid, configuration)\n

Most of the execution logic is omitted from the example above, but it shows that the typical order of operations in the run method is:

  1. Create the execution environment and start the flow run execution
  2. Mark the flow run as started via the passed task_status object
  3. Monitor the execution
  4. Get the execution's final status from the infrastructure and return a BaseWorkerResult object

To see other examples of worker implementations, see the ProcessWorker and KubernetesWorker implementations.

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/developing-a-new-worker-type/#integrating-with-the-prefect-cli","title":"Integrating with the Prefect CLI","text":"

Workers can be started via the Prefect CLI by providing the --type option to the prefect worker start CLI command. To make your worker type available via the CLI, it must be available at import time.

If your worker is in a package, you can add an entry point to your setup file in the following format:

entry_points={\n    \"prefect.collections\": [\n        \"my_package_name = my_worker_module\",\n    ]\n},\n

Prefect will discover this entry point and use it to import your worker module, making your worker type available via the CLI.
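
For context, here is a minimal setup.py sketch showing where that entry point declaration lives (the package and module names are placeholders matching the fragment above):

from setuptools import setup, find_packages\n\nsetup(\n    name=\"my-package-name\",\n    packages=find_packages(),\n    install_requires=[\"prefect>=2.0\"],\n    entry_points={\n        \"prefect.collections\": [\n            # Prefect imports this module at CLI start-up so the worker type is registered\n            \"my_package_name = my_worker_module\",\n        ]\n    },\n)\n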

","tags":["work pools","workers","orchestration","flow runs","deployments","storage","infrastructure","tutorial","recipes"],"boost":2},{"location":"guides/deployment/docker/","title":"Running flows with Docker","text":"

In the Deployments tutorial, we looked at creating deployment configuration that enables creating flow runs via the API, with flow code uploaded to a remotely accessible location.

In this guide, we will \"bake\" your code directly into a Docker image. This simplifies storage and lets you deploy flow code without the need for a storage block. Then, we'll configure the deployment so flow runs are executed in a Docker container based on that image. We'll run our Docker instance locally, but you can extend this guide to run it on remote machines.

In this guide we'll:

","tags":["Docker","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/docker/#prerequisites","title":"Prerequisites","text":"

To run a deployed flow in a Docker container, you'll need the following:

Docker Desktop works fine for local testing if you don't already have Docker Engine configured in your environment.

Run a Prefect server

This guide assumes you're already running a Prefect server with prefect server start, as described in the Deployments tutorial.

If you shut down the server, you can start it again by opening another terminal session and starting the Prefect server with the prefect server start CLI command.

","tags":["Docker","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/docker/#storing-prefect-flow-code-in-a-docker-image","title":"Storing Prefect Flow Code in a Docker Image","text":"

First let's create a directory to work from, docker-tutorial.

In this directory, create a sub-directory named flows and put your flow script from the Deployments tutorial in it. In this example, the flow script is named docker-tutorial-flow.py.

The next file you will add to the docker-tutorial directory is a requirements.txt. In this file make sure to include all dependencies that are required for your docker-tutorial-flow.py script.

Next, you will create a Dockerfile that will be used to create a Docker image that will also store the flow code. This Dockerfile should look similar to the following:

FROM prefecthq/prefect:2-python3.9\nCOPY requirements.txt .\nRUN pip install -r requirements.txt --trusted-host pypi.python.org --no-cache-dir\nADD flows /opt/prefect/flows\n

Finally, we build this image by running:

docker build -t docker-tutorial-image .\n

This will create a Docker image in your Docker repository with your Prefect flow stored in the image.

","tags":["Docker","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/docker/#prefectyaml-file","title":"Prefect.yaml File","text":"

Now that you have created a Docker image that stores your Prefect flow code, you're going to make use of Prefect's deployment recipes. In your terminal, run:

prefect init --recipe docker\n

You will see a prompt to input values for the image name and tag; let's use:

image_name: docker-tutorial-image\ntag: latest\n

This will create a prefect.yaml file for us populated with some fields. By default, it will look like this:

# Welcome to your prefect.yaml file! You can use this file for storing and managing\n# configuration for deploying your flows. We recommend committing this file to source\n# control along with your flow code.\n\n# Generic metadata about this project\nname: docker-tutorial\nprefect-version: 2.10.16\n\n# build section allows you to manage and build docker images\nbuild:\n- prefect_docker.deployments.steps.build_docker_image:\nid: build_image\nrequires: prefect-docker>=0.3.0\nimage_name: docker-tutorial-image\ntag: latest\ndockerfile: auto\npush: true\n\n# push section allows you to manage if and how this project is uploaded to remote locations\npush: null\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect.deployments.steps.set_working_directory:\ndirectory: /opt/prefect/docker-tutorial\n\n# the deployments section allows you to provide configuration for deploying flows\ndeployments:\n- name: null\nversion: null\ntags: []\ndescription: null\nschedule: {}\nflow_name: null\nentrypoint: null\nparameters: {}\nwork_pool:\nname: null\nwork_queue_name: null\njob_variables:\nimage: '{{ build_image.image }}'\n

Once you have made any updates you would like to this prefect.yaml file, in your terminal run:

prefect deploy\n

The Prefect deployment wizard will walk you through the deployment experience to deploy your flow in either Prefect Cloud or your local Prefect Server. Once the flow is deployed, you can even save the configuration to your prefect.yaml file for faster deployment in the future.

","tags":["Docker","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/docker/#cleaning-up","title":"Cleaning up","text":"

When you're finished, just close the Prefect UI tab in your browser, and close the terminal sessions running the Prefect server and agent.

","tags":["Docker","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/helm-worker/","title":"Using Prefect Helm Chart for Prefect Workers","text":"

This guide will walk you through the process of deploying Prefect workers using the Prefect Helm Chart. We will also cover how to set up a Kubernetes secret for the API key and mount it as an environment variable.

","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/helm-worker/#prerequisites","title":"Prerequisites","text":"

Before you begin, ensure you have the following installed and configured on your machine:

  1. Helm: https://helm.sh/docs/intro/install/
  2. Kubernetes CLI (kubectl): https://kubernetes.io/docs/tasks/tools/install-kubectl/
  3. Access to a Kubernetes cluster (e.g., Minikube, GKE, AKS, EKS)
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/helm-worker/#step-1-add-prefect-helm-repository","title":"Step 1: Add Prefect Helm Repository","text":"

First, add the Prefect Helm repository to your Helm client:

helm repo add prefect https://prefecthq.github.io/prefect-helm\nhelm repo update\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/helm-worker/#step-2-create-a-namespace-for-prefect-worker","title":"Step 2: Create a Namespace for Prefect Worker","text":"

Create a new namespace in your Kubernetes cluster to deploy the Prefect worker:

kubectl create namespace prefect\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/helm-worker/#step-3-create-a-kubernetes-secret-for-the-api-key","title":"Step 3: Create a Kubernetes Secret for the API Key","text":"

Create a file named api-key.yaml with the following contents:

apiVersion: v1\nkind: Secret\nmetadata:\nname: prefect-api-key\nnamespace: prefect\ntype: Opaque\ndata:\nkey:  <base64-encoded-api-key>\n

Replace <base64-encoded-api-key> with your Prefect Cloud API key encoded in base64. The Helm chart looks for a secret with this name and schema; this can be overridden in the values.yaml.

You can use the following command to generate the base64-encoded value:

echo -n \"your-prefect-cloud-api-key\" | base64\n

Apply the api-key.yaml file to create the Kubernetes secret:

kubectl apply -f api-key.yaml\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/helm-worker/#step-4-configure-prefect-worker-values","title":"Step 4: Configure Prefect Worker Values","text":"

Create a values.yaml file to customize the Prefect worker configuration. Add the following contents to the file:

worker:\ncloudApiConfig:\naccountId: <target account ID>\nworkspaceId: <target workspace ID>\nconfig:\nworkPool: <target work pool name>\n

These settings will ensure that the worker connects to the proper account, workspace, and work pool.

View your Account ID and Workspace ID in your browser URL when logged into Prefect Cloud. For example: https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here.

","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/helm-worker/#step-5-install-prefect-worker-using-helm","title":"Step 5: Install Prefect Worker Using Helm","text":"

Now you can install the Prefect worker using the Helm chart with your custom values.yaml file:

helm install prefect-worker prefect/prefect-worker \\\n--namespace=prefect \\\n-f values.yaml\n
","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/helm-worker/#step-6-verify-deployment","title":"Step 6: Verify Deployment","text":"

Check the status of your Prefect worker deployment:

kubectl get pods -n prefect\n

You should see the Prefect worker pod running.

","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/helm-worker/#conclusion","title":"Conclusion","text":"

You have now successfully deployed a Prefect worker using the Prefect Helm chart and configured the API key as a Kubernetes secret. The worker will now be able to communicate with Prefect Cloud and execute your Prefect flows.

","tags":["kubernetes","containers","orchestration","infrastructure","deployments"],"boost":2},{"location":"guides/deployment/storage-guide/","title":"Where to Store Your Flow Code","text":"

When a flow runs, the execution environment needs access to its code. Flow code is not stored in a Prefect server database instance or Prefect Cloud. When deploying a flow, you have several flow code storage options.

","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#option-1-local-storage","title":"Option 1: Local storage","text":"

Local flow code storage is often used with a Local Subprocess work pool for initial experimentation.

To create a deployment with local storage and a Local Subprocess work pool, do the following:

  1. Run prefect deploy from the root of the directory containing your flow code.
  2. Select that you want to create a new deployment, select the flow code entrypoint, and name your deployment.
  3. Select a process work pool.

You are then shown the location that your flow code will be fetched from when a flow is run. For example:

Your Prefect workers will attempt to load your flow from: \n/my-path/my-flow-file.py. To see more options for managing your flow's code, run:\n\n    $ prefect project recipes ls\n

When deploying a flow to production, you most likely want code to run with infrastructure-specific configuration. The flow code storage options shown below are recommended for production deployments.

","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#option-2-git-based-storage","title":"Option 2: Git-based storage","text":"

Git-based version control platforms are popular locations for code storage. They provide redundancy, version control, and easier collaboration.

GitHub is the most popular cloud-based repository hosting provider. GitLab and Bitbucket are other popular options. Prefect supports each of these platforms.

","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#creating-a-deployment-with-git-based-storage","title":"Creating a deployment with git-based storage","text":"

Run prefect deploy from within a git repository and create a new deployment. You will see a series of prompts. Select that you want to create a new deployment, select the flow code entrypoint, and name your deployment.

Prefect detects that you are in a git repository and asks if you want to store your flow code in a git repository. Select \"y\" and you will be prompted to confirm the URL of your git repository and the branch name, as in the example below:

? Your Prefect workers will need access to this flow's code in order to run it. \nWould you like your workers to pull your flow code from its remote repository when running this flow? [y/n] (y): \n? Is https://github.com/my_username/my_repo.git the correct URL to pull your flow code from? [y/n] (y): \n? Is main the correct branch to pull your flow code from? [y/n] (y): \n? Is this a private repository? [y/n]: y\n

In this example, the git repository is hosted on GitHub. If you are using Bitbucket or GitLab, the URL will match your provider. If the repository is public, enter \"n\" and you are on your way.

If the repository is private, you can enter a token to access your private repository. This token will be saved in an encrypted Prefect Secret block.

? Please enter a token that can be used to access your private repository. This token will be saved as a Secret block via the Prefect API: \"123_abc_this_is_my_token\"\n

Verify that you have a new Secret block in your active workspace named in the format \"deployment-my-deployment-my-flow-name-repo-token\".

Creating access tokens differs for each provider.

GitHub | Bitbucket | GitLab

We recommend using HTTPS with fine-grained Personal Access Tokens so that you can limit access by repository. See the GitHub docs for Personal Access Tokens (PATs).

Under Your Profile->Developer Settings->Personal access tokens->Fine-grained token choose Generate New Token and fill in the required fields. Under Repository access choose Only select repositories and grant the token permissions for Contents.

We recommend using HTTPS with Repository, Project, or Workspace Access Tokens.

You can create a Repository Access Token with Scopes->Repositories->Read.

We recommend using HTTPS with Project Access Tokens.

In your repository in the GitLab UI, select Settings->Repository->Project Access Tokens and check read_repository under Select scopes.

If you want to configure a Secret block ahead of time for use when deploying a prefect.yaml file, create the block via code or the Prefect UI and reference it like this:

pull:\n- prefect.deployments.steps.git_clone:\nrepository: https://bitbucket.org/org/my-private-repo.git\naccess_token: \"{{ prefect.blocks.secret.my-block-name }}\"\n
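
For example, a minimal sketch of creating that Secret block from Python before deploying (the token value is a placeholder; the block name must match the reference in the pull step):

from prefect.blocks.system import Secret\n\n# Save the access token so it can be referenced as\n# {{ prefect.blocks.secret.my-block-name }} in prefect.yaml\nSecret(value=\"your-access-token\").save(\"my-block-name\", overwrite=True)\n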

Alternatively, you can create a Credentials block ahead of time and reference it in the prefect.yaml pull step.

GitHub | Bitbucket | GitLab
  1. Install the Prefect-Github library with pip install -U prefect-github
  2. Register the blocks in that library to make them available on the server with prefect block register -m prefect_github.
  3. Create a GitHub Credentials block via code or the Prefect UI and reference it as shown above.
  4. In addition to the block name, most users will need to fill in the GitHub Username and GitHub Personal Access Token fields.
pull:\n- prefect.deployments.steps.git_clone:\nrepository: https://github.com/discdiver/my-private-repo.git\ncredentials: \"{{ prefect.blocks.github-credentials.my-block-name }}\"\n
  1. Install the relevant library with pip install -U prefect-bitbucket
  2. Register the blocks in that library with prefect block register -m prefect_bitbucket
  3. Create a Bitbucket credentials block via code or the Prefect UI and reference it as shown above.
  4. In addition to the block name, most users will need to fill in the Bitbucket Username and Bitbucket Personal Access Token fields.
pull:\n- prefect.deployments.steps.git_clone:\nrepository: https://bitbucket.org/org/my-private-repo.git\ncredentials: \"{{ prefect.blocks.bitbucket-credentials.my-block-name }}\"\n
  1. Install the relevant library with pip install -U prefect-gitlab
  2. Register the blocks in that library with prefect block register -m prefect_gitlab
  3. Create a GitLab credentials block via code or the Prefect UI and reference it as shown above.
  4. In addition to the block name, most users will need to fill in the GitLab Username and GitLab Personal Access Token fields.
pull:\n- prefect.deployments.steps.git_clone:\nrepository: https://gitlab.com/org/my-private-repo.git\ncredentials: \"{{ prefect.blocks.gitlab-credentials.my-block-name }}\"\n

Push your code

When you make a change to your code, Prefect does not push your code to your git-based version control platform. You need to push your code manually or as part of your CI/CD pipeline. This design decision is intentional, to avoid confusion about the git history and push process.

","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#option-3-docker-based-storage","title":"Option 3: Docker-based storage","text":"

Another popular way to store your flow code is to include it in a Docker image. The following work pools use Docker containers, so the flow code can be directly baked into the image:

CI/CD may not require push or pull steps

You don't need push or pull steps in the prefect.yaml file if using CI/CD to build a Docker image outside of Prefect. Instead, the work pool can reference the image directly.

","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#option-4-cloud-provider-storage","title":"Option 4: Cloud-provider storage","text":"

You can store your code in an AWS S3 bucket, Azure Blob Storage container, or GCP GCS bucket and specify the destination directly in the push and pull steps of your prefect.yaml file.

To create a templated prefect.yaml file run prefect init and select the recipe for the applicable cloud-provider storage. Below are the recipe options and the relevant portions of the prefect.yaml file.

AWS S3 bucket | Azure Blob Storage container | GCP GCS bucket

Choose s3 as the recipe and enter the bucket name when prompted.

# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_aws.deployments.steps.push_to_s3:\nid: push_code\nrequires: prefect-aws>=0.3.4\nbucket: my-bucket\nfolder: my-folder\ncredentials: \"{{ prefect.blocks.aws-credentials.my-credentials-block }}\" # if private\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect_aws.deployments.steps.pull_from_s3:\nid: pull_code\nrequires: prefect-aws>=0.3.4\nbucket: '{{ push_code.bucket }}'\nfolder: '{{ push_code.folder }}'\ncredentials: \"{{ prefect.blocks.aws-credentials.my-credentials-block }}\" # if private \n

If the bucket requires authentication to access it, you can do the following:

  1. Install the Prefect-AWS library with pip install -U prefect-aws
  2. Register the blocks in Prefect-AWS with prefect block register -m prefect_aws
  3. Create a user with a role with read and write permissions to access the bucket. If using the UI, create an access key pair with IAM->Users->Security credentials->Access keys->Create access key. Choose Use case->Other and then copy the Access key and Secret access key values.
  4. Create an AWS Credentials block via code or the Prefect UI. In addition to the block name, most users will fill in the AWS Access Key ID and AWS Access Key Secret fields.
  5. Reference the block as shown in the push and pull steps above.
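
As a sketch of step 4 done from Python rather than the UI (the key values are placeholders; the block name matches the one referenced in the push and pull steps above):

from prefect_aws import AwsCredentials\n\nAwsCredentials(\n    aws_access_key_id=\"YOUR_ACCESS_KEY_ID\",\n    aws_secret_access_key=\"YOUR_SECRET_ACCESS_KEY\",\n).save(\"my-credentials-block\", overwrite=True)\n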

Choose azure as the recipe and enter the container name when prompted.

# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_azure.deployments.steps.push_to_azure_blob_storage:\nid: push_code\nrequires: prefect-azure>=0.2.8\ncontainer: my-prefect-azure-container\nfolder: my-folder\ncredentials: \"{{ prefect.blocks.azure-blob-storage-credentials.my-credentials-block }}\" # if private\n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect_azure.deployments.steps.pull_from_azure_blob_storage:\nid: pull_code\nrequires: prefect-azure>=0.2.8\ncontainer: '{{ push_code.container }}'\nfolder: '{{ push_code.folder }}'\ncredentials: \"{{ prefect.blocks.azure-blob-storage-credentials.my-credentials-block }}\" # if private\n

If the blob requires authentication to access it, you can do the following:

  1. Install the Prefect-Azure library with pip install -U prefect-azure
  2. Register the blocks in Prefect-Azure with prefect block register -m prefect_azure
  3. Create an access key for a role with sufficient (read and write) permissions to access the blob. A connection string that will contain all needed information can be created in the UI under Storage Account->Access keys.
  4. Create an Azure Blob Storage Credentials block via code or the Prefect UI. Enter a name for the block and paste the connection string into the Connection String field.
  5. Reference the block as shown in the push and pull steps above.

Choose gcs as the recipe and enter the bucket name when prompted.

# push section allows you to manage if and how this project is uploaded to remote locations\npush:\n- prefect_gcp.deployment.steps.push_to_gcs:\nid: push_code\nrequires: prefect-gcp>=0.4.3\nbucket: my-bucket\nfolder: my-folder\ncredentials: \"{{ prefect.blocks.gcp-credentials.my-credentials-block }}\" # if private \n\n# pull section allows you to provide instructions for cloning this project in remote locations\npull:\n- prefect_gcp.deployment.steps.pull_from_gcs:\nid: pull_code\nrequires: prefect-gcp>=0.4.3\nbucket: '{{ push_code.bucket }}'\nfolder: '{{ push_code.folder }}'\ncredentials: \"{{ prefect.blocks.gcp-credentials.my-credentials-block }}\" # if private \n

If the bucket requires authentication to access it, you can do the following:

  1. Install the Prefect-GCP library with pip install -U prefect-gcp
  2. Register the blocks in Prefect-GCP with prefect block register -m prefect_gcp
  3. Create a service account in GCP for a role with read and write permissions to access the bucket contents. If using the GCP console, go to IAM & Admin->Service accounts->Create service account. After choosing a role with the required permissions, see your service account and click on the three dot menu in the Actions column. Select Manage Keys->ADD KEY->Create new key->JSON. Download the JSON file.
  4. Create a GCP Credentials block via code or the Prefect UI. Enter a name for the block and paste the entire contents of the JSON key file into the Service Account Info field.
  5. Reference the block as shown in the push and pull steps above.

Another option for authentication is for the worker to have access to the storage location at runtime via SSH keys.

Alternatively, you can inject environment variables into your deployment like this example that uses an environment variable named CUSTOM_FOLDER:

 push:\n- prefect_gcp.deployment.steps.push_to_gcs:\nid: push_code\nrequires: prefect-gcp>=0.4.3\nbucket: my-bucket\nfolder: '{{ $CUSTOM_FOLDER }}'\n
","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#including-and-excluding-files-from-storage","title":"Including and excluding files from storage","text":"

By default, Prefect uploads all files in the current folder to the configured storage location when you create a deployment.

When using a git repository, Docker image, or cloud-provider storage location, you may want to exclude certain files or directories.

  - If you are familiar with git, you are likely familiar with the .gitignore file.
  - If you are familiar with Docker, you are likely familiar with the .dockerignore file.
  - For cloud-provider storage, the .prefectignore file serves the same purpose and follows a similar syntax to those files. So an entry of *.pyc will exclude all .pyc files from upload.

","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#other-code-storage-creation-methods","title":"Other code storage creation methods","text":"

In earlier versions of Prefect storage blocks were the recommended way to store flow code. Storage blocks are still supported, but not recommended.

As shown above, repositories can be referenced directly through interactive prompts with prefect deploy or in a prefect.yaml. When authentication is needed, Secret or Credential blocks can be referenced, and in some cases created automatically through interactive deployment creation prompts.

","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"guides/deployment/storage-guide/#next-steps","title":"Next steps","text":"

You've seen options for where to store your flow code.

We recommend using Docker-based storage or git-based storage for your production deployments.

Check out more guides to reach your goals with Prefect.

","tags":["guides","guide","flow code","storage","code storage","repository","github","git","gitlab","bitbucket","s3","azure","blob storage","bucket","AWS","GCP","GCS","Google Cloud Storage","Azure Blob Storage","Docker","storage"],"boost":2},{"location":"host/","title":"Hosting a Prefect server","text":"

After you install Prefect you have a Python SDK client that can communicate with Prefect Cloud, the platform hosted by Prefect. You also have an API server backed by a database and a UI.

In this section you'll learn how to host your own Prefect server.

Spin up a local Prefect server UI with the prefect server start CLI command in the terminal:

$ prefect server start\n

Open the URL for the Prefect server UI (http://127.0.0.1:4200 by default) in a browser.

Shut down the Prefect server with ctrl + c in the terminal.","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#differences-between-a-prefect-server-and-prefect-cloud","title":"Differences between a Prefect server and Prefect Cloud","text":"

The self-hosted Prefect server and Prefect Cloud share a common set of features. Prefect Cloud also includes the following features:

You can read more about Prefect Cloud in the Cloud section.

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#configuring-a-prefect-server","title":"Configuring a Prefect Server","text":"

Go to your terminal session and run this command to set the API URL to point to a Prefect server instance:

$ prefect config set PREFECT_API_URL=\"http://127.0.0.1:4200/api\"\n

PREFECT_API_URL required when running Prefect inside a container

You must set the API server address to use Prefect within a container, such as a Docker container.

You can save the API server address in a Prefect profile. Whenever that profile is active, the API endpoint will be at that address.

See Profiles & Configuration for more information on profiles and configurable Prefect settings.

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#prefect-database","title":"Prefect Database","text":"

The Prefect database persists data to track the state of your flow runs and related Prefect concepts, including:

Currently Prefect supports the following databases:

  - SQLite (the default)
  - PostgreSQL

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#using-the-database","title":"Using the database","text":"

A local SQLite database is the default database and is configured upon Prefect installation. The database is located at ~/.prefect/prefect.db by default.

To reset your database, run the CLI command:

prefect server database reset -y\n

This command will clear all data and reapply the schema.

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#database-settings","title":"Database settings","text":"

Prefect provides several settings for configuring the database. Here are the default settings:

PREFECT_API_DATABASE_CONNECTION_URL='sqlite+aiosqlite:///${PREFECT_HOME}/prefect.db'\nPREFECT_API_DATABASE_ECHO='False'\nPREFECT_API_DATABASE_MIGRATE_ON_START='True'\nPREFECT_API_DATABASE_PASSWORD='None'\n

You can save a setting to your active Prefect profile with prefect config set.

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#configuring-a-postgresql-database","title":"Configuring a PostgreSQL database","text":"

To connect Prefect to a PostgreSQL database, you can set the following environment variable:

prefect config set PREFECT_API_DATABASE_CONNECTION_URL=\"postgresql+asyncpg://postgres:yourTopSecretPassword@localhost:5432/prefect\"\n

The above environment variable assumes that:

  - You have a PostgreSQL user named postgres with the password yourTopSecretPassword
  - Your PostgreSQL instance is running on the same host as the Prefect server, localhost, on the default port 5432
  - Your PostgreSQL instance has a database named prefect

If you want to quickly start a PostgreSQL instance that can be used as your Prefect database, you can use the following command that will start a Docker container running PostgreSQL:

docker run -d --name prefect-postgres -v prefectdb:/var/lib/postgresql/data -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=yourTopSecretPassword -e POSTGRES_DB=prefect postgres:latest\n

The above command:

  - Starts a Docker container named prefect-postgres in detached mode
  - Creates a database named prefect owned by the postgres user with the password yourTopSecretPassword
  - Mounts the PostgreSQL data to a Docker volume called prefectdb so it persists across container restarts
  - Exposes PostgreSQL on port 5432 of the host

You can inspect your profile to be sure that the environment variable has been set properly:

prefect config view --show-sources\n

Start the Prefect server; from now on it will use your PostgreSQL database instance:

prefect server start\n
","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#in-memory-database","title":"In-memory database","text":"

One of the benefits of SQLite is in-memory database support.

To use an in-memory SQLite database, set the following environment variable:

prefect config set PREFECT_API_DATABASE_CONNECTION_URL=\"sqlite+aiosqlite:///file::memory:?cache=shared&uri=true&check_same_thread=false\"\n

Use SQLite database for testing only

SQLite is only supported by Prefect for testing purposes and is not compatible with multiprocessing.

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#database-versions","title":"Database versions","text":"

The following database versions are required for use with Prefect:

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#migrations","title":"Migrations","text":"

Prefect uses Alembic to manage database migrations. Alembic is a database migration tool for usage with the SQLAlchemy Database Toolkit for Python. Alembic provides a framework for generating and applying schema changes to a database.

To apply migrations to your database you can run the following commands:

To upgrade:

prefect server database upgrade -y\n

To downgrade:

prefect server database downgrade -y\n

You can use the -r flag to specify a specific migration version to upgrade or downgrade to. For example, to downgrade to the previous migration version you can run:

prefect server database downgrade -y -r -1\n

or to downgrade to a specific revision:

prefect server database downgrade -y -r d20618ce678e\n

See the contributing docs for information on how to create new database migrations.

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#notifications","title":"Notifications","text":"

When you use Prefect Cloud you gain access to a hosted platform with Workspace & User controls, Events, and Automations. Prefect Cloud has an option for automation notifications. The more limited Notifications option is provided for the self-hosted Prefect server.

Notifications enable you to set up alerts that are sent when a flow enters any state you specify. When your flow and task runs change state, Prefect notes the state change and checks whether the new state matches any notification policies. If it does, a new notification is queued.

Prefect supports sending notifications via:

Notifications in Prefect Cloud

Prefect Cloud uses the robust Automations interface to enable notifications related to flow run state changes and work queue health.

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"host/#configure-notifications","title":"Configure notifications","text":"

To configure a notification in a Prefect server, go to the Notifications page and select Create Notification or the + button.

Notifications are structured just as you would describe them to someone. You can choose:

For email notifications (supported on Prefect Cloud only), the configuration requires email addresses to which the message is sent.

For Slack notifications, the configuration requires webhook credentials for your Slack and the channel to which the message is sent.

For example, to get a Slack message if a flow with a daily-etl tag fails, the notification will read:

If a run of any flow with daily-etl tag enters a failed state, send a notification to my-slack-webhook

When the conditions of the notification are triggered, you\u2019ll receive a message:

The fuzzy-leopard run of the daily-etl flow entered a failed state at 22-06-27 16:21:37 EST.

On the Notifications page you can pause, edit, or delete any configured notification.

","tags":["UI","dashboard","Prefect Server","Observability","Events","Serve","Database","SQLite"],"boost":2},{"location":"integrations/","title":"Integrations","text":"

Prefect integrations are organized into collections of pre-built tasks, flows, blocks and more that are installable as PyPI packages.

Airbyte

Maintained by Prefect

Alert

Maintained by Khuyen Tran

AWS

Maintained by Prefect

Azure

Maintained by Prefect

Bitbucket

Maintained by Prefect

Census

Maintained by Prefect

CubeJS

Maintained by Alessandro Lollo

Dask

Maintained by Prefect

Databricks

Maintained by Prefect

dbt

Maintained by Prefect

Docker

Maintained by Prefect

Earthdata

Maintained by Giorgio Basile

Email

Maintained by Prefect

Firebolt

Maintained by Prefect

Fivetran

Maintained by Fivetran

Fugue

Maintained by The Fugue Development Team

GCP

Maintained by Prefect

GitHub

Maintained by Prefect

GitLab

Maintained by Prefect

Google Sheets

Maintained by Stefano Cascavilla

Great Expectations

Maintained by Prefect

HashiCorp Vault

Maintained by Pavel Chekin

Hex

Maintained by Prefect

Hightouch

Maintained by Prefect

Jupyter

Maintained by Prefect

Kubernetes

Maintained by Prefect

KV

Maintained by Michael Adkins

MetricFlow

Maintained by Alessandro Lollo

Monday

Maintained by Prefect

MonteCarlo

Maintained by Prefect

OpenAI

Maintained by Prefect

OpenMetadata

Maintained by Prefect

Ray

Maintained by Prefect

Shell

Maintained by Prefect

Sifflet

Maintained by Sifflet and Alessandro Lollo

Slack

Maintained by Prefect

Snowflake

Maintained by Prefect

Soda Core

Maintained by Soda and Alessandro Lollo

Spark on Kubernetes

Maintained by Manoj Babu Katragadda

SQLAlchemy

Maintained by Prefect

Stitch

Maintained by Alessandro Lollo

Transform

Maintained by Alessandro Lollo

Twitter

Maintained by Prefect

","tags":["tasks","flows","blocks","collections","task library","integrations","Airbyte","Alert","AWS","Azure","Bitbucket","Census","CubeJS","Dask","Databricks","dbt","Docker","Earthdata","Email","Firebolt","Fivetran","Fugue","GCP","GitHub","GitLab","Google Sheets","Great Expectations","HashiCorp Vault","Hex","Hightouch","Jupyter","Kubernetes","KV","MetricFlow","Monday","MonteCarlo","OpenAI","OpenMetadata","Ray","Shell","Sifflet","Slack","Snowflake","Soda Core","Spark on Kubernetes","SQLAlchemy","Stitch","Transform","Twitter"],"boost":2},{"location":"integrations/contribute/","title":"Contribute","text":"

We welcome contributors! You can help contribute blocks and integrations by following these steps.

","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#contributing-blocks","title":"Contributing Blocks","text":"

Building your own custom block is simple!

  1. Subclass from Block.
  2. Add a description alongside an Attributes and Example section in the docstring.
  3. Set a _logo_url to point to a relevant image.
  4. Create the pydantic.Fields of the block with a type annotation, default or default_factory, and a short description about the field.
  5. Define the methods of the block.

For example, this is how the Secret block is implemented:

from pydantic import Field, SecretStr\nfrom prefect.blocks.core import Block\n\nclass Secret(Block):\n    \"\"\"\n    A block that represents a secret value. The value stored in this block will be obfuscated when\n    this block is logged or shown in the UI.\n\n    Attributes:\n        value: A string value that should be kept secret.\n\n    Example:\n        ```python\n        from prefect.blocks.system import Secret\n        secret_block = Secret.load(\"BLOCK_NAME\")\n\n        # Access the stored secret\n        secret_block.get()\n        ```\n    \"\"\"\n\n    _logo_url = \"https://images.ctfassets.net/gm98wzqotmnx/5uUmyGBjRejYuGTWbTxz6E/3003e1829293718b3a5d2e909643a331/image8.png?h=250\"\n\n    value: SecretStr = Field(\n        default=..., description=\"A string value that should be kept secret.\"\n    )  # ... indicates it's a required field\n\n    def get(self):\n        return self.value.get_secret_value()\n

To view in Prefect Cloud or the Prefect server UI, register the block.
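
For instance, if you saved a custom block like the one above to a file, you could register it straight from that file with the Prefect CLI. This is a minimal sketch, assuming the class is saved in a hypothetical my_block.py:

prefect block register --file my_block.py\n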

","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#contributing-integrations","title":"Contributing Integrations","text":"

Anyone can create and share a Prefect Integration, and we encourage anyone interested in creating an integration to do so!

","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#generate-a-project","title":"Generate a project","text":"

To help you get started, we've created a template that gives you the tools you need to create and publish your integration.

Use the Prefect Integration template to get started creating an integration with a bootstrapped project!

","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#list-a-project-in-the-integrations-catalog","title":"List a project in the Integrations Catalog","text":"

To list your integration in the Prefect Integrations Catalog, submit a PR to the Prefect repository adding a file to the docs/integrations/catalog directory with details about your integration. Please use TEMPLATE.yaml in that folder as a guide.

","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/contribute/#contribute-fixes-or-enhancements-to-integrations","title":"Contribute fixes or enhancements to Integrations","text":"

If you'd like to help fix an issue or add a feature to any of our Integrations, please propose changes through a pull request from a fork of the repository.

  1. Fork the repository
  2. Clone the forked repository
  3. Install the repository and its dependencies:
    pip install -e \".[dev]\"\n
  4. Make desired changes
  5. Add tests
  6. Insert an entry to the Integration's CHANGELOG.md
  7. Install pre-commit to perform quality checks prior to commit:
    pre-commit install\n
  8. git commit, git push, and create a pull request
","tags":["blocks","storage","secrets","configuration","infrastructure","integrations","integrations","contributing"],"boost":2},{"location":"integrations/usage/","title":"Using Integrations","text":"","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#installing-an-integration","title":"Installing an Integration","text":"

Install the Integration via pip.

For example, to use prefect-aws:

pip install prefect-aws\n
","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#registering-blocks-from-an-integration","title":"Registering Blocks from an Integration","text":"

Once the Prefect Integration is installed, register the blocks within the integration to view them in the Prefect Cloud UI:

For example, to register the blocks available in prefect-aws:

prefect block register -m prefect_aws\n

Updating blocks from an integration

If you install an updated Prefect integration that adds fields to a block type, you will need to re-register that block type.
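
Re-registering uses the same command as the initial registration; for example, after upgrading prefect-aws you would run the registration again:

prefect block register -m prefect_aws\n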

Loading a block in code

To use the load method on a Block, you must already have a block document saved either through code or through the Prefect UI.
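
As a minimal sketch of that workflow, using prefect-aws with hypothetical names (my-aws-creds is whatever you choose to call the block document, and the placeholder keys are illustrative only):

from prefect_aws import AwsCredentials\n\n# Save a block document once, either in code like this or through the UI\nAwsCredentials(\n    aws_access_key_id=\"PLACEHOLDER\",\n    aws_secret_access_key=\"PLACEHOLDER\",\n).save(\"my-aws-creds\", overwrite=True)\n\n# Later, load the saved block document anywhere by name\ncredentials = AwsCredentials.load(\"my-aws-creds\")\n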

Learn more about Blocks here!

","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#using-tasks-and-flows-from-an-integration","title":"Using Tasks and Flows from an Integration","text":"

Integrations also contain pre-built tasks and flows that can be imported and called within your code.

As an example, to read a secret from AWS Secrets Manager with the read_secret task:

from prefect import flow\nfrom prefect_aws import AwsCredentials\nfrom prefect_aws.secrets_manager import read_secret\n\n@flow\ndef connect_to_database():\n    aws_credentials = AwsCredentials.load(\"MY_BLOCK_NAME\")\n    secret_value = read_secret(\n        secret_name=\"db_password\",\n        aws_credentials=aws_credentials\n    )\n\n    # Use secret_value to connect to a database\n
","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#customizing-tasks-and-flows-from-an-integration","title":"Customizing Tasks and Flows from an Integration","text":"

To customize the settings of a task or flow pre-configured in a collection, use with_options:

from prefect import flow\nfrom prefect_dbt.cloud import DbtCloudCredentials\nfrom prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion\n\ncustom_run_dbt_cloud_job = trigger_dbt_cloud_job_run_and_wait_for_completion.with_options(\n    name=\"Run My DBT Cloud Job\",\n    retries=2,\n    retry_delay_seconds=10\n)\n\n@flow\ndef run_dbt_job_flow():\n    run_result = custom_run_dbt_cloud_job(\n        dbt_cloud_credentials=DbtCloudCredentials.load(\"my-dbt-cloud-credentials\"),\n        job_id=1\n    )\n\nrun_dbt_job_flow()\n
","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"integrations/usage/#recipes-and-tutorials","title":"Recipes and Tutorials","text":"

To learn more about how to use Integrations, check out Prefect recipes on GitHub. These recipes provide examples of how Integrations can be used in various scenarios.

","tags":["tasks","flows","blocks","integrations","task library","contributing"],"boost":2},{"location":"recipes/recipes/","title":"Prefect Recipes","text":"

Prefect recipes are common, extensible examples for setting up Prefect in your execution environment with ready-made ingredients such as Dockerfiles, Terraform files, and GitHub Actions.

Recipes are useful when you are looking for tutorials on how to deploy an agent, use event-driven flows, set up unit testing, and more.

The following are Prefect recipes specific to Prefect 2. You can find a full repository of recipes at https://github.com/PrefectHQ/prefect-recipes and additional recipes at Prefect Discourse.

","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#recipe-catalog","title":"Recipe catalog","text":"Agent on Azure with Kubernetes

Configure Prefect on Azure with Kubernetes, running a Prefect agent to execute deployment flow runs.

Maintained by Prefect

This recipe uses:

Agent on ECS Fargate with AWS CLI

Run a Prefect 2 agent on ECS Fargate using the AWS CLI.

Maintained by Prefect

This recipe uses:

Agent on ECS Fargate with Terraform

Run a Prefect 2 agent on ECS Fargate using Terraform.

Maintained by Prefect

This recipe uses:

Agent on an Azure VM

Set up an Azure VM and run a Prefect agent.

Maintained by Prefect

This recipe uses:

Flow Deployment with GitHub Actions

Deploy a Prefect flow with storage and infrastructure blocks, update and push Docker image to container registry.

Maintained by Prefect

This recipe uses:

Flow Deployment with GitHub Storage and Docker Infrastructure

Create a deployment with GitHub as storage and a Docker container as infrastructure

Maintained by Prefect

This recipe uses:

Prefect server on an AKS Cluster

Deploy a Prefect server to an Azure Kubernetes Service (AKS) Cluster with Azure Blob Storage.

Maintained by Prefect

This recipe uses:

Serverless Prefect with AWS Chalice

Execute Prefect flows in an AWS Lambda function managed by Chalice.

Maintained by Prefect

This recipe uses:

Serverless Workflows with ECSTask Blocks

Deploy a Prefect agent to AWS ECS Fargate using GitHub Actions and ECSTask infrastructure blocks.

Maintained by Prefect

This recipe uses:

","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#contributing-recipes","title":"Contributing recipes","text":"

We're always looking for new recipe contributions! See the Prefect Recipes repository for details on how you can add your Prefect recipe, share best practices with fellow Prefect users, and earn some swag.

Prefect recipes provide a vital cookbook where users can find helpful code examples and, when appropriate, common steps for specific Prefect use cases.

We love recipes from anyone who has example code that another Prefect user can benefit from (e.g. a Prefect flow that loads data into Snowflake).

Have a blog post, Discourse article, or tutorial you\u2019d like to share as a recipe? All submissions are welcome. Clone the prefect-recipes repo, create a branch, add a link to your recipe to the README, and submit a PR. Have more questions? Read on.

","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-is-a-recipe","title":"What is a recipe?","text":"

A Prefect recipe is like a cookbook recipe: it tells you what you need \u2014 the ingredients \u2014 and some basic steps, but assumes you can put the pieces together. Think of the Hello Fresh meal experience, but for dataflows.

A tutorial, on the other hand, is Julia Child holding your hand through the entire cooking process: explaining each ingredient and procedure, demonstrating best practices, pointing out potential problems, and generally making sure you can\u2019t stray from the happy path to a delicious meal.

We love Julia, and we love tutorials. But we don\u2019t expect that a Prefect recipe should handhold users through every step and possible contingency of a solution. A recipe can start from an expectation of more expertise and problem-solving ability on the part of the reader.

To see an example of a high quality recipe, check out Serverless with AWS Chalice. This recipe includes all of the elements we like to see.

","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#steps-to-add-your-recipe","title":"Steps to add your recipe","text":"

Here\u2019s our guide to creating a recipe:

# Clone the repository\ngit clone git@github.com:PrefectHQ/prefect-recipes.git\ncd prefect-recipes\n\n# Create and checkout a new branch\ngit checkout -b new_recipe_branch_name\n
  1. Add your recipe. Your code may simply be a copy/paste of a single Python file or an entire folder. Unsure of where to add your file or folder? Just add it under the flows-advanced/ folder; a Prefect Recipes maintainer will help you find the best place for your recipe. Just want to direct others to a project you made, whether it be a repo or a blog post? Simply link to it in the Prefect Recipes README!
  2. (Optional) Write a README.
  3. Include a dependencies file, if applicable.
  4. Push your code and make a PR to the repository.

That\u2019s it!

","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-makes-a-good-recipe","title":"What makes a good recipe?","text":"

Every recipe is useful, as other Prefect users can adapt the recipe to their needs. Particularly good ones help a Prefect user bake a great dataflow solution! Take a look at the prefect-recipes repo to see some examples.

","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-are-the-common-ingredients-of-a-good-recipe","title":"What are the common ingredients of a good recipe?","text":"","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#what-are-some-tips-for-a-good-recipe-readme","title":"What are some tips for a good recipe README?","text":"

A thoughtful README can take a recipe from good to great. Here are some best practices that we\u2019ve found make for a great recipe README:

","tags":["recipes","best practices","examples"],"boost":2},{"location":"recipes/recipes/#next-steps","title":"Next steps","text":"

We hope you\u2019ll feel comfortable sharing your Prefect solutions as recipes in the prefect-recipes repo. Collaboration and knowledge sharing are defining attributes of our Prefect Community!

Have questions about sharing or using recipes? Reach out on our active Prefect Slack Community!

Happy engineering!

","tags":["recipes","best practices","examples"],"boost":2},{"location":"tutorial/","title":"Tutorial Overview","text":"

This tutorial provides a step-by-step walkthrough of Prefect core concepts and instructions on how to use them.

By the end of this tutorial you will have:

  1. Created a Flow
  2. Added Tasks to It
  3. Created a Work Pool
  4. Deployed a Worker
  5. Deployed the Flow
  6. Run the Flow on the Worker

If you're looking for examples of more advanced operations (like deploying on Kubernetes), check out Prefect's guides.

","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/#prerequisites","title":"Prerequisites","text":"
  1. Before you start, make sure you have Python installed, then install Prefect: pip install -U prefect

    1. See the install guide for more detailed instructions.
  2. Create a GitHub repository for your tutorial; let's call it prefect-tutorial.

  3. This tutorial requires a Prefect Server instance, so sign up for a forever free Prefect Cloud Account or, alternatively, self-host a Prefect Server.

","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/#what-is-prefect","title":"What is Prefect?","text":"

Prefect orchestrates workflows \u2014 it simplifies the creation, scheduling, and monitoring of complex data pipelines. With Prefect, you define workflows as Python code and let it handle the rest.

Prefect also provides error handling, retry mechanisms, and a user-friendly dashboard for monitoring. It's the easiest way to transform any Python function into a unit of work that can be observed and orchestrated.

Just bring your Python code, sprinkle in a few decorators, and go!

","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/#reference-material","title":"Reference Material","text":"

If you've never used Prefect before, check out the core concepts docs. Many of these concepts will be introduced in the tutorial; however, the concept docs provide a more comprehensive overview of each one.

","tags":["tutorial","getting started","basics","tasks","flows","subflows","deployments","workers","work pools"],"boost":2},{"location":"tutorial/deployments/","title":"Deploying Flows","text":"","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#why-deploy","title":"Why Deploy?","text":"

One of the most common reasons to use a tool like Prefect is scheduling. You want your flows running on production infrastructure in a consistent and predictable way. Up to this point, we\u2019ve demonstrated running Prefect flows as scripts, but this means you have been the one triggering flow runs. In order to schedule flow runs or trigger them based on events or certain conditions, you\u2019ll need to deploy them.

A deployed flow can be interacted with in additional ways:

","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#what-is-a-deployment","title":"What is a Deployment?","text":"

Deploying a flow is the act of specifying where, when, and how it will run. This information is encapsulated and sent to Prefect as a Deployment which contains the crucial metadata needed for orchestration. Deployments elevate workflows from functions that you call manually to API-managed entities.

Attributes of a deployment include (but are not limited to):

Before you build your first deployment, it's helpful to understand how Prefect configures flow run infrastructure. This means setting up a work pool and a worker to enable your flows to run on your infrastructure.

","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#why-work-pools-and-workers","title":"Why Work Pools and Workers?","text":"

Running Prefect flows locally is great for testing and development purposes. But for production settings, Prefect work pools and workers allow you to run flows in the environments best suited to their execution.

Workers and work pools bridge the Prefect orchestration layer with the infrastructure the flows are actually executed on.

Work pools prioritize flow runs and respond to polling from the worker. Workers are lightweight processes that run in the environment where flow runs are executed.

You can create and configure work pools within the Prefect UI.

Work Pools:

graph TD\n\n    subgraph your_infra[\"Your Execution Environment\"]\n        worker[\"Worker\"]\n                subgraph flow_run_infra[Flow Run Infra]\n                    flow_run((\"Flow Run\"))\n                end\n\n    end\n\n    subgraph api[\"Prefect API\"]\n                Deployment --> |assigned to| work_pool\n        work_pool([\"Work Pool\"])\n    end\n\n    worker --> |polls| work_pool\n    worker --> |creates| flow_run_infra

Security Note:

Prefect provides execution through the hybrid model, which allows you to deploy workflows that run in the environments best suited to their execution while allowing you to keep your code and data completely private. There is no ingress required. For more information see here.

Now that we\u2019ve reviewed the concepts of a Work Pool and Worker, let\u2019s create them so that you can deploy your tutorial flow, and execute it later using the Prefect API.

","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#setting-up-the-worker-and-work-pool","title":"Setting up the Worker and Work Pool","text":"

For this tutorial you will create a Process type work pool via the CLI.

The Process work pool type specifies that all work sent to this work pool will run as a subprocess on the same infrastructure on which the worker is running.

Other work pool types

There are work pool types for all major managed code execution platforms, such as Kubernetes services, as well as serverless computing environments such as AWS ECS, Azure Container Instances, and GCP Cloud Run.

These are expanded upon in the Guides section.

In your terminal run the following command to set up a Process type work pool.

prefect work-pool create --type process my-process-pool\n

Let\u2019s confirm that the work pool was successfully created by running the following command in the same terminal. You should see your new my-process-pool in the output list.

prefect work-pool ls\n

Finally, let\u2019s double check that you can see this work pool in the Prefect Cloud UI. Navigate to the Work Pool tab and verify that you see my-process-pool listed.

                                             Work Pools                                             \n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name                  \u2503 Type          \u2503                                   ID \u2503 Concurrency Limit \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 my-process-pool       \u2502 process       \u2502 33ce63a5-cc80-4f43-9092-11122ccea60b \u2502 None              \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n

When you click into the my-process-pool, select the \"Work Queues\" tab. You should see a red status icon listed for the default work queue signifying that this queue is not ready to submit work. Work queues are an advanced feature. You can learn more about them in the work queue documentation.

To get the work queue healthy and ready to submit flow runs, you need to start a worker.

Workers are lightweight polling processes that kick off scheduled flow runs on a certain type of infrastructure (like Process). To start a worker on your laptop, open a new terminal and confirm that your virtual environment has prefect installed.

Run the following command in this new terminal to start the worker:

prefect worker start --pool my-process-pool\n

You should see the worker start - it's now polling the Prefect API to request any scheduled flow runs it should pick up and then submit for execution. You\u2019ll see your new worker listed in the UI under the worker tab of the Work Pool page with a recent last polled date. You should also be able to see a Healthy status indicator in the default work queue under the work queue tab.

You need to keep this terminal session active for the worker to continue to pick up jobs. Since you are running this worker locally, the worker will terminate if you close the terminal. Therefore, in a production setting this worker should run as a daemonized or managed process. See next steps for more information on this.

Now that we've set up your work pool and worker, we have what we need to kick off and execute flow runs of deployed flows. Let's deploy your tutorial flow to my-process-pool.

","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#create-a-deployment","title":"Create a Deployment","text":"","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#from-our-previous-steps-we-now-have","title":"From our previous steps, we now have:","text":"
  1. A flow
  2. A work pool
  3. A worker
  4. An understanding of why Prefect Deployments are useful

Now it\u2019s time to put it all together.

In your terminal (not the terminal in which the worker is running), let\u2019s run the following command from the root of your repo to begin deploying your flow:

prefect deploy\n

Specifying an entrypoint

In non-interactive settings (like CI/CD), you can specify the entrypoint of your flow directly in the CLI.

For example, if my_flow is defined in my_flow.py, provide deployment details with flags prefect deploy my_flow.py:my_flow -n my-deployment -p my-pool.

When running prefect deploy interactively, the CLI will discover all flows in your working directory. Select the flow you want to deploy, and the deployment wizard will walk you through the rest of the deployment creation process:

  1. Deployment name: Choose a name, like my-deployment
  2. Would you like to schedule when this flow runs? (y/n): Type n for now, you can set up a schedule later
  3. Which work pool would you like to deploy this flow to? (use arrow keys): Select the work pool you just created, my-process-pool
  4. When asked if you would like your workers to pull your flow code from its remote repository, select yes if you\u2019ve been following along and defining your flow code script from within a GitHub repository:
    • y (Recommended): Prefect will automatically register your GitHub repo as the remote location of this flow's code. This means a worker started on any machine (for example: on your laptop, on your teammate's laptop, or in a cloud VM) will be able to facilitate execution of this deployed flow.
    • n: If you would like to continue this tutorial without the use of GitHub, that's OK. Prefect will always first look to see if the flow code exists locally before referring to remote flow code storage, so your local my-process-pool should have all it needs to complete the execution of this deployed flow.

Common Pitfalls

As you continue to use Prefect, you'll likely author many different flows and deployments of them. Check out the next section to learn about defining deployments in a prefect.yaml file.

Did you know?

A Prefect flow can have more than one deployment. This can be useful if you want your flow to run in different execution environments or have multiple different schedules.
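
For example, using the non-interactive syntax shown above, you could create two deployments of the same flow that target different work pools (the file, deployment, and pool names here are purely illustrative):

prefect deploy my_flow.py:my_flow -n my-deployment -p my-process-pool\nprefect deploy my_flow.py:my_flow -n my-other-deployment -p my-other-pool\n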

","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/deployments/#next-steps","title":"Next Steps","text":"

And more!

","tags":["work pools","workers","orchestration","flow runs","deployments","schedules","triggers","tutorial"],"boost":2},{"location":"tutorial/flows/","title":"Flows","text":"","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#what-is-a-flow","title":"What is a Flow?","text":"

Flows are like functions. They can take inputs, perform work, and return an output. In fact, you can turn any function into a Prefect flow by adding the @flow decorator. When a function becomes a flow, its behavior changes, giving it the following advantages:

","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#run-your-first-flow","title":"Run Your First flow:","text":"

The simplest way to get started with Prefect is to import and annotate your Python function with the @flow decorator. The script below fetches statistics about the main Prefect repository. Let's turn it into a Prefect flow:

import httpx\nfrom prefect import flow\n\n\n@flow\ndef get_repo_info():\n    url = \"https://api.github.com/repos/PrefectHQ/prefect\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info()\n

Running this flow from your terminal will result in some interesting output:

12:47:42.792 | INFO    | prefect.engine - Created flow run 'ludicrous-warthog' for flow 'get-repo-info'\nPrefectHQ/prefect repository statistics \ud83e\udd13:\nStars \ud83c\udf20 : 12146\nForks \ud83c\udf74 : 1245\n12:47:45.008 | INFO    | Flow run 'ludicrous-warthog' - Finished in state Completed()\n
","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#parameters","title":"Parameters","text":"

As with any Python function, you can pass arguments to a flow. The positional and keyword arguments defined on your flow function are called parameters. Prefect will automatically perform type conversion by using any provided type hints. Let's make the repository a parameter:

import httpx\nfrom prefect import flow\n\n\n@flow\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    print(f\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n    print(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    print(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info()\n
","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#logging","title":"Logging","text":"

Prefect enables you to log a variety of useful information in the UI about your flow and task runs, capturing information about your workflows for purposes such as monitoring, troubleshooting, and auditing. Let's add some logging to our flow:

import httpx\nfrom prefect import flow, get_run_logger\n\n\n@flow\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    logger = get_run_logger()\n    logger.info(f\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n    logger.info(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    logger.info(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info()\n

Now the output looks more consistent:

12:47:42.792 | INFO    | prefect.engine - Created flow run 'ludicrous-warthog' for flow 'get-repo-info'\nPrefectHQ/prefect repository statistics \ud83e\udd13:\n12:47:43.016 | INFO    | Flow run 'ludicrous-warthog' - Stars \ud83c\udf20 : 12146\n12:47:43.042 | INFO    | Flow run 'ludicrous-warthog' - Forks \ud83c\udf74 : 1245\n12:47:45.008 | INFO    | Flow run 'ludicrous-warthog' - Finished in state Completed()\n

Prefect can also capture print statements as info logs by specifying log_prints=True in your flow decorator (e.g. @flow(log_prints=True)).
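
As a minimal sketch of that option:

import httpx\nfrom prefect import flow\n\n\n@flow(log_prints=True)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    repo = httpx.get(f\"https://api.github.com/repos/{repo_name}\").json()\n    # Captured by Prefect and emitted as an INFO log\n    print(f\"Stars: {repo['stargazers_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info()\n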

","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#retries","title":"Retries","text":"

So far our script works, but in the future the GitHub API may be temporarily unavailable or rate limited. Retries help make our script more resilient. Let's add retry functionality to the example above:

import httpx\nfrom prefect import flow, get_run_logger\n\n\n@flow(retries=3, retry_delay_seconds=5)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    url = f\"https://api.github.com/repos/{repo_name}\"\n    response = httpx.get(url)\n    response.raise_for_status()\n    repo = response.json()\n    logger = get_run_logger()\n    logger.info(f\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n    logger.info(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    logger.info(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info()\n

","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/flows/#next-tasks","title":"Next: Tasks","text":"

As you have seen, adding a flow decorator converts our Python function into a resilient and observable workflow. In the next section, you'll supercharge this flow by using tasks to break down the workflow's complexity and make it more performant.

","tags":["tutorial","getting started","basics","flows","logging","parameters","retries"]},{"location":"tutorial/tasks/","title":"Tasks","text":"","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#what-is-a-task","title":"What is a task?","text":"

A task is a Python function decorated with a @task decorator. Tasks are atomic pieces of work that are executed independently within a flow. Tasks, and the dependencies between them, are displayed in the flow run graph, enabling you to break down a complex flow into something you can observe and understand. When a function becomes a task, it can be executed concurrently and its return value can be cached.

Flows and tasks share some common features:

Network calls (such as our GET requests to the GitHub API) are particularly useful as tasks because they take advantage of task features such as retries, caching, and concurrency.

Tasks must be called from flows

All tasks must be called from within a flow. Tasks may not call other tasks directly.

When to use tasks

Not all functions in a flow need to be tasks. Use them only when their features are useful.

Let's take our flow from before and move the request into a task:

import httpx\nfrom prefect import flow, task, get_run_logger\n\n\n@task\ndef get_url(url: str, params: dict = None):\n    response = httpx.get(url, params=params)\n    response.raise_for_status()\n    return response.json()\n\n\n@flow(retries=3, retry_delay_seconds=5)\ndef get_repo_info(repo_name: str = \"PrefectHQ/prefect\"):\n    repo = get_url(f\"https://api.github.com/repos/{repo_name}\")\n    logger = get_run_logger()\n    logger.info(f\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n    logger.info(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    logger.info(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info()\n

Running the flow in your terminal will result in something like this:

09:55:55.412 | INFO    | prefect.engine - Created flow run 'great-ammonite' for flow 'get-repo-info'\n09:55:55.499 | INFO    | Flow run 'great-ammonite' - Created task run 'get_url-0' for task 'get_url'\n09:55:55.500 | INFO    | Flow run 'great-ammonite' - Executing 'get_url-0' immediately...\n09:55:55.825 | INFO    | Task run 'get_url-0' - Finished in state Completed()\n09:55:55.827 | INFO    | Flow run 'great-ammonite' - PrefectHQ/prefect repository statistics \ud83e\udd13:\n09:55:55.827 | INFO    | Flow run 'great-ammonite' - Stars \ud83c\udf20 : 12157\n09:55:55.827 | INFO    | Flow run 'great-ammonite' - Forks \ud83c\udf74 : 1251\n09:55:55.849 | INFO    | Flow run 'great-ammonite' - Finished in state Completed('All states completed.')\n
","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#caching","title":"Caching","text":"

Tasks support the ability to cache their return value. This allows you to efficiently reuse results of tasks that may be expensive to reproduce with every flow run, or reuse cached results if the inputs to a task have not changed.

To enable caching, specify a cache_key_fn \u2014 a function that returns a cache key \u2014 on your task. You may optionally provide a cache_expiration timedelta indicating when the cache expires. You can define a task that is cached based on its inputs by using the Prefect task_input_hash. Let's add caching to our get_url task:

import httpx\nfrom datetime import timedelta\nfrom prefect import flow, task, get_run_logger\nfrom prefect.tasks import task_input_hash\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))\ndef get_url(url: str, params: dict = None):\n    response = httpx.get(url, params=params)\n    response.raise_for_status()\n    return response.json()\n

Task results and caching

Task results are cached in memory during a flow run and persisted to your home directory by default. Prefect Cloud only stores the cache key, not the data itself.

","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#concurrency","title":"Concurrency","text":"

Tasks enable concurrency, allowing you to execute multiple tasks asynchronously. This concurrency can greatly enhance the efficiency and performance of your workflows. Let's expand our script to calculate the average open issues per user. This will require making more requests:

import httpx\nfrom datetime import timedelta\nfrom prefect import flow, task, get_run_logger\nfrom prefect.tasks import task_input_hash\n\n\n@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))\ndef get_url(url: str, params: dict = None):\n    response = httpx.get(url, params=params)\n    response.raise_for_status()\n    return response.json()\n\n\ndef get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n    issues = []\n    pages = range(1, -(open_issues_count // -per_page) + 1)\n    for page in pages:\n        issues.append(\n            get_url(\n                f\"https://api.github.com/repos/{repo_name}/issues\",\n                params={\"page\": page, \"per_page\": per_page, \"state\": \"open\"},\n            )\n        )\n    return [i for p in issues for i in p]\n\n\n@flow(retries=3, retry_delay_seconds=5)\ndef get_repo_info(\n    repo_name: str = \"PrefectHQ/prefect\"\n):\n    repo = get_url(f\"https://api.github.com/repos/{repo_name}\")\n    issues = get_open_issues(repo_name, repo[\"open_issues_count\"])\n    issues_per_user = len(issues) / len(set([i[\"user\"][\"id\"] for i in issues]))\n    logger = get_run_logger()\n    logger.info(f\"PrefectHQ/prefect repository statistics \ud83e\udd13:\")\n    logger.info(f\"Stars \ud83c\udf20 : {repo['stargazers_count']}\")\n    logger.info(f\"Forks \ud83c\udf74 : {repo['forks_count']}\")\n    logger.info(f\"Average open issues per user \ud83d\udc8c : {issues_per_user:.2f}\")\n\n\nif __name__ == \"__main__\":\n    get_repo_info()\n

Now we're fetching the data we need, but the requests are happening sequentially. Tasks expose a submit method which changes the execution from sequential to concurrent. In our specific example, we also need to use the result method since we are unpacking a list of return values:

def get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n    issues = []\n    pages = range(1, -(open_issues_count // -per_page) + 1)\n    for page in pages:\n        issues.append(\n            get_url.submit(\n                f\"https://api.github.com/repos/{repo_name}/issues\",\n                params={\"page\": page, \"per_page\": per_page, \"state\": \"open\"},\n            )\n        )\n    return [i for p in issues for i in p.result()]\n

The logs show that each task is running concurrently:

12:45:28.241 | INFO    | prefect.engine - Created flow run 'intrepid-coua' for flow 'get-repo-info'\n12:45:28.311 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-0' for task 'get_url'\n12:45:28.312 | INFO    | Flow run 'intrepid-coua' - Executing 'get_url-0' immediately...\n12:45:28.543 | INFO    | Task run 'get_url-0' - Finished in state Completed()\n12:45:28.583 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-1' for task 'get_url'\n12:45:28.584 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-1' for execution.\n12:45:28.594 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-2' for task 'get_url'\n12:45:28.594 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-2' for execution.\n12:45:28.609 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-4' for task 'get_url'\n12:45:28.610 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-4' for execution.\n12:45:28.624 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-5' for task 'get_url'\n12:45:28.625 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-5' for execution.\n12:45:28.640 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-6' for task 'get_url'\n12:45:28.641 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-6' for execution.\n12:45:28.708 | INFO    | Flow run 'intrepid-coua' - Created task run 'get_url-3' for task 'get_url'\n12:45:28.708 | INFO    | Flow run 'intrepid-coua' - Submitted task run 'get_url-3' for execution.\n12:45:29.096 | INFO    | Task run 'get_url-6' - Finished in state Completed()\n12:45:29.565 | INFO    | Task run 'get_url-2' - Finished in state Completed()\n12:45:29.721 | INFO    | Task run 'get_url-5' - Finished in state Completed()\n12:45:29.749 | INFO    | Task run 'get_url-4' - Finished in state Completed()\n12:45:29.801 | INFO    | Task run 'get_url-3' - Finished in state Completed()\n12:45:29.817 | INFO    | Task run 'get_url-1' - Finished in state Completed()\n12:45:29.820 | INFO    | Flow run 'intrepid-coua' - PrefectHQ/prefect repository statistics \ud83e\udd13:\n12:45:29.820 | INFO    | Flow run 'intrepid-coua' - Stars \ud83c\udf20 : 12159\n12:45:29.821 | INFO    | Flow run 'intrepid-coua' - Forks \ud83c\udf74 : 1251\nAverage open issues per user \ud83d\udc8c : 2.27\n12:45:29.838 | INFO    | Flow run 'intrepid-coua' - Finished in state Completed('All states completed.')\n
","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#subflows","title":"Subflows","text":"

Not only can you call tasks within a flow, but you can also call other flows! Child flows are called\u00a0subflows\u00a0and allow you to efficiently manage, track, and version common multi-task logic.

Subflows are a great way to organize your workflows and offer more visibility within the UI.

Let's add a flow decorator to our get_open_issues function:

@flow\ndef get_open_issues(repo_name: str, open_issues_count: int, per_page: int = 100):\n    issues = []\n    pages = range(1, -(open_issues_count // -per_page) + 1)\n    for page in pages:\n        issues.append(\n            get_url.submit(\n                f\"https://api.github.com/repos/{repo_name}/issues\",\n                params={\"page\": page, \"per_page\": per_page, \"state\": \"open\"},\n            )\n        )\n    return [i for p in issues for i in p.result()]\n

Whenever we run the parent flow, a new run will be generated for the subflow as well. Not only is this run tracked as a subflow run of the main flow, but you can also inspect it independently in the UI!

","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]},{"location":"tutorial/tasks/#next-deployments","title":"Next: Deployments","text":"

We now have a flow with tasks, subflows, retries, logging, caching, and concurrent execution. In the next section, we'll see how we can deploy this flow in order to run it on a schedule and/or external infrastructure.

","tags":["tutorial","getting started","basics","tasks","caching","concurrency","subflows"]}]} \ No newline at end of file diff --git a/versions/unreleased/sitemap.xml.gz b/versions/unreleased/sitemap.xml.gz index f7615a88d..76796c9e9 100644 Binary files a/versions/unreleased/sitemap.xml.gz and b/versions/unreleased/sitemap.xml.gz differ