diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 495a3b82..92fc0b5a 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -141,7 +141,7 @@ These commands can be performed on the entire repository, when run from the repo
 and
 
 ```console
-    $ yapf --diff -r .
+    $ yapf --in-place -r .
 ```
 
 The configuration for YAPF is given in `setup.cfg` and `.yapfignore`. See the YAPF link above for advanced usage.
@@ -149,7 +149,7 @@ See the YAPF link above for advanced usage.
 ##### Alternative to YAPF
 
 YAPF is not required to follow the style and formatting guidelines. You can
-perform all formatting on your own using the linting output as a guild. Painful,
+perform all formatting on your own using the linting output as a guide. Painful,
 maybe, but possible!
 
 ## Testing
diff --git a/docs/get-started/sync-client-quick-start.md b/docs/get-started/sync-client-quick-start.md
new file mode 100644
index 00000000..a5e8861a
--- /dev/null
+++ b/docs/get-started/sync-client-quick-start.md
@@ -0,0 +1,237 @@
+---
+title: Planet Client Quick Start
+---
+
+The Planet SDK for Python makes it easy to access Planet’s massive repository of satellite imagery and add Planet
+data to your data ops workflow.
+
+**Note:** This is the new, non-asyncio client. If you want to take advantage of asyncio, check the [asyncio client quick start guide](quick-start-guide.md).
+
+Your feedback on this version of our client is appreciated. Please raise an issue on [GitHub](https://github.com/planetlabs/planet-client-python/issues) if you encounter any problems.
+
+## Dependencies
+
+This package requires [Python 3.9 or greater](https://python.org/downloads/). A virtual environment is strongly recommended.
+
+You will need your Planet API credentials. You can find your API key in [Planet Explorer](https://planet.com/explorer) under Account Settings.
+
+## Installation
+
+Install from PyPI using pip:
+
+```bash
+pip install planet
+```
+
+## Usage
+
+### Authentication
+
+Use the `PL_API_KEY` environment variable to authenticate with the Planet API.
+
+```bash
+export PL_API_KEY=your_api_key
+```
+
+The examples below assume you are using the `PL_API_KEY` environment variable. If you are, you can skip to the next section.
+
+#### Authenticate using the Session class
+
+Alternatively, you can authenticate using the `Session` class:
+
+```python
+from planet import Planet, Session
+from planet.auth import APIKeyAuth
+
+pl = Planet(session=Session(auth=APIKeyAuth(key='your_api_key')))
+```
+
+
+### The Planet client
+
+The `Planet` class is the main entry point for the Planet SDK. It provides access to the various APIs available on the Planet platform.
+
+```python
+from planet import Planet
+pl = Planet()  # automatically detects PL_API_KEY
+```
+
+The Planet client has members `data`, `orders`, and `subscriptions`, which allow you to interact with the Data API, Orders API, and Subscriptions API.
+
+### Search
+
+To search for items in the Planet catalog, use the `data.search()` method on the `Planet` client.
+The return value is an iterator that yields search results:
+
+```python
+from planet import Planet
+
+pl = Planet()
+for item in pl.data.search(['PSScene'], limit=5):
+    print(item)
+```
+
+#### Geometry
+
+Use the `geometry` parameter to filter search results by geometry:
+
+```python
+geom = {
+    "coordinates": [
+        [
+            [
+                -125.41267816101056,
+                46.38901501783491
+            ],
+            [
+                -125.41267816101056,
+                41.101114161051015
+            ],
+            [
+                -115.51426167332103,
+                41.101114161051015
+            ],
+            [
+                -115.51426167332103,
+                46.38901501783491
+            ],
+            [
+                -125.41267816101056,
+                46.38901501783491
+            ]
+        ]
+    ],
+    "type": "Polygon"
+}
+for item in pl.data.search(['PSScene'], geometry=geom, limit=5):
+    print(item)
+```
+
+#### Filters
+
+The Data API allows a wide range of search parameters. Whether you are using the `.search()` method,
+creating or updating a saved search, or requesting stats, a data search filter
+can be provided to the API as a JSON blob. This JSON blob can be built up manually or by using the
+`data_filter` module.
+
+An example of creating the request JSON with `data_filter`:
+
+```python
+from datetime import datetime
+from planet import Planet, data_filter
+
+def main():
+    pl = Planet()
+
+    sfilter = data_filter.and_filter([
+        data_filter.permission_filter(),
+        data_filter.date_range_filter('acquired', gt=datetime(2022, 6, 1, 1))
+    ])
+
+    for item in pl.data.search(['PSScene'], search_filter=sfilter, limit=10):
+        print(item["id"])
+```
+
+This returns scenes acquired after the provided date that you have permission to download using
+your plan.
+
+If you prefer to build the JSON blob manually, the above filter would look like this:
+
+```python
+sfilter = {
+    'type': 'AndFilter',
+    'config': [
+        {'type': 'PermissionFilter', 'config': ['assets:download']},
+        {
+            'type': 'DateRangeFilter',
+            'field_name': 'acquired',
+            'config': {'gt': '2022-06-01T01:00:00Z'}
+        }
+    ]
+}
+```
+
+This means that if you already have Data API filters saved as a query, you can copy them directly into the SDK.
+
+### Placing an Order
+
+Once you have a list of scenes you want to download, you can place an order for assets using the Orders API client. Please review
+[Items and Assets](https://developers.planet.com/docs/apis/data/items-assets/) in the Developer Center for a refresher on item types
+and asset types.
+
+Use the `order_request` module to build an order request, and then use the `orders.create_order()` method to place the order.
+
+Orders take time to process. You can use the `orders.wait()` method to wait for the order to be ready, and then use the `orders.download_order()` method to download the assets.
+
+Warning: running the following code will result in quota usage based on your plan.
+
+```python
+from planet import Planet, order_request
+
+def main():
+    pl = Planet()
+    image_ids = ["20200925_161029_69_2223"]
+    request = order_request.build_request(
+        name='test_order',
+        products=[
+            order_request.product(
+                item_ids=image_ids,
+                product_bundle='analytic_udm2',
+                item_type='psscene')
+        ]
+    )
+
+    order = pl.orders.create_order(request)
+
+    # wait for the order to be ready
+    # note: this may take several minutes.
+    pl.orders.wait(order['id'])
+
+    pl.orders.download_order(order['id'], overwrite=True)
+```
+
+### Creating a subscription
+
+#### Prerequisites
+
+Subscriptions can be delivered to a destination. The following example uses Amazon S3.
+You will need your ACCESS_KEY_ID, SECRET_ACCESS_KEY, bucket and region name.
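+
+One way to keep these credentials out of your code is to read them from environment variables. The variable names below are only an example; adapt them to whatever configuration mechanism you use:
+
+```python
+import os
+
+# Example only: read S3 delivery credentials from environment variables.
+ACCESS_KEY_ID = os.environ["AWS_ACCESS_KEY_ID"]
+SECRET_ACCESS_KEY = os.environ["AWS_SECRET_ACCESS_KEY"]
+```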
+
+#### Scene subscription
+
+To subscribe to scenes that match a filter, use the `subscription_request` module to build a request, and
+pass it to the `subscriptions.create_subscription()` method of the client.
+
+Warning: the following code will create a subscription, consuming quota based on your plan.
+
+```python
+from datetime import datetime
+
+from planet import Planet
+from planet.subscription_request import catalog_source, build_request, amazon_s3
+
+pl = Planet()
+
+source = catalog_source(
+    ["PSScene"],
+    ["ortho_analytic_4b"],
+    geometry={
+        "type": "Polygon",
+        "coordinates": [
+            [
+                [37.791595458984375, 14.84923123791421],
+                [37.90214538574219, 14.84923123791421],
+                [37.90214538574219, 14.945448293647944],
+                [37.791595458984375, 14.945448293647944],
+                [37.791595458984375, 14.84923123791421],
+            ]
+        ],
+    },
+    start_time=datetime.now(),
+    publishing_stages=["standard"],
+    time_range_type="acquired",
+)
+
+# define a delivery method. In this example, we're using AWS S3.
+delivery = amazon_s3(ACCESS_KEY_ID, SECRET_ACCESS_KEY, "test", "us-east-1")
+
+request = build_request("Standard PSScene Ortho Analytic", source=source, delivery=delivery)
+
+# finally, create the subscription
+subscription = pl.subscriptions.create_subscription(request)
+```
diff --git a/mkdocs.yml b/mkdocs.yml
index 73b3cc84..5eef5fff 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -67,6 +67,7 @@ plugins:
 nav:
   - "Get Started":
     - get-started/quick-start-guide.md
+    - get-started/sync-client-quick-start.md
     - get-started/get-your-planet-account.md
     - get-started/venv-tutorial.md
     - get-started/upgrading.md
diff --git a/planet/__init__.py b/planet/__init__.py
index 047c56c4..cb410160 100644
--- a/planet/__init__.py
+++ b/planet/__init__.py
@@ -18,6 +18,7 @@
 from .auth import Auth
 from .clients import DataClient, OrdersClient, SubscriptionsClient  # NOQA
 from .io import collect
+from .sync import Planet
 
 __all__ = [
     'Auth',
@@ -26,6 +27,7 @@
     'data_filter',
     'OrdersClient',
     'order_request',
+    'Planet',
     'reporting',
     'Session',
     'SubscriptionsClient',
diff --git a/planet/clients/data.py b/planet/clients/data.py
index 364be582..60deda23 100644
--- a/planet/clients/data.py
+++ b/planet/clients/data.py
@@ -17,7 +17,7 @@
 import logging
 from pathlib import Path
 import time
-from typing import Any, AsyncIterator, Callable, Dict, List, Optional
+from typing import Any, AsyncIterator, Awaitable, Callable, Dict, List, Optional, TypeVar
 import uuid
 
 from ..data_filter import empty_filter
@@ -52,6 +52,8 @@
 
 LOGGER = logging.getLogger(__name__)
 
+T = TypeVar("T")
+
 
 class Items(Paged):
     """Asynchronous iterator over items from a paged response."""
@@ -96,6 +98,10 @@ def __init__(self, session: Session, base_url: Optional[str] = None):
         if self._base_url.endswith('/'):
             self._base_url = self._base_url[:-1]
 
+    def call_sync(self, f: Awaitable[T]) -> T:
+        """block on an async function call, using the call_sync method of the session"""
+        return self._session.call_sync(f)
+
     @staticmethod
     def _check_search_id(sid):
         """Raises planet.exceptions.ClientError if sid is not a valid UUID"""
diff --git a/planet/clients/orders.py b/planet/clients/orders.py
index cc4bb487..eb566928 100644
--- a/planet/clients/orders.py
+++ b/planet/clients/orders.py
@@ -16,7 +16,7 @@
 import asyncio
 import logging
 import time
-from typing import AsyncIterator, Callable, List, Optional, Sequence, Union, Dict
+from typing import AsyncIterator, Awaitable, Callable, Dict, List, Optional, Sequence, TypeVar, Union
 import uuid
 import json
 import hashlib
@@ -39,6 +39,8 @@
 
 LOGGER = logging.getLogger(__name__)
 
+T = TypeVar("T")
+
 
 class Orders(Paged):
     """Asynchronous
iterator over Orders from a paged response describing @@ -97,6 +99,10 @@ def __init__(self, session: Session, base_url: Optional[str] = None): if self._base_url.endswith('/'): self._base_url = self._base_url[:-1] + def call_sync(self, f: Awaitable[T]) -> T: + """block on an async function call, using the call_sync method of the session""" + return self._session.call_sync(f) + @staticmethod def _check_order_id(oid): """Raises planet.exceptions.ClientError if oid is not a valid UUID""" @@ -435,6 +441,7 @@ async def wait(self, # loop without end if max_attempts is zero # otherwise, loop until num_attempts reaches max_attempts num_attempts = 0 + current_state = "UNKNOWN" while not max_attempts or num_attempts < max_attempts: t = time.time() diff --git a/planet/clients/subscriptions.py b/planet/clients/subscriptions.py index 6e794434..57e6ee58 100644 --- a/planet/clients/subscriptions.py +++ b/planet/clients/subscriptions.py @@ -1,7 +1,7 @@ """Planet Subscriptions API Python client.""" import logging -from typing import AsyncIterator, Optional, Sequence, Dict, Union +from typing import AsyncIterator, Awaitable, Dict, Optional, Sequence, TypeVar, Union from typing_extensions import Literal @@ -14,6 +14,8 @@ LOGGER = logging.getLogger() +T = TypeVar("T") + class SubscriptionsClient: """A Planet Subscriptions Service API 1.0.0 client. @@ -59,6 +61,10 @@ def __init__(self, if self._base_url.endswith('/'): self._base_url = self._base_url[:-1] + def call_sync(self, f: Awaitable[T]) -> T: + """block on an async function call, using the call_sync method of the session""" + return self._session.call_sync(f) + async def list_subscriptions( self, status: Optional[Sequence[str]] = None, @@ -72,11 +78,11 @@ async def list_subscriptions( start_time: Optional[str] = None, sort_by: Optional[str] = None, updated: Optional[str] = None) -> AsyncIterator[dict]: - """Iterate over list of account subscriptions with optional filtering and sorting. + """Iterate over list of account subscriptions with optional filtering. Note: The name of this method is based on the API's method name. - This method provides iteration over subcriptions, it does + This method provides iteration over subscriptions, it does not return a list. Args: diff --git a/planet/http.py b/planet/http.py index 6ec8fb75..5aac422a 100644 --- a/planet/http.py +++ b/planet/http.py @@ -20,8 +20,9 @@ from http import HTTPStatus import logging import random +import threading import time -from typing import AsyncGenerator, Optional +from typing import AsyncGenerator, Awaitable, Optional, TypeVar import httpx from typing_extensions import Literal @@ -30,6 +31,8 @@ from . import exceptions, models from .__version__ import __version__ +T = TypeVar("T") + # NOTE: configuration of the session was performed using the data API quick # search endpoint. These values can be re-tested, tested with a new endpoint or # refined using session_configuration.py in the scripts directory. @@ -272,6 +275,20 @@ async def alog_response(*args, **kwargs): self._limiter = _Limiter(rate_limit=RATE_LIMIT, max_workers=MAX_ACTIVE) self.outcomes: Counter[str] = Counter() + # create a dedicated event loop for this httpx session. 
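+        # The loop runs forever on a daemon thread; call_sync() below submits
+        # coroutines to it with asyncio.run_coroutine_threadsafe, so the
+        # synchronous wrappers work without a running event loop in the caller.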
+ def _start_background_loop(loop): + asyncio.set_event_loop(loop) + loop.run_forever() + + self._loop = asyncio.new_event_loop() + self._loop_thread = threading.Thread(target=_start_background_loop, + args=(self._loop, ), + daemon=True) + self._loop_thread.start() + + def call_sync(self, f: Awaitable[T]) -> T: + return asyncio.run_coroutine_threadsafe(f, self._loop).result() + @classmethod async def _raise_for_status(cls, response): if response.is_error: diff --git a/planet/sync/__init__.py b/planet/sync/__init__.py new file mode 100644 index 00000000..d0f35a99 --- /dev/null +++ b/planet/sync/__init__.py @@ -0,0 +1,17 @@ +# Copyright 2021 Planet Labs, Inc. +# Copyright 2022 Planet Labs PBC. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +from .client import Planet + +__all__ = ['Planet'] diff --git a/planet/sync/client.py b/planet/sync/client.py new file mode 100644 index 00000000..9243bd26 --- /dev/null +++ b/planet/sync/client.py @@ -0,0 +1,50 @@ +from typing import Optional +from .data import DataAPI +from .orders import OrdersAPI +from .subscriptions import SubscriptionsAPI +from planet.http import Session + +SYNC_CLIENT_AGENT = "python-sdk-sync" + + +class Planet: + """Planet API client. This client contains non-async methods. + + Authentication is required: defaults to detecting API key from environment (PL_API_KEY). + + Members: + `data`: for interacting with the Planet Data API. + `orders`: Orders API. + `subscriptions`: Subscriptions API. + + Quick start example: + ```python + # requires PL_API_KEY + + pl = Planet() + for item in pl.data.search(['PSScene'], limit=5): + print(item) + + for sub in pl.subscriptions.list_subscriptions(): + print(item) + ``` + + Parameters: + session: Optional Session. The Session can be provided allowing for customization, and + will default to standard behavior when not provided. Example: + + ```python + from planet.sync import Planet + + pl = Planet() + ```` + """ + + def __init__(self, session: Optional[Session] = None) -> None: + self._session = session or Session() + self._session._client.headers.update( + {"X-Planet-App": SYNC_CLIENT_AGENT}) + + self.data = DataAPI(self._session) + self.orders = OrdersAPI(self._session) + self.subscriptions = SubscriptionsAPI(self._session) diff --git a/planet/sync/data.py b/planet/sync/data.py new file mode 100644 index 00000000..69dc1f32 --- /dev/null +++ b/planet/sync/data.py @@ -0,0 +1,412 @@ +# Copyright 2022 Planet Labs PBC. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may not +# use this file except in compliance with the License. You may obtain a copy of +# the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations under +# the License. 
+"""Functionality for interacting with the data api""" +from pathlib import Path +from typing import Any, Callable, Dict, Iterator, List, Optional + +from ..http import Session + +from planet.clients import DataClient + +LIST_SORT_DEFAULT = 'created desc' +LIST_SEARCH_TYPE_DEFAULT = 'any' + +WAIT_DELAY = 5 +WAIT_MAX_ATTEMPTS = 200 + + +class DataAPI: + """Data API client""" + + _client: DataClient + + def __init__(self, session: Session, base_url: Optional[str] = None): + """ + Parameters: + session: Open session connected to server. + base_url: The base URL to use. Defaults to production data API + base url. + """ + self._client = DataClient(session, base_url) + + def search( + self, + item_types: List[str], + search_filter: Optional[Dict] = None, + name: Optional[str] = None, + sort: Optional[str] = None, + limit: int = 100, + geometry: Optional[Dict] = None, + ) -> Iterator[Dict]: + """ + Search for items + + Example: + + ```python + pl = Planet() + for item in pl.data.search(['PSScene'], limit=5): + print(item) + ``` + + Parameters: + item_types: The item types to include in the search. + search_filter: Structured search criteria to apply. If None, + no search criteria is applied. + sort: Field and direction to order results by. Valid options are + given in SEARCH_SORT. + name: The name of the saved search. + limit: Maximum number of results to return. When set to 0, no + maximum is applied. + geometry: GeoJSON, a feature reference or a list of feature + references + """ + + results = self._client.search(item_types, + search_filter, + name, + sort, + limit, + geometry) + + try: + while True: + yield self._client.call_sync(results.__anext__()) + except StopAsyncIteration: + pass + + def create_search( + self, + item_types: List[str], + search_filter: Dict, + name: str, + enable_email: bool = False, + geometry: Optional[Dict] = None, + ) -> Dict: + """Create a new saved structured item search. + + To filter to items you have access to download which are of standard + (aka not test) quality, use the following: + + ```python + >>> from planet import data_filter + >>> data_filter.and_filter([ + ... data_filter.permission_filter(), + ... data_filter.std_quality_filter() + >>> ]) + + ``` + + To avoid filtering out any imagery, supply a blank AndFilter, which can + be created with `data_filter.and_filter([])`. + + + Parameters: + item_types: The item types to include in the search. + search_filter: Structured search criteria. + name: The name of the saved search. + enable_email: Send a daily email when new results are added. + geometry: A feature reference or a GeoJSON + + Returns: + Description of the saved search. + + Raises: + planet.exceptions.APIError: On API error. + """ + return self._client.call_sync( + self._client.create_search(item_types, + search_filter, + name, + enable_email, + geometry)) + + def update_search(self, + search_id: str, + item_types: List[str], + search_filter: Dict[str, Any], + name: str, + enable_email: bool = False, + geometry: Optional[dict] = None) -> Dict[str, Any]: + """Update an existing saved search. + + Parameters: + search_id: Saved search identifier. + item_types: The item types to include in the search. + search_filter: Structured search criteria. + name: The name of the saved search. + enable_email: Send a daily email when new results are added. + geometry: A feature reference or a GeoJSON + + Returns: + Description of the saved search. 
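+
+        Raises:
+            planet.exceptions.APIError: On API error.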
+ """ + return self._client.call_sync( + self._client.update_search(search_id, + item_types, + search_filter, + name, + enable_email, + geometry)) + + def list_searches(self, + sort: str = LIST_SORT_DEFAULT, + search_type: str = LIST_SEARCH_TYPE_DEFAULT, + limit: int = 100) -> Iterator[Dict[str, Any]]: + """Iterate through list of searches available to the user. + + Parameters: + sort: Field and direction to order results by. + search_type: Filter to specified search type. + limit: Maximum number of results to return. When set to 0, no + maximum is applied. + + Yields: + Description of a search. + + Raises: + planet.exceptions.APIError: On API error. + planet.exceptions.ClientError: If sort or search_type are not + valid. + """ + results = self._client.list_searches(sort, search_type, limit) + + try: + while True: + yield self._client.call_sync(results.__anext__()) + except StopAsyncIteration: + pass + + def delete_search(self, search_id: str): + """Delete an existing saved search. + + Parameters: + search_id: Saved search identifier. + + Raises: + planet.exceptions.APIError: On API error. + """ + return self._client.call_sync(self._client.delete_search(search_id)) + + def get_search(self, search_id: str) -> Dict: + """Get a saved search by id. + + Parameters: + search_id: Stored search identifier. + + Returns: + Saved search details. + + Raises: + planet.exceptions.APIError: On API error. + """ + return self._client.call_sync(self._client.get_search(search_id)) + + def run_search(self, + search_id: str, + sort: Optional[str] = None, + limit: int = 100) -> Iterator[Dict[str, Any]]: + """Iterate over results from a saved search. + + Note: + The name of this method is based on the API's method name. This + method provides iteration over results, it does not get a + single result description or return a list of descriptions. + + Parameters: + search_id: Stored search identifier. + sort: Field and direction to order results by. Valid options are + given in SEARCH_SORT. + limit: Maximum number of results to return. When set to 0, no + maximum is applied. + + Yields: + Description of an item. + + Raises: + planet.exceptions.APIError: On API error. + planet.exceptions.ClientError: If search_id or sort is not valid. + """ + + results = self._client.run_search(search_id, sort, limit) + + try: + while True: + yield self._client.call_sync(results.__anext__()) + except StopAsyncIteration: + pass + + def get_stats(self, + item_types: List[str], + search_filter: Dict[str, Any], + interval: str) -> Dict[str, Any]: + """Get item search statistics. + + Parameters: + item_types: The item types to include in the search. + search_filter: Structured search criteria. + interval: The size of the histogram date buckets. + + Returns: + A full JSON description of the returned statistics result + histogram. + + Raises: + planet.exceptions.APIError: On API error. + planet.exceptions.ClientError: If interval is not valid. + """ + return self._client.call_sync( + self._client.get_stats(item_types, search_filter, interval)) + + def list_item_assets(self, item_type_id: str, + item_id: str) -> Dict[str, Any]: + """List all assets available for an item. + + An asset describes a product that can be derived from an item's source + data, and can be used for various analytic, visual or other purposes. + These are referred to as asset_types. + + Parameters: + item_type_id: Item type identifier. + item_id: Item identifier. 
+ + Returns: + Descriptions of available assets as a dictionary with asset_type_id + as keys and asset description as value. + + Raises: + planet.exceptions.APIError: On API error. + """ + return self._client.call_sync( + self._client.list_item_assets(item_type_id, item_id)) + + def get_asset(self, item_type_id: str, item_id: str, + asset_type_id: str) -> Dict[str, Any]: + """Get an item asset description. + + Parameters: + item_type_id: Item type identifier. + item_id: Item identifier. + asset_type_id: Asset type identifier. + + Returns: + Description of the asset. + + Raises: + planet.exceptions.APIError: On API error. + planet.exceptions.ClientError: If asset type identifier is not + valid. + """ + return self._client.call_sync( + self._client.get_asset(item_type_id, item_id, asset_type_id)) + + def activate_asset(self, asset: Dict[str, Any]): + """Activate an item asset. + + Parameters: + asset: Description of the asset. Obtained from get_asset(). + + Raises: + planet.exceptions.APIError: On API error. + planet.exceptions.ClientError: If asset description is not + valid. + """ + return self._client.call_sync(self._client.activate_asset(asset)) + + def wait_asset( + self, + asset: dict, + delay: int = WAIT_DELAY, + max_attempts: int = WAIT_MAX_ATTEMPTS, + callback: Optional[Callable[[str], + None]] = None) -> Dict[Any, Any]: + """Wait for an item asset to be active. + + Prior to waiting for the asset to be active, be sure to activate the + asset with activate_asset(). + + Parameters: + asset: Description of the asset. Obtained from get_asset(). + delay: Time (in seconds) between polls. + max_attempts: Maximum number of polls. When set to 0, no limit + is applied. + callback: Function that handles state progress updates. + + Returns: + Last received description of the asset. + + Raises: + planet.exceptions.APIError: On API error. + planet.exceptions.ClientError: If asset_type_id is not valid or is + not available or if the maximum number of attempts is reached + before the asset is active. + """ + return self._client.call_sync( + self._client.wait_asset(asset, delay, max_attempts, callback)) + + def download_asset(self, + asset: dict, + filename: Optional[str] = None, + directory: Path = Path('.'), + overwrite: bool = False, + progress_bar: bool = True) -> Path: + """Download an asset. + + The asset must be active before it can be downloaded. This can be + achieved with activate_asset() followed by wait_asset(). + + If overwrite is False and the file already exists, download will be + skipped and the file path will be returned as usual. + + Parameters: + asset: Description of the asset. Obtained from get_asset() or + wait_asset(). + filename: Custom name to assign to downloaded file. + directory: Base directory for file download. + overwrite: Overwrite any existing files. + progress_bar: Show progress bar during download. + + Returns: + Path to downloaded file. + + Raises: + planet.exceptions.APIError: On API error. + planet.exceptions.ClientError: If asset is not active or asset + description is not valid. + """ + return self._client.call_sync( + self._client.download_asset(asset, + filename, + directory, + overwrite, + progress_bar)) + + @staticmethod + def validate_checksum(asset: Dict[str, Any], filename: Path): + """Validate checksum of downloaded file + + Compares checksum calculated from the file against the value provided + in the asset. + + + Parameters: + asset: Description of the asset. Obtained from get_asset() or + wait_asset(). + filename: Full path to downloaded file. 
+ + Raises: + planet.exceptions.ClientError: If the file does not exist or if + checksums do not match. + """ + return DataClient.validate_checksum(asset, filename) diff --git a/planet/sync/orders.py b/planet/sync/orders.py new file mode 100644 index 00000000..901169ca --- /dev/null +++ b/planet/sync/orders.py @@ -0,0 +1,324 @@ +# Copyright 2020 Planet Labs, Inc. +# Copyright 2022 Planet Labs PBC. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may not +# use this file except in compliance with the License. You may obtain a copy of +# the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations under +# the License. +"""Functionality for interacting with the orders api""" +from typing import Any, Callable, Dict, Iterator, List, Optional + +from pathlib import Path +from ..http import Session +from planet.clients import OrdersClient + + +class OrdersAPI: + """Orders API client""" + + _client: OrdersClient + + def __init__(self, session: Session, base_url: Optional[str] = None): + """ + Parameters: + session: Open session connected to server. + base_url: The base URL to use. Defaults to production orders API + base url. + """ + + self._client = OrdersClient(session, base_url) + + def create_order(self, request: Dict) -> Dict: + """Create an order. + + Example: + + ```python + + from planet import Planet, order_request + + def main(): + pl = Planet() + image_ids = ["20200925_161029_69_2223"] + request = order_request.build_request( + 'test_order', + [order_request.product(image_ids, 'analytic_udm2', 'psscene')] + ) + order = pl.orders.create_order(request) + ``` + + Parameters: + request: order request definition (recommended to use the order_request module to build a request) + + Returns: + JSON description of the created order + + Raises: + planet.exceptions.APIError: On API error. + """ + return self._client.call_sync(self._client.create_order(request)) + + def get_order(self, order_id: str) -> Dict: + """Get order details by Order ID. + + Parameters: + order_id: The ID of the order + + Returns: + JSON description of the order + + Raises: + planet.exceptions.ClientError: If order_id is not a valid UUID. + planet.exceptions.APIError: On API error. + """ + return self._client.call_sync(self._client.get_order(order_id)) + + def cancel_order(self, order_id: str) -> Dict[str, Any]: + """Cancel a queued order. + + Parameters: + order_id: The ID of the order + + Returns: + Results of the cancel request + + Raises: + planet.exceptions.ClientError: If order_id is not a valid UUID. + planet.exceptions.APIError: On API error. + """ + return self._client.call_sync(self._client.cancel_order(order_id)) + + def cancel_orders(self, + order_ids: Optional[List[str]] = None) -> Dict[str, Any]: + """Cancel queued orders in bulk. + + Parameters: + order_ids: The IDs of the orders. If empty or None, all orders in a + pre-running state will be cancelled. + + Returns: + Results of the bulk cancel request + + Raises: + planet.exceptions.ClientError: If an entry in order_ids is not a + valid UUID. + planet.exceptions.APIError: On API error. 
+ """ + return self._client.call_sync(self._client.cancel_orders(order_ids)) + + def aggregated_order_stats(self) -> Dict[str, Any]: + """Get aggregated counts of active orders. + + Returns: + Aggregated order counts + + Raises: + planet.exceptions.APIError: On API error. + """ + return self._client.call_sync(self._client.aggregated_order_stats()) + + def download_asset(self, + location: str, + filename: Optional[str] = None, + directory: Path = Path('.'), + overwrite: bool = False, + progress_bar: bool = True) -> Path: + """Download ordered asset. + + Parameters: + location: Download location url including download token. + filename: Custom name to assign to downloaded file. + directory: Base directory for file download. This directory will be + created if it does not already exist. + overwrite: Overwrite any existing files. + progress_bar: Show progress bar during download. + + Returns: + Path to downloaded file. + + Raises: + planet.exceptions.APIError: On API error. + """ + return self._client.call_sync( + self._client.download_asset(location, + filename, + directory, + overwrite, + progress_bar)) + + def download_order(self, + order_id: str, + directory: Path = Path('.'), + overwrite: bool = False, + progress_bar: bool = False) -> List[Path]: + """Download all assets in an order. + + Parameters: + order_id: The ID of the order. + directory: Base directory for file download. This directory must + already exist. + overwrite: Overwrite files if they already exist. + progress_bar: Show progress bar during download. + + Returns: + Paths to downloaded files. + + Raises: + planet.exceptions.APIError: On API error. + planet.exceptions.ClientError: If the order is not in a final + state. + """ + return self._client.call_sync( + self._client.download_order(order_id, + directory, + overwrite, + progress_bar)) + + def validate_checksum(self, directory: Path, checksum: str): + """Validate checksums of downloaded files against order manifest. + + For each file entry in the order manifest, the specified checksum given + in the manifest file will be validated against the checksum calculated + from the downloaded file. + + Parameters: + directory: Path to order directory. + checksum: The type of checksum hash- 'MD5' or 'SHA256'. + + Raises: + planet.exceptions.ClientError: If a file is missing or if checksums + do not match. + """ + return self._client.validate_checksum(directory, checksum) + + def wait(self, + order_id: str, + state: Optional[str] = None, + delay: int = 5, + max_attempts: int = 200, + callback: Optional[Callable[[str], None]] = None) -> str: + """Wait until order reaches desired state. + + Returns the state of the order on the last poll. + + This function polls the Orders API to determine the order state, with + the specified delay between each polling attempt, until the + order reaches a final state, or earlier state, if specified. + If the maximum number of attempts is reached before polling is + complete, an exception is raised. Setting 'max_attempts' to zero will + result in no limit on the number of attempts. + + Setting 'delay' to zero results in no delay between polling attempts. + This will likely result in throttling by the Orders API, which has + a rate limit of 10 requests per second. If many orders are being + polled asynchronously, consider increasing the delay to avoid + throttling. + + By default, polling completes when the order reaches a final state. + If 'state' is given, polling will complete when the specified earlier + state is reached or passed. 
+ + Example: + ```python + from planet.reporting import StateBar + + with StateBar() as bar: + wait(order_id, callback=bar.update_state) + ``` + + Parameters: + order_id: The ID of the order. + state: State prior to a final state that will end polling. + delay: Time (in seconds) between polls. + max_attempts: Maximum number of polls. Set to zero for no limit. + callback: Function that handles state progress updates. + + Returns + State of the order. + + Raises: + planet.exceptions.APIError: On API error. + planet.exceptions.ClientError: If order_id or state is not valid or + if the maximum number of attempts is reached before the + specified state or a final state is reached. + """ + return self._client.call_sync( + self._client.wait(order_id, state, delay, max_attempts, callback)) + + def list_orders(self, + state: Optional[str] = None, + limit: int = 100, + source_type: Optional[str] = None, + name: Optional[str] = None, + name__contains: Optional[str] = None, + created_on: Optional[str] = None, + last_modified: Optional[str] = None, + hosting: Optional[bool] = None, + sort_by: Optional[str] = None) -> Iterator[dict]: + """Iterate over the list of stored orders. + + By default, order descriptions are sorted by creation date with the last created + order returned first. + + Note: + The name of this method is based on the API's method name. This + method provides iteration over results, it does not get a + single result description or return a list of descriptions. + + Parameters: + state (str): filter by state. + source_type (str): filter by source type. + name (str): filter by name. + name__contains (str): only include orders with names containing this string. + created_on (str): filter by creation date-time or interval. + last_modified (str): filter by last modified date-time or interval. + hosting (bool): only return orders that contain a hosting block + (e.g. SentinelHub hosting). + sort_by (str): fields to sort orders by. Multiple fields can be specified, + separated by commas. The sort direction can be specified by appending + ' ASC' or ' DESC' to the field name. The default sort direction is + ascending. When multiple fields are specified, the sort order is applied + in the order the fields are listed. + + Supported fields: name, created_on, state, last_modified + + Examples: + * "name" + * "name DESC" + * "name,state DESC,last_modified" + limit (int): maximum number of results to return. When set to 0, no + maximum is applied. + + Datetime args (created_on and last_modified) can either be a date-time or an + interval, open or closed. Date and time expressions adhere to RFC 3339. Open + intervals are expressed using double-dots. + + Yields: + Description of an order. + + Raises: + planet.exceptions.APIError: On API error. + planet.exceptions.ClientError: If state is not valid. 
+ """ + results = self._client.list_orders(state, + limit, + source_type, + name, + name__contains, + created_on, + last_modified, + hosting, + sort_by) + + try: + while True: + yield self._client.call_sync(results.__anext__()) + except StopAsyncIteration: + pass diff --git a/planet/sync/subscriptions.py b/planet/sync/subscriptions.py new file mode 100644 index 00000000..d8a807ae --- /dev/null +++ b/planet/sync/subscriptions.py @@ -0,0 +1,279 @@ +"""Planet Subscriptions API Python client.""" + +from typing import Any, Dict, Iterator, Optional, Sequence, Union + +from typing_extensions import Literal + +from planet.http import Session +from planet.clients import SubscriptionsClient + + +class SubscriptionsAPI: + """Subscriptions API client + + Example: + ```python + >>> from planet import Planet + >>> + >>> pl = Planet() + >>> pl.subscriptions.list_subscriptions() + ``` + """ + + _client: SubscriptionsClient + + def __init__(self, + session: Session, + base_url: Optional[str] = None) -> None: + """ + Parameters: + session: Open session connected to server. + base_url: The base URL to use. Defaults to production subscriptions + API base url. + """ + + self._client = SubscriptionsClient(session, base_url) + + def list_subscriptions(self, + status: Optional[Sequence[str]] = None, + limit: int = 100, + created: Optional[str] = None, + end_time: Optional[str] = None, + hosting: Optional[bool] = None, + name__contains: Optional[str] = None, + name: Optional[str] = None, + source_type: Optional[str] = None, + start_time: Optional[str] = None, + sort_by: Optional[str] = None, + updated: Optional[str] = None) -> Iterator[dict]: + """Iterate over list of account subscriptions with optional filtering. + + Note: + The name of this method is based on the API's method name. + This method provides iteration over subscriptions, it does + not return a list. + + Args: + created (str): filter by created time or interval. + end_time (str): filter by end time or interval. + hosting (bool): only return subscriptions that contain a + hosting block (e.g. SentinelHub hosting). + name__contains (str): only return subscriptions with a + name that contains the given string. + name (str): filter by name. + source_type (str): filter by source type. + start_time (str): filter by start time or interval. + status (Set[str]): include subscriptions with a status in this set. + sort_by (str): fields to sort subscriptions by. Multiple + fields can be specified, separated by commas. The sort + direction can be specified by appending ' ASC' or ' + DESC' to the field name. The default sort direction is + ascending. When multiple fields are specified, the sort + order is applied in the order the fields are listed. + + Supported fields: name, created, updated, start_time, end_time + + Examples: + * "name" + * "name DESC" + * "name,end_time DESC,start_time" + updated (str): filter by updated time or interval. + limit (int): limit the number of subscriptions in the + results. When set to 0, no maximum is applied. + TODO: user_id + + Datetime args (created, end_time, start_time, updated) can either be a + date-time or an interval, open or closed. Date and time expressions adhere + to RFC 3339. Open intervals are expressed using double-dots. + + Examples: + * A date-time: "2018-02-12T23:20:50Z" + * A closed interval: "2018-02-12T00:00:00Z/2018-03-18T12:31:12Z" + * Open intervals: "2018-02-12T00:00:00Z/.." or "../2018-03-18T12:31:12Z" + + Yields: + dict: a description of a subscription. + + Raises: + APIError: on an API server error. 
+ ClientError: on a client error. + """ + + results = self._client.list_subscriptions(status, + limit, + created, + end_time, + hosting, + name__contains, + name, + source_type, + start_time, + sort_by, + updated) + + try: + while True: + yield self._client.call_sync(results.__anext__()) + except StopAsyncIteration: + pass + + def create_subscription(self, request: Dict) -> Dict: + """Create a Subscription. + + Args: + request (dict): description of a subscription. + + Returns: + dict: description of created subscription. + + Raises: + APIError: on an API server error. + ClientError: on a client error. + """ + return self._client.call_sync( + self._client.create_subscription(request)) + + def cancel_subscription(self, subscription_id: str) -> None: + """Cancel a Subscription. + + Args: + subscription_id (str): id of subscription to cancel. + + Returns: + None + + Raises: + APIError: on an API server error. + ClientError: on a client error. + """ + return self._client.call_sync( + self._client.cancel_subscription(subscription_id)) + + def update_subscription(self, subscription_id: str, request: dict) -> dict: + """Update (edit) a Subscription via PUT. + + Args + subscription_id (str): id of the subscription to update. + request (dict): subscription content for update, full + payload is required. + + Returns: + dict: description of the updated subscription. + + Raises: + APIError: on an API server error. + ClientError: on a client error. + """ + return self._client.call_sync( + self._client.update_subscription(subscription_id, request)) + + def patch_subscription(self, subscription_id: str, + request: Dict[str, Any]) -> Dict[str, Any]: + """Update (edit) a Subscription via PATCH. + + Args + subscription_id (str): id of the subscription to update. + request (dict): subscription content for update, only + attributes to update are required. + + Returns: + dict: description of the updated subscription. + + Raises: + APIError: on an API server error. + ClientError: on a client error. + """ + return self._client.call_sync( + self._client.patch_subscription(subscription_id, request)) + + def get_subscription(self, subscription_id: str) -> Dict[str, Any]: + """Get a description of a Subscription. + + Args: + subscription_id (str): id of a subscription. + + Returns: + dict: description of the subscription. + + Raises: + APIError: on an API server error. + ClientError: on a client error. + """ + return self._client.call_sync( + self._client.get_subscription(subscription_id)) + + def get_results( + self, + subscription_id: str, + status: Optional[Sequence[Literal["created", + "queued", + "processing", + "failed", + "success"]]] = None, + limit: int = 100, + ) -> Iterator[Union[Dict[str, Any], str]]: + """Iterate over results of a Subscription. + + Notes: + The name of this method is based on the API's method name. This + method provides iteration over results, it does not get a + single result description or return a list of descriptions. + + Parameters: + subscription_id (str): id of a subscription. + status (Set[str]): pass result with status in this set, + filter out results with status not in this set. + limit (int): limit the number of subscriptions in the + results. When set to 0, no maximum is applied. + TODO: created, updated, completed, user_id + + Yields: + dict: description of a subscription results. + + Raises: + APIError: on an API server error. + ClientError: on a client error. 
+ """ + results = self._client.get_results(subscription_id, status, limit) + + try: + while True: + yield self._client.call_sync(results.__anext__()) + except StopAsyncIteration: + pass + + def get_results_csv( + self, + subscription_id: str, + status: Optional[Sequence[Literal["created", + "queued", + "processing", + "failed", + "success"]]] = None + ) -> Iterator[str]: + """Iterate over rows of results CSV for a Subscription. + + Parameters: + subscription_id (str): id of a subscription. + status (Set[str]): pass result with status in this set, + filter out results with status not in this set. + TODO: created, updated, completed, user_id + + Yields: + str: a row from a CSV file. + + Raises: + APIError: on an API server error. + ClientError: on a client error. + """ + results = self._client.get_results_csv(subscription_id, status) + # Note: retries are not implemented yet. This project has + # retry logic for HTTP requests, but does not handle errors + # during streaming. We may want to consider a retry decorator + # for this entire method a la stamina: + # https://github.com/hynek/stamina. + try: + while True: + yield self._client.call_sync(results.__anext__()) + except StopAsyncIteration: + pass diff --git a/pyproject.toml b/pyproject.toml index f0a132ed..3ea30205 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -34,7 +34,7 @@ dynamic = ["version"] [project.optional-dependencies] test = [ - "pytest", + "pytest==8.3.3", "anyio", "pytest-cov", "respx>=0.20", diff --git a/tests/integration/test_data_api.py b/tests/integration/test_data_api.py index a9af5cdd..1d752455 100644 --- a/tests/integration/test_data_api.py +++ b/tests/integration/test_data_api.py @@ -28,6 +28,8 @@ from planet.clients.data import (LIST_SORT_DEFAULT, LIST_SEARCH_TYPE_DEFAULT, SEARCH_SORT_DEFAULT) +from planet.sync.data import DataAPI +from planet.http import Session TEST_URL = 'http://www.mocknotrealurl.com/api/path' TEST_SEARCHES_URL = f'{TEST_URL}/searches' @@ -70,6 +72,11 @@ def search_response(item_descriptions): return response +@pytest.fixture(scope="module") +def data_api(): + return DataAPI(Session(), TEST_URL) + + @respx.mock @pytest.mark.anyio async def test_search_basic(item_descriptions, search_response, session): @@ -104,6 +111,38 @@ async def test_search_basic(item_descriptions, search_response, session): assert items_list == item_descriptions +@respx.mock +def test_search_basic_sync(item_descriptions, search_response, data_api): + + quick_search_url = f'{TEST_URL}/quick-search' + next_page_url = f'{TEST_URL}/blob/?page_marker=IAmATest' + + item1, item2, item3 = item_descriptions + page1_response = { + "_links": { + "_next": next_page_url + }, "features": [item1, item2] + } + mock_resp1 = httpx.Response(HTTPStatus.OK, json=page1_response) + respx.post(quick_search_url).return_value = mock_resp1 + + page2_response = {"_links": {"_self": next_page_url}, "features": [item3]} + mock_resp2 = httpx.Response(HTTPStatus.OK, json=page2_response) + respx.get(next_page_url).return_value = mock_resp2 + + items_list = list(data_api.search(['PSScene'])) + + # check that request is correct + expected_request = { + "item_types": ["PSScene"], "filter": data_filter.empty_filter() + } + actual_body = json.loads(respx.calls[0].request.content) + assert actual_body == expected_request + + # check that all of the items were returned unchanged + assert items_list == item_descriptions + + @respx.mock @pytest.mark.anyio async def test_search_name(item_descriptions, search_response, session): @@ -186,6 +225,48 @@ async def 
test_search_geometry(geom_fixture, assert items_list == item_descriptions +@respx.mock +@pytest.mark.parametrize("geom_fixture", [('geom_geojson'), + ('geom_reference')]) +def test_search_geometry_sync(geom_fixture, + item_descriptions, + data_api, + request): + + quick_search_url = f'{TEST_URL}/quick-search' + next_page_url = f'{TEST_URL}/blob/?page_marker=IAmATest' + + item1, item2, item3 = item_descriptions + page1_response = { + "_links": { + "_next": next_page_url + }, "features": [item1, item2] + } + mock_resp1 = httpx.Response(HTTPStatus.OK, json=page1_response) + respx.post(quick_search_url).return_value = mock_resp1 + + page2_response = {"_links": {"_self": next_page_url}, "features": [item3]} + mock_resp2 = httpx.Response(HTTPStatus.OK, json=page2_response) + respx.get(next_page_url).return_value = mock_resp2 + + geom = request.getfixturevalue(geom_fixture) + items_list = list( + data_api.search(['PSScene'], name='quick_search', geometry=geom)) + # check that request is correct + expected_request = { + "item_types": ["PSScene"], + "geometry": geom, + "filter": data_filter.empty_filter(), + "name": "quick_search" + } + actual_body = json.loads(respx.calls[0].request.content) + + assert actual_body == expected_request + + # check that all of the items were returned unchanged + assert items_list == item_descriptions + + @respx.mock @pytest.mark.anyio async def test_search_filter(item_descriptions, @@ -343,6 +424,42 @@ async def test_create_search_basic(search_filter, session): assert search == page_response +@respx.mock +def test_create_search_basic_sync(search_filter, data_api): + + page_response = { + "__daily_email_enabled": False, + "_links": { + "_self": "string", "thumbnail": "string" + }, + "created": "2019-08-24T14:15:22Z", + "filter": search_filter, + "id": "string", + "last_executed": "2019-08-24T14:15:22Z", + "name": "test", + "updated": "2019-08-24T14:15:22Z" + } + mock_resp = httpx.Response(HTTPStatus.OK, json=page_response) + respx.post(TEST_SEARCHES_URL).return_value = mock_resp + + search = data_api.create_search(item_types=['PSScene'], + search_filter=search_filter, + name='test') + + # check that request is correct + expected_request = { + "item_types": ["PSScene"], + "filter": search_filter, + "name": "test", + "__daily_email_enabled": False + } + actual_body = json.loads(respx.calls[0].request.content) + assert actual_body == expected_request + + # check the response is returned unaltered + assert search == page_response + + @respx.mock @pytest.mark.anyio async def test_create_search_basic_positional_args(search_filter, session): @@ -430,6 +547,15 @@ async def test_get_search_success(search_id, search_result, session): assert search_result == search +@respx.mock +def test_get_search_success_sync(search_id, search_result, data_api): + get_url = f'{TEST_SEARCHES_URL}/{search_id}' + mock_resp = httpx.Response(HTTPStatus.OK, json=search_result) + respx.get(get_url).return_value = mock_resp + search = data_api.get_search(search_id) + assert search_result == search + + @respx.mock @pytest.mark.anyio async def test_get_search_id_doesnt_exist(search_id, session): @@ -487,6 +613,44 @@ async def test_update_search_basic(search_filter, session): assert search == page_response +@respx.mock +def test_update_search_basic_sync(search_filter, data_api): + + page_response = { + "__daily_email_enabled": False, + "_links": { + "_self": "string", "thumbnail": "string" + }, + "created": "2019-08-24T14:15:22Z", + "filter": search_filter, + "id": VALID_SEARCH_ID, + "last_executed": 
"2019-08-24T14:15:22Z", + "name": "test", + "updated": "2019-08-24T14:15:22Z" + } + mock_resp = httpx.Response(HTTPStatus.OK, json=page_response) + respx.put( + f'{TEST_SEARCHES_URL}/{VALID_SEARCH_ID}').return_value = mock_resp + + search = data_api.update_search(VALID_SEARCH_ID, + item_types=['PSScene'], + search_filter=search_filter, + name='test') + + # check that request is correct + expected_request = { + "item_types": ["PSScene"], + "filter": search_filter, + "name": "test", + "__daily_email_enabled": False + } + actual_body = json.loads(respx.calls[0].request.content) + assert actual_body == expected_request + + # check the response is returned unaltered + assert search == page_response + + @respx.mock @pytest.mark.anyio async def test_update_search_basic_positional_args(search_filter, session): @@ -546,6 +710,22 @@ async def test_list_searches_success(limit, assert route.called +@respx.mock +@pytest.mark.parametrize("limit, expected_list_length", [(None, 4), (3, 3)]) +def test_list_searches_success_sync(limit, + expected_list_length, + search_result, + data_api): + page1_response = {"_links": {}, "searches": [search_result] * 4} + route = respx.get(TEST_SEARCHES_URL) + route.return_value = httpx.Response(200, json=page1_response) + + assert len(list( + data_api.list_searches(limit=limit))) == expected_list_length + + assert route.called + + @respx.mock @pytest.mark.anyio @pytest.mark.parametrize("sort, rel_url", @@ -619,6 +799,21 @@ async def test_delete_search(retcode, expectation, session): assert route.called +@respx.mock +@pytest.mark.parametrize("retcode, expectation", + [(204, does_not_raise()), + (404, pytest.raises(exceptions.APIError))]) +def test_delete_search_sync(retcode, expectation, data_api): + mock_resp = httpx.Response(retcode) + route = respx.delete(f'{TEST_SEARCHES_URL}/{VALID_SEARCH_ID}') + route.return_value = mock_resp + + with expectation: + data_api.delete_search(VALID_SEARCH_ID) + + assert route.called + + @respx.mock @pytest.mark.anyio @pytest.mark.parametrize("search_id, valid", [(VALID_SEARCH_ID, True), @@ -660,6 +855,44 @@ async def test_run_search_basic(item_descriptions, [i async for i in cl.run_search(search_id)] +@respx.mock +@pytest.mark.parametrize("search_id, valid", [(VALID_SEARCH_ID, True), + ('invalid', False)]) +@pytest.mark.parametrize("limit, expected_count", [(None, 3), (2, 2)]) +def test_run_search_basic_sync(item_descriptions, + data_api, + search_id, + valid, + limit, + expected_count): + """Ensure run_search is successful and handles search_id and limit""" + next_page_url = f'{TEST_URL}/blob/?page_marker=IAmATest' + item1, item2, item3 = item_descriptions + page1_response = { + "_links": { + "_next": next_page_url + }, "features": [item1, item2] + } + + route = respx.get(f'{TEST_SEARCHES_URL}/{search_id}/results') + route.return_value = httpx.Response(204, json=page1_response) + + page2_response = {"_links": {"_self": next_page_url}, "features": [item3]} + mock_resp2 = httpx.Response(HTTPStatus.OK, json=page2_response) + respx.get(next_page_url).return_value = mock_resp2 + + if valid: + items_list = list(data_api.run_search(search_id, limit=limit)) + + assert route.called + + # check that all of the items were returned unchanged + assert items_list == item_descriptions[:expected_count] + else: + with pytest.raises(exceptions.ClientError): + list(data_api.run_search(search_id)) + + @respx.mock @pytest.mark.anyio @pytest.mark.parametrize("sort, rel_url, valid", @@ -744,6 +977,38 @@ async def test_get_stats_success(search_filter, 
session): assert stats == page_response +@respx.mock +def test_get_stats_success_sync(search_filter, data_api): + + page_response = { + "buckets": [ + { + "count": 433638, "start_time": "2022-01-01T00:00:00.000000Z" + }, + { + "count": 431924, "start_time": "2022-01-02T00:00:00.000000Z" + }, + { + "count": 417138, "start_time": "2022-01-03T00:00:00.000000Z" + }, + ], + } + mock_resp = httpx.Response(HTTPStatus.OK, json=page_response) + respx.post(TEST_STATS_URL).return_value = mock_resp + + stats = data_api.get_stats(['PSScene'], search_filter, 'day') + + # check that request is correct + expected_request = { + "item_types": ["PSScene"], "filter": search_filter, "interval": "day" + } + actual_body = json.loads(respx.calls[0].request.content) + assert actual_body == expected_request + + # check the response is returned unaltered + assert stats == page_response + + @respx.mock @pytest.mark.anyio async def test_get_stats_invalid_interval(search_filter, session): @@ -797,6 +1062,48 @@ async def test_list_item_assets_success(session): assert assets == page_response +@respx.mock +def test_list_item_assets_success_sync(data_api): + item_type_id = 'PSScene' + item_id = '20221003_002705_38_2461' + assets_url = f'{TEST_URL}/item-types/{item_type_id}/items/{item_id}/assets' + + page_response = { + "basic_analytic_4b": { + "_links": { + "_self": + "SELFURL", + "activate": + "ACTIVATEURL", + "type": + "https://api.planet.com/data/v1/asset-types/basic_analytic_4b" + }, + "_permissions": ["download"], + "md5_digest": None, + "status": "inactive", + "type": "basic_analytic_4b" + }, + "basic_udm2": { + "_links": { + "_self": "SELFURL", + "activate": "ACTIVATEURL", + "type": "https://api.planet.com/data/v1/asset-types/basic_udm2" + }, + "_permissions": ["download"], + "md5_digest": None, + "status": "inactive", + "type": "basic_udm2" + } + } + mock_resp = httpx.Response(HTTPStatus.OK, json=page_response) + respx.get(assets_url).return_value = mock_resp + + assets = data_api.list_item_assets(item_type_id, item_id) + + # check the response is returned unaltered + assert assets == page_response + + @respx.mock @pytest.mark.anyio async def test_list_item_assets_missing(session): @@ -863,6 +1170,53 @@ async def test_get_asset(asset_type_id, expectation, session): assert asset == basic_udm2_asset +@respx.mock +@pytest.mark.parametrize("asset_type_id, expectation", + [('basic_udm2', does_not_raise()), + ('invalid', pytest.raises(exceptions.ClientError))]) +def test_get_asset_sync(asset_type_id, expectation, data_api): + item_type_id = 'PSScene' + item_id = '20221003_002705_38_2461' + assets_url = f'{TEST_URL}/item-types/{item_type_id}/items/{item_id}/assets' + + basic_udm2_asset = { + "_links": { + "_self": "SELFURL", + "activate": "ACTIVATEURL", + "type": "https://api.planet.com/data/v1/asset-types/basic_udm2" + }, + "_permissions": ["download"], + "md5_digest": None, + "status": "inactive", + "type": "basic_udm2" + } + + page_response = { + "basic_analytic_4b": { + "_links": { + "_self": + "SELFURL", + "activate": + "ACTIVATEURL", + "type": + "https://api.planet.com/data/v1/asset-types/basic_analytic_4b" + }, + "_permissions": ["download"], + "md5_digest": None, + "status": "inactive", + "type": "basic_analytic_4b" + }, + "basic_udm2": basic_udm2_asset + } + + mock_resp = httpx.Response(HTTPStatus.OK, json=page_response) + respx.get(assets_url).return_value = mock_resp + + with expectation: + asset = data_api.get_asset(item_type_id, item_id, asset_type_id) + assert asset == basic_udm2_asset + + @respx.mock 
@pytest.mark.anyio @pytest.mark.parametrize("status, expectation", [('inactive', True), @@ -892,6 +1246,33 @@ async def test_activate_asset_success(status, expectation, session): assert route.called == expectation +@respx.mock +@pytest.mark.parametrize("status, expectation", [('inactive', True), + ('active', False)]) +def test_activate_asset_success_sync(status, expectation, data_api): + activate_url = f'{TEST_URL}/activate' + + mock_resp = httpx.Response(HTTPStatus.OK) + route = respx.get(activate_url) + route.return_value = mock_resp + + basic_udm2_asset = { + "_links": { + "_self": "SELFURL", + "activate": activate_url, + "type": "https://api.planet.com/data/v1/asset-types/basic_udm2" + }, + "_permissions": ["download"], + "md5_digest": None, + "status": status, + "type": "basic_udm2" + } + + data_api.activate_asset(basic_udm2_asset) + + assert route.called == expectation + + @respx.mock @pytest.mark.anyio async def test_activate_asset_invalid_asset(session): @@ -934,6 +1315,37 @@ async def test_wait_asset_success(session): assert asset == basic_udm2_asset_active +@respx.mock +def test_wait_asset_success_sync(data_api): + asset_url = f'{TEST_URL}/asset' + + basic_udm2_asset = { + "_links": { + "_self": asset_url, + "activate": "ACTIVATEURL", + "type": "https://api.planet.com/data/v1/asset-types/basic_udm2" + }, + "_permissions": ["download"], + "md5_digest": None, + "status": 'activating', + "type": "basic_udm2" + } + + basic_udm2_asset_active = copy.deepcopy(basic_udm2_asset) + basic_udm2_asset_active['status'] = 'active' + + route = respx.get(asset_url) + route.side_effect = [ + httpx.Response(HTTPStatus.OK, json=basic_udm2_asset), + httpx.Response(HTTPStatus.OK, json=basic_udm2_asset), + httpx.Response(HTTPStatus.OK, json=basic_udm2_asset_active) + ] + + asset = data_api.wait_asset(basic_udm2_asset, delay=0) + + assert asset == basic_udm2_asset_active + + @respx.mock @pytest.mark.anyio async def test_wait_asset_max_attempts(session): @@ -1030,6 +1442,71 @@ async def _stream_img(): assert len(path.read_bytes()) == 527 +@respx.mock +@pytest.mark.anyio +@pytest.mark.parametrize("exists, overwrite", + [(False, False), (True, False), (True, True), + (False, True)]) +async def test_download_asset_sync(exists, + overwrite, + tmpdir, + open_test_img, + data_api): + # NOTE: this is a slightly edited version of test_download_asset_img from + # tests/integration/test_orders_api + dl_url = f'{TEST_URL}/1?token=IAmAToken' + + img_headers = { + 'Content-Type': 'image/tiff', + 'Content-Length': '527', + 'Content-Disposition': 'attachment; filename="img.tif"' + } + + async def _stream_img(): + data = open_test_img.read() + v = memoryview(data) + + chunksize = 100 + for i in range(math.ceil(len(v) / (chunksize))): + yield v[i * chunksize:min((i + 1) * chunksize, len(v))] + + # populate request parameter to avoid respx cloning, which throws + # an error caused by respx and not this code + # https://github.com/lundberg/respx/issues/130 + mock_resp = httpx.Response(HTTPStatus.OK, + stream=_stream_img(), + headers=img_headers, + request='donotcloneme') + respx.get(dl_url).return_value = mock_resp + + basic_udm2_asset = { + "_links": { + "_self": "SELFURL", + "activate": "ACTIVATEURL", + "type": "https://api.planet.com/data/v1/asset-types/basic_udm2" + }, + "_permissions": ["download"], + "md5_digest": None, + "status": 'active', + "location": dl_url, + "type": "basic_udm2" + } + + if exists: + Path(tmpdir, 'img.tif').write_text('i exist') + + path = data_api.download_asset(basic_udm2_asset, + 
directory=tmpdir, + overwrite=overwrite) + assert path.name == 'img.tif' + assert path.is_file() + + if exists and not overwrite: + assert path.read_text() == 'i exist' + else: + assert len(path.read_bytes()) == 527 + + @respx.mock @pytest.mark.anyio @pytest.mark.parametrize( @@ -1062,3 +1539,36 @@ async def test_validate_checksum(hashes_match, md5_entry, expectation, tmpdir): with expectation: DataClient.validate_checksum(basic_udm2_asset, testfile) + + +@respx.mock +@pytest.mark.parametrize( + "hashes_match, md5_entry, expectation", + [(True, True, does_not_raise()), + (False, True, pytest.raises(exceptions.ClientError)), + (True, False, pytest.raises(exceptions.ClientError))]) +def test_validate_checksum_sync(hashes_match, md5_entry, expectation, tmpdir): + test_bytes = b'foo bar' + testfile = Path(tmpdir / 'test.txt') + testfile.write_bytes(test_bytes) + + hash_md5 = hashlib.md5(test_bytes).hexdigest() + + basic_udm2_asset = { + "_links": { + "_self": "SELFURL", + "activate": "ACTIVATEURL", + "type": "https://api.planet.com/data/v1/asset-types/basic_udm2" + }, + "_permissions": ["download"], + "status": 'active', + "location": "DOWNLOADURL", + "type": "basic_udm2" + } + + if md5_entry: + asset_hash = hash_md5 if hashes_match else 'invalid' + basic_udm2_asset["md5_digest"] = asset_hash + + with expectation: + DataAPI.validate_checksum(basic_udm2_asset, testfile) diff --git a/tests/integration/test_orders_api.py b/tests/integration/test_orders_api.py index 0257bf5f..09c77d51 100644 --- a/tests/integration/test_orders_api.py +++ b/tests/integration/test_orders_api.py @@ -29,6 +29,7 @@ from planet import OrdersClient, exceptions, reporting from planet.clients.orders import OrderStates +from planet.sync import Planet TEST_URL = 'http://www.MockNotRealURL.com/api/path' TEST_BULK_CANCEL_URL = f'{TEST_URL}/bulk/orders/v2/cancel' @@ -119,6 +120,30 @@ async def test_list_orders_basic(order_descriptions, session): assert order_descriptions == [o async for o in cl.list_orders()] +@respx.mock +def test_list_orders_basic_sync(order_descriptions, session): + next_page_url = TEST_ORDERS_URL + 'blob/?page_marker=IAmATest' + + order1, order2, order3 = order_descriptions + + page1_response = { + "_links": { + "_self": "string", "next": next_page_url + }, + "orders": [order1, order2] + } + mock_resp = httpx.Response(HTTPStatus.OK, json=page1_response) + respx.get(TEST_ORDERS_URL).return_value = mock_resp + + page2_response = {"_links": {"_self": next_page_url}, "orders": [order3]} + mock_resp2 = httpx.Response(HTTPStatus.OK, json=page2_response) + respx.get(next_page_url).return_value = mock_resp2 + + pl = Planet() + pl.orders._client._base_url = TEST_URL + assert order_descriptions == list(pl.orders.list_orders()) + + @respx.mock @pytest.mark.anyio async def test_list_orders_filtering_and_sorting(order_descriptions, session): @@ -147,6 +172,28 @@ async def test_list_orders_filtering_and_sorting(order_descriptions, session): ] +@respx.mock +def test_list_orders_state_success_sync(order_descriptions, session): + list_url = TEST_ORDERS_URL + '?source_type=all&state=failed' + + order1, order2, _ = order_descriptions + + page1_response = { + "_links": { + "_self": "string" + }, "orders": [order1, order2] + } + mock_resp = httpx.Response(HTTPStatus.OK, json=page1_response) + respx.get(list_url).return_value = mock_resp + + pl = Planet() + pl.orders._client._base_url = TEST_URL + + # if the value of state doesn't get sent as a url parameter, + # the mock will fail and this test will fail + assert [order1, 
order2] == list(pl.orders.list_orders(state='failed')) + + @pytest.mark.anyio async def test_list_orders_state_invalid(session): cl = OrdersClient(session, base_url=TEST_URL) @@ -155,6 +202,14 @@ async def test_list_orders_state_invalid(session): [o async for o in cl.list_orders(state='invalidstate')] +def test_list_orders_state_invalid_sync(session): + pl = Planet() + pl.orders._client._base_url = TEST_URL + + with pytest.raises(exceptions.ClientError): + list(pl.orders.list_orders(state='invalidstate')) + + @respx.mock @pytest.mark.anyio @pytest.mark.parametrize("limit,limited_list_length", [(None, 100), (0, 102), @@ -207,6 +262,23 @@ async def test_create_order_basic(oid, assert json.loads(route.calls.last.request.content) == order_request +@respx.mock +def test_create_order_basic_sync(oid, + order_description, + order_request, + session): + route = respx.post(TEST_ORDERS_URL) + route.return_value = httpx.Response(HTTPStatus.OK, json=order_description) + + pl = Planet() + pl.orders._client._base_url = TEST_URL + order = pl.orders.create_order(order_request) + + assert order == order_description + + assert json.loads(route.calls.last.request.content) == order_request + + @respx.mock @pytest.mark.anyio async def test_create_order_bad_item_type(order_request, session): @@ -266,6 +338,18 @@ async def test_get_order(oid, order_description, session): assert order_description == order +@respx.mock +def test_get_order_sync(oid, order_description, session): + get_url = f'{TEST_ORDERS_URL}/{oid}' + mock_resp = httpx.Response(HTTPStatus.OK, json=order_description) + respx.get(get_url).return_value = mock_resp + + pl = Planet() + pl.orders._client._base_url = TEST_URL + order = pl.orders.get_order(oid) + assert order_description == order + + @pytest.mark.anyio async def test_get_order_invalid_id(session): cl = OrdersClient(session, base_url=TEST_URL) @@ -302,6 +386,20 @@ async def test_cancel_order(oid, order_description, session): assert json_resp == example_resp +@respx.mock +def test_cancel_order_sync(oid, order_description, session): + cancel_url = f'{TEST_ORDERS_URL}/{oid}' + order_description['state'] = 'cancelled' + mock_resp = httpx.Response(HTTPStatus.OK, json=order_description) + example_resp = mock_resp.json() + respx.put(cancel_url).return_value = mock_resp + + pl = Planet() + pl.orders._client._base_url = TEST_URL + json_resp = pl.orders.cancel_order(oid) + assert json_resp == example_resp + + @pytest.mark.anyio async def test_cancel_order_invalid_id(session): cl = OrdersClient(session, base_url=TEST_URL) @@ -372,6 +470,39 @@ async def test_cancel_orders_by_ids(session, oid): assert actual_body == expected_body +@respx.mock +def test_cancel_orders_by_ids_sync(session, oid): + oid2 = '5ece1dc0-ea81-11eb-837c-acde48001122' + test_ids = [oid, oid2] + example_result = { + "result": { + "succeeded": { + "count": 1 + }, + "failed": { + "count": + 1, + "failures": [{ + "order_id": oid2, + "message": "Order not in a cancellable state", + }] + } + } + } + mock_resp = httpx.Response(HTTPStatus.OK, json=example_result) + respx.post(TEST_BULK_CANCEL_URL).return_value = mock_resp + + pl = Planet() + pl.orders._client._base_url = TEST_URL + res = pl.orders.cancel_orders(test_ids) + + assert res == example_result + + expected_body = {"order_ids": test_ids} + actual_body = json.loads(respx.calls.last.request.content) + assert actual_body == expected_body + + @pytest.mark.anyio async def test_cancel_orders_by_ids_invalid_id(session, oid): cl = OrdersClient(session, base_url=TEST_URL) @@ -403,6 +534,30 
@@ async def test_cancel_orders_all(session): assert actual_body == {} +@respx.mock +def test_cancel_orders_all_sync(session): + example_result = { + "result": { + "succeeded": { + "count": 2 + }, "failed": { + "count": 0, "failures": [] + } + } + } + mock_resp = httpx.Response(HTTPStatus.OK, json=example_result) + respx.post(TEST_BULK_CANCEL_URL).return_value = mock_resp + + pl = Planet() + pl.orders._client._base_url = TEST_URL + res = pl.orders.cancel_orders() + + assert res == example_result + + actual_body = json.loads(respx.calls.last.request.content) + assert actual_body == {} + + @respx.mock @pytest.mark.anyio async def test_wait_default(oid, order_description, session): @@ -425,6 +580,28 @@ async def test_wait_default(oid, order_description, session): assert state == 'success' +@respx.mock +def test_wait_default_sync(oid, order_description, session): + get_url = f'{TEST_ORDERS_URL}/{oid}' + + order_description2 = copy.deepcopy(order_description) + order_description2['state'] = 'running' + order_description3 = copy.deepcopy(order_description) + order_description3['state'] = 'success' + + route = respx.get(get_url) + route.side_effect = [ + httpx.Response(HTTPStatus.OK, json=order_description), + httpx.Response(HTTPStatus.OK, json=order_description2), + httpx.Response(HTTPStatus.OK, json=order_description3) + ] + + pl = Planet() + pl.orders._client._base_url = TEST_URL + state = pl.orders.wait(oid, delay=0) + assert state == 'success' + + @respx.mock @pytest.mark.anyio async def test_wait_callback(oid, order_description, session): @@ -453,6 +630,34 @@ async def test_wait_callback(oid, order_description, session): mock_callback.assert_has_calls(expected) +@respx.mock +def test_wait_callback_sync(oid, order_description, session): + get_url = f'{TEST_ORDERS_URL}/{oid}' + + order_description2 = copy.deepcopy(order_description) + order_description2['state'] = 'running' + order_description3 = copy.deepcopy(order_description) + order_description3['state'] = 'success' + + route = respx.get(get_url) + route.side_effect = [ + httpx.Response(HTTPStatus.OK, json=order_description), + httpx.Response(HTTPStatus.OK, json=order_description2), + httpx.Response(HTTPStatus.OK, json=order_description3) + ] + + mock_bar = create_autospec(reporting.StateBar) + mock_callback = mock_bar.update_state + + pl = Planet() + pl.orders._client._base_url = TEST_URL + pl.orders.wait(oid, delay=0, callback=mock_callback) + + # check state was sent to callback as expected + expected = [call(s) for s in ['queued', 'running', 'success']] + mock_callback.assert_has_calls(expected) + + @respx.mock @pytest.mark.anyio async def test_wait_state(oid, order_description, session): @@ -540,6 +745,26 @@ async def test_aggegated_order_stats(session): assert res == example_stats +@respx.mock +def test_aggegated_order_stats_sync(session): + example_stats = { + "organization": { + "queued_orders": 0, "running_orders": 6 + }, + "user": { + "queued_orders": 0, "running_orders": 0 + } + } + mock_resp = httpx.Response(HTTPStatus.OK, json=example_stats) + respx.get(TEST_STATS_URL).return_value = mock_resp + + pl = Planet() + pl.orders._client._base_url = TEST_URL + res = pl.orders.aggregated_order_stats() + + assert res == example_stats + + @respx.mock @pytest.mark.anyio async def test_download_asset_md(tmpdir, session): @@ -570,6 +795,36 @@ async def test_download_asset_md(tmpdir, session): assert Path(filename).name == 'metadata.json' +@respx.mock +def test_download_asset_md_sync(tmpdir, session): + dl_url = TEST_DOWNLOAD_URL + 
'/1?token=IAmAToken' + + md_json = {'key': 'value'} + md_headers = { + 'Content-Type': 'application/json', + 'Content-Disposition': 'attachment; filename="metadata.json"' + } + + mock_redirect = httpx.Response(HTTPStatus.FOUND, + headers={ + 'Location': TEST_DOWNLOAD_ACTUAL_URL, + 'Content-Length': '0' + }) + mock_resp = httpx.Response(HTTPStatus.OK, json=md_json, headers=md_headers) + + respx.get(dl_url).return_value = mock_redirect + respx.get(TEST_DOWNLOAD_ACTUAL_URL).return_value = mock_resp + + pl = Planet() + pl.orders._client._base_url = TEST_URL + filename = pl.orders.download_asset(dl_url, directory=str(tmpdir)) + + with open(filename) as f: + assert json.load(f) == {'key': 'value'} + + assert Path(filename).name == 'metadata.json' + + @respx.mock @pytest.mark.anyio async def test_download_asset_img(tmpdir, open_test_img, session): @@ -648,6 +903,50 @@ async def test_validate_checksum_checksum(tmpdir, OrdersClient.validate_checksum(Path(tmpdir), checksum) +@respx.mock +@pytest.mark.parametrize("checksum", [("MD5"), ("SHA256")]) +@pytest.mark.parametrize( + "asset1_bytes, expectation", + [(b"1", does_not_raise()), (b"1", does_not_raise()), + (b"does not match", pytest.raises(exceptions.ClientError))]) +def test_validate_checksum_checksum_sync(tmpdir, + asset1_bytes, + expectation, + checksum): + + itemtype1_dir = Path(tmpdir, 'itemtype1') + itemtype1_dir.mkdir() + + asset1 = itemtype1_dir / 'asset1.tif' + asset1.write_bytes(b"1") + + asset2 = itemtype1_dir / 'asset2.json' + asset2.write_bytes(b'{"foo": "bar"}') + asset2_bytes = asset2.read_bytes() + + manifest_data = { + "name": "", + "files": [ + { + "path": "itemtype1/asset1.tif", + "digests": { + "md5": hashlib.md5(asset1_bytes).hexdigest(), + "sha256": hashlib.sha256(asset1_bytes).hexdigest()} + }, { + "path": "itemtype1/asset2.json", + "digests": { + "md5": hashlib.md5(asset2_bytes).hexdigest(), + "sha256": hashlib.sha256(asset2_bytes).hexdigest()} + }] + } # yapf: disable + Path(tmpdir, 'manifest.json').write_text(json.dumps(manifest_data)) + + pl = Planet() + + with expectation: + pl.orders.validate_checksum(Path(tmpdir), checksum) + + @respx.mock @pytest.mark.anyio @pytest.mark.parametrize( @@ -753,6 +1052,69 @@ async def test_download_order_success(results, assert json.load(f) == {'key2': 'value2'} +@respx.mock +@pytest.mark.parametrize( + "results, paths", + [(None, []), + ([], []), + ([{"location": f'{TEST_DOWNLOAD_URL}/1', + "name": "oid/itemtype1/asset.json"}, + {"location": f'{TEST_DOWNLOAD_URL}/2', + "name": "oid/itemtype2/asset.json"}, + ], + [Path('oid', 'itemtype1', 'asset.json'), + Path('oid', 'itemtype2', 'asset.json'), + ]) + ]) # yapf: disable +def test_download_order_success_sync(results, + paths, + tmpdir, + order_description, + oid, + session): + + # Mock an HTTP response for download + order_description['state'] = 'success' + order_description['_links']['results'] = results + + get_url = f'{TEST_ORDERS_URL}/{oid}' + mock_resp = httpx.Response(HTTPStatus.OK, json=order_description) + respx.get(get_url).return_value = mock_resp + + mock_resp1 = httpx.Response(HTTPStatus.OK, + json={'key': 'value'}, + headers={ + 'Content-Type': + 'application/json', + 'Content-Disposition': + 'attachment; filename="asset.json"' + }) + respx.get(f'{TEST_DOWNLOAD_URL}/1').return_value = mock_resp1 + + mock_resp2 = httpx.Response(HTTPStatus.OK, + json={'key2': 'value2'}, + headers={ + 'Content-Type': + 'application/json', + 'Content-Disposition': + 'attachment; filename="asset.json"' + }) + 
+    respx.get(f'{TEST_DOWNLOAD_URL}/2').return_value = mock_resp2
+
+    pl = Planet()
+    pl.orders._client._base_url = TEST_URL
+    filenames = pl.orders.download_order(oid, directory=str(tmpdir))
+
+    assert filenames == [Path(tmpdir, p) for p in paths]
+
+    if filenames:
+        with open(filenames[0]) as f:
+            assert json.load(f) == {'key': 'value'}
+
+        with open(filenames[1]) as f:
+            assert json.load(f) == {'key2': 'value2'}
+
 
 @respx.mock
 @pytest.mark.anyio
 async def test_download_order_state(tmpdir, order_description, oid, session):
@@ -772,6 +1134,25 @@ async def test_download_order_state(tmpdir, order_description, oid, session):
     await cl.download_order(oid, directory=str(tmpdir))
 
+
+@respx.mock
+def test_download_order_state_sync(tmpdir, order_description, oid, session):
+    dl_url1 = TEST_DOWNLOAD_URL + '/1?token=IAmAToken'
+    order_description['_links']['results'] = [
+        {
+            'location': dl_url1
+        },
+    ]
+
+    get_url = f'{TEST_ORDERS_URL}/{oid}'
+    mock_resp = httpx.Response(HTTPStatus.OK, json=order_description)
+    respx.get(get_url).return_value = mock_resp
+
+    pl = Planet()
+    pl.orders._client._base_url = TEST_URL
+    with pytest.raises(exceptions.ClientError):
+        pl.orders.download_order(oid, directory=str(tmpdir))
+
 
 @respx.mock
 @pytest.mark.anyio
 async def test_download_order_overwrite_true_preexisting_data(
diff --git a/tests/integration/test_subscriptions_api.py b/tests/integration/test_subscriptions_api.py
index 2b1c2f96..290ffcb4 100644
--- a/tests/integration/test_subscriptions_api.py
+++ b/tests/integration/test_subscriptions_api.py
@@ -10,6 +10,7 @@ import respx
 from respx.patterns import M
 
+from planet.sync import Planet
 from planet.clients.subscriptions import SubscriptionsClient
 from planet.exceptions import APIError, PagingError, ServerError
 from planet.http import Session
 
@@ -218,6 +219,20 @@ async def test_list_subscriptions_success(
     ]) == count
 
+
+@pytest.mark.parametrize("status, count", [({"running"}, 100), ({"failed"}, 0),
+                                           (None, 100)])
+@api_mock
+def test_list_subscriptions_success_sync(
+    status,
+    count,
+):
+    """Account subscriptions iterator yields expected descriptions."""
+    client = Planet()
+    client.subscriptions._client._base_url = TEST_URL
+    assert len(list(
+        client.subscriptions.list_subscriptions(status=status))) == count
+
 
 @pytest.mark.parametrize("source_type, count", [("catalog", 100),
                                                 ("soil_water_content", 0),
                                                 (None, 100)])
@@ -274,6 +289,18 @@ async def test_create_subscription_success():
     assert sub['name'] == 'test'
 
+
+@create_mock
+def test_create_subscription_success_sync():
+    """Subscription is created, description has the expected items."""
+
+    pl = Planet()
+    pl.subscriptions._client._base_url = TEST_URL
+    sub = pl.subscriptions.create_subscription({
+        'name': 'test', 'delivery': 'yes, please', 'source': 'test'
+    })
+    assert sub['name'] == 'test'
+
 
 @pytest.mark.anyio
 @create_mock
 async def test_create_subscription_with_hosting_success():
@@ -286,6 +313,17 @@ async def test_create_subscription_with_hosting_success():
     assert sub['name'] == 'test'
 
+
+@create_mock
+def test_create_subscription_with_hosting_success_sync():
+    """Subscription is created, description has the expected items."""
+    pl = Planet()
+    pl.subscriptions._client._base_url = TEST_URL
+    sub = pl.subscriptions.create_subscription({
+        'name': 'test', 'source': 'test', 'hosting': 'yes, please'
+    })
+    assert sub['name'] == 'test'
+
 
 @pytest.mark.anyio
 @failing_api_mock
 async def test_cancel_subscription_failure():
@@ -305,6 +343,14 @@ async def test_cancel_subscription_success():
     _ = await client.cancel_subscription("test")
 
+
+@cancel_mock
+def test_cancel_subscription_success_sync():
+    """Subscription is canceled, description has the expected items."""
+    pl = Planet()
+    pl.subscriptions._client._base_url = TEST_URL
+    _ = pl.subscriptions.cancel_subscription("test")
+
 
 @pytest.mark.anyio
 @failing_api_mock
 async def test_update_subscription_failure():
@@ -328,6 +374,18 @@ async def test_update_subscription_success():
     assert sub["delivery"] == "no, thanks"
 
+
+@update_mock
+def test_update_subscription_success_sync():
+    """Subscription is updated, description has the expected items."""
+    pl = Planet()
+    pl.subscriptions._client._base_url = TEST_URL
+    sub = pl.subscriptions.update_subscription(
+        "test", {
+            "name": "test", "delivery": "no, thanks", "source": "test"
+        })
+    assert sub["delivery"] == "no, thanks"
+
 
 @pytest.mark.anyio
 @patch_mock
 async def test_patch_subscription_success():
@@ -338,6 +396,15 @@ async def test_patch_subscription_success():
     assert sub["name"] == "test patch"
 
+
+@patch_mock
+def test_patch_subscription_success_sync():
+    """Subscription is patched, description has the expected items."""
+    pl = Planet()
+    pl.subscriptions._client._base_url = TEST_URL
+    sub = pl.subscriptions.patch_subscription("test", {"name": "test patch"})
+    assert sub["name"] == "test patch"
+
 
 @pytest.mark.anyio
 @failing_api_mock
 async def test_get_subscription_failure():
@@ -358,6 +425,15 @@ async def test_get_subscription_success(monkeypatch):
     assert sub['delivery'] == "yes, please"
 
+
+@get_mock
+def test_get_subscription_success_sync(monkeypatch):
+    """Subscription description fetched, has the expected items."""
+    pl = Planet()
+    pl.subscriptions._client._base_url = TEST_URL
+    sub = pl.subscriptions.get_subscription("test")
+    assert sub['delivery'] == "yes, please"
+
 
 @pytest.mark.anyio
 @failing_api_mock
 async def test_get_results_failure():
@@ -378,6 +454,15 @@ async def test_get_results_success():
     assert len(results) == 100
 
+
+@res_api_mock
+def test_get_results_success_sync():
+    """Subscription description fetched, has the expected items."""
+    pl = Planet()
+    pl.subscriptions._client._base_url = TEST_URL
+    results = list(pl.subscriptions.get_results("42"))
+    assert len(results) == 100
+
 
 @pytest.mark.anyio
 @res_api_mock
 async def test_get_results_csv():
@@ -389,6 +474,16 @@ async def test_get_results_csv():
     assert rows == [['id', 'status'], ['1234-abcd', 'SUCCESS']]
 
+
+@res_api_mock
+def test_get_results_csv_sync():
+    """Subscription CSV fetched, has the expected items."""
+    pl = Planet()
+    pl.subscriptions._client._base_url = TEST_URL
+    results = list(pl.subscriptions.get_results_csv("42"))
+    rows = list(csv.reader(results))
+    assert rows == [['id', 'status'], ['1234-abcd', 'SUCCESS']]
+
 
 paging_cycle_api_mock = respx.mock()
 
 # Identical next links is a hangup we want to avoid.