feature: provide a way to use pagination concurrently to retrieve objects using the SDK #159
I have a working example of this for both the sync and async clients.

Sync:

```python
from rich import print as rprint

from infrahub_sdk import Config, InfrahubClientSync

client = InfrahubClientSync(config=Config(pagination_size=2))

def main():
    branches = client.all(kind="OrganizationGeneric", batch=True)
    rprint(branches)

if __name__ == "__main__":
    main()
```

Async:

```python
from asyncio import run as aiorun

from rich import print as rprint

from infrahub_sdk import Config, InfrahubClient

client = InfrahubClient(config=Config(pagination_size=2))

async def create_data(number: int):
    data = {
        "name": f"Vendor {number}",
    }
    obj = await client.create(kind="OrganizationGeneric", data=data)
    await obj.save()
    print(f"New OrganizationGeneric created with the Id {obj.id}")

async def main():
    # for i in range(1, 1000):
    #     await create_data(i)
    branches = await client.all(kind="OrganizationGeneric", batch=True)
    rprint(len(branches))

if __name__ == "__main__":
    aiorun(main())
```

I have manually set the pagination size to 2 to slow things down, but without @wvandeun mentioned that
Component
Python SDK
Describe the Feature Request
When you execute a query that retrieves a large number of nodes from the database, using the filters or all method, the SDK leverages pagination to break the query into smaller pages.
The retrieval of these pages happens serially, which is not ideal. We can do this better and faster by retrieving the pages concurrently.
Pseudocode of what it could look like:
Describe the Use Case
Retrieval of a large number of nodes using a GraphQL query takes some time, since we retrieve the pages one by one. Fetching the pages concurrently should improve the speed.
Additional Information
No response