Pr file patch into MAIN #2806

Merged: 24 commits, May 23, 2024

Commits (24)
a11097a  added some null value condition checking on pr files for error:  (sgoggins, May 10, 2024)
793e100  small edit  (sgoggins, May 10, 2024)
0cc6a54  collection rate for secondary bump  (sgoggins, May 10, 2024)
4a18742  typo fix  (sgoggins, May 10, 2024)
b230d73  `is not None` instead.  (sgoggins, May 10, 2024)
f296a07  updated is not None logic:  (sgoggins, May 11, 2024)
8663655  better update:  (sgoggins, May 11, 2024)
ca6cb79  woops. one error.  (sgoggins, May 11, 2024)
0de7fdd  one more.  (sgoggins, May 11, 2024)
85bd70f  another tweak.  (sgoggins, May 11, 2024)
e9b83c6  and another  (sgoggins, May 11, 2024)
b244c60  another tweak, as the error just moved up:  (sgoggins, May 11, 2024)
478048d  Issue with extremely large repository pull_requests fixed by adding i…  (sgoggins, May 11, 2024)
e1f9950  Added error logging to dependency task for this error, which I think …  (sgoggins, May 11, 2024)
f50d124  changing frequency  (sgoggins, May 13, 2024)
f55ed92  Make retrieve_all_pr_data a generator  (Ulincsys, May 13, 2024)
d4e2762  Add un-pushed changes  (Ulincsys, May 13, 2024)
6e98367  Simplify collection  (ABrain7710, May 14, 2024)
8f596cc  Fix annotation  (ABrain7710, May 14, 2024)
f6d5680  small dial back of concurrency which didn't make sense.  (sgoggins, May 14, 2024)
03fc1fe  updated versions  (sgoggins, May 15, 2024)
2241f33  Fix issue where secondary tries to collect before core is collected  (ABrain7710, May 17, 2024)
16f6d6b  commented out the rebuilding of the dm_ tables. This should be rebuil…  (sgoggins, May 21, 2024)
fcbb819  Merge pull request #2804 from chaoss/main  (sgoggins, May 21, 2024)
README.md (4 changes: 2 additions & 2 deletions)
@@ -1,4 +1,4 @@
-# Augur NEW Release v0.63.3
+# Augur NEW Release v0.70.0

Augur is primarily a data engineering tool that makes it possible for data scientists to gather open source software community data. Less data carpentry for everyone else!
The primary way of looking at Augur data is through [8Knot](https://github.com/oss-aspen/8knot) ... A public instance of 8Knot is available at https://metrix.chaoss.io ... That is tied to a public instance of Augur at https://ai.chaoss.io
@@ -10,7 +10,7 @@
## NEW RELEASE ALERT!
### [If you want to jump right in, updated docker build/compose and bare metal installation instructions are available here](docs/new-install.md)

-Augur is now releasing a dramatically improved new version to the main branch. It is also available here: https://github.com/chaoss/augur/releases/tag/v0.63.3
+Augur is now releasing a dramatically improved new version to the main branch. It is also available here: https://github.com/chaoss/augur/releases/tag/v0.70.0

- The `main` branch is a stable version of our new architecture, which features:
- Dramatic improvement in the speed of large scale data collection (100,000+ repos). All data is obtained for 100k+ repos within 2 weeks.
augur/application/cli/backend.py (2 changes: 1 addition & 1 deletion)

@@ -43,7 +43,7 @@
@test_db_connection
@with_database
@click.pass_context
def start(ctx, disable_collection, development, port):

"""Start Augur's backend server."""

try:
@@ -73,13 +73,13 @@
worker_vmem_cap = get_value("Celery", 'worker_process_vmem_cap')

gunicorn_command = f"gunicorn -c {gunicorn_location} -b {host}:{port} augur.api.server:app --log-file gunicorn.log"
server = subprocess.Popen(gunicorn_command.split(" "))

time.sleep(3)
logger.info('Gunicorn webserver started...')
logger.info(f'Augur is running at: {"http" if development else "https"}://{host}:{port}')

processes = start_celery_worker_processes(float(worker_vmem_cap), disable_collection)

if os.path.exists("celerybeat-schedule.db"):
logger.info("Deleting old task schedule")
@@ -88,7 +88,7 @@
log_level = get_value("Logging", "log_level")
celery_beat_process = None
celery_command = f"celery -A augur.tasks.init.celery_app.celery_app beat -l {log_level.lower()}"
celery_beat_process = subprocess.Popen(celery_command.split(" "))

if not disable_collection:

@@ -153,7 +153,7 @@

frontend_worker = f"celery -A augur.tasks.init.celery_app.celery_app worker -l info --concurrency=1 -n frontend:{uuid.uuid4().hex}@%h -Q frontend"
max_process_estimate -= 1
process_list.append(subprocess.Popen(frontend_worker.split(" ")))

sleep_time += 6

if not disable_collection:
@@ -161,18 +161,18 @@
#2 processes are always reserved as a baseline.
scheduling_worker = f"celery -A augur.tasks.init.celery_app.celery_app worker -l info --concurrency=2 -n scheduling:{uuid.uuid4().hex}@%h -Q scheduling"
max_process_estimate -= 2
process_list.append(subprocess.Popen(scheduling_worker.split(" ")))

sleep_time += 6

#60% of estimate, Maximum value of 45
core_num_processes = determine_worker_processes(.6, 45)
logger.info(f"Starting core worker processes with concurrency={core_num_processes}")
core_worker = f"celery -A augur.tasks.init.celery_app.celery_app worker -l info --concurrency={core_num_processes} -n core:{uuid.uuid4().hex}@%h"
process_list.append(subprocess.Popen(core_worker.split(" ")))

sleep_time += 6

#20% of estimate, Maximum value of 25
-secondary_num_processes = determine_worker_processes(.25, 25)
+secondary_num_processes = determine_worker_processes(.25, 45)
logger.info(f"Starting secondary worker processes with concurrency={secondary_num_processes}")
secondary_worker = f"celery -A augur.tasks.init.celery_app.celery_app worker -l info --concurrency={secondary_num_processes} -n secondary:{uuid.uuid4().hex}@%h -Q secondary"
process_list.append(subprocess.Popen(secondary_worker.split(" ")))

@@ -183,7 +183,7 @@
logger.info(f"Starting facade worker processes with concurrency={facade_num_processes}")
facade_worker = f"celery -A augur.tasks.init.celery_app.celery_app worker -l info --concurrency={facade_num_processes} -n facade:{uuid.uuid4().hex}@%h -Q facade"

process_list.append(subprocess.Popen(facade_worker.split(" ")))

sleep_time += 6

time.sleep(sleep_time)
@@ -307,7 +307,7 @@
SET facade_status='Pending', facade_task_id=NULL
WHERE facade_status='Failed Clone' OR facade_status='Initializing';
"""))
#TODO: write timestamp for currently running repos.


def assign_orphan_repos_to_default_user(session):
query = s.sql.text("""
augur/application/cli/collection.py (2 changes: 1 addition & 1 deletion)

@@ -132,7 +132,7 @@ def determine_worker_processes(ratio,maximum):
sleep_time += 6

#20% of estimate, Maximum value of 25
-secondary_num_processes = determine_worker_processes(.25, 25)
+secondary_num_processes = determine_worker_processes(.25, 45)
logger.info(f"Starting secondary worker processes with concurrency={secondary_num_processes}")
secondary_worker = f"celery -A augur.tasks.init.celery_app.celery_app worker -l info --concurrency={secondary_num_processes} -n secondary:{uuid.uuid4().hex}@%h -Q secondary"
process_list.append(subprocess.Popen(secondary_worker.split(" ")))


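For context on the one-line change above (the same edit appears in backend.py): `determine_worker_processes(ratio, maximum)` sizes a Celery worker pool as a fraction of the machine's process budget, clamped to a ceiling, and this PR raises the secondary queue's ceiling from 25 to 45. A minimal sketch of that ratio-with-ceiling pattern follows; the explicit `budget` parameter and the numbers are illustrative assumptions, since the real function reads its process estimate from surrounding scope:

```python
# Sketch of ratio-with-ceiling worker sizing; the budget handling is assumed,
# not copied from Augur.
def determine_worker_processes(ratio: float, maximum: int, budget: int) -> int:
    """Return a concurrency value: a fraction of the process budget, capped at maximum."""
    return max(1, min(round(budget * ratio), maximum))

# On a host with a budget of 200 processes, the old ceiling pinned the
# secondary pool at 25; the raised ceiling lets it scale to 45.
print(determine_worker_processes(.25, 25, 200))  # 25 (old ceiling)
print(determine_worker_processes(.25, 45, 200))  # 45 (new ceiling)
```
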
augur/application/cli/tasks.py (2 changes: 1 addition & 1 deletion)

@@ -37,7 +37,7 @@ def start():

scheduling_worker = f"celery -A augur.tasks.init.celery_app.celery_app worker -l info --concurrency=1 -n scheduling:{uuid.uuid4().hex}@%h -Q scheduling"
core_worker = f"celery -A augur.tasks.init.celery_app.celery_app worker -l info --concurrency=45 -n core:{uuid.uuid4().hex}@%h"
secondary_worker = f"celery -A augur.tasks.init.celery_app.celery_app worker -l info --concurrency=25 -n secondary:{uuid.uuid4().hex}@%h -Q secondary"
secondary_worker = f"celery -A augur.tasks.init.celery_app.celery_app worker -l info --concurrency=45 -n secondary:{uuid.uuid4().hex}@%h -Q secondary"

scheduling_worker_process = subprocess.Popen(scheduling_worker.split(" "))

core_worker_process = subprocess.Popen(core_worker.split(" "))


augur/tasks/git/dependency_tasks/core.py (6 changes: 5 additions & 1 deletion)

@@ -72,7 +72,11 @@ def generate_scorecard(session,repo_id,path):
key_handler = GithubApiKeyHandler(session, session.logger)
os.environ['GITHUB_AUTH_TOKEN'] = key_handler.get_random_key()

-required_output = parse_json_from_subprocess_call(session.logger,['./scorecard', command, '--format=json'],cwd=path_to_scorecard)
+try:
+    required_output = parse_json_from_subprocess_call(session.logger,['./scorecard', command, '--format=json'],cwd=path_to_scorecard)
+except Exception as e:
+    session.logger.error(f"Could not parse required output! Error: {e}")
+    raise e

session.logger.info('adding to database...')
session.logger.debug(f"output: {required_output}")
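
The change above wraps the scorecard subprocess call so that JSON-parse failures are logged before the task fails. A self-contained sketch of that log-and-reraise pattern, assuming a command that prints JSON to stdout (`parse_json_from_subprocess_call` is Augur's own helper; the function below is a stand-in, not its real signature):

```python
import json
import logging
import subprocess

logger = logging.getLogger(__name__)

def parse_json_from_subprocess(args: list, cwd: str) -> dict:
    """Run a command and parse its stdout as JSON, logging any failure first."""
    try:
        result = subprocess.run(args, cwd=cwd, capture_output=True, text=True, check=True)
        return json.loads(result.stdout)
    except Exception as e:
        # Same intent as the diff: surface the failure in the task logs,
        # then let the exception propagate to the task runner.
        logger.error(f"Could not parse required output! Error: {e}")
        raise
```

A bare `raise` re-raises with the original traceback intact; the diff's `raise e` behaves almost identically in Python 3, so the difference is stylistic here.
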
augur/tasks/git/util/facade_worker/facade_worker/rebuildcache.py (26 changes: 14 additions & 12 deletions)

@@ -396,7 +396,8 @@ def rebuild_unknown_affiliation_and_web_caches(session):
# ("DELETE c.* FROM dm_repo_group_weekly c "
# "JOIN repo_groups p ON c.repo_group_id = p.repo_group_id WHERE "
# "p.rg_recache=TRUE")
-session.execute_sql(clear_dm_repo_group_weekly)
+
+# session.execute_sql(clear_dm_repo_group_weekly)

clear_dm_repo_group_monthly = s.sql.text("""
DELETE
@@ -410,7 +411,8 @@ def rebuild_unknown_affiliation_and_web_caches(session):
# ("DELETE c.* FROM dm_repo_group_monthly c "
# "JOIN repo_groups p ON c.repo_group_id = p.repo_group_id WHERE "
# "p.rg_recache=TRUE")
-session.execute_sql(clear_dm_repo_group_monthly)
+
+# session.execute_sql(clear_dm_repo_group_monthly)

clear_dm_repo_group_annual = s.sql.text("""
DELETE
@@ -424,7 +426,7 @@ def rebuild_unknown_affiliation_and_web_caches(session):
# ("DELETE c.* FROM dm_repo_group_annual c "
# "JOIN repo_groups p ON c.repo_group_id = p.repo_group_id WHERE "
# "p.rg_recache=TRUE")
-session.execute_sql(clear_dm_repo_group_annual)
+# session.execute_sql(clear_dm_repo_group_annual)

clear_dm_repo_weekly = s.sql.text("""
DELETE
@@ -441,7 +443,7 @@ def rebuild_unknown_affiliation_and_web_caches(session):
# "JOIN repo r ON c.repo_id = r.repo_id "
# "JOIN repo_groups p ON r.repo_group_id = p.repo_group_id WHERE "
# "p.rg_recache=TRUE")
-session.execute_sql(clear_dm_repo_weekly)
+# session.execute_sql(clear_dm_repo_weekly)

clear_dm_repo_monthly = s.sql.text("""
DELETE
@@ -458,7 +460,7 @@ def rebuild_unknown_affiliation_and_web_caches(session):
# "JOIN repo r ON c.repo_id = r.repo_id "
# "JOIN repo_groups p ON r.repo_group_id = p.repo_group_id WHERE "
# "p.rg_recache=TRUE")
-session.execute_sql(clear_dm_repo_monthly)
+# session.execute_sql(clear_dm_repo_monthly)

clear_dm_repo_annual = s.sql.text("""
DELETE
@@ -475,7 +477,7 @@ def rebuild_unknown_affiliation_and_web_caches(session):
# "JOIN repo r ON c.repo_id = r.repo_id "
# "JOIN repo_groups p ON r.repo_group_id = p.repo_group_id WHERE "
# "p.rg_recache=TRUE")
-session.execute_sql(clear_dm_repo_annual)
+# session.execute_sql(clear_dm_repo_annual)

clear_unknown_cache = s.sql.text("""
DELETE
@@ -573,7 +575,7 @@ def rebuild_unknown_affiliation_and_web_caches(session):
"r.repo_group_id, info.a, info.b, info.c")
).bindparams(tool_source=session.tool_source,tool_version=session.tool_version,data_source=session.data_source)

-session.execute_sql(cache_projects_by_week)
+# session.execute_sql(cache_projects_by_week)

cache_projects_by_month = s.sql.text(
("INSERT INTO dm_repo_group_monthly (repo_group_id, email, affiliation, month, year, added, removed, whitespace, files, patches, tool_source, tool_version, data_source) "
@@ -609,7 +611,7 @@ def rebuild_unknown_affiliation_and_web_caches(session):
"r.repo_group_id, info.a, info.b, info.c"
)).bindparams(tool_source=session.tool_source,tool_version=session.tool_version,data_source=session.data_source)

-session.execute_sql(cache_projects_by_month)
+# session.execute_sql(cache_projects_by_month)

cache_projects_by_year = s.sql.text((
"INSERT INTO dm_repo_group_annual (repo_group_id, email, affiliation, year, added, removed, whitespace, files, patches, tool_source, tool_version, data_source) "
@@ -649,7 +651,7 @@ def rebuild_unknown_affiliation_and_web_caches(session):



-session.execute_sql(cache_projects_by_year)
+# session.execute_sql(cache_projects_by_year)
# Start caching by repo

session.log_activity('Verbose','Caching repos')
@@ -689,7 +691,7 @@ def rebuild_unknown_affiliation_and_web_caches(session):
"a.repo_id, info.a, info.b, info.c"
)).bindparams(tool_source=session.tool_source,tool_version=session.tool_version,data_source=session.data_source)

-session.execute_sql(cache_repos_by_week)
+# session.execute_sql(cache_repos_by_week)

cache_repos_by_month = s.sql.text((
"INSERT INTO dm_repo_monthly (repo_id, email, affiliation, month, year, added, removed, whitespace, files, patches, tool_source, tool_version, data_source)"
@@ -725,7 +727,7 @@ def rebuild_unknown_affiliation_and_web_caches(session):
"a.repo_id, info.a, info.b, info.c"
)).bindparams(tool_source=session.tool_source,tool_version=session.tool_version,data_source=session.data_source)

-session.execute_sql(cache_repos_by_month)
+# session.execute_sql(cache_repos_by_month)

cache_repos_by_year = s.sql.text((
"INSERT INTO dm_repo_annual (repo_id, email, affiliation, year, added, removed, whitespace, files, patches, tool_source, tool_version, data_source)"
@@ -759,7 +761,7 @@ def rebuild_unknown_affiliation_and_web_caches(session):
"a.repo_id, info.a, info.b, info.c"
)).bindparams(tool_source=session.tool_source,tool_version=session.tool_version,data_source=session.data_source)

-session.execute_sql(cache_repos_by_year)
+# session.execute_sql(cache_repos_by_year)

# Reset cache flags

augur/tasks/github/pull_requests/tasks.py (35 changes: 23 additions & 12 deletions)

@@ -12,6 +12,8 @@
from augur.application.db.util import execute_session_query
from ..messages.tasks import process_github_comment_contributors

+from typing import Generator, List, Dict


platform_id = 1

@@ -29,20 +31,32 @@ def collect_pull_requests(repo_git: str) -> int:
Repo.repo_git == repo_git).one().repo_id

owner, repo = get_owner_repo(repo_git)
-pr_data = retrieve_all_pr_data(repo_git, logger, manifest.key_auth)
-
-if pr_data:
-    process_pull_requests(pr_data, f"{owner}/{repo}: Pr task", repo_id, logger, augur_db)
-
-    return len(pr_data)
+total_count = 0
+all_data = []
+for page in retrieve_all_pr_data(repo_git, logger, manifest.key_auth):
+    all_data += page
+
+    if len(all_data) >= 1000:
+        process_pull_requests(all_data, f"{owner}/{repo}: Pr task", repo_id, logger, augur_db)
+        total_count += len(all_data)
+        all_data.clear()
+
+if len(all_data):
+    process_pull_requests(all_data, f"{owner}/{repo}: Pr task", repo_id, logger, augur_db)
+    total_count += len(all_data)
+
+if total_count > 0:
+    return total_count
+else:
+    logger.info(f"{owner}/{repo} has no pull requests")
+    return 0



# TODO: Rename pull_request_reviewers table to pull_request_requested_reviewers


# TODO: Fix column names in pull request labels table


-def retrieve_all_pr_data(repo_git: str, logger, key_auth) -> None:
+def retrieve_all_pr_data(repo_git: str, logger, key_auth): #-> Generator[List[Dict]]:

owner, repo = get_owner_repo(repo_git)

@@ -52,24 +66,21 @@
# returns an iterable of all prs at this url (this essentially means you can treat the prs variable as a list of the prs)
prs = GithubPaginator(url, key_auth, logger)

-all_data = []
num_pages = prs.get_num_pages()
for page_data, page in prs.iter_pages():

if page_data is None:
-    return all_data
+    return

if len(page_data) == 0:
logger.debug(
f"{owner}/{repo} Prs Page {page} contains no data...returning")
logger.info(f"{owner}/{repo} Prs Page {page} of {num_pages}")
-    return all_data
+    return

logger.info(f"{owner}/{repo} Prs Page {page} of {num_pages}")

-all_data += page_data
-
-return all_data
+yield page_data


def process_pull_requests(pull_requests, task_name, repo_id, logger, augur_db):


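The substance of this file's change: `retrieve_all_pr_data` becomes a generator (`yield page_data`), and `collect_pull_requests` flushes accumulated pages to the database every 1,000 records instead of holding every PR in memory, which is what fixes collection on extremely large repositories. A runnable sketch of the pattern; `fetch_pages` and `process` below are stand-ins for Augur's paginator and database writer, not its actual APIs:

```python
from typing import Dict, Generator, Iterable, List

def fetch_pages() -> Generator[List[Dict], None, None]:
    """Stand-in for retrieve_all_pr_data: yield one page of PR records at a time."""
    for page in range(5):
        yield [{"page": page, "item": i} for i in range(400)]

def process(records: List[Dict]) -> None:
    """Stand-in for process_pull_requests: persist one bounded batch."""
    print(f"processing {len(records)} records")

def collect(pages: Iterable[List[Dict]], batch_size: int = 1000) -> int:
    """Mirror collect_pull_requests: accumulate pages, flush every batch_size records."""
    total, batch = 0, []
    for page in pages:
        batch += page
        if len(batch) >= batch_size:
            process(batch)   # flush a bounded chunk instead of holding everything
            total += len(batch)
            batch.clear()
    if batch:                # flush whatever remains after the last page
        process(batch)
        total += len(batch)
    return total

print(collect(fetch_pages()))  # 2000, flushed in bounded chunks
```
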
augur/tasks/github/util/gh_graphql_entities.py (20 changes: 12 additions & 8 deletions)

@@ -338,17 +338,21 @@ def __iter__(self):
#self.logger.info(f"{params}")
data = self.request_graphql_dict(variables=params)
try:
-coreData = self.extract_paginate_result(data)
-
-#Check to make sure we have data
-coreData['totalCount']
+coreData = self.extract_paginate_result(data)
+if coreData is not None:
+    if coreData.get('totalCount') is not None:
+        self.logger.info("... core data obtained")
+    else:
+        self.logger.info(f"Helen, the ghost in our machine, did not get a numerical result for core data (value): {data} \n Zero value assigned.")
+        coreData['totalCount'] = 0
+else:
+    self.logger.error("Core data is None, cannot proceed with operations on it, but assigning a value of Zero to ensure continued collection.")
+    yield None
+    return
except KeyError as e:
self.logger.error("Could not extract paginate result because there was no data returned")
-self.logger.error(
-    ''.join(traceback.format_exception(None, e, e.__traceback__)))
-
-self.logger.info(f"Graphql paramters: {params}")
-return
+self.logger.error(''.join(traceback.format_exception(None, e, e.__traceback__)))


if int(coreData['totalCount']) == 0:
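
The gh_graphql_entities.py change replaces an implicit KeyError probe (`coreData['totalCount']`) with explicit `is not None` checks so that collection continues when the GraphQL response is incomplete. A minimal sketch of that guard pulled out as a standalone helper; the enclosing iterator class is omitted, and this function is illustrative rather than Augur's API:

```python
import logging
from typing import Optional

logger = logging.getLogger(__name__)

def safe_total_count(core_data: Optional[dict]) -> Optional[int]:
    """Return a usable totalCount, defaulting a missing value to 0; None means stop iterating."""
    if core_data is None:
        logger.error("Core data is None; stopping iteration for this query.")
        return None
    if core_data.get('totalCount') is None:
        logger.info("No numerical totalCount returned; assigning zero.")
        core_data['totalCount'] = 0
    return int(core_data['totalCount'])
```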