feat: REPORT_TITLE and OUTPUT_FILE environment variables
Closes #160

- [x] Adds the ability for users to designate custom report title and output files.
  - Defaults for each:
    - REPORT_TITLE: "Issue Metrics"
    - OUTPUT_FILE: "issue_metrics.md" if markdown and "issue_metrics.json" if json
- [x] Update `.env-example` and the docs to include the changes we forgot in #370

Signed-off-by: jmeridth <jmeridth@gmail.com>
Co-authored-by: Sebastian Spier <github@spier.hu>
Co-authored-by: Zack Koppert <zkoppert@github.com>
3 people committed Sep 19, 2024
1 parent 2c95dd3 commit 2bb3360
Showing 9 changed files with 94 additions and 22 deletions.
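As a quick orientation before the per-file diffs, here is a minimal sketch of how the two new variables resolve to their defaults. It mirrors the `os.getenv` lookups added to `config.py` and the filename fallbacks added to the writers below; `resolve_report_settings` and `report_format` are illustrative names, not part of the commit.

```python
import os

def resolve_report_settings(report_format: str) -> tuple[str, str]:
    """Sketch: REPORT_TITLE falls back to "Issue Metrics"; an empty OUTPUT_FILE
    falls back to a filename that depends on the report format."""
    report_title = os.getenv("REPORT_TITLE", "Issue Metrics")
    output_file = os.getenv("OUTPUT_FILE", "")
    default_name = "issue_metrics.md" if report_format == "markdown" else "issue_metrics.json"
    return report_title, (output_file if output_file else default_name)

# With neither variable set:
#   resolve_report_settings("markdown") -> ("Issue Metrics", "issue_metrics.md")
#   resolve_report_settings("json")     -> ("Issue Metrics", "issue_metrics.json")
# With REPORT_TITLE="Weekly Bug Report" and OUTPUT_FILE="weekly.md":
#   resolve_report_settings("markdown") -> ("Weekly Bug Report", "weekly.md")
```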
3 changes: 3 additions & 0 deletions .env-example
@@ -11,4 +11,7 @@ HIDE_TIME_TO_CLOSE = "false"
HIDE_TIME_TO_FIRST_RESPONSE = "false"
IGNORE_USERS = "user1,user2"
LABELS_TO_MEASURE = "waiting-for-review,waiting-for-manager"
NON_MENTIONING_LINKS = "false"
OUTPUT_FILE = ""
REPORT_TITLE = "Issue Metrics"
SEARCH_QUERY = "repo:owner/repo is:open is:issue"
36 changes: 19 additions & 17 deletions README.md
@@ -139,23 +139,25 @@ This action can be configured to authenticate with GitHub App Installation or Pe

#### Other Configuration Options

| field | required | default | description |
| ----------------------------- | -------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `GH_ENTERPRISE_URL` | False | `""` | URL of GitHub Enterprise instance to use for auth instead of github.com |
| `HIDE_AUTHOR` | False | False | If set to `true`, the author will not be displayed in the generated Markdown file. |
| `HIDE_ITEMS_CLOSED_COUNT` | False | False | If set to `true`, the number of items closed metric will not be displayed in the generated Markdown file. |
| `HIDE_LABEL_METRICS` | False | False | If set to `true`, the time in label metrics will not be displayed in the generated Markdown file. |
| `HIDE_TIME_TO_ANSWER` | False | False | If set to `true`, the time to answer a discussion will not be displayed in the generated Markdown file. |
| `HIDE_TIME_TO_CLOSE` | False | False | If set to `true`, the time to close will not be displayed in the generated Markdown file. |
| `HIDE_TIME_TO_FIRST_RESPONSE` | False | False | If set to `true`, the time to first response will not be displayed in the generated Markdown file. |
| `IGNORE_USERS` | False | False | A comma separated list of users to ignore when calculating metrics. (ie. `IGNORE_USERS: 'user1,user2'`). To ignore bots, append `[bot]` to the user (ie. `IGNORE_USERS: 'github-actions[bot]'`) |
| `ENABLE_MENTOR_COUNT` | False | False | If set to 'TRUE' count number of comments users left on discussions, issues and PRs and display number of active mentors |
| `MIN_MENTOR_COMMENTS` | False | 10 | Minimum number of comments to count as a mentor |
| `MAX_COMMENTS_EVAL` | False | 20 | Maximum number of comments per thread to evaluate for mentor stats |
| `HEAVILY_INVOLVED_CUTOFF` | False | 3 | Cutoff after which a mentor's comments in one issue are no longer counted against their total score |
| `LABELS_TO_MEASURE` | False | `""` | A comma separated list of labels to measure how much time the label is applied. If not provided, no labels durations will be measured. Not compatible with discussions at this time. |
| `NON_MENTIONING_LINKS` | False | False | If set to `true`, will use non-mentioning GitHub links to avoid linking to the generated issue from the source repository. Links of the form `https://www.github.com` will be used. |
| `SEARCH_QUERY` | True | `""` | The query by which you can filter issues/PRs which must contain a `repo:`, `org:`, `owner:`, or a `user:` entry. For discussions, include `type:discussions` in the query. |
| field | required | default | description |
| ----------------------------- | -------- | ------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `GH_ENTERPRISE_URL` | False | `""` | URL of GitHub Enterprise instance to use for auth instead of github.com |
| `HIDE_AUTHOR` | False | False | If set to `true`, the author will not be displayed in the generated Markdown file. |
| `HIDE_ITEMS_CLOSED_COUNT` | False | False | If set to `true`, the number of items closed metric will not be displayed in the generated Markdown file. |
| `HIDE_LABEL_METRICS` | False | False | If set to `true`, the time in label metrics will not be displayed in the generated Markdown file. |
| `HIDE_TIME_TO_ANSWER` | False | False | If set to `true`, the time to answer a discussion will not be displayed in the generated Markdown file. |
| `HIDE_TIME_TO_CLOSE` | False | False | If set to `true`, the time to close will not be displayed in the generated Markdown file. |
| `HIDE_TIME_TO_FIRST_RESPONSE` | False | False | If set to `true`, the time to first response will not be displayed in the generated Markdown file. |
| `IGNORE_USERS` | False | False | A comma separated list of users to ignore when calculating metrics. (ie. `IGNORE_USERS: 'user1,user2'`). To ignore bots, append `[bot]` to the user (ie. `IGNORE_USERS: 'github-actions[bot]'`) |
| `ENABLE_MENTOR_COUNT` | False | False | If set to 'TRUE' count number of comments users left on discussions, issues and PRs and display number of active mentors |
| `MIN_MENTOR_COMMENTS` | False | 10 | Minimum number of comments to count as a mentor |
| `MAX_COMMENTS_EVAL` | False | 20 | Maximum number of comments per thread to evaluate for mentor stats |
| `HEAVILY_INVOLVED_CUTOFF` | False | 3 | Cutoff after which a mentor's comments in one issue are no longer counted against their total score |
| `LABELS_TO_MEASURE` | False | `""` | A comma separated list of labels to measure how much time the label is applied. If not provided, no labels durations will be measured. Not compatible with discussions at this time. |
| `NON_MENTIONING_LINKS` | False | False | If set to `true`, will use non-mentioning GitHub links to avoid linking to the generated issue from the source repository. Links of the form `https://www.github.com` will be used. |
| `OUTPUT_FILE` | False | `issue_metrics.md` or `issue_metrics.json` | Output filename. |
| `REPORT_TITLE` | False | `"Issue Metrics"` | Title to have on the report issue. |
| `SEARCH_QUERY` | True | `""` | The query by which you can filter issues/PRs which must contain a `repo:`, `org:`, `owner:`, or a `user:` entry. For discussions, include `type:discussions` in the query. |

## Further Documentation

14 changes: 14 additions & 0 deletions config.py
@@ -40,6 +40,9 @@ class EnvVars:
max_comments_eval (str): If set, defines the maximum number of comments to look at for mentor evaluation
heavily_involved_cutoff (str): If set, defines the cutoff after which heavily involved commentors in
search_query (str): Search query used to filter issues/prs/discussions on GitHub
non_mentioning_links (bool): If set to TRUE, links do not cause a notification in the destination repository
report_title (str): The title of the report
output_file (str): The name of the file to write the report to
"""

def __init__(
@@ -63,6 +66,8 @@ def __init__(
heavily_involved_cutoff: str,
search_query: str,
non_mentioning_links: bool,
report_title: str,
output_file: str,
):
self.gh_app_id = gh_app_id
self.gh_app_installation_id = gh_app_installation_id
@@ -83,6 +88,8 @@ def __repr__(self):
self.heavily_involved_cutoff = heavily_involved_cutoff
self.search_query = search_query
self.non_mentioning_links = non_mentioning_links
self.report_title = report_title
self.output_file = output_file

def __repr__(self):
return (
@@ -106,6 +113,8 @@ def __repr__(self):
f"{self.heavily_involved_cutoff},"
f"{self.search_query}"
f"{self.non_mentioning_links}"
f"{self.report_title}"
f"{self.output_file}"
)


@@ -187,6 +196,9 @@ def get_env_vars(test: bool = False) -> EnvVars:
if ignore_users:
ignore_users_list = ignore_users.split(",")

report_title = os.getenv("REPORT_TITLE", "Issue Metrics")
output_file = os.getenv("OUTPUT_FILE", "")

# Hidden columns
hide_author = get_bool_env_var("HIDE_AUTHOR", False)
hide_items_closed_count = get_bool_env_var("HIDE_ITEMS_CLOSED_COUNT", False)
@@ -220,4 +232,6 @@ def get_env_vars(test: bool = False) -> EnvVars:
heavily_involved_cutoff,
search_query,
non_mentioning_links,
report_title,
output_file,
)
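Taken together with the constructor and `__repr__` changes above, the new values surface as plain attributes on the returned `EnvVars` object. A minimal sketch, assuming the rest of the required configuration (notably `SEARCH_QUERY`, and a token unless test mode is used) is in place as in the `test_config.py` fixtures below; the values shown are illustrative:

```python
import os
from config import get_env_vars

# Illustrative values; SEARCH_QUERY is required, the two new variables are optional.
os.environ["SEARCH_QUERY"] = "repo:owner/repo is:open is:issue"
os.environ["REPORT_TITLE"] = "Weekly Bug Report"
os.environ["OUTPUT_FILE"] = "weekly_metrics.md"

env_vars = get_env_vars(True)   # test=True, as the unit tests call it
print(env_vars.report_title)    # -> Weekly Bug Report
print(env_vars.output_file)     # -> weekly_metrics.md
```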
6 changes: 6 additions & 0 deletions issue_metrics.py
@@ -307,6 +307,8 @@ def main():
ignore_users = env_vars.ignore_users
hide_items_closed_count = env_vars.hide_items_closed_count
non_mentioning_links = env_vars.non_mentioning_links
report_title = env_vars.report_title
output_file = env_vars.output_file

gh_app_id = env_vars.gh_app_id
gh_app_installation_id = env_vars.gh_app_installation_id
@@ -401,6 +403,8 @@ def main():
num_issues_closed,
num_mentor_count,
search_query,
report_title,
output_file,
)

write_to_markdown(
@@ -416,6 +420,8 @@ def main():
search_query,
hide_items_closed_count,
non_mentioning_links,
report_title,
output_file,
)

max_char_count = 65535
5 changes: 4 additions & 1 deletion json_writer.py
@@ -9,6 +9,7 @@
num_issues_opened: int,
num_issues_closed: int,
search_query: str,
output_file: str,
) -> str:
Write the issues with metrics to a json file.
@@ -32,6 +33,7 @@ def write_to_json(
num_issues_closed: Union[int, None],
num_mentor_count: Union[int, None],
search_query: str,
output_file: str,
) -> str:
"""
Write the issues with metrics to a JSON file called issue_metrics.json.
@@ -168,7 +170,8 @@ def write_to_json(
print(f"metrics={metrics_json}", file=file_handle)

# Write the metrics to a JSON file
with open("issue_metrics.json", "w", encoding="utf-8") as file:
output_file_name = output_file if output_file else "issue_metrics.json"
with open(output_file_name, "w", encoding="utf-8") as file:
json.dump(metrics, file, indent=4)

return metrics_json
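The filename fallback above is the whole of the new behaviour on the JSON side. A stand-alone sketch of the same pattern (the helper name is illustrative, not part of the commit):

```python
def resolve_json_output_file(output_file: str) -> str:
    """Return the user-supplied OUTPUT_FILE value, or the historical default."""
    return output_file if output_file else "issue_metrics.json"

assert resolve_json_output_file("") == "issue_metrics.json"                        # default kept
assert resolve_json_output_file("metrics/2024-09.json") == "metrics/2024-09.json"  # custom name wins
```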
10 changes: 8 additions & 2 deletions markdown_writer.py
@@ -86,6 +86,8 @@ def write_to_markdown(
hide_label_metrics=False,
hide_items_closed_count=False,
non_mentioning_links=False,
report_title="",
output_file="",
) -> None:
"""Write the issues with metrics to a markdown file.
@@ -105,14 +107,18 @@ def write_to_markdown(
search_query (str): The search query used to find the issues.
hide_label_metrics (bool): Represents whether the user has chosen to hide label metrics in the output
hide_items_closed_count (bool): Represents whether the user has chosen to hide the number of items closed
non_mentioning_links (bool): Represents whether links do not cause a notification in the destination repository
report_title (str): The title of the report
output_file (str): The name of the file to write the report to
Returns:
None.
"""
columns = get_non_hidden_columns(labels)
with open("issue_metrics.md", "w", encoding="utf-8") as file:
file.write("# Issue Metrics\n\n")
output_file_name = output_file if output_file else "issue_metrics.md"
with open(output_file_name, "w", encoding="utf-8") as file:
file.write(f"# {report_title}\n\n")

# If all the metrics are None, then there are no issues
if not issues_with_metrics or len(issues_with_metrics) == 0:
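On the Markdown side, the report now opens with an H1 built from `report_title` and is written to the resolved filename. A minimal sketch of the effect, with illustrative values for both settings:

```python
report_title = "Weekly Bug Report"   # illustrative REPORT_TITLE value
output_file = "weekly_metrics.md"    # illustrative OUTPUT_FILE value

# Same fallback and header logic as write_to_markdown above.
output_file_name = output_file if output_file else "issue_metrics.md"
with open(output_file_name, "w", encoding="utf-8") as file:
    file.write(f"# {report_title}\n\n")   # the file starts with "# Weekly Bug Report"
```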
22 changes: 21 additions & 1 deletion test_config.py
@@ -79,6 +79,9 @@ def setUp(self):
"HIDE_TIME_TO_FIRST_RESPONSE",
"IGNORE_USERS",
"LABELS_TO_MEASURE",
"NON_MENTIONING_LINKS",
"OUTPUT_FILE",
"REPORT_TITLE",
"SEARCH_QUERY",
]
for key in env_keys:
@@ -101,6 +104,9 @@ def setUp(self):
"HIDE_TIME_TO_FIRST_RESPONSE": "",
"IGNORE_USERS": "",
"LABELS_TO_MEASURE": "",
"NON_MENTIONING_LINKS": "false",
"OUTPUT_FILE": "",
"REPORT_TITLE": "",
"SEARCH_QUERY": SEARCH_QUERY,
},
clear=True,
@@ -127,6 +133,8 @@ def test_get_env_vars_with_github_app(self):
"3",
SEARCH_QUERY,
False,
"",
"",
)
result = get_env_vars(True)
self.assertEqual(str(result), str(expected_result))
@@ -147,6 +155,9 @@ def test_get_env_vars_with_github_app(self):
"HIDE_TIME_TO_FIRST_RESPONSE": "",
"IGNORE_USERS": "",
"LABELS_TO_MEASURE": "",
"NON_MENTIONING_LINKS": "false",
"OUTPUT_FILE": "",
"REPORT_TITLE": "",
"SEARCH_QUERY": SEARCH_QUERY,
},
clear=True,
@@ -173,6 +184,8 @@ def test_get_env_vars_with_token(self):
"3",
SEARCH_QUERY,
False,
"",
"",
)
result = get_env_vars(True)
self.assertEqual(str(result), str(expected_result))
@@ -228,6 +241,9 @@ def test_get_env_vars_missing_query(self):
"HIDE_TIME_TO_FIRST_RESPONSE": "true",
"IGNORE_USERS": "",
"LABELS_TO_MEASURE": "waiting-for-review,waiting-for-manager",
"NON_MENTIONING_LINKS": "true",
"OUTPUT_FILE": "issue_metrics.md",
"REPORT_TITLE": "Issue Metrics",
"SEARCH_QUERY": SEARCH_QUERY,
},
)
@@ -252,7 +268,9 @@ def test_get_env_vars_optional_values(self):
20,
3,
SEARCH_QUERY,
False,
True,
"Issue Metrics",
"issue_metrics.md",
)
result = get_env_vars(True)
self.assertEqual(str(result), str(expected_result))
@@ -290,6 +308,8 @@ def test_get_env_vars_optionals_are_defaulted(self):
"3",
SEARCH_QUERY,
False,
"Issue Metrics",
"",
)
result = get_env_vars(True)
self.assertEqual(str(result), str(expected_result))
2 changes: 2 additions & 0 deletions test_json_writer.py
@@ -115,6 +115,7 @@ def test_write_to_json(self):
num_issues_closed=num_issues_closed,
num_mentor_count=num_mentor_count,
search_query="is:issue repo:owner/repo",
output_file="issue_metrics.json",
),
json.dumps(expected_output),
)
@@ -206,6 +207,7 @@ def test_write_to_json_with_no_response(self):
num_issues_closed=num_issues_closed,
num_mentor_count=num_mentor_count,
search_query="is:issue repo:owner/repo",
output_file="issue_metrics.json",
),
json.dumps(expected_output),
)
18 changes: 17 additions & 1 deletion test_markdown_writer.py
@@ -92,6 +92,8 @@ def test_write_to_markdown(self):
num_mentor_count=num_mentor_count,
labels=["bug"],
search_query="is:issue is:open label:bug",
report_title="Issue Metrics",
output_file="issue_metrics.md",
)

# Check that the function writes the correct markdown file
@@ -191,6 +193,8 @@ def test_write_to_markdown_with_vertical_bar_in_title(self):
num_issues_closed=num_issues_closed,
num_mentor_count=num_mentor_count,
labels=["bug"],
report_title="Issue Metrics",
output_file="issue_metrics.md",
)

# Check that the function writes the correct markdown file
@@ -227,7 +231,17 @@ def test_write_to_markdown_no_issues(self):
"""Test that write_to_markdown writes the correct markdown file when no issues are found."""
# Call the function with no issues
with patch("builtins.open", mock_open()) as mock_open_file:
write_to_markdown(None, None, None, None, None, None, None, None)
write_to_markdown(
None,
None,
None,
None,
None,
None,
None,
None,
report_title="Issue Metrics",
)

# Check that the file was written correctly
expected_output = [
@@ -316,6 +330,8 @@ def test_writes_markdown_file_with_non_hidden_columns_only(self):
hide_label_metrics=True,
hide_items_closed_count=True,
non_mentioning_links=True,
report_title="Issue Metrics",
output_file="issue_metrics.md",
)

# Check that the function writes the correct markdown file