Issue 192 #312
Conversation
Reviewer's Guide

This PR modernizes tooling and dependency management, introduces automated dependency checking utilities, updates CI workflows and documentation badges, and aligns the project's supported Python version and docs/lint configs with the new setup.

Sequence diagram for dependency upgrade checking via clean.sh and check_project_dependencies.sh:

```mermaid
sequenceDiagram
    actor Developer
    participant clean_sh as clean_sh
    participant check_deps as check_project_dependencies_sh
    participant Pipenv
    participant generate_req as generate_requirements_file_py
    participant pcu as pcu
    participant count_pcu as count_remaining_pcu_packages_py
    Developer->>clean_sh: run clean.sh
    clean_sh->>clean_sh: analyze_project
    clean_sh->>clean_sh: find_unused_pylint_suppressions
    clean_sh->>check_deps: run ./check_project_dependencies.sh
    check_deps->>check_deps: parse_command_line(check_mode=1)
    check_deps->>check_deps: start_process
    check_deps->>Pipenv: pre-commit-update --dry-run
    Pipenv-->>check_deps: exit status (0 or 1)
    check_deps->>check_deps: set NEED_UPDATE based on status
    loop For each of standard and development packages
        check_deps->>generate_req: pipenv run python utils/generate_requirements_file.py [--dev]
        generate_req-->>check_deps: temp requirements file path
        check_deps->>pcu: pipenv run pcu temp_requirements.txt
        pcu-->>check_deps: pcu report file
        check_deps->>count_pcu: pipenv run python utils/count_remaining_pcu_packages.py [--dev] pcu_report
        count_pcu-->>check_deps: remaining_upgradable_count
        check_deps->>check_deps: update NEED_UPDATE if count > 0
    end
    alt NEED_UPDATE == 0
        check_deps-->>clean_sh: exit 0
        clean_sh->>clean_sh: execute_test_suite
    else NEED_UPDATE != 0
        check_deps-->>clean_sh: exit 2
        clean_sh->>clean_sh: complete_process 1 "One or more project dependencies can be updated."
    end
    clean_sh-->>Developer: script result and messages
```
Class diagram for new dependency management and helper utilities:

```mermaid
classDiagram
    class CleanSh {
        +look_for_upgrades()
        +execute_test_suite()
        +complete_process(return_code, reason)
    }
    class CheckProjectDependenciesSh {
        +parse_command_line(args)
        +start_process()
        +complete_process(return_code, reason)
        +check_for_updates(package_type, dev_flag)
        +perform_updates(package_type, dev_flag, import_flags)
    }
    class GenerateDependenciesFilePy {
        +icons_map: Dict
        +__match_icons(first_column_value)
        +__load_precommit_packages_and_versions(error_messages)
        +__load_precommit_packages_and_versions_translate_ouptut(data, version_list)
        +__load_pipfile_packages_and_versions(error_messages)
        +__abc()
    }
    class GenerateRequirementsFilePy {
        +__load_pipfile_packages_and_versions(error_list)
        +__handle_arguments()
        +__load_package_exclude_list_from_properties_file(use_dev_list)
        +main()
    }
    class CountRemainingPcuPackagesPy {
        +__handle_arguments()
        +__load_package_exclude_list_from_properties_file(use_dev_list)
        +main()
    }
    class ExtractPythonVersionFromPipfilePy {
        +PYTHON_VERSION_REGEX
        +main()
    }
    class VerifyInstallRequirementsPy {
        +__read_requirements_file()
        +__read_pipfile()
        +errors_found
        +main()
    }
    class PtestSh {
        +parse_command_line()
        +execute_tests()
        +TEST_VERBOSE_MODE
    }
    class Pytest
    class Pipfile
    class Install_requirements_txt
    class Project_properties
    class Pre_commit_config_yaml
    class Publish_dependencies_json
    CleanSh ..> CheckProjectDependenciesSh : invokes
    CheckProjectDependenciesSh ..> GenerateRequirementsFilePy : uses via pipenv run python
    CheckProjectDependenciesSh ..> CountRemainingPcuPackagesPy : uses via pipenv run python
    GenerateDependenciesFilePy ..> VerifyInstallRequirementsPy : shares Pipfile context
    GenerateDependenciesFilePy ..> ExtractPythonVersionFromPipfilePy : shares Pipfile context
    PtestSh ..> Pytest : configures logging flags
    VerifyInstallRequirementsPy ..> Pipfile : reads Pipfile
    VerifyInstallRequirementsPy ..> Install_requirements_txt : reads install-requirements.txt
    GenerateRequirementsFilePy ..> Pipfile : reads Pipfile
    CountRemainingPcuPackagesPy ..> Project_properties : reads exclude lists from project.properties
    GenerateRequirementsFilePy ..> Project_properties : reads exclude lists from project.properties
    GenerateDependenciesFilePy ..> Pre_commit_config_yaml : reads .pre-commit-config.yaml
    GenerateDependenciesFilePy ..> Publish_dependencies_json : writes publish/dependencies.json
```
Hey - I've found 10 issues, and left some high level feedback:
- In `utils/generate_dependencies_file.py`, the subprocess handling around `pre-commit-update` should use `communicate()` instead of `wait()` followed by `stdout.read()`/`stderr.read()` to avoid potential deadlocks with large outputs.
- The argparse descriptions and help texts in `utils/generate_requirements_file.py` and `utils/count_remaining_pcu_packages.py` still refer to PyTest analysis or copy-pasted wording (e.g., `Analyze PyTest tests...`, `output the dev-packages section instead of packages section` for `--list`), which should be updated to accurately describe what these utilities do.
- The logic in `utils/extract_python_version_from_pipfile.py` assumes that the Python version is always on the line immediately after `[requires]`, making it brittle to formatting changes in `Pipfile`; consider parsing the section more robustly (e.g., by scanning until a `python_version = "..."` line or using a TOML parser).
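The `communicate()` point in the first bullet can be sketched as follows. This is an illustrative helper, not the project's actual code; the real caller would pass the `pre-commit-update --dry-run` invocation as the argument list.

```python
import subprocess
import sys

def run_and_capture(args):
    """Run a command and return (returncode, stdout, stderr).

    communicate() drains stdout and stderr concurrently, so neither
    pipe's OS buffer can fill up and block the child process -- the
    deadlock that wait() followed by stdout.read()/stderr.read() can
    hit once the child produces more output than the pipe buffers hold.
    """
    proc = subprocess.Popen(
        args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
    )
    out, err = proc.communicate()
    return proc.returncode, out, err

# Portable demonstration; substitute the real pre-commit-update command.
rc, out, err = run_and_capture([sys.executable, "-c", "print('hello')"])
```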
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `utils/generate_dependencies_file.py`, the subprocess handling around `pre-commit-update` should use `communicate()` instead of `wait()` followed by `stdout.read()`/`stderr.read()` to avoid potential deadlocks with large outputs.
- The argparse descriptions and help texts in `utils/generate_requirements_file.py` and `utils/count_remaining_pcu_packages.py` still refer to PyTest analysis or copy-pasted wording (e.g., `Analyze PyTest tests...`, `output the dev-packages section instead of packages section` for `--list`), which should be updated to accurately describe what these utilities do.
- The logic in `utils/extract_python_version_from_pipfile.py` assumes that the Python version is always on the line immediately after `[requires]`, making it brittle to formatting changes in `Pipfile`; consider parsing the section more robustly (e.g., by scanning until a `python_version = "..."` line or using a TOML parser).
## Individual Comments
### Comment 1
<location> `utils/generate_dependencies_file.py:89` </location>
<code_context>
+ return package_version_map
+
+
+def __load_precommit_packages_and_versions_translate_ouptut(
+ data: str, version_list: List[List[str]]
+) -> None:
</code_context>
<issue_to_address>
**issue (typo):** Function name contains a typo that harms readability and discoverability.
The function name `__load_precommit_packages_and_versions_translate_ouptut` has a typo in `ouptut`. Please rename it to `..._translate_output` and update its usages for clarity and easier searchability.
Suggested implementation:
```python
def __load_precommit_packages_and_versions_translate_output(
data: str, version_list: List[List[str]]
) -> None:
```
Search the codebase for all usages of `__load_precommit_packages_and_versions_translate_ouptut` and update them to `__load_precommit_packages_and_versions_translate_output`. No other behavior changes are required.
</issue_to_address>
### Comment 2
<location> `utils/generate_dependencies_file.py:98-103` </location>
<code_context>
+ if not next_line:
+ continue
+ split_line = list(next_line.split(" "))
+ matched_icon = __match_icons(split_line[0])
+ assert (
+ matched_icon
+ ), f"Unrecognized icon in pre-commit output: '{split_line[0]}'"
+ split_line[0] = matched_icon
+ assert split_line[2] == "-"
+ del split_line[2]
+ if len(split_line) > 3 and split_line[3] == "->":
</code_context>
<issue_to_address>
**issue (bug_risk):** Using `assert` to validate external tool output can cause unexpected crashes and should be replaced with explicit error handling.
`assert matched_icon` and `assert split_line[2] == "-"` assume the pre-commit output format never changes; if it does, the script will crash with an unhelpful `AssertionError`. Instead, use explicit checks that either append a descriptive message to `error_messages` or raise a controlled exception, consistent with the rest of the module’s error handling, rather than relying on `assert` for runtime validation.
</issue_to_address>
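One way the assert-free validation could look is sketched below. The icon mapping is a stand-in stub (the module's real `__match_icons` uses its own `icons_map`), and the function name is hypothetical; the point is that malformed lines feed `error_messages` instead of raising `AssertionError`.

```python
from typing import List, Optional

def parse_precommit_line(
    next_line: str, error_messages: List[str]
) -> Optional[List[str]]:
    """Validate one line of pre-commit-update output without assert."""

    def match_icons(value: str) -> Optional[str]:
        # Stand-in for the module's __match_icons helper.
        return {"+": "added", "~": "updated"}.get(value)

    split_line = next_line.split(" ")
    matched_icon = match_icons(split_line[0])
    if matched_icon is None:
        # Explicit, recoverable error instead of a bare AssertionError.
        error_messages.append(
            f"Unrecognized icon in pre-commit output: '{split_line[0]}'"
        )
        return None
    split_line[0] = matched_icon
    if len(split_line) < 3 or split_line[2] != "-":
        error_messages.append(
            f"Unexpected pre-commit output format: '{next_line}'"
        )
        return None
    del split_line[2]
    return split_line
```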
### Comment 3
<location> `utils/generate_dependencies_file.py:129` </location>
<code_context>
+# pylint: disable=broad-exception-caught
+
+
+def __abc():
+ error_messages: List[str] = []
+
</code_context>
<issue_to_address>
**suggestion:** Top-level function name `__abc` is non-descriptive and makes navigation harder.
Please rename this driver function to something intent-revealing (for example, `__generate_combined_dependencies`) and update its invocation under `if __name__ == "__main__":` accordingly to improve readability and maintainability.
Suggested implementation:
```python
def __generate_combined_dependencies():
error_messages: List[str] = []
```
You’ll also need to update the function invocation under `if __name__ == "__main__":`. Look for a call like:
`__abc()`
and rename it to:
`__generate_combined_dependencies()`
If there are any other internal references to `__abc` (e.g., tests or helper wrappers), they should be updated to `__generate_combined_dependencies` as well.
</issue_to_address>
### Comment 4
<location> `utils/generate_requirements_file.py:1-5` </location>
<code_context>
- handle_error_fn: Optional[Callable[[str, Optional[Exception]], None]]
+ handle_error_fn: Optional[Callable[[str, Optional[Exception]], None]],
) -> Callable[[str, Optional[Exception]], None]:
"""
If the error_handler is not set, make sure to set it to print to output.
</code_context>
<issue_to_address>
**issue (typo):** Module docstring has typos and could be tightened for clarity.
Please correct `targetted` → `targeted` and `sectopmn` → `section`, and consider phrasing this as "targeted requirements file" and "[packages] or [dev-packages] section" for clearer readability.
</issue_to_address>
### Comment 5
<location> `utils/generate_requirements_file.py:44-45` </location>
<code_context>
+
+
+def __handle_arguments():
+ parser = argparse.ArgumentParser(
+ description="Analyze PyTest tests to determine if extra coverage is present."
+ )
+ parser.add_argument(
</code_context>
<issue_to_address>
**issue:** Argument parser description is misleading and appears to be copy-pasted from an unrelated tool.
This description refers to analyzing PyTest coverage, but this script exports sections from a Pipfile. Please update it to accurately describe the tool’s behavior, e.g. "Generate a requirements file for either [packages] or [dev-packages] from Pipfile," to avoid confusing users.
</issue_to_address>
### Comment 6
<location> `utils/count_remaining_pcu_packages.py:11-12` </location>
<code_context>
+
+
+def __handle_arguments():
+ parser = argparse.ArgumentParser(
+ description="Analyze PyTest tests to determine if extra coverage is present."
+ )
+ parser.add_argument(
</code_context>
<issue_to_address>
**issue:** CLI help texts are misleading for this script’s purpose.
The parser description and `--list` help still reference dev-packages/coverage analysis, which doesn’t match this script’s behavior (counting/listing updatable packages from pcu output). Please update both to describe the actual behavior, e.g. “Count or list packages that can be updated from pcu output,” and adjust the `--list` help accordingly.
</issue_to_address>
### Comment 7
<location> `utils/extract_python_version_from_pipfile.py:13-16` </location>
<code_context>
+ with open("Pipfile", "r", encoding="utf-8") as readme_file:
+ line_in_file = readme_file.read().split("\n")
+
+ requires_index = line_in_file.index("[requires]")
+
+ PYTHON_VERSION_REGEX = r"^python_version\s=\s\"3\.(\d+)\"$"
+ version_regex_match = re.match(
+ PYTHON_VERSION_REGEX, line_in_file[requires_index + 1]
+ )
</code_context>
<issue_to_address>
**suggestion:** Parsing the Pipfile by line position is fragile; consider using a TOML parser instead.
This relies on `python_version` being the very next line after `[requires]` in a fixed format, so comments, blank lines, or reordering will break it. Instead, load the Pipfile with `toml` and read `config['requires']['python_version']` so it’s resilient to formatting changes.
</issue_to_address>
### Comment 8
<location> `test/test_multisource_configuration_loader.py:2` </location>
<code_context>
"""
-Moduole for tests of the multisource configuration loader.
+Moduole for tests of the multisource configuration loader.
</code_context>
<issue_to_address>
**suggestion (testing):** Add tests covering the new dependency/requirements utility scripts.
This PR adds several new utilities under `utils/` (`generate_dependencies_file.py`, `generate_requirements_file.py`, `count_remaining_pcu_packages.py`, `extract_python_version_from_pipfile.py`) without tests. Please add focused tests (e.g., under `test/test_utils_*`) that exercise normal flows, parsing edge cases (unexpected `pre-commit-update` output, missing/odd sections in `Pipfile`/`project.properties`, irregular `pcu` output), and error conditions (missing files, invalid TOML/Pipfile, non-zero exit codes). This will help keep these scripts reliable as formats evolve.
Suggested implementation:
```python
"""
Module for tests of the multisource configuration loader.
NOTE: These tests are for the configuration loader, not to test the correctness of any
of the individual loaders themselves.
"""
from __future__ import annotations
import subprocess
import sys
from pathlib import Path
from textwrap import dedent
import pytest
# ---------------------------------------------------------------------------
# Utilities / helpers for testing scripts in utils/
# ---------------------------------------------------------------------------
def _utils_dir() -> Path:
"""
Return the repository's ``utils/`` directory, assuming tests live under ``test/``.
"""
return Path(__file__).resolve().parents[1] / "utils"
def _utils_script(name: str) -> Path:
"""
Return the full path to a script in ``utils/``.
"""
script = _utils_dir() / name
if not script.exists():
pytest.skip(f"utils script {name!r} not found at {script}")
return script
def _run_script(script: Path, *args: str, cwd: Path | None = None) -> subprocess.CompletedProcess:
"""
Run a Python utility script as a subprocess, returning the CompletedProcess.
"""
return subprocess.run(
[sys.executable, str(script), *args],
cwd=str(cwd) if cwd is not None else None,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
)
# ---------------------------------------------------------------------------
# Tests for utils/extract_python_version_from_pipfile.py
# ---------------------------------------------------------------------------
def _get_extract_python_func():
"""
Dynamically locate the extraction function in utils/extract_python_version_from_pipfile.py.
We allow either:
- extract_python_version_from_pipfile(path)
- extract_python_version(path)
to keep this test resilient to minor naming differences.
"""
sys.path.insert(0, str(_utils_dir()))
try:
mod = pytest.importorskip("extract_python_version_from_pipfile")
finally:
# Avoid permanently contaminating sys.path
sys.path.pop(0)
func = getattr(mod, "extract_python_version_from_pipfile", None) or getattr(
mod, "extract_python_version", None
)
if func is None:
pytest.skip(
"No Python version extraction function found in "
"utils/extract_python_version_from_pipfile.py"
)
return func
def test_extract_python_version_from_pipfile_happy_path(tmp_path: Path):
extract_func = _get_extract_python_func()
pipfile = tmp_path / "Pipfile"
pipfile.write_text(
dedent(
"""
[requires]
python_version = "3.11"
"""
),
encoding="utf-8",
)
version = extract_func(pipfile)
# Allow both "3.11" and normalized variants like "3.11.x"
assert "3.11" in str(version)
def test_extract_python_version_from_pipfile_missing_requires_section(tmp_path: Path):
extract_func = _get_extract_python_func()
pipfile = tmp_path / "Pipfile"
pipfile.write_text(
dedent(
"""
[packages]
requests = "*"
"""
),
encoding="utf-8",
)
with pytest.raises(Exception):
# Depending on implementation this could be ValueError, KeyError, RuntimeError, etc.
extract_func(pipfile)
def test_extract_python_version_from_pipfile_invalid_toml(tmp_path: Path):
extract_func = _get_extract_python_func()
pipfile = tmp_path / "Pipfile"
# Deliberately invalid TOML
pipfile.write_text("not-[valid = toml", encoding="utf-8")
with pytest.raises(Exception):
extract_func(pipfile)
def test_extract_python_version_from_missing_pipfile(tmp_path: Path):
extract_func = _get_extract_python_func()
missing = tmp_path / "does_not_exist_Pipfile"
with pytest.raises(Exception):
extract_func(missing)
# ---------------------------------------------------------------------------
# Tests for utils/generate_dependencies_file.py
# ---------------------------------------------------------------------------
def test_generate_dependencies_file_missing_input_fails(tmp_path: Path):
script = _utils_script("generate_dependencies_file.py")
missing_project_properties = tmp_path / "project.properties"
result = _run_script(script, str(missing_project_properties), "out.txt", cwd=tmp_path)
# Command should fail with non-zero return code when input is missing.
assert result.returncode != 0
assert "project.properties" in (result.stderr or "") or "not found" in (
result.stderr or ""
).lower()
def test_generate_dependencies_file_with_minimal_project_properties(tmp_path: Path):
script = _utils_script("generate_dependencies_file.py")
project_properties = tmp_path / "project.properties"
output_file = tmp_path / "dependencies.txt"
# Minimal but syntactically valid properties file; adjust keys if needed.
project_properties.write_text(
dedent(
"""
name=my-project
version=1.0.0
"""
),
encoding="utf-8",
)
result = _run_script(script, str(project_properties), str(output_file), cwd=tmp_path)
assert result.returncode == 0
assert output_file.exists()
# Very generic sanity check: file shouldn't be empty.
assert output_file.read_text(encoding="utf-8").strip() != ""
# ---------------------------------------------------------------------------
# Tests for utils/generate_requirements_file.py
# ---------------------------------------------------------------------------
def test_generate_requirements_file_missing_pipfile_fails(tmp_path: Path):
script = _utils_script("generate_requirements_file.py")
missing_pipfile = tmp_path / "Pipfile"
requirements_txt = tmp_path / "requirements.txt"
result = _run_script(script, str(missing_pipfile), str(requirements_txt), cwd=tmp_path)
assert result.returncode != 0
assert "Pipfile" in (result.stderr or "") or "not found" in (result.stderr or "").lower()
def test_generate_requirements_file_with_irregular_pipfile_sections(tmp_path: Path):
script = _utils_script("generate_requirements_file.py")
pipfile = tmp_path / "Pipfile"
requirements_txt = tmp_path / "requirements.txt"
# Pipfile with odd/missing sections – e.g., dev-packages only, or extra tables.
pipfile.write_text(
dedent(
"""
[dev-packages]
pytest = "*"
[something-weird]
foo = "bar"
"""
),
encoding="utf-8",
)
result = _run_script(script, str(pipfile), str(requirements_txt), cwd=tmp_path)
# The script may either succeed (ignoring odd sections) or fail with an explicit error.
# We assert that if it succeeds, it produces some content.
if result.returncode == 0:
assert requirements_txt.exists()
assert requirements_txt.read_text(encoding="utf-8").strip() != ""
else:
assert "Pipfile" in (result.stderr or "") or "requires" in (result.stderr or "").lower()
# ---------------------------------------------------------------------------
# Tests for utils/count_remaining_pcu_packages.py
# ---------------------------------------------------------------------------
def test_count_remaining_pcu_packages_handles_irregular_output(tmp_path: Path):
script = _utils_script("count_remaining_pcu_packages.py")
# Assume the script can read from a file containing pcu output or takes it via CLI.
# Here we simulate weird/irregular output via a temp file and pass its path.
pcu_output_file = tmp_path / "pcu_output.txt"
pcu_output_file.write_text(
dedent(
"""
some header line
1 package(s) left
garbage-line
3 package(s) left
"""
),
encoding="utf-8",
)
result = _run_script(script, str(pcu_output_file), cwd=tmp_path)
# Script should not crash on irregular lines.
assert result.returncode == 0
# We don't assert the exact count because the implementation details are unknown,
# but we require that it prints *some* result.
assert (result.stdout or "").strip() != ""
def test_count_remaining_pcu_packages_missing_input_fails(tmp_path: Path):
script = _utils_script("count_remaining_pcu_packages.py")
missing_file = tmp_path / "missing_pcu_output.txt"
result = _run_script(script, str(missing_file), cwd=tmp_path)
assert result.returncode != 0
assert "pcu" in (result.stderr or "").lower() or "not found" in (
result.stderr or ""
).lower()
```
These tests assume:
1. The new scripts live in a top-level `utils/` directory relative to the repository root.
2. Their CLIs follow a simple pattern:
- `generate_dependencies_file.py PROJECT_PROPERTIES OUTPUT_FILE`
- `generate_requirements_file.py PIPFILE OUTPUT_FILE`
- `count_remaining_pcu_packages.py PCU_OUTPUT_FILE`
3. `extract_python_version_from_pipfile.py` exposes either `extract_python_version_from_pipfile(path)` or `extract_python_version(path)` and raises an exception on invalid/missing input.
If any of these assumptions differ from your actual implementation, you should:
1. Adjust `_utils_dir()` if `utils/` is located elsewhere.
2. Update the argument order in `_run_script(...)` calls to match each script’s true CLI.
3. Refine the assertions on `stdout`/`stderr` and return codes to line up with actual behavior.
4. If `extract_python_version_from_pipfile.py` exposes a differently named function or a CLI-only interface, update `_get_extract_python_func()` or replace those tests with subprocess-based tests similar to the others.
5. If the project already has common test utilities (e.g., helpers for resolving repo root or running scripts), replace `_utils_dir`, `_utils_script`, and `_run_script` with those existing helpers to stay consistent with the rest of the codebase.
</issue_to_address>
### Comment 9
<location> `utils/generate_requirements_file.py:18` </location>
<code_context>
+# pylint: disable=broad-exception-caught
+
+
+def __load_pipfile_packages_and_versions(
+ error_list: List[str],
+) -> Tuple[Dict[str, str], List[str], List[str]]:
</code_context>
<issue_to_address>
**issue (complexity):** Consider refactoring the script into smaller, clearly named functions with a focused CLI description and a main() entrypoint to make control flow and responsibilities easier to follow.
You can simplify this script without changing behavior by tightening responsibilities and naming:
1. **Clarify CLI description**
The current description is misleading noise for this tool.
```python
def parse_args():
parser = argparse.ArgumentParser(
description="Emit Pipfile [packages] or [dev-packages] as requirements-style lines."
)
parser.add_argument(
"-d",
"--dev",
dest="export_dev_packages",
action="store_true",
default=False,
help="output the dev-packages section instead of packages section",
)
return parser.parse_args()
```
2. **Split Pipfile loading into two focused helpers**
Right now `__load_pipfile_packages_and_versions` mixes version extraction, package-name extraction, and error accumulation into one function.
You can keep functionality but make the flow easier to follow by separating the concerns and returning explicit types instead of a tuple with positional meaning:
```python
def load_pipfile_versions() -> Dict[str, str]:
versions: Dict[str, str] = {}
for name, _, version in cast(List[List[str]], load_dependencies("Pipfile", False, [])):
versions[name] = version
return versions
def load_pipfile_package_names() -> Tuple[List[str], List[str]]:
with open("Pipfile", "rt", encoding="utf-8") as f:
config = toml.load(f)
package_list = list(config["packages"].keys())
dev_package_list = list(config["dev-packages"].keys())
return package_list, dev_package_list
```
Then in `__main__` (or `main()`):
```python
try:
pipfile_versions = load_pipfile_versions()
package_names, dev_package_names = load_pipfile_package_names()
except Exception as exc: # noqa: B036
print(f"Failed to load Pipfile: {exc}")
raise SystemExit(1)
```
This removes the `error_list` pattern and avoids partially valid data.
3. **Use simple, non-mangled names and a `main()`**
The leading double-underscore naming and top-level logic make the control flow harder to scan than necessary in a small script. You can keep behavior identical and make the flow more obvious:
```python
def load_package_exclude_list(use_dev_list: bool) -> List[str]:
with open("project.properties", "r", encoding="utf-8") as file:
text = file.read()
config = configparser.RawConfigParser()
config.read_string(f"[main]\n{text}")
key = "PACKAGE_UPDATE_DEV_EXCLUDE_LIST" if use_dev_list else "PACKAGE_UPDATE_EXCLUDE_LIST"
return [item.strip() for item in config.get("main", key, fallback="").split(",")]
def main() -> None:
args = parse_args()
exclude_list = load_package_exclude_list(args.export_dev_packages)
try:
versions = load_pipfile_versions()
packages, dev_packages = load_pipfile_package_names()
except Exception as exc: # noqa: B036
print(f"Failed to load Pipfile: {exc}")
raise SystemExit(1)
selected = dev_packages if args.export_dev_packages else packages
for name in selected:
if name in exclude_list:
continue
print(f"{name}=={versions[name]}")
print("", flush=True)
if __name__ == "__main__":
main()
```
These changes keep all behavior, but reduce cognitive load by:
- Removing tuple-with-error-list outputs
- Making argument parsing and Pipfile loading self-descriptive
- Making the main control flow explicit and linear.
</issue_to_address>
### Comment 10
<location> `utils/count_remaining_pcu_packages.py:53` </location>
<code_context>
+ return [i.strip() for i in ss]
+
+
+parsed_args = __handle_arguments()
+exclude_list = __load_package_exclude_list_from_properties_file(
+ parsed_args.export_dev_packages
</code_context>
<issue_to_address>
**issue (complexity):** Consider refactoring this script by moving top-level logic into a main function, extracting a line-parsing helper, clarifying CLI help text, and optionally reusing shared config-loading utilities to make the code easier to read and test.
You can reduce complexity and improve testability without changing behavior by:
### 1. Move top-level logic into a `main()` function
This removes import-time side effects and makes the script easier to skim and unit test.
```python
def main() -> None:
parsed_args = __handle_arguments()
exclude_list = __load_package_exclude_list_from_properties_file(
parsed_args.export_dev_packages
)
with open(parsed_args.file_name, "r", encoding="utf-8") as file:
text = file.read()
has_extra_parameters = parsed_args.list_items
non_excluded_packages: List[str] = []
have_seen_start = False
for line in text.splitlines():
pkg = __parse_pcu_line(line, have_seen_start)
if pkg is None:
if line.strip().startswith("In "):
have_seen_start = True
continue
name, current, target = pkg
if name not in exclude_list:
if has_extra_parameters:
print(
f" Package '{name}' has version '{current}' "
f"and can be updated to version '{target}'"
)
non_excluded_packages.append(name)
if not has_extra_parameters:
print(len(non_excluded_packages))
if __name__ == "__main__":
main()
```
### 2. Extract a small helper for parsing `pcu` lines
Encapsulating the string munging makes the main loop much easier to follow and isolates assumptions about the `pcu` output format.
```python
from typing import Optional, Tuple
def __parse_pcu_line(
line: str,
have_seen_start: bool,
) -> Optional[Tuple[str, str, str]]:
line = line.strip()
if not have_seen_start or not line:
return None
if line.startswith("All dependencies match") or line.startswith("Run pcu"):
return None
# normalize whitespace and split
while " " in line:
line = line.replace(" ", " ")
parts = line.split(" ")
if len(parts) < 4:
return None # or raise if this should never happen
# name, current_version, "->", target_version
return parts[0], parts[1], parts[3]
```
You can adjust the `have_seen_start` handling to match your exact control flow, but the idea is to keep the string logic out of the main loop.
### 3. Clarify argument help text
`--dev` and `--list` currently have identical help strings, which is confusing. You can keep behavior while improving clarity:
```python
def __handle_arguments():
parser = argparse.ArgumentParser(
description="Count or list updatable packages from a 'pcu' output file."
)
parser.add_argument(
"-d",
"--dev",
dest="export_dev_packages",
action="store_true",
default=False,
help="use dev package exclude list instead of the main package exclude list",
)
parser.add_argument(
"-l",
"--list",
dest="list_items",
action="store_true",
default=False,
help="list each updatable package instead of printing only the count",
)
parser.add_argument("file_name", action="store", help="pcu output file to process")
return parser.parse_args()
```
### 4. (If applicable) Deduplicate config parsing
If another module (e.g., `generate_requirements_file.py`) already has a helper that reads `project.properties` via `configparser`, consider centralizing that logic:
```python
# in a shared module, e.g., config_utils.py
def load_properties_section(path: str = "project.properties") -> configparser.SectionProxy:
with open(path, "r", encoding="utf-8") as properties_file:
properties_text = properties_file.read()
config = configparser.RawConfigParser()
config.read_string(f"[main]\n{properties_text}")
return config["main"]
```
Then:
```python
from config_utils import load_properties_section
def __load_package_exclude_list_from_properties_file(use_dev_list: bool) -> List[str]:
section = load_properties_section()
key = "PACKAGE_UPDATE_DEV_EXCLUDE_LIST" if use_dev_list else "PACKAGE_UPDATE_EXCLUDE_LIST"
return [i.strip() for i in section.get(key, fallback="").split(",") if i.strip()]
```
These changes preserve behavior but significantly reduce cognitive load and make the script easier to maintain.
</issue_to_address>
| return package_version_map | ||
|
|
||
|
|
||
| def __load_precommit_packages_and_versions_translate_ouptut( |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
issue (typo): Function name contains a typo that harms readability and discoverability.
The function name __load_precommit_packages_and_versions_translate_ouptut has a typo in ouptut. Please rename it to ..._translate_output and update its usages for clarity and easier searchability.
Suggested implementation:
def __load_precommit_packages_and_versions_translate_output(
data: str, version_list: List[List[str]]
) -> None:Search the codebase for all usages of __load_precommit_packages_and_versions_translate_ouptut and update them to __load_precommit_packages_and_versions_translate_output. No other behavior changes are required.
```python
# pylint: disable=broad-exception-caught


def __abc():
```
suggestion: Top-level function name `__abc` is non-descriptive and makes navigation harder.
Please rename this driver function to something intent-revealing (for example, `__generate_combined_dependencies`) and update its invocation under `if __name__ == "__main__":` accordingly to improve readability and maintainability.
Suggested implementation:

```python
def __generate_combined_dependencies():
    error_messages: List[str] = []
```

You'll also need to update the function invocation under `if __name__ == "__main__":`. Look for a call like:

```python
__abc()
```

and rename it to:

```python
__generate_combined_dependencies()
```

If there are any other internal references to `__abc` (e.g., tests or helper wrappers), they should be updated to `__generate_combined_dependencies` as well.
| """ | ||
| Module to generate a targetted requirements.txt file from a Pipfile. Only the [packages] sectopmn | ||
| or [dev-packages] section is exported, and only the package names listed in the file are exported. | ||
|
|
||
| Other tools will export all packages (listed and discovered) to a requirements.txt file. |
issue (typo): Module docstring has typos and could be tightened for clarity.
Please correct `targetted` → `targeted` and `sectopmn` → `section`, and consider phrasing this as "targeted requirements file" and "[packages] or [dev-packages] section" for clearer readability.
```python
parser = argparse.ArgumentParser(
    description="Analyze PyTest tests to determine if extra coverage is present."
```
issue: Argument parser description is misleading and appears to be copy-pasted from an unrelated tool.
This description refers to analyzing PyTest coverage, but this script exports sections from a Pipfile. Please update it to accurately describe the tool’s behavior, e.g. "Generate a requirements file for either [packages] or [dev-packages] from Pipfile," to avoid confusing users.
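A sketch of what the corrected parser might look like; the description wording follows the review's suggestion, and the `--dev` flag is assumed from the PR's sequence diagram, where the generator is invoked with an optional `--dev` argument:

```python
# Hypothetical corrected CLI setup for generate_requirements_file.py.
import argparse

parser = argparse.ArgumentParser(
    description="Generate a requirements file for either [packages] or "
    "[dev-packages] from Pipfile."
)
parser.add_argument(
    "--dev",
    action="store_true",
    help="Export the [dev-packages] section instead of [packages].",
)

args = parser.parse_args(["--dev"])
print(args.dev)
# → True
```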
```python
parser = argparse.ArgumentParser(
    description="Analyze PyTest tests to determine if extra coverage is present."
```
issue: CLI help texts are misleading for this script's purpose.
The parser description and `--list` help still reference dev-packages/coverage analysis, which doesn't match this script's behavior (counting/listing updatable packages from pcu output). Please update both to describe the actual behavior, e.g. "Count or list packages that can be updated from pcu output," and adjust the `--list` help accordingly.
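A minimal sketch of the suggested fix for `count_remaining_pcu_packages.py`; the exact help wording is an assumption, only the `--list` flag name comes from the review:

```python
# Hypothetical corrected parser for the pcu-output counting script.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        description="Count or list packages that can be updated from pcu output."
    )
    parser.add_argument(
        "--list",
        action="store_true",
        help="List the updatable packages instead of only printing their count.",
    )
    return parser


args = build_parser().parse_args(["--list"])
print(args.list)
# → True
```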
```python
requires_index = line_in_file.index("[requires]")

PYTHON_VERSION_REGEX = r"^python_version\s=\s\"3\.(\d+)\"$"
version_regex_match = re.match(
```
suggestion: Parsing the Pipfile by line position is fragile; consider using a TOML parser instead.
This relies on `python_version` being the very next line after `[requires]` in a fixed format, so comments, blank lines, or reordering will break it. Instead, load the Pipfile with `toml` and read `config["requires"]["python_version"]` so it's resilient to formatting changes.
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@            Coverage Diff            @@
##              main      #312   +/-  ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files           10        10
  Lines          657       657
  Branches        89        89
=========================================
  Hits           657       657
```

☔ View full report in Codecov by Sentry.
Summary by Sourcery
Update tooling, dependencies, and minimum supported Python version while adding automated dependency management utilities and improving test/CI/reporting integration.
New Features:
Enhancements:
Build:
CI:
Documentation:
Tests: