
Conversation

@jackdewinter
Owner

@jackdewinter jackdewinter commented Jan 12, 2026

Summary by Sourcery

Update tooling, dependencies, and minimum supported Python version while adding automated dependency management utilities and improving test/CI/reporting integration.

New Features:

  • Add a test runner option to enable detailed per-test debug logging via a new --test-verbose flag.
  • Introduce scripts and utilities to detect, list, and apply dependency upgrades across Pipenv and pre-commit, and to generate combined dependency metadata for external consumers.

Enhancements:

  • Raise the minimum supported Python version to 3.10 and align tooling configuration (Sourcery, classifiers) with this baseline.
  • Refresh pre-commit hooks, static analysis tools, formatters, and documentation tooling to current versions and normalize related configuration and comments.
  • Simplify packaging and release scripts by disabling the pyroma gate and adjusting shellcheck directives.
  • Improve dependency alignment by expanding cookiecutter templates, Pipfile sections, and install requirement verification to cover additional runtime packages and dev tools.
  • Refine internal helper signatures and minor code comments for consistency and style.

Build:

  • Upgrade multiple dev dependencies in Pipfile, including build, coverage, pytest stack, pylint, mkdocs-related plugins, twine, and supporting libraries.
  • Update project metadata (cookiecutter defaults, project.properties) to reflect the new release version and to control which packages participate in automated upgrade flows.

CI:

  • Bump CodeQL GitHub Actions to v4 and upload-artifact actions to v6.
  • Expose dynamic dependency badges in the README powered by a generated dependencies.json file in the publish directory.

Documentation:

  • Normalize PyMarkdown suppression comments and restructure the new documentation changelog to a simpler, non-historical format.
  • Clean up documentation examples and contribution guidelines to use the updated comment style expected by the markdown linter.

Tests:

  • Enhance the test execution script to always configure structured log formatting and optional debug-level output while capturing logs to file in quiet mode.

@sourcery-ai

sourcery-ai bot commented Jan 12, 2026

Reviewer's Guide

This PR modernizes tooling and dependency management, introduces automated dependency checking utilities, updates CI workflows and documentation badges, and aligns the project’s supported Python version and docs/lint configs with the new setup.

Sequence diagram for dependency upgrade checking via clean.sh and check_project_dependencies.sh

sequenceDiagram
    actor Developer
    participant clean_sh as clean_sh
    participant check_deps as check_project_dependencies_sh
    participant Pipenv
    participant generate_req as generate_requirements_file_py
    participant pcu as pcu
    participant count_pcu as count_remaining_pcu_packages_py

    Developer->>clean_sh: run clean.sh
    clean_sh->>clean_sh: analyze_project
    clean_sh->>clean_sh: find_unused_pylint_suppressions
    clean_sh->>check_deps: run ./check_project_dependencies.sh

    check_deps->>check_deps: parse_command_line(check_mode=1)
    check_deps->>check_deps: start_process

    check_deps->>Pipenv: pre-commit-update --dry-run
    Pipenv-->>check_deps: exit status (0 or 1)
    check_deps->>check_deps: set NEED_UPDATE based on status

    loop For each of standard and development packages
        check_deps->>generate_req: pipenv run python utils/generate_requirements_file.py [--dev]
        generate_req-->>check_deps: temp requirements file path

        check_deps->>pcu: pipenv run pcu temp_requirements.txt
        pcu-->>check_deps: pcu report file

        check_deps->>count_pcu: pipenv run python utils/count_remaining_pcu_packages.py [--dev] pcu_report
        count_pcu-->>check_deps: remaining_upgradable_count
        check_deps->>check_deps: update NEED_UPDATE if count > 0
    end

    alt NEED_UPDATE == 0
        check_deps-->>clean_sh: exit 0
        clean_sh->>clean_sh: execute_test_suite
    else NEED_UPDATE != 0
        check_deps-->>clean_sh: exit 2
        clean_sh->>clean_sh: complete_process 1 "One or more project dependencies can be updated."
    end

    clean_sh-->>Developer: script result and messages

Class diagram for new dependency management and helper utilities

classDiagram
    class CleanSh {
        +look_for_upgrades()
        +execute_test_suite()
        +complete_process(return_code, reason)
    }

    class CheckProjectDependenciesSh {
        +parse_command_line(args)
        +start_process()
        +complete_process(return_code, reason)
        +check_for_updates(package_type, dev_flag)
        +perform_updates(package_type, dev_flag, import_flags)
    }

    class GenerateDependenciesFilePy {
        +icons_map: Dict
        +__match_icons(first_column_value)
        +__load_precommit_packages_and_versions(error_messages)
        +__load_precommit_packages_and_versions_translate_ouptut(data, version_list)
        +__load_pipfile_packages_and_versions(error_messages)
        +__abc()
    }

    class GenerateRequirementsFilePy {
        +__load_pipfile_packages_and_versions(error_list)
        +__handle_arguments()
        +__load_package_exclude_list_from_properties_file(use_dev_list)
        +main()
    }

    class CountRemainingPcuPackagesPy {
        +__handle_arguments()
        +__load_package_exclude_list_from_properties_file(use_dev_list)
        +main()
    }

    class ExtractPythonVersionFromPipfilePy {
        +PYTHON_VERSION_REGEX
        +main()
    }

    class VerifyInstallRequirementsPy {
        +__read_requirements_file()
        +__read_pipfile()
        +errors_found
        +main()
    }

    class PtestSh {
        +parse_command_line()
        +execute_tests()
        +TEST_VERBOSE_MODE
    }

    class Pytest
    class Pipfile
    class Install_requirements_txt
    class Project_properties
    class Pre_commit_config_yaml
    class Publish_dependencies_json

    CleanSh ..> CheckProjectDependenciesSh : invokes
    CheckProjectDependenciesSh ..> GenerateRequirementsFilePy : uses via pipenv run python
    CheckProjectDependenciesSh ..> CountRemainingPcuPackagesPy : uses via pipenv run python
    GenerateDependenciesFilePy ..> VerifyInstallRequirementsPy : shares Pipfile context
    GenerateDependenciesFilePy ..> ExtractPythonVersionFromPipfilePy : shares Pipfile context
    PtestSh ..> Pytest : configures logging flags
    VerifyInstallRequirementsPy ..> Pipfile : reads Pipfile
    VerifyInstallRequirementsPy ..> Install_requirements_txt : reads install-requirements.txt
    GenerateRequirementsFilePy ..> Pipfile : reads Pipfile
    CountRemainingPcuPackagesPy ..> Project_properties : reads exclude lists from project.properties
    GenerateRequirementsFilePy ..> Project_properties : reads exclude lists from project.properties
    GenerateDependenciesFilePy ..> Pre_commit_config_yaml : reads .pre-commit-config.yaml
    GenerateDependenciesFilePy ..> Publish_dependencies_json : writes publish/dependencies.json

File-Level Changes

Change Details Files
Upgrade formatting, linting, and security tooling versions and normalize their configuration in pre-commit.
  • Bump versions for black, isort, shfmt, flake8, bandit, mypy, pymarkdown, and shellcheck hooks
  • Normalize YAML indentation, argument spacing, and comments in hook definitions
  • Keep existing hook behavior (excludes, args, additional_dependencies) while aligning style
.pre-commit-config.yaml
Standardize documentation comments to satisfy updated PyMarkdown rules and simplify changelog structure.
  • Replace older HTML comment directive syntax with the new <!-- pyml ... --> form across docs
  • Simplify the newdocs changelog to use ‘Added/Changed/Fixed’ sections with placeholder content instead of duplicating historic releases
newdocs/src/changelog.md
newdocs/src/user-guide.md
newdocs/src/contribute.md
Refresh dev dependencies, add dependency‑management tools, and align install vs dev package declarations.
  • Update versions of dev tools (build, coverage, pylint, pytest, pytest-cov, twine, mkdocstrings, mkdocstrings-python, pre-commit, sourcery, importlib-metadata)
  • Add setuptools explicitly to dev-packages, plus pip-check-updates, pre-commit-update, pyyaml, tomli
  • Reorder and deduplicate core packages (tomli, pyyaml) to keep constraints consistent between sections
Pipfile
Pipfile.lock
install-requirements.txt
cookieslicer.json
Introduce an automated dependency auditing and upgrade flow integrated with project scripts.
  • Add check_project_dependencies.sh to check/upgrade Pipenv and pre-commit dependencies using pip-check-updates and pre-commit-update
  • Add helper Python utilities to generate requirements snippets, filter upgrade candidates, merge Pipfile and pre-commit versions into publish/dependencies.json, and extract the Python version from Pipfile
  • Wire dependency check into clean.sh so CI/local cleanliness runs fail if upgrades are available
  • Add project properties to configure package exclude lists for upgrade checks
check_project_dependencies.sh
utils/generate_requirements_file.py
utils/count_remaining_pcu_packages.py
utils/generate_dependencies_file.py
utils/extract_python_version_from_pipfile.py
clean.sh
project.properties
Adjust test and packaging scripts to new tool versions and improve logging/debuggability.
  • Update shellcheck suppression code to SC2329 in shell scripts
  • Disable pyroma packaging check in package-release.sh while keeping function stub commented for possible re‑enablement
  • Extend ptest.sh with a --test-verbose flag and add consistent pytest log formatting/date-format options in both verbose and quiet modes
package-release.sh
ptest.sh
clean.sh
Align project metadata, badges, and Python version support with the new environment.
  • Update cookiecutter default project version and dependency lists to reflect new core deps
  • Change README CI and dependency badges to dynamic badges driven by publish/dependencies.json and simplify workflow-status badge URL
  • Bump minimum supported Python from 3.9 to 3.10 in setup.py and Sourcery config, and adjust classifiers accordingly
cookiecutter-config.yaml
README.md
setup.py
.sourcery.yaml
Fix minor Python style issues and align naming with conventions in existing utilities.
  • Convert ERRORS_FOUND to a lower-case errors_found variable in verify_install_requirements.py and update references
  • Add a trailing comma in ApplicationPropertiesLoaderHelper.set_error_handler_if_not_set signature for formatting consistency
  • Fix a small whitespace typo in a test module docstring
utils/verify_install_requirements.py
application_properties/application_properties_loader_helper.py
test/test_multisource_configuration_loader.py
Update GitHub Actions workflows to newer major versions of CodeQL and artifact upload actions.
  • Bump github/codeql-action init, autobuild, and analyze steps from v3 to v4
  • Bump actions/upload-artifact from v5 to v6 in main workflow for coverage and package upload artifacts
.github/workflows/codeql.yml
.github/workflows/main.yml

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

@sourcery-ai sourcery-ai bot left a comment

Hey - I've found 10 issues and left some high-level feedback:

  • In utils/generate_dependencies_file.py, the subprocess handling around pre-commit-update should use communicate() instead of wait() followed by stdout.read()/stderr.read() to avoid potential deadlocks with large outputs.
  • The argparse descriptions and help texts in utils/generate_requirements_file.py and utils/count_remaining_pcu_packages.py still refer to PyTest analysis or copy-pasted wording (e.g., Analyze PyTest tests..., output the dev-packages section instead of packages section for --list), which should be updated to accurately describe what these utilities do.
  • The logic in utils/extract_python_version_from_pipfile.py assumes that the Python version is always on the line immediately after [requires], making it brittle to formatting changes in Pipfile; consider parsing the section more robustly (e.g., by scanning until a python_version = "..." line or using a TOML parser).
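The `communicate()` pattern recommended in the first bullet can be sketched as follows. This is a minimal illustration, not the actual `generate_dependencies_file.py` code; a portable stand-in child process is used in place of the real `pre-commit-update` invocation:

```python
import subprocess
import sys

# Stand-in child that writes a large volume to stdout. With the
# wait()-then-read() pattern, the child can block once the OS pipe
# buffer fills while the parent is still blocked in wait(), deadlocking
# both. communicate() drains stdout and stderr concurrently and only
# then reaps the child, so large outputs are safe.
child_code = "import sys; sys.stdout.write('x' * 1_000_000)"

process = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
)
stdout_text, stderr_text = process.communicate()

assert process.returncode == 0
assert len(stdout_text) == 1_000_000
```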
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In `utils/generate_dependencies_file.py`, the subprocess handling around `pre-commit-update` should use `communicate()` instead of `wait()` followed by `stdout.read()`/`stderr.read()` to avoid potential deadlocks with large outputs.
- The argparse descriptions and help texts in `utils/generate_requirements_file.py` and `utils/count_remaining_pcu_packages.py` still refer to PyTest analysis or copy-pasted wording (e.g., `Analyze PyTest tests...`, `output the dev-packages section instead of packages section` for `--list`), which should be updated to accurately describe what these utilities do.
- The logic in `utils/extract_python_version_from_pipfile.py` assumes that the Python version is always on the line immediately after `[requires]`, making it brittle to formatting changes in `Pipfile`; consider parsing the section more robustly (e.g., by scanning until a `python_version = "..."` line or using a TOML parser).

## Individual Comments

### Comment 1
<location> `utils/generate_dependencies_file.py:89` </location>
<code_context>
+    return package_version_map
+
+
+def __load_precommit_packages_and_versions_translate_ouptut(
+    data: str, version_list: List[List[str]]
+) -> None:
</code_context>

<issue_to_address>
**issue (typo):** Function name contains a typo that harms readability and discoverability.

The function name `__load_precommit_packages_and_versions_translate_ouptut` has a typo in `ouptut`. Please rename it to `..._translate_output` and update its usages for clarity and easier searchability.

Suggested implementation:

```python
def __load_precommit_packages_and_versions_translate_output(
    data: str, version_list: List[List[str]]
) -> None:

```

Search the codebase for all usages of `__load_precommit_packages_and_versions_translate_ouptut` and update them to `__load_precommit_packages_and_versions_translate_output`. No other behavior changes are required.
</issue_to_address>

### Comment 2
<location> `utils/generate_dependencies_file.py:98-103` </location>
<code_context>
+        if not next_line:
+            continue
+        split_line = list(next_line.split(" "))
+        matched_icon = __match_icons(split_line[0])
+        assert (
+            matched_icon
+        ), f"Unrecognized icon in pre-commit output: '{split_line[0]}'"
+        split_line[0] = matched_icon
+        assert split_line[2] == "-"
+        del split_line[2]
+        if len(split_line) > 3 and split_line[3] == "->":
</code_context>

<issue_to_address>
**issue (bug_risk):** Using `assert` to validate external tool output can cause unexpected crashes and should be replaced with explicit error handling.

`assert matched_icon` and `assert split_line[2] == "-"` assume the pre-commit output format never changes; if it does, the script will crash with an unhelpful `AssertionError`. Instead, use explicit checks that either append a descriptive message to `error_messages` or raise a controlled exception, consistent with the rest of the module’s error handling, rather than relying on `assert` for runtime validation.
</issue_to_address>
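A sketch of the suggested error handling, simplified to the two flagged assertions (the icon mapping and the `__match_icons` helper are stand-ins here, and the upgrade-arrow handling from the original loop is omitted):

```python
from typing import List, Optional

ICONS_MAP = {"[+]": "added", "[-]": "removed"}  # illustrative mapping only


def match_icons(first_column_value: str) -> Optional[str]:
    # Stand-in for the module's private __match_icons helper.
    return ICONS_MAP.get(first_column_value)


def translate_line(next_line: str, error_messages: List[str]) -> Optional[List[str]]:
    split_line = next_line.split(" ")
    matched_icon = match_icons(split_line[0])
    if not matched_icon:
        # Record a descriptive error instead of crashing with AssertionError.
        error_messages.append(
            f"Unrecognized icon in pre-commit output: '{split_line[0]}'"
        )
        return None
    split_line[0] = matched_icon
    if len(split_line) < 3 or split_line[2] != "-":
        error_messages.append(f"Unexpected pre-commit output format: '{next_line}'")
        return None
    del split_line[2]
    return split_line
```

The caller can then report every accumulated message at exit instead of dying on the first malformed line.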

### Comment 3
<location> `utils/generate_dependencies_file.py:129` </location>
<code_context>
+# pylint: disable=broad-exception-caught
+
+
+def __abc():
+    error_messages: List[str] = []
+
</code_context>

<issue_to_address>
**suggestion:** Top-level function name `__abc` is non-descriptive and makes navigation harder.

Please rename this driver function to something intent-revealing (for example, `__generate_combined_dependencies`) and update its invocation under `if __name__ == "__main__":` accordingly to improve readability and maintainability.

Suggested implementation:

```python
def __generate_combined_dependencies():
    error_messages: List[str] = []

```

You’ll also need to update the function invocation under `if __name__ == "__main__":`. Look for a call like:
`__abc()`  
and rename it to:
`__generate_combined_dependencies()`
If there are any other internal references to `__abc` (e.g., tests or helper wrappers), they should be updated to `__generate_combined_dependencies` as well.
</issue_to_address>

### Comment 4
<location> `utils/generate_requirements_file.py:1-5` </location>
<code_context>
-        handle_error_fn: Optional[Callable[[str, Optional[Exception]], None]]
+        handle_error_fn: Optional[Callable[[str, Optional[Exception]], None]],
     ) -> Callable[[str, Optional[Exception]], None]:
         """
         If the error_handler is not set, make sure to set it to print to output.
</code_context>

<issue_to_address>
**issue (typo):** Module docstring has typos and could be tightened for clarity.

Please correct `targetted` → `targeted` and `sectopmn` → `section`, and consider phrasing this as "targeted requirements file" and "[packages] or [dev-packages] section" for clearer readability.
</issue_to_address>

### Comment 5
<location> `utils/generate_requirements_file.py:44-45` </location>
<code_context>
+
+
+def __handle_arguments():
+    parser = argparse.ArgumentParser(
+        description="Analyze PyTest tests to determine if extra coverage is present."
+    )
+    parser.add_argument(
</code_context>

<issue_to_address>
**issue:** Argument parser description is misleading and appears to be copy-pasted from an unrelated tool.

This description refers to analyzing PyTest coverage, but this script exports sections from a Pipfile. Please update it to accurately describe the tool’s behavior, e.g. "Generate a requirements file for either [packages] or [dev-packages] from Pipfile," to avoid confusing users.
</issue_to_address>

### Comment 6
<location> `utils/count_remaining_pcu_packages.py:11-12` </location>
<code_context>
+
+
+def __handle_arguments():
+    parser = argparse.ArgumentParser(
+        description="Analyze PyTest tests to determine if extra coverage is present."
+    )
+    parser.add_argument(
</code_context>

<issue_to_address>
**issue:** CLI help texts are misleading for this script’s purpose.

The parser description and `--list` help still reference dev-packages/coverage analysis, which doesn’t match this script’s behavior (counting/listing updatable packages from pcu output). Please update both to describe the actual behavior, e.g. “Count or list packages that can be updated from pcu output,” and adjust the `--list` help accordingly.
</issue_to_address>

### Comment 7
<location> `utils/extract_python_version_from_pipfile.py:13-16` </location>
<code_context>
+    with open("Pipfile", "r", encoding="utf-8") as readme_file:
+        line_in_file = readme_file.read().split("\n")
+
+    requires_index = line_in_file.index("[requires]")
+
+    PYTHON_VERSION_REGEX = r"^python_version\s=\s\"3\.(\d+)\"$"
+    version_regex_match = re.match(
+        PYTHON_VERSION_REGEX, line_in_file[requires_index + 1]
+    )
</code_context>

<issue_to_address>
**suggestion:** Parsing the Pipfile by line position is fragile; consider using a TOML parser instead.

This relies on `python_version` being the very next line after `[requires]` in a fixed format, so comments, blank lines, or reordering will break it. Instead, load the Pipfile with `toml` and read `config['requires']['python_version']` so it’s resilient to formatting changes.
</issue_to_address>

### Comment 8
<location> `test/test_multisource_configuration_loader.py:2` </location>
<code_context>
 """
-Moduole for tests of the multisource configuration loader.  
+Moduole for tests of the multisource configuration loader.

</code_context>

<issue_to_address>
**suggestion (testing):** Add tests covering the new dependency/requirements utility scripts.

This PR adds several new utilities under `utils/` (`generate_dependencies_file.py`, `generate_requirements_file.py`, `count_remaining_pcu_packages.py`, `extract_python_version_from_pipfile.py`) without tests. Please add focused tests (e.g., under `test/test_utils_*`) that exercise normal flows, parsing edge cases (unexpected `pre-commit-update` output, missing/odd sections in `Pipfile`/`project.properties`, irregular `pcu` output), and error conditions (missing files, invalid TOML/Pipfile, non-zero exit codes). This will help keep these scripts reliable as formats evolve.

Suggested implementation:

```python
"""
Moduole for tests of the multisource configuration loader.

NOTE:   These tests are for the configuration loader, not to test the correctness of any
        of the individual loaders themselves.
"""

from __future__ import annotations

import subprocess
import sys
from pathlib import Path
from textwrap import dedent

import pytest


# ---------------------------------------------------------------------------
# Utilities / helpers for testing scripts in utils/
# ---------------------------------------------------------------------------


def _utils_dir() -> Path:
    """
    Return the repository's ``utils/`` directory, assuming tests live under ``test/``.
    """
    return Path(__file__).resolve().parents[1] / "utils"


def _utils_script(name: str) -> Path:
    """
    Return the full path to a script in ``utils/``.
    """
    script = _utils_dir() / name
    if not script.exists():
        pytest.skip(f"utils script {name!r} not found at {script}")
    return script


def _run_script(script: Path, *args: str, cwd: Path | None = None) -> subprocess.CompletedProcess:
    """
    Run a Python utility script as a subprocess, returning the CompletedProcess.
    """
    return subprocess.run(
        [sys.executable, str(script), *args],
        cwd=str(cwd) if cwd is not None else None,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )


# ---------------------------------------------------------------------------
# Tests for utils/extract_python_version_from_pipfile.py
# ---------------------------------------------------------------------------


def _get_extract_python_func():
    """
    Dynamically locate the extraction function in utils/extract_python_version_from_pipfile.py.

    We allow either:
      - extract_python_version_from_pipfile(path)
      - extract_python_version(path)

    to keep this test resilient to minor naming differences.
    """
    sys.path.insert(0, str(_utils_dir()))
    try:
        mod = pytest.importorskip("extract_python_version_from_pipfile")
    finally:
        # Avoid permanently contaminating sys.path
        sys.path.pop(0)

    func = getattr(mod, "extract_python_version_from_pipfile", None) or getattr(
        mod, "extract_python_version", None
    )
    if func is None:
        pytest.skip(
            "No Python version extraction function found in "
            "utils/extract_python_version_from_pipfile.py"
        )
    return func


def test_extract_python_version_from_pipfile_happy_path(tmp_path: Path):
    extract_func = _get_extract_python_func()

    pipfile = tmp_path / "Pipfile"
    pipfile.write_text(
        dedent(
            """
            [requires]
            python_version = "3.11"
            """
        ),
        encoding="utf-8",
    )

    version = extract_func(pipfile)
    # Allow both "3.11" and normalized variants like "3.11.x"
    assert "3.11" in str(version)


def test_extract_python_version_from_pipfile_missing_requires_section(tmp_path: Path):
    extract_func = _get_extract_python_func()

    pipfile = tmp_path / "Pipfile"
    pipfile.write_text(
        dedent(
            """
            [packages]
            requests = "*"
            """
        ),
        encoding="utf-8",
    )

    with pytest.raises(Exception):
        # Depending on implementation this could be ValueError, KeyError, RuntimeError, etc.
        extract_func(pipfile)


def test_extract_python_version_from_pipfile_invalid_toml(tmp_path: Path):
    extract_func = _get_extract_python_func()

    pipfile = tmp_path / "Pipfile"
    # Deliberately invalid TOML
    pipfile.write_text("not-[valid = toml", encoding="utf-8")

    with pytest.raises(Exception):
        extract_func(pipfile)


def test_extract_python_version_from_missing_pipfile(tmp_path: Path):
    extract_func = _get_extract_python_func()

    missing = tmp_path / "does_not_exist_Pipfile"

    with pytest.raises(Exception):
        extract_func(missing)


# ---------------------------------------------------------------------------
# Tests for utils/generate_dependencies_file.py
# ---------------------------------------------------------------------------


def test_generate_dependencies_file_missing_input_fails(tmp_path: Path):
    script = _utils_script("generate_dependencies_file.py")

    missing_project_properties = tmp_path / "project.properties"

    result = _run_script(script, str(missing_project_properties), "out.txt", cwd=tmp_path)

    # Command should fail with non-zero return code when input is missing.
    assert result.returncode != 0
    assert "project.properties" in (result.stderr or "") or "not found" in (
        result.stderr or ""
    ).lower()


def test_generate_dependencies_file_with_minimal_project_properties(tmp_path: Path):
    script = _utils_script("generate_dependencies_file.py")

    project_properties = tmp_path / "project.properties"
    output_file = tmp_path / "dependencies.txt"

    # Minimal but syntactically valid properties file; adjust keys if needed.
    project_properties.write_text(
        dedent(
            """
            name=my-project
            version=1.0.0
            """
        ),
        encoding="utf-8",
    )

    result = _run_script(script, str(project_properties), str(output_file), cwd=tmp_path)

    assert result.returncode == 0
    assert output_file.exists()
    # Very generic sanity check: file shouldn't be empty.
    assert output_file.read_text(encoding="utf-8").strip() != ""


# ---------------------------------------------------------------------------
# Tests for utils/generate_requirements_file.py
# ---------------------------------------------------------------------------


def test_generate_requirements_file_missing_pipfile_fails(tmp_path: Path):
    script = _utils_script("generate_requirements_file.py")

    missing_pipfile = tmp_path / "Pipfile"
    requirements_txt = tmp_path / "requirements.txt"

    result = _run_script(script, str(missing_pipfile), str(requirements_txt), cwd=tmp_path)

    assert result.returncode != 0
    assert "Pipfile" in (result.stderr or "") or "not found" in (result.stderr or "").lower()


def test_generate_requirements_file_with_irregular_pipfile_sections(tmp_path: Path):
    script = _utils_script("generate_requirements_file.py")

    pipfile = tmp_path / "Pipfile"
    requirements_txt = tmp_path / "requirements.txt"

    # Pipfile with odd/missing sections – e.g., dev-packages only, or extra tables.
    pipfile.write_text(
        dedent(
            """
            [dev-packages]
            pytest = "*"

            [something-weird]
            foo = "bar"
            """
        ),
        encoding="utf-8",
    )

    result = _run_script(script, str(pipfile), str(requirements_txt), cwd=tmp_path)

    # The script may either succeed (ignoring odd sections) or fail with an explicit error.
    # We assert that if it succeeds, it produces some content.
    if result.returncode == 0:
        assert requirements_txt.exists()
        assert requirements_txt.read_text(encoding="utf-8").strip() != ""
    else:
        assert "Pipfile" in (result.stderr or "") or "requires" in (result.stderr or "").lower()


# ---------------------------------------------------------------------------
# Tests for utils/count_remaining_pcu_packages.py
# ---------------------------------------------------------------------------


def test_count_remaining_pcu_packages_handles_irregular_output(tmp_path: Path):
    script = _utils_script("count_remaining_pcu_packages.py")

    # Assume the script can read from a file containing pcu output or takes it via CLI.
    # Here we simulate weird/irregular output via a temp file and pass its path.
    pcu_output_file = tmp_path / "pcu_output.txt"
    pcu_output_file.write_text(
        dedent(
            """
            some header line
            1 package(s) left
            garbage-line
            3 package(s) left
            """
        ),
        encoding="utf-8",
    )

    result = _run_script(script, str(pcu_output_file), cwd=tmp_path)

    # Script should not crash on irregular lines.
    assert result.returncode == 0
    # We don't assert the exact count because the implementation details are unknown,
    # but we require that it prints *some* result.
    assert (result.stdout or "").strip() != ""


def test_count_remaining_pcu_packages_missing_input_fails(tmp_path: Path):
    script = _utils_script("count_remaining_pcu_packages.py")

    missing_file = tmp_path / "missing_pcu_output.txt"

    result = _run_script(script, str(missing_file), cwd=tmp_path)

    assert result.returncode != 0
    assert "pcu" in (result.stderr or "").lower() or "not found" in (
        result.stderr or ""
    ).lower()

```

These tests assume:
1. The new scripts live in a top-level `utils/` directory relative to the repository root.
2. Their CLIs follow a simple pattern:
   - `generate_dependencies_file.py PROJECT_PROPERTIES OUTPUT_FILE`
   - `generate_requirements_file.py PIPFILE OUTPUT_FILE`
   - `count_remaining_pcu_packages.py PCU_OUTPUT_FILE`
3. `extract_python_version_from_pipfile.py` exposes either `extract_python_version_from_pipfile(path)` or `extract_python_version(path)` and raises an exception on invalid/missing input.

If any of these assumptions differ from your actual implementation, you should:
1. Adjust `_utils_dir()` if `utils/` is located elsewhere.
2. Update the argument order in `_run_script(...)` calls to match each script’s true CLI.
3. Refine the assertions on `stdout`/`stderr` and return codes to line up with actual behavior.
4. If `extract_python_version_from_pipfile.py` exposes a differently named function or a CLI-only interface, update `_get_extract_python_func()` or replace those tests with subprocess-based tests similar to the others.
5. If the project already has common test utilities (e.g., helpers for resolving repo root or running scripts), replace `_utils_dir`, `_utils_script`, and `_run_script` with those existing helpers to stay consistent with the rest of the codebase.
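For point 4, a subprocess-based replacement could follow the sketch below. Everything here is illustrative: `run_script` is a hypothetical helper, and the stub script only mimics the assumed CLI of `extract_python_version_from_pipfile.py` (Pipfile path in, minor version on stdout, non-zero exit on missing version):

```python
import subprocess
import sys
import tempfile
import textwrap
from pathlib import Path


def run_script(script_path: str, *args: str) -> subprocess.CompletedProcess:
    """Invoke a utility script through the interpreter, treating it as a black box."""
    return subprocess.run(
        [sys.executable, script_path, *args],
        capture_output=True,
        text=True,
        check=False,
    )


def demo() -> str:
    """Exercise the subprocess pattern against a stand-in extractor script."""
    with tempfile.TemporaryDirectory() as tmp:
        # Stand-in for extract_python_version_from_pipfile.py (CLI shape is assumed).
        stub = Path(tmp) / "extract_stub.py"
        stub.write_text(
            textwrap.dedent(
                """\
                import re, sys
                text = open(sys.argv[1], encoding="utf-8").read()
                match = re.search(r'python_version\\s*=\\s*"3\\.(\\d+)"', text)
                if match is None:
                    sys.exit(1)
                print(match.group(1))
                """
            ),
            encoding="utf-8",
        )
        pipfile = Path(tmp) / "Pipfile"
        pipfile.write_text('[requires]\npython_version = "3.10"\n', encoding="utf-8")

        result = run_script(str(stub), str(pipfile))
        assert result.returncode == 0
        return result.stdout


if __name__ == "__main__":
    print(demo().strip())
```

Swapping the stub path for the real script keeps the test independent of whether the module exposes a function or only a CLI.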
</issue_to_address>

### Comment 9
<location> `utils/generate_requirements_file.py:18` </location>
<code_context>
+# pylint: disable=broad-exception-caught
+
+
+def __load_pipfile_packages_and_versions(
+    error_list: List[str],
+) -> Tuple[Dict[str, str], List[str], List[str]]:
</code_context>

<issue_to_address>
**issue (complexity):** Consider refactoring the script into smaller, clearly named functions with a focused CLI description and a main() entrypoint to make control flow and responsibilities easier to follow.

You can simplify this script without changing behavior by tightening responsibilities and naming:

1. **Clarify CLI description**

The current description is misleading noise for this tool.

```python
def parse_args():
    parser = argparse.ArgumentParser(
        description="Emit Pipfile [packages] or [dev-packages] as requirements-style lines."
    )
    parser.add_argument(
        "-d",
        "--dev",
        dest="export_dev_packages",
        action="store_true",
        default=False,
        help="output the dev-packages section instead of packages section",
    )
    return parser.parse_args()
```

2. **Split Pipfile loading into two focused helpers**

Right now `__load_pipfile_packages_and_versions` mixes version extraction, package-name extraction, and error accumulation into one function.

You can keep functionality but make the flow easier to follow by separating the concerns and returning explicit types instead of a tuple with positional meaning:

```python
def load_pipfile_versions() -> Dict[str, str]:
    versions: Dict[str, str] = {}
    for name, _, version in cast(List[List[str]], load_dependencies("Pipfile", False, [])):
        versions[name] = version
    return versions


def load_pipfile_package_names() -> Tuple[List[str], List[str]]:
    with open("Pipfile", "rt", encoding="utf-8") as f:
        config = toml.load(f)
    package_list = list(config["packages"].keys())
    dev_package_list = list(config["dev-packages"].keys())
    return package_list, dev_package_list
```

Then in `__main__` (or `main()`):

```python
try:
    pipfile_versions = load_pipfile_versions()
    package_names, dev_package_names = load_pipfile_package_names()
except Exception as exc:  # noqa: B036
    print(f"Failed to load Pipfile: {exc}")
    raise SystemExit(1)
```

This removes the `error_list` pattern and avoids partially valid data.

3. **Use simple, non-mangled names and a `main()`**

The leading double-underscore naming and top-level logic make the control flow harder to scan than necessary in a small script. You can keep behavior identical and make the flow more obvious:

```python
def load_package_exclude_list(use_dev_list: bool) -> List[str]:
    with open("project.properties", "r", encoding="utf-8") as file:
        text = file.read()
    config = configparser.RawConfigParser()
    config.read_string(f"[main]\n{text}")
    key = "PACKAGE_UPDATE_DEV_EXCLUDE_LIST" if use_dev_list else "PACKAGE_UPDATE_EXCLUDE_LIST"
    return [item.strip() for item in config.get("main", key, fallback="").split(",")]


def main() -> None:
    args = parse_args()
    exclude_list = load_package_exclude_list(args.export_dev_packages)

    try:
        versions = load_pipfile_versions()
        packages, dev_packages = load_pipfile_package_names()
    except Exception as exc:  # noqa: B036
        print(f"Failed to load Pipfile: {exc}")
        raise SystemExit(1)

    selected = dev_packages if args.export_dev_packages else packages
    for name in selected:
        if name in exclude_list:
            continue
        print(f"{name}=={versions[name]}")
    print("", flush=True)


if __name__ == "__main__":
    main()
```

These changes keep all behavior, but reduce cognitive load by:
- Removing tuple-with-error-list outputs
- Making argument parsing and Pipfile loading self-descriptive
- Making the main control flow explicit and linear.
</issue_to_address>

### Comment 10
<location> `utils/count_remaining_pcu_packages.py:53` </location>
<code_context>
+    return [i.strip() for i in ss]
+
+
+parsed_args = __handle_arguments()
+exclude_list = __load_package_exclude_list_from_properties_file(
+    parsed_args.export_dev_packages
</code_context>

<issue_to_address>
**issue (complexity):** Consider refactoring this script by moving top-level logic into a main function, extracting a line-parsing helper, clarifying CLI help text, and optionally reusing shared config-loading utilities to make the code easier to read and test.

You can reduce complexity and improve testability without changing behavior by:

### 1. Move top-level logic into a `main()` function

This removes import-time side effects and makes the script easier to skim and unit test.

```python
def main() -> None:
    parsed_args = __handle_arguments()
    exclude_list = __load_package_exclude_list_from_properties_file(
        parsed_args.export_dev_packages
    )

    with open(parsed_args.file_name, "r", encoding="utf-8") as file:
        text = file.read()

    has_extra_parameters = parsed_args.list_items

    non_excluded_packages: List[str] = []
    have_seen_start = False
    for line in text.splitlines():
        pkg = __parse_pcu_line(line, have_seen_start)
        if pkg is None:
            if line.strip().startswith("In "):
                have_seen_start = True
            continue

        name, current, target = pkg
        if name not in exclude_list:
            if has_extra_parameters:
                print(
                    f"      Package '{name}' has version '{current}' "
                    f"and can be updated to version '{target}'"
                )
            non_excluded_packages.append(name)

    if not has_extra_parameters:
        print(len(non_excluded_packages))


if __name__ == "__main__":
    main()
```

### 2. Extract a small helper for parsing `pcu` lines

Encapsulating the string munging makes the main loop much easier to follow and isolates assumptions about the `pcu` output format.

```python
from typing import Optional, Tuple

def __parse_pcu_line(
    line: str,
    have_seen_start: bool,
) -> Optional[Tuple[str, str, str]]:
    line = line.strip()
    if not have_seen_start or not line:
        return None
    if line.startswith("All dependencies match") or line.startswith("Run pcu"):
        return None

    # normalize whitespace and split
    while "  " in line:
        line = line.replace("  ", " ")
    parts = line.split(" ")
    if len(parts) < 4:
        return None  # or raise if this should never happen

    # name, current_version, "->", target_version
    return parts[0], parts[1], parts[3]
```

You can adjust the `have_seen_start` handling to match your exact control flow, but the idea is to keep the string logic out of the main loop.
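To make the format assumptions concrete, here is the same parsing logic as a stand-alone function with a couple of representative inputs; the `pcu` row layout (name, current version, `->`, target version) is an assumption taken from the script under review:

```python
from typing import Optional, Tuple


def parse_pcu_line(line: str, have_seen_start: bool) -> Optional[Tuple[str, str, str]]:
    """Stand-alone copy of the parsing sketch, for demonstration only."""
    line = line.strip()
    if not have_seen_start or not line:
        return None
    if line.startswith("All dependencies match") or line.startswith("Run pcu"):
        return None

    # Collapse runs of spaces so splitting yields clean columns.
    while "  " in line:
        line = line.replace("  ", " ")
    parts = line.split(" ")
    if len(parts) < 4:
        return None

    # name, current_version, "->", target_version
    return parts[0], parts[1], parts[3]


if __name__ == "__main__":
    # A typical updatable-package row: extra spacing is normalized away.
    print(parse_pcu_line("requests   2.31.0  ->  2.32.3", True))
    # Lines before the start marker are ignored.
    print(parse_pcu_line("requests 2.31.0 -> 2.32.3", False))
```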

### 3. Clarify argument help text

`--dev` and `--list` currently have identical help strings, which is confusing. You can keep behavior while improving clarity:

```python
def __handle_arguments():
    parser = argparse.ArgumentParser(
        description="Count or list updatable packages from a 'pcu' output file."
    )
    parser.add_argument(
        "-d",
        "--dev",
        dest="export_dev_packages",
        action="store_true",
        default=False,
        help="use dev package exclude list instead of the main package exclude list",
    )
    parser.add_argument(
        "-l",
        "--list",
        dest="list_items",
        action="store_true",
        default=False,
        help="list each updatable package instead of printing only the count",
    )
    parser.add_argument("file_name", action="store", help="pcu output file to process")
    return parser.parse_args()
```

### 4. (If applicable) Deduplicate config parsing

If another module (e.g., `generate_requirements_file.py`) already has a helper that reads `project.properties` via `configparser`, consider centralizing that logic:

```python
# in a shared module, e.g., config_utils.py
def load_properties_section(path: str = "project.properties") -> configparser.SectionProxy:
    with open(path, "r", encoding="utf-8") as properties_file:
        properties_text = properties_file.read()
    config = configparser.RawConfigParser()
    config.read_string(f"[main]\n{properties_text}")
    return config["main"]
```

Then:

```python
from config_utils import load_properties_section

def __load_package_exclude_list_from_properties_file(use_dev_list: bool) -> List[str]:
    section = load_properties_section()
    key = "PACKAGE_UPDATE_DEV_EXCLUDE_LIST" if use_dev_list else "PACKAGE_UPDATE_EXCLUDE_LIST"
    return [i.strip() for i in section.get(key, fallback="").split(",") if i.strip()]
```

These changes preserve behavior but significantly reduce cognitive load and make the script easier to maintain.
</issue_to_address>


return package_version_map


def __load_precommit_packages_and_versions_translate_ouptut(

issue (typo): Function name contains a typo that harms readability and discoverability.

The function name __load_precommit_packages_and_versions_translate_ouptut has a typo in ouptut. Please rename it to ..._translate_output and update its usages for clarity and easier searchability.

Suggested implementation:

def __load_precommit_packages_and_versions_translate_output(
    data: str, version_list: List[List[str]]
) -> None:

Search the codebase for all usages of __load_precommit_packages_and_versions_translate_ouptut and update them to __load_precommit_packages_and_versions_translate_output. No other behavior changes are required.

# pylint: disable=broad-exception-caught


def __abc():

suggestion: Top-level function name __abc is non-descriptive and makes navigation harder.

Please rename this driver function to something intent-revealing (for example, __generate_combined_dependencies) and update its invocation under if __name__ == "__main__": accordingly to improve readability and maintainability.

Suggested implementation:

def __generate_combined_dependencies():
    error_messages: List[str] = []

You’ll also need to update the function invocation under if __name__ == "__main__":. Look for a call like:
__abc()
and rename it to:
__generate_combined_dependencies()
If there are any other internal references to __abc (e.g., tests or helper wrappers), they should be updated to __generate_combined_dependencies as well.

Comment on lines +1 to +5
"""
Module to generate a targetted requirements.txt file from a Pipfile. Only the [packages] sectopmn
or [dev-packages] section is exported, and only the package names listed in the file are exported.

Other tools will export all packages (listed and discovered) to a requirements.txt file.

issue (typo): Module docstring has typos and could be tightened for clarity.

Please correct `targetted` → `targeted` and `sectopmn` → `section`, and consider phrasing this as "targeted requirements file" and "[packages] or [dev-packages] section" for clearer readability.

Comment on lines +44 to +45
parser = argparse.ArgumentParser(
description="Analyze PyTest tests to determine if extra coverage is present."

issue: Argument parser description is misleading and appears to be copy-pasted from an unrelated tool.

This description refers to analyzing PyTest coverage, but this script exports sections from a Pipfile. Please update it to accurately describe the tool’s behavior, e.g. "Generate a requirements file for either [packages] or [dev-packages] from Pipfile," to avoid confusing users.

Comment on lines +11 to +12
parser = argparse.ArgumentParser(
description="Analyze PyTest tests to determine if extra coverage is present."

issue: CLI help texts are misleading for this script’s purpose.

The parser description and --list help still reference dev-packages/coverage analysis, which doesn’t match this script’s behavior (counting/listing updatable packages from pcu output). Please update both to describe the actual behavior, e.g. “Count or list packages that can be updated from pcu output,” and adjust the --list help accordingly.

Comment on lines +13 to +16
requires_index = line_in_file.index("[requires]")

PYTHON_VERSION_REGEX = r"^python_version\s=\s\"3\.(\d+)\"$"
version_regex_match = re.match(

suggestion: Parsing the Pipfile by line position is fragile; consider using a TOML parser instead.

This relies on python_version being the very next line after [requires] in a fixed format, so comments, blank lines, or reordering will break it. Instead, load the Pipfile with toml and read config['requires']['python_version'] so it’s resilient to formatting changes.

@jackdewinter jackdewinter merged commit 14141cb into main Jan 12, 2026
18 checks passed
@codecov

codecov bot commented Jan 12, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 100.00%. Comparing base (c3bfd52) to head (b0e62dd).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff            @@
##              main      #312   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files           10        10           
  Lines          657       657           
  Branches        89        89           
=========================================
  Hits           657       657           

☔ View full report in Codecov by Sentry.
