
Bump python-multipart from 0.0.17 to 0.0.18 in the pip group across 1 directory #48

Open · wants to merge 1 commit into main

Conversation

dependabot[bot] (Contributor) commented on behalf of github · Dec 2, 2024

Bumps the pip group with 1 update in the / directory: python-multipart.

Updates python-multipart from 0.0.17 to 0.0.18

Release notes

Sourced from python-multipart's releases.

Version 0.0.18

What's Changed


Full Changelog: Kludex/python-multipart@0.0.17...0.0.18

Changelog

Sourced from python-multipart's changelog.

0.0.18 (2024-11-28)

  • Hard break if data is found after the last boundary on MultipartParser #189.
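For context, here is a minimal sketch of the parser surface this fix touches. It is not the upstream test: the payload is illustrative, and the `python_multipart.multipart` import path is an assumption based on the package's recommended import name (the old `import multipart` now emits a deprecation warning, visible in the CI log further down).

```python
# Hedged sketch: a multipart body with stray bytes after the closing
# boundary. Per the 0.0.18 changelog, MultipartParser now hard-breaks at
# the final boundary instead of continuing to scan the trailing data.
from python_multipart.multipart import MultipartParser  # import path assumed

boundary = b"frontier"
body = (
    b"--frontier\r\n"
    b'Content-Disposition: form-data; name="field"\r\n'
    b"\r\n"
    b"value\r\n"
    b"--frontier--\r\n"
    b"junk after the last boundary"  # the data the fix hard-breaks on
)

parts = []
callbacks = {
    "on_part_data": lambda data, start, end: parts.append(data[start:end]),
}
parser = MultipartParser(boundary, callbacks)
parser.write(body)
print(parts)  # expect [b"value"], possibly split across chunks
```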
Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore <dependency name> major version will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • @dependabot ignore <dependency name> minor version will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • @dependabot ignore <dependency name> will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • @dependabot unignore <dependency name> will remove all of the ignore conditions of the specified dependency
  • @dependabot unignore <dependency name> <ignore condition> will remove the given ignore condition of the specified dependency
You can disable automated security fix PRs for this repo from the Security Alerts page.

Summary by Sourcery

Build:

  • Update python-multipart from version 0.0.17 to 0.0.18 in the pip group.

Bumps the pip group with 1 update in the / directory: [python-multipart](https://github.com/Kludex/python-multipart).


Updates `python-multipart` from 0.0.17 to 0.0.18
- [Release notes](https://github.com/Kludex/python-multipart/releases)
- [Changelog](https://github.com/Kludex/python-multipart/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Kludex/python-multipart/compare/0.0.17...0.0.18)

---
updated-dependencies:
- dependency-name: python-multipart
  dependency-type: indirect
  dependency-group: pip
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] added the dependencies (Pull requests that update a dependency file) and python (Pull requests that update Python code) labels on Dec 2, 2024

sourcery-ai bot commented Dec 2, 2024

Reviewer's Guide by Sourcery

This is a dependency update PR that bumps python-multipart from version 0.0.17 to 0.0.18. The main change in this version is a security improvement in the MultipartParser that enforces a hard break when data is found after the last boundary.

No diagrams generated as the changes look simple and do not need a visual representation.

File-Level Changes

Change: Dependency version update in package management files
Details:
  • Update python-multipart from 0.0.17 to 0.0.18
  • Incorporate security fix for MultipartParser boundary handling
Files: poetry.lock
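Since the only changed file is poetry.lock, a quick way to confirm locally that the bump took effect after `poetry install` is a stdlib check (hypothetical usage, not part of this PR):

```python
# importlib.metadata reads the installed distribution name, which differs
# from the import name (python-multipart vs. python_multipart).
from importlib.metadata import version

assert version("python-multipart") == "0.0.18", version("python-multipart")
```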

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time. You can also use
    this command to specify where the summary should be inserted.

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.


coderabbitai bot (Contributor) commented Dec 2, 2024

Important

Review skipped

Bot user detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

sourcery-ai bot left a comment

We have skipped reviewing this pull request. It seems to have been created by a bot (hey, dependabot[bot]!). We assume it knows what it's doing!


CI Failure Feedback 🧐

Action: test

Failed stage: Run tests and generate reports [❌]

Failed test name: tests/test_FilePreprocessor.py::TestFilePreprocessor::test_py_file_with_function_only

Failure summary:

The action failed due to two test failures:

  • tests/test_FilePreprocessor.py::TestFilePreprocessor::test_py_file_with_function_only failed because
    the processed text was altered when it should not have been: the test expected the text to remain
    unchanged, but four spaces of indentation were added to every line (a hedged reconstruction follows
    the logs below).

  • tests/test_UnitTestValidator.py::TestUnitValidator::test_validate_test_pass_no_coverage_increase_with_prompt
    failed because the failure-reason assertion expected the exact string "Coverage did not increase"
    but received a longer, more detailed message about the test either not increasing coverage or being
    skipped due to a problem.

  • Relevant error logs:
    1:  ##[group]Operating System
    2:  Ubuntu
    ...
    
    855:  ----------------------------- live log collection ------------------------------
    856:  INFO     httpx:_client.py:1038 HTTP Request: GET https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json "HTTP/1.1 200 OK"
    857:  collected 108 items
    858:  templated_tests/python_fastapi/test_app.py::test_root 
    859:  -------------------------------- live log call ---------------------------------
    860:  INFO     httpx:_client.py:1038 HTTP Request: GET http://testserver/ "HTTP/1.1 200 OK"
    861:  PASSED                                                                   [  0%]
    862:  tests/test_AICaller.py::TestAICaller::test_call_model_simplified PASSED  [  1%]
    863:  tests/test_AICaller.py::TestAICaller::test_call_model_with_error PASSED  [  2%]
    864:  tests/test_AICaller.py::TestAICaller::test_call_model_error_streaming PASSED [  3%]
    ...
    
    876:  tests/test_CoverageAi.py::TestCoverageAi::test_duplicate_test_file_without_output_path PASSED [ 14%]
    877:  tests/test_CoverageAi.py::TestCoverageAi::test_run_max_iterations_strict_coverage PASSED [ 15%]
    878:  tests/test_CoverageAi.py::TestCoverageAi::test_project_root_not_found PASSED [ 16%]
    879:  tests/test_CoverageAi.py::TestCoverageAi::test_run_diff_coverage PASSED  [ 17%]
    880:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_parse_coverage_report_cobertura PASSED [ 18%]
    881:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_correct_parsing_for_matching_package_and_class PASSED [ 19%]
    882:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_returns_empty_lists_and_float PASSED [ 20%]
    883:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_parse_coverage_report_unsupported_type PASSED [ 21%]
    884:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_extract_package_and_class_java_file_error PASSED [ 22%]
    885:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_extract_package_and_class_kotlin PASSED [ 23%]
    886:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_extract_package_and_class_java PASSED [ 24%]
    887:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_verify_report_update_file_not_updated PASSED [ 25%]
    888:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_verify_report_update_file_not_exist PASSED [ 25%]
    889:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_process_coverage_report PASSED [ 26%]
    890:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_parse_missed_covered_lines_jacoco_csv_key_error PASSED [ 27%]
    ...
    
    895:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_parse_missed_covered_lines_jacoco_xml PASSED [ 32%]
    896:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_parse_missed_covered_lines_kotlin_jacoco_xml PASSED [ 33%]
    897:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_get_file_extension_with_valid_file_extension PASSED [ 34%]
    898:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_get_file_extension_with_no_file_extension PASSED [ 35%]
    899:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_parse_coverage_report_lcov_with_feature_flag PASSED [ 36%]
    900:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_parse_coverage_report_cobertura_with_feature_flag PASSED [ 37%]
    901:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_parse_coverage_report_jacoco PASSED [ 37%]
    902:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_parse_coverage_report_cobertura_filename_not_found PASSED [ 38%]
    903:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_parse_coverage_report_lcov_file_read_error PASSED [ 39%]
    904:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_parse_coverage_report_cobertura_all_files PASSED [ 40%]
    905:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_parse_coverage_report_unsupported_type_with_feature_flag PASSED [ 41%]
    906:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_parse_coverage_report_jacoco_without_feature_flag PASSED [ 42%]
    907:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_parse_coverage_report_unsupported_type_without_feature_flag PASSED [ 43%]
    908:  tests/test_CoverageProcessor.py::TestCoverageProcessor::test_parse_coverage_report_lcov_without_feature_flag PASSED [ 44%]
    909:  tests/test_FilePreprocessor.py::TestFilePreprocessor::test_c_file PASSED [ 45%]
    910:  tests/test_FilePreprocessor.py::TestFilePreprocessor::test_py_file_with_function_only FAILED [ 46%]
    911:  tests/test_FilePreprocessor.py::TestFilePreprocessor::test_py_file_with_commented_class PASSED [ 47%]
    912:  tests/test_FilePreprocessor.py::TestFilePreprocessor::test_py_file_with_class PASSED [ 48%]
    913:  tests/test_FilePreprocessor.py::TestFilePreprocessor::test_py_file_with_syntax_error PASSED [ 49%]
    914:  tests/test_PromptBuilder.py::TestPromptBuilder::test_initialization_reads_file_contents PASSED [ 50%]
    915:  tests/test_PromptBuilder.py::TestPromptBuilder::test_initialization_handles_file_read_errors PASSED [ 50%]
    916:  tests/test_PromptBuilder.py::TestPromptBuilder::test_empty_included_files_section_not_in_prompt PASSED [ 51%]
    917:  tests/test_PromptBuilder.py::TestPromptBuilder::test_non_empty_included_files_section_in_prompt PASSED [ 52%]
    918:  tests/test_PromptBuilder.py::TestPromptBuilder::test_empty_additional_instructions_section_not_in_prompt PASSED [ 53%]
    919:  tests/test_PromptBuilder.py::TestPromptBuilder::test_empty_failed_test_runs_section_not_in_prompt PASSED [ 54%]
    920:  tests/test_PromptBuilder.py::TestPromptBuilder::test_non_empty_additional_instructions_section_in_prompt PASSED [ 55%]
    921:  tests/test_PromptBuilder.py::TestPromptBuilder::test_non_empty_failed_test_runs_section_in_prompt PASSED [ 56%]
    922:  tests/test_PromptBuilder.py::TestPromptBuilder::test_build_prompt_custom_handles_rendering_exception 
    923:  -------------------------------- live log call ---------------------------------
    924:  ERROR    root:PromptBuilder.py:200 Could not find settings for prompt file: custom_file
    925:  PASSED                                                                   [ 57%]
    926:  tests/test_PromptBuilder.py::TestPromptBuilder::test_build_prompt_handles_rendering_exception 
    927:  -------------------------------- live log call ---------------------------------
    928:  ERROR    root:PromptBuilder.py:158 Error rendering prompt: Rendering error
    929:  PASSED                                                                   [ 58%]
    930:  tests/test_PromptBuilder.py::TestPromptBuilderEndToEnd::test_custom_analyze_test_run_failure PASSED [ 59%]
    931:  tests/test_PromptBuilder.py::TestPromptBuilderEndToEnd::test_build_prompt_custom_missing_settings 
    932:  -------------------------------- live log call ---------------------------------
    933:  ERROR    root:PromptBuilder.py:205 Error rendering prompt: TestPromptBuilderEndToEnd.test_build_prompt_custom_missing_settings.<locals>.mock_get_settings.<locals>.<lambda>() takes 1 positional argument but 2 were given
    934:  PASSED                                                                   [ 60%]
    935:  tests/test_ReportGenerator.py::TestReportGeneration::test_generate_report PASSED [ 61%]
    936:  tests/test_ReportGenerator.py::TestReportGeneration::test_generate_partial_diff_basic PASSED [ 62%]
    937:  tests/test_Runner.py::TestRunner::test_run_command_success PASSED        [ 62%]
    938:  tests/test_Runner.py::TestRunner::test_run_command_with_cwd PASSED       [ 63%]
    939:  tests/test_Runner.py::TestRunner::test_run_command_failure PASSED        [ 64%]
    940:  tests/test_UnitTestDB.py::TestUnitTestDB::test_insert_attempt PASSED     [ 65%]
    941:  tests/test_UnitTestDB.py::TestUnitTestDB::test_dump_to_report PASSED     [ 66%]
    942:  tests/test_UnitTestDB.py::TestUnitTestDB::test_dump_to_report_cli_custom_args PASSED [ 67%]
    943:  tests/test_UnitTestDB.py::TestUnitTestDB::test_dump_to_report_defaults PASSED [ 68%]
    944:  tests/test_UnitTestGenerator.py::TestUnitTestGenerator::test_get_included_files_mixed_paths PASSED [ 69%]
    945:  tests/test_UnitTestGenerator.py::TestUnitTestGenerator::test_get_included_files_valid_paths PASSED [ 70%]
    946:  tests/test_UnitTestGenerator.py::TestUnitTestGenerator::test_get_code_language_no_extension PASSED [ 71%]
    947:  tests/test_UnitTestGenerator.py::TestUnitTestGenerator::test_build_prompt_with_failed_tests PASSED [ 72%]
    948:  tests/test_UnitTestGenerator.py::TestUnitTestGenerator::test_generate_tests_invalid_yaml PASSED [ 73%]
    949:  tests/test_UnitTestValidator.py::TestUnitValidator::test_extract_error_message_exception_handling 
    950:  -------------------------------- live log call ---------------------------------
    951:  ERROR    root:UnitTestValidator.py:707 Error extracting error message: 'UnitTestValidator' object has no attribute 'prompt_builder'
    952:  PASSED                                                                   [ 74%]
    953:  tests/test_UnitTestValidator.py::TestUnitValidator::test_run_coverage_with_report_coverage_flag PASSED [ 75%]
    954:  tests/test_UnitTestValidator.py::TestUnitValidator::test_extract_error_message_with_prompt_builder PASSED [ 75%]
    955:  tests/test_UnitTestValidator.py::TestUnitValidator::test_validate_test_pass_no_coverage_increase_with_prompt FAILED [ 76%]
    956:  tests/test_UnitTestValidator.py::TestUnitValidator::test_initial_test_suite_analysis_with_prompt_builder PASSED [ 77%]
    957:  tests/test_UnitTestValidator.py::TestUnitValidator::test_post_process_coverage_report_with_report_coverage_flag PASSED [ 78%]
    958:  tests/test_UnitTestValidator.py::TestUnitValidator::test_post_process_coverage_report_with_diff_coverage PASSED [ 79%]
    959:  tests/test_UnitTestValidator.py::TestUnitValidator::test_post_process_coverage_report_without_flags PASSED [ 80%]
    960:  tests/test_UnitTestValidator.py::TestUnitValidator::test_generate_diff_coverage_report PASSED [ 81%]
    961:  tests/test_load_yaml.py::TestLoadYaml::test_load_valid_yaml PASSED       [ 82%]
    962:  tests/test_load_yaml.py::TestLoadYaml::test_load_invalid_yaml1 
    963:  -------------------------------- live log call ---------------------------------
    964:  INFO     root:utils.py:36 Failed to parse AI prediction: mapping values are not allowed here
    965:  in "<unicode string>", line 12, column 37:
    966:  relevant line: user="""PR Info: aaa
    967:  ^. Attempting to fix YAML formatting.
    968:  INFO     root:utils.py:82 Successfully parsed AI prediction after adding |-
    969:  PASSED                                                                   [ 83%]
    970:  tests/test_load_yaml.py::TestLoadYaml::test_load_invalid_yaml2 
    971:  -------------------------------- live log call ---------------------------------
    972:  INFO     root:utils.py:36 Failed to parse AI prediction: mapping values are not allowed here
    ...
    
    984:  INFO     root:utils.py:119 Successfully parsed AI prediction after removing 1 lines
    985:  PASSED                                                                   [ 86%]
    986:  tests/test_load_yaml.py::TestLoadYaml::test_try_fix_yaml_llama3_8b 
    987:  -------------------------------- live log call ---------------------------------
    988:  INFO     root:utils.py:119 Successfully parsed AI prediction after removing 2 lines
    989:  PASSED                                                                   [ 87%]
    990:  tests/test_load_yaml.py::TestLoadYaml::test_invalid_yaml_wont_parse 
    991:  -------------------------------- live log call ---------------------------------
    992:  INFO     root:utils.py:36 Failed to parse AI prediction: mapping values are not allowed here
    993:  in "<unicode string>", line 3, column 9:
    994:  language: python
    995:  ^. Attempting to fix YAML formatting.
    996:  INFO     root:utils.py:41 Failed to parse AI prediction after fixing YAML formatting.
    997:  PASSED                                                                   [ 87%]
    998:  tests/test_load_yaml.py::TestLoadYaml::test_load_yaml_second_fallback_failure 
    999:  -------------------------------- live log call ---------------------------------
    1000:  INFO     root:utils.py:36 Failed to parse AI prediction: while parsing a flow sequence
    1001:  in "<unicode string>", line 2, column 15:
    1002:  invalid_yaml: [unclosed_list
    1003:  ^
    1004:  expected ',' or ']', but got '<stream end>'
    1005:  in "<unicode string>", line 3, column 1:
    1006:  ^. Attempting to fix YAML formatting.
    1007:  INFO     root:utils.py:145 Successfully parsed AI prediction when using the language: key as a starting point
    1008:  INFO     root:utils.py:41 Failed to parse AI prediction after fixing YAML formatting.
    ...
    
    1014:  tests/test_main.py::TestMain::test_parse_args PASSED                     [ 93%]
    1015:  tests/test_main.py::TestMain::test_main_source_file_not_found PASSED     [ 94%]
    1016:  tests/test_main.py::TestMain::test_main_test_file_not_found PASSED       [ 95%]
    1017:  tests/test_main.py::TestMain::test_main_calls_agent_run PASSED           [ 96%]
    1018:  tests/test_version.py::TestGetVersion::test_get_version_happy_path PASSED [ 97%]
    1019:  tests/test_version.py::TestGetVersion::test_get_version_file_missing PASSED [ 98%]
    1020:  tests/test_version.py::TestGetVersion::test_get_version_empty_or_whitespace_file PASSED [ 99%]
    1021:  tests/test_version.py::TestGetVersion::test_get_version_frozen_application PASSED [100%]
    1022:  =================================== FAILURES ===================================
    ...
    
    1027:  tmp.write(b"def function():\n    pass\n")
    1028:  tmp.close()
    1029:  preprocessor = FilePreprocessor(tmp.name)
    1030:  input_text = "Lorem ipsum dolor sit amet,\nconsectetur adipiscing elit,\nsed do eiusmod tempor incididunt."
    1031:  processed_text = preprocessor.process_file(input_text)
    1032:  >           assert (
    1033:  processed_text == input_text
    1034:  ), "Python file without class should not alter the text."
    1035:  E           AssertionError: Python file without class should not alter the text.
    ...
    
    1039:  E             +     Lorem ipsum dolor sit amet,
    1040:  E             ? ++++
    1041:  E             - consectetur adipiscing elit,
    1042:  E             +     consectetur adipiscing elit,
    1043:  E             ? ++++
    1044:  E             - sed do eiusmod tempor incididunt.
    1045:  E             +     sed do eiusmod tempor incididunt.
    1046:  E             ? ++++
    1047:  tests/test_FilePreprocessor.py:26: AssertionError
    ...
    
    1074:  with patch("builtins.open", mock_file), patch.object(
    1075:  Runner, "run_command", return_value=("", "", 0, datetime.datetime.now())
    1076:  ), patch.object(
    1077:  CoverageProcessor, "process_coverage_report", return_value=([], [], 0.4)
    1078:  ):
    1079:  result = generator.validate_test(test_to_validate)
    1080:  assert result["status"] == "FAIL"
    1081:  >               assert result["reason"] == "Coverage did not increase"
    1082:  E               AssertionError: assert 'Coverage did... some problem' == 'Coverage did not increase'
    1083:  E                 
    1084:  E                 - Coverage did not increase
    1085:  E                 + Coverage did not increase. Maybe the test did run but did not increase coverage, or maybe the test execution was skipped due to some problem
    1086:  tests/test_UnitTestValidator.py:137: AssertionError
    1087:  ----------------------------- Captured stderr call -----------------------------
    1088:  2024-12-02 21:50:25,685 - coverage_ai.UnitTestValidator - INFO - Running test with the following command: "pytest"
    1089:  2024-12-02 21:50:25,685 - coverage_ai.UnitTestValidator - INFO - Test did not increase coverage. Rolling back.
    1090:  =============================== warnings summary ===============================
    1091:  ../../../.cache/pypoetry/virtualenvs/coverage-ai-nWpvkqak-py3.12/lib/python3.12/site-packages/starlette/formparsers.py:12
    1092:  /home/runner/.cache/pypoetry/virtualenvs/coverage-ai-nWpvkqak-py3.12/lib/python3.12/site-packages/starlette/formparsers.py:12: PendingDeprecationWarning: Please use `import python_multipart` instead.
    1093:  import multipart
    1094:  ../../../.cache/pypoetry/virtualenvs/coverage-ai-nWpvkqak-py3.12/lib/python3.12/site-packages/pydantic/_internal/_config.py:291
    1095:  /home/runner/.cache/pypoetry/virtualenvs/coverage-ai-nWpvkqak-py3.12/lib/python3.12/site-packages/pydantic/_internal/_config.py:291: PydanticDeprecatedSince20: Support for class-based `config` is deprecated, use ConfigDict instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.9/migration/
    ...
    
    1130:  coverage_ai/settings/token_handling.py      44     32    27%
    1131:  coverage_ai/utils.py                       141     17    88%
    1132:  coverage_ai/version.py                      11      0   100%
    1133:  ------------------------------------------------------------
    1134:  TOTAL                                     1706    584    66%
    1135:  Coverage XML written to file cobertura.xml
    1136:  Required test coverage of 65% reached. Total coverage: 65.77%
    1137:  =========================== short test summary info ============================
    1138:  FAILED tests/test_FilePreprocessor.py::TestFilePreprocessor::test_py_file_with_function_only - AssertionError: Python file without class should not alter the text.
    ...
    
    1141:  +     Lorem ipsum dolor sit amet,
    1142:  ? ++++
    1143:  - consectetur adipiscing elit,
    1144:  +     consectetur adipiscing elit,
    1145:  ? ++++
    1146:  - sed do eiusmod tempor incididunt.
    1147:  +     sed do eiusmod tempor incididunt.
    1148:  ? ++++
    1149:  FAILED tests/test_UnitTestValidator.py::TestUnitValidator::test_validate_test_pass_no_coverage_increase_with_prompt - AssertionError: assert 'Coverage did... some problem' == 'Coverage did not increase'
    1150:  - Coverage did not increase
    1151:  + Coverage did not increase. Maybe the test did run but did not increase coverage, or maybe the test execution was skipped due to some problem
    1152:  ================== 2 failed, 106 passed, 6 warnings in 14.44s ==================
    1153:  make: *** [Makefile:8: test] Error 1
    1154:  ##[error]Process completed with exit code 2.
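Neither failure appears related to the dependency bump itself. As a hedged reconstruction from the traceback excerpts above (the import path and the FilePreprocessor constructor usage are assumptions inferred from the log, not copied from the repo), the first failure boils down to the preprocessor indenting text it should pass through unchanged:

```python
# Reconstruction of tests/test_FilePreprocessor.py::test_py_file_with_function_only
# based on the log above; names outside the traceback are assumptions.
import tempfile

from coverage_ai.FilePreprocessor import FilePreprocessor  # path assumed

tmp = tempfile.NamedTemporaryFile(suffix=".py", delete=False)
tmp.write(b"def function():\n    pass\n")
tmp.close()

preprocessor = FilePreprocessor(tmp.name)
input_text = (
    "Lorem ipsum dolor sit amet,\n"
    "consectetur adipiscing elit,\n"
    "sed do eiusmod tempor incididunt."
)
processed_text = preprocessor.process_file(input_text)

# The "? ++++" markers in the pytest diff mean four spaces were prepended
# to each line. The second failure is simpler: an exact-match assertion on
# "Coverage did not increase" hitting a reworded, longer message.
assert processed_text == input_text, "Python file without class should not alter the text."
```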
    

    ✨ CI feedback usage guide:

    The CI feedback tool (/checks) automatically triggers when a PR has a failed check.
    The tool analyzes the failed checks and provides several feedbacks:

    • Failed stage
    • Failed test name
    • Failure summary
    • Relevant error logs

    In addition to being automatically triggered, the tool can also be invoked manually by commenting on a PR:

    /checks "https://github.com/{repo_name}/actions/runs/{run_number}/job/{job_number}"
    

    where {repo_name} is the name of the repository, {run_number} is the run number of the failed check, and {job_number} is the job number of the failed check.

    Configuration options

    • enable_auto_checks_feedback - if set to true, the tool will automatically provide feedback when a check is failed. Default is true.
    • excluded_checks_list - a list of checks to exclude from the feedback, for example: ["check1", "check2"]. Default is an empty list.
    • enable_help_text - if set to true, the tool will provide a help message with the feedback. Default is true.
    • persistent_comment - if set to true, the tool will overwrite a previous checks comment with the new feedback. Default is true.
    • final_update_message - if persistent_comment is true and updating a previous checks message, the tool will also create a new message: "Persistent checks updated to latest commit". Default is true.

    See more information about the checks tool in the docs.

    Labels: dependencies (Pull requests that update a dependency file), python (Pull requests that update Python code)
    Projects: None yet
    0 participants