⬆️🪝 Update pre-commit hook astral-sh/ruff-pre-commit to v0.8.0 (#757)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [astral-sh/ruff-pre-commit](https://redirect.github.com/astral-sh/ruff-pre-commit) | repository | minor | `v0.7.4` -> `v0.8.0` |

Note: The `pre-commit` manager in Renovate is not supported by the `pre-commit` maintainers or community. Please do not report any problems there; instead, [create a Discussion in the Renovate repository](https://redirect.github.com/renovatebot/renovate/discussions/new) if you have any questions.

---

### Release Notes

<details>
<summary>astral-sh/ruff-pre-commit (astral-sh/ruff-pre-commit)</summary>

### [`v0.8.0`](https://redirect.github.com/astral-sh/ruff-pre-commit/releases/tag/v0.8.0)

[Compare Source](https://redirect.github.com/astral-sh/ruff-pre-commit/compare/v0.7.4...v0.8.0)

See: https://github.com/astral-sh/ruff/releases/tag/0.8.0

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "every weekend" (UTC), Automerge - At
any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/cda-tum/mqt-core).

burgholzer authored Nov 26, 2024
2 parents 520b2d1 + 6162c24 commit 66a48ca
Showing 14 changed files with 500 additions and 445 deletions.
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -59,7 +59,7 @@ repos:

# Python linting using ruff
  - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.7.4
+    rev: v0.8.0
hooks:
- id: ruff
args: ["--fix", "--show-fixes"]
6 changes: 5 additions & 1 deletion docs/conf.py
@@ -112,7 +112,11 @@ class CDAStyle(UnsrtStyle):
"""Custom style for including PDF links."""

def format_url(self, _e: Entry) -> HRef: # noqa: PLR6301
-        """Format URL field as a link to the PDF."""
+        """Format URL field as a link to the PDF.
+
+        Returns:
+            The formatted URL field.
+        """
url = field("url", raw=True)
return href()[url, "[PDF]"]

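The docstring changes throughout this PR add `Returns:` sections so that pydocstyle's documentation rules pass under `select = ["ALL"]`. A minimal sketch of the convention (the function is illustrative, not the repository's code):

```python
def format_label(url: str) -> str:
    """Format a URL as a bracketed PDF label.

    Returns:
        The URL followed by a "[PDF]" suffix.
    """
    return f"{url} [PDF]"
```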
58 changes: 8 additions & 50 deletions pyproject.toml
@@ -173,58 +173,18 @@ preview = true
unsafe-fixes = true

[tool.ruff.lint]
-extend-select = [
-  "A", # flake8-builtins
-  "ANN", # flake8-annotations
-  "ARG", # flake8-unused-arguments
-  "ASYNC", # flake8-async
-  "B", "B904", # flake8-bugbear
-  "C4", # flake8-comprehensions
-  "D", # pydocstyle
-  "EM", # flake8-errmsg
-  "EXE", # flake8-executable
-  "FA", # flake8-future-annotations
-  "FLY", # flynt
-  "FURB", # refurb
-  "I", # isort
-  "ICN", # flake8-import-conventions
-  "ISC", # flake8-implicit-str-concat
-  "LOG", # flake8-logging-format
-  "N", # flake8-naming
-  "NPY", # numpy
-  "PD", # pandas-vet
-  "PERF", # perflint
-  "PGH", # pygrep-hooks
-  "PIE", # flake8-pie
-  "PL", # pylint
-  "PT", # flake8-pytest-style
-  "PTH", # flake8-use-pathlib
-  "PYI", # flake8-pyi
-  "Q", # flake8-quotes
-  "RET", # flake8-return
-  "RSE", # flake8-raise
-  "RUF", # Ruff-specific
-  "S", # flake8-bandit
-  "SLF", # flake8-self
-  "SLOT", # flake8-slots
-  "SIM", # flake8-simplify
-  "T20", # flake8-print
-  "TCH", # flake8-type-checking
-  "TID251", # flake8-tidy-imports.banned-api
-  "TRY", # tryceratops
-  "UP", # pyupgrade
-  "YTT", # flake8-2020
-]
+select = ["ALL"]
ignore = [
"ANN101", # Missing type annotation for `self` in method
"ANN102", # Missing type annotation for `cls` in classmethod
"C90", # <...> too complex
"COM812", # Conflicts with formatter
"CPY001", # Missing copyright notice at top of file
"ISC001", # Conflicts with formatter
"PLR09", # Too many <...>
"PLR2004", # Magic value used in comparison
"PLC0415", # Import should be at top of file
"PT004", # Incorrect, just usefixtures instead.
"S101", # Use of assert detected
"S404", # `subprocess` module is possibly insecure
"TID252" # Prefer absolute imports over relative imports from parent modules
]
typing-modules = ["mqt.core._compat.typing"]
isort.required-imports = ["from __future__ import annotations"]
@@ -235,15 +195,13 @@ isort.required-imports = ["from __future__ import annotations"]
"typing.Mapping".msg = "Use collections.abc.Mapping instead."
"typing.Sequence".msg = "Use collections.abc.Sequence instead."
"typing.Set".msg = "Use collections.abc.Set instead."
-"typing.Self".msg = "Use mqt.core._compat.typing.Self instead."
-"typing_extensions.Self".msg = "Use mqt.core._compat.typing.Self instead."
"typing.assert_never".msg = "Use mqt.core._compat.typing.assert_never instead."

[tool.ruff.lint.per-file-ignores]
-"test/python/**" = ["T20", "ANN"]
-"docs/**" = ["T20"]
+"test/python/**" = ["T20", "INP001"]
+"docs/**" = ["T20", "INP001"]
 "noxfile.py" = ["T20", "TID251"]
-"*.pyi" = ["D418", "PYI021"] # pydocstyle
+"*.pyi" = ["D418", "PYI021", "DOC202"] # pydocstyle
"*.ipynb" = [
"D", # pydocstyle
"E402", # Allow imports to appear anywhere in Jupyter notebooks
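The `flake8-tidy-imports` banned-api entries above steer code away from the deprecated `typing` aliases toward `collections.abc`. A small sketch of the preferred style (the function itself is hypothetical, only the import convention comes from the config):

```python
from __future__ import annotations

# Preferred over typing.Mapping / typing.Sequence, per the banned-api config.
from collections.abc import Mapping, Sequence


def count_items(data: Mapping[str, Sequence[int]]) -> int:
    # Sum the lengths of all sequences stored in the mapping.
    return sum(len(v) for v in data.values())
```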
9 changes: 7 additions & 2 deletions src/mqt/core/__init__.py
@@ -21,13 +21,18 @@ def load(input_circuit: CircuitInputType) -> QuantumComputation:
"""Load a quantum circuit from any supported format as a :class:`~mqt.core.ir.QuantumComputation`.
Args:
-        input_circuit: The input circuit to translate to a :class:`~mqt.core.ir.QuantumComputation`. This can be a :class:`~mqt.core.ir.QuantumComputation` itself, a file name to any of the supported file formats, an OpenQASM (2.0 or 3.0) string, or a Qiskit :class:`~qiskit.circuit.QuantumCircuit`.
+        input_circuit: The input circuit to translate to a :class:`~mqt.core.ir.QuantumComputation`. This can be
+            - a :class:`~mqt.core.ir.QuantumComputation` itself,
+            - a file name to any of the supported file formats,
+            - an OpenQASM (2.0 or 3.0) string, or
+            - a Qiskit :class:`~qiskit.circuit.QuantumCircuit`.
Returns:
The :class:`~mqt.core.ir.QuantumComputation`.
Raises:
-        ValueError: If the input circuit is a Qiskit :class:`~qiskit.circuit.QuantumCircuit` but the `qiskit` extra is not installed.
+        ValueError: If the input circuit is a Qiskit :class:`~qiskit.circuit.QuantumCircuit`,
+            but the `qiskit` extra is not installed.
FileNotFoundError: If the input circuit is a file name and the file does not exist.
"""
if isinstance(input_circuit, QuantumComputation):
7 changes: 3 additions & 4 deletions src/mqt/core/_compat/typing.py
@@ -5,18 +5,17 @@
from typing import TYPE_CHECKING

 if sys.version_info >= (3, 11):
-    from typing import Self, assert_never
+    from typing import assert_never
 elif TYPE_CHECKING:
-    from typing_extensions import Self, assert_never
+    from typing_extensions import assert_never
 else:
-    Self = object

def assert_never(_: object) -> None:
msg = "Expected code to be unreachable"
raise AssertionError(msg)


-__all__ = ["Self", "assert_never"]
+__all__ = ["assert_never"]


def __dir__() -> list[str]:
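After this change, the compatibility shim only provides `assert_never`, which enables exhaustiveness checks on Pythons older than 3.11. A hedged sketch of how such a shim is typically consumed (the `handle` function is illustrative, not from the repository):

```python
import sys
from typing import TYPE_CHECKING

if sys.version_info >= (3, 11):
    from typing import assert_never
elif TYPE_CHECKING:
    from typing_extensions import assert_never
else:

    def assert_never(_: object) -> None:
        # Runtime fallback mirroring typing.assert_never's behavior.
        msg = "Expected code to be unreachable"
        raise AssertionError(msg)


def handle(kind: str) -> str:
    # Exhaustively dispatch on a small closed set of values;
    # reaching assert_never means a case was forgotten.
    if kind == "qasm":
        return "OpenQASM"
    if kind == "qiskit":
        return "Qiskit"
    assert_never(kind)  # type: ignore[arg-type]
```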
75 changes: 52 additions & 23 deletions src/mqt/core/dd/evaluation.py
@@ -31,7 +31,11 @@ class _BColors:


def __flatten_dict(d: dict[Any, Any], parent_key: str = "", sep: str = ".") -> dict[str, Any]:
-    """Flatten a nested dictionary. Every value only has one key which is the path to the value."""
+    """Flatten a nested dictionary. Every value only has one key which is the path to the value.
+
+    Returns:
+        A dictionary with the flattened keys and the values.
+    """
items = {}
for key, value in d.items():
new_key = f"{parent_key}{sep}{key}" if parent_key else key
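The body of `__flatten_dict` is truncated here; a reconstruction of what such a recursive flattener typically looks like (a sketch under the assumption of standard dotted-path flattening, not the repository's exact code):

```python
from typing import Any


def flatten_dict(d: dict[Any, Any], parent_key: str = "", sep: str = ".") -> dict[str, Any]:
    """Flatten a nested dictionary into dotted-path keys."""
    items: dict[str, Any] = {}
    for key, value in d.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            # Recurse into nested dictionaries, extending the key path.
            items.update(flatten_dict(value, new_key, sep))
        else:
            items[new_key] = value
    return items
```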
@@ -53,7 +57,14 @@ class _BenchmarkDict(TypedDict, total=False):


def __post_processing(key: str) -> _BenchmarkDict:
-    """Postprocess the key of a flattened dictionary to get the metrics for the DataFrame columns."""
+    """Postprocess the key of a flattened dictionary to get the metrics for the DataFrame columns.
+
+    Returns:
+        A dictionary containing the algorithm, task, number of qubits, component, and metric.
+
+    Raises:
+        ValueError: If the key is missing the algorithm, task, number of qubits, or metric.
+    """
metrics_divided = key.split(".")
result_metrics = _BenchmarkDict()
if len(metrics_divided) < 4:
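`__post_processing` splits a dotted key of the form `algorithm.task.n[.component...].metric` into named fields; the rest of its body is hidden here. A hypothetical sketch of that parsing (field names follow the docstring above; the helper itself is illustrative):

```python
def parse_benchmark_key(key: str) -> dict[str, str]:
    # Split "algo.task.n[.component...].metric" into named fields.
    parts = key.split(".")
    if len(parts) < 4:
        msg = f"Benchmark key '{key}' is missing required fields."
        raise ValueError(msg)
    return {
        "algo": parts[0],
        "task": parts[1],
        "n": parts[2],
        # Everything between the qubit count and the metric is the component.
        "component": ".".join(parts[3:-1]),
        "metric": parts[-1],
    }
```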
@@ -101,7 +112,11 @@ class _DataDict(TypedDict):


def __aggregate(baseline_filepath: str | PathLike[str], feature_filepath: str | PathLike[str]) -> pd.DataFrame:
-    """Aggregate the data from the baseline and feature json files into one DataFrame for visualization."""
+    """Aggregate the data from the baseline and feature json files into one DataFrame for visualization.
+
+    Returns:
+        A DataFrame containing the aggregated data.
+    """
base_path = Path(baseline_filepath)
with base_path.open(mode="r", encoding="utf-8") as f:
d = json.load(f)
@@ -160,7 +175,7 @@ def __aggregate(baseline_filepath: str | PathLike[str], feature_filepath: str |
n=result_metrics["n"],
component=result_metrics["component"],
metric=result_metrics["metric"],
-            )
+            ),
)

df_all = pd.DataFrame(df_all_entries)
@@ -170,7 +185,12 @@ def __aggregate(baseline_filepath: str | PathLike[str], feature_filepath: str |


def __print_results(
-    df: pd.DataFrame, sort_indices: list[str], factor: float, no_split: bool, only_changed: bool
+    *,
+    df: pd.DataFrame,
+    sort_indices: list[str],
+    factor: float,
+    no_split: bool,
+    only_changed: bool,
) -> None:
"""Print the results in a nice table."""
# after significantly smaller than before
@@ -202,6 +222,7 @@
def compare(
baseline_filepath: str | PathLike[str],
feature_filepath: str | PathLike[str],
+    *,
factor: float = 0.1,
sort: str = "ratio",
dd: bool = False,
@@ -220,15 +241,14 @@
sort: Sort the table by this column. Valid options are "ratio" and "algorithm".
dd: Whether to show the detailed DD benchmark results.
only_changed: Whether to only show results that changed significantly.
-        no_split: Whether to merge all results together in one table or to separate the results into benchmarks that improved, stayed the same, or worsened.
+        no_split: Whether to merge all results together in one table or to separate the results
+            into benchmarks that improved, stayed the same, or worsened.
algorithm: Only show results for this algorithm.
task: Only show results for this task.
num_qubits: Only show results for this number of qubits. Can only be used if algorithm is also specified.
Raises:
ValueError: If factor is negative or sort is invalid or if num_qubits is specified while algorithm is not.
FileNotFoundError: If the baseline_filepath argument or the feature_filepath argument does not point to a valid file.
json.JSONDecodeError: If the baseline_filepath argument or the feature_filepath argument points to a file that is not a valid JSON file.
"""
if factor < 0:
msg = "Factor must be positive!"
@@ -253,15 +273,17 @@
df_runtime = df_runtime.drop(columns=["component", "metric"])
print("\nRuntime:")
sort_indices = ["ratio"] if sort == "ratio" else ["algo", "task", "n"]
-    __print_results(df_runtime, sort_indices, factor, no_split, only_changed)
+    __print_results(
+        df=df_runtime, sort_indices=sort_indices, factor=factor, no_split=no_split, only_changed=only_changed
+    )

if not dd:
return

print("\nDD Package details:")
df_dd = df_all[df_all["metric"] != "runtime"]
sort_indices = ["ratio"] if sort == "ratio" else ["algo", "task", "n", "component", "metric"]
-    __print_results(df_dd, sort_indices, factor, no_split, only_changed)
+    __print_results(df=df_dd, sort_indices=sort_indices, factor=factor, no_split=no_split, only_changed=only_changed)


def main() -> None:
@@ -279,18 +301,23 @@
- :code:`--sort`: Sort the table by this column. Valid options are 'ratio' and 'algorithm'.
- :code:`--dd`: Whether to show the detailed DD benchmark results.
- :code:`--only_changed`: Whether to only show results that changed significantly.
-    - :code:`--no_split`: Whether to merge all results together in one table or to separate the results into benchmarks that improved, stayed the same, or worsened.
+    - :code:`--no_split`: Whether to merge all results together in one table or to separate the results into benchmarks
+      that improved, stayed the same, or worsened.
- :code:`--algorithm <str>`: Only show results for this algorithm.
- :code:`--task <str>`: Only show results for this task.
-    - :code:`--num_qubits <int>`: Only show results for this number of qubits. Can only be used if algorithm is also specified.
+    - :code:`--num_qubits <int>`: Only show results for this number of qubits.
+      Can only be used if algorithm is also specified.
"""
parser = argparse.ArgumentParser(
-        description="Compare the results of two benchmarking runs from the generated json files."
+        description="Compare the results of two benchmarking runs from the generated json files.",
)
parser.add_argument("baseline_filepath", type=str, help="Path to the baseline json file.")
parser.add_argument("feature_filepath", type=str, help="Path to the feature json file.")
parser.add_argument(
-        "--factor", type=float, default=0.1, help="How much a result has to change to be considered significant."
+        "--factor",
+        type=float,
+        default=0.1,
+        help="How much a result has to change to be considered significant.",
)
parser.add_argument(
"--sort",
@@ -300,7 +327,9 @@
)
parser.add_argument("--dd", action="store_true", help="Whether to show the detailed DD benchmark results.")
parser.add_argument(
-        "--only_changed", action="store_true", help="Whether to only show results that changed significantly."
+        "--only_changed",
+        action="store_true",
+        help="Whether to only show results that changed significantly.",
)
parser.add_argument(
"--no_split",
@@ -320,12 +349,12 @@
compare(
args.baseline_filepath,
args.feature_filepath,
-        args.factor,
-        args.sort,
-        args.dd,
-        args.only_changed,
-        args.no_split,
-        args.algorithm,
-        args.task,
-        args.num_qubits,
+        factor=args.factor,
+        sort=args.sort,
+        dd=args.dd,
+        only_changed=args.only_changed,
+        no_split=args.no_split,
+        algorithm=args.algorithm,
+        task=args.task,
+        num_qubits=args.num_qubits,
)
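The bare `*` added to `compare` and `__print_results` in this diff makes the options keyword-only, which is why the call sites above switch to `name=value` form. A minimal sketch of the pattern (the `summarize` function is hypothetical, not from the repository):

```python
def summarize(name: str, *, only_changed: bool = False, no_split: bool = False) -> str:
    # Parameters after the bare `*` must be passed by keyword, so call sites
    # stay self-documenting and boolean arguments cannot silently swap order.
    flags = [flag for flag, on in [("only_changed", only_changed), ("no_split", no_split)] if on]
    return f"{name}: {', '.join(flags) or 'default'}"
```

Calling `summarize("runtime", True)` now fails with a `TypeError` instead of silently setting the wrong flag.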
