Merge 0.8-dev to master (#393)

soininen authored Apr 29, 2024
2 parents 828ff38 + cd995e0 · commit 6e25426
Showing 151 changed files with 14,478 additions and 14,772 deletions.
96 changes: 89 additions & 7 deletions .github/workflows/run_unit_tests.yml
```diff
@@ -4,7 +4,12 @@ name: Unit tests
 
 # Run workflow on every push
 on:
-  push
+  push:
+    paths:
+      - "**.py"
+      - "requirements.txt"
+      - "pyproject.toml"
+      - ".github/workflows/*.yml"
 
 jobs:
   unit-tests:
@@ -13,15 +18,15 @@ jobs:
     strategy:
       matrix:
         os: [ubuntu-22.04, windows-latest]
-        python-version: [3.8, 3.9, "3.10", 3.11]
+        python-version: [3.8, 3.9, "3.10", 3.11, 3.12]
     steps:
-    - uses: actions/checkout@v3
+    - uses: actions/checkout@v4
      with:
        fetch-depth: 0
    - name: Version from Git tags
      run: git describe --tags
    - name: Set up Python ${{ matrix.python-version }}
-      uses: actions/setup-python@v4
+      uses: actions/setup-python@v5
      with:
        python-version: ${{ matrix.python-version }}
    - name: Display Python version
@@ -36,14 +41,91 @@ jobs:
        PYTHONUTF8: 1
      run: |
        python -m pip install --upgrade pip
-        pip install .[dev]
+        python -m pip install .[dev]
    - name: List packages
      run:
-        pip list
+        python -m pip list
    - name: Run tests
      env:
        QT_QPA_PLATFORM: offscreen
      run:
        coverage run -m unittest discover --verbose
    - name: Upload coverage report to Codecov
-      uses: codecov/codecov-action@v3
+      uses: codecov/codecov-action@v4
+      env:
+        CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
+  toolbox-unit-tests:
+    name: Spine Toolbox unit tests
+    runs-on: ${{ matrix.os }}
+    strategy:
+      fail-fast: true
+      matrix:
+        python-version: [3.8]
+        os: [ubuntu-22.04]
+    steps:
+    - uses: actions/checkout@v4
+      with:
+        repository: spine-tools/Spine-Toolbox
+        fetch-depth: 0
+    - name: Set up Python ${{ matrix.python-version }}
+      uses: actions/setup-python@v5
+      with:
+        python-version: ${{ matrix.python-version }}
+    - name: Install additional packages for Linux
+      if: runner.os == 'Linux'
+      run: |
+        sudo apt-get update -y
+        sudo apt-get install -y libegl1
+    - name: Install dependencies
+      env:
+        PYTHONUTF8: 1
+      run: |
+        python -m pip install --upgrade pip
+        python -m pip install -r requirements.txt
+    - name: List packages
+      run:
+        python -m pip list
+    - name: Install python3 kernelspecs
+      run: |
+        python -m pip install ipykernel
+        python -m ipykernel install --user
+    - name: Run tests
+      run: |
+        if [ "$RUNNER_OS" != "Windows" ]; then
+          export QT_QPA_PLATFORM=offscreen
+        fi
+        python -m unittest discover --verbose
+      shell: bash
+  toolbox-execution-tests:
+    name: Spine Toolbox execution tests
+    runs-on: ${{ matrix.os }}
+    strategy:
+      fail-fast: false
+      matrix:
+        python-version: [3.8]
+        os: [ubuntu-22.04]
+    steps:
+    - uses: actions/checkout@v4
+      with:
+        repository: spine-tools/Spine-Toolbox
+    - name: Set up Python ${{ matrix.python-version }}
+      uses: actions/setup-python@v5
+      with:
+        python-version: ${{ matrix.python-version }}
+    - name: Install additional packages for Linux
+      if: runner.os == 'Linux'
+      run: |
+        sudo apt-get update -y
+        sudo apt-get install -y libegl1
+    - name: Install dependencies
+      env:
+        PYTHONUTF8: 1
+      run: |
+        python -m pip install --upgrade pip
+        python -m pip install -r requirements.txt
+    - name: List packages
+      run:
+        python -m pip list
+    - name: Run tests
+      run:
+        python -m unittest discover --pattern execution_test.py --verbose
```
2 changes: 2 additions & 0 deletions .gitignore
```diff
@@ -3,6 +3,7 @@
 /.idea/
 /docs/build/
 /docs/source/autoapi/
+/docs/source/db_mapping_schema.rst
 
 # Setuptools distribution folder.
 /build/
@@ -16,3 +17,4 @@
 /htmlcov
 
 spinedb_api/version.py
+benchmarks/*.json
```
10 changes: 7 additions & 3 deletions .readthedocs.yml
```diff
@@ -5,6 +5,12 @@
 # Required
 version: 2
 
+# Set the version of Python and other tools you might need
+build:
+  os: ubuntu-22.04
+  tools:
+    python: "3.9"
+
 # Build documentation in the docs/ directory with Sphinx
 sphinx:
   builder: html
@@ -17,8 +23,6 @@ formats:
 
 # Optionally set the version of Python and requirements required to build your docs
 python:
-  version: 3.8
   install:
   - method: pip
     path: .
-  - requirements: requirements.txt
+  - requirements: docs/requirements.txt
```
39 changes: 39 additions & 0 deletions CHANGELOG.md
@@ -0,0 +1,39 @@
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]

This is the first release where we keep a Spine-Database-API specific changelog.

The database structure has changed quite a bit.
Large parts of the API have been rewritten or replaced by new systems.
We are still keeping many old entry points for backwards compatibility,
but those functions and methods are pending deprecation.

### Changed

- Python 3.12 is now supported.
- Objects and relationships have been replaced by *entities*.
Zero-dimensional entities correspond to objects, while multidimensional entities correspond to relationships
(see the sketch below).
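
The change can be illustrated with the `add_entity_class_item` calls that appear in this commit's own
benchmarks (see `benchmarks/mapped_item_getitem.py` below); the class names here are illustrative only:

```python
from spinedb_api import DatabaseMapping

with DatabaseMapping("sqlite://", create=True) as db_map:
    # Zero-dimensional entity class: what used to be an object class.
    item, error = db_map.add_entity_class_item(name="unit")
    assert error is None
    # Two-dimensional entity class: what used to be a relationship class.
    item, error = db_map.add_entity_class_item(
        name="unit__unit", dimension_name_list=("unit", "unit")
    )
    assert error is None
```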

### Added

- *Entity alternatives* control the visibility of entities.
This replaces the previous tool/feature/method system.
- Support for *superclasses*.
It is now possible to set a superclass for an entity class.
The class then inherits parameter definitions from its superclass (see the sketch after this list).
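
A sketch of how the two additions above might be used. The `add_superclass_subclass_item`,
`add_entity_item` and `add_entity_alternative_item` calls are assumptions: they follow the
`add_*_item` naming convention of `add_entity_class_item` seen elsewhere in this commit,
but their exact signatures are not confirmed by the diff shown on this page.

```python
from spinedb_api import DatabaseMapping

with DatabaseMapping("sqlite://", create=True) as db_map:
    db_map.add_entity_class_item(name="unit")
    db_map.add_entity_class_item(name="power_plant")
    # Assumed API: a superclass_subclass item makes power_plant inherit
    # parameter definitions from unit.
    item, error = db_map.add_superclass_subclass_item(
        superclass_name="unit", subclass_name="power_plant"
    )
    assert error is None
    # Assumed API: an entity_alternative item hides plant_a in the Base alternative.
    db_map.add_entity_item(entity_class_name="power_plant", name="plant_a")
    db_map.add_entity_alternative_item(
        entity_class_name="power_plant",
        entity_byname=("plant_a",),
        alternative_name="Base",
        active=False,
    )
```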

### Fixed

### Removed

- Tools, features and methods have been removed.

### Deprecated

### Security
29 changes: 29 additions & 0 deletions benchmarks/README.md
@@ -0,0 +1,29 @@
# Performance benchmarks

This Python package contains performance benchmarks for `spinedb_api`.
The benchmarks use [`pyperf`](https://pyperf.readthedocs.io/en/latest/index.html)
which is installed as part of the optional developer dependencies:

```commandline
python -m pip install -e .[dev]
```

Each Python file is a self-contained script
that benchmarks some aspect of the DB API.
Benchmark results can optionally be written into a `.json` file
by modifying the script.
This may be handy for comparing e.g. different branches, commits or other changes.
The file can be inspected with

```commandline
python -m pyperf show <benchmark file.json>
```

Benchmark files from e.g. different commits or branches can be compared with

```commandline
python -m pyperf compare_to <benchmark file 1.json> <benchmark file 2.json>
```

Check the [`pyperf` documentation](https://pyperf.readthedocs.io/en/latest/index.html)
for further things you can do with it.
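
Putting it together, a typical comparison workflow might look like this (run from the repository
root; the branch and file names are illustrative, and writing the `.json` files requires editing
the scripts' `run_benchmark()` calls as noted above):

```commandline
python -m benchmarks.map_from_database
git switch my-optimization-branch
python -m benchmarks.map_from_database
python -m pyperf compare_to before.json after.json
```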
Empty file added benchmarks/__init__.py
55 changes: 55 additions & 0 deletions benchmarks/datetime_from_database.py
@@ -0,0 +1,55 @@

```python
"""
This benchmark tests the performance of reading a DateTime value from database.
"""

import datetime
import time
from typing import Any, Sequence, Tuple
import pyperf
from spinedb_api import DateTime, from_database, to_database


def build_datetimes(count: int) -> Sequence[DateTime]:
    """Builds a list of distinct DateTime values, one hour apart."""
    datetimes = []
    year = 2024
    month = 1
    day = 1
    hour = 0
    while len(datetimes) != count:
        datetimes.append(DateTime(datetime.datetime(year, month, day, hour)))
        # Roll the clock forward one hour; days are capped at 28,
        # so every generated date is valid in any month.
        hour += 1
        if hour == 24:
            hour = 0
            day += 1
        if day == 29:
            day = 1
            month += 1
        if month == 13:
            month = 1
            year += 1
    return datetimes


def value_from_database(loops: int, db_values_and_types: Sequence[Tuple[Any, str]]) -> float:
    """Measures from_database() only, excluding loop overhead."""
    duration = 0.0
    for _ in range(loops):
        for db_value, db_type in db_values_and_types:
            start = time.perf_counter()
            from_database(db_value, db_type)
            duration += time.perf_counter() - start
    return duration


def run_benchmark(file_name):
    runner = pyperf.Runner(loops=10)
    inner_loops = 1000
    db_values_and_types = [to_database(x) for x in build_datetimes(inner_loops)]
    benchmark = runner.bench_time_func(
        "from_database[DateTime]", value_from_database, db_values_and_types, inner_loops=inner_loops
    )
    if file_name and benchmark is not None:
        # Guard against None as the sibling benchmark scripts do.
        pyperf.add_runs(file_name, benchmark)


if __name__ == "__main__":
    run_benchmark("")
```
39 changes: 39 additions & 0 deletions benchmarks/map_from_database.py
@@ -0,0 +1,39 @@

```python
"""
This benchmark tests the performance of reading a Map type value from database.
"""

import time
import pyperf
from spinedb_api import from_database, to_database
from benchmarks.utils import build_even_map


def value_from_database(loops, db_value, value_type):
    """Measures from_database() only, excluding loop overhead."""
    duration = 0.0
    for _ in range(loops):
        start = time.perf_counter()
        from_database(db_value, value_type)
        duration += time.perf_counter() - start
    return duration


def run_benchmark(file_name):
    runner = pyperf.Runner(loops=3)
    runs = {
        "value_from_database[Map(10, 10, 100)]": {"dimensions": (10, 10, 100)},
        "value_from_database[Map(10000)]": {"dimensions": (10000,)},
    }
    for name, parameters in runs.items():
        db_value, value_type = to_database(build_even_map(parameters["dimensions"]))
        benchmark = runner.bench_time_func(
            name,
            value_from_database,
            db_value,
            value_type,
        )
        if file_name and benchmark is not None:
            pyperf.add_runs(file_name, benchmark)


if __name__ == "__main__":
    run_benchmark("")
```
61 changes: 61 additions & 0 deletions benchmarks/mapped_item_getitem.py
@@ -0,0 +1,61 @@

```python
"""
This benchmark tests the performance of the MappedItemBase.__getitem__() method.
"""

import time
from typing import Sequence
import pyperf
from spinedb_api import DatabaseMapping
from spinedb_api.db_mapping_base import PublicItem


def use_subscript_operator(loops: int, items: Sequence[PublicItem], field: str) -> float:
    """Measures the subscript operator only, excluding loop overhead."""
    duration = 0.0
    for _ in range(loops):
        for item in items:
            start = time.perf_counter()
            item[field]  # invokes MappedItemBase.__getitem__()
            duration += time.perf_counter() - start
    return duration


def run_benchmark(file_name):
    runner = pyperf.Runner()
    inner_loops = 1000
    object_class_names = [str(i) for i in range(inner_loops)]
    relationship_class_names = [f"r{name}" for name in object_class_names]
    with DatabaseMapping("sqlite://", create=True) as db_map:
        object_classes = []
        for name in object_class_names:
            item, error = db_map.add_entity_class_item(name=name)
            assert error is None
            object_classes.append(item)
        relationship_classes = []
        for name, dimension in zip(relationship_class_names, object_classes):
            item, error = db_map.add_entity_class_item(name=name, dimension_name_list=(dimension["name"],))
            assert error is None
            relationship_classes.append(item)
        benchmarks = [
            runner.bench_time_func(
                "PublicItem subscript['name' in EntityClassItem]",
                use_subscript_operator,
                object_classes,
                "name",
                inner_loops=inner_loops,
            ),
            runner.bench_time_func(
                "PublicItem subscript['dimension_name_list' in EntityClassItem]",
                use_subscript_operator,
                relationship_classes,
                "dimension_name_list",
                inner_loops=inner_loops,
            ),
        ]
    if file_name:
        for benchmark in benchmarks:
            if benchmark is not None:
                pyperf.add_runs(file_name, benchmark)


if __name__ == "__main__":
    run_benchmark("")
```