18 changes: 18 additions & 0 deletions .github/workflows/docs.yml → .github/workflows/test_and_docs.yml
Contributor:
Rename the file to test_and_docs, as the workflow now does both.

Contributor Author:
Done

@@ -31,7 +31,24 @@ on:
    types: [created]

jobs:
  docs-verify:
    uses: eclipse-score/cicd-workflows/.github/workflows/docs-verify.yml@main
    permissions:
      pull-requests: write
      contents: read
    with:
      bazel-docs-verify-target: "//:docs_check"
  run-tests:
    uses: eclipse-score/cicd-workflows/.github/workflows/tests.yml@main
    permissions:
      contents: read
      pull-requests: read
    with:
      bazel-target: 'test //src/... //tests/... --config=x86_64-linux'
      upload-name: 'bazel-testlogs'
  build-docs:
Contributor:
Add a job to do the docs check; take a look at how it is done in docs-as-code:
https://github.com/eclipse-score/docs-as-code/blob/main/.github/workflows/test_and_docs.yml

Contributor Author:
Added docs-verify step.

    needs: run-tests
    if: ${{ always() }}
    uses: eclipse-score/cicd-workflows/.github/workflows/docs.yml@main
    permissions:
      contents: write
@@ -43,3 +60,4 @@ jobs:
      # the bazel-target depends on your repo-specific docs_targets configuration (e.g. "suffix")
      bazel-target: "//:docs -- --github_user=${{ github.repository_owner }} --github_repo=${{ github.event.repository.name }}"
      retention-days: 3
      tests-report-artifact: bazel-testlogs
10 changes: 5 additions & 5 deletions MODULE.bazel
@@ -16,15 +16,15 @@ module(
)

# Bazel global rules
bazel_dep(name = "rules_python", version = "1.4.1")
bazel_dep(name = "rules_python", version = "1.8.3")
bazel_dep(name = "rules_rust", version = "0.61.0")
bazel_dep(name = "rules_cc", version = "0.2.14")
bazel_dep(name = "aspect_rules_lint", version = "1.5.3")
bazel_dep(name = "rules_cc", version = "0.2.16")
bazel_dep(name = "aspect_rules_lint", version = "2.0.0")
bazel_dep(name = "buildifier_prebuilt", version = "8.2.0.2")
bazel_dep(name = "platforms", version = "1.0.0")
bazel_dep(name = "flatbuffers", version = "25.9.23")
bazel_dep(name = "download_utils", version = "1.0.1")
bazel_dep(name = "googletest", version = "1.17.0.bcr.1")
bazel_dep(name = "download_utils", version = "1.2.2")
bazel_dep(name = "googletest", version = "1.17.0.bcr.2")

Contributor:
Is there a reason not to upgrade further? For example, Platform & Tooling are also not on the newest version.

Contributor Author:
I didn't upgrade score_docs_as_code to v3 as that would break the build, but the bump to 2.3.3 was necessary because 2.3.0 didn't allow linking test results in the pipeline. As for the other requirements, I think there is no reason not to bump them; I'll have a look.

Contributor Author:
Bumped versions. Also updated to docs-as-code 3.0.1 and adapted requirements as necessary.

Contributor:
3.0.1 might be bugged; I'm not sure, you have to test whether it works with your build. If it doesn't, 3.0.0 should work for sure.

Contributor Author:
Downgraded to 3.0.0 just to be sure.
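
For reference, the resulting dependency pin in MODULE.bazel would presumably look like the line below; the actual line sits outside the visible hunks of this diff, so this is an illustrative sketch only:

```starlark
# Hypothetical: the real bazel_dep line is not shown in this diff.
bazel_dep(name = "score_docs_as_code", version = "3.0.0")
```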

# S-CORE process rules
bazel_dep(name = "score_bazel_platforms", version = "0.0.4")
2 changes: 0 additions & 2 deletions MODULE.bazel.lock

Some generated files are not rendered by default.

45 changes: 45 additions & 0 deletions docs/conf.py
@@ -16,6 +16,11 @@
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

from itertools import chain
from pathlib import Path

from docutils import nodes
from docutils.parsers.rst import Directive

# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
@@ -54,3 +59,43 @@

# Enable numref
numfig = True


class DisplayTestLogs(Directive):
    """Find and display the raw content of all test.log files."""

    def run(self):
        env = self.state.document.settings.env
        ws_root = Path(env.app.srcdir).parent

        result_nodes = []
        for log_file in chain(
            (ws_root / "bazel-testlogs").rglob("test.log"),
            (ws_root / "tests-report").rglob("test.log"),
        ):
            rel_path = log_file.relative_to(ws_root)

            title = nodes.rubric(text=str(rel_path))
            result_nodes.append(title)

            try:
                content = log_file.read_text(encoding="utf-8")
            except Exception as e:
                content = f"Error reading file: {e}"

            code = nodes.literal_block(content, content)
            code["language"] = "text"
            code["source"] = str(rel_path)
            result_nodes.append(code)

        if not result_nodes:
            para = nodes.paragraph(
                text="No test.log files found in bazel-testlogs or tests-report."
            )
            result_nodes.append(para)

        return result_nodes


def setup(app):
    app.add_directive("display-test-logs", DisplayTestLogs)

Contributor:
This might be really slow at some point, depending on how large the logs are and how many of these directives you have.

Contributor Author:
Yes, I would kick this out as soon as it slows down the doc build significantly, but I think it's useful to have as long as it's feasible.
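
If the logs do become a bottleneck, one mitigation would be to cap how much of each file the directive embeds. The sketch below is not part of this PR; the `read_log_capped` helper and the `MAX_LOG_BYTES` limit are assumptions chosen for illustration:

```python
from pathlib import Path

# Hypothetical cap on how much of each test.log gets embedded in the docs
# (not part of this PR; 64 KiB is an arbitrary example value).
MAX_LOG_BYTES = 64 * 1024


def read_log_capped(log_file: Path, limit: int = MAX_LOG_BYTES) -> str:
    """Read at most `limit` bytes of a log file, marking any truncation."""
    data = log_file.read_bytes()
    text = data[:limit].decode("utf-8", errors="replace")
    if len(data) > limit:
        text += f"\n... [truncated, {len(data) - limit} bytes omitted]"
    return text
```

Inside `DisplayTestLogs.run`, the `log_file.read_text(encoding="utf-8")` call could then be swapped for `read_log_capped(log_file)` without changing anything else.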

1 change: 1 addition & 0 deletions docs/index.rst
@@ -21,6 +21,7 @@ Lifecycle
   :titlesonly:

   module/*/index
   statistics.rst

Overview
--------
122 changes: 122 additions & 0 deletions docs/statistics.rst
@@ -0,0 +1,122 @@
.. _statistics:

Component Requirements Statistics
=================================

Overview
--------

.. needpie:: Requirements Status
   :labels: not valid, valid but not tested, valid and tested
   :colors: red, yellow, green

   type == 'comp_req' and status == 'invalid'
   type == 'comp_req' and testlink == '' and (status == 'valid' or status == 'invalid')
   type == 'comp_req' and testlink != '' and (status == 'valid' or status == 'invalid')

In Detail
---------

.. grid:: 2
   :class-container: score-grid

   .. grid-item-card::

      .. needpie:: Requirements marked as Valid
         :labels: not valid, valid
         :colors: red, green

         type == 'comp_req' and status == 'invalid'
         type == 'comp_req' and status == 'valid'

   .. grid-item-card::

      .. needpie:: Requirements with Codelinks
         :labels: no codelink, with codelink
         :colors: red, green

         type == 'comp_req' and source_code_link == ''
         type == 'comp_req' and source_code_link != ''

   .. grid-item-card::

      .. needpie:: Test Results
         :labels: passed, failed, skipped
         :colors: green, red, orange

         type == 'testcase' and result == 'passed'
         type == 'testcase' and result == 'failed'
         type == 'testcase' and result == 'skipped'

.. grid:: 2

   .. grid-item-card::

      Failed Tests

      *Hint: This table should be empty. Before a PR can be merged, all tests have to be successful.*

      .. needtable:: FAILED TESTS
         :filter: result == "failed"
         :tags: TEST
         :columns: name as "testcase";result;fully_verifies;partially_verifies;test_type;derivation_technique;id as "link"

   .. grid-item-card::

      Skipped / Disabled Tests

      .. needtable:: SKIPPED/DISABLED TESTS
         :filter: result != "failed" and result != "passed"
         :tags: TEST
         :columns: name as "testcase";result;fully_verifies;partially_verifies;test_type;derivation_technique;id as "link"


All Passed Tests
----------------

.. needtable:: SUCCESSFUL TESTS
   :filter: result == "passed"
   :tags: TEST
   :columns: name as "testcase";result;fully_verifies;partially_verifies;test_type;derivation_technique;id as "link"


Details About Testcases
-----------------------

.. needpie:: Test Types Used In Testcases
   :labels: static-code-analysis, structural-statement-coverage, structural-branch-coverage, walkthrough, inspection, interface-test, requirements-based, resource-usage, control-flow-analysis, data-flow-analysis, fault-injection, struct-func-cov, struct-call-cov
   :legend:

   type == 'testcase' and test_type == 'static-code-analysis'
   type == 'testcase' and test_type == 'structural-statement-coverage'
   type == 'testcase' and test_type == 'structural-branch-coverage'
   type == 'testcase' and test_type == 'walkthrough'
   type == 'testcase' and test_type == 'inspection'
   type == 'testcase' and test_type == 'interface-test'
   type == 'testcase' and test_type == 'requirements-based'
   type == 'testcase' and test_type == 'resource-usage'
   type == 'testcase' and test_type == 'control-flow-analysis'
   type == 'testcase' and test_type == 'data-flow-analysis'
   type == 'testcase' and test_type == 'fault-injection'
   type == 'testcase' and test_type == 'struct-func-cov'
   type == 'testcase' and test_type == 'struct-call-cov'


.. needpie:: Derivation Techniques Used In Testcases
   :labels: requirements-analysis, boundary-values, equivalence-classes, fuzz-testing, error-guessing, explorative-testing
   :legend:

   type == 'testcase' and derivation_technique == 'requirements-analysis'
   type == 'testcase' and derivation_technique == 'boundary-values'
   type == 'testcase' and derivation_technique == 'equivalence-classes'
   type == 'testcase' and derivation_technique == 'fuzz-testing'
   type == 'testcase' and derivation_technique == 'error-guessing'
   type == 'testcase' and derivation_technique == 'explorative-testing'


Test Log Files
--------------

.. display-test-logs::
10 changes: 10 additions & 0 deletions src/health_monitoring_lib/cpp/tests/health_monitor_test.cpp
@@ -21,11 +21,21 @@ using ::testing::_;

class HealthMonitorTest : public ::testing::Test
{
protected:
    void SetUp() override
    {
        RecordProperty("TestType", "interface-test");
        RecordProperty("DerivationTechnique", "explorative-testing");
    }
};

// For the first review round, only a single test case to show the API
TEST_F(HealthMonitorTest, TestName)
{
    RecordProperty(
        "Description",
        "This test demonstrates the usage of HealthMonitor and DeadlineMonitor APIs. It creates a HealthMonitor with a "
        "DeadlineMonitor, retrieves the DeadlineMonitor, and tests starting a deadline.");
    auto builder_mon = deadline::DeadlineMonitorBuilder()
                           .add_deadline(IdentTag("deadline_1"),
                                         TimeRange(std::chrono::milliseconds(100), std::chrono::milliseconds(200)))