
Conversation

@knrc knrc commented Nov 25, 2025

User description

…unning on OpenShift or explicitly configured

@osmman This is the alternative approach to handling it; however, it suffers from two not insignificant problems:

  • ServiceMonitor (and therefore its API server) will remain a hard dependency of our operator. This will have a direct impact on anything that requires ctlog for verification, such as the policy controller and any deployment for which it provides a gate.
  • If the Prometheus operator is deleted from the cluster and later recreated, we would not automatically recreate the ServiceMonitor resources. They may be recreated as a side effect of another reconciliation, but there is no guarantee if or when that would occur.

The #1440 pull request, while more complicated, handles this in the right way for Kubernetes. That PR makes the ServiceMonitor resources a soft dependency of our operator, reacting dynamically to changes within the cluster to determine when and if those resources should be created.
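For reference, a dynamic soft dependency along the lines of #1440 typically means probing the API server for the ServiceMonitor kind instead of assuming it exists. A minimal sketch of such a probe is below; the function name, package, and wiring are illustrative assumptions, not code from this PR or from #1440.

```go
package platform

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// serviceMonitorAvailable reports whether the monitoring.coreos.com/v1
// ServiceMonitor API is currently served by the cluster, so ServiceMonitor
// creation can be skipped (rather than failing) when prometheus-operator is
// absent. Hypothetical helper, not part of this PR.
func serviceMonitorAvailable(cfg *rest.Config) (bool, error) {
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return false, err
	}
	resources, err := dc.ServerResourcesForGroupVersion("monitoring.coreos.com/v1")
	if err != nil {
		if apierrors.IsNotFound(err) {
			// The group/version is not served: the CRD is not installed.
			return false, nil
		}
		return false, err
	}
	for _, r := range resources.APIResources {
		if r.Kind == "ServiceMonitor" {
			return true, nil
		}
	}
	return false, nil
}
```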


PR Type

Enhancement


Description

  • Add optional ServiceMonitor field to MonitoringConfig for platform-aware control

  • Implement IsServiceMonitorEnabled() method with OpenShift-aware defaults

  • Update all monitoring action handlers to check ServiceMonitor enablement

  • Fix deep copy functions for proper pointer handling in monitoring configs

  • Update CRD schemas to reflect new optional ServiceMonitor field


Diagram Walkthrough

flowchart LR
  A["MonitoringConfig"] -->|adds ServiceMonitor field| B["Optional bool pointer"]
  B -->|defaults based on| C["IsOpenShift detection"]
  C -->|explicit override| D["User configuration"]
  D -->|controls| E["ServiceMonitor creation"]
  E -->|in all services| F["CTlog, Fulcio, Rekor, Trillian, TSA"]

File Walkthrough

Relevant files

| Category | File | Change summary | +/- |
|----------|------|----------------|-----|
| Enhancement (8 files) | common.go | Add ServiceMonitor field and helper method | +13/-0 |
| | monitoring.go | Add ServiceMonitor enablement check to CTlog | +3/-1 |
| | monitoring.go | Add ServiceMonitor enablement check to Fulcio | +3/-1 |
| | helper.go | Add ServiceMonitor enablement check to Rekor | +3/-1 |
| | monitoring.go | Add ServiceMonitor enablement check to Rekor server | +3/-1 |
| | monitoring.go | Add ServiceMonitor enablement check to Trillian logserver | +3/-1 |
| | monitoring.go | Add ServiceMonitor enablement check to Trillian logsigner | +3/-1 |
| | monitoring.go | Add ServiceMonitor enablement check to TimestampAuthority | +3/-1 |
| Tests (1 file) | common_test.go | Add comprehensive tests for ServiceMonitor logic | +80/-0 |
| Bug fix (1 file) | zz_generated.deepcopy.go | Fix deep copy for MonitoringConfig pointer fields | +11/-6 |
| Configuration changes (6 files) | rhtas.redhat.com_ctlogs.yaml | Update CTlog CRD with ServiceMonitor field | +5/-0 |
| | rhtas.redhat.com_fulcios.yaml | Update Fulcio CRD with ServiceMonitor field | +5/-0 |
| | rhtas.redhat.com_rekors.yaml | Update Rekor CRD with ServiceMonitor field | +5/-0 |
| | rhtas.redhat.com_securesigns.yaml | Update SecureSign CRD with ServiceMonitor fields | +25/-0 |
| | rhtas.redhat.com_timestampauthorities.yaml | Update TimestampAuthority CRD with ServiceMonitor field | +5/-0 |
| | rhtas.redhat.com_trillians.yaml | Update Trillian CRD with ServiceMonitor field | +5/-0 |

…unning on OpenShift or explicitly configured

Signed-off-by: Kevin Conner <kev.conner@gmail.com>

sourcery-ai bot commented Nov 25, 2025

Reviewer's Guide

Adds a configurable ServiceMonitor flag to monitoring configuration, wires it through CRDs and controllers so ServiceMonitor resources are only created on OpenShift by default or when explicitly enabled, and updates deepcopy logic plus tests to support the new optional field.
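In code terms, the change summarized above amounts to roughly the following. The field and the helper body mirror the snippets quoted later on this page; the JSON tags, kubebuilder marker, and doc comments are assumptions added for readability, not copied from the PR.

```go
// MonitoringConfig as extended by this PR (sketch; tags and markers assumed).
type MonitoringConfig struct {
	// Enabled turns component monitoring on or off.
	Enabled bool `json:"enabled"`

	// ServiceMonitor controls creation of Prometheus ServiceMonitor resources.
	// When nil, the caller-supplied platform default applies: enabled on
	// OpenShift, disabled on plain Kubernetes.
	// +optional
	ServiceMonitor *bool `json:"serviceMonitor,omitempty"`
}

// IsServiceMonitorEnabled returns the explicit user choice when set,
// otherwise the provided platform default.
func (m *MonitoringConfig) IsServiceMonitorEnabled(defaultVal bool) bool {
	if m.ServiceMonitor != nil {
		return *m.ServiceMonitor
	}
	return defaultVal
}
```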

Updated class diagram for monitoring configuration and related specs

classDiagram
    class MonitoringConfig {
        bool Enabled
        bool* ServiceMonitor
        bool IsServiceMonitorEnabled(defaultVal bool)
    }

    class MonitoringWithTLogConfig {
        +MonitoringConfig MonitoringConfig
        +TLogConfig TLog
    }

    class CTlogSpec {
        +MonitoringConfig Monitoring
        +TrillianSpec Trillian
        +[]SecretKeySelector Secrets
        +LocalObjectReference* ServerConfigRef
    }

    class FulcioSpec {
        +MonitoringConfig Monitoring
        +CTlogSpec Ctlog
        +FulcioConfig Config
        +FulcioCertificateConfig Certificate
        +LocalObjectReference* TrustedCA
    }

    class RekorSpec {
        +MonitoringConfig Monitoring
        +TrillianSpec Trillian
        +ExternalAccessConfig ExternalAccess
        +RekorSearchUIConfig RekorSearchUI
        +SignerConfig Signer
        +AttestationsConfig Attestations
    }

    class TimestampAuthoritySpec {
        +MonitoringConfig Monitoring
        +PodRequirementsConfig PodRequirements
        +ExternalAccessConfig ExternalAccess
        +SignerConfig Signer
        +LocalObjectReference* TrustedCA
    }

    class TrillianSpec {
        +MonitoringConfig Monitoring
        +TrillianDBConfig Db
        +TrillianLogServerConfig LogServer
        +TrillianLogSignerConfig LogSigner
        +LocalObjectReference* TrustedCA
    }

    class RekorMonitoring {
        +MonitoringWithTLogConfig Monitoring
    }

    class Rekor {
        +RekorSpec Spec
    }

    class CTlog {
        +CTlogSpec Spec
    }

    class Fulcio {
        +FulcioSpec Spec
    }

    class TimestampAuthority {
        +TimestampAuthoritySpec Spec
    }

    class Trillian {
        +TrillianSpec Spec
    }

    MonitoringWithTLogConfig --> MonitoringConfig : embeds
    RekorMonitoring --> MonitoringWithTLogConfig : has

    CTlogSpec --> MonitoringConfig : has
    FulcioSpec --> MonitoringConfig : has
    RekorSpec --> MonitoringConfig : has
    TimestampAuthoritySpec --> MonitoringConfig : has
    TrillianSpec --> MonitoringConfig : has

    Rekor --> RekorSpec : has
    CTlog --> CTlogSpec : has
    Fulcio --> FulcioSpec : has
    TimestampAuthority --> TimestampAuthoritySpec : has
    Trillian --> TrillianSpec : has

Flow diagram for deciding ServiceMonitor creation in resource monitoring actions

flowchart TD
    A["Start monitoringAction.CanHandle"] --> B["Check Ready condition reason is Creating or Ready"]
    B -->|No| Z["Do not create ServiceMonitor"]
    B -->|Yes| C["Check Spec.Monitoring.Enabled == true"]
    C -->|No| Z
    C -->|Yes| D["Call kubernetes.IsOpenShift() to determine platform"]
    D --> E["Call MonitoringConfig.IsServiceMonitorEnabled(defaultVal)"]

    subgraph IsServiceMonitorEnabled_logic
        direction LR
        E --> F{ServiceMonitor is set?}
        F -->|Yes| G["Return value of ServiceMonitor"]
        F -->|No| H["Return defaultVal based on platform"]
    end

    G --> I{Result is true?}
    H --> I

    I -->|No| Z
    I -->|Yes| Y["CanHandle returns true, reconciliation creates or updates ServiceMonitor resources"]

    Z["CanHandle returns false, skip ServiceMonitor reconciliation"]
    Y --> J["End"]
    Z --> J

Flow diagram for MonitoringConfig.IsServiceMonitorEnabled behavior across platforms

flowchart TD
    A["Start IsServiceMonitorEnabled(defaultVal)"] --> B{ServiceMonitor field is nil?}
    B -->|No| C["Return *ServiceMonitor (explicit user choice)"]
    B -->|Yes| D["Use defaultVal provided by caller"]

    subgraph Caller_examples
        direction LR
        E["On OpenShift"] --> F["Caller passes defaultVal = true"]
        G["On non OpenShift Kubernetes"] --> H["Caller passes defaultVal = false"]
    end

    D --> I["Return defaultVal (platform-specific default)"]
    C --> J["End"]
    I --> J
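A table-driven example in the spirit of the unit tests added in api/v1alpha1/common_test.go makes the defaulting rule concrete; the exact cases in the PR's tests may differ from this sketch.

```go
package v1alpha1

import "testing"

// Sketch of the defaulting behavior: a nil ServiceMonitor follows the
// platform default passed by the caller (true when kubernetes.IsOpenShift()
// reports OpenShift), while an explicit value always wins.
func TestIsServiceMonitorEnabledDefaults(t *testing.T) {
	explicitOff := false

	cases := []struct {
		name       string
		cfg        MonitoringConfig
		defaultVal bool // what the caller derives from platform detection
		want       bool
	}{
		{"nil defaults to true on OpenShift", MonitoringConfig{Enabled: true}, true, true},
		{"nil defaults to false on Kubernetes", MonitoringConfig{Enabled: true}, false, false},
		{"explicit false overrides the OpenShift default", MonitoringConfig{Enabled: true, ServiceMonitor: &explicitOff}, true, false},
	}
	for _, tc := range cases {
		if got := tc.cfg.IsServiceMonitorEnabled(tc.defaultVal); got != tc.want {
			t.Errorf("%s: got %v, want %v", tc.name, got, tc.want)
		}
	}
}
```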

File-Level Changes

Introduce optional serviceMonitor flag on MonitoringConfig and wire through all CRDs so users can control ServiceMonitor creation behavior with platform-dependent defaults.
  • Extend MonitoringConfig API type with optional ServiceMonitor *bool field and add helper IsServiceMonitorEnabled(defaultVal bool) to centralize defaulting logic.
  • Update generated CRD YAMLs for Securesign, CTLog, Fulcio, Rekor, TimestampAuthority, and Trillian to expose the new serviceMonitor boolean field with documentation and keep enabled as required.
  • Add unit tests for MonitoringConfig.IsServiceMonitorEnabled to validate behavior when ServiceMonitor is nil or explicitly set across OpenShift and non-OpenShift defaults.
api/v1alpha1/common.go
config/crd/bases/rhtas.redhat.com_securesigns.yaml
config/crd/bases/rhtas.redhat.com_ctlogs.yaml
config/crd/bases/rhtas.redhat.com_fulcios.yaml
config/crd/bases/rhtas.redhat.com_rekors.yaml
config/crd/bases/rhtas.redhat.com_timestampauthorities.yaml
config/crd/bases/rhtas.redhat.com_trillians.yaml
api/v1alpha1/common_test.go
Ensure deepcopy logic correctly handles the new optional ServiceMonitor field and nested MonitoringConfig structs so runtime copies are safe (a minimal sketch follows the file reference below).
  • Change DeepCopyInto implementations for specs containing Monitoring to call DeepCopyInto instead of value assignment, ensuring pointer fields are copied properly.
  • Extend MonitoringConfig.DeepCopyInto to deep-copy the ServiceMonitor pointer when present.
  • Update MonitoringWithTLogConfig.DeepCopyInto to deep-copy its embedded MonitoringConfig instead of assigning by value.
api/v1alpha1/zz_generated.deepcopy.go
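
A minimal sketch of the adjusted deep copy described above, following the usual deepcopy-gen pattern; it is not copied verbatim from zz_generated.deepcopy.go, and the handling of the embedded config assumes the remaining fields have no pointer members.

```go
// DeepCopyInto duplicates the optional ServiceMonitor pointer so the copy
// does not alias the original value.
func (in *MonitoringConfig) DeepCopyInto(out *MonitoringConfig) {
	*out = *in
	if in.ServiceMonitor != nil {
		in, out := &in.ServiceMonitor, &out.ServiceMonitor
		*out = new(bool)
		**out = **in
	}
}

// MonitoringWithTLogConfig embeds MonitoringConfig, so its copy must call the
// method above instead of assigning the embedded struct by value.
func (in *MonitoringWithTLogConfig) DeepCopyInto(out *MonitoringWithTLogConfig) {
	*out = *in
	in.MonitoringConfig.DeepCopyInto(&out.MonitoringConfig)
}
```
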
Gate ServiceMonitor-related reconciliation in controllers on both monitoring enabled and the new ServiceMonitor flag, defaulting based on whether the cluster is OpenShift.
  • Update CanHandle/enable checks in ctlog, fulcio, rekor, trillian, and timestamp authority monitoring actions to additionally require Monitoring.IsServiceMonitorEnabled(kubernetes.IsOpenShift()).
  • Update Rekor monitor helper to combine existing TLog.Enabled check with the new ServiceMonitor-enabled condition.
  • Import the kubernetes utility package where needed to detect OpenShift at runtime so ServiceMonitor creation is OpenShift-default-on and Kubernetes-default-off.
internal/controller/ctlog/actions/monitoring.go
internal/controller/fulcio/actions/monitoring.go
internal/controller/rekor/actions/monitor/helper.go
internal/controller/rekor/actions/server/monitoring.go
internal/controller/trillian/actions/logserver/monitoring.go
internal/controller/trillian/actions/logsigner/monitoring.go
internal/controller/tsa/actions/monitoring.go

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

@qodo-code-review

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
🟢
No security concerns identified. No security vulnerabilities detected by AI analysis. Human verification advised for critical code.
Ticket Compliance
🎫 No ticket provided
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢
Generic: Meaningful Naming and Self-Documenting Code

Objective: Ensure all identifiers clearly express their purpose and intent, making code
self-documenting

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Secure Error Handling

Objective: To prevent the leakage of sensitive system information through error messages while
providing sufficient detail for internal debugging.

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Secure Logging Practices

Objective: To ensure logs are useful for debugging and auditing without exposing sensitive
information like PII, PHI, or cardholder data.

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Security-First Input Validation and Data Handling

Objective: Ensure all data inputs are validated, sanitized, and handled securely to prevent
vulnerabilities

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Comprehensive Audit Trails

Objective: To create a detailed and reliable record of critical system actions for security analysis
and compliance.

Status:
Missing Auditing: The new gating logic for creating ServiceMonitor resources adds critical decision paths
without any accompanying audit/log statements to trace when and why monitoring resources
are created or skipped.

Referred Code
return (c.Reason == constants.Creating || c.Reason == constants.Ready) &&
	instance.Spec.Monitoring.Enabled &&
	instance.Spec.Monitoring.IsServiceMonitorEnabled(kubernetes.IsOpenShift())

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Robust Error Handling and Edge Case Management

Objective: Ensure comprehensive error handling that provides meaningful context and graceful
degradation

Status:
Edge Case Nil: The new IsServiceMonitorEnabled method depends on an external default flag and does not
guard against potential nil MonitoringConfig usage in callers, and controller CanHandle
paths add new dependency checks without explicit fallback or logging when platform
detection fails.

Referred Code
func (m *MonitoringConfig) IsServiceMonitorEnabled(defaultVal bool) bool {
	if m.ServiceMonitor != nil {
		return *m.ServiceMonitor
	}
	return defaultVal
}

Learn more about managing compliance generic rules or creating your own custom rules

Compliance status legend:
🟢 - Fully Compliant
🟡 - Partially Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label


@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes - here's some feedback:

  • The IsServiceMonitorEnabled helper takes a generic default value, but the tests and call sites treat it as an isOpenShift flag; consider renaming the parameter and updating test names to reflect that it is a default value rather than a platform indicator to avoid confusion for future readers.
  • The repeated CanHandle condition combining Ready status, Monitoring.Enabled, and IsServiceMonitorEnabled(kubernetes.IsOpenShift()) across multiple controllers could be factored into a shared helper to reduce duplication and keep the monitoring enablement logic in one place (a rough sketch follows this list).
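
As a rough illustration of that second point, the repeated gate could be collected into a helper along these lines; the function name and placement are hypothetical, and the operator-internal packages (rhtasv1alpha1, constants, kubernetes) are referenced as they appear in the snippets quoted elsewhere on this page.

```go
import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// serviceMonitorReconcilable is a hypothetical shared helper gathering the
// condition repeated across the monitoring actions: the instance must be
// Creating or Ready, monitoring must be enabled, and the ServiceMonitor flag
// must resolve to true for the current platform.
func serviceMonitorReconcilable(conditions []metav1.Condition, monitoring rhtasv1alpha1.MonitoringConfig) bool {
	c := meta.FindStatusCondition(conditions, constants.Ready)
	if c == nil {
		return false
	}
	return (c.Reason == constants.Creating || c.Reason == constants.Ready) &&
		monitoring.Enabled &&
		monitoring.IsServiceMonitorEnabled(kubernetes.IsOpenShift())
}
```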


@qodo-code-review

PR Code Suggestions ✨

Explore these optional code suggestions:

High-level
Reconsider approach for optional ServiceMonitors

The current approach for conditional ServiceMonitor creation still has a hard
dependency on its CRD. A better solution is to dynamically detect the CRD's
availability at runtime to create a true soft dependency.

Examples:

internal/controller/ctlog/actions/monitoring.go [31-36]
func (i monitoringAction) CanHandle(_ context.Context, instance *rhtasv1alpha1.CTlog) bool {
	c := meta.FindStatusCondition(instance.Status.Conditions, constants.Ready)
	return (c.Reason == constants.Creating || c.Reason == constants.Ready) &&
		instance.Spec.Monitoring.Enabled &&
		instance.Spec.Monitoring.IsServiceMonitorEnabled(kubernetes.IsOpenShift())
}
internal/controller/fulcio/actions/monitoring.go [31-36]
func (i monitoringAction) CanHandle(_ context.Context, instance *rhtasv1alpha1.Fulcio) bool {
	c := meta.FindStatusCondition(instance.Status.Conditions, constants.Ready)
	return (c.Reason == constants.Creating || c.Reason == constants.Ready) &&
		instance.Spec.Monitoring.Enabled &&
		instance.Spec.Monitoring.IsServiceMonitorEnabled(kubernetes.IsOpenShift())
}

Solution Walkthrough:

Before:

// api/v1alpha1/common.go
func (m *MonitoringConfig) IsServiceMonitorEnabled(defaultVal bool) bool {
	if m.ServiceMonitor != nil {
		return *m.ServiceMonitor
	}
	return defaultVal
}

// internal/controller/ctlog/actions/monitoring.go
func (i monitoringAction) CanHandle(_ context.Context, instance *rhtasv1alpha1.CTlog) bool {
	c := meta.FindStatusCondition(...)
	return (c.Reason == constants.Creating || c.Reason == constants.Ready) &&
		instance.Spec.Monitoring.Enabled &&
		instance.Spec.Monitoring.IsServiceMonitorEnabled(kubernetes.IsOpenShift())
}

After:

// A dynamic check for the ServiceMonitor CRD would be added,
// for example, during controller initialization.

type Reconciler struct {
    // ... other fields
    ServiceMonitorAvailable bool
}

// The action would then use this information.
func (i monitoringAction) CanHandle(_ context.Context, instance *rhtasv1alpha1.CTlog) bool {
    if !reconciler.ServiceMonitorAvailable {
        return false
    }
	c := meta.FindStatusCondition(...)
	return (c.Reason == constants.Creating || c.Reason == constants.Ready) &&
		instance.Spec.Monitoring.Enabled &&
		instance.Spec.Monitoring.IsServiceMonitorEnabled(kubernetes.IsOpenShift())
}
Suggestion importance [1-10]: 9

Why: This is a critical architectural suggestion that correctly identifies a major design flaw, which is even acknowledged in the PR description, making the current approach a potentially problematic workaround.

Impact: High
Possible issue
Decouple TLog monitoring from ServiceMonitor setting

Decouple the Rekor transparency log (TLog) monitoring from the ServiceMonitor
setting by removing the IsServiceMonitorEnabled check from the enabled function.

internal/controller/rekor/actions/monitor/helper.go [9-12]

 func enabled(instance *v1alpha1.Rekor) bool {
-	return utils.IsEnabled(&instance.Spec.Monitoring.TLog.Enabled) &&
-		instance.Spec.Monitoring.IsServiceMonitorEnabled(kubernetes.IsOpenShift())
+	return utils.IsEnabled(&instance.Spec.Monitoring.TLog.Enabled)
 }
Suggestion importance [1-10]: 8

Why: The suggestion correctly identifies a logical bug where the Rekor TLog monitoring is incorrectly coupled with the ServiceMonitor setting, which could unintentionally disable the TLog monitoring CronJob.

Impact: Medium

@qodo-code-review

CI Feedback 🧐

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

Action: Test upgrade operator

Failed stage: Run tests [❌]

Failed test name: Operator upgrade [It] Upgrade operator

Failure summary:

The GitHub Action failed because the E2E test "Operator upgrade [It] Upgrade operator" timed out:

- Test file: test/e2e/upgrade_test.go
- Failure location: line 176 (initiated around line 158)
- Error: Timed out after 300.003s; expected false to be true
- Context: catalog rhtas-operator-catalog was ready and extension rhtas-operator version 1.3.1 was ready, but the upgrade flow (targeting channel rhtas-operator.v1.4.0) did not complete within the timeout.

Relevant error logs:
1:  Runner name: 'ubuntu-4core_ee3665714d9d'
2:  Runner group name: 'default'
...

366:  configmap/ingress-nginx-controller created
367:  service/ingress-nginx-controller created
368:  service/ingress-nginx-controller-admission created
369:  deployment.apps/ingress-nginx-controller created
370:  job.batch/ingress-nginx-admission-create created
371:  job.batch/ingress-nginx-admission-patch created
372:  ingressclass.networking.k8s.io/nginx created
373:  validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
374:  pod/ingress-nginx-controller-bcdf75cfc-p42rc condition met
375:  ##[group]Run # Download the bundle.yaml
376:  # Download the bundle.yaml
377:  curl -sL https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.84.0/bundle.yaml -o bundle.yaml
378:  
379:  # Check if the download was successful and the file is not empty
380:  if [ ! -s "bundle.yaml" ]; then
381:    echo "Error: Downloaded bundle.yaml is empty or failed to download."
382:    exit 1
...

727:  go: downloading github.com/aws/aws-sdk-go-v2/service/sso v1.22.4
728:  go: downloading github.com/aws/aws-sdk-go-v2/service/ssooidc v1.26.4
729:  go: downloading github.com/aws/aws-sdk-go-v2/service/sts v1.30.3
730:  go: downloading github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.15
731:  go: downloading github.com/letsencrypt/boulder v0.0.0-20240620165639-de9c06129bec
732:  go: downloading cloud.google.com/go/iam v1.1.12
733:  go: downloading cloud.google.com/go/longrunning v0.5.11
734:  go: downloading github.com/googleapis/gax-go/v2 v2.13.0
735:  go: downloading google.golang.org/genproto v0.0.0-20240730163845-b1a4ccb954bf
736:  go: downloading google.golang.org/grpc v1.65.0
737:  go: downloading google.golang.org/genproto/googleapis/api v0.0.0-20240725223205-93522f1f2a9f
738:  go: downloading github.com/cenkalti/backoff/v3 v3.2.2
739:  go: downloading github.com/go-jose/go-jose/v4 v4.0.2
740:  go: downloading github.com/hashicorp/errwrap v1.1.0
741:  go: downloading github.com/hashicorp/go-cleanhttp v0.5.2
742:  go: downloading github.com/hashicorp/go-multierror v1.1.1
743:  go: downloading github.com/hashicorp/go-retryablehttp v0.7.7
...

751:  go: downloading golang.org/x/oauth2 v0.22.0
752:  go: downloading github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/internal v1.0.0
753:  go: downloading github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0
754:  go: downloading github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2
755:  go: downloading github.com/containerd/stargz-snapshotter/estargz v0.14.3
756:  go: downloading github.com/google/go-cmp v0.6.0
757:  go: downloading github.com/docker/docker v26.1.4+incompatible
758:  go: downloading github.com/google/go-github/v55 v55.0.0
759:  go: downloading github.com/xanzy/go-gitlab v0.107.0
760:  go: downloading k8s.io/api v0.28.3
761:  go: downloading k8s.io/client-go v0.28.3
762:  go: downloading k8s.io/utils v0.0.0-20240502163921-fe8a2dddb1d0
763:  go: downloading github.com/theupdateframework/go-tuf v0.7.0
764:  go: downloading github.com/moby/term v0.5.0
765:  go: downloading github.com/docker/docker-credential-helpers v0.8.0
766:  go: downloading github.com/go-openapi/errors v0.22.0
767:  go: downloading github.com/go-openapi/validate v0.24.0
...

939:  IMG: ghcr.io/securesign/secure-sign-operator:dev-c9d3fb477ea16b4a12f91c9e4f9707d17a4b8df9
940:  BUNDLE_IMG: ghcr.io/securesign/secure-sign-operator-bundle:dev-c9d3fb477ea16b4a12f91c9e4f9707d17a4b8df9
941:  CATALOG_IMG: ghcr.io/securesign/secure-sign-operator-fbc:dev-c9d3fb477ea16b4a12f91c9e4f9707d17a4b8df9
942:  NEW_OLM_CHANNEL: rhtas-operator.v1.4.0
943:  OCP_VERSION: v4.19
944:  REGISTRY_AUTH_FILE: /tmp/config.json
945:  TEST_BASE_CATALOG: registry.redhat.io/redhat/redhat-operator-index:v4.19
946:  TEST_TARGET_CATALOG: ghcr.io/securesign/secure-sign-operator-fbc:dev-c9d3fb477ea16b4a12f91c9e4f9707d17a4b8df9
947:  ##[endgroup]
948:  Running Suite: Trusted Artifact Signer E2E Suite - /home/runner/work/secure-sign-operator/secure-sign-operator/test/e2e
949:  =======================================================================================================================
950:  Random Seed: 1764035804
951:  Will run 8 of 8 specs
952:  •••
953:  ------------------------------
954:  • [FAILED] [301.434 seconds]
955:  Operator upgrade [It] Upgrade operator
956:  /home/runner/work/secure-sign-operator/secure-sign-operator/test/e2e/upgrade_test.go:158
957:  Timeline >>
958:  [FAILED] in [It] - /home/runner/work/secure-sign-operator/secure-sign-operator/test/e2e/upgrade_test.go:176 @ 11/25/25 02:06:44.335
959:  ----------------------- Dumping operator resources -----------------------
960:  Catalog:
961:  rhtas-operator-catalog ready: true
962:  Extension:
963:  rhtas-operator version: 1.3.1 ready: true
964:  ----------------------- Dumping namespace upgrade-test-4kqvr -----------------------
965:  << Timeline
966:  [FAILED] Timed out after 300.003s.
967:  Expected
968:  <bool>: false
969:  to be true
970:  In [It] at: /home/runner/work/secure-sign-operator/secure-sign-operator/test/e2e/upgrade_test.go:176 @ 11/25/25 02:06:44.335
971:  ------------------------------
972:  SSSS
973:  Summarizing 1 Failure:
974:  [FAIL] Operator upgrade [It] Upgrade operator
975:  /home/runner/work/secure-sign-operator/secure-sign-operator/test/e2e/upgrade_test.go:176
976:  Ran 4 of 8 Specs in 599.765 seconds
977:  FAIL! -- 3 Passed | 1 Failed | 0 Pending | 4 Skipped
978:  --- FAIL: TestE2e (599.77s)
979:  FAIL
...

982:  ?   	github.com/securesign/operator/test/e2e/support	[no test files]
983:  ?   	github.com/securesign/operator/test/e2e/support/condition	[no test files]
984:  ?   	github.com/securesign/operator/test/e2e/support/kubernetes	[no test files]
985:  ?   	github.com/securesign/operator/test/e2e/support/kubernetes/olm	[no test files]
986:  ?   	github.com/securesign/operator/test/e2e/support/steps	[no test files]
987:  ?   	github.com/securesign/operator/test/e2e/support/tas	[no test files]
988:  ?   	github.com/securesign/operator/test/e2e/support/tas/cli	[no test files]
989:  ?   	github.com/securesign/operator/test/e2e/support/tas/ctlog	[no test files]
990:  ?   	github.com/securesign/operator/test/e2e/support/tas/fulcio	[no test files]
991:  ?   	github.com/securesign/operator/test/e2e/support/tas/rekor	[no test files]
992:  ?   	github.com/securesign/operator/test/e2e/support/tas/securesign	[no test files]
993:  ?   	github.com/securesign/operator/test/e2e/support/tas/trillian	[no test files]
994:  ?   	github.com/securesign/operator/test/e2e/support/tas/tsa	[no test files]
995:  ?   	github.com/securesign/operator/test/e2e/support/tas/tuf	[no test files]
996:  FAIL
997:  ##[error]Process completed with exit code 1.
998:  ##[group]Run actions/upload-artifact@v4
