
CFE-1134: Watch infrastructure and update AWS tags #1148

Open · wants to merge 1 commit into base: master

Conversation

chiragkyal
Member

@chiragkyal chiragkyal commented Sep 23, 2024

The PR introduces the following changes:

  • The ingress controller now watches for changes to the Infrastructure object. This ensures that any modification to the user-defined tags (platform.AWS.ResourceTags) triggers an update of the load balancer service.

  • Treat the awsLBAdditionalResourceTags annotation as a managed annotation. Any change to the user-defined tags in the Infrastructure object is reflected in this annotation, prompting an update of the load balancer service.

Implements: CFE-1134
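For context, the annotation in question takes a comma-separated key=value list (as seen later in this thread, e.g. `new-key=new-value`). A minimal sketch of how Infrastructure ResourceTags map onto that annotation value; the type and helper names here are hypothetical, not the operator's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// ResourceTag mirrors the shape of a platform.AWS.ResourceTags entry.
type ResourceTag struct {
	Key   string
	Value string
}

// tagsAnnotationValue joins tags into the "k1=v1,k2=v2" format that the
// service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags
// annotation expects.
func tagsAnnotationValue(tags []ResourceTag) string {
	pairs := make([]string, 0, len(tags))
	for _, t := range tags {
		pairs = append(pairs, fmt.Sprintf("%s=%s", t.Key, t.Value))
	}
	return strings.Join(pairs, ",")
}

func main() {
	tags := []ResourceTag{{"Owner", "QE"}, {"CaseID", "OCP-76984"}}
	fmt.Println(tagsAnnotationValue(tags)) // Owner=QE,CaseID=OCP-76984
}
```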

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Sep 23, 2024
@openshift-ci-robot
Contributor

openshift-ci-robot commented Sep 23, 2024

@chiragkyal: This pull request references CFE-1134 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.18.0" version, but no target version was set.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@chiragkyal
Member Author

/retest

@chiragkyal chiragkyal force-pushed the aws-tags branch 2 times, most recently from ea36409 to f2e5cf8 Compare October 3, 2024 07:05
@openshift-ci-robot
Contributor

openshift-ci-robot commented Oct 3, 2024

@chiragkyal: This pull request references CFE-1134 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.18.0" version, but no target version was set.

In response to this:

The PR introduces the following changes:

  • The ingress controller now watches for the Infrastructure object changes. This ensures that any modifications to user-defined tags (platform.AWS.ResourceTags) trigger an update of the load balancer service.

  • The logic for determining load balancer service updates now considers the awsLBAdditionalResourceTags annotation as a managed annotation. Any changes to user-defined tags in the Infrastructure object will be reflected in this annotation, prompting an update to the load balancer service.

    • Changing awsLBAdditionalResourceTags annotation won't mark the IngressController and the operator as Upgradable=False.

Implements: CFE-1134


@chiragkyal chiragkyal changed the title CFE-1134: [WIP] Watch infrastructure and update AWS tags CFE-1134: Watch infrastructure and update AWS tags Oct 3, 2024
@openshift-ci-robot
Contributor

openshift-ci-robot commented Oct 7, 2024

@chiragkyal: This pull request references CFE-1134 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.18.0" version, but no target version was set.

In response to this:

The PR introduces the following changes:

  • The ingress controller now watches for the Infrastructure object changes. This ensures that any modifications to user-defined tags (platform.AWS.ResourceTags) trigger an update of the load balancer service.

  • Consider the awsLBAdditionalResourceTags annotation as a managed annotation. Any changes to user-defined tags in the Infrastructure object will be reflected in this annotation, prompting an update to the load balancer service.

    • Updating awsLBAdditionalResourceTags annotation won't mark the IngressController and the operator as Upgradable=False.

Implements: CFE-1134


@chiragkyal
Member Author

/assign @Miciah

@candita
Contributor

candita commented Oct 9, 2024

/assign

@Miciah Miciah (Contributor) left a comment

Could you add an E2E test? (I don't know whether an E2E test can update the ResourceTags in the infrastructure config status.)

pkg/operator/controller/ingress/controller.go (outdated; resolved)
@@ -134,6 +136,12 @@ func New(mgr manager.Manager, config Config) (controller.Controller, error) {
if err := c.Watch(source.Kind[client.Object](operatorCache, &configv1.Proxy{}, handler.EnqueueRequestsFromMapFunc(reconciler.ingressConfigToIngressController))); err != nil {
return nil, err
}
// Watch for changes to infrastructure config to update user defined tags
if err := c.Watch(source.Kind[client.Object](operatorCache, &configv1.Infrastructure{}, handler.EnqueueRequestsFromMapFunc(reconciler.ingressConfigToIngressController),
predicate.NewPredicateFuncs(hasName(clusterInfrastructureName)),
Contributor comment:

The other watches technically should have this predicate too, and ingressConfigToIngressController should be renamed. However, adding the predicate to the other watches and renaming the map function should be addressed in a follow-up.
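For readers unfamiliar with the pattern, the predicate just filters watch events down to the single cluster-scoped object. A simplified stand-in without the controller-runtime dependency (the interface and types here are illustrative only; the operator's hasName operates on client.Object):

```go
package main

import "fmt"

// object is a stand-in for controller-runtime's client.Object; only the
// name accessor matters for this predicate.
type object interface {
	GetName() string
}

type infra struct{ name string }

func (i infra) GetName() string { return i.name }

// hasName returns a filter that admits only the object with the given
// name -- the shape of the function wrapped by
// predicate.NewPredicateFuncs in the watch above.
func hasName(name string) func(object) bool {
	return func(o object) bool { return o.GetName() == name }
}

func main() {
	isCluster := hasName("cluster")
	fmt.Println(isCluster(infra{"cluster"}), isCluster(infra{"other"})) // true false
}
```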

pkg/operator/controller/ingress/load_balancer_service.go (outdated; resolved)
Comment on lines 756 to 759
ignoredAnnotations := managedLoadBalancerServiceAnnotations.Union(sets.NewString(awsLBAdditionalResourceTags))
ignoredAnnotations := managedLoadBalancerServiceAnnotations.Clone()
ignoredAnnotations.Delete(awsLBAdditionalResourceTags)
Contributor comment:

Can't we just use managedLoadBalancerServiceAnnotations now?

Suggested change
ignoredAnnotations := managedLoadBalancerServiceAnnotations.Union(sets.NewString(awsLBAdditionalResourceTags))
ignoredAnnotations := managedLoadBalancerServiceAnnotations.Clone()
ignoredAnnotations.Delete(awsLBAdditionalResourceTags)
return loadBalancerServiceAnnotationsChanged(current, expected, managedLoadBalancerServiceAnnotations)

To elaborate on that question, there are two general rules at play here:

  • First, the status logic sets Upgradeable=False if, and only if, it observes a discrepancy between the "managed" annotations' expected values and the actual values.
  • Second, by the time the status logic runs, there will not be any discrepancy between the expected (desired) annotation values and the actual annotation values.

And these general rules have exceptions:

  • As an exception to the first rule, before this PR, awsLBAdditionalResourceTags wasn't "managed", but even so, we set Upgradeable=False if it had been modified. (This is the logic that you are modifying here.)
  • As an exception to the second rule, if shouldRecreateLoadBalancer indicates that changing an annotation value requires recreating the service, then the desired and actual values can differ when the status logic observes them.

So now that you are making the awsLBAdditionalResourceTags annotation a managed annotation, don't we still want to set Upgradeable=False if the annotation value doesn't match the expected value?
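The first rule above reduces to a comparison over the managed-annotation set: once awsLBAdditionalResourceTags is in that set, the same check covers it with no special case. A rough sketch of that shape (purely illustrative; the signature of the operator's loadBalancerServiceAnnotationsChanged may differ):

```go
package main

import "fmt"

// annotationsChanged reports whether any annotation in the managed set
// differs between the current and expected services -- the kind of
// discrepancy that drives Upgradeable=False in the status logic.
func annotationsChanged(current, expected map[string]string, managed map[string]struct{}) bool {
	for key := range managed {
		if current[key] != expected[key] {
			return true
		}
	}
	return false
}

func main() {
	const tagsKey = "service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags"
	managed := map[string]struct{}{tagsKey: {}}
	current := map[string]string{tagsKey: "Owner=QE"}
	expected := map[string]string{tagsKey: "Owner=None"}
	// A managed annotation differing from its expected value is reported.
	fmt.Println(annotationsChanged(current, expected, managed)) // true
}
```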

Member Author reply:

Thank you for the detailed explanation on how the status logic works and how it sets Upgradeable=False, as well as the exception that existed with awsLBAdditionalResourceTags before this PR. Earlier, I was under the impression that the status logic would still set Upgradeable=False even if awsLBAdditionalResourceTags was updated by the controller.

So now that you are making the awsLBAdditionalResourceTags annotation a managed annotation, don't we still want to set Upgradeable=False if the annotation value doesn't match the expected value?

Since awsLBAdditionalResourceTags will now be managed by the controller, and we still want to set Upgradeable=False if it’s updated by something other than the ingress controller, it does indeed make sense to use managedLoadBalancerServiceAnnotations directly in this logic. This way, the status logic will behave consistently for managed annotations when any discrepancy is observed.

I've removed the loadBalancerServiceTagsModified() function and used loadBalancerServiceAnnotationsChanged() directly inside loadBalancerServiceIsUpgradeable() and also added some comments for clearer understanding of the flow.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Oct 15, 2024

@chiragkyal: This pull request references CFE-1134 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.18.0" version, but no target version was set.

In response to this:

The PR introduces the following changes:

  • The ingress controller now watches for the Infrastructure object changes. This ensures that any modifications to user-defined tags (platform.AWS.ResourceTags) trigger an update of the load balancer service.

  • Consider the awsLBAdditionalResourceTags annotation as a managed annotation. Any changes to user-defined tags in the Infrastructure object will be reflected in this annotation, prompting an update to the load balancer service.

Implements: CFE-1134



openshift-ci bot commented Oct 15, 2024

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from candita. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@chiragkyal
Member Author

Could you add an E2E test? (I don't know whether an E2E test can update the ResourceTags in the infrastructure config status.)

I need to try it to see whether updating the infrastructure config status is possible through E2E. I used the kubectl edit-status infrastructure cluster command to update the status manually while doing some testing. I need to check whether something similar can be done through E2E.

Having said that, do you think we should get QE sign-off for this PR?

@Miciah
Contributor

Miciah commented Oct 16, 2024

I need to try it to see whether updating the infrastructure config status is possible through E2E. I used the kubectl edit-status infrastructure cluster command to update the status manually while doing some testing. I need to check whether something similar can be done through E2E.

The tests in test/e2e/configurable_route_test.go update the ingress cluster config status; maybe that's useful as a reference or precedent for adding a similar E2E test to this PR.

What I'm wondering is whether the test can update the infrastructure config status without some other controller stomping the changes, and whether there could be other reasons specific to the infrastructures resource or resourceTags API field why an E2E test should not or cannot update it.

Having said that, do you think we should get QE sign-off for this PR?

As a general matter, we should have QE sign-off for this PR. QE might prefer to do pre-merge testing as well.

Is day2 tags support being handled by a specific group of QA engineers, or are the QA engineers for each affected component responsible for testing the feature? Cc: @lihongan.

@chiragkyal
Member Author

The tests in test/e2e/configurable_route_test.go update the ingress cluster config status; maybe that's useful as a reference or precedent for adding a similar E2E test to this PR.
What I'm wondering is whether the test can update the infrastructure config status without some other controller stomping the changes, and whether there could be other reasons specific to the infrastructures resource or resourceTags API field why an E2E test should not or cannot update it.

I just pushed a commit to add an E2E test. It works fine locally; I hope it will work on CI as well.

Is day2 tags support being handled by a specific group of QA engineers, or are the QA engineers for each affected component responsible for testing the feature?

The QA engineers for each affected component are testing this feature.
/cc @lihongan

@openshift-ci openshift-ci bot requested a review from lihongan October 16, 2024 20:14
@chiragkyal
Member Author

/retest-required

@lihongan
Contributor

Did a pre-merge test on a standalone OCP cluster: it can add new tag key/value pairs and update existing tag values, but it cannot delete user-added tags. See:

$ oc get clusterversion
NAME      VERSION                                                AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.18.0-0.test-2024-10-18-013151-ci-ln-4jn9mbt-latest   True        False         59m     Cluster version is 4.18.0-0.test-2024-10-18-013151-ci-ln-4jn9mbt-latest

$ kubectl edit infrastructure cluster --subresource='status'

$ oc get infrastructures.config.openshift.io cluster -oyaml | yq .status.platformStatus
aws:
  region: us-east-2
  resourceTags:
    - key: Owner
      value: QE
    - key: CaseID
      value: OCP-76984
type: AWS

$ aws elb describe-tags --load-balancer-name a8a32335a6697415e9d55bafce2e6060 --output yaml
TagDescriptions:
- LoadBalancerName: a8a32335a6697415e9d55bafce2e6060
  Tags:
  - Key: kubernetes.io/service-name
    Value: openshift-ingress/router-default
  - Key: Owner
    Value: QE
  - Key: CaseID
    Value: OCP-76984
  - Key: kubernetes.io/cluster/hongli-tags-z7s5b
    Value: owned

// edit the status: remove one tag pair and update one tag value
$ oc get infrastructures.config.openshift.io cluster -oyaml | yq .status.platformStatus
aws:
  region: us-east-2
  resourceTags:
    - key: Owner
      value: None
type: AWS

$ aws elb describe-tags --load-balancer-name a8a32335a6697415e9d55bafce2e6060 --output yaml
TagDescriptions:
- LoadBalancerName: a8a32335a6697415e9d55bafce2e6060
  Tags:
  - Key: kubernetes.io/service-name
    Value: openshift-ingress/router-default
  - Key: Owner
    Value: None
  - Key: CaseID
    Value: OCP-76984
  - Key: kubernetes.io/cluster/hongli-tags-z7s5b
    Value: owned

@chiragkyal please confirm whether that's expected.
I will try to check HCP later.

@lihongan
Contributor

lihongan commented Oct 18, 2024

Tags can be added to a newly created NLB custom IngressController, but when the tags are updated in the Infrastructure object, the NLB is not updated accordingly.

$ oc get infrastructures.config.openshift.io cluster -oyaml | yq .status.platformStatus
aws:
  region: us-east-2
  resourceTags:
    - key: caseid
      value: ocp-88888
type: AWS

$ aws elbv2 describe-tags --resource-arns arn:aws:elasticloadbalancing:us-east-2:******:loadbalancer/net/a84a56f6f236b4f77b0032aaa037882d/9b00573907e8e952 --output yaml
TagDescriptions:
- ResourceArn: arn:aws:elasticloadbalancing:us-east-2:******:loadbalancer/net/a84a56f6f236b4f77b0032aaa037882d/9b00573907e8e952
  Tags:
  - Key: kubernetes.io/service-name
    Value: openshift-ingress/router-nlb
  - Key: caseid
    Value: ocp-88888
  - Key: kubernetes.io/cluster/hongli-tags-z7s5b
    Value: owned

// updated with new tags, but the NLB is not changed
$ oc get infrastructures.config.openshift.io cluster -oyaml | yq .status.platformStatus
aws:
  region: us-east-2
  resourceTags:
    - key: new-key
      value: new-value
type: AWS

$ aws elbv2 describe-tags --resource-arns arn:aws:elasticloadbalancing:us-east-2:******:loadbalancer/net/a84a56f6f236b4f77b0032aaa037882d/9b00573907e8e952 --output yaml
TagDescriptions:
- ResourceArn: arn:aws:elasticloadbalancing:us-east-2:******:loadbalancer/net/a84a56f6f236b4f77b0032aaa037882d/9b00573907e8e952
  Tags:
  - Key: kubernetes.io/service-name
    Value: openshift-ingress/router-nlb
  - Key: caseid
    Value: ocp-88888
  - Key: kubernetes.io/cluster/hongli-tags-z7s5b
    Value: owned

This looks like a bug with NLB? Inspecting the NLB service, we can see the annotation with the updated tags:

$ oc -n openshift-ingress get svc router-nlb -oyaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: new-key=new-value
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "4"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "2"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    traffic-policy.network.alpha.openshift.io/local-with-fallback: ""

@lihongan
Contributor

It looks like kubernetes/kubernetes#96939 was the fix for NLB, but it was closed.

@chiragkyal
Member Author

chiragkyal commented Oct 18, 2024

@chiragkyal please confirm if that's expected.

It looks like the tags are getting merged with the existing AWS resource tags. I think we can only control the aws-load-balancer-additional-resource-tags annotation values; what happens next depends on how the consumer of that annotation reads it.
As per https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/add-remove-tags.html, AWS has a remove-tags API for removing tags, but I believe upstream only uses the add-tags API to add tags, so the tags never get removed.
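The observed behavior is what you would expect from a reconciler that only upserts. A toy model of add-only tag application (purely illustrative; this is not cloud-provider-aws code):

```go
package main

import "fmt"

// applyAddOnly mimics a provider that only calls the AddTags API:
// desired tags are upserted onto the load balancer, but tags absent
// from desired are never removed -- matching the pre-merge test result
// where a deleted ResourceTags entry survived on the AWS resource.
func applyAddOnly(existing, desired map[string]string) map[string]string {
	merged := make(map[string]string, len(existing))
	for k, v := range existing {
		merged[k] = v
	}
	for k, v := range desired {
		merged[k] = v
	}
	return merged
}

func main() {
	existing := map[string]string{"Owner": "QE", "CaseID": "OCP-76984"}
	desired := map[string]string{"Owner": "None"} // CaseID was deleted upstream
	fmt.Println(applyAddOnly(existing, desired))  // CaseID survives the merge
}
```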

@Miciah is there a way we can control this behavior?

@chiragkyal
Member Author

looks kubernetes/kubernetes#96939 is for NLB fix but closed

Okay, it looks like an existing issue then.

@Miciah
Contributor

Miciah commented Oct 18, 2024

Yeah, it looks like the behavior would need to be corrected in cloud-provider-aws. However, the upstream maintainers' usual response is to suggest using aws-load-balancer-controller instead: kubernetes/cloud-provider-aws#113 (comment). This has been an issue with other features that depend on cloud-provider-aws support as well.

Absent support in cloud-provider-aws, these are the options that come to mind:

  • Change OpenShift to use aws-load-balancer-controller instead of cloud-provider-aws for NLBs. This would be a major change, and it isn't one we could make in time for 4.18.
  • Delete and recreate the NLB for tags updates. This would disrupt traffic, so it isn't really practical.
  • Update tags directly from cluster-ingress-operator. We would need to make sure we had the necessary permissions in the CredentialsRequest and then write the client code, but then I think we could have ensureLoadBalancerService handle the AWS resource tags updates itself, and it shouldn't interfere with cloud-provider-aws's operation as cloud-provider-aws evidently isn't reconciling tags anyway. I don't love this option, but it seems like the only practical one.

Is handling updates a hard requirement? openshift/enhancements#1700 is ambiguous:

If the userTags field is changed post-install, there is no guarantee about how an in-cluster operator will respond to the change. Some operators may reconcile the change and change tags on the AWS resource. Some operators may ignore the change. However, if tags are removed from userTags, the tag will not be removed from the AWS resource.

But also:

For the resources created and managed by hosted control plane, cluster api provider for aws reconciles the user tags on AWS resources. The hosted control plane updates the infrastructure.config.openshift.io resource to reflect new tags in resourceTags. The OpenShift operators, both core and non-core (managed by RedHat), reconcile the respective AWS resources created and managed by them.

The first blockquote was present from the earlier implementation of the feature, so maybe it was supposed to be removed. I'll follow up on openshift/enhancements#1700.

@Miciah
Contributor

Miciah commented Oct 18, 2024

I started a discussion here: https://github.com/openshift/enhancements/pull/1700/files#r1806975007

@chiragkyal
Member Author

chiragkyal commented Oct 21, 2024

Yeah, it looks like the behavior would need to be corrected in cloud-provider-aws. However, the upstream maintainers' usual response is to suggest using aws-load-balancer-controller instead: kubernetes/cloud-provider-aws#113 (comment). This has been an issue with other features that depend on cloud-provider-aws support as well.

Okay. The lack of tags reconciliation support for NLB in cloud-provider-aws is an existing issue that will require significant changes and discussions. I don't believe that should be part of this feature implementation. I think we need to document this exception somewhere until it is resolved.

/cc @TrilokGeer

@lihongan
Contributor

@chiragkyal Also did pre-merge testing with HyperShift and got the same results.

@Miciah
Contributor

Miciah commented Oct 21, 2024

Okay. The lack of tags reconciliation support for NLB in cloud-provider-aws is an existing issue that will require significant changes and discussions. I don't believe that should be part of this feature implementation. I think we need to document this exception somewhere until it is resolved.

Do we need confirmation that this exception is acceptable? Would the exception be documented in openshift/enhancements#1700?

Aside from the matter of reconciling tags for NLBs managed by cloud-controller-manager/cloud-provider-aws, is this PR ready for review?

@chiragkyal
Member Author

Do we need confirmation that this exception is acceptable? Would the exception be documented in openshift/enhancements#1700?

I had a discussion with @TrilokGeer on this. He'll check with other stakeholders regarding this issue and once accepted we can document the exception in the EP.

Aside from the matter of reconciling tags for NLBs managed by cloud-controller-manager/cloud-provider-aws, is this PR ready for review?

I believe so. And if I recall correctly, we need another review from @candita or @alebedev87 before final approval.

// awsLBAdditionalResourceTags is set correctly on the
// loadBalancer service associated with the default Ingress Controller.
func TestAWSResourceTagsChanged(t *testing.T) {
t.Parallel()
@Miciah Miciah (Contributor) commented Oct 22, 2024:

Generally, any test that modifies cluster config should be a serial test. I think that that general rule holds true here. We want to avoid nondeterministic test results should changing the aws-load-balancer-additional-resource-tags annotation somehow affect other tests.

Member Author reply:

Thanks for the suggestion. I agree that we should run this test in serial. I've removed t.Parallel() and added a go-doc comment for the function explaining this.

Contributor reply:

Thanks! You also need to update TestAll.

Member Author reply:

Ah! right. I've updated TestAll and moved TestAWSResourceTagsChanged into the serial section.

test/e2e/operator_test.go (outdated; resolved)
test/e2e/operator_test.go (outdated; resolved)
test/e2e/operator_test.go (outdated; resolved)
Comment on lines +1352 to +1373
err := wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
service := &corev1.Service{}
if err := kclient.Get(context.Background(), controller.LoadBalancerServiceName(defaultIC), service); err != nil {
t.Logf("failed to get service %s: %v", controller.LoadBalancerServiceName(defaultIC), err)
return false, nil
}
if actualTags, ok := service.Annotations[awsLBAdditionalResourceTags]; !ok {
t.Logf("load balancer has no %q annotation: %v", awsLBAdditionalResourceTags, service.Annotations)
return false, nil
} else if actualTags != expectedTags {
t.Logf("expected %s, found %s", expectedTags, actualTags)
return false, nil
}
return true, nil
})
if err != nil {
t.Fatalf("timed out waiting for the %s annotation to be updated: %v", awsLBAdditionalResourceTags, err)
}
Contributor comment:

Can you use assertServiceAnnotation here?

Member Author reply:

assertServiceAnnotation does not use wait.PollImmediate, and I think we should wait for the controller to update the annotation using the polling loop. I checked other tests like TestInternalLoadBalancer and TestAWSLBTypeChange which are also using similar logic. If it's safe to use assertServiceAnnotation here, then we can update it.

Contributor reply:

As we discussed on the call, I'm fine with not using assertServiceAnnotation. You could add a code comment explaining why you cannot use assertServiceAnnotation, but I'm fine with the test in any case.

Member Author reply:

Sure, I've added the following code comment explaining the reason

	// Use a polling loop instead of assertServiceAnnotation since
	// the operator might not update the annotation immediately.

- Ingress controller now monitors changes to the Infrastructure object,
ensuring that modifications to user-defined AWS ResourceTags (platform.AWS.ResourceTags) trigger updates to the load balancer service.
- Consider awsLBAdditionalResourceTags annotation as a managed annotation.

Signed-off-by: chiragkyal <ckyal@redhat.com>

openshift-ci bot commented Oct 24, 2024

@chiragkyal: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name                             Commit   Required  Rerun command
ci/prow/e2e-aws-ovn-single-node       a556017  no        /test e2e-aws-ovn-single-node
ci/prow/e2e-aws-operator              a556017  yes       /test e2e-aws-operator
ci/prow/e2e-aws-operator-techpreview  a556017  no        /test e2e-aws-operator-techpreview
ci/prow/e2e-aws-ovn                   a556017  yes       /test e2e-aws-ovn
ci/prow/e2e-aws-ovn-upgrade           a556017  yes       /test e2e-aws-ovn-upgrade
ci/prow/e2e-hypershift                a556017  yes       /test e2e-hypershift
ci/prow/e2e-aws-ovn-techpreview      a556017  no        /test e2e-aws-ovn-techpreview
ci/prow/e2e-aws-ovn-serial            a556017  yes       /test e2e-aws-ovn-serial
ci/prow/e2e-aws-gatewayapi            a556017  no        /test e2e-aws-gatewayapi


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

Labels
jira/valid-reference Indicates that this PR references a valid Jira ticket of any type.
5 participants