
Conversation

@mkowalski
Contributor

No description provided.

openshift-ci-robot added the jira/valid-reference label on Jul 10, 2023
openshift-ci bot added the do-not-merge/work-in-progress label on Jul 10, 2023
@openshift-ci-robot
Contributor

openshift-ci-robot commented Jul 10, 2023

@mkowalski: This pull request references OPNET-320 which is a valid jira issue.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci
Contributor

openshift-ci bot commented Jul 10, 2023

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@openshift-ci
Contributor

openshift-ci bot commented Jul 10, 2023

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: mkowalski
Once this PR has been reviewed and has the lgtm label, please assign jcaamano for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack

@mkowalski
Contributor Author

Waiting for openshift/kubernetes#1645

@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack

@mkowalski
Contributor Author

/test build04-dry

@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack

This is confusing because the error seems to be network-related, yet it cannot be caused by anything in this PR.

 level=error msg=InstallerPodNetworkingDegraded: Pod "installer-8-ci-op-glg34mcb-84b46-2kbl8-master-1" on node "ci-op-glg34mcb-84b46-2kbl8-master-1" observed degraded networking: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-8-ci-op-glg34mcb-84b46-2kbl8-master-1_openshift-etcd_74ef075e-b27f-4b73-ab7f-7840c503326d_0(cd9a896b977e2e067472395b8c8f55ea0c9415d8b6445ee3b6dc1d96355c585d): error adding pod openshift-etcd_installer-8-ci-op-glg34mcb-84b46-2kbl8-master-1 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [openshift-etcd/installer-8-ci-op-glg34mcb-84b46-2kbl8-master-1/74ef075e-b27f-4b73-ab7f-7840c503326d:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openshift-etcd/installer-8-ci-op-glg34mcb-84b46-2kbl8-master-1 cd9a896b977e2e067472395b8c8f55ea0c9415d8b6445ee3b6dc1d96355c585d network default NAD default] [openshift-etcd/installer-8-ci-op-glg34mcb-84b46-2kbl8-master-1 cd9a896b977e2e067472395b8c8f55ea0c9415d8b6445ee3b6dc1d96355c585d network default NAD default] failed to configure pod interface: failed to add pod route 10.128.0.0/14 via 10.130.0.1: file exists 
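For what it's worth, the trailing "file exists" in that CNI error is just the text of the EEXIST errno the kernel returns when an already-present route is added again, so the failure points at a duplicate route add rather than anything introduced by this PR. A minimal Go sketch (illustrative only, not ovn-kubernetes code) showing where that string comes from:

package main

import (
	"errors"
	"fmt"
	"syscall"
)

func main() {
	// syscall.EEXIST renders as "file exists" -- the suffix of the CNI error
	// above, produced when the pod route 10.128.0.0/14 via 10.130.0.1 is
	// added a second time.
	var err error = syscall.EEXIST
	fmt.Println(err) // prints: file exists

	// Route-management code would normally treat this errno as
	// "route already present" and either ignore it or replace the route.
	if errors.Is(err, syscall.EEXIST) {
		fmt.Println("route already present; ignore or replace instead of failing")
	}
}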

@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack

Unfortunately, this is a valid failure.



[sig-network] [Feature:IPv6DualStack] should have ipv4 and ipv6 internal node ip [Suite:openshift/conformance/parallel] [Suite:k8s]

Run #0: Failed (2s)
{ fail [test/e2e/network/dual_stack.go:68]: Expected <int>: 1 to equal <int>: 2 Error: exit with code 1 Ginkgo exit error 1: exit with code 1}
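Roughly, the assertion at dual_stack.go:68 counts a node's InternalIP addresses and expects one per IP family, so "Expected <int>: 1 to equal <int>: 2" means a node advertised only a single internal IP. A hedged Go sketch of that kind of check (the exact upstream test body may differ; the node object here is fabricated for illustration):

package main

import (
	"fmt"
	"net"

	corev1 "k8s.io/api/core/v1"
)

// internalIPs collects a node's InternalIP addresses.
func internalIPs(node corev1.Node) []string {
	var ips []string
	for _, addr := range node.Status.Addresses {
		if addr.Type == corev1.NodeInternalIP {
			ips = append(ips, addr.Address)
		}
	}
	return ips
}

func main() {
	node := corev1.Node{
		Status: corev1.NodeStatus{
			Addresses: []corev1.NodeAddress{
				// The failing run effectively saw only one InternalIP; a healthy
				// dual-stack node would also report an IPv6 address here.
				{Type: corev1.NodeInternalIP, Address: "192.168.100.46"},
			},
		},
	}
	ips := internalIPs(node)
	// The check behind "Expected <int>: 1 to equal <int>: 2": a dual-stack
	// node must expose exactly two internal IPs, one IPv4 and one IPv6.
	fmt.Printf("internal IPs: %v (want 2, one per family)\n", ips)
	for _, ip := range ips {
		fmt.Printf("  %s is IPv4: %v\n", ip, net.ParseIP(ip).To4() != nil)
	}
}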

@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-4.14-e2e-vsphere-ovn-dualstack

@openshift-ci-robot
Contributor

@mkowalski: job(s): pull-ci-openshift-cluster-network-operator-4.14-e2e-vsphere-ovn-dualstack either don't exist or were not found to be affected, and cannot be rehearsed

@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-release-4.14-e2e-vsphere-ovn-dualstack

@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack

1 similar comment
@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack

@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack
/pj-rehearse pull-ci-openshift-cluster-network-operator-release-4.14-e2e-vsphere-ovn-dualstack

@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack

@openshift-ci-robot
Contributor

@mkowalski, pj-rehearse: unable to prepare a candidate for rehearsal; rehearsals will not be run. This could be due to a branch that needs to be rebased. ERROR:

couldn't rebase repo client

@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack

1 similar comment
@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack

@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-release-4.14-e2e-vsphere-ovn-dualstack

@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack

1 similar comment
@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack

@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-release-4.14-e2e-vsphere-ovn-dualstack

@mkowalski
Contributor Author

mkowalski commented Aug 21, 2023

Aug 21 10:44:02.744333 ci-op-4gcj1qgg-e84a7-gsp5q-master-0 kubenswrapper[4184]: I0821 10:44:02.742477    4184 flags.go:64] FLAG: --node-ip="192.168.100.46,fd65:a1a8:60ad:271c::51"

From the journalctl output of one of the nodes, this seems sane.
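The --node-ip value above should carry exactly one address per IP family for a dual-stack node, which is the "sane" part. A small Go sketch of that check (names are illustrative, not MCO or kubelet code):

package main

import (
	"fmt"
	"net"
	"strings"
)

// checkDualStackNodeIP verifies that a comma-separated --node-ip value
// contains exactly one IPv4 and one IPv6 address.
func checkDualStackNodeIP(val string) error {
	parts := strings.Split(val, ",")
	if len(parts) != 2 {
		return fmt.Errorf("expected two addresses, got %d", len(parts))
	}
	var v4, v6 int
	for _, p := range parts {
		ip := net.ParseIP(strings.TrimSpace(p))
		if ip == nil {
			return fmt.Errorf("%q is not a valid IP address", p)
		}
		if ip.To4() != nil {
			v4++
		} else {
			v6++
		}
	}
	if v4 != 1 || v6 != 1 {
		return fmt.Errorf("want one IPv4 and one IPv6 address, got %d IPv4 / %d IPv6", v4, v6)
	}
	return nil
}

func main() {
	// Value taken from the kubelet flag in the journal above; prints <nil>.
	fmt.Println(checkDualStackNodeIP("192.168.100.46,fd65:a1a8:60ad:271c::51"))
}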


Aug 21 10:43:53.431277 ci-op-4gcj1qgg-e84a7-gsp5q-master-0 resolv-prepender.sh[3455]: Error: initializing source docker://registry.apps.build01-us-west-2.vmc.ci.openshift.org/ci-op-4gcj1qgg/stable@sha256:e3e972ac8e423b829ed798885d4e9491eaeab899975a35802f2acc0daec11698: pinging container registry registry.apps.build01-us-west-2.vmc.ci.openshift.org: Get "https://registry.apps.build01-us-west-2.vmc.ci.openshift.org/v2/": dial tcp: lookup registry.apps.build01-us-west-2.vmc.ci.openshift.org on [fd65:a1a8:60ad:271c::1]:53: dial udp [fd65:a1a8:60ad:271c::1]:53: connect: network is unreachable

There are lots of those; I'm not sure whether that's an issue or not. Probably not, because shortly after we can see

Aug 21 10:43:55.710311 ci-op-4gcj1qgg-e84a7-gsp5q-master-0 resolv-prepender.sh[1809]: NM resolv-prepender: Download of baremetal runtime cfg image completed

                        "message": "Multiple errors are preventing progress:\n* Cluster operators authentication, baremetal, cluster-autoscaler, config-operator, control-plane-machine-set, csi-snapshot-controller, dns, etcd, image-registry, ingress, insights, kube-apiserver, kube-controller-manager, kube-scheduler, kube-storage-version-migrator, machine-api, machine-approver, machine-config, marketplace, monitoring, network, node-tuning, openshift-apiserver, openshift-controller-manager, service-ca, storage are not available\n* Could not update imagestream \"openshift/driver-toolkit\" (601 of 858): resource may have been deleted\n* Could not update oauthclient \"console\" (543 of 858): the server does not recognize this resource, check extension API servers\n* Could not update role \"openshift-apiserver/prometheus-k8s\" (842 of 858): resource may have been deleted\n* Could not update role \"openshift-authentication/prometheus-k8s\" (743 of 858): resource may have been deleted\n* Could not update role \"openshift-console-operator/prometheus-k8s\" (781 of 858): resource may have been deleted\n* Could not update role \"openshift-console/prometheus-k8s\" (784 of 858): resource may have been deleted\n* Could not update role \"openshift-controller-manager/prometheus-k8s\" (850 of 858): resource may have been deleted\n* Could not update role \"openshift/copied-csv-viewer\" (665 of 858): resource may have been deleted\n* Could not update rolebinding \"openshift/cluster-samples-operator-openshift-edit\" (491 of 858): resource may have been deleted",
NAME                                       VERSION                                                    AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                                                                                                                                     
baremetal                                                                                                                                          
cloud-controller-manager                   4.14.0-0.ci.test-2023-08-21-102737-ci-op-4gcj1qgg-latest   True        False         False      34m     
cloud-credential                                                                                      True        False         False      40m     
cluster-autoscaler                                                                                                                                 
config-operator                                                                                                                                    
console                                                                                                                                            
control-plane-machine-set                                                                                                                          
csi-snapshot-controller                                                                                                                            
dns                                                                                                                                                
etcd                                                                                                                                               
image-registry                                                                                                                                     
ingress                                                                                                                                            
insights                                                                                                                                           
kube-apiserver                                                                                                                                     
kube-controller-manager                                                                                                                            
kube-scheduler                                                                                                                                     
kube-storage-version-migrator                                                                                                                      
machine-api                                                                                                                                        
machine-approver                                                                                                                                   
machine-config                                                                                                                                     
marketplace                                                                                                                                        
monitoring                                                                                                                                         
network                                                                                                                                            
node-tuning                                                                                                                                        
openshift-apiserver                                                                                                                                
openshift-controller-manager                                                                                                                       
openshift-samples                                                                                                                                  
operator-lifecycle-manager                                                                                                                         
operator-lifecycle-manager-catalog                                                                                                                 
operator-lifecycle-manager-packageserver                                                                                                           
service-ca                                                                                                                                         
storage                                                                                                                                            

                    {
                        "lastTransitionTime": "2023-08-21T10:54:36Z",
                        "lastUpdateTime": "2023-08-21T10:54:36Z",
                        "message": "ReplicaSet \"network-operator-7cf65ccf9f\" has timed out progressing.",
                        "reason": "ProgressDeadlineExceeded",
                        "status": "False",
                        "type": "Progressing"
                    }
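That Progressing=False condition with reason ProgressDeadlineExceeded is what marks the network-operator Deployment rollout as stalled. A hedged client-side sketch of detecting it from the Deployment status (standard Kubernetes API types; clientset wiring omitted, names illustrative):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// stalled reports whether a Deployment stopped progressing, i.e. its
// Progressing condition is False with reason ProgressDeadlineExceeded.
func stalled(d *appsv1.Deployment) bool {
	for _, c := range d.Status.Conditions {
		if c.Type == appsv1.DeploymentProgressing &&
			c.Status == corev1.ConditionFalse &&
			c.Reason == "ProgressDeadlineExceeded" {
			return true
		}
	}
	return false
}

func main() {
	// Condition copied from the status snippet above.
	d := &appsv1.Deployment{
		Status: appsv1.DeploymentStatus{
			Conditions: []appsv1.DeploymentCondition{{
				Type:    appsv1.DeploymentProgressing,
				Status:  corev1.ConditionFalse,
				Reason:  "ProgressDeadlineExceeded",
				Message: `ReplicaSet "network-operator-7cf65ccf9f" has timed out progressing.`,
			}},
		},
	}
	fmt.Println("network-operator rollout stalled:", stalled(d))
}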

@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack

@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack
/pj-rehearse pull-ci-openshift-cluster-network-operator-release-4.14-e2e-vsphere-ovn-dualstack

1 similar comment
@mkowalski
Contributor Author

/pj-rehearse pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack
/pj-rehearse pull-ci-openshift-cluster-network-operator-release-4.14-e2e-vsphere-ovn-dualstack

@openshift-bot
Contributor

Issues in openshift/release go stale after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 15d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label on Nov 1, 2023
openshift-merge-robot added the needs-rebase label on Nov 1, 2023
@openshift-merge-robot
Contributor

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-bot
Contributor

Stale issues in openshift/release rot after 15d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 15d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-ci bot added the lifecycle/rotten label and removed the lifecycle/stale label on Nov 23, 2023
@mkowalski
Contributor Author

/remove-lifecycle rotten

openshift-ci bot removed the lifecycle/rotten label on Nov 23, 2023
@openshift-bot
Contributor

Issues in openshift/release go stale after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 15d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label on Dec 23, 2023
@mkowalski
Contributor Author

/remove-lifecycle stale

openshift-ci bot removed the lifecycle/stale label on Jan 3, 2024
@openshift-bot
Contributor

Issues in openshift/release go stale after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 15d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label on Feb 2, 2024
@mkowalski
Contributor Author

/lifecycle frozen

@openshift-ci
Contributor

openshift-ci bot commented Feb 2, 2024

@mkowalski: The lifecycle/frozen label cannot be applied to Pull Requests.

In response to this:

/lifecycle frozen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-bot
Contributor

Stale issues in openshift/release rot after 15d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 15d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-ci bot added the lifecycle/rotten label and removed the lifecycle/stale label on Feb 17, 2024
@mkowalski
Contributor Author

/lifecycle frozen

@openshift-ci
Contributor

openshift-ci bot commented Feb 19, 2024

@mkowalski: The lifecycle/frozen label cannot be applied to Pull Requests.

In response to this:

/lifecycle frozen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci
Contributor

openshift-ci bot commented Mar 1, 2024

@mkowalski: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
ci/rehearse/openshift/cluster-network-operator/release-4.14/e2e-vsphere-ovn-dualstack | 2e19cd4df6e3f54b07c57cc330742f3f8962d6a4 | link | unknown | /pj-rehearse pull-ci-openshift-cluster-network-operator-release-4.14-e2e-vsphere-ovn-dualstack
ci/rehearse/openshift/cluster-network-operator/master/e2e-vsphere-ovn-dualstack | 18a0ebfffde2172e4e589de2b0d592ac8a0ec7ee | link | unknown | /pj-rehearse pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack
ci/prow/stackrox-stackrox-stackrox-stackrox-check | e7eb61c | link | true | /test stackrox-stackrox-stackrox-stackrox-check
ci/prow/ci-operator-config | e7eb61c | link | true | /test ci-operator-config

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-bot
Contributor

Rotten issues in openshift/release close after 15d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci
Contributor

openshift-ci bot commented Mar 17, 2024

@openshift-bot: Closed this PR.

In response to this:

Rotten issues in openshift/release close after 15d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

