
Conversation

@dlom
Contributor

@dlom dlom commented Mar 17, 2025

xref: HIVE-2804

/assign @2uasimojo

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Mar 17, 2025
@openshift-ci-robot

openshift-ci-robot commented Mar 17, 2025

@dlom: This pull request references HIVE-2804 which is a valid jira issue.


In response to this:

xref: HIVE-2804

/assign @2uasimojo

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


Member

@2uasimojo 2uasimojo left a comment


/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Mar 17, 2025
@2uasimojo
Member

/test all

prow seems ill today

@openshift-ci
Contributor

openshift-ci bot commented Mar 17, 2025

@2uasimojo: No presubmit jobs available for openshift/hive@master


In response to this:

/test all

prow seems ill today

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@dlom
Contributor Author

dlom commented Mar 18, 2025

@2uasimojo: No presubmit jobs available for openshift/hive@master

something seems off

@smg247
Member

smg247 commented Mar 18, 2025

/test all

@openshift-ci
Contributor

openshift-ci bot commented Mar 18, 2025

@dlom: all tests passed!

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@codecov

codecov bot commented Mar 18, 2025

Codecov Report

Attention: Patch coverage is 75.00000% with 17 lines in your changes missing coverage. Please review.

Project coverage is 49.98%. Comparing base (b19773e) to head (8e4c517).
Report is 2 commits behind head on master.

Files with missing lines Patch % Lines
pkg/installmanager/installmanager.go 75.00% 10 Missing and 7 partials ⚠️
Additional details and impacted files


@@            Coverage Diff             @@
##           master    #2612      +/-   ##
==========================================
+ Coverage   49.93%   49.98%   +0.04%     
==========================================
  Files         281      281              
  Lines       33136    33203      +67     
==========================================
+ Hits        16545    16595      +50     
- Misses      15257    15267      +10     
- Partials     1334     1341       +7     
Files with missing lines Coverage Δ
pkg/installmanager/installmanager.go 34.80% <75.00%> (+2.17%) ⬆️

@2uasimojo
Member

/approve

@openshift-ci
Contributor

openshift-ci bot commented Mar 18, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: 2uasimojo, dlom

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 18, 2025
@openshift-merge-bot openshift-merge-bot bot merged commit a6e61d6 into openshift:master Mar 18, 2025
10 checks passed
@2uasimojo
Member

/cherry-pick mce-2.8 mce-2.7 mce-2.6 mce-2.5 mce-2.4

@openshift-cherrypick-robot

@2uasimojo: #2612 failed to apply on top of branch "mce-2.8":

Applying: Cleanse metadata.json on the ClusterProvision object
Using index info to reconstruct a base tree...
M	pkg/installmanager/installmanager.go
Falling back to patching base and 3-way merge...
Auto-merging pkg/installmanager/installmanager.go
CONFLICT (content): Merge conflict in pkg/installmanager/installmanager.go
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
hint: When you have resolved this problem, run "git am --continue".
hint: If you prefer to skip this patch, run "git am --skip" instead.
hint: To restore the original branch and stop patching, run "git am --abort".
hint: Disable this message with "git config advice.mergeConflict false"
Patch failed at 0001 Cleanse metadata.json on the ClusterProvision object


In response to this:

/cherry-pick mce-2.8 mce-2.7 mce-2.6 mce-2.5 mce-2.4

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
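The failed `git am` in the cherry-pick bot's log can be reproduced and resolved by hand, following the hints it prints. Below is a minimal, self-contained sketch in a throwaway repo; the file name, branch names, and contents are illustrative, not Hive's actual history:

```shell
set -eu

# Throwaway repo with a base commit (requires git >= 2.28 for -b)
work=$(mktemp -d)
cd "$work"
git init -q -b main repo
cd repo
git config user.email "dev@example.com"
git config user.name "Dev"
echo "original" > installmanager.go
git add installmanager.go
git commit -qm "base"

# Branch carrying the change we want to cherry-pick
git checkout -qb source
echo "cleansed metadata" > installmanager.go
git commit -qam "Cleanse metadata.json on the ClusterProvision object"
git format-patch -1 --stdout > ../fix.patch

# Target branch has diverged with a conflicting edit
git checkout -q main
echo "diverged" > installmanager.go
git commit -qam "conflicting change"

# 3-way apply fails, leaving conflict markers (as in the bot's log)
git am -3 ../fix.patch || echo "patch conflicted; resolving by hand"

# Resolve the conflict, then continue as the hints suggest
echo "cleansed metadata" > installmanager.go
git add installmanager.go
git am --continue

# The cherry-picked commit is now on the target branch
git log -1 --format=%s
```

In the real flow, the resolved commit would then be pushed and opened as a manual cherry-pick PR against the `mce-2.8` branch.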

2uasimojo added a commit to 2uasimojo/hive that referenced this pull request Aug 5, 2025
Well, mostly.

Previously any time installer added a field to metadata.json, we would
need to evaluate and possibly add a bespoke field and code path for it
to make sure it was supplied to the destroyer at deprovision time.

With this change, we're instead offloading it verbatim to a new Secret
in the ClusterDeployment's namespace, referenced from a new field:
ClusterDeployment.Spec.ClusterMetadata.MetadataJSONSecretRef.

Instead of building the installer's ClusterMetadata structure for the
destroyer with individual fields from the CD's ClusterMetadata, we're
unmarshaling it directly from the contents of that Secret.

(Except in some cases we have to scrub/replace credentials fields -- see
HIVE-2804 / openshift#2612)

For legacy clusters -- those created before this change -- we attempt to
retrofit the new Secret based on the legacy fields. This is best effort
and may not always work. If this results in a hanging deprovision due to
a missing field, the workaround is to modify the contents of the Secret
to add it in; then kill the deprovision pod and the next attempt should
pick up the changes. (If the result is a "successful" deprovision with
leaked resources, the only workaround is to clean up the infra manually.
Sorry.)
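The workaround described above (editing the Secret's metadata.json to add a missing field before retrying the deprovision) can be sketched locally. Everything here is an assumption for illustration: the payload, the `clusterID` field name, and the file handling; against a real cluster you would extract and re-apply the Secret with `oc`/`kubectl` rather than edit a local file:

```shell
set -eu
cd "$(mktemp -d)"

# Stand-in for the metadata.json payload stored in the Secret
cat > metadata.json <<'EOF'
{"clusterName": "mycluster", "infraID": "mycluster-x7k2p"}
EOF

# Add the field the destroyer is missing (field name is hypothetical)
python3 - <<'EOF'
import json
with open("metadata.json") as f:
    md = json.load(f)
md["clusterID"] = "abc-123"  # hypothetical missing field
with open("metadata.json", "w") as f:
    json.dump(md, f, indent=2)
EOF

# Secret data values are base64-encoded; this is what would go back
# into the Secret before killing the deprovision pod to retry
base64 < metadata.json > metadata.b64
```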
2uasimojo added a commit to 2uasimojo/hive that referenced this pull request Aug 5, 2025
2uasimojo added a commit to 2uasimojo/hive that referenced this pull request Aug 15, 2025
Well, mostly.

Previously any time installer added a field to metadata.json, we would
need to evaluate and possibly add a bespoke field and code path for it
to make sure it was supplied to the destroyer at deprovision time.

With this change, we're offloading metadata.json verbatim (except in
some cases we have to scrub/replace credentials fields -- see HIVE-2804
/ openshift#2612) to a new Secret in the ClusterDeployment's namespace,
referenced from a new field:
ClusterDeployment.Spec.ClusterMetadata.MetadataJSONSecretRef.

For legacy clusters -- those created before this change -- we attempt to
retrofit the new Secret based on the legacy fields. This is best effort
and may not always work.

In the future (but not here!) instead of building the installer's
ClusterMetadata structure for the destroyer with individual fields from
the CD's ClusterMetadata, we'll unmarshal it directly from the contents
of this Secret.
2uasimojo added a commit to 2uasimojo/hive that referenced this pull request Aug 19, 2025
2uasimojo added a commit to 2uasimojo/hive that referenced this pull request Sep 9, 2025
2uasimojo added a commit to 2uasimojo/hive that referenced this pull request Sep 23, 2025
2uasimojo added a commit to 2uasimojo/hive that referenced this pull request Sep 23, 2025
2uasimojo added a commit to 2uasimojo/hive that referenced this pull request Sep 30, 2025
2uasimojo added a commit to 2uasimojo/hive that referenced this pull request Oct 1, 2025
2uasimojo added a commit to 2uasimojo/hive that referenced this pull request Oct 1, 2025
2uasimojo added a commit to 2uasimojo/hive that referenced this pull request Oct 16, 2025
2uasimojo added a commit to 2uasimojo/hive that referenced this pull request Oct 24, 2025
2uasimojo added a commit to 2uasimojo/hive that referenced this pull request Oct 28, 2025
2uasimojo added a commit to 2uasimojo/hive that referenced this pull request Oct 28, 2025
2uasimojo added a commit to 2uasimojo/hive that referenced this pull request Oct 29, 2025
2uasimojo added a commit to 2uasimojo/hive that referenced this pull request Oct 31, 2025
2uasimojo added a commit to 2uasimojo/hive that referenced this pull request Oct 31, 2025