
Add single-node-developer Cluster Profile #482

Merged (1 commit) on Nov 24, 2020

Conversation

@rkukura (Contributor) commented Sep 18, 2020:

Add a new ‘single-node-developer’ cluster profile that defines the set of OpenShift payload components, and their configuration, applicable to single-node, resource-constrained, non-production OCP clusters. Such clusters support the development of OpenShift applications that will then be deployed for production on other OCP clusters, such as those using the default ‘self-managed-high-availability’ profile. The ‘single-node-developer’ profile will be defined for and utilized in producing the CodeReady Containers (CRC) product, which runs in a single VM on a developer’s workstation or laptop. This profile is not intended or supported for any kind of production deployment.

From deads2k:

Several teams need to ack. Bob has mentioned them all, so they had a chance to review:

Check the box if you agree to support your operator in this single-node, non-HA, resource-constrained mode (clearly indicated in the doc). Be sure you've read the doc and understand the impact. I have starred the operators that have known impacts in phase two (these are clearly called out in the doc); all are being asked to change their CPU and memory footprints. You should also have gotten an acknowledgement from your PM about the ongoing maintenance tax and the prioritization of a phase two.

If phase two cannot achieve agreement, it should probably merge separately from phase one for a re-comment phase.

  • authentication **
  • cluster-autoscaler
  • config-operator
  • console **
  • csi-snapshot-controller
  • dns
  • etcd **
  • image-registry
  • ingress **
  • insights **
  • kube-apiserver ** (cloud cred operator backfeeds it today and that's going to be gone)
  • kube-controller-manager
  • kube-scheduler
  • kube-storage-version-migrator
  • machine-approver
  • machine-config **
  • marketplace
  • monitoring **
  • network ** (multus)
  • node-tuning
  • openshift-apiserver
  • openshift-controller-manager
  • openshift-samples
  • operator-lifecycle-manager
  • operator-lifecycle-manager-catalog
  • operator-lifecycle-manager-packageserver **
  • service-ca

The excluded operators are listed below. These teams need to ack that they understand the impact of not being present and believe the cluster will be functional without them:

  • cloud-credential
  • machine-api
  • storage

@derekwaynecarr (Member) left a comment:

I will have more comments next week.

Can you clarify which phase of this work is intended for which release?

As written, I assume the desire is for 4.7 to achieve phase 1 and 2?

The testing required during the second phase will depend on what
changes are made to each individual component. If a specialized
manifest is added for a component, an appropriate subset of existing
tests for that component should ideally be run using the new

Member:

Are you advocating that this test run per PR, or as some periodic job? Can you provide a pointer to what tests CRC runs today?

Contributor Author (@rkukura):

In cases where specialized features or config knobs are added to a component specifically for this profile, I'd think those should be tested like any comparable feature in the component. In previous projects I've worked on, that would involve at least basic UT coverage that runs on each PR, but I'm not yet very familiar with the OpenShift testing strategy, so I could use some guidance here.

I will investigate the current CRC testing strategy in more detail and follow up here with the requested info.

Contributor Author (@rkukura):

Clarified component CI testing and added sections on the conformance test job in the update.

@derekwaynecarr (Member) commented:

It would help to classify the goal of the crc user.

From the present UX of crc, the user flow is as follows:

  1. download crc
  2. crc setup -- checks my virtualization setup
  3. crc start -- starts up a single node OpenShift

I am wondering if a single profile is the right outcome for crc.

I see a use case for crc-minimal that gets me the following:

  • etcd
  • kube-*
  • openshift-*

If a user wants to do local development via the console, maybe switch to a crc-dev profile via
crc config set xyz=y. Enabling the crc-dev profile would enable the following:

  • console for developer perspective
  • olm
  • marketplace

The tension is that the proposal starts from a product UX of exclusion rather than addition, and shifts the burden of asking "does this make sense for crc?" to each component team.

It's possible we end up mapping the above to distinct profiles. What would help in this enhancement is the user persona for crc: if there are multiple personas, identify whether there is a user action the user performs to take on each one, and then we can map that back to one or more profiles that guide forward decision making.

@rkukura (Contributor, Author) commented Sep 21, 2020:

It would help to classify the goal of the crc user.

The focus of CRC is primarily (maybe solely?) on the user who wants to do local development of applications for OpenShift (or at least explore what that looks like). The criteria for whether to include or exclude a component should be based on whether that component is potentially useful when developing applications for OpenShift. If so, it should be included. If the component is only applicable to production deployments, then it should be excluded.

The focus of this enhancement is not so much on excluding as many components as possible, but on more cleanly excluding those that really serve no purpose for the CRC user, while enabling optimization where needed to fit into the single-node, limited-memory form factor of a VM running on a laptop.

It would certainly be possible to consider offering multiple distinct profiles for CRC as a follow-on enhancement, but that does not seem necessary for the current target audience.

@cfergeau commented:

It's possible we end up mapping the above to distinct profiles. What would help in this enhancement is the user persona for crc: if there are multiple personas, identify whether there is a user action the user performs to take on each one, and then we can map that back to one or more profiles that guide forward decision making.

Actually I'm tempted to ask you the same question ;) Who would be using crc-minimal? Our goal with crc is to provide easy access to an OpenShift cluster for developers/testers. We want to be as close as possible to a production OpenShift cluster, hence the "exclusion" approach: we disable as few things as possible, and only things that either make no sense or don't work on a single-node cluster, or that consume far too many resources without being a core part of the cluster (thinking of the monitoring stack here). We want to avoid, as much as possible, scenarios where something works on crc but does not work on a production cluster, or vice versa. So the closer we are to a production cluster (from a "components we run" perspective), the better.

@rkukura (Contributor, Author) commented Oct 6, 2020:

Can you clarify which phase of this work is intended for which release?

As written, I assume the desire is for 4.7 to achieve phase 1 and 2?

Added text in the update targeting all of phase 1 and a start on phase 2 for the 4.7 release.

will be available to collaborate with component developers regarding
whether a component should be included in the single-node-developer
cluster profile, and if so, how it might be configured to minimize
resource requirements.

Comment:

Regarding that final risk: at the moment there is no way to differentiate between "yes, this component is excluded from profile XXX on purpose" and "developers of the component forgot to consider profile XXX". It would be nice to have a way to indicate the former, so that we can automatically detect missing profile information.

Contributor Author (@rkukura):

Would it make sense to add an annotation such as:

include.release.openshift.io/single-node-developer=false

to a manifest to formally record the fact that this particular manifest is intentionally not included in the profile?
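
For illustration, a minimal sketch of a manifest carrying that annotation; the Deployment name and namespace here are hypothetical, and only the annotation key itself comes from this proposal:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-operator                  # hypothetical component
      namespace: openshift-example-operator   # hypothetical namespace
      annotations:
        # Formally records that this manifest is intentionally
        # excluded from the single-node-developer profile:
        include.release.openshift.io/single-node-developer: "false"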

Reply:

Yeah, this probably will be useful to have once we start checking manifests for missing annotations (see openshift/oc#618 on that matter)

Contributor Author (@rkukura):

The upcoming update specifies that we will be adding the annotation with a value of "false" to document that manifests were intentionally excluded.

@guillaumerose (Contributor) commented:

While looking at openshift/cluster-version-operator#404, where we implement the cluster profile as an environment variable, mostly for IBM Cloud, we are missing a way for CRC to use this profile through the installer.

I suggest that we add a section saying that the ClusterVersion object will contain a new immutable field in its spec: the profile.
The CVO will load, in order of precedence:

  • the environment variable
  • the profile from the ClusterVersion.
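
A rough sketch of what that could look like, assuming the new spec field is simply named profile; the field is a suggestion in this comment, not an existing part of the ClusterVersion API:

    apiVersion: config.openshift.io/v1
    kind: ClusterVersion
    metadata:
      name: version
    spec:
      clusterID: 11111111-2222-3333-4444-555555555555   # placeholder
      # Proposed immutable field; per the precedence above, the CVO
      # would ignore it if a profile is already set via the CVO
      # deployment's environment variable.
      profile: single-node-developer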

matter of whether the component is included or excluded. For others,
specialized configurations and/or features may be used in this
profile, possibly requiring development and ongoing maintenance
efforts.

Contributor:

With this statement I am amazed that this enhancement is not being reviewed by a wider audience and also put into PM planning, or at least that PMs are aware that team capacity is reduced.

Contributor Author (@rkukura):

I agree we need to find the right set of reviewers, and make sure PMs are fully aware of this effort. Please let me know if there are specific reviewers I should enlist. But I don't expect most components to require specialized manifests or new features, at least initially, so capacity during 4.7 development should not be impacted for most components.

@derekwaynecarr (Member) commented Oct 15, 2020:

@rkukura I think this enhancement inaccurately implies that the burden of defining what is appropriate for CRC use cases is transferred to each component team. This proposal makes the alterations required by CRC visible to each component team, but they are not the responsibility of each team. CRC needs and goals may change in the future, and ultimately CRC will decide which components/operators make sense for its use case (as has already been done).

Contributor Author (@rkukura):

Visibility is emphasized and responsibilities are clarified in the upcoming update.

simplified CRC build tooling should produce a cluster equivalent to
that currently produced, and all existing CRC tests should pass. Phase
one is expected to be completed during development of the 4.7
OpenShift release.

Contributor:

Does this mean other teams implicitly got another epic to support this effort in 4.7?

Reply:

Phase one changes would most likely be similar to openshift/cluster-authentication-operator#352: a one-line addition to the manifests which need to be included. Not sure this would use up so much time?
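
As a sketch of such a change, assuming a manifest that today carries only the default profile's parallel inclusion annotation, phase one would add a single line to its metadata (the surrounding component is hypothetical):

    metadata:
      annotations:
        include.release.openshift.io/self-managed-high-availability: "true"
        # The one-line phase-one addition:
        include.release.openshift.io/single-node-developer: "true"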

Contributor:

Phase one is fine, if these annotations are enough to get it working. I am more worried about #482 (comment).

Reply:

For the 4.7 timeframe, I don't expect there will be more than 'phase one' happening, i.e. adding the relevant annotation to manifests. At that stage, not having a way to use different manifests for operands will hopefully be good enough. Once this 'phase one' single-node profile is in place, we can start narrowing down which components need profile-specific manifests, and scope/plan the work with the relevant teams.

Contributor Author (@rkukura):

I agree completing phase 1 is the main goal for this enhancement in 4.7, but I hope we'll also be able to include specialized manifests for at least one or two components, such as etcd-operator, in that release.

Member:

@sttts please see my earlier comments around shared responsibility versus ownership. I see alternative manifests as a shared responsibility across the teams, but ownership in 4.7 for any alternative manifests is with CRC. This is similar to how we handled ROKS. It is not the intention that every team gets a new epic.

Contributor Author (@rkukura):

Clarified timelines and responsibilities in upcoming update.


This cluster profile will result in a new topology of OpenShift
components to be officially supported for its intended usage in the
CRC product. Once the profile is introduced, all teams developing

Contributor:

To me, "supported" means we will run a CI job for every PR that affects CRC. Who is going to create and maintain this job?

Contributor Author (@rkukura):

Under "Test Plan" below, I discuss adding a new CI job that runs conformance tests against a deployment of the single-node-developer profile. We'll need to decide whether this CI job runs periodically, or is triggered by PRs submitted to certain repos.

In addition to this new CI job, if a new feature or configuration item is added to a component during phase 2, then it should be possible to extend an existing CI job in that component's repo to test the new feature or configuration item, just as for any other change being made to the component. This is also described in that section.

Do you see the need for an additional CI job beyond these?

Member:

can we rephrase this into two parts:

part 1 - broad awareness

"Once the profile is introduced, all teams developing OpenShift components will be aware of how their components are configured in this profile."

part 2 - fit for purpose

"It is understood that this is an incremental step, but not a final destination. Any future evolution of the components included in CRC, their resultant configuration, and their required manifest changes will be sourced from this location and derived to meet the needs of the CRC user community, and made in consultation with, but not necessarily by the individual domain owners."

Contributor Author (@rkukura):

Similar wording is incorporated into the upcoming update, and more details on the CI job are provided.

while another serves as the combined master/worker node. Between
installation steps, and after the installer completes, snc.sh makes
various modifications to the cluster, such as configuring the etcd
operator to run without HA, tuning CPU, storage, and networking

Contributor:

This etcd configuration is currently "highly" unsupported. The etcd team (@hexfusion @ironcladlou) needs to be aware, and would be adding an option that, if turned on in production clusters, can have catastrophic side effects.

Reply:

Having each component explicitly add an annotation to indicate that the component will be used in a given profile is precisely a way of ensuring that the etcd team is aware of what is being done in the single-node profile.

Contributor:

I think it would be useful to show an example of such an annotation in the context of a component.

Contributor Author (@rkukura):

Agreed that single node non-HA etcd is completely unsupported for production.

During phase 1, we'll just be adding the profile's annotation to existing manifests, with no other changes.

I'd guess the etcd operator would be one of the first candidates to have a specialized manifest added during phase 2, so I'll add that as an example.
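
A sketch of how such an example might look, assuming phase two works by shipping a second, profile-specific copy of a manifest so that the CVO applies exactly one of them per profile; the structure and configuration details here are illustrative, not the etcd team's actual design:

    # etcd-operator deployment manifest - default (HA) profile only:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: etcd-operator
      namespace: openshift-etcd-operator
      annotations:
        include.release.openshift.io/self-managed-high-availability: "true"
      # ... existing HA configuration ...
    ---
    # Hypothetical specialized copy, applied only under this profile:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: etcd-operator
      namespace: openshift-etcd-operator
      annotations:
        include.release.openshift.io/single-node-developer: "true"
      # ... single-replica, non-HA configuration ...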

Reply:

On that etcd matter, see #504, which has a need for single-node clusters in production and is hitting the same issue with etcd.

Contributor Author (@rkukura):

Both the single-node-developer and single-node-production-edge profiles need to be able to run with a single etcd replica. Since the other profile has more critical requirements, discussion of how this can be supported should occur there, and this profile may be able to leverage the result.

### Phase One

In the initial phase, the following annotation will be added to the
manifest files for all components that are to be included in single

Contributor:

Which manifests? Pods? Deployments? CRDs?

Contributor:

It can be any manifest contained in the release image (/release-manifests directory in the image). It can be a PrometheusRule, a Service, a Deployment, a CRD, etc.

Contributor Author (@rkukura):

We could start by adding the annotation to absolutely every manifest that is normally included in the default (self-managed-high-availability) profile. But we want to save some time and effort by not adding the annotation to manifests that are definitely not needed for CRC, such as those removed by the SNC image build tooling or otherwise found to be unneeded, as described below.

Contributor Author (@rkukura):

Clarified in upcoming update that these apply to CVO-managed components.

@sspeiche commented Oct 12, 2020 via email.

developer clusters, but will not have the necessary annotation
included in their manifests due to lack of awareness or concern for
the CRC product. Given that single node developer clusters are
expected to pass the majority of the OpenShift conformance tests,

Comment:

Given that you are disabling some of the core OpenShift functionality, I'm not sure you will be able to pass that majority. Can you maybe specify what that expected majority is?

Reply:

I ran openshift-tests run openshift/conformance against the 1.17 CodeReady Containers release, and the results were error: 64 fail, 913 pass, 1660 skip (1h44m11s). Half of these are Prometheus/OauthServer/TopologyManager, so probably because of disabled components. Haven't looked closely at the others yet.

Contributor Author (@rkukura):

I did not mean to imply via "majority" that >= 50% of the tests were required to pass. The expectation is that all tests that do not depend on disabled functionality will pass. We will need to configure the CI job to skip those tests that are not expected to pass.

If a patch introducing a new component does not include the single-node-developer profile's annotation on a manifest, but does add new conformance tests for that new component, the CI job should begin to fail, unless the CI job is also configured to skip those new tests. This will help mitigate the risk of the annotation being accidentally omitted from the new component's manifest.

Member:

@cfergeau thanks for the data.
@soltysh the important requirement, relative to our present state, is that we have an explicit list of things we know pass, so we know what level of functionality is expected to work (versus assumed).

Contributor Author (@rkukura):

Reworded this in upcoming update.


One currently open question is how to manage the set of operators
visible within the operator hub on a single-node-developer
cluster. Operators that cannot run without OpenShift components

Comment:

I'd start with not supporting additional operators. Focus on running a single-node cluster w/o any additions.

Contributor Author (@rkukura):

I think the ability to install and use operators from the operator hub is considered a key feature of CRC, so I doubt we can disable this entirely. I had a discussion with @ecordell last week to start exploring our options here. It sounds like OLM currently will prevent installing an operator that declares dependencies on APIs that aren't available, so that should help. Even if we don't come up with a comprehensive solution right away, this enhancement doesn't make things any worse than with the current CRC product.
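
For context, OLM operators declare such dependencies in their ClusterServiceVersion. A sketch of the kind of declaration OLM can refuse to satisfy; the operator is hypothetical, and the required CRD shown assumes a dependency on the excluded machine-api component:

    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
      name: example-operator.v1.0.0   # hypothetical operator
    spec:
      customresourcedefinitions:
        required:
        # OLM should refuse to install this operator on a cluster
        # where machine-api (and thus this CRD) is absent:
        - name: machines.machine.openshift.io
          version: v1beta1
          kind: Machine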

Member:

@rkukura I tend to agree with @soltysh.

Having the catalog show, by default, functionality that is known to work (which would be empty, or a known subset that CRC says is important to the desired developer story) is preferred.

We can document or add some crc command line option to show the full catalog if CRC is being used as demo-ware to show the ecosystem.

Showing the whole catalog by default goes against the principle of "explicit inclusion".

Contributor Author (@rkukura):

I agree with the concerns here regarding OLM-managed operator compatibility, but this enhancement is focused on CVO-managed components and does not change this existing situation. I therefore added a non-goal for this in the upcoming update indicating it could be addressed via a separate enhancement.


Upgrade / downgrade strategy may need to be addressed for new
configuration items added to individual components to allow
optimization for single node developer cluster usage.

Comment:

Not sure I follow; the previous paragraph clearly stated no upgrade/downgrade. Why are any updates needed, then?

Contributor Author (@rkukura):

If a component adds a feature to enable optimization for the single-node-developer profile, that feature might be used in other profiles that do support upgrade/downgrade. If so, then upgrade/downgrade support might be needed for that new feature.

Member:

I think I would say:

CRC does not support in-place upgrade/downgrade of a single-node developer cluster profile.

The exception case, @rkukura, is I think understood: all other profiles of OCP need to keep their functional behavior.

Contributor Author (@rkukura):

Addressed in upcoming update.

* openshift-multus

Once PRs adding this annotation have been merged to all the necessary
component repositories, the snc.sh script will be updated to tell the

Contributor:

Can we be more precise here and say: the snc.sh script will be updated to tell the OpenShift installer to use the single-node-developer cluster profile via a new ClusterProfile field in the ClusterVersion object. This new field won't be used if another profile is passed to the CVO by an environment variable.

Contributor Author (@rkukura):

While looking at openshift/cluster-version-operator#404, where we implement the cluster profile as an environment variable, mostly for IBM Cloud, we are missing a way for CRC to use this profile through the installer.

I suggest that we add a section saying that the ClusterVersion object will contain a new immutable field in its spec: the profile.
The CVO will load, in order of precedence:

  • the environment variable
  • the profile from the ClusterVersion.

This enhancement definitely requires some mechanism to specify the cluster profile to the installer and have that determine which cluster profile the CVO uses. I was hoping for some feedback on whether that mechanism should be addressed as part of this enhancement, or considered part of https://github.com/openshift/enhancements/blob/master/enhancements/update/cluster-profiles.md, which this enhancement depends on. But that enhancement states "NOTE: The mechanism by which the environment variable is set on the CVO deployment is out of the scope of this design", so, unless someone identifies an existing enhancement that covers this, I'll include it in the next update to this PR.

Contributor Author (@rkukura):

Given that #504 defines a different cluster profile that also depends on being able to specify the profile to the installer, it probably makes most sense to cover adding that functionality in a separate enhancement PR.

@derekwaynecarr (Member) left a comment:

I made a number of comments throughout, but thanks @rkukura for the updates.

I agree this is a good incremental improvement on present state, but not necessarily the end state. If we can clarify the language around responsibilities in this document, this is lgtm.

thereof, applicable to single-node OpenShift Container Platform
clusters intended for application development as well as for learning
and exploring OpenShift, but never for production deployment of
applications. This cluster profile will be utilized in producing the

@derekwaynecarr (Member) commented Oct 15, 2020:

nit: "This cluster profile will be defined by and utilized..."

The key point is that the 'definition' of what meets the needs of the CRC use case is still the domain of the CRC team to evolve and iterate. The definition is enforced on the community by conformance tests. The use of a cluster profile recognizes that we share a common software delivery pipeline for building artifacts, but does not shift responsibility away from CRC entirely.

Contributor Author (@rkukura):

The upcoming update adds "defined" here, and is extensively reworked to clarify the CRC team's and component teams' responsibilities.



@rkukura rkukura deleted the profile branch November 25, 2020 16:58
guillaumerose added a commit to guillaumerose/cloud-credential-operator that referenced this pull request Nov 26, 2020
guillaumerose added a commit to guillaumerose/cluster-authentication-operator that referenced this pull request Nov 26, 2020
guillaumerose added a commit to guillaumerose/cluster-baremetal-operator that referenced this pull request Nov 26, 2020
guillaumerose added a commit to guillaumerose/cluster-config-operator that referenced this pull request Nov 26, 2020
guillaumerose added a commit to guillaumerose/cluster-csi-snapshot-controller-operator that referenced this pull request Nov 26, 2020
guillaumerose added a commit to guillaumerose/cluster-baremetal-operator that referenced this pull request Nov 26, 2020
guillaumerose added a commit to guillaumerose/cluster-baremetal-operator that referenced this pull request Dec 1, 2020
guillaumerose added a commit to guillaumerose/cluster-update-keys that referenced this pull request Dec 1, 2020
guillaumerose added a commit to guillaumerose/cluster-kube-apiserver-operator that referenced this pull request Dec 1, 2020
guillaumerose added a commit to guillaumerose/cluster-machine-approver that referenced this pull request Dec 1, 2020
guillaumerose added a commit to guillaumerose/cluster-baremetal-operator that referenced this pull request Dec 1, 2020
guillaumerose added a commit to guillaumerose/machine-config-operator that referenced this pull request Dec 1, 2020
guillaumerose added a commit to guillaumerose/cluster-update-keys that referenced this pull request Dec 2, 2020
guillaumerose added a commit to guillaumerose/cluster-authentication-operator that referenced this pull request Dec 2, 2020
This partially implements phase 1 of openshift/enhancements#482 and
does not change behavior. Initially, all manifests are included in
the single-node-developer cluster profile. Follow-on PRs may exclude
any of these that are not needed in the profile.
guillaumerose added a commit to guillaumerose/cluster-csi-snapshot-controller-operator that referenced this pull request Dec 4, 2020
guillaumerose added a commit to guillaumerose/cluster-csi-snapshot-controller-operator that referenced this pull request Dec 4, 2020
guillaumerose added a commit to guillaumerose/cluster-csi-snapshot-controller-operator that referenced this pull request Jan 6, 2021
guillaumerose added a commit to guillaumerose/cluster-csi-snapshot-controller-operator that referenced this pull request Jan 14, 2021
Labels: approved, lgtm