Docs release 1.8.0 #415

Merged
merged 80 commits into from
Dec 15, 2021
Conversation

rivkap (Member) commented Dec 7, 2021

No description provided.

rivkap and others added 18 commits November 2, 2021 10:00

* updating for 1.8.0 general version information
* 1.8.0 v updates
* CSI-3508: update orchestration for 1.8.0
* CSI-3242: initial input of HyperSwap information
* remove volume replication from whatsnew
* whatsnew update for version
* what's new updates
* removed extra spaces (general)
* removed HyperSwap limitations for everything except cloning (per #397 (comment))
* additional HyperSwap prereqs (per #397 (comment))
* updated linking
* added back in HyperSwap with snapshot limitation (per #397 (comment))
* updated wording of HyperSwap limitation note for clarification
* clarified HyperSwap wording (per #397 (comment))
* updated slashes for pathing
* updated typos
* Spectrum Virtualize Family to "family"
* fixing indentation for notes
* CSI-3402: prefix book_files dir with a dot
* CSI-3403: file name updates
* CSI-3512: initial content update
* various typo updates
* updated to indicate deduplicated deprecation (per #411 (comment); it wasn't reading properly in transform)
- For Fibre Channel connectivity, be sure that the storage system uses one of the fully supported HBAs that are compatible with your host connection, as listed in the [IBM® System Storage® Interoperation Center (SSIC)](https://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss).
- IBM DS8000 family storage systems support only Fibre Channel connectivity.

For more information, find your storage system documentation in [IBM Documentation](http://www.ibm.com/docs/).

3. **For RHEL OS users:** Ensure that the following packages are installed.
Contributor:
@ArbelNathan does nvme users need to install anything here?

Contributor:
IDK, @roysahar?

Contributor:
@rivkap / @oriyarde , sure they need, but we said that we point them to storage host attach section!

Member Author:
@oriyarde I agree - this should be included in any storage system requirements that we already say that they need to adhere to. (unless there is something special for our driver that they need but I highly doubt that)

Contributor:
but our code uses the nvme list command, regardless of storage system requirements
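To illustrate the dependency being discussed: the driver shells out to `nvme list`, which is provided by the `nvme-cli` package. The following is a hypothetical pre-flight sketch (not part of the driver or its documented prerequisites) that a node admin could run to confirm the tooling exists:

```shell
# Hypothetical pre-flight check (illustration only, not part of the driver):
# verify that the nvme-cli tooling backing "nvme list" is present on the node.
require_cmd() {
  # Returns 0 if the named command is on PATH, non-zero otherwise.
  command -v "$1" >/dev/null 2>&1
}

if require_cmd nvme; then
  echo "nvme-cli is installed; 'nvme list' is available"
else
  echo "nvme-cli is missing; on RHEL, install it with: yum install nvme-cli"
fi
```

The check deliberately tests only for the command's presence rather than running `nvme list` itself, since listing NVMe devices may require root privileges.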

Member Author:
Certain things that are based on storage system configuration need to be left to the storage documentation. For some things we have to assume they are covered sufficiently by the storage system documentation; if not, either they or we will get customer requests to update accordingly. We did our due diligence and made sure that the command itself is in SV documentation. There is no reason to assume that the customer won't know that it's necessary to implement (even if for some reason it's removed.)
I think that if this is the last issue, we should leave as-is (without adding the extra command) and close this documentation

@erantzabari

@oriyarde (Contributor) commented Dec 15, 2021:
> There is no reason to assume that the customer won't know that it's necessary to implement (even if for some reason it's removed.)

how would the customer know, if it's not documented (removed)?

Contributor:
why do we prefer to perform said due diligence for every storage system for which we add NVMe support?
if this is only about a time constraint, we could fix it in 1.9.0; it's not urgent

@oriyarde (Contributor) commented Dec 15, 2021:
> I think that if this is the last issue

nope (and I don't see how it is relevant to this conversation anyway)

Contributor:
not every comment is a merge blocker. I provided my input. if you guys prefer the current way, so be it

@oriyarde (Contributor) left a comment:
I think we should list nvme-cli in case of future changes, but it seems ok for others for now

@ArbelNathan (Contributor) left a comment:
LGTM

@rivkap rivkap merged commit 2143122 into develop Dec 15, 2021
@rivkap rivkap deleted the docs_release-1.8.0 branch December 15, 2021 12:50
7 participants