
Install gluster-fuse from the gluster nightly build #124

Merged 1 commit into gluster:master on Dec 19, 2018

Conversation

@Madhu-1 (Member) commented Dec 19, 2018

Describe what this PR does

glusterd2 is built on top of the gluster nightly build, but in CSI we are using the gluster packages from the centos-repo. Any change in the gluster mounting steps will affect the mount in the CSI driver, because we are using the old client packages.

```
[root@csi-nodeplugin-glusterfsplugin-6gvf8 /]# rpm -qa | grep gluster
glusterfs-6dev-0.395.git9662504.el7.x86_64
glusterfs-fuse-6dev-0.395.git9662504.el7.x86_64
glusterfs-libs-6dev-0.395.git9662504.el7.x86_64
glusterfs-client-xlators-6dev-0.395.git9662504.el7.x86_64
```

```
[vagrant@kube1 ~]$ kubectl get po
NAME          READY   STATUS    RESTARTS   AGE
gcs-example   1/1     Running   0          40s
[vagrant@kube1 ~]$ kubectl get pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
gcs-example-volume   Bound    pvc-62484e21-035a-11e9-80be-525400f7c378   1Gi        RWX            glusterfs-csi   47s
```

Related issues:
Fixes: #123

Signed-off-by: Madhu Rajanna <mrajanna@redhat.com>

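The kind of change this PR makes can be sketched as a Dockerfile fragment. This is a hedged illustration, not the actual diff from the PR: the repo URL (`artifacts.ci.centos.org/gluster/nightly/master.repo`) and the repo-file name are assumptions based on the gluster CentOS CI nightly builds, and may differ from what the commit actually uses.

```dockerfile
# Sketch: install gluster-fuse from the gluster nightly repo instead of the
# stock centos-repo, so the client packages match the nightly-built glusterd2
# server.  NOTE: the repo URL below is an assumption (gluster CentOS CI
# nightly builds), not necessarily the exact one used by this PR.
FROM centos:7

# Add the nightly-build yum repo definition.
ADD http://artifacts.ci.centos.org/gluster/nightly/master.repo \
    /etc/yum.repos.d/glusterfs-nightly.repo

# Install the fuse client; glusterfs, glusterfs-libs and
# glusterfs-client-xlators come in as dependencies.
RUN yum -y install glusterfs-fuse && yum clean all
```

With a repo like this in place, `rpm -qa | grep gluster` inside the nodeplugin container reports the nightly `-dev` package versions rather than the older stable centos-repo ones.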
@humblec (Contributor) commented Dec 19, 2018

@Madhu-1 I need to think about this. In one way, the client is expected to be the most recent version available in the centos repo, which is supposed to be compatible with the latest/nightly server. Relying on the nightly build will be problematic for both client and server, I believe. For the server we may need to pull decent stable versions instead of nightly ones.

@humblec humblec added the DO-NOT-MERGE label Dec 19, 2018
@aravindavk (Member) commented:

> @Madhu-1 I need to think about this. In one way, the client is expected to be the most recent version available in the centos repo, which is supposed to be compatible with the latest/nightly server. Relying on the nightly build will be problematic for both client and server, I believe. For the server we may need to pull decent stable versions instead of nightly ones.

This can be changed to a stable version after the glusterfs 6.0 release.

@Madhu-1 (Member, Author) commented Dec 19, 2018

> @Madhu-1 I need to think about this. In one way, the client is expected to be the most recent version available in the centos repo, which is supposed to be compatible with the latest/nightly server. Relying on the nightly build will be problematic for both client and server, I believe. For the server we may need to pull decent stable versions instead of nightly ones.

@humblec agreed, we need to use the stable versions. But as of now glusterd2 is built on the latest gluster master; we can't rely on the gluster stable packages, because we will be making changes to gluster master that are required to make glusterd2 work.
As of today we are getting the nightly container build from glusterd2; if we don't make use of the gluster nightly builds, the gluster-csi-driver won't work in GCS.

@amarts @aravindavk want to know your thoughts on this.

@Madhu-1 (Member, Author) commented Dec 19, 2018

> This can be changed to a stable version after the glusterfs 6.0 release.

When is the glusterfs 6.0 release?

@humblec (Contributor) commented Dec 19, 2018

> @humblec agreed, we need to use the stable versions. But as of now glusterd2 is built on the latest gluster master; we can't rely on the gluster stable packages, because we will be making changes to gluster master that are required to make glusterd2 work.
> As of today we are getting the nightly container build from glusterd2; if we don't make use of the gluster nightly builds, the gluster-csi-driver won't work in GCS.

My point is that if the server is not stable or compatible enough with the client, we should build the server from version tags instead of master.

@atinmu commented Dec 19, 2018

As of now, all the repos should be pulling in the latest master till GCS 1.0. Post that we should be pulling specific versions.

@humblec (Contributor) commented Dec 19, 2018

> As of now, all the repos should be pulling in the latest master till GCS 1.0. Post that we should be pulling specific versions.

@atinmu Don't we have pre-releases for GD2? If there are, we should use them.

@amarts (Member) commented Dec 19, 2018

> @atinmu Don't we have pre-releases for GD2? If there are, we should use them.

This is what I propose:

  • Till Jan 10th - Jan 15th (the branch-out of gcs-v1.0): use this approach, i.e. nightly builds.
  • From gcs-v1.0: use the tagged client till the next gcs version.
  • March 1st: once release 6.0 is made available, use the release tarball only.
  • March 10 onwards: assuming centos/rhel/fedora packages are available, fall back to distro-specific packages.

This is the only way I see to handle the faster development of GCS. If this is OK, please merge this patch so we can fix one critical issue and move on. This also helps the CSI driver to get improvements faster.

@humblec (Contributor) commented Dec 19, 2018

> > @atinmu Don't we have pre-releases for GD2? If there are, we should use them.
>
> This is what I propose:
>
> • Till Jan 10th - Jan 15th (the branch-out of gcs-v1.0): use this approach, i.e. nightly builds.
> • From gcs-v1.0: use the tagged client till the next gcs version.
> • March 1st: once release 6.0 is made available, use the release tarball only.
> • March 10 onwards: assuming centos/rhel/fedora packages are available, fall back to distro-specific packages.

Thanks @amarts for the details.

> This is the only way I see to handle the faster development of GCS. If this is OK, please merge this patch so we can fix one critical issue and move on. This also helps the CSI driver to get improvements faster.

The only concern I have here is the instability of GD2 nightly builds, be it compatibility between client and server or something else. More or less, as the CSI driver is stable enough against a particular commit of the GD2 server, we don't need a nightly update unless there is a new feature or a critical bug fix in it; hence my thoughts on not pulling in this approach. Now I get the feeling that GD2 is still undergoing massive changes which can break things. In that sense, I am fine with pulling nightly builds till Jan 15th or GCS 1.0.

@humblec humblec removed the DO-NOT-MERGE label Dec 19, 2018
@humblec (Contributor) commented Dec 19, 2018

LGTM. Merging.

@humblec humblec merged commit 3c0bb7b into gluster:master Dec 19, 2018
@ghost ghost removed the in progress label Dec 19, 2018