This repository has been archived by the owner on Mar 26, 2020. It is now read-only.

Mounting auto provisioned bricks on gd2 start and unified Volume create API #973

Merged: 2 commits merged into gluster:master on Jul 18, 2018
Conversation

aravindavk (Member):

  • Commit 1 Merged Smartvol API into regular Volume Create API
  • Commit 2 Mounting auto provisioned bricks on glusterd2 start

Updates: #851

@aravindavk aravindavk requested review from kshlm and prashanthpai July 9, 2018 07:36
for _, b := range v.GetLocalBricks() {
// Mount all local Bricks if they are auto provisioned
if b.MountInfo.DevicePath != "" {
if _, exists := mounts[b.MountInfo.Mountdir]; exists {
[Member]:

MountInfo.Mountdir actually contains the directory after the mount root. For example, if the mount point is /bricks but the brick path is /bricks/b1, then Mountdir will be /b1.

The name is a bit confusing; I just took the same name from gd1.


[Member]:

While I agree that Mountdir is not an optimal name, storing both the path and the actual mount dir seems redundant to me. In most cases the path will equal the mount root, and in almost all cases the directory after the mount root will be shorter.
So I think we should store the path and the directory after the mount root, and compute the mount root from these two.

[Member Author]:

Makes sense, we can make that optimization. That can be a separate PR, since it needs to be handled in many places, including the response: brick.Path = brick.Mountdir + brick.Dir.

[Member]:

That's fine with me. We should store path and dir and compute the mount root as MountRoot := b.Path[:len(b.Path)-len(b.Dir)].

One more thing: as soon as this patch gets in, the snapshot functionality will break, because Mountdir differs for restored and cloned volumes.
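The path/dir split discussed above can be sketched in Go. The field names (Path, Dir) and the type are assumptions for illustration, not gd2's actual brick struct:

```go
package main

import (
	"fmt"
	"strings"
)

// brickPaths is a hypothetical stand-in for the brick fields discussed
// above: Path is the full brick path, Dir the directory after the mount
// root (e.g. Path "/bricks/b1", Dir "/b1").
type brickPaths struct {
	Path string
	Dir  string
}

// MountRoot derives the mount root by trimming Dir from the end of Path,
// so the mount root need not be stored separately.
func (b brickPaths) MountRoot() string {
	if !strings.HasSuffix(b.Path, b.Dir) {
		return b.Path // fall back if the fields are inconsistent
	}
	return b.Path[:len(b.Path)-len(b.Dir)]
}

func main() {
	b := brickPaths{Path: "/bricks/b1", Dir: "/b1"}
	fmt.Println(b.MountRoot()) // prints "/bricks"
}
```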

[Member Author]:

If the e2e tests don't fail, I'd prefer submitting the snapshot-related changes as a separate PR. Is that okay?

[Member]:

Sure. We can create an issue.

@@ -199,3 +200,54 @@ func CreateVolumeInfoResp(v *Volinfo) *api.VolumeInfo {
SnapList: v.SnapList,
}
}

// MountLocalBricks mounts bricks of auto provisioned volumes
func MountLocalBricks() error {
[Member]:

Can we have this function in common utils, so that the same can be used for snapshots too? Otherwise I have to use another function during glusterd2 init.

[Member Author]:

Since it is not a generic function (it depends on the store utils), it can't move to common utils. Let me see.

[Member]:

You could create a glusterd2/mountutils or just glusterd2/utils instead of pkg/utils

@aravindavk (Member Author):

retest this please

@aravindavk (Member Author):

@kshlm @prashanthpai @Madhu-1 @rafikc30 please review

@@ -18,8 +20,20 @@ import (

const (
maxMetadataSizeLimit = 4096
minVolumeSize = 10
[Member]:

Question: why is the minimum volume size 10?

[Member Author]:

I think we may have to increase this to 20, because mkfs.xfs was not working at smaller sizes.

For now this is just moved from the plugins/smartvol code to here.

@@ -29,7 +43,11 @@ func validateVolCreateReq(req *api.VolCreateReq) error {
return errors.New("invalid transport. Supported values: tcp or rdma")
}

if len(req.Subvols) <= 0 {
if req.Size > 0 && req.Size < minVolumeSize {
return errors.New("invalid Volume Size, Minimum size required is " + strconv.Itoa(minVolumeSize))
[Member]:

Suggestion: can you convert the error messages to lower case, wherever required?

[Member Author]:

ok


if req.SnapshotReserveFactor < 1 {
restutils.SendHTTPError(ctx, w, http.StatusBadRequest,
errors.New("invalid Snapshot Reserve Factor"))
[Member]:

Same here: check whether any case changes are required in the error message.

c.Logger().WithError(err).WithField(
"key", "reqBricksPrepare").Debug("Failed to get key from store")
"key", "req").Debug("Failed to get key from store")
[Member]:

Question: should this be a Debug or an Error message?

[Member Author]:

I didn't change this part of the code while moving it from plugins/smartvol. Will update it as required.

if err != nil {
c.Logger().WithError(err).
WithField("dev", dev).
WithField("path", brickMountDir).
WithField("dev", b.DevicePath).
[Member]:

Question: which is better to use here, WithFields or WithField?

[Member Author]:

I will change this to WithFields

c.Logger().WithError(err).WithField(
"key", "reqBricksPrepare").Debug("Failed to get key from store")
"key", "req").Debug("Failed to get key from store")
[Member]:

Same here: Debug or Error?

if e != nil || volumes == nil {
// In case cluster doesn't have any volumes configured yet,
// treat this as success
log.Debug("Failed to retrieve volumes")
@Madhu-1 (Member, Jul 13, 2018):

Should this be an Error or a Debug log?

[Member Author]:

Not an error.

err := utils.ExecuteCommandRun("mount", "-o", b.MountInfo.MntOpts, b.MountInfo.DevicePath, b.MountInfo.Mountdir)
if err != nil {
log.WithError(err).
WithField("volume", v.Name).
[Member]:

Same here: is WithFields or WithField better in this case?

@Madhu-1 (Member) left a comment:

LGTM, but I have some comments, which are not functional changes.


// MountLocalBricks mounts bricks of auto provisioned volumes
func MountLocalBricks() error {
volumes, e := GetVolumes()
if e != nil || volumes == nil {
[Member]:

These should be two distinct cases: you can get an error while retrieving volumes, or you can get an empty volume list. The error is an error; the empty list is not.

ExcludePeers []string `json:"exclude-peers,omitempty"`
ExcludeZones []string `json:"exclude-zones,omitempty"`
SubvolZonesOverlap bool `json:"subvolume-zones-overlap,omitempty"`
SubvolType string `json:"subvolume-type,omitempty"`
[Member]:

These two structs are now huge. I think it would be a good idea to group all the IVP-related members into their own structs and add those structs as members of the main structs.

[Member Author]:

I feel the input API will not be intuitive if we nest it a level down.

[Member]:

I think otherwise, but we can finalize the request structure later.

Example:

    curl -XPOST http://localhost:24007/v1/volumes \
        -d '{"name": "gv1", "size": 1000, "replica": 3}'

Where size is in MB.

CLI usage is unchanged

    glustercli volume create --name gv1 --size 1G --replica 3

Signed-off-by: Aravinda VK <avishwan@redhat.com>
On `glusterd2` restart, it checks the list of auto-provisioned bricks
and mounts any that are not already mounted.

Updates: #851
Signed-off-by: Aravinda VK <avishwan@redhat.com>
@aravindavk (Member Author):

Addressed the review comments. Please review.

@kshlm merged commit 7603e0d into gluster:master on Jul 18, 2018