
Creation of gluster volumes is not working #68

Closed
fbalak opened this issue Feb 1, 2017 · 17 comments
fbalak (Contributor) commented Feb 1, 2017

The API server returns a 404 Not Found message when I call hostname/api/1.0/cluster_id, hostname/api/1.0/cluster_id/Flows, or hostname/api/1.0/cluster_id/GetVolumeList, where cluster_id is the ID of a successfully imported Gluster cluster with no volumes.
I also get an Internal Server Error message when I call hostname/api/1.0/cluster_id/GlusterCreateVolume according to https://github.com/Tendrl/api/blob/d3b44bbd8f85f545e112f0fae781fed3b366d3c2/docs/volumes.adoc.
The API calls related to manipulating an imported cluster should be fixed.
Tested with tendrl-api-1.2-02_01_2017_01_51_04.noarch.
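For reference, the calls described in this report can be sketched as below. This is only an illustration of the URL shapes involved; "hostname" and the cluster ID are placeholders, not a real deployment.

```python
# Minimal sketch of the endpoints from the report above.
# BASE and cluster_id are placeholders for illustration only.
BASE = "http://hostname/api/1.0"
cluster_id = "9717f84f-375a-45c9-91f4-5fdc06b6c6e5"

def endpoint(*parts):
    """Join the API base URL with path segments."""
    return "/".join([BASE, *parts])

# The three GET calls that returned 404 Not Found:
urls_404 = [
    endpoint(cluster_id),
    endpoint(cluster_id, "Flows"),
    endpoint(cluster_id, "GetVolumeList"),
]

# The POST call that returned 500 Internal Server Error:
url_500 = endpoint(cluster_id, "GlusterCreateVolume")
```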

anivargi (Contributor) commented Feb 2, 2017

@fbalak First of all, there is no /api/1.0/:cluster_id endpoint, hence the 404; it is not documented anywhere.

I will debug the GetVolumeList and GlusterCreateVolume and get back to you shortly.

fbalak (Contributor, Author) commented Feb 2, 2017

@anivargi Ok, thank you.

anivargi (Contributor) commented Feb 2, 2017

@fbalak There is a PR for the fixes: #71

The fix also required changes on the integration side, so all the packages will need a new build.

CC: @nthomas-redhat @brainfunked

nthomas-redhat (Contributor) commented

@shtripat can you have a look?

shtripat (Member) commented Feb 3, 2017

@nthomas-redhat The backend integration changes were already merged yesterday and should be available in the latest build. The API PR still needs to be merged and built.

anivargi (Contributor) commented Feb 3, 2017

@nthomas-redhat The PR is merged now; can we have a new build?
The merge also includes new changes required for the monitoring app, as discussed with @anmolbabu.

nthomas-redhat (Contributor) commented

@anivargi, the build is already done. Please check https://copr.fedorainfracloud.org/coprs/tendrl/tendrl/builds/

fbalak (Contributor, Author) commented Feb 6, 2017

Still not working. The server returns 500 Internal Server Error. Tested with:
tendrl-commons-1.2-02_06_2017_04_42_02.noarch
tendrl-node-agent-1.2-02_06_2017_00_01_03.noarch
tendrl-gluster-integration-1.2-02_06_2017_04_38_03.noarch
tendrl-api-1.2-02_05_2017_01_51_02.noarch
I followed https://github.com/Tendrl/api/blob/d3b44bbd8f85f545e112f0fae781fed3b366d3c2/docs/volumes.adoc.

anivargi (Contributor) commented Feb 6, 2017

@fbalak I have made one more fix. #72

fbalak (Contributor, Author) commented Feb 6, 2017

Now it returns a job_id instead of an Internal Server Error, but the job never finishes and the flow is not specified. Tested with:
tendrl-gluster-integration-1.2-02_06_2017_04_38_03.noarch
tendrl-commons-1.2-02_06_2017_04_42_02.noarch
tendrl-node-agent-1.2-02_06_2017_00_01_03.noarch
tendrl-api-1.2-02_06_2017_16_15_15.noarch
POST api request: hostname:9292/1.0/9717f84f-375a-45c9-91f4-5fdc06b6c6e5/GlusterCreateVolume
With data:
{"Volume.bricks": ["hostname1:/bricks/fs_gluster01/brick", "hostname2:/bricks/fs_gluster01/brick", "hostname3:/bricks/fs_gluster01/brick", "hostname4:/bricks/fs_gluster01/brick"], "Volume.volname": "Vol_test"}
Output from /jobs api call:

 {
        "job_id": "54d2db1b-6eaa-40a1-aa19-55b3f8cd7dbe",
        "integration_id": "9717f84f-375a-45c9-91f4-5fdc06b6c6e5",
        "status": "processing",
        "flow": null,
        "parameters": {
            "integration_id": "9717f84f-375a-45c9-91f4-5fdc06b6c6e5",
            "TendrlContext.integration_id": "9717f84f-375a-45c9-91f4-5fdc06b6c6e5",
            "node_ids": [
                "f9c6a42d-8692-4751-9340-2337ea576167",
                "f4d94a73-22c7-4d69-a4e0-117ace7c7660",
                "a5ea498b-d7c4-4b16-8749-2fb8266352f5",
                "ea9d7013-d112-4b73-a2b9-350d7f5e70fc"
            ],
            "Volume.volname": "Vol_test",
            "Volume.bricks": [
                "hostname1:/bricks/fs_gluster01/brick",
                "hostname2:/bricks/fs_gluster01/brick",
                "hostname3:/bricks/fs_gluster01/brick",
                "hostname4:/bricks/fs_gluster01/brick"
            ]
        },
        "created_at": "2017-02-06T12:01:27Z",
        "log": "/jobs/54d2db1b-6eaa-40a1-aa19-55b3f8cd7dbe/logs?type=",
        "log_types": [
            "all",
            "info",
            "debug",
            "warn",
            "error"
        ]
    }
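The symptom above can be checked programmatically: a minimal sketch that parses the job record shown and flags why it appears stalled. The field names are taken from the output above; the check itself is illustrative, not part of the Tendrl API.

```python
import json

# Job record as returned by the /jobs call above, abridged to the
# fields relevant to the problem being reported.
job_json = """
{
    "job_id": "54d2db1b-6eaa-40a1-aa19-55b3f8cd7dbe",
    "status": "processing",
    "flow": null
}
"""

job = json.loads(job_json)

# The job looks stuck: it stays in "processing" and no flow was
# ever attached to it ("flow" is null in the response).
stalled = job["status"] == "processing" and job["flow"] is None
```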

nthomas-redhat (Contributor) commented

@fbalak, have you created the brick directories on all the hosts? I am sure you did, but I just wanted to make sure. Also, can you share your setup details (via PM) so that we can take a look?

fbalak (Contributor, Author) commented Feb 6, 2017

@nthomas-redhat yes, directories are created. I will send you PM.

shtripat (Member) commented Feb 6, 2017 via email

shtripat (Member) commented Feb 6, 2017

@anivargi The job submitted to the job queue should have the run attribute set in order to be picked up and executed. The job below, provided by @fbalak, does not have that attribute:

{
        "job_id": "54d2db1b-6eaa-40a1-aa19-55b3f8cd7dbe",
        "integration_id": "9717f84f-375a-45c9-91f4-5fdc06b6c6e5",
        "status": "processing",
        "flow": null,
        "parameters": {
            "integration_id": "9717f84f-375a-45c9-91f4-5fdc06b6c6e5",
            "TendrlContext.integration_id": "9717f84f-375a-45c9-91f4-5fdc06b6c6e5",
            "node_ids": [
                "f9c6a42d-8692-4751-9340-2337ea576167",
                "f4d94a73-22c7-4d69-a4e0-117ace7c7660",
                "a5ea498b-d7c4-4b16-8749-2fb8266352f5",
                "ea9d7013-d112-4b73-a2b9-350d7f5e70fc"
            ],
            "Volume.volname": "Vol_test",
            "Volume.bricks": [
                "hostname1:/bricks/fs_gluster01/brick",
                "hostname2:/bricks/fs_gluster01/brick",
                "hostname3:/bricks/fs_gluster01/brick",
                "hostname4:/bricks/fs_gluster01/brick"
            ]
        },
        "created_at": "2017-02-06T12:01:27Z",
        "log": "/jobs/54d2db1b-6eaa-40a1-aa19-55b3f8cd7dbe/logs?type=",
        "log_types": [
            "all",
            "info",
            "debug",
            "warn",
            "error"
        ]
    }

Please check https://github.com/Tendrl/commons/blob/develop/tendrl/commons/jobs/__init__.py#L41 for more details.

A sample job for creating a volume looks like this:

job = {
    "integration_id": "9a4b84e0-17b3-4543-af9f-e42000c52bfc",
    "run": "tendrl.gluster_integration.flows.create_volume.CreateVolume",
    "status": "new",
    "parameters": {
        "Volume.volname": "vol1",
        "Volume.bricks": ["dhcp43-87.lab.eng.blr.redhat.com:/root/bricks/vol1_b2"]
    },
    "type": "sds",
    "node_ids": ["3943fab1-9ed2-4eb6-8121-5a69499c4568"]
} 
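The distinction shtripat describes can be sketched as a simple predicate: the commons job consumer only picks up jobs that carry a "run" attribute naming the flow to execute. The helper name below is hypothetical; see the linked tendrl/commons/jobs code for the actual consumer logic.

```python
# Sketch of the check described above. is_runnable is a hypothetical
# helper, not a real Tendrl function.
def is_runnable(job):
    """A job is picked up only if it names a flow in its "run" attribute."""
    return bool(job.get("run"))

# Shape of the sample job from the comment above: has "run", so the
# consumer would pick it up and execute the CreateVolume flow.
good_job = {
    "integration_id": "9a4b84e0-17b3-4543-af9f-e42000c52bfc",
    "run": "tendrl.gluster_integration.flows.create_volume.CreateVolume",
    "status": "new",
}

# Shape of the job reported by fbalak: no "run" attribute, so it sits
# in the queue in "processing" and is never executed.
bad_job = {
    "job_id": "54d2db1b-6eaa-40a1-aa19-55b3f8cd7dbe",
    "status": "processing",
    "flow": None,
}
```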

shtripat (Member) commented Feb 6, 2017

Sent a PR to fix a name error in an atom: Tendrl/gluster-integration#138. It was somehow missed in the last merge.

shtripat (Member) commented Feb 7, 2017

The fix is now available as part of build tendrl-gluster-integration-1.2-02_07_2017_10_31_13.noarch.rpm.

fbalak (Contributor, Author) commented Feb 7, 2017

Fixed. Tested with:
tendrl-gluster-integration-1.2-02_07_2017_10_31_13.noarch
tendrl-commons-1.2-02_07_2017_04_42_04.noarch
tendrl-node-agent-1.2-02_07_2017_00_01_03.noarch
tendrl-api-1.2-02_07_2017_10_53_13.noarch

fbalak closed this as completed Feb 7, 2017