Creation of gluster volumes is not working #68
@fbalak First of all, there is no such endpoint. I will debug the GetVolumeList and GlusterCreateVolume and get back to you shortly.

@anivargi Ok, thank you.

@shtripat can you have a look?
@nthomas-redhat the backend integration changes were merged yesterday and should be available in the latest build. The API PR still needs to be merged and built.

@nthomas-redhat the PR is merged now; can we have a new build?

@anivargi, the build is already done. Please check --> https://copr.fedorainfracloud.org/coprs/tendrl/tendrl/builds/
Still not working. Server returns Internal Server Error.
Now it returns job_id instead of Internal Server Error, but the job will never finish and flow is not specified. Tested with:

tendrl-gluster-integration-1.2-02_06_2017_04_38_03.noarch
tendrl-commons-1.2-02_06_2017_04_42_02.noarch
tendrl-node-agent-1.2-02_06_2017_00_01_03.noarch
tendrl-api-1.2-02_06_2017_16_15_15.noarch

POST api request: hostname:9292/1.0/9717f84f-375a-45c9-91f4-5fdc06b6c6e5/GlusterCreateVolume

With data:

```json
{
  "Volume.bricks": [
    "hostname1:/bricks/fs_gluster01/brick",
    "hostname2:/bricks/fs_gluster01/brick",
    "hostname3:/bricks/fs_gluster01/brick",
    "hostname4:/bricks/fs_gluster01/brick"
  ],
  "Volume.volname": "Vol_test"
}
```

Output from /jobs api call:

```json
{
  "job_id": "54d2db1b-6eaa-40a1-aa19-55b3f8cd7dbe",
  "integration_id": "9717f84f-375a-45c9-91f4-5fdc06b6c6e5",
  "status": "processing",
  "flow": null,
  "parameters": {
    "integration_id": "9717f84f-375a-45c9-91f4-5fdc06b6c6e5",
    "TendrlContext.integration_id": "9717f84f-375a-45c9-91f4-5fdc06b6c6e5",
    "node_ids": [
      "f9c6a42d-8692-4751-9340-2337ea576167",
      "f4d94a73-22c7-4d69-a4e0-117ace7c7660",
      "a5ea498b-d7c4-4b16-8749-2fb8266352f5",
      "ea9d7013-d112-4b73-a2b9-350d7f5e70fc"
    ],
    "Volume.volname": "Vol_test",
    "Volume.bricks": [
      "hostname1:/bricks/fs_gluster01/brick",
      "hostname2:/bricks/fs_gluster01/brick",
      "hostname3:/bricks/fs_gluster01/brick",
      "hostname4:/bricks/fs_gluster01/brick"
    ]
  },
  "created_at": "2017-02-06T12:01:27Z",
  "log": "/jobs/54d2db1b-6eaa-40a1-aa19-55b3f8cd7dbe/logs?type=",
  "log_types": [
    "all",
    "info",
    "debug",
    "warn",
    "error"
  ]
}
```
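For reference, a minimal python-requests sketch of the reproduction steps above. The base URL and port follow the thread; the exact jobs route and the absence of authentication are assumptions, not confirmed API details:

```python
# Hedged sketch of the reproduction above; paths and port follow the
# thread, while auth and the jobs route are assumptions.
import requests

BASE = "http://hostname:9292/1.0/9717f84f-375a-45c9-91f4-5fdc06b6c6e5"

payload = {
    "Volume.volname": "Vol_test",
    "Volume.bricks": [
        "hostname1:/bricks/fs_gluster01/brick",
        "hostname2:/bricks/fs_gluster01/brick",
        "hostname3:/bricks/fs_gluster01/brick",
        "hostname4:/bricks/fs_gluster01/brick",
    ],
}

# Submit the flow; the API answers with a job_id rather than running inline.
resp = requests.post(BASE + "/GlusterCreateVolume", json=payload)
resp.raise_for_status()
job_id = resp.json()["job_id"]

# Poll the job record; in the failing case reported here, "status" stays
# "processing" and "flow" is null, so the job never finishes.
job = requests.get(BASE + "/jobs/" + job_id).json()
print(job["status"], job["flow"])
```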
@fbalak, have you created the brick directories on all the hosts? I am sure you did, but just wanted to make sure. Also, can you share your setup details (on PM) so that we can take a look?

@nthomas-redhat yes, the directories are created. I will send you a PM.
One thing I can see in the job data: the field 'flow' is 'null'.
@anivargi the job submitted to the job queue should have the 'flow' field populated. Please check https://github.com/Tendrl/commons/blob/develop/tendrl/commons/jobs/__init__.py#L41 for more details. A sample job for creating a volume looks as below.
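As an illustration of why a null flow strands a job, a short, hypothetical Python sketch (process_job and flow_registry are illustrative names, not Tendrl's actual code): the consumer must resolve the flow by name before it can run or complete anything.

```python
import json

def process_job(raw_job, flow_registry):
    """Illustrative consumer: resolve the job's flow by name, then run it."""
    job = json.loads(raw_job)
    flow_name = job.get("flow")            # null in the failing jobs above
    flow_cls = flow_registry.get(flow_name)
    if flow_cls is None:
        # Nothing to execute: unless the consumer explicitly marks the job
        # failed, its status never moves past "processing".
        return {"status": "failed", "error": "unknown flow: %r" % (flow_name,)}
    flow_cls(parameters=job["parameters"]).run()
    return {"status": "finished"}
```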
Sent a PR to fix a name error in the atom: Tendrl/gluster-integration#138. It was missed in the last merge somehow.

Fix is available now as part of the build.

Fixed. Tested with:
Api server returns 404 Not Found message when I call hostname/api/1.0/cluster_id, hostname/api/1.0/cluster_id/Flows or hostname/api/1.0/cluster_id/GetVolumeList, where cluster_id is the id of a successfully imported gluster cluster with no volume. I also get Internal Server Error message when I call hostname/api/1.0/cluster_id/GlusterCreateVolume according to https://github.com/Tendrl/api/blob/d3b44bbd8f85f545e112f0fae781fed3b366d3c2/docs/volumes.adoc. Api calls related to manipulation with an imported cluster should be fixed.

Tested with tendrl-api-1.2-02_01_2017_01_51_04.noarch.
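For completeness, a small python-requests sketch of the failing calls described in the report; hostname and cluster_id are placeholders, and the status codes noted in the comments come from the report itself, not from running this code:

```python
import requests

BASE = "http://hostname/api/1.0"
cluster_id = "9717f84f-375a-45c9-91f4-5fdc06b6c6e5"  # an imported gluster cluster

# Each of these returned 404 Not Found in the report.
for path in ("", "/Flows", "/GetVolumeList"):
    r = requests.get("%s/%s%s" % (BASE, cluster_id, path))
    print(r.status_code, r.url)

# This call returned Internal Server Error in the report.
r = requests.post("%s/%s/GlusterCreateVolume" % (BASE, cluster_id),
                  json={"Volume.volname": "Vol_test",
                        "Volume.bricks": ["hostname1:/bricks/fs_gluster01/brick"]})
print(r.status_code)
```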