daemon fails because cluster has different name than 'ceph' #112

Closed
mkudlej opened this issue Feb 16, 2017 · 11 comments

@mkudlej

mkudlej commented Feb 16, 2017

It seems that the ceph integration daemon works only if there is a cluster named ceph. It should also work if the node is part of a cluster with a name other than ceph.
Example of a call which breaks the daemon:

$ python /usr/bin/ceph version -f json
Error initializing cluster client: Error('error calling conf_read_file: error code 22',)

which is part of the daemon startup:

systemctl start tendrl-cephd.service
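For context, ceph version (without the double dash) is a cluster command: the client library first has to read the cluster configuration and authenticate, and with a non-default cluster name the expected /etc/ceph/ceph.conf is typically not the file in place, which is consistent with the conf_read_file error above. A minimal python-rados sketch of connecting to a cluster whose name is not ceph (the name mycluster and the paths below are assumptions for illustration, not taken from this issue):

# Illustrative sketch only, not Tendrl code; cluster name and paths are assumed.
import rados

cluster_name = "mycluster"                              # hypothetical cluster name
conf_path = "/etc/ceph/{0}.conf".format(cluster_name)   # name-specific config file

cluster = rados.Rados(clustername=cluster_name, conffile=conf_path)
cluster.connect()            # still requires a reachable mon and a valid keyring
print(cluster.get_fsid())    # prints the cluster fsid once connected
cluster.shutdown()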
@mkudlej
Author

mkudlej commented Feb 16, 2017

This problem is worse than I expected. The command:

python /usr/bin/ceph version -f json

cannot be run on OSD nodes because they are not Ceph clients by default. So I think the ceph command should be substituted with something else.
--> moving this issue to blockers
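As noted above, OSD nodes are not set up as Ceph clients by default, so the cluster command has nothing to work with there. A rough, purely hypothetical pre-flight check (paths assumed for illustration, not taken from Tendrl) for the client-side pieces the cluster command needs:

# Hypothetical check; the conf and keyring paths are assumptions.
import os

def can_query_cluster(cluster_name="ceph"):
    conf = "/etc/ceph/{0}.conf".format(cluster_name)
    keyring = "/etc/ceph/{0}.client.admin.keyring".format(cluster_name)
    return os.path.exists(conf) and os.path.exists(keyring)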

shtripat pushed a commit to shtripat/ceph_bridge that referenced this issue Feb 16, 2017
tendrl-bug-id: Tendrl#112

Signed-off-by: Shubhendu <shtripat@redhat.com>
@shtripat
Member

Sent #118 to fix this.
We should rather be using the ceph --version command, which works properly on both mon and OSD nodes.
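For illustration, a minimal sketch (not the actual change in #118; the parsing assumes the usual "ceph version X.Y.Z (hash)" output format) of reading the locally installed version via ceph --version:

# Minimal sketch, not the actual Tendrl#118 fix.
import subprocess

def local_ceph_version():
    out = subprocess.check_output(["ceph", "--version"]).decode("utf-8")
    # typical output: "ceph version 10.2.3 (<commit hash>)"
    return out.split()[2]

This needs no cluster configuration or keyring, so it behaves the same on mon and OSD nodes.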

shtripat pushed a commit to shtripat/ceph_bridge that referenced this issue Feb 20, 2017
tendrl-bug-id: Tendrl#112

Signed-off-by: Shubhendu <shtripat@redhat.com>
@sankarshanmukhopadhyay

@shtripat is this fix planned to be available in the latest build?

@nthomas-redhat

@sankarshanmukhopadhyay This was merged yesterday, so it should be available in the latest nightly build (1.2-02_21_2017_06_35_02).

@sankarshanmukhopadhyay

ACK. Thank you @nthomas-redhat. Subsequent to testing, if it is found fixed, I'd request the reporter to mark this as closed, and similarly for other issues which have been resolved in the latest builds.

@sankarshanmukhopadhyay

@mkudlej - once you commence testing, please verify whether the fix works so we can resolve this issue.

@mkudlej
Author

mkudlej commented Feb 23, 2017

With today's packages I see the ceph cluster properly stored in etcd. Once I can confirm this by importing a cluster with a name other than ceph, I'll close this issue. This is blocked by Tendrl/api#82

@sankarshanmukhopadhyay

Since Tendrl/api#82 seems to have been addressed, can we attempt to confirm the fix this week?

@mkudlej
Author

mkudlej commented Feb 27, 2017

There is a new blocker, https://github.com/Tendrl/performance-monitoring/issues/58, which also blocks this issue.

@sankarshanmukhopadhyay

The fix for https://github.com/Tendrl/performance-monitoring/issues/58 has been merged. Please check the latest builds when available and update this issue.

@mkudlej
Author

mkudlej commented Mar 2, 2017

I don't see this issue with:

  • api server:
tendrl-api-1.2.1-03_01_2017_01_51_02.noarch
tendrl-api-doc-1.2.1-03_01_2017_01_51_02.noarch
tendrl-api-httpd-1.2.1-03_01_2017_01_51_02.noarch
tendrl-dashboard-1.2.1-03_01_2017_13_39_11.noarch
  • ceph node:
centos-release-ceph-jewel-1.0-1.el7.centos.noarch
ceph-base-10.2.3-0.el7.x86_64
ceph-common-10.2.3-0.el7.x86_64
ceph-mon-10.2.3-0.el7.x86_64
ceph-selinux-10.2.3-0.el7.x86_64
libcephfs1-10.2.3-0.el7.x86_64
python-cephfs-10.2.3-0.el7.x86_64
tendrl-ceph-integration-1.2.1-03_01_2017_06_35_02.noarch
tendrl-commons-1.2.1-2.el7.centos.noarch
tendrl-node-agent-1.2.1-03_01_2017_00_47_35.noarch
tendrl-performance-monitoring-1.2.1-03_01_2017_03_02_02.noarch

--> CLOSE

@mkudlej mkudlej closed this as completed Mar 2, 2017