
Many AVC denials on storage server machines after cluster import #2

mbukatov opened this issue Oct 16, 2017 · 2 comments
mbukatov commented Oct 16, 2017

Description

After a cluster is imported and tendrl starts to monitor it, there are many AVC denials in the audit log on the machines of the monitored cluster.

Version

I'm using the latest snapshot builds from the master branch.

Packages on Tendrl Storage machine:

# rpm -qa | grep tendrl | sort
tendrl-collectd-selinux-1.5.3-20171013T090621.ffb1b7f.noarch
tendrl-commons-1.5.3-20171017T183749.33ac94f.noarch
tendrl-gluster-integration-1.5.3-20171013T082052.b8ddae5.noarch
tendrl-node-agent-1.5.3-20171017T183741.46ee175.noarch
tendrl-selinux-1.5.3-20171013T090621.ffb1b7f.noarch
# rpm -qa | grep selinux | sort
libselinux-2.5-11.el7.x86_64
libselinux-python-2.5-11.el7.x86_64
libselinux-utils-2.5-11.el7.x86_64
selinux-policy-3.13.1-166.el7_4.4.noarch
selinux-policy-targeted-3.13.1-166.el7_4.4.noarch
tendrl-collectd-selinux-1.5.3-20171013T090621.ffb1b7f.noarch
tendrl-selinux-1.5.3-20171013T090621.ffb1b7f.noarch
# sestatus 
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          error (Success)
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      28

Steps to Reproduce

  1. Prepare machines with a GlusterFS cluster, including a gluster volume (I used nightly builds and volume_alpha_distrep_4x2.create.conf).
  2. Install Tendrl there via tendrl-ansible, using current master (upcoming 1.5.4).
  3. Import the cluster via the Tendrl web UI.
  4. Open the Grafana dashboard and wait about 30 minutes (so that tendrl has time to start gathering data for monitoring purposes).
  5. Log into one of the storage server machines (aka tendrl nodes) and check for AVC error messages via ausearch -m avc.

Note: step 2 means that I'm using the SELinux targeted policy in permissive mode, with all tendrl selinux packages installed.

Actual Results

There are many AVC denials in the audit log, and a large part of them is related to collectd:

# ausearch -m avc | grep collectd | wc -l
135
# ausearch -m avc | grep collectd | tail
type=AVC msg=audit(1508337403.109:2461): avc:  denied  { unlink } for  pid=31922 comm="lvm" name="V_vg_usmqe_alpha_distrep_2:aux" dev="tmpfs" ino=412599 scontext=system_u:system_r:collectd_t:s0 tcontext=system_u:object_r:lvm_lock_t:s0 tclass=file
type=PROCTITLE msg=audit(1508337403.126:2462): proctitle="/usr/sbin/collectd"
type=SYSCALL msg=audit(1508337403.126:2462): arch=c000003e syscall=42 success=yes exit=0 a0=7 a1=7f33fe467e30 a2=10 a3=5 items=0 ppid=1 pid=29253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="reader#0" exe="/usr/sbin/collectd" subj=system_u:system_r:collectd_t:s0 key=(null)
type=AVC msg=audit(1508337403.126:2462): avc:  denied  { name_connect } for  pid=29253 comm="reader#0" dest=2003 scontext=system_u:system_r:collectd_t:s0 tcontext=system_u:object_r:lmtp_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1508337480.116:2464): arch=c000003e syscall=2 success=yes exit=4 a0=7ffd4e440da0 a1=442 a2=1ff a3=645f6168706c615f items=0 ppid=29253 pid=32346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="lvm" exe="/usr/sbin/lvm" subj=system_u:system_r:collectd_t:s0 key=(null)
type=AVC msg=audit(1508337480.116:2464): avc:  denied  { add_name } for  pid=32346 comm="lvm" name="V_vg_usmqe_alpha_distrep_2:aux" scontext=system_u:system_r:collectd_t:s0 tcontext=system_u:object_r:lvm_lock_t:s0 tclass=dir
type=SYSCALL msg=audit(1508337480.116:2465): arch=c000003e syscall=87 success=yes exit=0 a0=7ffd4e440da0 a1=7ffd4e440ce0 a2=7ffd4e440ce0 a3=2 items=0 ppid=29253 pid=32346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="lvm" exe="/usr/sbin/lvm" subj=system_u:system_r:collectd_t:s0 key=(null)
type=AVC msg=audit(1508337480.116:2465): avc:  denied  { remove_name } for  pid=32346 comm="lvm" name="V_vg_usmqe_alpha_distrep_2:aux" dev="tmpfs" ino=413817 scontext=system_u:system_r:collectd_t:s0 tcontext=system_u:object_r:lvm_lock_t:s0 tclass=dir
type=SYSCALL msg=audit(1508337480.114:2463): arch=c000003e syscall=21 success=yes exit=0 a0=555cb1ec31c0 a1=7 a2=0 a3=65726373662f7274 items=0 ppid=29253 pid=32346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="lvm" exe="/usr/sbin/lvm" subj=system_u:system_r:collectd_t:s0 key=(null)
type=AVC msg=audit(1508337480.114:2463): avc:  denied  { read write } for  pid=32346 comm="lvm" name="lvm" dev="tmpfs" ino=13480 scontext=system_u:system_r:collectd_t:s0 tcontext=system_u:object_r:lvm_lock_t:s0 tclass=dir

See full output of ausearch -m avc here: https://gist.github.com/mbukatov/c76c5832c495ebc6d3eeffa09d27a386

Since all messages are included there, we can ignore the ones caused by gluster itself (e.g. when exe="/usr/sbin/glusterfsd"), as those are out of scope for tendrl-selinux.
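To get an overview of which permissions are actually being denied, AVC records like the ones above can be summarized with a short parser. This is only a sketch assuming the standard audit AVC record format; summarize_denials is a hypothetical helper, not part of tendrl:

```python
import re

# Matches the AVC denial fields we care about: the denied permission set,
# the source and target security contexts, and the target object class.
# This is illustrative, not a full audit-log parser.
AVC_RE = re.compile(
    r"avc:\s+denied\s+\{ (?P<perms>[^}]+)\}.*?"
    r"scontext=(?P<scontext>\S+)\s+tcontext=(?P<tcontext>\S+)\s+"
    r"tclass=(?P<tclass>\S+)"
)

def summarize_denials(lines):
    """Count unique (source type, target type, class, permission) triples."""
    counts = {}
    for line in lines:
        m = AVC_RE.search(line)
        if not m:
            continue
        # The type is the third field of a context like
        # system_u:system_r:collectd_t:s0
        stype = m.group("scontext").split(":")[2]
        ttype = m.group("tcontext").split(":")[2]
        for perm in m.group("perms").split():
            key = (stype, ttype, m.group("tclass"), perm)
            counts[key] = counts.get(key, 0) + 1
    return counts

# Two of the denial lines quoted above, shortened for readability.
sample = [
    'type=AVC msg=audit(1508337403.109:2461): avc:  denied  { unlink } for  pid=31922 comm="lvm" scontext=system_u:system_r:collectd_t:s0 tcontext=system_u:object_r:lvm_lock_t:s0 tclass=file',
    'type=AVC msg=audit(1508337480.114:2463): avc:  denied  { read write } for  pid=32346 comm="lvm" scontext=system_u:system_r:collectd_t:s0 tcontext=system_u:object_r:lvm_lock_t:s0 tclass=dir',
]
print(summarize_denials(sample))
```

Running this over the full gist would show at a glance that most denials are collectd_t acting on lvm_lock_t objects.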

Expected Results

There are no AVC messages related to collectd or any other tendrl monitoring component.
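Denials like these are usually addressed by extending the SELinux policy module. A local module generated from the gist with audit2allow -M would contain rules along these lines; this is a hedged sketch of what the tendrl-collectd-selinux policy might need, not its actual content (per the name_connect denial above, the Graphite port 2003 falls in the lmtp_port_t range):

```
# Hypothetical local module derived from the denials above;
# the actual tendrl-collectd-selinux rules may differ.
module collectd_lvm_local 1.0;

require {
    type collectd_t;
    type lvm_lock_t;
    type lmtp_port_t;
    class dir { read write add_name remove_name };
    class file unlink;
    class tcp_socket name_connect;
}

# collectd runs lvm, which manipulates lock files under the lvm lock dir
allow collectd_t lvm_lock_t:dir { read write add_name remove_name };
allow collectd_t lvm_lock_t:file unlink;

# collectd's write plugin connects to Graphite on TCP 2003 (lmtp_port_t)
allow collectd_t lmtp_port_t:tcp_socket name_connect;
```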

@mbukatov
Actually, this seems to be caused by Tendrl/tendrl-ansible#58. I will update the status of this issue when tendrl-ansible is fixed.

mbukatov commented Oct 18, 2017

Update: even with tendrl-collectd-selinux-1.5.3-20171013T090621.ffb1b7f installed, I see some collectd-related AVC denials. See the updated gist: https://gist.github.com/mbukatov/c76c5832c495ebc6d3eeffa09d27a386

I have updated the description of this issue as well.

@mbukatov mbukatov removed their assignment Sep 8, 2020