
containers / container-selinux

SELinux policy files for Container Runtimes

License: GNU General Public License v2.0

Makefile 7.46% Roff 74.73% Shell 17.81%

container-selinux's People

Contributors

0xc0ncord, akihirosuda, chaasm, conan-kudo, dweomer, fire833, giuseppe, haircommander, jcpunk, jlebon, joshwget, jsegitz, justincormack, lsm5, manasugi, martinpitt, mgrepl, mike-nguyen, mlsorensen, mregmi, nalind, nforro, ningmingxiao, rhatdan, smijolovic, tomsweeneyredhat, vmojzis, wonder93, wrabcak, zpytela


container-selinux's Issues

docker run with container-selinux-2.77-1.el7_6.noarch does not work

On a rhel7 system with container-selinux-2.77-1.el7_6.noarch:

# getenforce
Enforcing
# docker run --rm busybox echo hi
standard_init_linux.go:207: exec user process caused "permission denied"

The corresponding denial in the SELinux audit log /var/log/audit/audit.log is:

type=AVC msg=audit(1549491386.801:666): avc: denied { transition } for pid=4022 comm="runc:[2:INIT]" path="/bin/echo" dev="xvda2" ino=8534597 scontext=system_u:system_r:unconfined_service_t:s0 tcontext=system_u:system_r:container_t:s0:c397,c807 tclass=process

Is it perhaps related to this change? 99e2cfd...5133af6

I'm able to work around the problem on rhel7 for now by downgrading:

$ sudo yum downgrade container-selinux-2.74-1.el7

By the way, this "permission denied" problem does not exist on CentOS 7, I believe because container-selinux there only goes up to version 2.74.

Request: add user docs

@rhatdan

Could you add docs that would be useful for end users, including tips such as chconning files to container_file_t or container_share_t?
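
In the meantime, a minimal sketch of the kind of tip such docs might cover (the paths are purely illustrative): label a host directory so containers can use it, either temporarily with chcon or persistently via the policy store.

# Temporary label change (lost on a full relabel):
chcon -R -t container_file_t /srv/webdata        # read/write for containers
chcon -R -t container_share_t /srv/shared-ro     # read-only sharing

# Persistent equivalent:
semanage fcontext -a -t container_file_t '/srv/webdata(/.*)?'
restorecon -R -v /srv/webdata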

container-selinux-2.10-1.el7.noarch conflicts with docker-engine-selinux-17.03.0.ce-1.el7.centos

When doing a yum install docker-engine-1.13.1-1.el7.centos, it installs a dependency package called docker-engine-selinux-17.03.0.ce-1.el7.centos (for some reason, instead of docker-engine-selinux-1.13.1-1.el7.centos).

Afterward, when I try to do a yum install container-selinux-2.10-1.el7.noarch from the RHEL-1.12 branch, I get an error:

Error: docker-engine-selinux conflicts with container-selinux-2.10-1.el7.noarch

If I install container-selinux-2.10-1.el7.noarch first and then do a yum install docker-engine-1.13.1-1.el7.centos, I also get a conflict error.

I found out that there's a >= in here which is why 1.13 pulls the newer 17.x as a dependency:

[root@n7-z01-0a2a0574 ~]# yum deplist docker-engine-1.13.1-1.el7.centos
package: docker-engine.x86_64 1.13.1-1.el7.centos
  dependency: /bin/sh
   provider: bash.x86_64 4.2.46-21.el7_3
  dependency: device-mapper-libs >= 1.02.90-1
   provider: device-mapper-libs.x86_64 7:1.02.135-1.el7_3.3
   provider: device-mapper-libs.i686 7:1.02.135-1.el7_3.3
  dependency: docker-engine-selinux >= 1.13.1-1.el7.centos

Does https://github.com/projectatomic/container-selinux/tree/RHEL-1.12 need to be updated to support docker-engine-selinux-17.xx.x?

AVCs on rhel-8

Hello,

We have detected a small number of AVCs on a newly deployed rhel-8 host with a bunch of containers:

type=AVC msg=audit(1562587793.241:215): avc:  denied  { read } for  pid=8208 comm="systemd-user-ru" name="libpod" dev="tmpfs" ino=73106 scontext=system_u:system_r:init_t:s0 tcontext=unconfined_u:object_r:container_runtime_tmpfs_t:s0 tclass=dir permissive=0
type=AVC msg=audit(1562587793.241:216): avc:  denied  { read } for  pid=8208 comm="systemd-user-ru" name="overlay-containers" dev="tmpfs" ino=73105 scontext=system_u:system_r:init_t:s0 tcontext=unconfined_u:object_r:container_runtime_tmpfs_t:s0 tclass=dir permissive=0
type=AVC msg=audit(1562587793.241:217): avc:  denied  { read } for  pid=8208 comm="systemd-user-ru" name="overlay-layers" dev="tmpfs" ino=73104 scontext=system_u:system_r:init_t:s0 tcontext=unconfined_u:object_r:container_runtime_tmpfs_t:s0 tclass=dir permissive=0

We have a couple of containers bind-mounting /dev - that's probably the main issue, although we can't do otherwise since the services running in those containers do need /dev access on the host.

Do you think we could provide a patch allowing init_t to access container_runtime_tmpfs_t? (Only "read" is listed here because permissive=0.) I'm not really sure it's a good idea, but I don't know what else to do to avoid this :/

Cheers,

C.
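
One way to experiment before deciding whether a policy change is warranted is the standard audit2allow workflow, which packages exactly the recorded denials into a removable local module (a sketch only, not an endorsement of letting init_t into container_runtime_tmpfs_t):

# Build and load a local module from the recorded denials:
ausearch -m avc --raw -ts recent | grep container_runtime_tmpfs_t | audit2allow -M init_container_tmpfs
semodule -i init_container_tmpfs.pp
# To back it out later:
semodule -r init_container_tmpfs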

AVC denied for unconfined_service_t transition to container_t

This issue seems similar to #61, but with newer versions of components

type=AVC msg=audit(1592923828.874:31296): avc:  denied  { transition } for  pid=23060 comm="runc:[2:INIT]" path="/usr/bin/container-entrypoint" dev="vdc" ino=5429068 scontext=system_u:system_r:unconfined_service_t:s0 tcontext=system_u:system_r:container_t:s0:c832,c955 tclass=process permissive=0
rpm -q runc
runc-1.0.0-67.rc10.el7_8.x86_64
rpm -q podman
podman-1.6.4-16.el7_8.x86_64
rpm -q container-selinux
container-selinux-2.119.1-1.c57a6f9.el7.noarch
cat /etc/redhat-release 
CentOS Linux release 7.8.2003 (Core)

audit2allow output:

#============= unconfined_service_t ==============

#!!!! The file '/usr/bin/container-entrypoint' is mislabeled on your system.  
#!!!! Fix with $ restorecon -R -v /usr/bin/container-entrypoint
allow unconfined_service_t container_t:process transition;

make install-policy on centos7

I am running CentOS 7 in Vagrant and encountering an error installing from source:

[vagrant@localhost container-selinux]$ make install-policy
make -f /usr/share/selinux/devel/Makefile container.pp
make[1]: Entering directory `/home/vagrant/container-selinux'
container.if:13: Error: duplicate definition of container_runtime_domtrans(). Original definition on 14.
container.if:60: Error: duplicate definition of container_runtime_exec(). Original definition on 33.
container.if:97: Error: duplicate definition of container_search_lib(). Original definition on 52.
container.if:116: Error: duplicate definition of container_exec_lib(). Original definition on 71.
container.if:135: Error: duplicate definition of container_read_lib_files(). Original definition on 90.
container.if:154: Error: duplicate definition of container_read_share_files(). Original definition on 109.
container.if:237: Error: duplicate definition of container_exec_share_files(). Original definition on 131.
container.if:274: Error: duplicate definition of container_manage_lib_files(). Original definition on 149.
container.if:331: Error: duplicate definition of container_manage_lib_dirs(). Original definition on 169.
container.if:367: Error: duplicate definition of container_lib_filetrans(). Original definition on 205.
container.if:385: Error: duplicate definition of container_read_pid_files(). Original definition on 223.
container.if:404: Error: duplicate definition of container_systemctl(). Original definition on 242.
container.if:429: Error: duplicate definition of container_rw_sem(). Original definition on 267.
container.if:466: Error: duplicate definition of container_use_ptys(). Original definition on 285.
container.if:484: Error: duplicate definition of container_filetrans_named_content(). Original definition on 303.
container.if:543: Error: duplicate definition of container_stream_connect(). Original definition on 336.
container.if:564: Error: duplicate definition of container_spc_stream_connect(). Original definition on 355.
container.if:585: Error: duplicate definition of container_admin(). Original definition on 376.
container.if:632: Error: duplicate definition of container_auth_domtrans(). Original definition on 441.
container.if:651: Error: duplicate definition of container_auth_exec(). Original definition on 460.
container.if:670: Error: duplicate definition of container_auth_stream_connect(). Original definition on 479.
container.if:689: Error: duplicate definition of container_runtime_typebounds(). Original definition on 498.
container.if:770: Error: duplicate definition of container_spc_read_state(). Original definition on 423.
Compiling targeted container module
/usr/bin/checkmodule:  loading policy configuration from tmp/container.tmp
container.te:290:ERROR 'syntax error' at token 'init_stop' on line 10274:
init_stop(container_runtime_domain)
 	
/usr/bin/checkmodule:  error(s) encountered while parsing configuration
make[1]: *** [tmp/container.mod] Error 1
make[1]: Leaving directory `/home/vagrant/container-selinux'
make: *** [container.pp] Error 2
[vagrant@localhost container-selinux]$ 

My provisioning script runs:

yum install -y \
    curl git socat wget \
    btrfs-progs-devel \
    libselinux-static \
    libseccomp-devel \
    libsemanage-devel \
    policycoreutils-devel \
    setools-devel \
    container-selinux

Is there something else I need to do to get this working on CentOS 7?
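
The duplicate-definition warnings likely come from the container-selinux package already installed on the box (its interface file is in the devel headers); the fatal part is the 'syntax error' at init_stop, which usually means the installed selinux-policy-devel headers do not define that interface, so the m4 macro is never expanded and checkmodule sees a bare token. One quick check:

rpm -q selinux-policy-devel
grep -rl init_stop /usr/share/selinux/devel/include/ || echo "init_stop not defined in installed headers"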

File context of /home/<USER>/.local/share/containers

Hello world!

I'm wondering if it is a good idea to relabel the /home/<USER>/.local/share/containers directory so that it has a more specific file context. By default it has the data_home_t context, so I imagine many user process domains can access it.
Quite a large list, actually: sesearch -A -t data_home_t -p write

I could also use another, dedicated user; that might be simpler to configure... but then I would be relying only on DAC permissions.

Here is the command I use:

semanage fcontext --add -e /var/lib/containers /home/<USER>/.local/share/containers && \
restorecon -RF /home/<USER>/.local/share/containers
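
For what it's worth, a quick way to inspect what the equivalence actually maps to, before and after the relabel (assuming the stock container-selinux file contexts for /var/lib/containers):

matchpathcon /home/<USER>/.local/share/containers
matchpathcon /home/<USER>/.local/share/containers/storage
ls -Zd /home/<USER>/.local/share/containers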

My current setup:

dnf list installed | grep -E 'selinux-policy|container-selinux|podman'
container-selinux.noarch                    2:2.124.0-3.fc31                   @updates               
podman.x86_64                               2:1.8.2-2.fc31                     @updates               
podman-plugins.x86_64                       2:1.8.2-2.fc31                     @updates               
selinux-policy.noarch                       3.14.4-49.fc31                     @updates               
selinux-policy-devel.noarch                 3.14.4-49.fc31                     @updates               
selinux-policy-targeted.noarch              3.14.4-49.fc31                     @updates

Regards :)

Disable --security-opt and --privileged

I've successfully configured SELinux for Docker and it's working as expected. My primary goal is to allow an unprivileged developer (except for being a member of the docker group) to use Docker on a virtual machine. I want to prevent the user from gaining root access to the host filesystem by mounting it into a container. SELinux seems to get me there halfway, as by default the container is labeled correctly and access to the root filesystem is denied:

$ docker run -ti --rm -v /:/host ubuntu:18.04 touch /host/foo
touch: cannot touch '/host/foo': Permission denied

However, the user is free to pass --security-opt label:disable, which effectively disables SELinux confinement for the given container:

$ docker run -ti --rm -v /:/host --security-opt label:disable ubuntu:18.04 touch /host/foo
# succeeds

Is there a way to enforce labeling and prevent the user from launching a container in privileged mode?

Unable to copy files using Python across certain contexts

Problem: Python raises a permission denied error (EACCES) and aborts while trying to legitimately relabel files within a container.

Environment:

  • Host System: Fedora 31 with enforced SELinux
  • Container Daemon: podman (unprivileged)
  • image: fedora:latest
  • ext4 loop volume handed in as /relabel_bug

podman setup:

podman run -v '/home/test/relabel_bug:/relabel_bug:Z' -it fedora:latest /bin/bash

mounted volumes:

# mount
fuse-overlayfs on / type fuse.fuse-overlayfs (rw,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_other)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,context="system_u:object_r:container_file_t:s0:c552,c859",size=65536k,mode=755,uid=1000,gid=1000)
/home/test/relabel_bug_container on /relabel_bug type ext4 (rw,relatime,seclabel)
tmpfs on /run/secrets type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=798884k,mode=700,uid=1000,gid=1000)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime,seclabel)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,context="system_u:object_r:container_file_t:s0:c552,c859",gid=100004,mode=620,ptmxmode=666)
tmpfs on /etc/resolv.conf type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=798884k,mode=700,uid=1000,gid=1000)
tmpfs on /etc/hosts type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=798884k,mode=700,uid=1000,gid=1000)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,context="system_u:object_r:container_file_t:s0:c552,c859",size=64000k,uid=1000,gid=1000)
tmpfs on /etc/hostname type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=798884k,mode=700,uid=1000,gid=1000)
tmpfs on /run/.containerenv type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=798884k,mode=700,uid=1000,gid=1000)
cgroup2 on /sys/fs/cgroup type cgroup2 (ro,nosuid,nodev,noexec,relatime,seclabel,nsdelegate)
devtmpfs on /dev/null type devtmpfs (rw,nosuid,seclabel,size=3974416k,nr_inodes=993604,mode=755)
devtmpfs on /dev/zero type devtmpfs (rw,nosuid,seclabel,size=3974416k,nr_inodes=993604,mode=755)
devtmpfs on /dev/full type devtmpfs (rw,nosuid,seclabel,size=3974416k,nr_inodes=993604,mode=755)
devtmpfs on /dev/tty type devtmpfs (rw,nosuid,seclabel,size=3974416k,nr_inodes=993604,mode=755)
devtmpfs on /dev/random type devtmpfs (rw,nosuid,seclabel,size=3974416k,nr_inodes=993604,mode=755)
devtmpfs on /dev/urandom type devtmpfs (rw,nosuid,seclabel,size=3974416k,nr_inodes=993604,mode=755)
tmpfs on /proc/acpi type tmpfs (ro,relatime,context="system_u:object_r:container_file_t:s0:c552,c859",size=0k,uid=1000,gid=1000)
devtmpfs on /proc/kcore type devtmpfs (rw,nosuid,seclabel,size=3974416k,nr_inodes=993604,mode=755)
devtmpfs on /proc/keys type devtmpfs (rw,nosuid,seclabel,size=3974416k,nr_inodes=993604,mode=755)
devtmpfs on /proc/latency_stats type devtmpfs (rw,nosuid,seclabel,size=3974416k,nr_inodes=993604,mode=755)
devtmpfs on /proc/timer_list type devtmpfs (rw,nosuid,seclabel,size=3974416k,nr_inodes=993604,mode=755)
devtmpfs on /proc/sched_debug type devtmpfs (rw,nosuid,seclabel,size=3974416k,nr_inodes=993604,mode=755)
tmpfs on /proc/scsi type tmpfs (ro,relatime,context="system_u:object_r:container_file_t:s0:c552,c859",size=0k,uid=1000,gid=1000)
tmpfs on /sys/firmware type tmpfs (ro,relatime,context="system_u:object_r:container_file_t:s0:c552,c859",size=0k,uid=1000,gid=1000)
tmpfs on /sys/fs/selinux type tmpfs (ro,relatime,context="system_u:object_r:container_file_t:s0:c552,c859",size=0k,uid=1000,gid=1000)
proc on /proc/asound type proc (ro,relatime)
proc on /proc/bus type proc (ro,relatime)
proc on /proc/fs type proc (ro,relatime)
proc on /proc/irq type proc (ro,relatime)
proc on /proc/sys type proc (ro,relatime)
proc on /proc/sysrq-trigger type proc (ro,relatime)
devpts on /dev/console type devpts (rw,nosuid,noexec,relatime,context="system_u:object_r:container_file_t:s0:c552,c859",gid=100004,mode=620,ptmxmode=666)

permissions on the target directory:

# ls -lZd /relabel_bug
drwxr-xr-x. 2 root root system_u:object_r:container_file_t:s0:c552,c859 1024 Nov 15 11:00 /relabel_bug

create file on a different device:

# touch /tmp/some_file

use shutil.copy2 (invoked by many shutil functions) to copy the file

# python3
Python 3.7.4 (default, Aug 12 2019, 14:45:07) 
[GCC 9.1.1 20190605 (Red Hat 9.1.1-2)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import shutil
>>> shutil.copy2('/tmp/some_file', '/relabel_bug/failure')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python3.7/shutil.py", line 267, in copy2
    copystat(src, dst, follow_symlinks=follow_symlinks)
  File "/usr/lib64/python3.7/shutil.py", line 209, in copystat
    _copyxattr(src, dst, follow_symlinks=follow)
  File "/usr/lib64/python3.7/shutil.py", line 165, in _copyxattr
    os.setxattr(dst, name, value, follow_symlinks=follow_symlinks)
PermissionError: [Errno 13] Permission denied: '/relabel_bug/failure'
# ls -lZ /relabel_bug/
total 3
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c552,c859 0 Nov 15 11:00 failure

SE Alert

type=AVC msg=audit(1573815617.682:1332): avc:  denied  { relabelto } for  pid=3157530 comm="python3" name="failure" dev="loop1" ino=12 scontext=system_u:system_r:container_t:s0:c552,c859 tcontext=system_u:object_r:fusefs_t:s0 tclass=file permissive=0


Hash: python3,container_t,fusefs_t,file,relabelto

Update 1: Corrected the described python invocation (copy -> copy2 )
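
If the xattr copy is indeed the trigger (copy2 calls copystat, which calls _copyxattr, which does os.setxattr of security.selinux, and SELinux treats that as a relabel), then a copy that skips extended attributes should succeed in the same location. A quick check inside the same container; shutil.copy and plain cp copy data and mode but not xattrs, so no relabelto is attempted:

cp /tmp/some_file /relabel_bug/should_work
python3 -c "import shutil; shutil.copy('/tmp/some_file', '/relabel_bug/should_work_too')"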

Fedora rawhide: container-selinux-2:2.120.1-0.2.dev.gita233788.fc32 breaks podman run

container-selinux-2:2.120.1-0.2.dev.gita233788.fc32 on Fedora Rawhide breaks podman run:

# podman run alpine date  [note: no output. Expected date]
# echo $?
127

It's an AVC:

type=AVC msg=audit(1574380012.187:1499): avc:  denied  { read } for  pid=21249 comm="date" path="/lib/ld-musl-x86_64.so.1" dev="vda1" ino=266990 scontext=system_u:system_r:container_t:s0:c29,c643 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=0
type=AVC msg=audit(1574380012.187:1500): avc:  denied  { read } for  pid=21249 comm="date" path="/bin/busybox" dev="vda1" ino=266829 scontext=system_u:system_r:container_t:s0:c29,c643 tcontext=unconfined_u:object_r:var_lib_t:s0 tclass=file permissive=0

container-selinux-2.120.1-0.1.dev.git6fb6dcf.fc32 works fine.
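
The denials show image content under /var/lib/containers ending up as var_lib_t rather than a container type. One hedged way to narrow this down is to compare the actual labels with what the installed file contexts say they should be; if the on-disk labels are correct and only the policy's rules changed, a restorecon dry run will report nothing to do:

ls -Zd /var/lib/containers /var/lib/containers/storage
matchpathcon /var/lib/containers/storage
restorecon -R -n -v /var/lib/containers | head    # -n: report only, change nothing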

What's the best practice to upgrade SELinux?

With SELinux enabled, after upgrading container-selinux from 2.9 to 2.68 I hit the error below in docker:

# docker run --rm -v /opt/kubernetes/:/data:z myimage:latest sh -c 'cp -f /hyperkube /data/'
sh: error while loading shared libraries: libtinfo.so.5: cannot open shared object file: No such file or directory

From https://github.com/containers/container-selinux/blob/master/contrib/container-selinux.spec#L104, it seems restorecon -R -v /var/lib/docker is skipped on upgrade. After I manually ran restorecon -R -v /var/lib/docker, the issue above was resolved.

And referring to #60 (comment): does that mean I have to manually run restorecon -R -v /var/lib/docker each time after upgrading SELinux?

Why is restorecon -R -v /var/lib/docker skipped, and what's the best practice for upgrading SELinux?

installation of v2.143.0 rpm from advisory=FEDORA-2020-04374575bf doesn't seem to be using updated policy from this repo

Trying to validate the RPM installation on Fedora 32 for the recently tagged v2.143.0 as described on bodhi:

dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2020-04374575bf

And it does not seem to have picked up the policy in this repo.

This Vagrantfile is my validation harness: https://github.com/containerd/containerd/blob/586061508e1dd1ba5305778265f967ba5a409bcc/Vagrantfile
After spinning up the VM via vagrant up I connect to it and run the installation line from bodhi, and seemingly have the correct RPM via dnf info container-selinux:

Last metadata expiration check: 0:02:59 ago on Fri 07 Aug 2020 11:39:21 PM UTC.
Installed Packages
Name         : container-selinux
Epoch        : 2
Version      : 2.143.0
Release      : 1.fc32
Architecture : noarch
Size         : 45 k
Source       : container-selinux-2.143.0-1.fc32.src.rpm
Repository   : @System
From repo    : updates-testing
Summary      : SELinux policies for container runtimes
URL          : https://github.com/containers/container-selinux
License      : GPLv2
Description  : SELinux policy modules for use with container runtimes.

I then run the test-cri provisioner with SELinux Enforcing invoking:

SELINUX=Enforcing vagrant up --provision-with=selinux,test-cri

This starts containerd and then runs critest against it. Prior to my fix that should be in v2.143.0 I would see critest suite failures like this:

    default: Summarizing 11 Failures:
    default: 
    default: [Fail] 
    default: [k8s.io] Security Context NamespaceOption [It] runtime should support HostPID 
    default: /root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/container.go:389
    default: [Fail] [k8s.io] Security Context NamespaceOption [It] runtime should support ContainerPID 
    default: /root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/container.go:389
    default: 
    default: [Fail] [k8s.io] Multiple Containers [Conformance] when running multiple containers in a pod [It] should support container exec 
    default: /root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/container.go:389
    default: 
    default: [Fail] [k8s.io] Security Context NamespaceOption [It] runtime should support PodPID 
    default: /root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/container.go:389
    default: 
    default: [Fail] 
    default: [k8s.io] Streaming runtime should support streaming interfaces [It] runtime should support portforward in host network 
    default: /root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/networking.go:253
    default: 
    default: [Fail] [k8s.io] Multiple Containers [Conformance] when running multiple containers in a pod 
    default: [It] should support network 
    default: /root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/networking.go:253
    default: 
    default: [Fail] [k8s.io] Networking runtime should support networking [It] runtime should support port mapping with host port and container port [Conformance] 
    default: /root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/networking.go:253
    default: 
    default: [Fail] [k8s.io] Security Context NamespaceOption 
    default: [It] runtime should support HostIpc is true 
    default: /root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/security_context_linux.go:159
    default: 
    default: [Fail] [k8s.io] Streaming runtime should support streaming interfaces [It] runtime should support portforward [Conformance] 
    default: /root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/networking.go:253
    default: 
    default: [Fail] 
    default: [k8s.io] Networking runtime should support networking [It] runtime should support port mapping with only container port [Conformance] 
    default: /root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/networking.go:253
    default: 
    default: [Fail] [k8s.io] Multiple Containers [Conformance] when running multiple containers in a pod [It] should support container log 
    default: /root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/multi_container_linux.go:95
    default: 
    default: Ran 88 of 95 Specs in 246.619 seconds
    default: FAIL! -- 77 Passed | 11 Failed | 0 Pending | 7 Skipped

And this is what I still see with this RPM installed. This is surprising to me. When I install the policy from this repo via source (after installing selinux-policy-devel and bzip2) by running make && sudo make install-policy I get the results that I expect, e.g.:

    default: Summarizing 1 Failure:
    default: 
    default: [Fail] [k8s.io] Security Context NamespaceOption 
    default: [It] runtime should support HostIpc is true 
    default: /root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/security_context_linux.go:159
    default: 
    default: Ran 88 of 95 Specs in 65.534 seconds
    default: FAIL! -- 87 Passed | 1 Failed | 0 Pending | 7 Skipped

Is there something wrong with how the RPM is being built? This build log seems to indicate that the wrong tag is being used: https://kojipkgs.fedoraproject.org//packages/container-selinux/2.143.0/1.fc32/data/logs/noarch/build.log
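
Two checks that can help confirm what the installed RPM actually shipped (the second assumes the module lands under /usr/share/selinux/packages, which is where the spec normally places it):

rpm -q --changelog container-selinux | head -n 5    # which commit/tag the build claims to come from
rpm -ql container-selinux | grep '\.pp'             # where the shipped policy module lives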

error building master branch on rhel 7.3

@rhatdan @wrabcak ptal

 Compiling targeted docker module
/usr/bin/checkmodule:  loading policy configuration from tmp/docker.tmp
docker.te:277:ERROR 'syntax error' at token 'fs_rw_nsfs_files' on line 11984:
fs_rw_nsfs_files(docker_t)

/usr/bin/checkmodule:  error(s) encountered while parsing configuration
make[1]: *** [tmp/docker.mod] Error 1
make[1]: Leaving directory `/home/lsm5/repositories/pkgs/docker/BUILD/docker-f9d4a2c183cb4ba202babc9f8649ea043d8c84d0/docker-selinux-dba8e033744889ffc44e5ddc68ce55e27d0e619c'
make: *** [docker.pp] Error 2
error: Bad exit status from /var/tmp/rpm-tmp.U0ERwG (%build)


RPM build errors:
    Bad exit status from /var/tmp/rpm-tmp.U0ERwG (%build)



$ rpm -q selinux-policy
selinux-policy-3.13.1-93.el7.noarch

Running `ipcs` from busybox doesn't work inside container

Hi @rhatdan,

Running the ipcs command from busybox doesn't work inside a Docker container. The one from util-linux works, but not the one from busybox.

It turns out their implementations differ: the one from busybox starts by calling shmctl(0, SHM_INFO, ...), while the one from util-linux tries the /proc files first and only falls back to IPC_INFO if that fails.

This is the end result:

$ docker run busybox ipcs

kernel not configured for message queues

kernel not configured for shared memory

kernel not configured for semaphores

While the same works when running from a container that ships util-linux:

$ docker run fedora ipcs

------ Message Queues --------
key        msqid      owner      perms      used-bytes   messages    

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status      

------ Semaphore Arrays --------
key        semid      owner      perms      nsems     

Considering the functionality itself is not blocked (and accessing that information from the files under /proc/sysvipc/ works), I think this should be corrected by allowing IPC info inside a container.

I'll send a PR shortly that fixes this issue.

/cc @Random-Liu since this came up in kubernetes/kubernetes#58174
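
For reference while the PR is pending, a hedged sketch of the kind of allow this implies, assuming the kernel hook for shmctl(IPC_INFO/SHM_INFO) is checked as system:ipc_info against the kernel SID (in refpolicy terms, roughly what kernel_get_sysvipc_info() grants). As a throwaway local test module, not necessarily what the PR will do:

cat > container_ipcinfo.te <<'EOF'
module container_ipcinfo 1.0;

require {
	type container_t;
	type kernel_t;
	class system ipc_info;
}

# Hypothetical local test rule; a real fix belongs in container.te.
allow container_t kernel_t:system ipc_info;
EOF
checkmodule -M -m -o container_ipcinfo.mod container_ipcinfo.te
semodule_package -o container_ipcinfo.pp -m container_ipcinfo.mod
semodule -i container_ipcinfo.pp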

Relabeling blocked when using NoNewPrivileges

Creating a container with NoNewPrivileges and an SELinux label causes the transition to fail:

type=SELINUX_ERR msg=audit(1515446105.513:1559): op=security_bounded_transition seresult=denied oldcontext=system_u:system_r:container_runtime_t:s0 newcontext=system_u:system_r:spc_t:s0:c0,c1
type=SYSCALL msg=audit(1515446105.513:1559): arch=c000003e syscall=59 success=no exit=-1 a0=c4200d9670 a1=c420037910 a2=c42000dda0 a3=0 items=0 ppid=31738 pid=31748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc:[2:INIT]" exe="/usr/bin/docker-runc" subj=system_u:system_r:container_runtime_t:s0 key=(null)
[root@localhost server]# rpm -qa | grep container-selinux
container-selinux-2.33-1.git86f33cd.el7.noarch

non-fatal error messages while running rpmbuild

gawk: fatal: cannot open file `/etc/selinux/config' for reading (Permission denied)
docker.if:14: Error: duplicate definition of docker_domtrans(). Original definition on 14.
docker.if:33: Error: duplicate definition of docker_exec(). Original definition on 33.
docker.if:52: Error: duplicate definition of docker_search_lib(). Original definition on 52.
docker.if:71: Error: duplicate definition of docker_exec_lib(). Original definition on 71.
docker.if:90: Error: duplicate definition of docker_read_lib_files(). Original definition on 90.
docker.if:109: Error: duplicate definition of docker_read_share_files(). Original definition on 109.
docker.if:128: Error: duplicate definition of docker_manage_lib_files(). Original definition on 149.
docker.if:148: Error: duplicate definition of docker_manage_lib_dirs(). Original definition on 169.
docker.if:184: Error: duplicate definition of docker_lib_filetrans(). Original definition on 205.
docker.if:202: Error: duplicate definition of docker_read_pid_files(). Original definition on 223.
docker.if:221: Error: duplicate definition of docker_systemctl(). Original definition on 242.
docker.if:246: Error: duplicate definition of docker_rw_sem(). Original definition on 267.
docker.if:264: Error: duplicate definition of docker_use_ptys(). Original definition on 285.
docker.if:282: Error: duplicate definition of docker_filetrans_named_content(). Original definition on 303.
docker.if:315: Error: duplicate definition of docker_stream_connect(). Original definition on 336.
docker.if:334: Error: duplicate definition of docker_spc_stream_connect(). Original definition on 355.
docker.if:355: Error: duplicate definition of docker_admin(). Original definition on 376.
docker.if:392: Error: duplicate definition of domain_stub_named_filetrans_domain(). Original definition on 1605.
docker.if:398: Error: duplicate definition of lvm_stub(). Original definition on 14.
docker.if:403: Error: duplicate definition of staff_stub(). Original definition on 13.
docker.if:408: Error: duplicate definition of virt_stub_lxc(). Original definition on 13.
docker.if:413: Error: duplicate definition of virt_stub_svirt_sandbox_domain(). Original definition on 29.
docker.if:418: Error: duplicate definition of virt_stub_svirt_sandbox_file(). Original definition on 45.
docker.if:423: Error: duplicate definition of fs_dontaudit_remount_tmpfs(). Original definition on 4943.
docker.if:430: Error: duplicate definition of dev_dontaudit_list_all_dev_nodes(). Original definition on 221.
docker.if:437: Error: duplicate definition of kernel_unlabeled_entry_type(). Original definition on 3906.
docker.if:444: Error: duplicate definition of kernel_unlabeled_domtrans(). Original definition on 3885.
docker.if:453: Error: duplicate definition of files_write_all_pid_sockets(). Original definition on 7852.
docker.if:460: Error: duplicate definition of dev_dontaudit_mounton_sysfs(). Original definition on 4553.
docker.if:478: Error: duplicate definition of docker_auth_domtrans(). Original definition on 423.
docker.if:497: Error: duplicate definition of docker_auth_exec(). Original definition on 442.
docker.if:516: Error: duplicate definition of docker_auth_stream_connect(). Original definition on 479.
gawk: fatal: cannot open file `/etc/selinux/config' for reading (Permission denied)
Compiling  docker module
/usr/bin/checkmodule:  loading policy configuration from tmp/docker.tmp
/usr/bin/checkmodule:  policy configuration loaded
/usr/bin/checkmodule:  writing binary representation (version 17) to tmp/docker.tmp
gawk: fatal: cannot open file `/etc/selinux/config' for reading (Permission denied)

container_t forbidden from container_runtime_tmpfs_t

I encountered this issue when trying to run the envoyproxy/envoy:v1.14.1 docker container. Envoy creates a shared memory file in /dev/shm/envoy_shared_memory_0, which gets mapped to a file of the form /var/lib/docker/containers/[^/]+/mounts/shm/envoy_shared_memory_0 on the host machine using the docker container runtime. This file is labeled as container_runtime_tmpfs_t, which appears to be the expected behavior and is consistent with other container runtimes.

However, processes of type container_t are not permitted to access files of type container_runtime_tmpfs_t.

Adding the following type enforcement policy resolves the issue for me, as does setting the container process to run as spc_t, but I was wondering if this was a bug or if there is a particular reason that these operations are normally forbidden?

require {
	type container_runtime_tmpfs_t;
	type container_t;
	class dir { add_name remove_name read write };
	class file { create open read write };
}

allow container_t container_runtime_tmpfs_t:dir { add_name remove_name read write };
allow container_t container_runtime_tmpfs_t:file { create open read write };
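
For anyone who wants to try the snippet above as a local module while this is discussed, the usual build and load steps are (after adding a `module envoy_shm 1.0;` header above the require block and saving it as envoy_shm.te):

checkmodule -M -m -o envoy_shm.mod envoy_shm.te
semodule_package -o envoy_shm.pp -m envoy_shm.mod
semodule -i envoy_shm.pp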

staff_u rootless podman

Hello again, I'm stuck trying to run rootless podman as staff_u...

First, here is my configuration:

uname -a
Linux desktop 5.5.11-200.fc31.x86_64 #1 SMP Mon Mar 23 17:32:43 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

dnf list installed | grep -E 'selinux-policy|container-selinux|podman'
container-selinux.noarch                    2:2.124.0-3.fc31                   @updates               
podman.x86_64                               2:1.8.2-2.fc31                     @updates               
podman-plugins.x86_64                       2:1.8.2-2.fc31                     @updates               
selinux-policy.noarch                       3.14.4-49.fc31                     @updates               
selinux-policy-devel.noarch                 3.14.4-49.fc31                     @updates               
selinux-policy-targeted.noarch              3.14.4-49.fc31                     @updates               

getenforce 
Enforcing

podman version
Version:            1.8.2
RemoteAPI Version:  1
Go Version:         go1.13.6
OS/Arch:            linux/amd64

My first issue

I first tried a simple podman version as a non-root user confined as the staff_u SELinux user.
But no luck: the command just hangs. I immediately suspected an SELinux problem, so to validate it I did:
setenforce 0 && podman version && setenforce 1, and it worked.

I know I'm running as a confined user, so I tried:
semanage permissive -a staff_t && podman version && semanage permissive -d staff_t, and again it worked.

Great, so it might be a problem in staff.te of the fedora-selinux/selinux-policy GitHub project...

I enabled "debug" mode with semodule -DB and found these AVCs:

----
type=AVC msg=audit(03/29/2020 21:04:53.601:1049) : avc:  denied  { noatsecure } for  pid=202831 comm=bash scontext=staff_u:staff_r:staff_t:s0 tcontext=staff_u:staff_r:container_runtime_t:s0 tclass=process permissive=0 
----
type=AVC msg=audit(03/29/2020 21:04:53.601:1050) : avc:  denied  { rlimitinh } for  pid=202831 comm=podman scontext=staff_u:staff_r:staff_t:s0 tcontext=staff_u:staff_r:container_runtime_t:s0 tclass=process permissive=0 
----
type=AVC msg=audit(03/29/2020 21:04:53.601:1051) : avc:  denied  { siginh } for  pid=202831 comm=podman scontext=staff_u:staff_r:staff_t:s0 tcontext=staff_u:staff_r:container_runtime_t:s0 tclass=process permissive=0 
----
type=AVC msg=audit(03/29/2020 21:04:53.649:1052) : avc:  denied  { sys_ptrace } for  pid=1192 comm=systemd capability=sys_ptrace  scontext=staff_u:staff_r:staff_t:s0 tcontext=staff_u:staff_r:staff_t:s0 tclass=cap_userns permissive=0

So I made a quick patch to allow the denied permissions with this:

cat mypodman.te 
policy_module(mypodman, 1.0)
require {
	type container_runtime_t;
	type staff_t;
	class process { noatsecure rlimitinh siginh };
	class cap_userns sys_ptrace;
}
#============= staff_t ==============
allow staff_t container_runtime_t:process { noatsecure rlimitinh siginh };
allow staff_t self:cap_userns sys_ptrace;

semodule -i mypodman.pp && podman version: it works, awesome!

Next issue with a podman container ls

Again, the command podman container ls hangs :(
semanage permissive -a staff_t && podman container ls && semanage permissive -d staff_t: it works again...
But this time nothing relevant shows up with ausearch -i -m avc -m user_avc

Last issue with a podman run -it fedora:31 bash

The following error occurs:
{"msg":"exec container process /usr/bin/bash: Permission denied","level":"error","time":"2020-03-30T09:07:45.000757281Z"}

Here are the relevant findings:

find /home/user/.local/share/containers/ -name bash -type f
/home/user/.local/share/containers/storage/overlay/xxx/diff/usr/bin/bash

ll -Z /home/user/.local/share/containers/storage/overlay/xxx/diff/usr/bin/bash
-rwxr-xr-x. 1 user user staff_u:object_r:data_home_t:s0 1203992 Dec  6 13:08 /home/user/.local/share/containers/storage/overlay/xxx/diff/usr/bin/bash

mount | grep /home
/home type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)

type=AVC msg=audit(03/30/2020 11:07:45.756:685) : avc:  denied  { transition } for  pid=190930 comm=3 path=/usr/bin/bash dev="fuse" ino=67181813 scontext=staff_u:staff_r:container_runtime_t:s0 tcontext=system_u:system_r:container_t:s0:c433,c907 tclass=process permissive=0

sesearch -A -s container_runtime_t -t container_t -p transition
allow container_runtime_domain container_domain:process { dyntransition transition };

I can't figure out why the rule does not apply here even though it is present...

It works again when I do semanage permissive -a container_runtime_t, but that's not a perfect solution.

Thank you again for your time :)

error building master branch on centos

@rhatdan, @wrabcak I'm trying to build the master branch on CentOS 7 (basically rebuilding fedora rawhide rpm source), but I see this:

Compiling targeted docker module
/usr/bin/checkmodule:  loading policy configuration from tmp/docker.tmp
docker.te":296:ERROR 'syntax error' at token 'systemd_dbus_chat_machined' on line 14377:
#line 296
        systemd_dbus_chat_machined(docker_t)
/usr/bin/checkmodule:  error(s) encountered while parsing configuration
make[1]: *** [tmp/docker.mod] Error 1
make[1]: Leaving directory `/home/lsm5/repositories/pkgs/docker/BUILD/docker-4ddbd3d6b9070a0693f14aaa40f3d84af05c7e85/docker-selinux-7c94597ac663c7ea624cc30fbb31faa49cd93afd'
make: *** [docker.pp] Error 2
error: Bad exit status from /var/tmp/rpm-tmp.gg119r (%build)


RPM build errors:
    Bad exit status from /var/tmp/rpm-tmp.gg119r (%build)

Any idea if this is because of selinux-policy version mismatch or should I be using the rhel branch only for building on CentOS?

container_t can't open existing tap devices

I ran across this while trying to use, from within a container, a tap device that was pre-created via CNI:

type=AVC msg=audit(1599601918.051:599720): avc:  denied  { relabelfrom } for  pid=12147 comm="qemu-system-x86" scontext=system_u:system_r:container_t:s0:c726,c911 tcontext=system_u:system_r:container_t:s0:c726,c911 tclass=tun_socket permissive=1
type=AVC msg=audit(1599601918.051:599720): avc:  denied  { relabelto } for  pid=12147 comm="qemu-system-x86" scontext=system_u:system_r:container_t:s0:c726,c911 tcontext=system_u:system_r:container_t:s0:c726,c911 tclass=tun_socket permissive=1

I tracked it down to the way /dev/net/tun works - if you open /dev/net/tun and do ioctl TUNSETIFF to get a socket for the desired tap/tun device, the kernel checks to ensure you have "relabelto" and "relabelfrom" first, and then copies the selinux label to the new socket that will be used to communicate with your existing tun/tap.

See:
https://github.com/torvalds/linux/blob/34d4ddd359dbcdf6c5fb3f85a179243d7a1cb7f8/security/selinux/hooks.c#L5493

This will be critical for running VMs inside of containers in cases where we want to strip the container of NET_ADMIN and let it use pre-created ones. Note that with NET_ADMIN a container is allowed to create new tun/taps, but still can't open existing ones per SELinux.

The fix may be as simple as:

allow container_t self:tun_socket { relabelfrom relabelto };

However, I'm not familiar with this code base, how to test it, or where the appropriate place to put this would be. At first glance I'd just put it in container.te under the container_t local policy section.

Question about MLS support on branch "RHEL-1.12"

Hello,
I am new to SELinux and I would like to use docker with the MLS policy.
Unfortunately, I keep hitting the same issue where the MLS SystemHigh and SystemLow levels do not match the target object.

I tried to change the following statement without success

ifdef(`enable_mls',`
        init_ranged_daemon_domain(container_runtime_t, container_runtime_exec_t, mls_systemhigh)
')
  • when I use s15 (mls_systemhigh) for container_runtime_t, I get issues with container_share_t, container_var_lib_t, and so on
  • and when I use s0 (mls_systemlow) for container_runtime_t, I get an issue with proc_kcore_t:s15:c0.c1023

Example of deny message:

time->Sun Feb 25 11:16:37 2018
type=PROCTITLE msg=audit(1519553797.524:1475): proctitle=2F70726F632F73656C662F65786500696E6974
type=SYSCALL msg=audit(1519553797.524:1475): arch=c000003e syscall=165 success=yes exit=0 a0=c4200c5e60 a1=c4200c5e70 a2=c4200c5e6a a3=1000 items=0 ppid=10889 pid=10898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc:[2:INIT]" exe="/usr/bin/docker-runc" subj=system_u:system_r:container_runtime_t:s0 key=(null)
type=AVC msg=audit(1519553797.524:1475): avc:  denied  { mounton } for  pid=10898 comm="runc:[2:INIT]" path="/proc/kcore" dev="proc" ino=4026532033 scontext=system_u:system_r:container_runtime_t:s0 tcontext=system_u:object_r:proc_kcore_t:s15:c0.c1023 tclass=file

Could you please provide some guidance (or reading) about this problem?
I just would like to understand the problem here.

I also tried audit2allow just for testing and validation.
But it failed with the following message in the generated .te file:

#       Possible cause is the source level (s15:c0.c1023) and target level (s0) are different.
allow container_runtime_t container_var_lib_t:file write;

# !!!! This avc is a constraint violation.  You would need to modify the attributes of either the source or target types to allow this access.
mlsconstrain ...

Podman exec does not work in system cronjobs

Adding

* * * * * root podman exec test sleep 1

to /etc/cron.d/some_cron

results in the following denial:

SELinux is preventing / from using the transition access on a process.

*****  Plugin restorecon_source (99.5 confidence) suggests   *****************

If you want to fix the label. 
/ default label should be default_t.
Then you can run restorecon.
Do
# /sbin/restorecon -v /

*****  Plugin catchall (1.49 confidence) suggests   **************************

If you believe that  should be allowed transition access on processes labeled container_t by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'runc:[2:INIT]' --raw | audit2allow -M my-runc2INIT
# semodule -X 300 -i my-runc2INIT.pp


Additional Information:
Source Context                system_u:system_r:system_cronjob_t:s0
Target Context                system_u:system_r:container_t:s0:c335,c493
Target Objects                /usr/bin/bash [ process ]
Source                        runc:[2:INIT]
Source Path                   /
Port                          <Unknown>
Host                          xxx
Source RPM Packages           filesystem-3.8-2.el8.x86_64
Target RPM Packages           bash-4.4.19-10.el8.x86_64
Policy RPM                    selinux-policy-3.14.3-20.el8.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     xxx
Platform                      Linux xxx
                              4.18.0-147.8.1.el8_1.x86_64 #1 SMP Thu Apr 9
                              13:49:54 UTC 2020 x86_64 x86_64
Alert Count                   37
First Seen                    2020-06-17 04:12:47 CEST
Last Seen                     2020-07-23 04:09:38 CEST
Local ID                      846ef433-db39-4832-8af1-fc884f8fb87c

Raw Audit Messages
type=AVC msg=audit(1595470178.631:224251): avc:  denied  { transition } for  pid=92842 comm="runc:[2:INIT]" path="/usr/bin/bash" dev="overlay" ino=6947485 scontext=system_u:system_r:system_cronjob_t:s0 tcontext=system_u:system_r:container_t:s0:c335,c493 tclass=process permissive=0


type=SYSCALL msg=audit(1595470178.631:224251): arch=x86_64 syscall=execve success=no exit=EACCES a0=c0000e9f30 a1=c0000e7dc0 a2=c00015b890 a3=0 items=0 ppid=92832 pid=92842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=runc:[2:INIT] exe=/ subj=system_u:system_r:system_cronjob_t:s0 key=(null)

Hash: runc:[2:INIT],system_cronjob_t,container_t,process,transition

If I add the same cron entry via crontab -e, it works.

Unable to build .pp file or RPM from RHEL branch or latest tag

My build server, where I'm trying to build the .pp file or RPM package, has these components:

[root@n7-z01-0a2a0576 yum.repos.d]# uname -r
3.10.0-514.10.2.el7.x86_64
[root@n7-z01-0a2a0576 yum.repos.d]# yum list installed|grep selinux
libselinux.x86_64               2.5-6.el7                          @os/7.0
libselinux-python.x86_64        2.5-6.el7                          @os/7.0
libselinux-utils.x86_64         2.5-6.el7                          @os/7.0
selinux-policy.noarch           3.13.1-102.el7_3.15                @base
selinux-policy-devel.noarch     3.13.1-102.el7_3.15                @base
selinux-policy-targeted.noarch  3.13.1-102.el7_3.15                @base
[root@n7-z01-0a2a0576 yum.repos.d]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)

However, when I try to build the RPM from either of the RHEL branches, or master, I get a syntax error and the .bz2 file does not get built. Here's an example:

[root@n7-z01-0a2a0576 container-selinux-0.1.0]# git checkout RHEL-1.12
Previous HEAD position was 8f8caa6... Bump to v2.10
Switched to branch 'RHEL-1.12'
[root@n7-z01-0a2a0576 container-selinux-0.1.0]# git pull
Already up-to-date.
[root@n7-z01-0a2a0576 container-selinux-0.1.0]# make clean
rm -f *~  *.tc *.pp *.pp.bz2
rm -rf tmp *.tar.gz
[root@n7-z01-0a2a0576 container-selinux-0.1.0]# make
make -f /usr/share/selinux/devel/Makefile container.pp
make[1]: Entering directory `/root/container-selinux-0.1.0'
container.if:548: Error: duplicate definition of docker_exec_lib(). Original definition on 71.
container.if:552: Error: duplicate definition of docker_read_share_files(). Original definition on 109.
container.if:556: Error: duplicate definition of docker_exec_share_files(). Original definition on 131.
container.if:560: Error: duplicate definition of docker_manage_lib_files(). Original definition on 149.
container.if:565: Error: duplicate definition of docker_manage_lib_dirs(). Original definition on 169.
container.if:569: Error: duplicate definition of docker_lib_filetrans(). Original definition on 205.
container.if:573: Error: duplicate definition of docker_read_pid_files(). Original definition on 223.
container.if:577: Error: duplicate definition of docker_systemctl(). Original definition on 242.
container.if:581: Error: duplicate definition of docker_use_ptys(). Original definition on 285.
container.if:585: Error: duplicate definition of docker_stream_connect(). Original definition on 336.
container.if:589: Error: duplicate definition of docker_spc_stream_connect(). Original definition on 355.
Compiling targeted container module
/usr/bin/checkmodule:  loading policy configuration from tmp/container.tmp
container.te:270:ERROR 'syntax error' at token 'init_stop' on line 9368:
init_stop(container_runtime_t)

/usr/bin/checkmodule:  error(s) encountered while parsing configuration
make[1]: *** [tmp/container.mod] Error 1
make[1]: Leaving directory `/root/container-selinux-0.1.0'
make: *** [container.pp] Error 2
[root@n7-z01-0a2a0576 container-selinux-0.1.0]# ll
total 88
-rw------- 1 root root  3953 Mar  6 09:13 container.fc
-rw------- 1 root root 13989 Mar  6 09:13 container.if
-rw------- 1 root root  3045 Mar  6 09:47 container-selinux.spec
-rw------- 1 root root 26679 Mar  6 09:47 container.te
-rw------- 1 root root 17987 Mar  6 05:48 LICENSE
-rw------- 1 root root   669 Mar  6 09:13 Makefile
-rw------- 1 root root    38 Mar  6 09:13 README.md
drwx------ 2 root root  4096 Mar  6 09:48 tmp
-rw------- 1 root root     5 Mar  6 09:13 VERSION

I expected the .bz2 file to get built so that I could then run rpmbuild -ba container-selinux.spec to build the RPM.

Do you see anything I could be doing incorrectly?

container_ro_file_t?

# cat /etc/selinux/targeted/contexts/lxc_contexts 
process = "system_u:system_r:container_t:s0"
content = "system_u:object_r:virt_var_lib_t:s0"
file = "system_u:object_r:container_file_t:s0"
ro_file="system_u:object_r:container_ro_file_t:s0"
sandbox_kvm_process = "system_u:system_r:svirt_qemu_net_t:s0"
sandbox_kvm_process = "system_u:system_r:svirt_qemu_net_t:s0"
sandbox_lxc_process = "system_u:system_r:container_t:s0"

The RHEL 8.0 lxc_contexts file contains a reference to system_u:object_r:container_ro_file_t:s0, but the type doesn't seem to be defined in this repo, nor does it seem to be used by Podman or Docker.

Maybe the type should be removed from lxc_contexts?
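
A quick way to check whether the type exists at all in the loaded policy (seinfo comes from setools-console):

seinfo -t container_ro_file_t
seinfo -t container_file_t     # for comparison, a type that is definitely defined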

Loading docker volume fails after Fedora upgrade to container-selinux.noarch 2:2.77-1.git2c57a17.fc29

Good afternoon, after upgrading to "container-selinux.noarch 2:2.77-1.git2c57a17.fc29"
I try to start my docker image, but I get the error:
/usr/bin/docker-current: Error response from daemon: SELinux relabeling of /var/lib/docker/volumes/openhab_userdata/_data is not allowed: "no such file or directory".
But the directory /var/lib/docker/volumes/openhab_userdata/_data is there.

Could this be related to the update?

Yours kindly,
Roel

delete stale branches

Dan, could you delete the fedora-1.9 branch? Makes things a little less confusing for me :D

Accessing Host IPC through a sandbox container (Pod) doesn't work with SELinux

Hi @rhatdan,

This is somewhat related to #46, but it concerns "Host IPC", in which IPC is shared with the actual host the container is running on. (Implemented in Docker by using --ipc host mode.)

The way Pods are implemented in Kubernetes is to run a "sandbox" container that "owns" the network and IPC namespaces and then have other containers in that Pod refer to that container's settings. In other words, the sandbox container will be run with --ipc host while the other containers will be run with --ipc container:<sandbox> so they share that one.

But running things this way makes the ipcs command (from busybox!) fail in the non-sandbox container, causing an AVC while trying to access specific IPC entries from the host.

For a simple reproducer:

  1. Create IPCs on the host:
        $ ipcmk -M 1M -S 10 -Q
        Shared memory id: 98304
        Message queue id: 65536
        Semaphore id: 131072
  2. Run a "sandbox" container to define a Pod, have it run on Host IPC mode so it can access the IPCs created above:
        $ docker run --name hostipc_pod --detach --ipc host busybox sleep 3600
  3. Run a separate container that shares the sandbox's IPC and try to check ipcs output there (using busybox and after #47 has been applied!):
        $ docker run --ipc container:hostipc_pod busybox ipcs

        ------ Message Queues --------
        key        msqid      owner      perms      used-bytes   messages

        ------ Shared Memory Segments --------
        key        shmid      owner      perms      bytes      nattch     status

        ------ Semaphore Arrays --------
        key        semid      owner      perms      nsems

The expectation was that the IPCs created above would show up here, but they don't; the output is empty.

This only happens when Host IPC is in use. Otherwise, when containers are sharing IPC with a sandbox, this doesn't seem to be an issue...

Checking for violations shows them in the Audit log:

        $ grep AVC /var/log/audit/audit.log
        type=AVC msg=audit(1516822236.932:1618): avc:  denied  { unix_read } for  pid=19972 comm="ipcs" key=39206617  scontext=system_u:system_r:container_t:s0:c374,c410 tcontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tclass=msgq permissive=0
        type=AVC msg=audit(1516822236.932:1619): avc:  denied  { unix_read } for  pid=19972 comm="ipcs" key=-1093557504  scontext=system_u:system_r:container_t:s0:c374,c410 tcontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tclass=shm permissive=0
        type=AVC msg=audit(1516822236.932:1620): avc:  denied  { unix_read } for  pid=19972 comm="ipcs" key=-125885918  scontext=system_u:system_r:container_t:s0:c374,c410 tcontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tclass=sem permissive=0

The expected output would be:

        $ docker run --ipc container:hostipc_pod busybox ipcs

        ------ Message Queues --------
        key        msqid      owner      perms      used-bytes   messages
        0x02563ed9 65536      5192       644        0            0

        ------ Shared Memory Segments --------
        key        shmid      owner      perms      bytes      nattch     status
        0xbed1a300 98304      5192       644        1048576    0

        ------ Semaphore Arrays --------
        key        semid      owner      perms      nsems
        0xf87f2222 131072     5192      644        10

/cc @Random-Liu since this came up in kubernetes/kubernetes#58174

NOTE: Perhaps by just testing ipcs I'm missing something; it's possible that after fixing this, ipcs will show them but they won't be usable. Perhaps there needs to be more investigation there...

syntax-error building fedora branch on f23

gawk: fatal: cannot open file `/etc/selinux/config' for reading (Permission denied)
docker.if:131: Error: duplicate definition of apache_exec(). Original definition on 277.
gawk: fatal: cannot open file `/etc/selinux/config' for reading (Permission denied)
Compiling  docker module
/usr/bin/checkmodule:  loading policy configuration from tmp/docker.tmp
docker.te:296:ERROR 'syntax error' at token 'systemd_dbus_chat_machined' on line 14602:
    systemd_dbus_chat_machined(docker_t)
#line 296
/usr/bin/checkmodule:  error(s) encountered while parsing configuration
/usr/share/selinux/devel/include/Makefile:154: recipe for target 'tmp/docker.mod' failed
make: *** [tmp/docker.mod] Error 1
error: Bad exit status from /var/tmp/rpm-tmp.EbpwOG (%build)


RPM build errors:
    Bad exit status from /var/tmp/rpm-tmp.EbpwOG (%build)

container_use_samba boolean

I'd read through some blog posts outlining how this package loosely borrows from the virt policies. I came across a scenario where I wanted to containerize a service that stores its configuration and data on a Samba share.

Would a patch that adds the gen_tunable for 'container_use_samba' be accepted here in container.te, or is that something that would be better addressed in virt.te?
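
Not an answer from the maintainers, just a rough sketch of what such a tunable could look like, mirroring virt_use_samba from virt.te. The module name and the use of container_t are illustrative, and it can be test-built locally with the selinux-policy-devel toolchain:

cat > container_samba.te <<'EOF'
policy_module(container_samba, 1.0)

gen_require(`
	type container_t;
')

# Illustrative boolean, default off, mirroring virt_use_samba.
gen_tunable(container_use_samba, false)

tunable_policy(`container_use_samba',`
	fs_manage_cifs_dirs(container_t)
	fs_manage_cifs_files(container_t)
	fs_read_cifs_symlinks(container_t)
')
EOF
make -f /usr/share/selinux/devel/Makefile container_samba.pp
semodule -i container_samba.pp
setsebool -P container_use_samba on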

Transition blocked when using docker "no-new-privileges"

Container-SELinux 2.36 doesn't seem to play nicely with no-new-privileges on CentOS7.

I've been unable to update the container-selinux version to 2.48 because the dependency chain packages are not available in the CentOS repos. Trying to build the repo itself with make install ends with a syntax error; it seems likely this is because the underlying selinux-policy version doesn't match.

It's possible this is fixed in future versions, but I haven't yet been able to test it out.

It looks like this happened before.

Turning on no_new_privs actually stopped the SELinux transition from the docker daemon type docker_t to the container type, svirt_lxc_net_t. The no_new_privs option only allows SELinux transitions from one type to another if the target type is a complete subset of the source type.

Repro Steps

  1. Create CentOS VM, I am using latest released AMI on EC2 (ami-4bf3d731)

  2. Install Docker-CE, and enable edge repo
    Checking versions:

$ rpm -q container-selinux
container-selinux-2.36-1.gitff95335.el7.noarch
$ rpm -q docker-ce
docker-ce-18.02.0.ce-1.el7.centos.x86_64
  3. Enable docker SELinux, sudo vi /etc/docker/daemon.json and add the following content:
{
	"selinux-enabled": true
}
  4. Don't forget to restart dockerd: sudo systemctl restart docker

  5. Try to run a container with --security-opt=no-new-privileges

$ sudo docker run --security-opt=no-new-privileges -it alpine sh
standard_init_linux.go:195: exec user process caused "operation not permitted"
  6. Check out the audit log
$ sudo tail -n 500 /var/log/audit/audit.log

...
type=SELINUX_ERR msg=audit(1519087517.808:2497): op=security_bounded_transition seresult=denied oldcontext=system_u:system_r:container_runtime_t:s0 newcontext=system_u:system_r:svirt_lxc_net_t:s0:c433,c559
...

Workaround

The only workaround seems to be to manually add a typebounds statement for the blocked transition.

See: https://stackoverflow.com/questions/44127247/does-anyone-know-a-workaround-for-no-new-privileges-blocking-selinux-transitions

module dockersvirt 1.0;

require {
    type container_runtime_t;
    type svirt_lxc_net_t;
    role system_r;
};

typebounds container_runtime_t svirt_lxc_net_t;
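
If you want to try that module, the usual way to compile and load a standalone .te file looks like this (a sketch; the file and module names are arbitrary):

# save the snippet above as dockersvirt.te, then:
checkmodule -M -m -o dockersvirt.mod dockersvirt.te
semodule_package -o dockersvirt.pp -m dockersvirt.mod
semodule -i dockersvirt.pp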

I don't have a proper understanding of SELinux or the entirety of the policies to know if this is a proper workaround, but it seems potentially unsafe at a glance.

If this issue is in the wrong place, please point me in the right direction. If there is another known workaround, please let me know.

AVC denied for crio on access of tmpfs (modules_object_t)

Occasionally I run into an AVC denial when running CRI-O 1.14.11 on RHEL 7.7. It occurs roughly once every N times I redeploy a host, and it persists after stopping and starting crio.service, which probably rules out issues with the order in which the (selinux-policy) RPMs are installed and the services are started.

type=AVC msg=audit(1582899355.542:22597): avc:  denied  { associate } for 
pid=24477 comm="crio" name="/" dev="tmpfs" ino=1765990 
scontext=system_u:object_r:container_file_t:s0:c576,c842
tcontext=system_u:object_r:modules_object_t:s0 
tclass=filesystem permissive=1

The SEalert report from this log-entry is:

--------------------------------------------------------------------------------

SELinux is preventing /usr/bin/crio from associate access on the filesystem /var/lib/kubelet/pods/add84e8d-44c3-462f-a243-4964c5aaebb0/volumes/kubernetes.io~secret/fluentd-elasticsearch-token-pw6db.

*****  Plugin catchall (100. confidence) suggests   **************************

If you believe that crio should be allowed associate access on the fluentd-elasticsearch-token-pw6db filesystem by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'crio' --raw | audit2allow -M my-crio
# semodule -i my-crio.pp


Additional Information:
Source Context                system_u:object_r:container_file_t:s0:c576,c842
Target Context                system_u:object_r:modules_object_t:s0
Target Objects                /var/lib/kubelet/pods/add84e8d-44c3-462f-a243-4964
                              c5aaebb0/volumes/kubernetes.io~secret/fluentd-
                              elasticsearch-token-pw6db [ filesystem ]
Source                        crio
Source Path                   /usr/bin/crio
Port                          <Unknown>
Host                          node-3
Source RPM Packages           cri-o-1.14.11-1.el7.x86_64
Target RPM Packages           
Policy RPM                    selinux-policy-3.13.1-252.el7.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Permissive
Host Name                     node-3
Platform                      Linux node-3 3.10.0-1062.el7.x86_64 #1 SMP Thu Jul
                              18 20:25:13 UTC 2019 x86_64 x86_64
Alert Count                   1
First Seen                    2020-02-28 14:15:55 UTC
Last Seen                     2020-02-28 14:15:55 UTC
Local ID                      086af805-810c-4f07-8ffc-b31689cb7d8f

Raw Audit Messages
type=AVC msg=audit(1582899355.542:22597): avc:  denied  { associate } for  pid=24477 comm="crio" name="/" dev="tmpfs" ino=1765990 scontext=system_u:object_r:container_file_t:s0:c576,c842 tcontext=system_u:object_r:modules_object_t:s0 tclass=filesystem permissive=1


type=SYSCALL msg=audit(1582899355.542:22597): arch=x86_64 syscall=lsetxattr success=yes exit=0 a0=c0011db700 a1=c000e5d6c0 a2=c001494980 a3=33 items=1 ppid=1 pid=24477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=crio exe=/usr/bin/crio subj=system_u:system_r:container_runtime_t:s0 key=(null)

type=CWD msg=audit(1582899355.542:22597): cwd=/

type=PATH msg=audit(1582899355.542:22597): item=0 name=/var/lib/kubelet/pods/add84e8d-44c3-462f-a243-4964c5aaebb0/volumes/kubernetes.io~secret/fluentd-elasticsearch-token-pw6db inode=1765990 dev=00:6d mode=041777 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:modules_object_t:s0 objtype=NORMAL cap_fp=0000000000000000 cap_fi=0000000000000000 cap_fe=0 cap_fver=0

Hash: crio,container_file_t,modules_object_t,filesystem,associate

I gathered the log data after switching SELinux to permissive mode.
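
For what it's worth, one way to check whether that secret tmpfs mount is simply mislabeled, and to reset it to whatever the loaded policy expects, is something like the following (the pod path is just the one from the report above):

# show the current SELinux label on the secret volume mount
ls -dZ /var/lib/kubelet/pods/add84e8d-44c3-462f-a243-4964c5aaebb0/volumes/kubernetes.io~secret/fluentd-elasticsearch-token-pw6db

# relabel according to the file-context rules in the loaded policy
restorecon -Rv /var/lib/kubelet/pods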

Allow process:setexec for container_t

Hello,

While deploying openstack/tripleo on an SELinux-enforcing CentOS 7, I get the following errors in audit.log:

type=AVC msg=audit(1539849661.136:2404): avc:  denied  { setexec } for  pid=58245 comm="crond" scontext=system_u:system_r:container_t:s0:c80,c233 tcontext=system_u:system_r:container_t:s0:c80,c233 tclass=process

I've tried to work out how to add the rule to this repository's container.te, but it's beyond my SELinux knowledge. Could you either explain how to put together a good pull request, or make the change yourself?
The rule should look like this:

require {
	type container_t;
	class process setexec;
}

#============= container_t ==============
allow container_t self:process setexec;

Apparently there are "shortcuts" (interface macros) in the .te file, and I don't want to mess it all up ;).
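
For reference, my guess is that inside container.te the rule would be written against the container domain attribute rather than container_t directly; a sketch only, assuming container_domain is the right target:

# hypothetical placement in container.te
allow container_domain self:process setexec;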

Thank you!

More open but not privileged domain?

I'm using Fedora Atomic Workstation:

fedora-ws-27:fedora/27/x86_64/workstation
Version: 27.20171110.n.1 (2017-11-10 12:05:52)

And previously my "devshell" container was run with --privileged, which I'm trying to get away from for obvious reasons. My daily development involves testing rpm-ostree, which uses bwrap. I am already running with --security-opt seccomp:unconfined since I obviously want to be able to use strace/gdb in my devshell, but the default SELinux policy denies a lot of the things one wants to do when creating user namespaces.

type=AVC msg=audit(1510840822.440:772): avc: denied { mount } for pid=27009 comm="bwrap" name="/" dev="tmpfs" ino=313846 scontext=system_u:system_r:container_t:s0:c47,c237 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=0

Now, things do work with --security-opt label:disable, but I actually want some of the protections of SELinux here. So, maybe something like --security-opt label:unconfined? Basically, the way I think of this is that it should have a set of permissions quite similar to a default unprivileged login shell, i.e. unconfined_t.
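
As far as I can tell, the options available today are either disabling labeling entirely or pointing the container at a different type via the label option; a sketch of both, where container_dev_t is a purely hypothetical type that would have to be defined in a local policy module first:

# works today, but removes all SELinux separation for this container
docker run --security-opt label:disable ...

# hypothetical: run under a broader, locally defined type instead
docker run --security-opt label=type:container_dev_t ...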

cannot exit from kubectl exec (openshift 3.11)

Hi, when I use the following SELinux configuration:
type: container_logreader_t

I cannot exit from a container shell. Example:

  1. oc deploy -f deployment.yaml
  2. oc exec -it sh
  3. exit

The exit hangs. The interesting part is that I do have read access to /var/log, which is what I require.

In addition, I cannot delete the containers. They hang in a terminating state. I think it is the same issue.

audit.log gives this message when I try to exit from the container's shell.

type=AVC msg=audit(1586919161.042:18459): avc:  denied  { sigchld } for  pid=15994 comm="docker-containe" scontext=system_u:system_r:container_logreader_t:s0:c4,c17 tcontext=system_u:system_r:container_runtime_t:s0 tclass=process permissive=0

Disabling SELinux solves this and everything works great; however, my environment is under heavy compliance requirements.

This may not be the correct place to file an issue. I am looking for guidance on how to bring this issue to the attention of the OpenShift maintainers. Does the issue give anyone any clues?
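
In case it helps anyone reproduce or work around this locally, the AVC above translates into a single allow rule; a sketch of a local module (not an endorsed fix), which can be compiled and loaded with checkmodule/semodule_package/semodule:

# logreader_sigchld.te -- hypothetical local workaround module
module logreader_sigchld 1.0;

require {
	type container_logreader_t;
	type container_runtime_t;
	class process sigchld;
}

allow container_logreader_t container_runtime_t:process sigchld;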

CRIU related denials using Podman

Restoring a checkpointed container with Podman needs the following CRIU-related policy changes:

allow iptables_t container_file_t:file append;
allow iptables_t container_runtime_tmpfs_t:dir read;
allow iptables_t container_t:dir ioctl;
allow iptables_t container_var_lib_t:dir read;
allow iptables_t container_var_lib_t:file append;

To make sure no network packets are in flight during checkpointing and restoring, CRIU locks the network using iptables-restore. All of these policy changes are needed so that iptables can write to the CRIU log file, create a temporary file for the iptables-restore input, and actually run the iptables-restore command.
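
Until this lands upstream, the rules above could be carried in a local module along these lines (a sketch; the module name and require block are mine):

# criu_iptables.te -- hypothetical local module carrying the rules above
module criu_iptables 1.0;

require {
	type iptables_t;
	type container_file_t;
	type container_runtime_tmpfs_t;
	type container_t;
	type container_var_lib_t;
	class file append;
	class dir { read ioctl };
}

allow iptables_t container_file_t:file append;
allow iptables_t container_runtime_tmpfs_t:dir read;
allow iptables_t container_t:dir ioctl;
allow iptables_t container_var_lib_t:dir read;
allow iptables_t container_var_lib_t:file append;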

Question about container_runtime_domain

Has it been considered to turn container_runtime_t into an attribute, container_runtime_domain, so that third-party container engines that need custom rules can still inherit the base container_runtime_t configuration?
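
Roughly what I have in mind, as a sketch only (the attribute name follows the question above, and the example rule is purely illustrative):

# declare an attribute and make container_runtime_t a member of it
attribute container_runtime_domain;
typeattribute container_runtime_t container_runtime_domain;

# existing rules could then target the attribute instead of the bare type, e.g.
allow container_runtime_domain container_var_lib_t:dir manage_dir_perms;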

build failure with commit 51001dd on rhel 7.3

Compiling targeted container module
/usr/bin/checkmodule: loading policy configuration from tmp/container.tmp
container.te:390:ERROR 'syntax error' at token 'virt_sandbox_domtrans' on line 19871:

line 390

virt_sandbox_domtrans(container_runtime_t, spc_t)

/usr/bin/checkmodule: error(s) encountered while parsing configuration
make[1]: *** [tmp/container.mod] Error 1
make[1]: Leaving directory `/home/lsm5/repositories/pkgs/docker/BUILD/docker-5759a0805380f1067386e87b64f0e27ed818be27/container-selinux-51001dd9052288a6f2356db61bcf4a3084497366'
make: *** [container.pp] Error 2
error: Bad exit status from /var/tmp/rpm-tmp.LwRSQv (%build)

RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.LwRSQv (%build)

accessing /etc/pki/ca-trust/extracted/pem files from a container

I am running Kubernetes in RHEL 7.7 using container-selinux.noarch 2:2.107-3.el7

When mounting the /etc/pki/ca-trust/extracted/pem directory (read only) I am unable to read the contents of any of the files. Running audit2allow on the denials produced the following policy suggestion:

#============= container_t ==============
allow container_t cert_t:dir read;

Can that rule be added to the main container-selinux release, or is there a better way to handle this?
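
In the meantime, one way to carry this locally is a small module built with the refpolicy devel Makefile; a sketch follows (whether file read access is also needed, beyond the dir read that audit2allow suggested, is an assumption on my part):

# container_cert.te -- hypothetical local module
policy_module(container_cert, 1.0)

gen_require(`
	type container_t;
	type cert_t;
')

# let containers list the CA directory and read the extracted PEM files
allow container_t cert_t:dir list_dir_perms;
allow container_t cert_t:file read_file_perms;

# build and load:
#   make -f /usr/share/selinux/devel/Makefile container_cert.pp
#   semodule -i container_cert.pp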

Allow container-selinux to be built on Debian

Debian uses refpolicy instead of fedora-selinux, and refpolicy includes its own container module:

https://github.com/SELinuxProject/refpolicy

As a result, container-selinux fails to build, apparently complaining that container_unit_file_t is not defined:

Compiling refpolicy container module
m4:container.te:415: Warning: modutils_domtrans_insmod(container_runtime_t) has been deprecated, please use modutils_domtrans() instead.
/usr/bin/checkmodule: loading policy configuration from tmp/container.tmp
container.te:76:ERROR 'syntax error' at token 'systemd_unit_file' on line 8702:
systemd_unit_file(container_unit_file_t)
type container_unit_file_t alias docker_unit_file_t;
/usr/bin/checkmodule: error(s) encountered while parsing configuration
make[1]: *** [/usr/share/selinux/devel/include/Makefile:162: tmp/container.mod] Error 1
make[1]: Leaving directory '/home/debian/container-selinux'
make: *** [Makefile:12: container.pp] Error 2
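
One possible way to keep both builds working would be to guard the Fedora-only interface call with an m4 ifdef, so the call is simply skipped when the interface is not defined; this is only a sketch of the idea and I have not verified it against a refpolicy build:

# in container.te (sketch): only call the interface when it is defined
ifdef(`systemd_unit_file', `
	systemd_unit_file(container_unit_file_t)
')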

Allow container to access ns_last_pid

The newest selinux-policy package for F29, F30, and rawhide labels /proc/sys/kernel/ns_last_pid with sysctl_kernel_ns_last_pid_t.

A test build of container-selinux with kernel_rw_kernel_ns_lastpid_sysctl(container_domain) enabled CRIU to restore a container with threads.

Can kernel_rw_kernel_ns_lastpid_sysctl(container_domain) be added to container-selinux?

Needed for containers/podman#2705 and containers/podman#2272

systemd-nspawn userns container with MCS constraints

I am working on confining a Fedora OS container as a user-namespaced, MCS-constrained systemd-nspawn container, and am looking for clarification or potential bug fixes around the following AVCs. Inside the container I expect to run systemd services such as systemd-networkd and systemd-resolved, and eventually more useful services such as FreeIPA, etc.

The files in /var/lib/machines/msstest are installed bare as in dnf --installroot=/var/lib/machines/msstest install ... -- not within a single *.raw or similar file container.

I have tried using container_t, container_userns_t, and the newly added container_init_t types in the -Z option of systemd-nspawn with similar results.

  1. What is the proper or best-fit SELinux type for a userns, MCS-contained OS container: container_t, container_userns_t, or the newly added container_init_t?
  2. I'd like to bind-mount space on the host into the container (in both ro and rw modes) -- is there an SELinux type that an MCS-constrained container can "share" on the host, such as httpd_content_t or var_t? I ask because I house containers on the very fast disks and store large amounts of data on slower disks on the host -- that data is also shared by other non-MCS-constrained processes on the host, such as NFS or Apache. (See the labeling sketch after this list.)
  3. systemd-nspawn and machinectl related file contexts and type enforcement rules seem to be largely absent from container-selinux -- is this intended? Or can you elaborate on the reason?
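
Regarding question 2, a sketch of how host directories are sometimes labeled for sharing into MCS-constrained containers; the paths are made up, and whether container_share_t is the right choice for the read-only case here is an assumption:

# read-only sharing into containers (container_share_t)
semanage fcontext -a -t container_share_t '/srv/data(/.*)?'
restorecon -Rv /srv/data

# read-write sharing needs container_file_t; for an MCS-constrained container
# the categories must match (here c73, matching the -Z/-L options used below)
chcon -R -t container_file_t -l s0:c73 /srv/scratch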

AVCs when using -Z system_u:system_r:container_userns_t:s0:c73

AVC avc:  denied  { write } for  pid=41646 comm="systemd-machine" name="msstest" dev="dm-1" ino=2367529 scontext=system_u:system_r:systemd_machined_t:s0 tcontext=system_u:object_r:container_file_t:s0:c73 tclass=dir permissive=1
AVC avc:  denied  { mounton } for  pid=53564 comm="(networkd)" path="/" dev="dm-1" ino=2367529 scontext=system_u:system_r:container_userns_t:s0:c73 tcontext=system_u:object_r:container_file_t:s0:c73 tclass=dir permissive=1
AVC avc:  denied  { remount } for  pid=53564 comm="(networkd)" scontext=system_u:system_r:container_userns_t:s0:c73 tcontext=system_u:object_r:fs_t:s0 tclass=filesystem permissive=1
AVC avc:  denied  { mounton } for  pid=53569 comm="(modprobe)" path="/run/systemd/unit-root/proc/sys/kernel/domainname" dev="proc" ino=1123384 scontext=system_u:system_r:container_userns_t:s0:c73 tcontext=system_u:object_r:sysctl_kernel_t:s0 tclass=file permissive=1
AVC avc:  denied  { mounton } for  pid=53571 comm="(r-launch)" path="/tmp/namespace-dev-cNaOYN/dev/pts" dev="tmpfs" ino=1120151 scontext=system_u:system_r:container_userns_t:s0:c73 tcontext=system_u:object_r:container_file_t:s0 tclass=dir permissive=1
AVC avc:  denied  { unmount } for  pid=53571 comm="(r-launch)" scontext=system_u:system_r:container_userns_t:s0:c73 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
AVC avc:  denied  { sendto } for  pid=53547 comm="systemd" path="/systemd/nspawn/notify" scontext=system_u:system_r:container_userns_t:s0:c73 tcontext=system_u:system_r:container_runtime_t:s0 tclass=unix_dgram_socket permissive=1
AVC avc:  denied  { kill } for  pid=41646 comm="systemd-machine" capability=5  scontext=system_u:system_r:systemd_machined_t:s0 tcontext=system_u:system_r:systemd_machined_t:s0 tclass=cap_userns permissive=1

The following versions are in use:
systemd-243.8-1.fc31.x86_64
container-selinux-2.129.0-2.fc31.noarch
selinux-policy-targeted-3.14.4-50.fc31.noarch

File labeling on the host

/usr/bin/systemd-nspawn    system_u:object_r:container_runtime_exec_t:s0
/var/lib/machines/msstest(/.*)?    system_u:object_r:container_file_t:s0:c73

systemd-nspawn@.service template on the host

#  SPDX-License-Identifier: LGPL-2.1+
#
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.

[Unit]
Description=Container %i
Documentation=man:systemd-nspawn(1)
PartOf=machines.target
Before=machines.target
After=network.target systemd-resolved.service
RequiresMountsFor=/var/lib/machines

[Service]
# Make sure the DeviceAllow= lines below can properly resolve the 'block-loop' expression (and others)
ExecStartPre=-/sbin/modprobe -abq tun loop dm-mod
ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot --link-journal=try-guest --network-veth -U --settings=override --machine=%i
KillMode=mixed
Type=notify
RestartForceExitStatus=133
SuccessExitStatus=133
WatchdogSec=3min
Slice=machine.slice
Delegate=yes
TasksMax=16384

# Enforce a strict device policy, similar to the one nspawn configures when it
# allocates its own scope unit. Make sure to keep these policies in sync if you
# change them!
DevicePolicy=closed
DeviceAllow=/dev/net/tun rwm
DeviceAllow=char-pts rw

# nspawn itself needs access to /dev/loop-control and /dev/loop, to implement
# the --image= option. Add these here, too.
DeviceAllow=/dev/loop-control rw
DeviceAllow=block-loop rw
DeviceAllow=block-blkext rw

# nspawn can set up LUKS encrypted loopback files, in which case it needs
# access to /dev/mapper/control and the block devices /dev/mapper/*.
DeviceAllow=/dev/mapper/control rw
DeviceAllow=block-device-mapper rw

[Install]
WantedBy=machines.target

systemd-nspawn@msstest.service.d/msstest.conf dropin on the host

[Service]
ExecStart=
ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot --link-journal=try-guest --network-macvlan=eno1 --private-users=pick --settings=override --machine=%i -L system_u:object_r:container_file_t:s0:c73 -Z system_u:system_r:container_t:s0:c73

Thank you for the container-selinux project and, in advance, for any assistance.

duplicate definitions found in selinux-policy-devel-3.13.1-102.el7

When compiling an SELinux module on CentOS 7.3.1611 which has selinux-policy-devel-3.13.1-102.el7 & container-selinux-1.12.5-14.el7 installed, I'm seeing these errors.

/usr/share/selinux/devel/include/contrib/docker.if:71: Error: duplicate definition of docker_exec_lib(). Original definition on 515.                                                                                                                                                                                                                                                                                                      
/usr/share/selinux/devel/include/contrib/docker.if:109: Error: duplicate definition of docker_read_share_files(). Original definition on 519.                                                                                                                                                                                                                                                                                             
/usr/share/selinux/devel/include/contrib/docker.if:131: Error: duplicate definition of docker_exec_share_files(). Original definition on 523.                                                                                                                                                                                                                                                                                             
/usr/share/selinux/devel/include/contrib/docker.if:149: Error: duplicate definition of docker_manage_lib_files(). Original definition on 527.                                                                                                                                                                                                                                                                                             
/usr/share/selinux/devel/include/contrib/docker.if:169: Error: duplicate definition of docker_manage_lib_dirs(). Original definition on 532.                                                                                                                                                                                                                                                                                              
/usr/share/selinux/devel/include/contrib/docker.if:205: Error: duplicate definition of docker_lib_filetrans(). Original definition on 536.                                                                                                                                                                                                                                                                                                
/usr/share/selinux/devel/include/contrib/docker.if:223: Error: duplicate definition of docker_read_pid_files(). Original definition on 540.                                                                                                                                                                                                                                                                                               
/usr/share/selinux/devel/include/contrib/docker.if:242: Error: duplicate definition of docker_systemctl(). Original definition on 544.                                                                                                                                                                                                                                                                                                    
/usr/share/selinux/devel/include/contrib/docker.if:285: Error: duplicate definition of docker_use_ptys(). Original definition on 548.                                                                                                                                                                                                                                                                                                     
/usr/share/selinux/devel/include/contrib/docker.if:336: Error: duplicate definition of docker_stream_connect(). Original definition on 552.                                                                                                                                                                                                                                                                                               
/usr/share/selinux/devel/include/contrib/docker.if:355: Error: duplicate definition of docker_spc_stream_connect(). Original definition on 556.
# lsb_release -d
Description:    CentOS Linux release 7.3.1611 (Core) 

# rpm -q container-selinux docker-selinux selinux-policy-devel
container-selinux-1.12.5-14.el7.centos.x86_64
package docker-selinux is not installed
selinux-policy-devel-3.13.1-102.el7_3.13.noarch

# rpm -qf /usr/share/selinux/devel/include/contrib/docker.if
selinux-policy-devel-3.13.1-102.el7_3.13.noarch

Unsure if these errors are fatal.

I've found #3 and https://bugzilla.redhat.com/show_bug.cgi?id=1262812#c4, though the last comment there is a bit dated, so I'm guessing the current selinux-policy-devel needs to have its docker.if removed?

"transition" denied on entrypoint with podman 1.0

Hello,

We're currently testing podman 1.0, and hit the following issue:

type=AVC msg=audit(1547545930.107:1449): avc:  denied  { transition } for  pid=69772 comm="runc:[2:INIT]" path="/usr/local/bin/dumb-init" dev="vda1" ino=2232295 scontext=system_u:system_r:unconfined_service_t:s0 tcontext=system_u:system_r:container_t:s0:c12,c116 tclass=process permissive=0

Some details:
this dumb-init is embedded in the container (meaning no volume is involved). It has the following permissions and label:

-rwxr-xr-x. root root system_u:object_r:container_file_t:s0:c724,c908 /usr/local/bin/dumb-init

And if we pass this AVC into audit2allow, we get the following output:

#============= unconfined_service_t ==============

#!!!! The file '/usr/local/bin/dumb-init' is mislabeled on your system.  
#!!!! Fix with $ restorecon -R -v /usr/local/bin/dumb-init
allow unconfined_service_t container_t:process transition;

We didn't get this issue with previous podman versions.

Any hint on what to do? I haven't seen this "transition" permission being denied before, so any help would be nice :).

Thanks!

C.
