Investigating Podman, NFS, and SELinux on Uyuni

April 3, 2026

After successfully bootstrapping a containerized Uyuni server and a Leap 15.6 client locally, my immediate focus shifted to the underlying storage layer. During recent discussions with the project maintainers, a specific infrastructure constraint was highlighted: the historical failure of running Uyuni on Podman with NFS-backed storage due to strict SELinux policy conflicts. I needed to reconstruct this architectural conflict in a controlled laboratory environment.

Before orchestrating complex distributed storage engines on Kubernetes, I am utilizing this local Podman deployment to establish my troubleshooting methodology. The objective is to deliberately trigger the intersection of NFS filesystem locking requirements and SELinux context denials, capture the raw logs, and dissect exactly how the host kernel, security modules, and container runtime fail to communicate under real application load.

This page serves as my live scratchpad. I will continuously update it with raw logs, mount configurations, and engineering findings during the investigation.

What has been done so far:

Phase 1: Environment recon and SELinux baseline

Before introducing network storage, it is critical to map the baseline security environment. The containerized Uyuni server utilizes Podman volumes to maintain state across reboots, specifically for PostgreSQL database files, the RPM package cache, and localized configuration files. In a production-grade openSUSE deployment, these volumes are strictly governed by SELinux enforcing mode.

To trigger the storage incompatibility, the execution plan is:

  1. Identify the active local volumes driving the current stable deployment.
  2. Provision a local NFS export on the host machine.
  3. Intercept the container's mount points, forcing Podman to route all database and package synchronization I/O through the network filesystem protocol.
  4. Execute a high-volume repository sync to observe the exact moment SELinux Mandatory Access Control (MAC) intercepts and denies the container's rapid write operations across the network boundary.

Architecture: Simulating the network hop

To ensure this test accurately reflects production constraints, I am explicitly avoiding loopback mounts. I have partitioned my laboratory into two distinct entities using KVM/libvirt:

  1. Storage backend (Host): An Ubuntu 24.04 LTS bare-metal host running nfs-kernel-server. This acts as the external NAS/StorageClass proxy. The export is configured with no_root_squash to accommodate containerized UID mapping across the network (a sketch of the export entry follows this list).
  2. Compute node (VM): An openSUSE Tumbleweed virtual machine running the containerized Uyuni server via Podman, with SELinux actively enforcing security contexts.
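
For reference, a minimal export entry on the Ubuntu host looks like the following sketch (the subnet is libvirt's default for virbr0; sync is an assumed option, no_root_squash is the one called out above):

# /etc/exports on the storage backend
/home/yemi/vms/uyuni-nfs-share 192.168.122.0/24(rw,sync,no_root_squash)

# re-export without restarting nfs-kernel-server
exportfs -ra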

By separating the compute node (Podman/Uyuni) and the storage array (NFS) across a standard virbr0 network bridge, I can properly simulate the I/O latency and lock-state arbitration delays, while also enforcing the strict network boundary that can trigger SELinux context denials in distributed environments.

Identifying the state vectors and security contexts

Running podman volume ls inside the Tumbleweed VM reveals the storage topology established by the mgradm deployment utility. The state is deliberately fractured across multiple local volumes to isolate configurations (etc-salt, etc-tomcat), package payloads (srv-susemanager), and database records.

The primary target for this investigation is the var-pgsql volume.
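
Podman can resolve a volume's backing path directly, which is the path the later remount hijacks. A quick sketch (the printed mountpoint is the typical rootful default, shown here as an assumption):

podman volume inspect var-pgsql --format '{{ .Mountpoint }}'
# typically /var/lib/containers/storage/volumes/var-pgsql/_data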

Integrating this volume over NFS presents a dual-layered engineering challenge:

  1. The application layer (POSIX Locks): PostgreSQL is notoriously unforgiving regarding storage latency and POSIX file locking (flock / fcntl). When the engine attempts to write to the Write-Ahead Log (WAL), it expects the underlying filesystem to guarantee that operation immediately.
  2. The security layer (SELinux): Podman normally stamps container volumes with a container_file_t SELinux context label to enforce MAC boundaries. NFS mounts do not persist SELinux extended attributes on the server side by default, which changes how the kernel assigns security contexts to files accessed across the network boundary.

By migrating the data residing in the local var-pgsql volume to the /mnt/nfs-uyuni network share and re-mapping the container, I force the architecture to handle POSIX locking over NFSv4 while exposing all container filesystem operations to SELinux MAC enforcement under a different security context than a local volume would carry.

Phase 1 finding: SELinux context baseline

podman inspect uyuni-db shows SecurityOpt: [], confirming mgradm launched the container with no SELinux volume labeling instructions. A full scan of journalctl -u uyuni-db.service contains no relabeling events or errors around container startup. Podman detects the underlying NFS filesystem type and skips the setxattr relabeling step entirely, a known behavior, leaving all files under /var/lib/pgsql/data at their default system_u:object_r:nfs_t:s0 context. The nfs_t label is not the result of a failed relabeling attempt; Podman never attempted one. PostgreSQL operates under this context without issue, but whether Tomcat and Taskomatic can create new files across this boundary under their respective SELinux process domains is what the next phase will expose.
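
For reproducibility, the checks behind this finding are a pair of one-liners (sketched here; the ls output is condensed):

podman inspect uyuni-db --format '{{ .HostConfig.SecurityOpt }}'
# []

podman exec uyuni-db ls -dZ /var/lib/pgsql/data
# system_u:object_r:nfs_t:s0 /var/lib/pgsql/data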

Generating application-level I/O load

It is insufficient to test the database engine with synthetic SQL queries. Even if modern NFSv4.2 successfully arbitrates the resulting lock requests, such queries bypass the heavy application stack.

When the Uyuni architecture (Tomcat, Taskomatic) executes a sustained I/O operation, such as downloading, expanding, and parsing massive XML repository metadata, the JVM places immense pressure on the database connection pool. Simultaneously, the kernel's SELinux module evaluates every file creation and write event (open() with O_CREAT, mkdir(), write()) against the allowed container security contexts.

To properly simulate the failure state documented by the maintainers, the next step is to trigger a high-volume metadata synchronization from a public repository. This forces the entire upstream-to-downstream architecture to negotiate the network storage layer, guaranteeing we expose the exact SELinux MAC enforcement denials or database lock timeouts that plague these specific deployments.
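
The sync is triggered from the Web UI, but an equivalent CLI path exists; a sketch, assuming a hypothetical channel label of tumbleweed-oss:

# run the sync tool inside the server container
mgrctl exec -- spacewalk-repo-sync --channel tumbleweed-oss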

Phase 2: SELinux process domain analysis

With the NFS volume mounted and the nfs_t context confirmed, the next step was to map the SELinux process domains of every component in the stack. PostgreSQL (PID 1 inside uyuni-db) runs as system_u:system_r:container_t:s0:c708,c978. Tomcat, the search daemon, and Taskomatic (PIDs 313, 324, and 1788 respectively inside uyuni-server) all run as system_u:system_r:container_init_t:s0:c518,c720. These are distinct domains with different MCS category pairs.
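
The domains above were read from inside each container; a sketch of the enumeration, assuming procps is present in the images:

# PID 1 inside uyuni-db (postgres)
podman exec uyuni-db ps -o pid,label,comm -p 1

# Tomcat, search, and Taskomatic inside uyuni-server
podman exec uyuni-server ps -eo pid,label,comm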

Querying the active SELinux policy for container_init_t access to nfs_t files:

sesearch --allow --source container_init_t --target nfs_t --class file

The policy grants container_init_t full file permissions on nfs_t objects, including create, write, append, rename, and unlink. However, every rule is conditionally gated:

allow container_domain nfs_t:file { append create getattr ioctl link lock open read rename setattr unlink watch watch_reads write }; [ virt_use_nfs ]:True

The entire permission set only applies when the virt_use_nfs SELinux boolean is enabled. Checking its state:

getsebool virt_use_nfs
# virt_use_nfs --> on

semanage boolean -l | grep virt_use_nfs
# virt_use_nfs  (on, on)  Allow virt to use nfs

Both the current and default values are on on this Tumbleweed host. This means Tomcat and Taskomatic have full MAC-permitted write access to nfs_t files under the current configuration. A repository sync triggered now would not produce SELinux denials.

A further complication emerged during this investigation. Running getenforce on the Tumbleweed host revealed SELinux is in Permissive mode, set during the initial VM installation. In Permissive mode, the kernel logs AVC denials but never enforces them. Checking the audit log confirms 23 denial patterns logged since boot, every one marked permissive=1. Under enforcing mode, these would be hard blocks. This means the current environment cannot reproduce the maintainers' failure condition regardless of how virt_use_nfs is configured. To properly trigger the SELinux MAC denials cbosdo documented, a fresh Tumbleweed VM with SELinux set to Enforcing from installation is required. That is the next infrastructure step.
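
Both facts are quick to verify; a minimal sketch:

getenforce
# Permissive

# count AVC events since boot that were logged but not enforced
ausearch -m avc -ts boot | grep -c 'permissive=1'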

Phase 3: Fresh VM with SELinux Enforcing

The previous environment had SELinux in Permissive mode. A fresh openSUSE Tumbleweed VM was provisioned via virt-install with SELinux set to Enforcing during the YaST installation. getenforce on first boot confirmed the mode:

Enforcing

Uyuni was deployed fresh on this VM using mgradm, with some post-install changes applied: firewall ports 80, 443, 4505, and 4506 were opened, and the podman1 bridge interface was added to the trusted firewalld zone to unblock inter-container traffic. These changes are incidental to the investigation; they mostly address initial issues I ran into while attempting to hijack the DB storage point.
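
For completeness, the post-install changes amounted to something like:

# open the Uyuni service ports
firewall-cmd --permanent --add-port={80,443,4505,4506}/tcp

# trust the Podman bridge so inter-container traffic is not filtered
firewall-cmd --permanent --zone=trusted --add-interface=podman1
firewall-cmd --reload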

With Uyuni running cleanly, both uyuni-server and uyuni-db were stopped. The existing PostgreSQL data in the local var-pgsql volume was copied to the NFS share at /mnt/nfs-uyuni, which is backed by the host export at 192.168.122.1:/home/yemi/vms/uyuni-nfs-share. The local volume was then deleted and recreated as a bind mount over the NFS path:

podman volume create \
  --driver local \
  --opt type=none \
  --opt o=bind \
  --opt device=/mnt/nfs-uyuni \
  var-pgsql
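
Binding over a host-managed mount keeps the NFS options under the host's control. For the record, Podman's local driver can also mount the export itself; an untested alternative sketch:

podman volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.122.1,rw,vers=4.2 \
  --opt device=:/home/yemi/vms/uyuni-nfs-share \
  var-pgsql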

After starting uyuni-db, the mount was confirmed inside the container:

192.168.122.1:/home/yemi/vms/uyuni-nfs-share on /var/lib/pgsql/data type nfs4 (rw,relatime,vers=4.2,...)

All files under /var/lib/pgsql/data carry system_u:object_r:nfs_t:s0, consistent with the previous environment. Podman skips setxattr relabeling on NFS-backed volumes, so no relabeling was attempted.

Phase 3 finding: Baseline under Enforcing mode

With the NFS volume live and uyuni-server reporting healthy, a repository sync was triggered from the UI against a channel backed by http://download.opensuse.org/tumbleweed/repo/oss/. While the sync ran, NFS write activity was confirmed on the host side: the pg_wal and pg_logical directories under /home/yemi/vms/uyuni-nfs-share/ showed timestamps updating in real time, proof that PostgreSQL was committing write-ahead log entries across the network boundary.
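
The host-side check was nothing more elaborate than watching timestamps under the export, along these lines:

watch -n 2 'ls -lt /home/yemi/vms/uyuni-nfs-share/pg_wal | head'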

Checking the audit log on the VM during the sync:

ausearch -m avc --start today | grep nfs
# <no matches>

cat /var/log/audit/audit.log | grep denied
type=AVC msg=audit(1775518120.183:739): avc:  denied  { module_request } for  pid=15780 comm="ss" kmod="net-pf-16-proto-4-type-2" 
scontext=system_u:system_r:container_init_t:s0:c713,c846 tcontext=system_u:system_r:kernel_t:s0 tclass=system permissive=0
type=AVC msg=audit(1775518120.183:740): avc:  denied  { module_request } for  pid=15780 comm="ss" kmod="net-pf-16-proto-4-type-2" 
scontext=system_u:system_r:container_init_t:s0:c713,c846 tcontext=system_u:system_r:kernel_t:s0 tclass=system permissive=0

The only AVC denials present were unrelated to NFS: two module_request denials for net-pf-16-proto-4-type-2 from container_init_t, generated when ss inside uyuni-server attempted to load a kernel module. This is expected container behavior and has no bearing on storage access.

virt_use_nfs remains at its Tumbleweed default of on:

virt_use_nfs  (on, on)  Allow virt to use nfs

With virt_use_nfs=on and SELinux in Enforcing mode, the full Uyuni stack, including Tomcat, Taskomatic, and PostgreSQL, operates over NFS without a single MAC enforcement denial. This is the confirmed baseline. The next step is to flip virt_use_nfs off via setsebool and repeat the identical repository sync, forcing the kernel to enforce the policy gate and surface avc: denied entries against nfs_t access.

Phase 4: Triggering real SELinux MAC enforcement

With the baseline confirmed, virt_use_nfs was flipped off at runtime:

setsebool virt_use_nfs off
getsebool virt_use_nfs
# virt_use_nfs --> off

semanage boolean -l confirms this is a runtime-only change, the default remains on:

virt_use_nfs  (off, on)  Allow virt to use nfs
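
The runtime scope is deliberate; persisting the change across reboots would need the -P flag:

# not done here, to keep the change easily reversible
setsebool -P virt_use_nfs off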

At this point uyuni-db was not running, having been taken down by disk exhaustion during the overnight repository sync. Attempting to start it via systemd immediately produced the denial:

Apr 07 15:59:43 uyuni.local.lan uyuni-db[214222]: chmod: changing permissions of '/var/lib/pgsql/data': Permission denied
Apr 07 15:59:43 uyuni.local.lan uyuni-db[214222]: find: '/var/lib/pgsql/data': Permission denied

The container startup script runs chmod and find against the PostgreSQL data directory on startup. With virt_use_nfs=off, the kernel's SELinux module intercepts both operations before they reach the NFS layer. The audit log captures the hard denials:

type=AVC msg=audit(1775573886.465:10022): avc:  denied  { setattr } for  pid=209316 comm="chmod" name="/" dev="0:72" ino=12622640 
scontext=system_u:system_r:container_t:s0:c69,c470 tcontext=system_u:object_r:nfs_t:s0 tclass=dir permissive=0

type=AVC msg=audit(1775573888.658:10042): avc:  denied  { setattr } for  pid=209439 comm="chmod" name="/" dev="0:72" ino=12622640 
scontext=system_u:system_r:container_t:s0:c516,c810 tcontext=system_u:object_r:nfs_t:s0 tclass=dir permissive=0

Every field tells the story. The subject process domain is container_t, the domain PostgreSQL runs under inside uyuni-db. The target object carries the nfs_t label, which is what the kernel assigns to all files on the NFS mount since NFS does not persist SELinux xattrs server-side. The denied operation is setattr on a dir object. And permissive=0 confirms this is a hard enforcement block, not a logged-only warning. The container restart loop produced identical denials every two seconds across dozens of attempts before systemd gave up.
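
The boolean linkage can be confirmed straight from the denials themselves; audit2why maps an AVC back to the policy construct that would permit it:

ausearch -m avc -ts recent | audit2why
# expected to report that the AVC can be allowed by enabling the
# virt_use_nfs boolean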

Now the question is: is this the failure condition cbosdo described? The virt_use_nfs boolean is the exact policy gate controlling whether container_t and container_init_t can exercise setattr, write, create, and related permissions against nfs_t objects. With it off, the entire permission set collapses and uyuni-db cannot initialize its data directory, leaving the database unreachable until the boolean is flipped back on.