Snippets of learning.
Error Accessing Mounted USB Drive
I was trying to set up an MLflow server, with a volume pointing to a mounted external USB HDD:
$ mount | grep DC67
/dev/sdb2 on /mnt/DC67-0ADB type exfat (rw,relatime,uid=1000,gid=1000,fmask=0022,dmask=0022,iocharset=utf8,errors=remount-ro)
With a docker-compose file from here, looking specifically at:
services:
mlflow-db:
...
volumes:
- /mnt/DC67-0ADB/mlflow/db:/var/lib/postgresql
Trying to launch the stack (via Portainer), I get the error:
2025-10-19 00:10:15 Error
Failed to deploy a stack: compose up operation failed: Error response from daemon: error while creating mount source path '/mnt/DC67-0ADB/mlflow/db': chown /mnt/DC67-0ADB/mlflow/db: operation not permitted
What’s Happening
Docker can have mount-permission issues with external drives (especially those formatted exFAT, NTFS, or FAT32). In this case, Docker wants to verify (or change) ownership of the /mnt/DC67-0ADB/* folder, and the Linux host is saying that isn't permitted.
Verify Issue
$ mount | grep DC67
/dev/sdb2 on /mnt/DC67-0ADB type exfat (rw,relatime,uid=1000,gid=1000,fmask=0022,dmask=0022,iocharset=utf8,errors=remount-ro)
The drive is mounted with uid/gid=1000 and is exFAT:
- exFAT doesn't support UNIX permissions (or ownership, or symlinks)
- Postgres expects to set ownership and permissions on its data directory (/var/lib/postgresql/data, which via the volume bind lives on the drive)
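As a side note, findmnt gives the same filesystem check in one line (the path below uses / so the snippet runs anywhere; substitute /mnt/DC67-0ADB on the real host):

```shell
# Print the filesystem type and mount options backing a path.
# / is a stand-in here; point -T at /mnt/DC67-0ADB to check the USB drive.
findmnt -T / -n -o FSTYPE,OPTIONS
```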
Solutions
- Just move the Postgres DB from the external drive onto the host drive (in this case, a 12GB VM disk on a 60GB Proxmox instance, backed by a 256GB NVMe SSD). This may actually be okay, as the Postgres data will likely stay under 1GB forever. i.e.:
mlflow-db:
...
volumes:
- /home/b/homelab/mlflow_data/db:/var/lib/postgresql
Which may actually be consistent with the other stack-file data that I keep on the NVMe drive (e.g.
b@vm-portainer:~/homelab$ ls
caddy_config caddy_data heimdall_config jellyfin_config plex_config
- Reformat the drive to ext4 (a Linux-native filesystem). This may actually be the right thing to do, since that drive will presumably always be holding stuff on this homelab. But it would require transferring ~2TB of data off and back, plus re-mounting.
- Gamble and run the database as the host user that owns the mount (uid/gid 1000), i.e.:
mlflow-db:
image: postgres:latest
user: "1000:1000"
This may actually lead to data corruption after reboots.
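For reference, option (3) spelled out as a full service block: the image, user override, and volume come from the snippets above, while the environment variables are illustrative placeholders.

```yaml
services:
  mlflow-db:
    image: postgres:latest
    # Run as the uid/gid that owns the exFAT mount (see the mount output above)
    user: "1000:1000"
    environment:
      POSTGRES_USER: mlflow        # placeholder
      POSTGRES_PASSWORD: change-me # placeholder
    volumes:
      - /mnt/DC67-0ADB/mlflow/db:/var/lib/postgresql
```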
For today, I’m not a gambler: Option (1)
Still Fails: More Permissions
Option 1 still fails at first.
The default uid of postgres inside the official image is 999, so we need to: (a) make the folder — Docker would do this, but with the wrong ownership; (b) change ownership to the postgres uid=999; (c) set permissions so the directory is traversable:
sudo mkdir -p /home/b/homelab/mlflow_data/db
sudo chown -R 999:999 /home/b/homelab/mlflow_data/db
sudo chmod -R 755 /home/b/homelab/mlflow_data/db
Then relaunch the stack.
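The directory-prep steps above can be rehearsed in a throwaway location first (illustrative temp path; on the real host you need the sudo commands shown, with 999 being the postgres uid inside the official image):

```shell
# Rehearse the prepare-then-mount pattern in a temp dir. No sudo needed here
# because we own the parent; on the real host use the sudo commands above.
DB_DIR="$(mktemp -d)/mlflow_data/db"
mkdir -p "$DB_DIR"
chmod -R 755 "$DB_DIR"
stat -c '%a' "$DB_DIR"   # expect 755
```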