Solving SSHFS 'Permission Denied'

Background

I like using DigitalOcean and have spun up 100+ VMs with them, but their Private Networking model should make anyone wary, and sending traffic between nodes in plain text just isn't ideal. I'd rather get things right from the beginning than have to retrofit encryption later.

Most recently, I was setting up a Consul/Nomad cluster for running containers. Since I'm doing this on the cheap, a single point of failure for volumes isn't a huge issue. There are a few options for shared storage.

NFS works well and is pretty simple, but is not easily encrypted.

SSHFS seemed like a good, encrypted alternative:

SSHFS is a FUSE-based filesystem client for mounting remote directories over a SSH connection.

Fear The Error

I kept running into the error below, and Google is a wasteland of other people hitting sshfs permission problems without finding clear answers.

root@client:# chown -R root:root /mnt/dir-1/docker/
chown: cannot read directory '/mnt/dir-1/docker/': Permission denied

Following the steps below should help you avoid this error.

Server

Relevant paths: /etc/ssh/sshd_config and ~/.ssh/authorized_keys.

Assuming you already have PermitRootLogin set to no globally (you should, if the host is publicly exposed), you can override it for specific source IP addresses with a Match block:

# /etc/ssh/sshd_config
Match Address 10.138.171.21,10.138.171.22
    PermitRootLogin yes

Reload sshd for the change to take effect:

systemctl reload sshd

Client

Installation / Configuration

Install the sshfs package:

apt install sshfs

Add your user to the fuse group:

groupadd -f fuse; usermod -aG fuse yourusername

Add the user_allow_other option to the FUSE configuration (append rather than overwrite, in case /etc/fuse.conf already has content):

echo 'user_allow_other' >> /etc/fuse.conf
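If you provision machines with a script, an idempotent version avoids duplicating the line on re-runs. A sketch against a scratch file (point conf at /etc/fuse.conf on a real client):

```shell
# Append user_allow_other only if it isn't already there.
conf=$(mktemp)    # stand-in for /etc/fuse.conf in this sketch
grep -qx 'user_allow_other' "$conf" || echo 'user_allow_other' >> "$conf"
grep -qx 'user_allow_other' "$conf" || echo 'user_allow_other' >> "$conf"   # second run: no-op
```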

Persistent Mounting

If you’re dealing with painful Docker bind mounts, SSHing as root is easier. Write this into /etc/fstab, replacing the server address, mountpoint, and path to the SSH key as needed:

root@<server-ip>:/mnt/ssd-1 /mnt/ssd-1  fuse.sshfs _netdev,users,idmap=user,IdentityFile=/root/.ssh/sshfs,allow_other,default_permissions,reconnect 0 0
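The entry uses fstab's standard six-field layout; a quick way to see how the pieces map (the <server-ip> placeholder stands in for your server's actual address):

```shell
# Split the fstab entry above into its six standard fields.
# <server-ip> is a placeholder, not a real hostname.
entry='root@<server-ip>:/mnt/ssd-1 /mnt/ssd-1 fuse.sshfs _netdev,users,idmap=user,IdentityFile=/root/.ssh/sshfs,allow_other,default_permissions,reconnect 0 0'
printf '%s\n' "$entry" | awk '{
    printf "device:     %s\nmountpoint: %s\ntype:       %s\noptions:    %s\ndump/pass:  %s %s\n",
           $1, $2, $3, $4, $5, $6
}'
```

Of the options, _netdev delays mounting until the network is up, reconnect re-establishes the session if the SSH connection drops, and allow_other with default_permissions lets other local users access the mount while still having the kernel enforce permission bits.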

Test

Unmount and remount anything in /etc/fstab:

umount /mnt/dir-1 && mount -av

Throughput

I tested throughput following this example, but with a crazy blocksize parameter (/shrug).

Assuming mountpoint of /mnt/dir-1/:

time sh -c 'dd if=/dev/zero of=/mnt/dir-1/dd-test.file bs=100k count=20k conv=fdatasync && sync' && rm -f '/mnt/dir-1/dd-test.file'

Output:

20480+0 records in
20480+0 records out
2097152000 bytes (2.1 GB, 2.0 GiB) copied, 40.8501 s, 51.3 MB/s

real	0m40.880s
user	0m0.064s
sys	0m2.741s
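As a sanity check, dd's numbers are just arithmetic: bs=100k times count=20k gives the byte total, and bytes divided by elapsed seconds gives the rate (decimal megabytes, as dd reports):

```shell
# Reproduce dd's reported totals from its parameters.
awk 'BEGIN {
    bytes = 100 * 1024 * 20 * 1024          # bs=100k * count=20k
    printf "%d bytes\n", bytes              # 2097152000
    printf "%.1f MB/s\n", bytes / 40.8501 / 1000000
}'
```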

Not bad.
