Mounting file systems over two SSH hops

When working with files on a remote Linux system, it is useful to be able to mount whole directories as part of the local file system. This means that you can work with the remote data as if it were on the local system, which is much more seamless and convenient than fetching individual files with SCP, SFTP or similar.

Fortunately, there is a neat way to do this using sshfs. There are already a great many tutorials on how to set up sshfs, as well as the sshfs manual page, so I won't repeat unnecessary details here. Instead I want to concentrate on my problem: getting sshfs working across two ssh connections simultaneously.

The problem

Let me explain: let's say that I work on system A (localhost) and I want to mount a folder on system C. Unfortunately, I can't directly access system C from system A, but there is a third system, system B, which sits between A and C and can reach both of them:

   system A <----> system B <----> system C

The situation is made more complicated by the fact that I have no administrative control over system B. There are a few suggestions for how to get this kind of setup working in the sshfs FAQ, but none of these helped me in this situation. For example, I couldn't install sshfs on system B, so I was not able to log in, mount system C as a folder on system B and then, in a separate command on system A, mount the system B mount point on the local file system.

The whole situation was made more frustrating by the fact that I could do a normal multi-hop ssh, which logged me in and allowed command-line access to the files (see this page for other ssh tips). The command can take different usernames for each system and asks for two passwords, first for system B and then for system C:

$ ssh -t user1@systemB "ssh user2@systemC"

The solution

It turns out that the easiest way to achieve this is through two simple commands:

$ ssh -f userB@systemB -L 2222:systemC:22 -N
$ sshfs -p 2222 userC@localhost:/remote/path/ /mnt/localpath/

Looking at each of these in turn, the first opens a background (-f) connection to system B and uses it to tunnel the local port 2222 to port 22 on system C. The -f is useful if you want to use the command prompt for other commands; without it, the terminal will be unusable until you break the ssh connection. The -N prevents ssh from executing a command on the remote system, which is exactly what we want when just forwarding ports.
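Before running sshfs, it is worth checking that the tunnel actually works. A quick sanity test (assuming the tunnel command above is running, and using the same hypothetical user and host names) is to run a throwaway command through the forwarded port:

```shell
# Should prompt for userC's password and print system C's hostname,
# proving that port 2222 on localhost really reaches sshd on system C.
$ ssh -p 2222 userC@localhost hostname
```

If this fails, fix the tunnel before involving sshfs at all.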

If you are on a slow network, you may wish to include the -C option in both the ssh and sshfs commands to enable compression. Don't bother on a fast network: in the time taken to compress the data and send the smaller version, you could have transferred the uncompressed version anyway.
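With compression enabled, the two commands above become:

```shell
# -C enables compression on both the ssh tunnel and the sshfs transport
$ ssh -f -C userB@systemB -L 2222:systemC:22 -N
$ sshfs -C -p 2222 userC@localhost:/remote/path/ /mnt/localpath/
```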

The second command uses sshfs and the tunnel established by the first command to mount the file system on system C as a local path on system A. The -p option tells sshfs to use port 2222, which is forwarded through the tunnel to port 22 (the normal SSH port) on system C. This means that we can connect to localhost instead of naming system C directly and still reach the right machine.
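If the connection is at all flaky, sshfs can be asked to re-establish it automatically. A sketch using standard sshfs and ssh options (reconnect and ServerAliveInterval; the interval value here is just an example):

```shell
# reconnect: re-establish the session if the tunnel drops
# ServerAliveInterval=15: send a keepalive every 15 seconds so drops are noticed
$ sshfs -p 2222 -o reconnect,ServerAliveInterval=15 \
	userC@localhost:/remote/path/ /mnt/localpath/
```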

The beauty of this approach is that any remote folder can be mounted locally, even those that are themselves remote mount points! This makes it very versatile.


To unmount the connection, use:

$ fusermount -u /mnt/localpath/

or, as root:

# umount /mnt/localpath/

Killing the tunnel

Because the ssh tunnel was created in the background with the -f option, the easiest way to kill it is to kill the process running it, which can be found with ps:

$ ps ax | grep "ssh_command" | awk '{print $1}' | \
	xargs kill 2>/dev/null

First, ps lists all running processes. We then pipe the output to grep, which looks for "ssh_command" (which should be replaced with whichever command you used to establish the tunnel). awk then returns only the PID numbers for the matching processes, and these are finally passed to kill to terminate them. Any errors (for example, from grep matching its own process, which has already exited by the time kill runs) are redirected to /dev/null. For a simple explanation of output redirection, see this tutorial.
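An alternative that avoids the ps/grep dance entirely is to give the tunnel a control socket when you create it, and then ask ssh itself to shut it down. This sketch uses OpenSSH's -M (master mode) and -S (control socket) options together with -O exit; the socket path is arbitrary:

```shell
# Create the tunnel as a master connection with a control socket
$ ssh -f -N -M -S /tmp/tunnel-systemB.sock -L 2222:systemC:22 userB@systemB

# ... mount and work with the file system as before ...

# Cleanly tear the tunnel down via the same socket
$ ssh -S /tmp/tunnel-systemB.sock -O exit userB@systemB
```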


Published on 21st January 2012.