VMware ESXi is a powerful hypervisor. (If you don’t know what that means, please read this primer before continuing.) VMware is best known for vSphere, a suite of integrated products that includes the ESXi hypervisor. Professional IT shops buy vSphere because it makes redundancy, uptime, and upgrades much easier than working with standalone ESXi. Despite its power, however, vSphere has downsides. Because it is business critical, access is restricted to fewer personnel, and because it is expensive, shops rarely have capacity to spare for special projects. I assume reasons like these are why VMware makes a free license of ESXi available for standalone use. Isn’t that cool? In this guide, I’m going to take two different hosts running the free version of ESXi and set up a datastore they can access simultaneously. This is a great way to share things like ISO files and scripts!
- Have an Ubuntu server.
- Have two or more different ESXi hosts.
- Have a network where the ESXi hosts can reach the Ubuntu server.
- Have fresh coffee and make sure it is hot.
- Have a donut if today is Wednesday.
Ubuntu Setup Procedure
When NOT to use Ubuntu/NFS
Don’t use Ubuntu if you have a dedicated storage device like a NAS or SAN. Instead, use that device’s built-in NFS feature.
If the network speed between ESXi and the storage is sufficiently fast, iSCSI may be a more performant solution than NFS. If ESXi would be the only client of NFS, consider switching to iSCSI.
Install the NFS server from the Ubuntu package repository:
you@ubuntu:~$ sudo apt-get install nfs-kernel-server
After installation, modify the exports file.
you@ubuntu:~$ sudo nano /etc/exports
# Takeoff Technical
# *) Permit the address of each ESXi server (Ex: 192.168.10.20)
# *) rw = read/write
# *) sec = Security option - using sys so NFSv4 authenticates as NFSv3 would
# *) secure = only allow client requests that use port numbers below 1024
# *) no_subtree_check = optimization for reliability - reduces server workload
# *) async = a performance improvement with a small risk if the server restarts uncleanly
# *) no_root_squash = required by VMware per references
/srv/nfs/esxi-share 192.168.10.20(rw,sec=sys,secure,no_subtree_check,async,no_root_squash)
/srv/nfs/esxi-share 192.168.10.30(rw,sec=sys,secure,no_subtree_check,async,no_root_squash)
Create the directory referenced by the exports file:

you@ubuntu:~$ sudo mkdir -p /srv/nfs/esxi-share
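With the exports file saved and the directory in place, the export table can be applied and inspected. This is a sketch assuming the example share path and client addresses used above:

```shell
# Re-read /etc/exports and apply any changes without restarting the service
sudo exportfs -ra

# List the active exports with their effective options to confirm both
# ESXi hosts (192.168.10.20 and 192.168.10.30) are permitted
sudo exportfs -v
```

If `exportfs -v` does not show your share, check `/etc/exports` for typos before continuing.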
The specific language in VMware’s NFS Server Configuration documentation states, “Ensure that the NFS volume is exported using NFS over TCP.” To meet this requirement, I will be using NFSv4, which defaults to TCP. The documentation also states, “The NAS server must not provide both protocol versions for the same share.” As such, I will be disabling NFS versions 2 and 3 on Ubuntu, which in this guide plays the role of the “NAS” that the VMware documentation assumes we have.
you@ubuntu:~$ sudo nano /etc/default/nfs-kernel-server
# Number of servers to start up
RPCNFSDCOUNT=8
# Runtime priority of server (see nice(1))
RPCNFSDPRIORITY=0
# Options for rpc.mountd.
# If you have a port-based firewall, you might want to set up
# a fixed port here using the --port option. For more information,
# see rpc.mountd(8) or http://wiki.debian.org/SecuringNFS
# To disable NFSv4 on the server, specify '--no-nfs-version 4' here
RPCMOUNTDOPTS="--manage-gids -N 2 -N 3"
# Do you want to start the svcgssd daemon? It is only required for Kerberos
# exports. Valid alternatives are "yes" and "no"; the default is "no".
NEED_SVCGSSD=""
# Options for rpc.svcgssd.
RPCSVCGSSDOPTS=""
# Options for rpc.nfsd (see rpc.nfsd(8))
# The address after -H is the network interface on this host
RPCNFSDOPTS="-N 2 -N 3 -H 192.168.10.10"
In the above configuration, you will note I have provided the NFS daemon with a specific IP address for binding (-H 192.168.10.10). With this setting, the NFS daemon will only listen for connections on that specific network interface. Given the security vulnerabilities of NFS, I highly recommend you architect the server similarly. Use a dedicated, isolated network explicitly for the NFS traffic between the VMware ESXi hosts and the NFS server.
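If the server runs ufw, you may also want to restrict NFS traffic at the firewall. The rules below are a sketch assuming the example addresses from this guide; because NFSv4 multiplexes everything over a single connection, only TCP port 2049 needs to be opened:

```shell
# Allow each ESXi host to reach nfsd on TCP port 2049; with NFSv4 no
# other ports are needed since mount and lock traffic are in-protocol
sudo ufw allow from 192.168.10.20 to any port 2049 proto tcp
sudo ufw allow from 192.168.10.30 to any port 2049 proto tcp

# Everything else stays blocked by the default deny policy
sudo ufw enable
sudo ufw status numbered
```

Double-check that an SSH allow rule exists before enabling ufw, or you may lock yourself out of the server.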
you@ubuntu:~$ sudo nano /etc/default/nfs-common
# If you do not set values for the NEED_ options, they will be attempted
# autodetected; this should be sufficient for most people. Valid alternatives
# for the NEED_ options are "yes" and "no".
# Do you want to start the statd daemon? It is not needed for NFSv4.
NEED_STATD="no"
# Options for rpc.statd.
# Should rpc.statd listen on a specific port? This is especially useful
# when you have a port-based firewall. To use a fixed port, set this
# variable to a statd argument like: "--port 4000 --outgoing-port 4001".
# For more information, see rpc.statd(8) or http://wiki.debian.org/SecuringNFS
STATDOPTS=
# Do you want to start the idmapd daemon? It is only needed for NFSv4.
NEED_IDMAPD="yes"
# Do you want to start the gssd daemon? It is required for Kerberos mounts.
NEED_GSSD=
Start the service and confirm it is operational before continuing.
you@ubuntu:~$ sudo systemctl start nfs-kernel-server.service
you@ubuntu:~$ systemctl list-units --all --type=service --no-pager | grep nfs-server
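With the service running, you can confirm that only NFSv4 is being offered, which is what VMware requires for the share. This check assumes your kernel exposes the standard nfsd proc interface:

```shell
# Each token is a protocol version; output like "-2 -3 +4 +4.1 +4.2"
# means versions 2 and 3 are disabled while version 4 is served
sudo cat /proc/fs/nfsd/versions

# Confirm nfsd is listening on TCP port 2049 on the dedicated
# storage interface only
ss -tln | grep 2049
```

If versions 2 or 3 still show as enabled, re-check the -N flags in /etc/default/nfs-kernel-server and restart the service.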
Finally, reboot the server to confirm everything comes back up cleanly:

you@ubuntu:~$ sudo shutdown -r now
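Before moving on to ESXi, it can save troubleshooting time to test the export from any Linux client on the storage network. The mount point below is hypothetical; the server address and share path are the examples from this guide:

```shell
# Mount the share explicitly as NFSv4 from a test client
sudo mkdir -p /mnt/esxi-share-test
sudo mount -t nfs4 192.168.10.10:/srv/nfs/esxi-share /mnt/esxi-share-test

# Write a test file, read it back, then clean up and unmount
echo "hello from nfs" | sudo tee /mnt/esxi-share-test/nfs-test.txt
cat /mnt/esxi-share-test/nfs-test.txt
sudo rm /mnt/esxi-share-test/nfs-test.txt
sudo umount /mnt/esxi-share-test
```

If this mount fails, fix the server side first; the ESXi wizard gives far less diagnostic detail than a Linux client will.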
VMware Attach Procedure
Mounting the NFS datastore is pretty straightforward using the provided wizard. View each of the screenshots below for more information.
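If you prefer the command line over the wizard, the same mount can be made from the ESXi shell. This is a sketch assuming the example server address and share path from this guide; “esxi-share” is a hypothetical datastore label. Note that ESXi speaks NFS 3 and NFS 4.1, so the 4.1 flavor is the one that matches our TCP-only, v4-only server:

```shell
# Run in the ESXi shell (via SSH) on each host
esxcli storage nfs41 add --hosts=192.168.10.10 --share=/srv/nfs/esxi-share --volume-name=esxi-share

# Confirm the datastore is mounted and accessible
esxcli storage nfs41 list
```

Repeat on the second host so both see the same datastore simultaneously.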
NFS is a great way to share data between ESXi hosts. I hope this article has helped!