The Mounts That Wouldn’t Come Back
Date: 2025-08-30
Duration: About 20 minutes
Issue: NAS mounts dead after network outage
Root Cause: Stale mounts, duplicate fstab entries, wrong SMB version
The Situation
Network went down. Happens sometimes. NAS mounts dropped. Also happens.
Network came back. NAS mounts didn’t.
Tried mount -a:
mount error(16): Device or resource busy
mount error(16): Device or resource busy
mount error(16): Device or resource busy
mount error(2): No such file or directory
mount error(16): Device or resource busy
... (repeat for 12 more shares)
Every single CIFS mount was broken.
The First Problem: Stale Mounts
The mount points were “busy” because the old mounts were still registered even though the network connection was gone.
sudo umount -a -t cifs -l
The -l flag does a lazy unmount — detaches the filesystem immediately even if busy. Cleared the stale handles.
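Before (or after) forcing the detach, it helps to see what the kernel still thinks is mounted. A quick sketch, assuming a typical Linux box:

```shell
# /proc/mounts lists what the kernel still considers mounted. Reading it
# never contacts the NAS, so it stays responsive even when the mounts are dead.
grep cifs /proc/mounts || echo "no CIFS mounts registered"

# Detach them all: -a = every mount, -t cifs = CIFS only,
# -l = lazy (detach from the tree now, clean up once nothing holds them open)
sudo umount -a -t cifs -l
```

Unlike `ls` on a dead mount point, reading `/proc/mounts` cannot hang, which is why it is the safer first move here.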
Tried mount -a again:
mount error(2): No such file or directory
Different error. Progress.
The Second Problem: Duplicate Entries
Checked the fstab:
grep cifs /etc/fstab
Twelve shares. Twenty-two entries.
Most shares were listed twice: once without _netdev, once with. When mount -a reached the second entry for an already-mounted target, it failed with “device busy”.
//10.42.0.10/Backups /mnt/nas/backups cifs ... 0 0
//10.42.0.10/downloads /mnt/nas/downloads cifs ... 0 0
...
//10.42.0.10/Backups /mnt/nas/backups cifs ..._netdev 0 0
//10.42.0.10/downloads /mnt/nas/downloads cifs ..._netdev 0 0
Classic copy-paste drift over time.
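Duplicates like these are easy to spot mechanically. A small awk pass flags any mount point that appears more than once; the sample below recreates the problem in a throwaway file, but pointing the same awk at /etc/fstab works identically:

```shell
# Recreate the problem in a throwaway file: one share listed twice.
sample=$(mktemp)
cat > "$sample" <<'EOF'
//10.42.0.10/Backups   /mnt/nas/backups   cifs defaults         0 0
//10.42.0.10/downloads /mnt/nas/downloads cifs defaults         0 0
//10.42.0.10/Backups   /mnt/nas/backups   cifs defaults,_netdev 0 0
EOF

# Field 2 of each non-comment fstab line is the mount point; count them
# and report any that occur more than once.
awk '$1 !~ /^#/ && NF >= 2 { seen[$2]++ }
     END { for (mp in seen) if (seen[mp] > 1) print mp, "listed", seen[mp], "times" }' "$sample"
# -> /mnt/nas/backups listed 2 times
```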
The Third Problem: Ancient SMB Version
All the entries used vers=2.0:
vers=2.0
The NAS was running SMB 3.0 and would no longer negotiate the old 2.0 dialect, so the client was requesting a protocol version the server refused to speak.
The Fix
Cleaned up fstab to single entries with correct settings:
//10.42.0.10/Backups /mnt/nas/backups cifs credentials=/etc/cifs-credentials,uid=1000,gid=1000,iocharset=utf8,vers=3.0,_netdev 0 0
//10.42.0.10/downloads /mnt/nas/downloads cifs credentials=/etc/cifs-credentials,uid=1000,gid=1000,iocharset=utf8,vers=3.0,_netdev 0 0
Key changes:
- Removed all duplicates
- Changed vers=2.0 to vers=3.0
- Every entry has _netdev (waits for network before mounting)
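Before remounting, util-linux can lint the edited fstab (assuming a reasonably recent version, 2.29 or later, which ships findmnt --verify):

```shell
# --verify parses /etc/fstab and reports duplicate targets, unknown
# filesystem types, and malformed options without mounting anything.
# Nonzero exit means it found problems.
if findmnt --verify; then
  echo "fstab looks clean"
fi
```

This would have caught both the duplicate targets and any typo in the option string before mount -a ran.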
Reload systemd to pick up the fstab changes:
sudo systemctl daemon-reload
sudo mount -a
All twelve shares mounted. Verified with:
mount | grep cifs
df -h | grep cifs
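mount and df only prove the kernel registered the mounts; a share can still hang on first access. A bounded per-share probe confirms each one actually answers. This is a sketch: nas_probe is a name made up here, and /mnt/nas is the mount root from this writeup:

```shell
# nas_probe (hypothetical helper): list each directory under a mount root
# with a 5-second cap, so one dead share cannot stall the whole loop.
nas_probe() {
  local root="${1:-/mnt/nas}" d
  for d in "$root"/*/; do
    [ -e "$d" ] || continue               # nothing mounted under this root
    if timeout 5 ls "$d" > /dev/null 2>&1; then
      echo "OK   $d"
    else
      echo "DEAD $d"
    fi
  done
}
nas_probe /mnt/nas
```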
The Recovery Sequence
For future NAS mount issues after network outages:
# 1. Force unmount all stale CIFS mounts
sudo umount -a -t cifs -l
# 2. Kill any lingering mount helper processes
#    (pkill -f matches the full command line, so keep the pattern specific)
sudo pkill -f mount.cifs
# 3. Reload systemd if fstab was modified
sudo systemctl daemon-reload
# 4. Mount everything fresh
sudo mount -a
# 5. If still failing, test one share manually
sudo mount -t cifs //nas-ip/sharename /mount/point -o credentials=/etc/cifs-credentials,uid=1000,gid=1000,vers=3.0
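When step 5 still fails, the kernel log usually names the real reason (dialect mismatch, auth failure, unreachable host), which is far more specific than mount's errno:

```shell
# CIFS errors land in the kernel ring buffer; the last few lines usually
# say which stage failed (negotiate, session setup, tree connect).
sudo dmesg | grep -i cifs | tail -20
```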
What I Learned
Network mounts leave stale handles. After a network outage, the old mount registrations don’t clean themselves up. You need to lazy unmount them before new mounts can work.
Check for fstab duplicates. Easy to accumulate over time. Each duplicate can cause “device busy” errors.
SMB version matters. vers=2.0 is a dialect from over a decade ago. Most modern NAS devices default to SMB 3.x, and some refuse the older dialects outright.
The _netdev option prevents boot hangs. Without it, systemd tries to mount NAS shares before the network is up. The boot hangs for 90 seconds waiting for a timeout.
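A related option worth knowing, as an alternative rather than what was done here: systemd's automount hooks in fstab defer the mount until first access, so boot never waits on the NAS at all. A sketch using the same layout as above:

```
//10.42.0.10/Backups /mnt/nas/backups cifs credentials=/etc/cifs-credentials,uid=1000,gid=1000,iocharset=utf8,vers=3.0,_netdev,nofail,x-systemd.automount,x-systemd.idle-timeout=60 0 0
```

x-systemd.automount creates an automount unit that mounts the share on first access; x-systemd.idle-timeout=60 unmounts it again after a minute of inactivity, so a later network outage leaves fewer stale handles behind.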
fstab Best Practices for CIFS
//nas-ip/share /mount/point cifs credentials=/etc/cifs-credentials,uid=1000,gid=1000,iocharset=utf8,vers=3.0,_netdev,soft,nofail 0 0
Options explained:
- _netdev — Wait for network before mounting
- soft — Allow interruption of hung mounts
- nofail — Don’t block boot if mount fails
- vers=3.0 — Use modern SMB protocol
Twelve mounts dead. Three problems stacked on top of each other. Network outages are always messier to recover from than they should be.