The Update That Broke Storage Manager
Date: 2025-03-21
Duration: about 2 hours
Issue: multiple Synology services failing after a DSM update
Root cause: corrupted FHS target in the StorageManager package
The Cascade
Started with SMB acting up. Reinstalled the SMB package. That fixed SMB.
Then I noticed nine other services were in error state:
- Active Insight
- DHCP Server
- File Station
- Hybrid Share
- OAuth Service
- Python2
- SAN Manager
- Secure SignIn Service
- Storage Manager
All showing the same error: “Unable to run this package because of storage abnormalities. Please go to Storage Manager for detailed information.”
The irony: Storage Manager itself was broken. Can’t check storage abnormalities when the thing that checks storage abnormalities won’t start.
The Panic Moment
116TB of data on this NAS. Family photos. Project archives. Backups of backups.
First instinct: run filesystem checks, repair commands, maybe even touch partition tables.
Then I stopped. This is vital data. Every command needs to be non-destructive until we know exactly what’s wrong.
The Investigation
Started with read-only diagnostics:
mount | grep volume
# /dev/mapper/cachedev_0 on /volume1 type btrfs (rw,...)
Volume is mounted. That’s good. Data should be accessible.
ls -la /volume1
Files are there. I can read them. The volume isn’t corrupted — it’s the package management that’s broken.
Checked the StorageManager package:
synopkg status StorageManager
{
"status": "broken",
"broken_by": "fhs",
"status_description": "failed to read fhs target"
}
FHS — Filesystem Hierarchy Standard. The package’s directory structure was corrupted.
The Root Cause
ls -la /var/packages/StorageManager/target/
# ls: cannot access '/var/packages/StorageManager/target/': No such file or directory
The target directory didn’t exist. The package was installed according to synopkg list, but its file structure was incomplete.
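The same missing-target pattern can be checked for every installed package at once. A minimal sketch, assuming the /var/packages layout shown above — `scan_packages` is a made-up helper, not a DSM command:

```shell
# Sketch: flag packages whose FHS "target" entry is missing or a
# dangling symlink. scan_packages is a hypothetical helper; on a
# real Synology you would point it at /var/packages.
scan_packages() {
    root="$1"
    for dir in "$root"/*/; do
        [ -d "$dir" ] || continue
        # -e follows symlinks, so a dangling target symlink also counts
        if [ ! -e "${dir}target" ]; then
            echo "BROKEN: $(basename "$dir")"
        fi
    done
}

# On DSM: scan_packages /var/packages
```

Everything it does is read-only, so it is safe to run before deciding on a fix.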
DSM 7.2.2 update + SMB reinstallation = something got corrupted in the package manager’s symlink structure.
The data was fine. The filesystem was fine. But Synology’s package system couldn’t find the files it needed to start Storage Manager, which cascaded into every service that depends on it.
The Fix
First, recreated the missing directory structure:
sudo mkdir -p /var/packages/StorageManager/target
Then tried repairing the package:
sudo synopkg repair StorageManager
Restarted the service:
sudo synosystemctl restart pkgctl-StorageManager
Storage Manager came back. Checked the web interface — Package Center showed it as healthy.
Restarted the other broken services one by one. They all came back once Storage Manager was functional.
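Wrapped up, the fix is just the three commands above. A hedged sketch as a reusable function — `repair_package` is my name for it, `synopkg` and `synosystemctl` are the real DSM utilities, and the guard lets the sketch exit cleanly on a non-Synology machine:

```shell
# Sketch of the repair sequence: recreate the target dir, repair the
# package, restart its service. repair_package is a hypothetical
# wrapper, not a built-in DSM command.
repair_package() {
    pkg="$1"
    if ! command -v synopkg >/dev/null 2>&1; then
        echo "synopkg not available; run this on the DSM host"
        return 1
    fi
    sudo mkdir -p "/var/packages/$pkg/target"
    sudo synopkg repair "$pkg" || return 1
    sudo synosystemctl restart "pkgctl-$pkg"
}

# repair_package StorageManager
```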
What I Didn’t Do
I got suggestions to run btrfs check directly on the volume. Didn’t do it.
I got suggestions to use parted to fix GPT errors. Didn’t do it.
I got suggestions to reinstall DSM. Didn’t do it.
None of those were necessary. The problem was a missing directory, not filesystem corruption. Running invasive repair tools on 116TB of data because of a package manager glitch would have been reckless.
The Lessons
Updates can break package structures. DSM updates are usually smooth, but occasionally they leave packages in inconsistent states.
Read-only diagnostics first. Before running any repair command, verify what’s actually broken. ls, mount, cat — these can’t hurt anything.
Data accessibility ≠ service health. The volume was mounted and readable. The problem was entirely in the package management layer.
Cascading failures hide root causes. Nine services were broken, but only one — Storage Manager — was the actual problem. Fix the root, and the rest follow.
Don’t trust generic repair suggestions. Commands like parted mklabel gpt can destroy partition tables. On a NAS with irreplaceable data, that’s not a debugging step — it’s a disaster.
The Recovery Checklist
For future Synology package failures:
- Check if the volume is mounted: mount | grep volume
- Verify data accessibility: ls /volume1
- Check package status: synopkg status <package>
- Look for missing target directories: ls -la /var/packages/<package>/target/
- Try repair before reinstall: synopkg repair <package>
- Restart dependent services after fixing the root cause
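The read-only half of that checklist is easy to script. A sketch — `preflight` is a made-up helper that takes the mount point and package directory as arguments, so it isn't hardwired to DSM paths:

```shell
# Sketch: read-only preflight for a package failure. Reports mount
# state and whether the package's FHS target exists; changes nothing.
preflight() {
    vol="$1"; pkgdir="$2"
    if mount | grep -q " on $vol "; then
        echo "mounted: $vol"
    else
        echo "NOT mounted: $vol"
    fi
    if [ -e "$pkgdir/target" ]; then
        echo "target ok: $pkgdir/target"
    else
        echo "target MISSING: $pkgdir/target"
    fi
}

# On DSM: preflight /volume1 /var/packages/StorageManager
```

Only the repair and restart steps change anything; run those by hand after the preflight tells you what is actually broken.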
Nine services broken. 116TB at stake. Fixed with mkdir -p. Sometimes the scariest problems have the simplest solutions.