I don’t know if their brand is that great. I have been using Synology NAS for about 15 years. It is very solid and easy to use, but the hardware is expensive and non-customizable, and the underlying OS is based on an ancient Linux kernel. I have now run into the volume size limit (200 TB), and disk sizes keep increasing exponentially. And they don’t support enterprise SSDs (SAS/U.2).
So in my mind I was already thinking of moving on to custom hardware for my next NAS; that policy just made it a no-brainer. And reading comments on Reddit, I feel there are many people in a similar state of mind.
I find Synology NASes to be at the sweet spot between "too simple for anything except accessing some files remotely via the vendor's app" (like WD) and "another tech babysitting project".
DSM is rock solid in my opinion, and gives enough freedom to tinker for those that want to. The QuickConnect feature makes it easy to connect to the NAS without being locked in to one specific app.
Exactly. About 10 years ago I wanted to set up a NAS to store a variety of things. I have the knowhow to hand roll just about anything I wanted, but I lacked the desire or time to do so. At the same time, the simple things were tying me to apps or otherwise putting me on rails.
Instead I bought a lower end Synology & stuffed it with some HDs, and it's been pretty fire & forget while satisfying all of my needs. I'm able to mount drives on it from all of the devices in my network. I can use it as a BitTorrent client. I use it to host a Plex server. And a few other odds & ends over time.
Meanwhile the only issues I had were needing to solder a resistor onto the motherboard to resolve some issue, and replacing some HDDs as they were aging out.
All in all it has struck a perfect balance for me. I'll grant that "solder a resistor onto the motherboard" is likely beyond a typical home user but it's also been a lot less fiddling than some home-brew solution.
> Meanwhile the only issues I had were needing to solder a resistor onto the motherboard to resolve some issue
You and I must have a different idea of "fire and forget." I've been running my NAS on a generic Dell running stock Debian for over a decade now, and I've never had to get the soldering iron out to maintain it!
Agreed. It was a pretty freak issue, albeit one with a well-known fix. I mentioned it here in full disclosure, and did note that it was beyond what most people would consider tolerable. And I'll admit that I came very close to throwing it in the garbage and buying a new one.
Still, other than replacing old drives, something that'd happen regardless of solution, that's the only fiddling I ever had to do.
That was almost certainly the Intel Avoton clock degradation issue. It hit Cisco and lots of other networking vendors too. I lost Supermicro and ASRock boards to the same thing. Soldering on the resistor gets the CLK circuit back into spec for a while, but I had an officially-repaired board eventually fail again in the same way after a few more years since it keeps degrading.
That's a good reminder, I forgot about it being temporary. Looks like it was ~6 years before the initial failure, and it's been ~4 years since.
I should start investigating potential migration paths that would allow me to do an HDD migration, as that would be ideal. Although it looks like that might be a pain due to some of their OS-level limitations.
I swapped my dead C2750 (Supermicro A1SAi-2750F) board for my cold-spare C3558 (A2SDi-4C-HLN4F) and was right back running again. I guess if you're talking about an appliance it's a little different, but this was just my home firewall/router FreeBSD+PF+Jails machine.
And actually a good reminder for me to eBay up another cold spare, because I totally forgot to.
Same here. Still rocking a DS415+ from 2015. Had to solder a 100-ohm resistor to work around the Intel Atom C2000 flaw. It has had a new set of spinning rust in that time too. It's also connected to a UPS, so it will power down if there's an extended outage. Stuck on DSM 7.1, but it does the job.
As for the ancient Linux kernel, I want the device I’m using for backups to be secure. I’m not saying I need to be using the kernel on ~main, but there are important security fixes merged in the last 5 years.
I'd be far more wary of the application-level services provided by Synology than of the kernel in this context; as long as the vendor backports the various fixes and you update the kernel, you should in theory be fine. But the applications get far less scrutiny.
What you really should never do is expose your NAS to the internet, even if vendors seem to push for this. Of course, you'd still be vulnerable to a compromised application on another machine on the same network as the NAS. It's all trade-offs. My own solution to all this was quite simple but highly dependent on how I use the NAS: when not in use it is off, and it is only ever connected to my own machine running Linux, not to the wifi or the house network.
It's hard to find any other products that compare to DSM. It really is something special. It's worth a small premium in hardware costs. But I share a lot of the concerns as everyone else here and will be considering other options.
I find that Linux NAS and router projects require essentially no babysitting. You do have to do some initial setup work, but once that's done, there's no maintenance (other than replacing failed hardware) for years and years.
I just lost a bunch of files on mine due to their Drive software. I was setting up a folder to sync and just clicking the folder in their file explorer when setting it up isn’t enough to actually select it, so the sync went one level higher than it should have. That decided to wipe out the folders on that level instead of trying to sync them back to my computer, for whatever reason.
Also for whatever reason when you use Drive files don’t go into the regular recycle bin. They go into the Drive recycle bin…but only if you have file backups (whatever they call them, where it saves copies of files if they’re changed) enabled. I didn’t, for that folder.
My story is similar. I've been using them for a decade, and was shopping for an upgrade when they made the proprietary-drive announcement.
It was the impetus I needed to realize that it only takes an hour to build my own, better, NAS out of junk I mostly already owned and save a ton of money. I won't be going back.
I started with FreeNAS or whatever flavor of it existed well over a decade ago. It was enough hassle that I went Synology because the stuff I like tinkering with isn't the storage of my most important data. Everything I do with NUCs, Pis, VMs, etc is somewhat ephemeral in that it's all backed up multiple times and locations.
I spent five hours debugging a strange behavior in my shell with some custom software this morning and submitted a bug report to a software vendor that was not the expected cause of the issue. I feel great about it. I used to feel great about my Synology NAS, too.
Qnap, Ugreen, whatever else: we'll see when my current model is due for replacement. Synology will have to perform pretty much a miracle before then for me to consider them again, after three generations of their hardware that were all very satisfactory. What a major mistake.
They weren't perfect, but they were perfect for my needs. Not anymore.
You can build a little hot-swappable NAS with nice trays to slide disks in and out, an easy web GUI, front panel status lights, support for applications like surveillance cameras, etc, with junk you mostly already owned?
I don't think most people consider easy hot-swaps + front panel status lights particularly key features in their home NAS.
I don't swap drives unless something is failing or I'm upgrading - both of which are a once every few years or longer thing, and 15min of planned downtime to swap doesn't really matter for most Home or even SMB usage.
-----
As for the rest, TrueNAS gets me ZFS, a decent GUI for the basics, the ability to add in most other things I'd want to do with it without a ton of hassle, and will generally run on whatever I've got lying around for PC hardware from the past 5-10 years.
It's hard to directly compare non-identical products.
For me and my personal basic usage - yes, it really was pretty much as easy as a Synology to set up.
It's entirely possible that whatever you want to do with it is a lot of work on something like TrueNAS vs easy on a Synology, I'm not going to say that's the case for everything.
Hot swap for drives is a must on a NAS. If you have to power it down to swap out a drive, there is a chance that your small problem becomes a larger one. Better to replace the drive immediately and have the NAS do the rebuild without a power cycle.
If you're worried the hard drives won't spin back up, I'd say you should instead spin them down regularly so you know that risk is basically zero. If you're worried the power supply will explode and surge into the drives when you turn it on, you should not be using that power supply at all. Any other risks to powering it down?
And for the particular issue of replacing a failed drive and not wanting to open up the case while it's powered, you can get a single drive USB enclosure to "hot swap" for $20. And if you use hard drives you should already have one of those laying around, imo.
Agree, you should consider replacing your drives on your primary server (backup servers we can debate) as soon as you start seeing the first SMART problems, like bad sectors. If you do regular data scrubbing, and none of these problems show up on the other drives, I'd argue the risk that they fail simultaneously is fairly low.
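For anyone who wants to automate "watch for the first SMART problems", here's a minimal sketch. The attribute table is a made-up sample in the format `smartctl -A` prints, and the choice of watched attributes is just my own pick:

```python
# Scan a `smartctl -A` attribute table and flag drives whose
# reallocated/pending sector counts are nonzero. SAMPLE is a
# hand-written example of the table format, not real drive data.

SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       8
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
"""

# Attributes worth acting on early; nonzero raw values here are the
# "first bad sectors" signal mentioned above.
WATCHED = {"Reallocated_Sector_Ct", "Current_Pending_Sector"}

def failing_attributes(report: str) -> dict:
    """Return watched attributes that have a nonzero raw value."""
    bad = {}
    for line in report.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] in WATCHED:
            raw = int(fields[9])
            if raw > 0:
                bad[fields[1]] = raw
    return bad

print(failing_attributes(SAMPLE))  # {'Reallocated_Sector_Ct': 8}
```

In practice you'd feed this the output of `smartctl -A /dev/sdX` from a cron job and mail yourself when the dict is non-empty.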
Hot-swap drives are necessary in data centers, where you don't want to have to pull the whole server and open the top cover just to replace a disk.
But on a home NAS? What problem would having to power it down and back up for a drive replacement create? You're going to resync the array anyway.
I don't mind them and I do use them, but I consider them a very small QoL improvement. I don't replace my disks all that often. And now that you can get 30 TB enterprise Samsung SSDs for $2k, two of those babies in RAID 1 plus an Optane cache gives you extremely fast and reliable storage in a very small footprint.
No, I've seen this happen on larger arrays. Restarting with a degraded array risks another drive not coming back up, and then you are on very thin ice. Power cycles are usually benign, but they don't have to be, and on an array there is a fair chance that all of the drives are equally old; if one dies, there may be another that is marginal but still working. Statistically unlikely, but I have actually seen this in practice, so I'm a bit wary of it. The larger the array, the bigger the chance.

This, plus the risk of controller failure, is why my backup box uses software RAID 6. It definitely isn't the fastest, but it has the lowest chance of ever losing the whole thing. I've seen a hardware RAID controller fail as well, and that was a real problem: for one, it was next to impossible to find a replacement, and for another, when the replacement finally arrived it would not recognize the drives.
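To put a rough number on the rebuild-window risk: here's a back-of-the-envelope sketch. The 5% annualized failure rate and 24-hour rebuild time are illustrative assumptions, not measurements, and the constant-hazard model ignores the correlated wear of same-age drives (which makes reality worse):

```python
# Rough estimate of the chance that at least one surviving drive
# fails during an array rebuild. All numbers are assumptions.

def rebuild_failure_risk(surviving_drives, afr=0.05, rebuild_hours=24):
    # Convert an annualized failure rate (AFR) into a failure
    # probability for the rebuild window, assuming a constant hazard.
    p_one = 1 - (1 - afr) ** (rebuild_hours / (365 * 24))
    # Probability that at least one of the remaining drives fails
    # before the rebuild completes.
    return 1 - (1 - p_one) ** surviving_drives

# 7 surviving drives of an 8-disk array, 5% AFR, 24 h rebuild:
print(f"{rebuild_failure_risk(7):.4%}")  # roughly 0.1%
```

Small per rebuild, but it compounds over years of drive swaps, which is why an extra parity disk (RAID 6 over RAID 5) buys real peace of mind.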
In fact I find the synology disk trays to be very fragile. Out of the 48 trays I have, I think a good 6 or 7 do not close anymore unless you lock them with a key. A common problem apparently.
Sure. You buy a Chinese case with 6-8 bays off AliExpress, throw in a board with ECC RAM support and a few disks, install TrueNAS Scale on it, and set up an OpenZFS pool. The front panel lights are controllable via the kernel's LED subsystem [0]; it even offers a ready-made disk-activity module if you want to hack. Surveillance cameras are handled by Frigate, an open-source NVR software which works really well.
Especially when you want to build and learn, there's next to no reason to buy a Synology.
Very valid advice, but you don't do all that in "an hour," of course. Synology's purpose in life is to provide a solution to users who are more interested in the verbs than the nouns.
They are the Apple of the NAS industry, a role that has worked out really well for Apple as well as for most of their users. The difference is, for all their rent-seeking walled-garden paternalism, Apple doesn't try to lock people out of installing their own hard drives.
Kudos to Synology for walking back a seriously stupid move.
Once you have the case, an hour or two is pretty reasonable... you can even have your boot device pre-imaged while waiting on the case to get delivered.
Not to mention the alternative brands that allow you to run your own software... I've got a 4-bay TerraMaster (F-424 Pro) as a backup NAS. I don't plan on buying another Synology product.
I'm no stranger to building boxes or running servers, but I've run a couple of different Synology NAS over the past 15 years. My estimate is that if I were to put together my own system, it would probably take several days and cost about the same as if I were to buy Synology. I'm not familiar with building NAS systems specifically, so that might be part of the issue. But saying you can do it in one hour seems like hyperbole.
When I looked into it last, I planned to spend about as much as on a Synology, but it would have much more compute and memory and as much storage. I was likely going to run Proxmox as the primary OS and pass the SATA controller(s) through to a TrueNAS Scale VM... Alternatively, just run everything in containers under TrueNAS directly.
For my backup NAS, I wound up going with a TerraMaster box and loading TrueNAS Scale on it.
Someone building their own probably isn't too afraid of missing out on a webgui or installing something like FreeNAS or whatever is the popular choice these days.
I think the NAS market is in for an upheaval: squeezed from the top by markups on fairly crappy hardware, and from the bottom by cloud storage.
The RPi 5 can be had with 16 GB of memory and has a PCIe port. Some might complain about the lack of ECC RAM, but do all those cheap ARM CPUs on lower-end NASes really have that either?
I think the biggest factor might be that case manufacturers haven't found it to be a high enough margin, but it only takes one to decide that they want to take a bite out of the enthusiast NAS market.
In any case, none of the requirements you listed seem that exotic. There are computer cases with hot-swap ready drive cages, and status lights (or even LCDs) are easy to find. The software is probably already on github. The toughest ask is probably for it to be "little", but that's not something everybody cares about. So I don't find the GP's claim to be that much of a stretch.
they’re pretty clearly referring to _their_ use case and not everyone’s. i think people are mostly talking past each other about this. there isn’t one feature set that matters for everyone, so of course a synology is perfect for some and for others it can be replaced with “junk”.
There are several drive tray cases for ITX and mATX that you can choose from. As for a Web GUI, you can get TrueNAS Scale running relatively easily, and there are other friendly options as well... so yes.
I have been running TrueNAS (was FreeNAS) for ~10 years now and never had issues. There is the risk that TrueNAS gets rug pulled and no longer is free for non commercial use, but so far it has been fine.
The thing is, I'm still running FreeNAS 9, not even TrueNAS. If they rug pull, not only will there be forks, but the old versions should just continue to work!
Trivially, on their (and QNAP's) amd64 systems at least. There are some quirks where they are more like an embedded system than a PC, but it's not a big deal. Things like console over UART (unless you add one) and fan control not working out of the box, so you set it to full speed in the BIOS or mess with the config.
Nope, the purpose of a Synology unit is to be about as complex as a toaster. Put it on the shelf, plug it in, make sure auto-updates are enabled, and forget about it until it sends you an email in 5-10 years that one or more drives is full/failing. I bought a synology almost 10 years ago and it's been purring away in a closet somewhere and never causing problems the entire time.
If you want a device to tinker with, this is the wrong product for you.