
Homemade NAS after two and a half years

A few years ago, I built a NAS system out of my old PC. The old PC is powerful enough for basic NAS operations, except real-time transcoding of 4K or more than one 1080p stream.

The setup was as follows:

  • 1x 128 GB SSD as the boot device

  • 2x 4 TB HDDs for the main storage, configured as RAID 1

The hard disks were from different manufacturers (WD Red Plus and Seagate) to avoid a catastrophic scenario: both disks come from the same bad batch and the second one fails while the RAID array is being rebuilt.
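On CentOS, an array like this is typically built with mdadm. Here is a minimal sketch, assuming the two data disks appear as /dev/sdb and /dev/sdc (device names and the mount point are assumptions):

# Build a two-disk RAID 1 array from the 4 TB drives
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# Record the array so it assembles automatically on boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
# Create a filesystem and mount it as the main storage
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/storage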

Here is a list of my mistakes:

My first mistake was CentOS. I wanted a stable machine without surprises or unexpected behavior, and with as few bugs affecting my setup as possible. I had three free/open source options:

Open Media Vault: a Debian-based Linux with many configuration options. The main downside is that you don't have support for ZFS and you very often have to use the command line to fix things. Moreover, there is a plethora of open issues/bugs that can make your life harder than expected.

CentOS: a stable Linux based on Red Hat. All the base software is checked to work well with the system and there are few bugs that can cause trouble. The biggest drawback is that you have to do most of the configuration manually.

TrueNAS (FreeNAS back then): a FreeBSD distribution with ZFS support. Although I really liked the idea of using this OS, I had three concerns. First, the OS "requires" at least 8 GB of memory (not true for simple installations). Second, I'm not familiar with FreeBSD systems. What happens if something goes wrong and I need to connect using the command line? I'm familiar with mdadm but not with BSD and ZFS tools. Third, I was not familiar with ZFS terminology and its way of thinking. What is a pool? What is a dataset? What is encryption? Should I compress? Dedup? What are these permissions and how do they affect the Windows shares? I simply did not have time to spend on configuring the system.
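As it turns out, the terminology maps to only a handful of commands. A minimal sketch, assuming the two data disks show up as ada1 and ada2 and a pool called tank (names are illustrative, not my actual setup):

# A pool groups the physical disks (here a two-disk mirror, ZFS's equivalent of RAID 1)
zpool create tank mirror /dev/ada1 /dev/ada2
# A dataset is a filesystem carved out of the pool, with its own properties
zfs create tank/media
# Compression and deduplication are per-dataset properties
zfs set compression=lz4 tank/media
zfs set dedup=off tank/media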

I selected CentOS because it was a good opportunity to learn more about configuring an enterprise-level Linux distribution. However, CentOS eventually turned out to be a bad choice due to Red Hat's decision to stop supporting it. You don't have many options after the end of support, and I didn't want to spend time formatting and configuring a new NAS. Moreover, the updates were not always hassle-free and I had issues from time to time. Recently I revisited my choice and moved to TrueNAS, but that is another blog post.

My second mistake was using a single SSD as the boot device. Although I never had a catastrophic failure, I realized afterwards how dangerous this was. What happens if your boot disk crashes? Have you backed up the configuration? How do you back up the configuration on CentOS? (Answer: manually.)
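A "manual" backup can be as small as a tar archive of the relevant files. Here is a rough sketch, assuming a CentOS NAS with mdadm, Samba, and NFS shares (the file list and destination path are assumptions):

#!/bin/bash
# Archive the NAS configuration files onto the main storage
tar czf /mnt/storage/backups/nas-config-$(date +%F).tar.gz \
    /etc/fstab \
    /etc/mdadm.conf \
    /etc/samba/smb.conf \
    /etc/exports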

My third mistake was using disks with a two-year warranty instead of five: simple NAS drives instead of Pro (or enterprise) NAS drives. Eventually one of the two disks (the Seagate IronWolf) crashed after 30 months, about six months after the warranty ended. Interestingly, the SMART self-test kept completing successfully, yet I was noticing an increase in the read error rate:

SMART Attributes Data Structure revision number: 10

Vendor Specific SMART Attributes with Thresholds:

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE

  1 Raw_Read_Error_Rate     0x000f   082   064   044    Pre-fail  Always       -       171088624

I eventually replaced the disk with a WD Red Plus (yes, I know I should have selected a Pro, but it was not available locally).
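By the way, the attribute table above comes from smartmontools, which is worth running periodically even when the self-tests pass. A minimal sketch (the device name is an assumption):

# Print the vendor SMART attributes, including Raw_Read_Error_Rate
sudo smartctl -A /dev/sdb
# Start an extended self-test; check the result later with smartctl -a /dev/sdb
sudo smartctl -t long /dev/sdb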

After the last crash, I decided to make the following changes to my NAS:

  • Move to TrueNAS (ZFS with compression)

  • Use RAIDZ1 for the boot device for safety

  • Use a third drive for backing up the data from the main pool (a replication sketch follows this list)
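For the third drive, the plan is the usual ZFS snapshot-and-replicate pattern. A rough sketch, assuming a main pool called tank with a media dataset and a backup pool on the third drive (all names are assumptions):

# Take a dated snapshot of the main dataset (first run is a full send;
# later runs would use an incremental zfs send -i)
SNAP=tank/media@$(date +%F)
zfs snapshot "$SNAP"
# Replicate the snapshot to the pool on the backup drive
zfs send "$SNAP" | zfs receive backup/media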

I will write a future post about my experience with the new configuration!


