Synology NAS vs. MSD SAN

The NAS vs. SAN Dilemma


Synology NAS

In my opinion, the best all-around NAS you can buy is a Synology.

The choice of drives is completely up to the person buying the NAS, but I am usually a Seagate customer. I have had quite a few Western Digital failures, so I have moved away from storing seriously important data on Western Digital hard drives.

Synology builds high-quality, reliable NAS enclosures, which run its own DiskStation Manager (DSM) operating system. Every Synology DiskStation NAS runs the same DSM operating system, so the features are identical whether you use a home-specification NAS or an enterprise-specification NAS.

Some of the high-end enterprise products Synology builds are closer to SAN products than NAS products, but they still have some differences from mainstream SAN enclosures.

Interconnects are one of the obvious differences. The recently launched Disk Controller DC connects to the network through four 1 Gbps Ethernet ports, but connects to up to 1440 TB of storage via a daisy-chained 12 Gbps SAS port.

The bottleneck becomes the network throughput within the Disk Controller.
A total of 4 Gbps full duplex caps the Disk Controller NAS array at 500 MBps (note: megabytes per second, not megabits per second).
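
To make the conversion explicit, here is a minimal Python sketch (the port count & speed come from the Disk Controller specification above):

    # Network bottleneck of the Disk Controller: four 1 Gbps Ethernet ports.
    ports = 4
    port_gbps = 1                               # gigabits per second, per port

    aggregate_gbps = ports * port_gbps          # 4 Gbps per direction (full duplex)
    aggregate_mbps = aggregate_gbps * 1000 / 8  # bits -> bytes: divide by 8

    print(f"{aggregate_mbps:.0f} MBps")         # 500 MBps (megabytes per second)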


MaX Saxe Design SAN

I will be honest here: this is not an MSD SAN; this is a SAN that MaX, MSD CET, would build.

I have long wondered why PCIe flash is not used as the main front end of the storage system.

PCIe flash is falling in cost daily & its storage capacity is increasing monthly. Bandwidth is theoretically capped only by the limits of PCIe 3.0 [8 GTps, 985 MBps (×1), 15.75 GBps (×16)], which will be extended by PCIe 4.0 [16 GTps, 1.969 GBps (×1), 31.51 GBps (×16)].
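
Those per-lane figures follow from the transfer rate & the 128b/130b line encoding that both generations use; a short Python sketch of the derivation:

    # Usable PCIe bandwidth per lane: transfer rate x encoding efficiency / 8 bits.
    # PCIe 3.0 & 4.0 both use 128b/130b encoding (128 payload bits per 130 sent).
    def lane_mbps(gt_per_s):
        return gt_per_s * 1000 * (128 / 130) / 8    # GT/s -> MBps per lane

    for gen, rate in [("PCIe 3.0", 8), ("PCIe 4.0", 16)]:
        per_lane = lane_mbps(rate)
        print(f"{gen}: {per_lane:.0f} MBps (x1), {per_lane * 16 / 1000:.2f} GBps (x16)")

    # PCIe 3.0: 985 MBps (x1), 15.75 GBps (x16)
    # PCIe 4.0: 1969 MBps (x1), 31.51 GBps (x16)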

I would like to maximise the potential of an enclosure with eight Intel Xeon E7-8890v3 18-core CPUs and Intel DC P3700 2.0 TB PCIe flash cards.

  • Each Intel Xeon E7-8890v3 processor supports 32 lanes of PCIe 3.0.
  • Each Intel DC P3700 series 2.0 TB PCIe flash card utilises four PCIe 3.0 lanes.

A total of 256 PCIe 3.0 lanes from eight Intel Xeon E7-8890v3 processors means 64 four-lane PCIe 3.0 Intel DC P3700 cards can be installed & fully supported.
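
The lane budget, as a minimal Python sketch of the arithmetic above:

    cpus = 8
    lanes_per_cpu = 32                          # PCIe 3.0 lanes per Xeon E7-8890v3
    lanes_per_card = 4                          # Intel DC P3700 is a PCIe 3.0 x4 device

    total_lanes = cpus * lanes_per_cpu          # 256 lanes
    max_cards = total_lanes // lanes_per_card   # 64 cards, every lane used
    print(total_lanes, max_cards)               # 256 64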

64 Intel DC P3700 2.0 TB cards

128 TB of PCIe flash storage

This means the user can access 128 TB of PCIe flash storage at 2800 MBps read & 1900 MBps write from each of the PCIe flash cards:

128 TB of PCIe flash storage with a theoretical total bandwidth of 179.2 GBps or 179200 MBps read [64 × 2800 MBps] & 121.6 GBps or 121600 MBps write [64 × 1900 MBps]
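
The same totals worked through in one place (a minimal Python sketch; the 2.0 TB capacity & the 2800/1900 MBps rated speeds per card are the figures quoted above):

    cards = 64
    capacity_tb = cards * 2.0            # 2.0 TB per card -> 128 TB
    read_mbps  = cards * 2800            # rated sequential read per card
    write_mbps = cards * 1900            # rated sequential write per card

    print(f"{capacity_tb:.0f} TB")                               # 128 TB
    print(f"read  {read_mbps} MBps = {read_mbps / 1000} GBps")   # 179200 MBps = 179.2 GBps
    print(f"write {write_mbps} MBps = {write_mbps / 1000} GBps") # 121600 MBps = 121.6 GBps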

"I suppose this is all theoretical" - NO

The bandwidth calculated is not a theoretical PCIe maximum but the rated throughput of the cards themselves, & fortunately the eight Intel Xeon E7-8890v3 processors can handle all of it: 256 lanes at 985 MBps each gives roughly 252 GBps of PCIe bandwidth, comfortably above the 179.2 GBps read peak of the cards.
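
A quick sanity check on that headroom, reusing the 985 MBps per-lane figure from the PCIe 3.0 section above:

    # PCIe supply vs. storage demand for the full array.
    lane_mbps = 985                         # usable PCIe 3.0 bandwidth per lane
    supply_gbps = 256 * lane_mbps / 1000    # ~252.2 GBps across all 256 lanes
    demand_gbps = 64 * 2800 / 1000          # 179.2 GBps peak read from the cards

    print(f"supply {supply_gbps:.1f} GBps, demand {demand_gbps:.1f} GBps")
    assert supply_gbps > demand_gbps        # the CPUs are not the bottleneck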

Some of you may have noticed that I have not left any PCIe bandwidth for the networking connections. This is because I will be using the Intel QPI (QuickPath Interconnect) to connect to the chipset, which will handle the networking connections.

I intend to use Intel MXC & Intel silicon photonics when available.


Future: Intel DC P3800 Series PCIe flash storage

I would like to make a few predictions on the P3800 series PCIe flash storage modules.

  • Up to 16 TB of PCIe flash storage
  • Up to 3600 MBps read bandwidth (×8 & ×16)
  • Up to 2800 MBps write bandwidth (×8 & ×16)
  • PCIe 3.0 formats in ×4, ×8 & ×16

Future: Intel DC P4x00 Series PCIe 4.0 flash storage

I would like to make a few predictions on the P4x00 series PCIe 4.0 flash storage modules.

  • Up to 16 TB of PCIe flash storage
  • Up to 28 GBps read bandwidth (×16)
  • Up to 28 GBps write bandwidth (×16)
  • PCIe 4.0 formats in ×4, ×8 & ×16

The above predictions are based on Intel IPOs & my experience in the technology sector.