FPGA RAIDZ (ZFS) Accelerator

Field Programmable Gate Array (FPGA) RAIDZ (ZFS) Acceleration

FPGAs have been in use for many years, but their ability to be reconfigured to suit the software makes them, in effect, mouldable ASICs for almost any application.

Reprogram the FPGA alongside the software, and the hardware is optimised for the software.


MaX Saxe Design uses FreeBSD as the core operating system for the majority of their servers.

MaX Saxe Design uses its own custom FreeNAS distribution, which we refer to as FreeSAN because it is optimised for larger storage networks.


FPGA

Field Programmable Gate Arrays (FPGAs) are increasingly popular processors that can be reconfigured to suit the software running on them.

This flexibility enables MaX Saxe Design to develop highly optimised hardware and software together.


I, MaX Falstein, CET and Founder of MaX Saxe Design, have been working with FPGAs for five years; over the last two years I have been making increasing use of FPGAs in designs, developments and deployments.


I have been developing FPGA networks for supercomputers.

I have been developing FPGAs for use in routers and managed switches for terabit and exabit class carrier, data centre and enterprise-grade networking.


MaX Saxe Design is developing applications for FPGAs to be applied in many areas, including infrastructure automation.


MaX Saxe Design FreeSAN, which is currently not publicly available, utilises ZFS RAIDZ pools.

We have a few FreeSANs, some of which are set up with multiple compound RAIDZ pools.

We have standard compound pools, such as RAIDZ3 combined with mirroring or striping, depending on the data.

We have slightly less standard compound pools, such as RAIDZ3 with RAIDZ or RAIDZ2.

We have very uncommon compound pools, such as multiple levels of RAIDZ3.

For example, eight RAIDZ3 arrays combined into a single outer RAIDZ3 array.

We do this for the extra levels of redundancy demanded by the IOPS-intensive nature of the stored data.
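
To put rough numbers on that layout, the sketch below estimates the usable capacity of eight RAIDZ3 arrays nested inside an outer RAIDZ3. The drive count and drive size per inner array are hypothetical placeholders (they are not stated above), and ZFS metadata, padding and allocation overhead are ignored, so treat the result as an upper bound.

# Back-of-the-envelope usable-capacity estimate for a nested RAIDZ3 layout:
# eight inner RAIDZ3 arrays used as the eight members of an outer RAIDZ3.
# Drive count and drive size per inner array are hypothetical, for illustration.

def raidz3_usable(members: int, member_size_tb: float) -> float:
    """Usable capacity of a RAIDZ3 group: all members minus three parity members."""
    if members < 4:
        raise ValueError("RAIDZ3 needs at least four members")
    return (members - 3) * member_size_tb

DRIVES_PER_INNER = 11   # hypothetical: 8 data + 3 parity drives per inner array
DRIVE_SIZE_TB = 2.0     # hypothetical drive size
INNER_ARRAYS = 8        # eight inner RAIDZ3 arrays in the outer RAIDZ3

inner_usable = raidz3_usable(DRIVES_PER_INNER, DRIVE_SIZE_TB)
outer_usable = raidz3_usable(INNER_ARRAYS, inner_usable)
raw_total = INNER_ARRAYS * DRIVES_PER_INNER * DRIVE_SIZE_TB

print(f"Raw capacity:    {raw_total:.1f} TB")
print(f"Usable capacity: {outer_usable:.1f} TB ({outer_usable / raw_total:.0%} of raw)")
# Any three inner arrays can be lost entirely, and each surviving inner array
# can additionally lose up to three of its own drives.

With those placeholder figures, the double layer of triple parity leaves roughly 45% of the raw capacity usable, which is the price paid for the extra redundancy.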


Where does the FPGA come into RAIDZ?

At the moment, there is very little FPGA software targeting the RAIDZ or ZFS architecture.

I think this is an oversight.

We can optimise the FPGA hardware for the ZFS file system and then for the type of pooling: striping, mirroring, RAIDZ, RAIDZ2 and RAIDZ3.
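
To give a feel for the data-path work such an accelerator would take over, here is a minimal sketch of stripe parity generation and single-column reconstruction in Python. It covers only the XOR-based first parity column used by single-parity RAIDZ; RAIDZ2 and RAIDZ3 add further parity columns computed with Galois-field arithmetic, and none of this reflects ZFS's actual on-disk layout.

# Minimal sketch of the work a RAIDZ accelerator would offload: generating the
# first parity column of a stripe and rebuilding a lost data column from it.
# Single-parity RAIDZ uses plain XOR for this column; RAIDZ2 and RAIDZ3 add
# further columns computed with Galois-field arithmetic, omitted here.

from functools import reduce

def xor_parity(columns: list[bytes]) -> bytes:
    """XOR all data columns of a stripe together to produce the parity column."""
    if not columns:
        raise ValueError("a stripe needs at least one data column")
    if len({len(col) for col in columns}) != 1:
        raise ValueError("all columns in a stripe must be the same length")
    return bytes(reduce(lambda a, b: a ^ b, position) for position in zip(*columns))

def rebuild_column(columns: list[bytes], parity: bytes, missing: int) -> bytes:
    """Rebuild one lost data column from the surviving columns plus the parity."""
    survivors = [col for i, col in enumerate(columns) if i != missing]
    return xor_parity(survivors + [parity])

# Example: a four-column stripe with one failed column.
stripe = [b"\x11" * 8, b"\x22" * 8, b"\x44" * 8, b"\x88" * 8]
parity = xor_parity(stripe)
assert rebuild_column(stripe, parity, missing=2) == stripe[2]

In an FPGA implementation, the same XOR (and the Galois-field multiplies for the second and third parity columns) would be laid out as wide parallel logic in the data path rather than executed as CPU instructions.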


MaX Saxe Design FPGA RAIDZ and ZFS

MaX Saxe Design is developing software for the FPGA to work with a RAIDZ and ZFS architecture.

It will work with Intel PCIe NVMe flash drives.

This FPGA will not work with SAS or SATA solid-state drives or spinning magnetic hard disk drives.


There is a long way to go with FPGA development and with application development for the FPGAs, but MaX Saxe Design is committed to its FPGA projects. It already has successful use cases in supercomputer applications, where FPGAs are far more powerful per square millimetre than graphics processing units.

MaX Saxe Design hopes to shape the way processing boards and cards are built by producing FPGA + SoC PCIe boards for supercomputers within the next five years, with an expected launch in 2021.

I am very passionate about FPGAs; I have a few tucked away in a network render cluster.

The original test render cluster node:

Two Intel Xeon E5-2699 v3
Supermicro chassis
Four power supplies, 1250 W each
Four Intel NVMe flash drives (RAIDZ3)
Supermicro dual-port 100 Gbps SFP+ NIC
Eight AMD FirePro S10000

The second generation test render cluster node:

Two Intel Xeon E5-2699 v3
Supermicro chassis
Four power supplies, 1250 W each
Four Intel NVMe flash drives (RAIDZ3)
Supermicro dual-port 100 Gbps SFP+ NIC
Eight AMD FirePro W9100

Generation three was the same as the above, except that AMD FirePro S9750x2 server GPUs replaced the AMD FirePro W9100 workstation and render GPUs.

Generation three was scrapped in favour of FPGAs.

The first generation FPGA test render cluster node:

Two Intel Xeon E5-2699 v3
Supermicro chassis
Four power supplies, 1250 W each
Eight Intel NVMe flash drives (two RAIDZ3 pools)
Supermicro dual-port 100 Gbps SFP+ NIC
64 FPGAs on eight FPGA PCIe 3.0 x16 boards

The (proposed) second generation FPGA test render cluster node:

Two Intel Xeon E5-2699 v3
Supermicro chassis
Four power supplies, 1250 W each
Eight Intel NVMe flash drives (two RAIDZ3 pools)
Supermicro dual-port 100 Gbps SFP+ NIC
64 FPGAs + eight 16-core Intel Xeon D SoCs on eight PCIe 3.0 x16 boards


FPGA + SoC Board

I am currently writing an article about FPGAs and SoC boards.

It will necessarily be a vague article, as I would like to apply for patents in the next six months covering some of the designs and engineering MaX Saxe Design is producing.

There is a lot of talk about the Intel Xeon D platform.

I am very interested in the Xeon D SoC platform because of its core count, up to 16, along with its low power consumption.

The Xeon D makes a superb NAS or SAN processor.

The Xeon D shows no noticeable shortfall in performance even when the SAN has large amounts of NVMe flash storage and 10 Gbps of network traffic being thrown at it.
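
As a rough sanity check on that claim, the figures below use nominal line rates and a typical PCIe 3.0 x4 NVMe sequential read speed rather than measurements of this hardware: a 10 Gbps link works out to roughly 1.25 GB/s, which a single NVMe drive can already saturate, so at that network speed the SoC is rarely the limiting factor.

# Rough sanity check: how many NVMe drives' worth of sequential reads does it
# take to fill a given network link?  Nominal line rates only; protocol framing
# overhead is ignored, and the NVMe figure is a typical PCIe 3.0 x4 drive.

NVME_READ_GB_S = 3.0   # typical sequential read, PCIe 3.0 x4 NVMe drive

for link_gbps in (10, 100):
    link_gb_s = link_gbps / 8          # bits per second -> bytes per second
    drives_needed = link_gb_s / NVME_READ_GB_S
    print(f"{link_gbps:>3} Gbps link ~ {link_gb_s:.2f} GB/s "
          f"~ {drives_needed:.1f} NVMe drives of sequential reads")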

On the board MaX Saxe Design has designed, the SoC is connected to the PCIe bus. It can have two direct connections to a 100 Gbps network from the rear of the PCIe 3.0 x16 board.

The SoC communicates with the main Intel Xeon E5 v3 or v4 processors over the PCIe 3.0 bus.
The SoC communicates with four FPGAs via the SoC's own PCIe 3.0 bus.

The SoC controls the two direct connections to a 100 Gbps network on the board over the PCIe 3.0 bus.

The FPGAs have everything they need to process the data: RAM, NVMe buffer storage, network connectivity through the SoC and much more.
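
To put approximate numbers on that topology, the sketch below uses nominal PCIe 3.0 per-lane rates; the lane split between the SoC, the four FPGAs and the network ports is my assumption rather than a published specification for this board.

# Nominal bandwidth budget for the FPGA + SoC board described above.
# Per-lane PCIe 3.0 throughput is the standard figure; the x4 link per FPGA
# is an assumed lane split, not a published specification.

PCIE3_GB_PER_LANE = 0.985   # ~8 GT/s with 128b/130b encoding, per direction

host_slot_gb_s = 16 * PCIE3_GB_PER_LANE        # PCIe 3.0 x16 slot to the host CPUs
fpga_links_gb_s = 4 * 4 * PCIE3_GB_PER_LANE    # four FPGAs on assumed x4 links to the SoC
network_gb_s = 2 * 100 / 8                     # two 100 Gbps ports on the board

print(f"Host slot (x16):         {host_slot_gb_s:.1f} GB/s per direction")
print(f"SoC to four FPGAs (x4):  {fpga_links_gb_s:.1f} GB/s per direction")
print(f"Two 100 Gbps ports:      {network_gb_s:.1f} GB/s")

On those figures the on-board network ports can move more data than the x16 host link, which suggests one motivation for terminating traffic on the SoC and FPGAs rather than pushing everything through the main Xeon E5 processors.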

The FPGA + SoC board is headless in graphics terms. It cannot be controlled directly, only through the GUI and WUI that MaX Saxe Design has engineered: either by remoting into the main server (RDP or IPMI 2.0) or by visiting the web site hosted on the server, which is built on the NGINX web server and the MEAN stack.