Storage Network Technology Takes the Performance Challenge


Storage area network technology evolves and speed is the name of the game.

Fast flash storage and the growing use of virtualization and applications with ever-larger amounts of data are putting pressure on the networks that carry storage traffic like never before. Databases such as IBM DB2, MySQL, Oracle and SQL Server, for example, can always use faster connections with lower latency, while increasingly popular big data applications generate huge amounts of information that needs to be moved. And while 4K video origination is already well established, this year companies such as Amazon, Netflix and others set requirements for native 4K post-production, reinforcing the demand for higher SAN bandwidth.

These are just a few examples of the challenges storage managers face today. To keep storage traffic from becoming the bottleneck in your data center, large or small, here is an overview of the key storage networking and interface technology enhancements available in 2016.

Widen the Ethernet channel

Almost everyone uses Ethernet for connectivity between desktops, workstations, application servers and file servers. While many of us with wired desktop connections use 1 GbE, 10 GbE forms the backbone of our data center connections, with 40 GbE deployed in some pockets of the enterprise. Much of the traffic passing over these networks can be considered storage traffic, especially where file servers are involved.

As flash storage proliferates, even 10 GbE can become a bottleneck. To address this, the Ethernet industry is making a significant performance leap. Until now, the fastest per-lane speed for Ethernet has been 10 Gbps; faster variants such as 40 GbE and 100 GbE combine several 10 Gbps lanes into a single connection: 4 x 10 for 40 GbE and 10 x 10 for 100 GbE.

Announced two years ago, Ethernet running at 25 Gbps per lane is now available, meaning a single Ethernet lane operates 2.5 times faster than legacy 10 GbE. There are also options to reach 50 GbE and 100 GbE by bundling two and four lanes, respectively. Although considerably faster than 10 GbE, 25 GbE technology can generally use the same types of fiber optic or copper cables as 10 GbE (apart from some differences in cable lengths and transceivers).
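To make the lane arithmetic concrete, here is a minimal Python sketch (illustrative only, not from the original article) of the aggregation math above: legacy 40 GbE and 100 GbE bundle 10 Gbps lanes, while the newer generation reaches 25, 50 and 100 GbE from 25 Gbps lanes.

```python
# Nominal Ethernet lane aggregation: legacy links bundle 10 Gbps lanes,
# the newer generation bundles 25 Gbps lanes. These are line rates,
# not usable throughput after protocol overhead.

def aggregate_gbps(lane_speed_gbps: int, lanes: int) -> int:
    """Nominal aggregate speed of a multi-lane Ethernet link."""
    return lane_speed_gbps * lanes

# Legacy aggregation: 4 x 10 = 40 GbE, 10 x 10 = 100 GbE
print(aggregate_gbps(10, 4), aggregate_gbps(10, 10))                         # 40 100
# 25 Gbps-per-lane generation: 25 GbE, 2 x 25 = 50 GbE, 4 x 25 = 100 GbE
print(aggregate_gbps(25, 1), aggregate_gbps(25, 2), aggregate_gbps(25, 4))   # 25 50 100
```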

25 GbE technology also uses the same underlying SFP28 technology as 32 Gbps Fibre Channel (see the next section), but operates at a slightly different speed, which is one reason these two technologies are arriving on the market in the same year.

Those planning to build new data centers should be familiar with the Ethernet Alliance roadmap. It gives a good idea of the new speeds to come and their approximate timelines, details the physical connectors for copper and fiber optic cables, and discusses the entire Ethernet ecosystem, from residential to high-end data center (see also: Ethernet speed roadmap).

Ethernet speed roadmap

Fibre Channel: neither down nor out

Popular in data centers for its reliability and stability, Fibre Channel (FC) dominates high-end storage networking. By some industry estimates, 90% of high-end data centers have deployed FC technology. And while there has been talk of this high-speed SAN technology's decline, recent analyst reports suggest the FC market actually grew in late 2015 and early 2016.

Fibre Channel performance has roughly doubled every three to five years since 1997. Gen 6 Fibre Channel became available this year and includes a single-lane speed of 32 Gbps and a four-lane speed of 128 Gbps (4 x 32). This generation of FC also adds new management and diagnostic features.
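As a rough illustration of that cadence, the Python sketch below (using the commonly cited nominal FC speeds as an assumption, not figures from this article) shows the single-lane rate doubling each generation from 1 Gbps to today's 32 Gbps, with Gen 6 adding the four-lane 128 Gbps option.

```python
# Nominal single-lane Fibre Channel speeds by generation (approximate history):
# each generation roughly doubles the previous one, from 1 Gbps FC in the late
# 1990s to 32 Gbps Gen 6 today.
def fc_single_lane_speeds(generations: int = 6, start_gbps: int = 1):
    speed = start_gbps
    for _ in range(generations):
        yield speed
        speed *= 2

print(list(fc_single_lane_speeds()))   # [1, 2, 4, 8, 16, 32]
print(4 * 32)                          # 128 Gbps: the four-lane Gen 6 option
```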

As with previous generations, Gen 6 FC is backward compatible with the two prior generations (16 Gbps and 8 Gbps FC), making the transition to the new technology relatively smooth for businesses.

The Fibre Channel Industry Association (FCIA) publishes a public roadmap with information on upcoming speeds, advice on selecting cables and connectors, and more (see also: Fibre Channel roadmap).

Fibre Channel roadmap

Catch the NVM Express

NVM Express (NVMe) is an optimized, high-performance, scalable host controller interface designed for enterprise and client solid-state storage that uses the local PCI Express bus. More recently, NVMe has been extended across distance with the new NVMe over Fabrics specification, which can use either a Remote Direct Memory Access (RDMA) fabric or a Fibre Channel fabric and is designed to work with future fabric technologies.

NVMe is designed to streamline I/O access to storage devices and storage systems built with non-volatile memory, from today's NAND flash to future persistent, higher-performance memory technologies. NVMe's streamlined command set typically requires less than half the number of CPU instructions to process an I/O request compared with other storage protocols.

Internally, NVMe is designed differently from other storage protocols. It supports up to 64K commands per queue and up to 64K queues. These queues are designed so that I/O commands and the responses to those commands operate on the same processor core, taking advantage of the parallel processing capabilities of multicore processors. Each application or thread can have its own independent queue, so no I/O locking is required.
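The per-core queueing model is easier to see in a toy example. Below is a conceptual Python sketch (a model of the idea, not an NVMe driver); the queue and command limits mirror the protocol maximums mentioned above, and the per-core ownership is what makes lock-free submission possible.

```python
# Conceptual model of NVMe queueing: each CPU core or application thread
# gets its own submission/completion queue pair, so commands can be issued
# without cross-core locking. Limits approximate the protocol maximums.

from dataclasses import dataclass, field
from typing import List

MAX_QUEUES = 64 * 1024               # up to ~64K queues
MAX_COMMANDS_PER_QUEUE = 64 * 1024   # up to ~64K commands per queue

@dataclass
class QueuePair:
    core_id: int
    submission: List[str] = field(default_factory=list)
    completion: List[str] = field(default_factory=list)

    def submit(self, command: str) -> None:
        # No lock needed: only this core touches this queue pair.
        if len(self.submission) >= MAX_COMMANDS_PER_QUEUE:
            raise RuntimeError("submission queue full")
        self.submission.append(command)

# One queue pair per core; each thread issues I/O independently.
queues = [QueuePair(core_id=c) for c in range(8)]
queues[3].submit("READ LBA 0x1000, 8 blocks")
```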

NVMe can be used in devices ranging from mobile phones to enterprise storage systems. In enterprise environments, NVMe devices typically run at full power and deliver performance up to the full bandwidth of the PCIe lanes each device uses. In consumer devices operating at lower power levels, NVMe devices offer correspondingly lower performance.
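Because an NVMe device's ceiling is set largely by its PCIe link, a quick back-of-the-envelope calculation helps. The sketch below assumes PCIe 3.0 (8 GT/s per lane with 128b/130b encoding), which is an assumption about the platform rather than something stated in this article; real-world throughput lands somewhat below these raw figures.

```python
# Rough one-way bandwidth ceiling of a PCIe 3.0 link, by lane count.
# Assumes 8 GT/s per lane and 128b/130b encoding; protocol overhead
# reduces usable throughput further.

GT_PER_SEC = 8e9                 # PCIe 3.0 transfer rate per lane
ENCODING_EFFICIENCY = 128 / 130  # 128b/130b line encoding

def pcie3_bandwidth_gb_per_sec(lanes: int) -> float:
    """Approximate raw one-way bandwidth of a PCIe 3.0 link in GB/s."""
    bits_per_sec = GT_PER_SEC * ENCODING_EFFICIENCY * lanes
    return bits_per_sec / 8 / 1e9

for lanes in (1, 2, 4, 8):
    print(f"x{lanes}: ~{pcie3_bandwidth_gb_per_sec(lanes):.1f} GB/s")
# x1: ~1.0 GB/s, x2: ~2.0 GB/s, x4: ~3.9 GB/s, x8: ~7.9 GB/s
```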

At the device level, NVMe is available as add-in cards that plug into PCIe slots, in the traditional drive form factor (2.5-inch is the most popular) and in the M.2 form factor. Running tests in our lab, we have found that NVMe offers significantly higher performance and lower latency than other storage protocols.

Boost for serial-attached SCSI

SAS, or serial-attached SCSI, is an interface and protocol for enterprise storage that is used in one way or another in almost every enterprise storage product today. SAS, and its predecessor SCSI, have a long history of versatility, reliability and scalability as a device-level interface, as a shelf-to-shelf disk interface and as a host interface to external storage platforms. In addition to hard drives and solid-state drives, SAS products include host bus adapters, RAID controllers, expanders and other components used in storage. There are also SAS switches used in SAS fabric implementations.

Currently shipping SAS products operate at 12 Gbps, and some older 6 Gbps products are still available. The SAS roadmap doubles the speed to 24 Gbps, with those products expected to reach the market alongside server platforms that also support PCIe 4.0, slated for release in 2019.

24 Gbps SAS is backward compatible with the previous two generations of SAS (12 Gbps and 6 Gbps) and with 6 Gbps SATA. To learn more about the future of SAS, see the SCSI Trade Association's roadmap (see also: Serial-attached SCSI (SAS) roadmap).

Serial attached SCSI roadmap

Serial ATA in limbo

SATA, or Serial ATA, has been used for many years to connect a computer to a single storage device such as a hard drive, SSD or optical drive (CD-ROM, DVD and so on). The current SATA interface operates at 6 Gbps, and there is no roadmap for a faster speed, although work is underway to add enterprise features. There was some activity around "SATA Express" running at higher speeds, but that effort appears to have stalled.

SATA is used in the traditional drive form factor, but is also available in the much smaller M.2 card size.

Compatibility of SATA, SAS and NVMe devices

In my article on flash storage in the June 2016 issue of Storage magazine (see "Flashy Servers: the lowdown on server-side, solid-state storage"), I provided a diagram of SATA, SAS and PCIe/NVMe device connectors showing the areas of compatibility among these three interfaces.

For compatibility among SATA, SAS and PCIe/NVMe device connectors, think of them as a three-level hierarchy. A lower-level device can be placed in a higher-level backplane, but higher-level devices cannot be placed in a lower-level backplane. SATA devices, at the lowest level of the hierarchy, can be placed in SATA, SAS and PCIe/NVMe backplanes. SAS devices, at the middle level, fit into SAS and PCIe/NVMe backplanes, but not SATA backplanes. And NVMe devices in the drive form factor can only be placed in PCIe/NVMe backplanes.
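That three-level rule is simple enough to express as a one-line check. The following Python sketch (illustrative only) encodes the hierarchy described above:

```python
# Three-level compatibility hierarchy: a device fits a backplane at its own
# level or any level above it, but never a backplane from a lower level.

HIERARCHY = {"SATA": 0, "SAS": 1, "NVMe": 2}  # low -> high

def fits(device: str, backplane: str) -> bool:
    """True if a drive with the given interface fits the given backplane."""
    return HIERARCHY[device] <= HIERARCHY[backplane]

assert fits("SATA", "NVMe")     # SATA drives fit any backplane
assert fits("SAS", "NVMe")      # SAS drives fit SAS and NVMe backplanes...
assert not fits("SAS", "SATA")  # ...but not SATA backplanes
assert not fits("NVMe", "SAS")  # drive-form-factor NVMe needs an NVMe backplane
```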

As you move up the network storage technology hierarchy, additional features and performance become available. To learn more about how the protocols discussed in this article correspond to different storage types and enterprise use cases, see: The Real World.

Suggested storage protocols and use cases

