Solid-state storage, and non-volatile memory express (NVMe) in particular, has removed one of IT’s greatest bottlenecks. NVMe allows compute to communicate with storage at much higher speeds than was the case with spinning disk.
NVMe provides high input/output operations per second (IOPS), low latency and, above all, multiple parallel channels between storage and CPU. It offers far greater storage performance than conventional disk interfaces such as SAS and SATA.
NVMe over fabrics (NVMe-oF) is, in many ways, the next step. NVMe runs over the server’s PCIe bus. This is fine for local storage, but enterprises rely heavily on networked storage for economies of scale, redundancy and ease of management. NVMe over fabrics takes NVMe technology and extends it to storage arrays over the local area network (LAN).
And the market is growing fast. Industry analyst firm IDC expects NVMe storage to account for half of the industry’s primary storage revenue by the end of this year, and most systems will use NVMe-oF.
What is NVMe over fabrics?
At the basic level, NVMe over fabrics (NVMe-oF) is NVMe extended across a network. The fabrics part is the storage network that connects the host server to the flash array.
Although the NVM Express organisation sets out the standards for NVMe-oF, the protocols are flexible and customers can choose the fabric they want. Usually, this is based on their existing storage network infrastructure, use cases and choice of supplier. This is why the Storage Networking Industry Association (SNIA) refers to NVMe over fabrics in the plural.
How are NVMe-oF systems set up?
Organisations moving to NVMe-oF can choose any of the main network transports. These include Fibre Channel, iWARP, RoCE (RDMA over Converged Ethernet), InfiniBand and, most recently, TCP. Potentially, NVMe-oF could support new protocols in the future.
Although Fibre Channel is a common choice, and offers a respectable 32Gbps throughput, some suppliers now claim speeds of up to 100Gbps for Ethernet-based systems.
To work, though, NVMe-oF relies on “bindings”. These connect the transport protocol to the host and the storage array, and control matters such as management, authentication and capabilities.
As J Michel Metz of SNIA puts it: “They are the glue that holds the NVMe communication language to the underlying fabric transport (whether it is Fibre Channel, InfiniBand, or various forms of Ethernet).” You can read more on the background to NVMe-oF on the SNIA blog.
The most recent standard, NVMe-oF 1.1, supports TCP binding. This allows an NVMe-oF SAN to run over a conventional Ethernet network.
The advantage of this standards-based approach is that buyers of NVMe-oF have choice. They can mix and match servers, arrays and fabrics. And, as SNIA’s Metz points out, there are also non-standardised implementations on the market. These might work perfectly well, especially for single-supplier installations.
What is NVMe over fabrics used for?
The short answer is anything that needs to move storage traffic over a network to flash media. Organisations deploy it where they need to connect hosts to external storage arrays over a network, rather than using conventional direct-attached or internal NVMe storage.
In this respect, NVMe-oF is just another type of SAN, alongside arrays running conventional spinning disks. It is simply faster. The case for moving to NVMe-oF is largely governed by cost and, to a lesser extent, capacity.
According to Julia Palmer, a research vice-president at Gartner, most storage suppliers now offer arrays with NVMe internally. She expects the number supporting NVMe-oF, in its various forms, to grow over the next year or so.
Initially, though, most use cases for NVMe-oF are ones where performance is important: machine learning and artificial intelligence, analytics, including real-time analytics, high-performance database applications and high-performance computing. The technology is less suitable for very large data volumes or archiving, due to the relative cost of flash storage compared with legacy spinning disks.
What are NVMe-oF’s benefits and limitations?
It is tempting to answer the first part of this question with “speed”, but the reality is more nuanced than that.
For the highest performance, users are still best off installing storage directly in the server. And there are plenty of use cases for this, and more broadly for hyper-converged infrastructure.
So the benefits of NVMe-oF really stem from the ability to combine the faster performance of a flash-based array, compared with spinning disk-based RAIDs, with the advantages of network or shared storage. These include higher utilisation rates, easier management, and improved redundancy and resilience when compared with direct-attached storage.
Limitations include cost, complexity and operating system support.
Although NVMe-oF uses transport protocols that storage architects already know, any deployment will still need to be optimised for the technology deployed. This includes host bus adapters (HBAs), the network fabric, application support – although this is becoming common, as NVMe-oF shares most of its features with “local” NVMe – and, of course, the storage media themselves.
Earlier implementations of NVMe-oF were limited by physical distance, but the ability to run over Ethernet, using TCP, appears to have largely addressed that issue.
At present, though, Microsoft Windows does not directly support NVMe-oF, although individual vendors do offer Windows drivers. Linux provides a standard initiator.
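On Linux, that standard initiator is driven by the nvme-cli tool, and targets can be recorded in /etc/nvme/discovery.conf so they are rediscovered and reconnected automatically. A minimal sketch for an NVMe/TCP target follows; the address and subsystem details are illustrative placeholders, not real endpoints:

```
# /etc/nvme/discovery.conf -- one discovery controller per line
# (the address below is an illustrative placeholder; 4420 is the
# IANA-assigned port for NVMe over fabrics)
--transport=tcp --traddr=192.168.10.50 --trsvcid=4420
```

With a target reachable, `nvme discover -t tcp -a 192.168.10.50 -s 4420` lists the exported subsystems, and `nvme connect -t tcp -n <subsystem NQN> -a 192.168.10.50 -s 4420` attaches each namespace as a local block device, after which it behaves like any other NVMe drive.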
Who sells NVMe-oF products?
Although NVMe-oF has been around since 2016, it is still not fully supported by all enterprise technology suppliers, as is shown by the lack of direct Windows support.
SAN switch suppliers, including Cisco, Brocade and Mellanox, support NVMe-oF. At the system level, the technology is sold by NetApp and Pure Storage, Dell EMC and IBM. Western Digital launched NVMe-oF support last year, Broadcom supports NVMe-oF over Fibre Channel, and Marvell also makes FC host bus adapters that support the protocol.
The market is still some way from being fully mature, however. Industry watchers expect the majority of server and storage array suppliers to support NVMe-oF, in at least one of its formats, fairly soon. Given how widespread NVMe already is for direct-attached storage, this is hardly a surprise.
The challenge for CIOs is balancing the benefits of NVMe-oF’s greater performance with the cost and complexity of upgrading, especially if they operate mostly in a Windows environment.