Authors: Sagi Grimberg, Chief SW Architect, Lightbits Labs and Dave Minturn, Intel Corp., Fabrics Linux Driver WG Chair
This week the ratified NVMe™/TCP Transport Binding specification was made available for public download. TCP is a new transport added to the family of existing NVMe™ transports: PCIe®, RDMA, and FC. NVMe/TCP defines the mapping of NVMe queues, NVMe-oF capsules, and data delivery over the IETF Transmission Control Protocol (TCP). The NVMe/TCP transport also offers optional enhancements such as in-line data integrity (DIGEST) and Transport Layer Security (TLS).
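To give a feel for how NVMe-oF capsules are carried over a TCP byte stream, here is a small illustrative sketch of the NVMe/TCP PDU common header in Python. The field layout (PDU-Type, FLAGS, HLEN, PDO, PLEN) follows the transport binding; the specific values used below (a command-capsule PDU with a 72-byte header) are examples chosen for illustration, not taken from a real capture.

```python
import struct

# NVMe/TCP PDU common header (8 bytes, little-endian):
#   PDU-Type (1B), FLAGS (1B), HLEN (1B), PDO (1B), PLEN (4B).
# Every PDU on the wire begins with this header; PLEN is the total
# PDU length including header, optional digests, and data.
PDU_CH = struct.Struct("<BBBBI")

def pack_ch(pdu_type, flags, hlen, pdo, plen):
    """Pack a PDU common header into its 8-byte wire format."""
    return PDU_CH.pack(pdu_type, flags, hlen, pdo, plen)

def unpack_ch(data):
    """Parse the leading 8 bytes of a PDU back into its header fields."""
    return PDU_CH.unpack_from(data)

# Example: a command-capsule PDU whose header (common header plus the
# 64-byte submission queue entry) is 72 bytes, with no in-capsule data,
# so PLEN equals HLEN. The type value 0x04 is the spec's CapsuleCmd.
wire = pack_ch(pdu_type=0x04, flags=0, hlen=72, pdo=0, plen=72)
fields = unpack_ch(wire)
```

A real implementation would of course also handle digests, data PDUs, and the ICReq/ICResp connection-initialization exchange; this sketch only shows the framing idea that lets NVMe capsules ride a standard TCP stream.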
What’s really exciting about NVMe/TCP is that it enables efficient end-to-end NVMe operations between NVMe-oF host(s) and NVMe-oF controller devices interconnected over any standard IP network, with excellent performance and latency characteristics. This allows large-scale data centers to utilize their existing, ubiquitous Ethernet infrastructure with multi-layered switch topologies and traditional Ethernet network adapters. NVMe/TCP is designed to layer over existing software-based TCP transport implementations as well as future hardware-accelerated implementations.
Software NVMe/TCP host and controller device drivers are also available for early adoption in both the Linux Kernel and SPDK environments. Both NVMe/TCP implementations were designed to plug seamlessly into their existing NVMe and NVMe-oF software stacks.
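For readers who want to try the Linux host driver, the flow looks much like any other NVMe-oF transport via nvme-cli, just with `tcp` as the transport type. This is a sketch; the IP address, port, and subsystem NQN below are placeholders you would replace with your own target's values, and the commands require a kernel with the nvme-tcp module and a reachable NVMe/TCP target.

```shell
# Load the NVMe/TCP host transport module (kernel built with NVMe/TCP support).
modprobe nvme-tcp

# Discover subsystems exported by a target; 8009 is the conventional
# discovery port. The address here is a placeholder.
nvme discover -t tcp -a 192.0.2.10 -s 8009

# Connect to a discovered subsystem by its NQN (placeholder shown).
nvme connect -t tcp -a 192.0.2.10 -s 4420 \
    -n nqn.2014-08.org.example:subsystem1

# The namespaces then appear as regular /dev/nvmeXnY block devices.
nvme list
```

Because the transport plugs into the existing NVMe core, everything above the transport (multipath, namespaces, the block layer) behaves the same as with RDMA or FC fabrics.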
References:
NVMe/TCP Transport Binding specification: https://nvmexpress.org/wp-content/uploads/NVM-Express-over-Fabrics-1.0-Ratified-TPs.zip
The Linux Kernel NVMe/TCP support: http://git.infradead.org/nvme.git/shortlog/refs/heads/nvme-tcp
The SPDK NVMe/TCP support: https://review.gerrithub.io/#/c/spdk/spdk/+/425191/