The emergence of datacenters as single computer entities has made latency a critical consideration in designing datacenter networks. The PCIe interconnect is a known latency bottleneck: its overhead can account for up to 90% of overall communication latency. PCIe is a standard motherboard interface, and sending a packet over a conventional NIC requires several PCIe and memory channel transactions. This data movement through the network software stack consumes thousands of processor cycles, which ultimately prevents ultra-low-latency networking.
Researchers can eliminate NIC-PCIe transactions and instead use the local memory channels between memory and the NIC, reducing latency and improving datacenter computer performance (figure 1). This observation is the basis for NetDIMM, a method for integrating a network interface chip into a computer's memory module.
Figure 1: State of the art network interface architectures vs. NetDIMM
On average, NetDIMM improves per-packet latency by 49.9% compared with a baseline network using PCIe NICs (figure 2). The NetDIMM architecture is contained entirely within the memory module and requires no changes to the processor architecture or memory subsystem.
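As a rough illustration of how a per-packet improvement figure like this is derived from a latency breakdown, the sketch below models one-way latency as a sum of per-stage costs. The stage names and nanosecond values are illustrative assumptions for this sketch, not measurements from the NetDIMM paper; the only structural point taken from the text is that NetDIMM removes the PCIe hop from the path.

```python
# Hypothetical one-way latency breakdown in nanoseconds for a small packet.
# Stage names and values are illustrative assumptions, not data from the paper.
pcie_nic = {
    "software stack": 400,
    "memory channel": 100,
    "PCIe transactions": 900,  # the hop NetDIMM eliminates
    "NIC + wire": 300,
}

netdimm = {
    "software stack": 400,
    "memory channel": 100,  # NIC sits on the DIMM, so only local memory traffic
    "NIC + wire": 300,
}

def total_latency(breakdown):
    """Sum the per-stage costs of one path."""
    return sum(breakdown.values())

improvement = 1 - total_latency(netdimm) / total_latency(pcie_nic)
print(f"per-packet latency improvement: {improvement:.1%}")
```

With these made-up numbers the model yields an improvement in the same ballpark as the paper's reported 49.9% average, which is simply a consequence of the PCIe stage dominating the baseline path.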
Figure 2: One-way network latency breakdown for packets of various sizes when using a PCIe NIC (left), an integrated NIC (middle), and NetDIMM (right). X-axis is not drawn to scale.