Network Interface Architecture for Remote Indirect Memory Access (RIMA) in Datacenters

Authors:

Jiachen Xue¹, T. N. Vijaykumar², Mithuna Thottethodi²

Affiliations:

1. Nvidia, Santa Clara, CA

2. Purdue University, West Lafayette, IN

Abstract

Remote Direct Memory Access (RDMA) fabrics such as InfiniBand and Converged Ethernet report latencies a factor of 50 shorter than TCP's. As such, RDMA is a potential replacement for TCP in datacenters (DCs) running low-latency applications, such as Web search and memcached. InfiniBand's Shared Receive Queues (SRQs), which use two-sided send/recv verbs (i.e., channel semantics), reduce the amount of pre-allocated, pinned memory needed for message buffers, a cost that persists despite optimizations such as InfiniBand's on-demand paging (ODP). However, SRQs are fundamentally limited to a single message size per queue, which incurs either memory wastage or significant programmer burden for typical DC traffic of an arbitrary number (i.e., level of burstiness) of messages of arbitrary size. We propose remote indirect memory access (RIMA), which avoids these pitfalls by providing (1) network interface card (NIC) microarchitecture support for novel queue semantics and (2) a new "verb" called append. To append a sender's message to a shared queue, the receiver NIC atomically increments the queue's tail pointer by the incoming message's size and places the message in the newly created space. As in traditional RDMA, the NIC is responsible for pointer lookup, address translation, and enforcing virtual memory protections. This indirection of specifying a queue (and not its tail pointer, which remains hidden from senders) handles typical DC traffic of an arbitrary number of senders sending an arbitrary number of messages of arbitrary size. Because RIMA's simple hardware adds only 1–2 ns to the multi-microsecond message latency, RIMA achieves the same message latency and throughput as InfiniBand SRQ with unlimited buffering. Running memcached traffic on a 30-node InfiniBand cluster, we show that at similar, low programmer effort, RIMA achieves a significantly smaller memory footprint than SRQ. While SRQ can be crafted to minimize memory footprint by expending significant programming effort, RIMA provides the same benefit with little programmer effort. For memcached traffic, a high-performance key-value cache (FastKV) using RIMA achieves either 3× lower 96th-percentile latency or significantly better throughput or memory footprint than FastKV using RDMA.
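To make the append semantics concrete, below is a minimal C sketch (not from the paper) of the receiver-side operation the abstract describes: an atomic fetch-and-add advances the queue's hidden tail pointer by the message size, reserving space into which the message is placed. The names (rima_queue, rima_append) are hypothetical, the queue is modeled as a flat byte buffer, and address translation, protection checks, and out-of-space recovery are elided.

#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical receive-queue state maintained by the receiver NIC.
 * Senders name the queue only; the tail pointer stays hidden from them. */
typedef struct {
    uint8_t        *buf;       /* receiver-managed message buffer            */
    size_t          capacity;  /* total bytes available in buf               */
    _Atomic size_t  tail;      /* next free byte offset; hidden from senders */
} rima_queue;

/* Sketch of the "append" verb. The atomic fetch-and-add reserves the
 * byte range [off, off+len) for this message, so concurrent appends of
 * differently sized messages never overlap. Real hardware would also
 * perform address translation and virtual-memory protection checks, and
 * would recover the reserved space on overflow; both are omitted here. */
ptrdiff_t rima_append(rima_queue *q, const void *msg, size_t len)
{
    size_t off = atomic_fetch_add(&q->tail, len);
    if (off + len > q->capacity)
        return -1;             /* queue full; signal back-pressure */
    memcpy(q->buf + off, msg, len);
    return (ptrdiff_t)off;     /* offset of the appended message   */
}

Because the reservation is a single atomic increment sized by the incoming message, one queue can serve many senders and arbitrary message sizes without per-size buffer provisioning, which is the indirection the abstract contrasts with SRQ's single message size per queue.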

Funder

National Science Foundation

Publisher

Association for Computing Machinery (ACM)

Subject

Hardware and Architecture, Information Systems, Software

Cited by 3 articles.

1. Understanding the Scalability Problem of RNIC Cache at the Micro-architecture Level. ICC 2023 - IEEE International Conference on Communications, 2023-05-28.

2. A Scalable RDMA Network Interface Card with Efficient Cache Management. 2023 IEEE International Symposium on Circuits and Systems (ISCAS), 2023-05-21.

3. Deep Learning-Driven Differentiated Traffic Scheduling in Cloud-IoT Data Center Networks. Fractals, 2023-01.
