PSA-Cache: A Page-state-aware Cache Scheme for Boosting 3D NAND Flash Performance

Authors:

Pang Shujie 1, Deng Yuhui 1, Zhang Genxiong 1, Zhou Yi 2, Huang Yaoqin 1, Qin Xiao 3

Affiliations:

1. Department of Computer Science, Jinan University, China

2. TSYS School of Computer Science, Columbus State University, USA

3. Department of Computer Science and Software Engineering, Auburn University, USA

Abstract

Garbage collection (GC) plays a pivotal role in the performance of 3D NAND flash memory, where copyback has been widely used to accelerate valid-page migration during GC. Unfortunately, copyback is constrained by the parity symmetry issue: data read from an odd (even) page must be written to an odd (even) page. After migrating two consecutive odd (even) pages, the free page between the two migrated pages is wasted. Such wasted pages noticeably reduce the free space on flash memory and cause extra GCs, thereby degrading solid-state disk (SSD) performance. To address this problem, we propose a page-state-aware cache scheme called PSA-Cache, which prevents page waste to boost the performance of NAND flash-based SSDs. To facilitate write-back scheduling decisions, PSA-Cache assigns write-back priorities to cached pages according to the state of pages in victim blocks. By writing high-priority pages back to flash chips, PSA-Cache effectively fends off page waste by breaking odd/even consecutive pages in subsequent garbage collections. We quantitatively evaluate the performance of PSA-Cache in terms of the number of wasted pages, the number of GCs, and response time. We compare PSA-Cache with two state-of-the-art schemes, GCaR and TTflash, as well as a baseline LRU scheme. The experimental results reveal that PSA-Cache outperforms the existing schemes. In particular, PSA-Cache curtails the number of wasted pages of GCaR and TTflash by 25.7% and 62.1%, respectively. PSA-Cache reduces the number of GCs by up to 78.7%, with an average of 49.6%. Furthermore, PSA-Cache cuts the average write response time by up to 85.4%, with an average of 30.05%.
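The abstract describes how copyback's parity symmetry wastes free destination pages during GC and how PSA-Cache prioritizes write-backs that break odd/even consecutive valid pages. The sketch below illustrates both points; it is a simplified model rather than the authors' implementation, and the block layout and concrete priority rule are assumptions made purely for illustration.

```python
# A minimal, self-contained sketch (not the authors' code) of the two ideas
# summarized in the abstract: (1) the copyback parity-symmetry issue that
# wastes pages during GC, and (2) a page-state-aware write-back priority that
# breaks runs of same-parity valid pages. The priority rule below is an
# illustrative assumption, not a detail taken from the paper.

def copyback_migrate(valid_offsets):
    """Copy the valid pages of a victim block (given by their page offsets)
    into an empty destination block via copyback.

    Copyback requires the destination page to have the same odd/even parity
    as the source page, and NAND pages must be programmed in ascending order,
    so every page skipped over on a parity mismatch is wasted.
    Returns (used_offsets, wasted_offsets) in the destination block.
    """
    next_free, used, wasted = 0, [], []
    for off in sorted(valid_offsets):
        while next_free % 2 != off % 2:   # parity mismatch: skip and waste a page
            wasted.append(next_free)
            next_free += 1
        used.append(next_free)
        next_free += 1
    return used, wasted


def writeback_priority(cached_offset, valid_offsets):
    """Hypothetical priority rule: a cached dirty page gets high priority (1)
    when its stale flash copy in the victim block forms a same-parity pair
    with another valid page, because flushing the cached page invalidates
    that copy and breaks the pair before the next GC."""
    valid = set(valid_offsets)
    if cached_offset not in valid:
        return 0
    for neighbor in (cached_offset - 2, cached_offset + 2):
        between = (cached_offset + neighbor) // 2
        if neighbor in valid and between not in valid:
            return 1
    return 0


if __name__ == "__main__":
    # Valid pages at offsets 0 and 2 share the same parity, so copyback wastes
    # the destination page between them.
    print(copyback_migrate([0, 2, 5]))          # -> ([0, 2, 3], [1])
    # The cached dirty page whose flash copy sits at offset 2 gets high
    # priority; flushing it leaves valid pages {0, 5}, which migrate waste-free.
    print(writeback_priority(2, [0, 2, 5]))     # -> 1
    print(copyback_migrate([0, 5]))             # -> ([0, 1], [])
```

Running the sketch shows that migrating valid pages at offsets 0 and 2 wastes one destination page, while flushing the high-priority cached page first lets the remaining valid pages migrate without waste.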

Funder

National Natural Science Foundation of China

Guangdong Basic and Applied Basic Research Foundation

Industry University Research Collaboration Project of Zhuhai

Open Project Program of Wuhan National Laboratory for Optoelectronics

Publisher

Association for Computing Machinery (ACM)

Subject

Hardware and Architecture


Cited by 1 article.

1. Simulation Experiment and Teaching Research of a Land-Based Ship Engine Room; International Journal of Information and Communication Technology Education; 2023-10-09
