Affiliation:
1. School of Software, Henan University, Kaifeng, Henan, China
Abstract
The Hadoop Distributed File System (HDFS) performs well when storing and managing large files, but its performance degrades significantly when handling massive numbers of small files. To address this problem, a novel archive‐based solution is proposed. Archiving merges many small files into larger data files, which effectively reduces the memory usage of the NameNode. Existing archive‐based solutions suffer from long access times, long archive construction times, and a lack of support for storing, updating, and deleting small files within the archive. Our method uses a dynamic hash function to distribute the metadata of small files across multiple metadata files, and builds a primary index over these metadata files that combines dynamic and static indexes. The data files comprise several read‐only files and one readable–writable file: a small file's contents are appended to the readable–writable file, and once that file reaches a predetermined size threshold, it transitions to read‐only status and is replaced by a fresh readable–writable file. Experimental results show that the scheme improves both archive access and archive creation efficiency, and outperforms native HDFS in storage and update efficiency.
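The scheme described above can be illustrated with a minimal sketch. This is not the authors' implementation; it only models the two ideas named in the abstract under assumed details: metadata hashed into a fixed number of bucket "metadata files" (the paper's dynamic hash and index structure are more elaborate), and a single writable data file that is sealed and replaced once it crosses a size threshold. All names (`ArchiveSketch`, `put`, `get`) and the in-memory representation are hypothetical.

```python
import hashlib

class ArchiveSketch:
    """Hypothetical in-memory model of the archive scheme: small-file
    metadata is hashed into bucket metadata files, and file contents are
    appended to one readable-writable data file that becomes read-only
    once it reaches a size threshold."""

    def __init__(self, threshold_bytes=128 * 1024 * 1024, buckets=16):
        self.threshold = threshold_bytes
        self.buckets = buckets
        self.read_only = []          # sealed (id, contents) data files
        self.writable = bytearray()  # current readable-writable data file
        self.writable_id = 0
        # each dict stands in for one metadata file
        self.metadata = [dict() for _ in range(buckets)]

    def _bucket(self, name):
        # hash the file name to pick a metadata file for its record
        return int(hashlib.md5(name.encode()).hexdigest(), 16) % self.buckets

    def put(self, name, content: bytes):
        # append contents to the writable data file and record
        # (data file id, offset, length) in the chosen metadata file
        offset = len(self.writable)
        self.writable.extend(content)
        self.metadata[self._bucket(name)][name] = (
            self.writable_id, offset, len(content))
        if len(self.writable) >= self.threshold:
            # seal the writable file as read-only and start a fresh one
            self.read_only.append((self.writable_id, bytes(self.writable)))
            self.writable_id += 1
            self.writable = bytearray()

    def get(self, name):
        # look up the metadata record, then read from the right data file
        file_id, offset, length = self.metadata[self._bucket(name)][name]
        data = (self.writable if file_id == self.writable_id
                else dict(self.read_only)[file_id])
        return bytes(data[offset:offset + length])
```

Because only the one writable file ever changes, sealed data files can be safely stored as immutable HDFS blocks, which is what lets the scheme support appends and updates without rewriting existing archives.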