Author:
Saraswat Pankaj, Raj Swapnil
Abstract
Hadoop is an open-source software programming platform for storing and processing huge amounts of data. Its framework is built largely in Java, with some native C code and shell scripts. HDFS (Hadoop Distributed File System), a subproject of the Apache Hadoop project, is a distributed, highly fault-tolerant file system intended to run on low-cost commodity hardware. Big Data is the term used to describe this massive amount of data; in a world where data is generated at such a rapid pace, it must be preserved, evaluated, and processed. HDFS is designed for operations on large data volumes and offers high accessibility to application data. This article discusses the main features of HDFS and gives a high-level overview of the HDFS architecture. Hadoop is a technology that will continue to be used in the future, particularly by big businesses: the quantity of data being generated is only going to grow, and with it the need for this software.
Publisher
Innovative Research Publication
Cited by
2 articles.