Affiliation:
1. Research Group Data and Web Science, University of Mannheim, Mannheim, Germany
Abstract
Promoted by major search engines such as Google, Yahoo!, Bing, and Yandex, Microdata embedded in web pages, especially with the schema.org vocabulary, has become one of the most important markup formats on the Web. However, deployed Microdata is rarely free of errors, which makes it difficult to estimate the data volume and to create an accurate data profile. In addition, since global identifiers are rarely used, the real number of entities described in this format on the Web is hard to assess. In this article, we discuss how successively applying data cleaning steps, such as duplicate detection and the correction of common schema-based errors, leads, step by step, to a more realistic view of the data. The cleaning steps include both heuristics for fixing errors and methods for duplicate detection and elimination. Using the Web Data Commons Microdata corpus, we show that applying such quality improvement methods can substantially change the statistical profile of the dataset and lead to different estimates of both the number of entities and the class distribution within the data.
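To make the two kinds of cleaning steps named in the abstract concrete, the following minimal Python sketch pairs a heuristic schema-based repair pass with key-based duplicate elimination. The repair table, the itemtype normalization rules, and the (type, name) matching key are illustrative assumptions for this example, not the exact heuristics evaluated in the article.

from collections import defaultdict

# Illustrative repair table: misspelled property names mapped onto their
# schema.org counterparts (assumed examples, not the article's rule set).
PROPERTY_FIXES = {"titel": "name", "telefone": "telephone"}

def fix_common_errors(entity):
    """Heuristically repair one Microdata item, given as a plain dict
    with an itemtype URL under 'type' and its properties under 'props'."""
    # Normalize the itemtype: unify the scheme, strip trailing slashes,
    # and capitalize the class name ('.../product' -> '.../Product').
    itemtype = entity["type"].strip().replace("https://", "http://").rstrip("/")
    prefix, _, cls = itemtype.rpartition("/")
    itemtype = prefix + "/" + cls[:1].upper() + cls[1:]
    # Map known misspellings of property names onto schema.org terms.
    props = {PROPERTY_FIXES.get(k.lower(), k): v
             for k, v in entity["props"].items()}
    return {"type": itemtype, "props": props}

def deduplicate(entities):
    """Key-based duplicate elimination: items of the same class with the
    same normalized 'name' value count as one entity. A real pipeline
    would merge the property sets instead of keeping one representative."""
    buckets = defaultdict(list)
    for e in entities:
        key = (e["type"], str(e["props"].get("name", "")).strip().lower())
        buckets[key].append(e)
    return [group[0] for group in buckets.values()]

if __name__ == "__main__":
    raw = [
        {"type": "https://schema.org/product/", "props": {"titel": "ACME Phone"}},
        {"type": "http://schema.org/Product", "props": {"name": "acme phone"}},
    ]
    cleaned = deduplicate([fix_common_errors(e) for e in raw])
    print(f"{len(raw)} raw items -> {len(cleaned)} entities")  # 2 -> 1

Running the sketch collapses the two raw items into a single entity; this is exactly the kind of shift, fewer but cleaner entities, that changes a corpus-level statistical profile in the way the abstract describes.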
Funder
Amazon Web Services Education
Publisher
Association for Computing Machinery (ACM)
Subject
Information Systems and Management, Information Systems
Cited by
3 articles.