Affiliation:
1. Engineering Faculty, Autonomous University of San Luis Potosí, 78290 San Luis Potosí, SLP, Mexico
2. Astronomy Institute, National Autonomous University of Mexico, 04510, CdMx, Mexico
Abstract
The volume of astronomical information that has become available over the past decade far exceeds that of the entire previous century. The heterogeneity and overwhelming magnitude of these data have made manual analysis impossible. As a consequence, new techniques have been developed and different strategies, such as data science and data mining, have been combined in order to carry out deeper and more exhaustive analyses aimed at extracting the knowledge contained in the data. This paper introduces a data science methodology consisting of successive stages, the core of which is the data-preprocessing step, with the aim of reducing the complexity of the analysis and uncovering knowledge hidden in the data. The proposed methodology was tested on a set of artificial light curves that mimic the behaviour of the strong gravitational lensing phenomenon, as supplied by the Time Delay Challenge 1 (TDC1). Under the data science methodology, diverse statistical methods were implemented for data analysis, and cross-correlation and dispersion methods were applied to estimate the time delays of strong lensing systems. With this methodology, we obtained time-delay estimates from the TDC1 data set and compared them with earlier results reported by the COSmological MOnitoring of GRAvItational Lenses (COSMOGRAIL) project. The empirical evidence leads us to conclude that the proposed methodology achieves greater accuracy in estimating time delays than estimations made with the raw data.
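The cross-correlation approach mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it omits the proposed preprocessing stages and the dispersion method, assumes numpy only, and uses simple linear interpolation onto a uniform grid before locating the cross-correlation peak. The function name `estimate_time_delay` and all parameter choices are illustrative assumptions.

```python
import numpy as np

def estimate_time_delay(t, flux_a, flux_b, max_lag_days=50.0, dt=0.5):
    """Estimate the time delay between two light curves sampled at epochs t
    by locating the peak of their cross-correlation on a uniform time grid.

    A positive result means flux_b lags flux_a.  Illustrative sketch only:
    no error bars, no treatment of microlensing or seasonal gaps.
    """
    # Resample both curves onto a common uniform grid (linear interpolation).
    grid = np.arange(t.min(), t.max(), dt)
    a = np.interp(grid, t, flux_a)
    b = np.interp(grid, t, flux_b)

    # Standardize so the correlation is scale-independent.
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()

    def xcorr_at(k):
        # Mean of a(t) * b(t + k*dt) over the overlapping segment.
        if k >= 0:
            x, y = a[: len(a) - k], b[k:]
        else:
            x, y = a[-k:], b[: len(b) + k]
        return np.mean(x * y)

    max_shift = int(max_lag_days / dt)
    lags = np.arange(-max_shift, max_shift + 1)
    cc = np.array([xcorr_at(int(k)) for k in lags])
    return lags[int(np.argmax(cc))] * dt
```

In a TDC1-like setting the same idea would be applied to each pair of lensed images, with the preprocessing stage of the methodology cleaning the curves before the correlation is computed.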
Publisher
Oxford University Press (OUP)
Subject
Space and Planetary Science,Astronomy and Astrophysics
Cited by
1 article.