Automating and Optimizing Data-Centric What-If Analyses on Native Machine Learning Pipelines

Authors:

Stefan Grafberger¹, Paul Groth¹, Sebastian Schelter¹

Affiliation:

1. University of Amsterdam, Amsterdam, Netherlands

Abstract

Software systems that learn from data with machine learning (ML) are used in critical decision-making processes. Unfortunately, real-world experience shows that the pipelines for data preparation, feature encoding and model training in ML systems are often brittle with respect to their input data. As a consequence, data scientists have to run different kinds of data-centric what-if analyses to evaluate the robustness and reliability of such pipelines, e.g., with respect to data errors or preprocessing techniques. These what-if analyses follow a common pattern: they take an existing ML pipeline, create a pipeline variant by introducing a small change, and execute this pipeline variant to see how the change impacts the pipeline's output score. The application of existing analysis techniques to ML pipelines is technically challenging, as they are hard to integrate into existing pipeline code and their execution introduces large overheads due to repeated work. We propose mlwhatif to address these integration and efficiency challenges for data-centric what-if analyses on ML pipelines. mlwhatif enables data scientists to declaratively specify what-if analyses for an ML pipeline, and to automatically generate, optimize and execute the required pipeline variants. Our approach employs pipeline patches to specify changes to the data, operators and models of a pipeline. Based on these patches, we define a multi-query optimizer for efficiently executing the resulting pipeline variants jointly, with four subsumption-based optimization rules. Subsequently, we detail how to implement the pipeline variant generation and optimizer of mlwhatif. For that, we instrument native ML pipelines written in Python to extract dataflow plans with re-executable operators. We experimentally evaluate mlwhatif, and find that its speedup scales linearly with the number of pipeline variants in applicable cases, and is invariant to the input data size.
In end-to-end experiments with four analyses on more than 60 pipelines, we show speedups of up to 13x compared to sequential execution, and find that the speedup is invariant to the model and featurization in the pipeline. Furthermore, we confirm the low instrumentation overhead of mlwhatif.
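To make the what-if pattern described above concrete, the following is a minimal sketch of what a data scientist would otherwise have to do by hand: run a baseline scikit-learn pipeline, then sequentially re-execute variants with a small data change (here, a hypothetical corruption patch that nulls out feature values) and compare scores. All function names and the corruption scheme are illustrative assumptions, not mlwhatif's actual API; the redundant re-execution of unchanged operators across variants is exactly the overhead mlwhatif's multi-query optimizer is designed to avoid.

```python
# Sketch of the sequential what-if pattern (hypothetical, not mlwhatif's API):
# run a baseline pipeline, then re-run variants with small data changes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def run_pipeline(X_train, y_train, X_test, y_test):
    """One pipeline variant: preprocess, train, and score."""
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    return model.score(X_test, y_test)

def corrupt(X, fraction, rng):
    """An illustrative 'pipeline patch': null out a fraction of training rows."""
    X = X.copy()
    rows = rng.choice(len(X), size=int(fraction * len(X)), replace=False)
    X[rows] = 0.0  # simulate missing/erroneous feature values
    return X

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

baseline = run_pipeline(X_train, y_train, X_test, y_test)
# Sequential execution of variants -- each run repeats all the work of the
# baseline, which is the redundancy mlwhatif eliminates via shared operators.
for fraction in (0.1, 0.3, 0.5):
    variant = run_pipeline(corrupt(X_train, fraction, rng),
                           y_train, X_test, y_test)
    print(f"corruption={fraction:.0%}: score delta {variant - baseline:+.3f}")
```

With n variants, this naive loop does roughly n times the baseline's work even though most operators (test-set preparation, featurization of unchanged columns) are identical across variants, which is why the paper reports speedups that grow with the number of variants.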

Publisher

Association for Computing Machinery (ACM)


Cited by 4 articles.
