Author:
Borgonovo Emanuele, Plischke Elmar, Prieur Clémentine
Abstract
Recent studies have emphasized the connection between machine learning feature importance measures and total-order sensitivity indices (total effects, henceforth). Feature correlations and the need to avoid unrestricted permutations make the estimation of these indices challenging. Additionally, there is no established theory or approach for non-Cartesian domains. We propose four alternative strategies for computing total effects that account for both dependent and constrained features. Our first approach combines a generalized winding stairs design with the Knothe-Rosenblatt transformation. While applicable to a wide family of input dependencies, it becomes impractical when inputs are physically constrained. Our second approach is a U-statistic that combines the Jansen estimator with a weighting factor. The U-statistic framework allows the derivation of a central limit theorem for this estimator; however, the design is computationally intensive. Our third approach therefore uses derangements to reduce the computational burden significantly, and we prove consistency and central limit theorems for these estimators as well. Our fourth approach is based on a nearest-neighbour intuition and further reduces the computational burden. We test these estimators in a series of increasingly complex computational experiments with features constrained on compact and connected domains (circle, simplex) and on non-compact and non-connected domains (Sierpinski gaskets). We provide comparisons with machine learning approaches and conclude with an application to a realistic simulator.
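For context, the second approach described above builds on the classical Jansen estimator of total effects. The sketch below illustrates that baseline estimator in its standard form for independent inputs on a Cartesian domain; it is a minimal illustration only, and the function name `jansen_total_effects`, the sample size, and the Ishigami test function are assumptions introduced here, not taken from the paper. The paper's estimators extend this idea to dependent and constrained features via a weighting factor, derangements, and nearest neighbours.

```python
import numpy as np


def jansen_total_effects(model, A, B):
    """Jansen-type estimator of total Sobol' indices (total effects).

    Assumes independent inputs on a Cartesian domain; this is the
    unconstrained baseline that the paper generalizes to dependent
    and constrained features.

    model : callable mapping an (N, k) array of inputs to an (N,) array
    A, B  : two independent (N, k) input samples
    """
    N, k = A.shape
    yA = model(A)
    yB = model(B)
    var_y = np.var(np.concatenate([yA, yB]), ddof=1)
    T = np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # resample only feature i
        # Jansen (1999): T_i = E[(f(A) - f(A_B^(i)))^2] / (2 Var[Y])
        T[i] = np.mean((yA - model(ABi)) ** 2) / (2.0 * var_y)
    return T


if __name__ == "__main__":
    # Illustrative run on the Ishigami test function (a=7, b=0.1);
    # the analytical total effects are roughly (0.56, 0.44, 0.24).
    def ishigami(X):
        return (np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2
                + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))

    rng = np.random.default_rng(0)
    A = rng.uniform(-np.pi, np.pi, size=(2 ** 14, 3))
    B = rng.uniform(-np.pi, np.pi, size=(2 ** 14, 3))
    print(jansen_total_effects(ishigami, A, B))
```

Note that this baseline relies on freely swapping one column between two independent sample matrices, which is exactly the step that becomes problematic under feature correlations or domain constraints.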
Funder
Università Commerciale Luigi Bocconi
Publisher
Springer Science and Business Media LLC