Authors:
Gianluigi Pillonetto, Tianshi Chen, Alessandro Chiuso, Giuseppe De Nicolao, Lennart Ljung
Abstract
In the previous parts of the book, we studied how to handle linear system identification using regularized least squares (ReLS) with finite-dimensional structures given, e.g., by finite impulse response (FIR) models. In this chapter, we cast this approach in the RKHS framework developed in the previous chapter. We show that ReLS with quadratic penalties can be reformulated as a function estimation problem in the finite-dimensional RKHS induced by the regularization matrix. This leads to a new paradigm for linear system identification that also provides new insights and regularization tools for infinite-dimensional problems involving, e.g., IIR and continuous-time models. For this class of problems, we will see that the representer theorem ensures that the regularized impulse response is a finite linear combination of basis functions given by the convolution of the system input with the kernel sections. We then consider the issue of kernel estimation and introduce several tuning methods that have close connections with those related to the regularization matrix discussed in Chap. 3. Finally, we introduce the notion of stable kernels, which induce RKHSs containing only absolutely summable impulse responses, and study minimax properties of regularized impulse response estimation.
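To make the ReLS formulation concrete, the following is a minimal sketch (not the book's code) of regularized FIR impulse response estimation with a quadratic penalty induced by a kernel-type regularization matrix. The function name `rels_fir`, the choice of a TC-style regularization matrix P[i,j] = alpha^max(i,j), and all parameter values are illustrative assumptions, not taken from the source.

```python
import numpy as np

def rels_fir(u, y, n=50, gamma=1e-2, alpha=0.9):
    """Regularized least squares (ReLS) estimate of an FIR model.

    Solves  min_g ||y - Phi g||^2 + gamma * g' inv(P) g,
    where Phi is the Toeplitz regression matrix built from the input u
    and P is an (assumed) TC-style regularization matrix
    P[i, j] = alpha ** max(i, j), encoding smooth, decaying responses.
    """
    N = len(y)
    # Toeplitz regression matrix: row t holds u[t], u[t-1], ..., u[t-n+1]
    Phi = np.zeros((N, n))
    for t in range(N):
        for k in range(min(t + 1, n)):
            Phi[t, k] = u[t - k]
    # Regularization (kernel) matrix on the FIR coefficients
    idx = np.arange(1, n + 1)
    P = alpha ** np.maximum.outer(idx, idx)
    # Closed-form ReLS solution, written in its "kernel" form
    # g = P Phi' (Phi P Phi' + gamma I)^{-1} y,
    # equivalent to g = (Phi'Phi + gamma inv(P))^{-1} Phi' y
    g = P @ Phi.T @ np.linalg.solve(Phi @ P @ Phi.T + gamma * np.eye(N), y)
    return g
```

The kernel form of the solution mirrors the representer-theorem viewpoint of the chapter: the estimate is a finite linear combination of the columns of P Phi', i.e., of the kernel sections filtered through the input.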
Publisher
Springer International Publishing