Abstract
The number of adjustable parameters in a model or hypothesis is often taken as the formal expression of its simplicity. I take issue with this 'definition' and argue that comparative simplicity has a quasi-empirical measure, reflecting the judgements of experts who track the past use of a model-type in or across domains. Since models are represented by restricted sets of functions in a suitable space, formally speaking, a general 'measure of simplicity' may be defined implicitly for the elements of a function space. This paper sketches such a framework starting from intuitive constraints. It is shown how experts' judgements feed into this framework and how the usual definition can be recovered. A theorem by H. Akaike in the theory of model choice has recently been used to shed new light on the relationship between the demand for simplicity and empirical success, or even 'truth'. The approach favored here permits an alternative answer based on a reliabilist account of justification: if judgements of simplicity track the past successful use of a model-type, comparative simplicity is evidential and inductive.
Funder
Ludwig-Maximilians-Universität München
Publisher
Springer Science and Business Media LLC
Subject
General Social Sciences, Philosophy