Affiliation:
1. Division of Mathematics, Computer Science, and Statistics, The University of Texas at San Antonio, San Antonio, Texas 78249 USA
Abstract
We demonstrate sufficient conditions for polynomial learnability of suboptimal linear threshold functions using perceptrons. The central result is as follows. Suppose there exists a vector w* of n weights (including the threshold) with “accuracy” 1 − α, “average error” η, and “balancing separation” σ, i.e., with probability 1 − α, w* correctly classifies an example x; over examples incorrectly classified by w*, the expected value of |w* · x| is η (the source of the inaccuracy does not matter); and over a certain portion of correctly classified examples, the expected value of |w* · x| is σ. Then, with probability 1 − δ, the perceptron achieves accuracy at least 1 − [ε + α(1 + η/σ)] after O(nε⁻²σ⁻² ln(1/δ)) examples.
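The bound above concerns the perceptron algorithm itself; for intuition, the sketch below shows the classic perceptron update on labeled examples. The function name, the learning rate of 1, labels in {−1, +1}, and the convention of folding the threshold into w as a constant input feature are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def perceptron(examples, labels, num_passes=1):
        """Classic perceptron update.

        examples: (m, n) array; the threshold is folded in as a constant
                  feature, matching the abstract's n weights including
                  the threshold.
        labels:   array of +1/-1 class labels.
        """
        m, n = examples.shape
        w = np.zeros(n)
        for _ in range(num_passes):
            for x, y in zip(examples, labels):
                # Update only when the current weights misclassify x
                # (or leave it exactly on the decision boundary).
                if y * np.dot(w, x) <= 0:
                    w = w + y * x
        return w

Under the abstract's assumptions, the result says that a sample of size m = O(nε⁻²σ⁻² ln(1/δ)) suffices for the learned w (used as sign(w · x)) to reach the stated accuracy with probability 1 − δ.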
Subject
Cognitive Neuroscience, Arts and Humanities (miscellaneous)
Cited by
5 articles.