Abstract
Input scenes often contain a vast amount of information, which tends to make pattern-recognition decisions laborious and time-consuming. In traditional digital pattern-recognition methods, one digitizes the input scene using a two-dimensional detector, e.g. a solid-state photodiode array and a frame store. If the detector consists of, say, a 1000 × 1000 array of detection elements, then one has to process a million points of data. This is too much information for even very large computers to process in real time, so one generally transforms the input into some sort of feature-space representation, e.g. an edge-enhanced image, and makes the recognition decision based on this reduced data set.
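The reduction step described above can be illustrated with a minimal sketch. The following is a hypothetical example (not from the paper): a tiny digitized "scene" is reduced to a binary edge map using a simple forward-difference gradient, and the number of edge pixels is compared against the raw pixel count to show how few points survive the feature-space transformation. The `edge_map` function, the scene, and the threshold are all illustrative assumptions.

```python
# Illustrative sketch: reduce a digitized scene to an edge-based
# feature representation before making a recognition decision.

def edge_map(image, threshold=2):
    """Return a binary edge map via a forward-difference gradient magnitude.

    A pixel is marked as an edge when |dI/dx| + |dI/dy| meets the threshold.
    (A crude stand-in for the edge-enhancement step; real systems would use
    a proper operator such as Sobel.)
    """
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = image[y][x + 1] - image[y][x]   # horizontal difference
            gy = image[y + 1][x] - image[y][x]   # vertical difference
            if abs(gx) + abs(gy) >= threshold:
                edges[y][x] = 1
    return edges

# A tiny 6 x 6 "scene": a bright square on a dark background,
# standing in for the detector's (e.g. 1000 x 1000) pixel array.
scene = [[5 if 2 <= y <= 4 and 2 <= x <= 4 else 0 for x in range(6)]
         for y in range(6)]

edges = edge_map(scene)
total = sum(len(row) for row in scene)
edge_pixels = sum(map(sum, edges))
print(edge_pixels, total)  # edge pixels are a small fraction of the raw data
```

The point of the sketch is only the data-volume argument: the recognition decision can then be made on the sparse set of edge pixels rather than on every detector element.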