This chapter provides an overview of computational principles that may be useful in addressing questions of computation in nervous systems and, more broadly, in biological systems. It begins by introducing several key mathematical concepts, including the notion of a “function” and the distinctions between computable and noncomputable functions and between linear and nonlinear functions. It then considers a number of computational principles, such as look-up tables and linear associators, before discussing a different type of principle, one that satisfies constraints through a process of “relaxation”; in particular, it describes Hopfield networks and Boltzmann machines. It also examines learning in neural nets, competitive learning, curve fitting, feedforward nets, and recurrent nets. Finally, it assesses the importance of optimization procedures to neuroscience, along with the role of realistic and abstract network models in the field.
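
The idea of constraint satisfaction by “relaxation” mentioned above can be conveyed with a minimal sketch, assuming a standard Hopfield network with Hebbian (outer-product) storage and asynchronous updates; the code below is illustrative and not taken from the chapter, and all names in it are the sketch's own.

```python
# Illustrative sketch (not from the chapter): constraint satisfaction by
# "relaxation" in a Hopfield network. One pattern is stored with a Hebbian
# outer-product rule; a noisy cue is then relaxed by asynchronous updates,
# each of which can only lower the network's energy.
import numpy as np

rng = np.random.default_rng(0)

# Store one binary (+1/-1) pattern in a symmetric weight matrix.
pattern = rng.choice([-1, 1], size=16)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)  # no self-connections

def energy(state, W):
    """Hopfield energy; asynchronous updates never increase it."""
    return -0.5 * state @ W @ state

def relax(state, W, sweeps=10):
    """Asynchronously update units until the state settles."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Cue: the stored pattern with a few bits flipped.
cue = pattern.copy()
cue[:3] *= -1

settled = relax(cue, W)
print("energy of cue:    ", energy(cue.astype(float), W))
print("energy of settled:", energy(settled.astype(float), W))
print("recovered stored pattern:", np.array_equal(settled, pattern))
```

Running the sketch shows the energy dropping as the noisy cue relaxes into the stored pattern, which is the sense in which the network settles into a state that satisfies the constraints encoded in its weights.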