Abstract
Enabling cognition in a Virtual Character (VC) may be an exciting endeavor for both its designer and the character itself. A typical VC interacts primarily with its virtual world, but given sensory capabilities such as vision or hearing, it could be expected to explore parts of the real world and interact with the intelligent beings there. A virtual character should therefore be equipped with algorithms to localize and track humans (e.g., via 2D or 3D models), recognize them (e.g., by their faces), and communicate with them. Such perceptual capabilities call for a sophisticated Cognitive Architecture (CA) to be integrated into the design of a virtual character, one that enables the VC to learn from intelligent beings and reason like one. To appear natural, this CA needs to be fairly seamless, reliable, and adaptive. Hence, a vision-based, human-centric approach to VC design is explored here.