Affiliation
1. University of Southern California, Los Angeles, CA, and Harvey Mudd College, Claremont, CA
2. University of Southern California, Los Angeles, CA
Abstract
This article describes the design and implementation of the Multimodal Interactive Musical Improvisation (Mimi) system. Unique to Mimi is its visual interface, which provides the performer with instantaneous and continuous information on the state of the system, in contrast to other human-machine improvisation systems, which require performers to intuit possible extemporizations in response to machine-generated music without forewarning. In Mimi, the information displayed extends into the near future and reaches back into the recent past, making the performer aware of the musical context and allowing them to plan their responses accordingly. This article presents the details of Mimi's system design, its visual interface, and its implementation using the formalism defined by François' Software Architecture for Immersipresence (SAI) framework. Mimi is the result of a collaborative, iterative design process. We recorded the design sessions and present here findings from the transcripts that provide evidence for the impact of visual support on improvisation planning and design. The findings demonstrate that Mimi's visual interface offers musicians the opportunity to anticipate and to review decisions, making it an ideal performance and pedagogical tool for improvisation: it allows novices to create more contextually relevant improvisations and experts to be more inventive in their extemporizations.
Funder
National Science Foundation
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Science Applications
Cited by
3 articles.