DMI Design Space

From Malloch et al. 2011 – “Input Devices and Music Interaction”:

There is a huge variety of situations and interactions which might be termed “Musical Performance,” ranging from classical “virtuosic” performance on acoustic musical instruments to turntablism, mixing or live diffusion, to live coding of musical processes and sound synthesis. It is obvious that these different musical interactions present very different needs in terms of interface design, so we have found it essential to differentiate between the intentions of the creator, performer, and audience in order to establish contexts for discussing, designing, or evaluating DMIs.

In particular, we use a paradigm of interaction and musical context based on Rasmussen’s model of human information processing (Rasmussen 1986), previously used to aid DMI design by Cariou (1994). In Rasmussen’s model, interaction behaviours are described as being skill-, rule-, or model-based. Skill-based behaviour is defined as a real-time, continuous response to a continuous signal, whereas rule-based behaviour consists of the selection and execution of stored procedures in response to cues extracted from the system. Model-based behaviour refers to a still more abstract level, in which performance is directed towards a conceptual goal, and active reasoning must be used before an appropriate (rule- or skill-based) action is taken. Each of these modes is linked to a category of human information processing, distinguished by how the human interprets the environment: during the various modes of behaviour, environmental conditions are perceived as playing distinct roles, which can be categorized as signals, signs, and symbols.
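
As a loose illustration of this taxonomy (a sketch of our own, not drawn from the chapter), the three behaviour modes and the information categories they are linked to can be written down as a small lookup structure; the example activities and their placements below are assumptions made purely for demonstration.

```python
from enum import Enum


class Behaviour(Enum):
    """Rasmussen's three modes of interaction behaviour."""
    SKILL = "skill-based"   # continuous response to a continuous signal
    RULE = "rule-based"     # selection and execution of stored procedures
    MODEL = "model-based"   # reasoning toward a conceptual goal


# How environmental information is interpreted in each mode.
INFORMATION_CATEGORY = {
    Behaviour.SKILL: "signal",
    Behaviour.RULE: "sign",
    Behaviour.MODEL: "symbol",
}

# Hypothetical placements of some familiar musical interactions.
EXAMPLES = {
    "bowing a violin": Behaviour.SKILL,
    "launching clips in a live set": Behaviour.RULE,
    "interpreting a conceptual score": Behaviour.MODEL,
}

for activity, mode in EXAMPLES.items():
    print(f"{activity}: {mode.value} "
          f"(environment perceived as {INFORMATION_CATEGORY[mode]}s)")
```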

Model visualization based on Rasmussen’s typology of human information processing (Malloch 2006). From left to right, the systems represented are less and less tolerant of interruption of the channels of control.

[The figure above] shows a visualization we have developed for comparing and discussing musical devices based on Rasmussen’s framework (Malloch 2006). Performance behaviours are represented on the top row, and performance contexts on the bottom row. Since these contexts and behaviours may be blurred or mixed, we have also included “interruption tolerance” as a horizontal axis, meaning the tolerance of the system to interruption of the channels of control between user and machine. For example, if the performer stops “playing” and leaves to get coffee, will the system be affected immediately, after some length of time, or not at all? This idea has also been represented as “granularity of control” and later as “balance of power in performance” (Overholt 2009); we feel that “interruption tolerance” is less subject to value-judgements and conflicting interpretations.
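
To make the “coffee break” thought experiment concrete, the sketch below (hypothetical; the systems and tolerance values are assumptions, not measurements from the chapter) simulates how three imagined systems respond when the control stream is interrupted: a signal-driven instrument falls silent almost immediately, a pattern sequencer keeps looping for a while, and a generative process is unaffected.

```python
def simulate(control_values, interruption_tolerance_s, tick_s=0.5):
    """Return per-tick output activity for a hypothetical system.

    control_values: control samples over time; None means the performer
    has stopped playing (e.g. left to get coffee).
    interruption_tolerance_s: how long the output persists once the
    control channel is interrupted (float('inf') means unaffected).
    """
    silent_for = 0.0
    output = []
    for value in control_values:
        silent_for = 0.0 if value is not None else silent_for + tick_s
        output.append(silent_for <= interruption_tolerance_s)
    return output


# The performer plays for 2 s, then walks away for 3 s.
control = [0.7, 0.8, 0.6, 0.5] + [None] * 6

systems = {
    "signal-driven instrument": 0.0,     # falls silent as soon as input stops
    "pattern sequencer": 2.0,            # keeps looping for a short while
    "generative process": float("inf"),  # unaffected by the interruption
}

for name, tolerance in systems.items():
    activity = simulate(control, tolerance)
    print(f"{name:25s} {''.join('#' if a else '.' for a in activity)}")
```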

Skill-based behaviour is identified by Cariou (1992) as the mode most descriptive of musical interaction, in that it is typified by rapid, coordinated movements in response to continuous signals. Rasmussen’s own definition and usage are somewhat broader: he notes that in many situations a person depends on the experience of previous attempts rather than on real-time signal input, and that human behaviour is very seldom restricted to the skill-based category. Usually an activity mixes rule- and skill-based behaviour, and performance thus becomes a sequence of automated (skill-based) sensorimotor patterns. Instruments belonging to this mode of interaction have been compared more closely in several ways: the “entry-fee” of the device (Wessel and Wright 2002), the allowance of continuous excitation of sound after an onset (Levitin et al. 2002), and the number of musical parameters available for expressive nuance (Clarke 1988) may all be considered.
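
As a minimal sketch of what signal-level, skill-based control can look like in software (our own illustration; the breath-pressure signal and the parameter mappings are invented), a continuous sensor stream might be sampled rapidly and mapped directly onto several synthesis parameters, so that nuance in the gesture carries through to the sound:

```python
import math


def breath_pressure(t):
    """Stand-in for a continuous sensor signal in the range 0..1."""
    return max(0.0, 0.6 + 0.4 * math.sin(2 * math.pi * 0.5 * t))


def map_controls(pressure):
    """Continuously map one control signal to two synthesis parameters,
    so expressive nuance in the gesture carries through to the sound."""
    amplitude = pressure ** 2          # simple nonlinear response curve
    brightness = 0.3 + 0.7 * pressure  # hypothetical filter parameter
    return amplitude, brightness


# Sample the gesture at 100 Hz; the synthesis parameters follow the
# signal sample-by-sample rather than waiting for discrete events.
for n in range(8):
    t = n / 100.0
    p = breath_pressure(t)
    amp, bright = map_controls(p)
    print(f"t={t:.2f}s pressure={p:.2f} amp={amp:.2f} brightness={bright:.2f}")
```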

During rule-based performance the musician’s attention is focused on controlling a process rather than a signal, responding to extracted cues and to internal or external instructions. Behaviours considered rule-based are typified by the control of higher-level processes and by situations in which the performer acts by selecting and ordering previously determined procedures, such as live sequencing, or by using “dipping” or “drag and drop” metaphors (Wessel and Wright 2002). Rasmussen describes rule-based behaviour as goal-oriented, but observes that the performer may not be explicitly aware of the goal. As in the skill-based domain, interactions and interfaces in the rule-based area can be further distinguished by the rate at which a performer can effect change and by the number of task parameters available as control variables.
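
By contrast, a minimal sketch of rule-based interaction (again hypothetical; the clip names and the fallback rule are our own) has the performer selecting and ordering pre-stored patterns in response to cues, with decisions arriving at the rate of sections rather than samples:

```python
# Pre-stored material: the performer treats these as units ("signs"),
# selecting and ordering them rather than shaping a continuous signal.
CLIPS = {
    "intro": ["C2", "G2", "C3"],
    "groove": ["C2", "E2", "G2", "E2"],
    "break": ["rest", "rest", "C3"],
}


def perform(cues):
    """Turn a sequence of high-level cues into an ordered playlist.

    Each cue is mapped to a stored procedure by a simple rule; unknown
    cues fall back to repeating the groove (a hypothetical default).
    """
    playlist = []
    for cue in cues:
        clip = CLIPS.get(cue, CLIPS["groove"])
        playlist.append((cue, clip))
    return playlist


# The performer's decisions arrive at the rate of sections, not samples.
for cue, clip in perform(["intro", "groove", "groove", "break"]):
    print(f"{cue:>6s} -> {clip}")
```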

The model-based domain occupies the left side of the visualization, where both the amount of control available to the performer and the rate of that control are low. It differs from the rule-based domain in its reliance on an internal representation of the task, making it not only goal-oriented but goal-controlled. Rather than performing by selecting among previously stored routines, a musician exhibiting model-based behaviour possesses only goals and a conceptual model of how to proceed. She must rationally formulate a useful plan to reach that goal, using active problem-solving to determine an effectual course of action. This approach is thus often used in unfamiliar situations, when a repertoire of rule-based responses does not already exist.

Performance contexts can also be distributed among the interaction domains by considering their relationship to the types of information described by Rasmussen. The signal domain relates to most traditional instrumental performance, whether improvised or pre-composed, since its output is used at the signal level for performance feedback. The sign domain relates to sequenced music, in which pre-recorded or pre-determined sections are selected and ordered. Lastly, the symbol domain relates to conceptual music, which is characterized not by its literal presentation but rather by the musical context in which it is experienced. In this case, problem solving and planning are required; for example, conceptual scores may lack specific “micro-level” musical instructions and instead consist of a series of broader directives or concepts to be actively interpreted by the performer (Cage 1961).

Related publications:

Joseph Malloch, Stephen Sinclair, Avrum Hollinger, and Marcelo M. Wanderley. “Input Devices and Music Interaction”. In Jorge Solis and Kia Ng (Eds.): Musical Robots and Interactive Multimodal Systems. Springer Tracts in Advanced Robotics, Vol. 74. Berlin: Springer-Verlag, 2011.

Joseph Malloch, David Birnbaum, Eliot Sinyor, and Marcelo M. Wanderley. “Towards a new conceptual framework for digital musical instruments”. In Proceedings of the 9th International Conference on Digital Audio Effects (DAFx-06), Montreal, Canada, pp. 49–52, 2006.