Wireless Communications and Mobile Computing
Volume 2019, Article ID 9234812, 12 pages
https://doi.org/10.1155/2019/9234812
Research Article

The Influence of Coauthorship in the Interpretation of Multimodal Interfaces

1Creative Arts and Industries, University of Auckland, New Zealand
2Madeira-ITI and FCT/Universidade Nova de Lisboa, Portugal
3Faculty of Computer Science, Free University of Bozen-Bolzano, Italy

Correspondence should be addressed to Fabio Morreale; f.morreale@auckland.ac.nz

Received 15 January 2019; Accepted 2 April 2019; Published 24 April 2019

Guest Editor: Federico Avanzini

Copyright © 2019 Fabio Morreale et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents a model to codesign interpretively flexible artefacts. We present the case study of Beatfield, a multimodal system that allows users to control audiovisual material by means of tangible interaction. The design of the system was coauthored by individuals with different backgrounds and interests to encourage a range of different interpretations. The capability of Beatfield to foster multiple interpretations was evaluated in a qualitative study with 21 participants. Elaborating on the outcome of this study, we present a new design model that can be used to stimulate heterogeneous interpretations of interactive artefacts.

1. Introduction

Throughout history, many musical instruments gained wide uptake after being used in ways that designers did not originally envision [1]. This is the case, for instance, of the tape recorder, which was transformed into a compositional tool by composers and engineers in the 20th century [2], and of the turntable, which found a new identity in the hands of DJs [1]. In more recent years, several musical interfaces have been appropriated by players in ways that designers did not envision [1, 3]. The very nature of interactive devices and musical instruments, it seems, becomes clear only when submitted to the verdict of the final users, whose possibility for appropriating and repurposing technology gave rise to the hacking culture at the end of the last century.

Only in recent years did researchers in human-computer interaction (HCI) start focusing their attention on the phenomena of interface appropriation [4, 5], hacking, and making communities [6–8]. Prior to that, the objective of HCI designers was to convey a single, clear interpretation of their design artefacts. This objective was a direct consequence of the contexts in which HCI operated at that time, i.e., the work environment, which required research on human factors to achieve better ergonomics and increased productivity [9]. However, due to its recent intersection with new domains, in particular the arts and humanities [9], the HCI community started questioning this approach, suggesting that multiple interpretations of an interactive system can coexist [10, 11]. A considerable corpus of work was then produced advocating for the importance of appropriating and even subverting interactive systems [11–13]. Researchers and designers started exploring ways to design interactive systems that can be easily appropriated by the user. To date, however, no work has pinpointed a series of design activities that allow a single interactive artefact to present itself in different ways to different users and thus stimulate a variety of interpretations.

In this paper, we discuss our attempt to address this challenge. We introduce the idea that the design of interpretively flexible systems should embed multiple values and backgrounds at the design stage. To this end, we propose a codesign method based on a number of activities in which a few participants coauthor a new interactive system. These activities, inspired by a design framework of musical interfaces [14], were tested to guide the development of Beatfield, an interactive music system designed to be interpretively flexible (Figure 1). We empirically evaluated the extent to which Beatfield managed to stimulate a variety of interpretations in a field study with 21 participants. A qualitative analysis was performed on interview transcripts and video recordings. The results confirmed that Beatfield successfully elicited a variety of interpretations about its own nature and the way to interact with it. Elaborating on these results, we offer new insights on human interaction with multimodal devices.

Figure 1: Beatfield, an interactive music system developed as a research through design artefact.

The remainder of this paper is structured as follows. Section 2 reviews the related work; Section 3 describes the design process that led to the development of Beatfield, whose implementation is presented in Section 4. The field study is presented in Section 5, and its results are discussed in Section 6. The paper concludes with reflections on the design of open-meaning multimodal systems.

2. Related Work

2.1. Design for Open Interpretation

The idea of an “interpretively flexible” artefact first appeared in [11], who described it as a system in which meaning is coconstructed by users and designers. The resulting system is described as a sort of “Rorschach interface” onto which each user would project their own personal meanings. A number of design suggestions were also identified to encourage users to appropriate and reinterpret a system and produce their own meanings [11]. For instance, the system should not constrain the user to a single interaction mode; instead, it should provide information about the topic without specifying how to relate to it and stimulate original interpretations while discouraging expected ones. Similarly, Gaver and colleagues suggested that if the ultimate purpose is to foster the association of personal meanings with a design artefact, any clear narrative of use should be avoided [13]. Multiple interpretations of a piece of design can be promoted by embedding in the design ambiguous situations that require users to participate in meaning-making [15].

Even though ambiguity had been avoided in HCI for a long time, researchers now advocate that it can be considered a resource for design [15–17]. Three kinds of ambiguity in design can be distinguished [15]: ambiguity of information, which arises when the information is presented in an ambiguous way; ambiguity of context, which occurs when things assume different meanings according to the context; and ambiguity of relationship, which is related to the user’s personal relationship with a piece of design. A series of design considerations were also proposed to foster an ambiguous reception. For instance, the product should expose inconsistencies and cast doubts on the source of information; it should assemble disparate contexts, so as to create tensions that must be resolved; and it should diverge from its original meaning when used in radically new contexts.

HCI researchers also proposed to support open interpretations of artefacts by embedding elements of appropriation, i.e., “improvisations and adaptations around technology” [5]. A number of suggestions to design for appropriation have been offered by Dourish [4] and Dix [5], including (i) including elements where users can add their own meanings [5]; (ii) offering tools to accomplish a task without forcing an interaction strategy [4, 5]; and (iii) supporting multiple perspectives on information [4]. Another factor that can encourage multiple interpretations of a design artefact is randomness [18]. Randomness provides users with positive and rich experiences as, arguably, it is the very experience of unpredictability that most captures user imagination. Our proposed codesign model uses as design material some of these suggestions for designing for open interpretation in the context of interactive systems.

2.2. Interactive Music Systems

The design space of interactive music systems unfolds along different dimensions, which have been categorised by several researchers in a number of design frameworks [14, 19, 20]. This design space can be roughly described along a continuum that stretches from digital musical instruments (DMIs) to interactive installations [19]. DMIs are innovative devices to perform music: they can either take a form that resembles that of existing instruments [21, 22] or be completely new interfaces [23, 24]. At the other end of the spectrum lie interactive installations, which are designed with a focus on the experience of the player rather than the actual musical output [14]. This is the case, for instance, of The Music Room, which was conceived to stimulate collaborative experiences among players [25]. In other cases, the objective seems to be purposely stimulating an ambiguous reception. Polyfauna, for instance, is a mobile app that allows the user to influence audio and visual aspects, while the mapping between the input and the output remains ambiguous. As another example of enigmatic mapping, Cembalo Scrivano is an augmented typewriter that responds to user interaction with ambiguous sounds and visuals [26].

In parallel to the development of novel music creation tools, new relations between different actors (i.e., designers, composers, and performers) emerged [27]. Schnell and Battier [28] proposed the concept of the composed instrument, an interactive music system that is both an instrument and a score. Under this perspective, the design of an interactive music system coincides with the compositional process and the authorship of the resulting artistic outcome is embedded in the technology itself. In these cases, the composer is usually also the designer and imposes her own aesthetic on the artefact. Recently, a codesign activity has been proposed to collaboratively develop musical interfaces [29].

The codesign model that we present in this paper is inspired by MINUET, a framework aimed at assisting the design of interactive music systems centred on the player experience [14]. MINUET is structured as a design process consisting of two stages, Goal and Specifications (Figure 2). At the first stage, Goal frames a conceptual model of the reason for existence of the interface by considering three separate entities: people, activities, and contexts. People addresses the designer’s objectives from the point of view of the target category of players and from the role of the audience; Activities addresses the character of the envisioned interaction; Contexts addresses the environment and the physical setup of the interface. Each of these entities is composed of a number of concepts that consider design issues on a more pragmatic level (e.g., activities are defined by the concepts motivation, collaboration, learning curve, and ownership). The second stage of the design process, Specifications, delves into the goals from the point of view of the interaction constraints between the player and the artefact. Concepts considered at this stage are control, mapping, input, feedback, operational freedom, and embodied facilitation. Control refers to the extent to which the player has control over the music output. Mapping refers to the relation between the user input and the musical output. It can be convergent (a sequence of actions produces a single sound) or divergent (a single action affects many musical factors). Input concerns the modality of interaction (e.g., visual, tactile, semantic). Feedback considers different feedback modalities beyond sound. Operational freedom indicates the extent to which the instrument can push the player to have a creative, flexible interaction. Embodied facilitation invites designers to consider whether or not the design of the interface should impose limitations by suggesting specific interaction modalities.
In the work presented in this paper, we modelled the proposed design activities on MINUET to foster reflections on both abstract and practical levels when designing an interface that is interpretively open.

Figure 2: MINUET, a design framework of musical interfaces [14].

3. Beatfield: Design

The goal of this project was to generate design activities aimed at creating an interpretively flexible artefact, as defined in Section 2.1. As opposed to the suggestion offered by [11], which endorsed the centrality of the designer in the process of designing interpretively flexible systems, we propose the idea that the designer should in fact resist embedding their own aesthetics in the design of the system. Rather, we advocate for several individuals to coauthor the system through codesign sessions, thus avoiding privileging any particular interpretation.

We tested this model in an empirical study with a multidisciplinary team composed of a researcher in HCI, a musician, a game designer, and a visual artist. This team was asked to propose a number of possible scenarios. MINUET [14] appeared to be the ideal framework to guide this process as it was specifically designed to elaborate ideas and objectives when designing an interactive music system. However, MINUET does not provide practical guidance on how to put these ideas into practical design activities. To this end, we created a design activity divided into two parts so as to mirror the structure of MINUET. Specifically, the goal of Part I is to outline a new interactive music system, and Part II is dedicated to embedding in its design elements that make it open to interpretation.

3.1. Part I: Coauthoring a New Interactive Music System

The first step of MINUET is intended to generate a conceptual model of the interaction, i.e., the user story at a very high level. We operationalised this step in the following way. The researcher prepared a block of sheets to be handed to each participant. Each block was composed of three sheets, each describing one of the entities specified in MINUET and its associated concepts. As each entity is equally important in the design of an interactive music system [14], the order of the entities was shuffled across the three sheets to avoid privileging any of them. As an example, the order of the entities and their associated concepts in one of the blocks was sheet 1 - Context; sheet 2 - Activities; and sheet 3 - People. The design activity took place in a “brainstorming” room that contained several facilities and appliances to stimulate design activities. The researcher moderated the sessions. To start with, he explained that the goal of the activity was to conceptualise a new interactive music system. The research objective of creating an open-interpretation artefact was purposely not disclosed at this point.

For the first task, each participant came up with a scenario using the entity on sheet 1 and the associated concepts. Participants were invited to draw or write down their ideas. After 10 minutes, the blocks were collected and redistributed in a different order among the participants, who were asked to read the scenario described on the first sheet and to elaborate it further, this time using the entity indicated on sheet 2. The same process was repeated a third time: participants had to finalise the scenario authored by the other two participants using the entity of the last sheet. At the end, all scenarios were coauthored by all participants, and each participant contributed a different entity to each scenario. Finally, the three scenarios were presented and discussed, and the researcher then invited the team to come up with a single scenario. The outcome of this activity was the conceptual proposition of a tangible multimodal interface that we eventually named Beatfield. The high-level user story of Beatfield as defined at the end of the activity follows: Beatfield is an ambiguous tangible exploration of an audiovisual landscape. A musical source is placed inside a box; players can let the music spawn from it by placing objects on top of the box. Each time a new object is positioned on the board, a music pattern plays.

3.2. Part II: Making the System Open to Interpretation

The second part of the workshop consisted of defining the specifications of the interface in a way that stimulates multiple interpretations. Only at this point did the researcher disclose the actual objective of the project: to build an interface that was interpretively flexible. The activity took the form of a focus group. At the beginning of the session, the researcher handed each participant a paper containing a list of design suggestions identified from the related work on open interpretation of design artefacts (Table 1). Participants were invited to reflect upon the user story defined in the first part of the workshop and to detail possible design specifications using the concepts associated with this stage of MINUET. Each participant was then given 30 minutes to individually elaborate upon each concept adopting the design suggestions described in the list. At the end, each concept was discussed as a group. The descriptions of each concept were iteratively refined several times before the final versions were settled, which are detailed below using concepts from MINUET.

Table 1: List of design suggestions that we elaborated from related work. These suggestions were offered to the codesign participants as design material to stimulate idea generation and constitute the boundaries of our intervention in the design process.
3.2.1. Input

Players can control music and visuals by positioning pieces on a game board. A total of eight pieces are available, equally divided into two colours (blue and yellow). Each colour group is composed of one king and three pawns, which have different roles. The crafting of the pieces was later commissioned from an artist who was asked to (i) take inspiration from chess pieces while at the same time departing from them (using chess pieces in a different context might stir ambiguous receptions among players [15]), (ii) suggest that the two kinds of pieces have different roles, and (iii) evoke an enigmatic imagery. The initial sketches and the final pieces are shown in Figure 3.

Figure 3: Initial sketches of the pieces and final builds.
3.2.2. Feedback

Given the multimodal character of the system, we opted for auditory, tactile, and visual feedback. Priority was given to sound, whose lower semantic character compared to visuals might lead to a higher variability of interpretations. Some simple visual feedback was nonetheless included to add a further layer of interpretation.

3.2.3. Control

Players can interact with some aspects of the music at a high level. Specifically, they are allowed to control the rhythm of the composition, leaving harmony and melody selection to the computer. The mechanics of the interaction are defined: placing an object on the game board always results in the same outcome. However, the rules and roles of the interaction are withheld from the players, an approach that contrasts with traditional games, in which players have a specific role that is made for them by the game, “a set of expectations within the game to exercise its effect” [30]. Rather, we prevented the players from building expectations, thus allowing them to play according to their own wishes and decide their own strategies. Players can build formal definitions of their interaction (e.g., they agree on a turn-based system), thus adopting a “voluntary attempt to overcome unnecessary obstacles” [31], and become actual players. For instance, players might or might not give pieces of different colours the identity of teams; also, there is no specific need for cooperative or competitive behaviours, but both behaviours have the potential to emerge. It is a design without expectations, which allows the experience to exceed the boundaries of belonging to a particular domain, be it game, music, or artwork. This idea resonates with the concept of the well-played game [32]: a well-played game cannot be found in a specific game, but rather in the experience, in the spirit of playing itself. Games can be “well-played” as long as they are continuously created and players are encouraged to change their standard rules.

3.2.4. Mapping

The mapping between user interaction and generated music is divergent and nontransparent. Player actions are not directly associated with any specific sound. Rather, an algorithmic agent determines the evolution of the piece; thus, the player cannot precisely recognise their gestures in the musical output. The mapping between the physical objects and the music is the result of a design process meant to stimulate reflection in the player and to remain ambiguous.

3.2.5. Operational Freedom

Players are free to guess the functionality of the system and to decide their own objective of interaction based on their interpretation of the system. To this end, any information and narrative of use about the functionalities and the objectives of the interface are withheld from players.

3.2.6. Embodied Facilitation

Beatfield is intended to offer limited freedom of interaction. There is a set of moves and strategies that are considered acceptable and that are necessary for the achievement of an effect in the game (e.g., the player can place a piece between two cells of the board, but cannot expect any outcome). However, the system was not developed with any particular interaction in mind, nor does it assume a particular behaviour of the users. In other words, while the ways to use the system are not unlimited, neither is there a single right way.

4. Beatfield: Implementation

We iteratively prototyped the technological apparatus to accommodate the specifications outlined in the previous section. The final configuration of Beatfield is shown in Figure 1. The interface is composed of an augmented game board that sits on top of a wooden box containing a wireless portable speaker. The chosen speaker outputs sound in two opposite directions; thus, sound is reproduced with the same intensity from all four sides, which prevents Beatfield from having one correct orientation.

4.1. Gesture Sensing

The physical interface was an adaptation of an existing embedded system developed by colleagues at the University of Trento [33]. It is a wooden box whose surface (the game board) is divided into 36 5x5 cm cells organised as a 6x6 matrix. Each cell is equipped with electronic contacts that allow communication with the blocks and with magnets that help the blocks slot in. The top cover is made of a thin sheet of translucent black plastic, beneath which lies a matrix of RGB LEDs, one for each cell. Players can interact with the interface by positioning custom-made wooden blocks on the surface. The base of each block embeds a number of microcontrollers, custom electronics boards, and surface contacts that allow it to exchange information with the cells of the box. On top of each base, we slotted the crafted pieces shown in Figure 3. To facilitate mechanical placement, the corners of each cell have L-shaped cuts that are aligned with the bottom corners of the blocks, which have the same L-shape but extruded. This mechanical guidance is supported by the use of magnets hidden under the surfaces; the magnetic attraction between a cell and a block secures the electrical connection. Since the base of each block is square, there are four possible orientations; therefore, the pieces can be detected regardless of their orientation. In order to obtain a fully portable system, an integrated Wi-Fi network was included inside the box to communicate with an external computer.
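The placement and removal events that drive the system can be sketched by diffing successive snapshots of the 6x6 board. This is a hypothetical reconstruction: the firmware is not published, and the `(colour, kind)` cell representation below is our assumption.

```python
# Hypothetical reconstruction of the sensing logic (the firmware is not
# published): placement and removal events are obtained by diffing two
# successive snapshots of the 6x6 board. Each cell holds None (empty) or
# an assumed (colour, kind) tuple reported by the cell electronics.

def diff_board(prev, curr):
    """Return (placed, removed) lists of (row, col, piece) events."""
    placed, removed = [], []
    for r in range(6):
        for c in range(6):
            if prev[r][c] is None and curr[r][c] is not None:
                placed.append((r, c, curr[r][c]))
            elif prev[r][c] is not None and curr[r][c] is None:
                removed.append((r, c, prev[r][c]))
    return placed, removed

empty = [[None] * 6 for _ in range(6)]
after = [row[:] for row in empty]
after[2][3] = ("blue", "pawn")        # a blue pawn placed on cell (2, 3)
placed, removed = diff_board(empty, after)
```

In the real system, each placement event would trigger the corresponding rhythmic pattern and each removal event would stop it; orientation is handled mechanically, so a simple diff suffices.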

4.2. Game Mechanics

The mechanics of the game are illustrated in Figure 4.

Figure 4: (a) Each cell is associated with a specific rhythm, which is played once a pawn is placed on top of it. (b) Once positioned on top of a cell, the king suggests favourable positions by illuminating up to three cells with its colour. The rhythm associated with these cells is similar to that where the king sits. (c) When a pawn is positioned on top of a cell with its same colour, its harmonic spectrum is enhanced. (d) When both kings are on the panel, their favourable positions might point to the same cell(s). In this case, a green colour lights up.
4.3. Music Generation

The sound design and the mapping between the physical objects and the music are the result of a design process meant to be open to different interpretations. Beatfield generates in real time a tune that is divided into two main components: a drone tone and a number of rhythmic patterns, whose notes are based on a global harmony. The drone is always active as a background sound, even in idle condition, while the rhythmic patterns only play following user interaction. Specifically, a rhythmic pattern is triggered and looped each time a pawn is placed on a cell: the more pawns are added to the board, the more rhythmic patterns are generated. When a pawn is removed from the board, the corresponding rhythmic pattern is turned off. A global metronome set at 60 BPM synchronises all the rhythmic patterns and the drone tone. The global harmony consists of a set of 8 notes forming an atonal scale to control the overall harmonic coherence of the music. This set of notes cyclically changes every 12 beats: at each new cycle, one of the notes is removed from the set of available ones and replaced with a randomly chosen new one. This new note is also transposed two octaves lower and replaces the previous note of the drone. As a result, every 12 beats the drone plays a new note, producing a continuously changing background.
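The harmony cycle described above can be sketched as follows. This is a minimal illustration, not the authors' Max/MSP patch; the MIDI note numbers and the pool of candidate notes are assumptions.

```python
import random

# Sketch of the described harmony cycle (not the authors' code): an
# 8-note set changes every 12 beats; one note is replaced by a randomly
# chosen new note, which, transposed two octaves down, becomes the new
# drone note. The MIDI numbers and candidate pool are assumptions.

def cycle_harmony(scale, rng, note_pool):
    """Replace one note of the scale; return (new_scale, new_drone_note)."""
    scale = list(scale)                        # do not mutate the caller's set
    idx = rng.randrange(len(scale))            # note to remove
    candidates = [n for n in note_pool if n not in scale]
    new_note = rng.choice(candidates)          # randomly chosen replacement
    scale[idx] = new_note
    drone = new_note - 24                      # two octaves lower
    return scale, drone

rng = random.Random(0)                         # seeded for reproducibility
base = [60, 61, 63, 66, 68, 70, 71, 73]       # hypothetical atonal 8-note set
pool = list(range(48, 84))                     # hypothetical candidate notes
new_scale, drone = cycle_harmony(base, rng, pool)
```

Calling `cycle_harmony` once every 12 beats makes both the drone and the notes available to the rhythmic patterns drift continuously, matching the described behaviour.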

4.3.1. Rhythmic Patterns

Each cell of the board is associated with a specific rhythm according to the following mapping. Different columns are associated with different lengths of the bar: the leftmost column is one-quarter bar long, the one on its right is two quarters long, and so on up to six quarters in the rightmost column. The rows determine the number of notes in the bar: the topmost row has one note per bar, the second has two notes, and so on up to six notes per bar in the bottom row (Figure 5). This configuration allows for a variety of rhythmic combinations, whose density spans from the bottom-left cell, which plays six notes per second, to the top-right cell, which plays one note every six seconds.
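The cell-to-rhythm mapping can be expressed compactly. This is a sketch under the stated 60 BPM tempo; the function and variable names are ours, not the authors'.

```python
# Sketch of the cell-to-rhythm mapping (names are ours): column index
# col (0 = leftmost) gives a bar lasting col+1 quarter notes; row index
# row (0 = topmost) gives row+1 evenly spaced notes per bar. At the
# global tempo of 60 BPM, one quarter note lasts exactly one second.

def cell_rhythm(row, col, bpm=60):
    """Return (bar_seconds, notes_per_bar, notes_per_second) for a cell."""
    quarter = 60.0 / bpm
    bar_seconds = (col + 1) * quarter
    notes_per_bar = row + 1
    return bar_seconds, notes_per_bar, notes_per_bar / bar_seconds

# Densest cell: bottom-left (row 5, col 0) plays six notes per second;
# sparsest cell: top-right (row 0, col 5) plays one note every six seconds.
dense = cell_rhythm(5, 0)     # (1.0, 6, 6.0)
sparse = cell_rhythm(0, 5)    # (6.0, 1, ~0.167)
```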

Figure 5: Table of possible rhythms on the matrix.
4.3.2. Sound Design

Both the drone and the rhythmic patterns are generated using FM synthesis developed in Max/MSP. The modulation index and the carrier-to-modulator ratio are used to differentiate the timbre of the patterns according to the colour of the associated pawn. Blue pawns have a bell-like, low timbre, while yellow pawns have a bright timbre. The modulation index also reacts to the colour of the LED lights: when a pawn is positioned on top of a cell illuminated with its same colour, the harmonic spectrum of the corresponding rhythmic pattern is enriched with higher harmonics by increasing the modulation index.
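A minimal sketch of the FM synthesis principle at play follows. The actual implementation is in Max/MSP; the specific frequencies, the ratio convention, and the index values below are illustrative assumptions.

```python
import math

# Minimal FM synthesis sketch (the real patch runs in Max/MSP; the
# frequencies, ratio convention, and index values are illustrative):
# a carrier whose phase is modulated by a sinusoid, with the modulation
# index controlling the richness of the resulting spectrum.

def fm_sample(t, f_carrier, ratio, index):
    """y(t) = sin(2*pi*fc*t + index*sin(2*pi*fm*t)), with fm = fc/ratio."""
    f_mod = f_carrier / ratio
    return math.sin(2.0 * math.pi * f_carrier * t
                    + index * math.sin(2.0 * math.pi * f_mod * t))

def render(f_carrier, ratio, index, sr=44100, seconds=0.01):
    """Render a short buffer of samples at sample rate sr."""
    return [fm_sample(n / sr, f_carrier, ratio, index)
            for n in range(int(sr * seconds))]

# A higher index yields a richer spectrum, as when a pawn sits on a cell
# lit in its own colour; lower values give a mellower, bell-like timbre.
bright = render(440.0, 2.0, 8.0)
mellow = render(220.0, 1.0, 2.0)
```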

5. Field Study

We evaluated the capability of Beatfield to foster flexible interpretations in a field study. We selected 21 participants with different backgrounds and artistic knowledge to foster diverse receptions of the system. Some were invited to try it alone (N=9), while others tried it in groups of two (N=6) or three (N=6).

The study took place in an empty room in a historical building in the city centre of Trento, Italy. The room was lit by four dim lights, and Beatfield was positioned on a table at the centre of the room (Figure 6). A visible camera recorded the players’ activities. Between sessions, the pieces were shuffled on the table next to the box, so as to prevent specific piece arrangements (e.g., pieces placed in two rows of different colours) from influencing the reception of the interface. Upon entering the room, participants were informed that the session was going to be recorded and were asked to sign a consent form, for which ethical approval had previously been requested and granted. We did not provide any explanation about the functionality and the objectives of the system. Participants were told that they could do whatever they wished and had no time limit.

Figure 6: The setup of Beatfield in the field study.

At the end of each session, a researcher interviewed participants following a semistructured approach. Participants were free to talk about whatever they considered relevant to describe their experience. At the end of the interview, every participant was asked to describe the system they had interacted with. Another source of data was provided by the videos: two researchers independently analysed the videos and thematically coded them. Coding was entirely data-driven, and themes were identified on the basis of their prevalence in the data as well as their importance. Double coding was conducted on around 30% of the videos, yielding an interrater reliability of almost 90%.
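The reported double-coding check can be illustrated as simple percent agreement between two coders. This is a hypothetical example: the paper does not specify the exact reliability metric, and the segments and labels below are invented for illustration.

```python
# Hypothetical illustration of the double-coding check: two coders label
# the same video segments, and agreement is the share of matching labels.
# The paper does not specify its reliability metric; the labels below
# mirror the theme names but are invented data.

def percent_agreement(codes_a, codes_b):
    """Fraction of segments on which the two coders assigned the same code."""
    if len(codes_a) != len(codes_b):
        raise ValueError("coders must label the same segments")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

coder1 = ["visual", "sonic", "board", "sonic", "pieces",
          "visual", "sonic", "board", "visual", "sonic"]
coder2 = ["visual", "sonic", "board", "board", "pieces",
          "visual", "sonic", "board", "visual", "sonic"]
agreement = percent_agreement(coder1, coder2)   # 9/10 = 0.9
```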

A thematic analysis was performed on the data source composed of interview transcripts and the videos. A deductive approach was adopted: we had preexisting coding frames through whose lenses we aimed to read our research exploration [34]. The coding process identified the different behaviours exhibited by participants as well as the interpretations they attributed to the system. Then, a list of themes was identified by clustering codes. The following list indicates the identified themes and the associated codes.
(i) Visual exploration: {visual/light patterns; colour homogeneity; reaction to light; self-assignment of colours; presence of the pieces on top of the lights}
(ii) Sonic exploration: {timing in piece placement; awareness of sound source; awareness of drone sound; listening to sonic outcome; attempt to understand mapping}
(iii) Board exploration: {systematic exploration of each cell; geometric placement of pieces}
(iv) Pieces exploration: {moving pieces with no attention to sonic output; intentional movements of pieces; using the king; using the king only; piece rotation; awareness of colour and type of moved pieces}
(v) Interaction mode: {reaction to light; systematic exploration; movement combination; wait-and-see after a movement}
(vi) Kind of engagement: {enjoyment; understanding mapping; epiphany/surprise; oral communication; involvement in music making; absorbed by the experience; appropriation of the system; strategic planning; flow of actions}

6. Results

Session lengths varied from 11.3 to 46.1 minutes (average 22.4, standard deviation 9.5). In general, we observed three different interpretations.

6.1. Exploration

Nearly all the participants received Beatfield as an exploratory game, but the objective of the exploration varied greatly. Our analysis shows that attention clustered into two almost orthogonal sets: on the one side, how the system functions; on the other, what the system means. Several participants spent most of the time trying to understand the functionality of the system, the possibilities they were offered, and the basic elements of the interaction. One group, composed of three art historians (all female, average age = 24), focused on understanding how the system worked and how they could interact with it: “We spent most of the time trying to understand the underlying mechanism rather than purposely creating a melody or mixing different sounds”. Another participant (33, M, designer) reported: “I was trying to figure out what I could do with this … thing”. Rather than focusing on the possibilities for interaction, some participants interpreted the system as a sort of puzzle game in which the goal was to discover the system’s objective. A board-game enthusiast (M, 33), for instance, said: “I tried to understand whether there was a predefined aim to this game. Because, apparently, it was a game. For instance, I tried nullifying sounds, or finding specific configuration for something special”. He also added: “I was trying to position the towers in a way that they could completely fill a specific area to see if something would happen”.

6.2. Immersion

In a few cases, players’ experiences resembled the psychological state that Csikszentmihalyi calls flow: being spontaneously absorbed in an activity [35]. In particular, an artist (F, 26), rather than attempting a conscious interaction with the system, interpreted it as a free exploration and let herself be absorbed by it for more than 40 minutes. When prompted, she tried to explain her experience: “I was placing the pieces using my instinct. I believe you have to be free in this kind of experiences, you should not set yourself objectives”. She was not interested in understanding the values of the pieces: “I didn’t want to control the instrument too much. I always try to control everything, but this time I felt I was free, I let the music follow me. I realised that colours changed but I didn’t realise why it was that - I didn’t want them to”. Rather, she connected Beatfield to personal meanings and life events from her own experience: “While playing I was having a sort of parallel existence. I was not here. All these cells, the slots for the towers…they were all connected to a picture I had in my mind. I lived through some moments of my life”.

6.3. Artistic Creation

Several participants interpreted Beatfield as a musical interface. Two professional musicians (F, 26; M, 32), for instance, reported: “It is a controller. It reminded me of a loop station, and partially a sequencer. It is a way to create musical environments that can be more or less rhythmical. But with some randomness”. They reported that they had spent most of their session searching for polyrhythms and complex situations. However, although they spent more than 45 minutes interacting with the installation, they acknowledged that “it is too difficult to plan a precise musical interaction without knowing what’s behind the installation. Even if you had a musical objective in mind, reaching that configuration is virtually impossible. However, you can somehow create a musical landscape that moves”. In other cases, players focused their interaction on visual elements, such as piece placement and light combinations. As an example, two musicians, both expert chess players (24 and 26, M), spent a considerable amount of time trying to arrange the pieces into particular geometric shapes and to make the green lights appear: “When we had a lot of pieces on the surface we mostly played with the lights. And we also tried to play with geometry - perhaps a regular shape would create a particular sound”.

7. Findings

In this section we discuss the main findings of the work. First, we offer evidence that our proposed design model successfully elicited a variety of interpretations. Second, we reflect on how this model provides a new perspective for the design of interpretively flexible artefacts.

7.1. Fostering Multiple Interpretations

The field study demonstrated the capability of Beatfield to foster a range of possible interpretations and opportunities for interaction, with consequences for the very status of the system and for the experience of the player. Beatfield turned out to be a musical instrument, a board game, and an interactive artwork. Its perceived nature also changed over the course of a session on several occasions. For instance, a passionate board-game player (21, F) initially dedicated her full attention to controlling light activation, then tried to control the music towards the end of the session. This change in the interpretation of the system was mirrored by a change in the experience of the user, as explained by another participant (26, F, artist): “At first, I was just placing towers on the board in complete freedom. I was not focusing on what I was doing. Then, I gradually listened to all the sounds and started to organise them”, to the point that she even found her favourite cell position: “I really like the one that was playing there (position 6,6)”.

The variety of interpretations disclosed different natures of the system that were inherently embedded in it by the identities and values of the workshop participants, which strongly determined the meaning that users could make of Beatfield. The emergence of these different interpretations was favoured by the design activities, which gave voice to the different insights of the designers. These activities struck a fine balance among the authors’ values, goals, insights, competences, and interests, which merged and grew together in a number of coauthored design scenarios that reflected the diversity of their proponents. The definition of these design scenarios was supported by the adoption of MINUET as a reference framework [14], whose design suggestions we operationalised. For instance, we proposed shuffling the order of the entities of the framework when conceptualising the new installation to allow different scenarios to emerge. This finding can appeal to readers interested in creating open-meaning artefacts and offers new knowledge on human interaction with multimodal devices.

It is worth mentioning that, even though Beatfield successfully elicited different interpretations, it also presented a defined aesthetic identity: especially from a musical perspective, the outcome is embedded in the artefact itself. We can therefore argue that Beatfield maintains some key elements of a composed instrument, given its intrinsic nature of being both a playable tool and a piece of music in its own right [28]. However, Beatfield is not the product of a typical single-authored process but rather of a codesign process. This process also differs from that of the cocreated composed tool proposed by Masu and Correia [29], as in that case the authorship over the final aesthetic musical result was negotiated between a composer/designer and a performer.

7.2. Coauthoring Interpretively Flexible Experiences

Following the conceptualisation of Sengers and colleagues [11], a system is considered interpretively flexible when its meaning or interpretation is coconstructed by the user and the designer. The model presented in this paper, which we schematically represent in Figure 7, departs from this view insofar as the coconstruction of meaning predates user interaction. The conditions for fostering multiple meanings are set at design time by adopting design activities that embed the different values and ideas of the codesigners, resulting in the emergence of interpretations whose quality and number are as varied as the values and identities of the individuals involved. These design activities, which were inspired by the MINUET framework [14], allow the designer to set the general direction by offering initial suggestions (Table 1) as design material. Our investigation thus did not focus on evaluating the quality of the design suggestions against the experiences of the participants of our study; rather, these suggestions constitute the very boundary of designer intervention.

Figure 7: From the bottom up: the designer initially outlines a series of design suggestions defining the constraints for user interaction. These suggestions are then used by codesigners during the codesign activities to frame the goals of the interaction and outline the specifications for the final artefact. At the interaction stage, multiple interpretations of the artefact are stimulated.

8. Conclusion and Future Work

The expression “too many cooks in the kitchen” describes a situation in which an effort to achieve an outcome is made unproductive by too many individuals seeking to have input. In this paper, we argue that this is not the case for multimodal interfaces that are designed to be interpretively open. We advocated embedding the values and perspectives of different individuals into the design of a multimodal interface in order to elicit heterogeneous interpretations. We proposed that this process benefits from being framed within a validated design framework and tested this proposition with the case study of Beatfield, in which we intentionally avoided crafting the semiotics of the artefact so as to give rise to idiosyncratic interpretations of the system. We also expanded upon findings from related work to formulate design choices that could stimulate a variety of interpretations of an interactive system. Notably, this study did not precisely pinpoint the specific factors that allowed such a diversity of experiences to emerge.

We also introduced the idea that the design of open-ended tools may benefit from nonobvious relations among the different elements of the artefact. In our case study, for example, different interpretations were elicited by keeping sound and visuals independent from each other. This separation of audio and visual elements can be considered nonsyncretic, in contrast to the syncretic approach, i.e., the amalgamation of audio and visuals in traditional media [37]. As opposed to the syncretic approach to audiovisual media, which advocates combining audio and visuals to emphasise a concept, this approach purposely separates the two aspects to foster multiple interpretations. The results of the study confirmed that unclear relations between sound and visuals indeed fostered different interpretations.

The findings of this paper open new challenges for multimedia devices. In particular, the conscious use of ambiguity in design, relying on cocreation and on the adoption of nonobvious relations among different sensory modes, can help designers make their artefacts open to different interpretations, foster exploration, and support appropriation. Future studies will include a larger number of participants to systematically test whether the emergence of specific interpretations and experiences is ascribable to personal experience, interests, and background. Furthermore, future studies will explore differences between users experiencing the system individually or as a group, and between participants tied by different forms of relationship. Finally, in this paper we focused on coauthorship at design time; future research will investigate issues of coauthorship in use, when different people elaborate meaning by manipulating an ambiguous space together.

Data Availability

Interview transcripts and the videos of participants' sessions will be uploaded to an institutional website and made freely accessible to everybody.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

We want to thank our colleagues who helped us with this project, in particular Zeno Menestrina, Andrea Conci, and Adriano Siesser. We are also grateful to all the participants who volunteered to join our case study.

Supplementary Materials

A video collected during the field study is available as supplementary material.

References

  1. V. Zappi and A. McPherson, “Dimensionality and appropriation in digital musical instrument design,” in Proc. NIME, pp. 455–460, 2014.
  2. S. Reich, Writings on Music, Oxford University Press, Oxford, UK, 2002.
  3. F. Morreale and A. De Angeli, “Evaluating visitor experiences with interactive art,” in Proceedings of the 11th Biannual Conference on Italian SIGCHI Chapter, pp. 50–57, ACM, 2015.
  4. P. Dourish, “The appropriation of interactive technologies: some lessons from placeless documents,” Computer Supported Cooperative Work (CSCW), vol. 12, no. 4, pp. 465–490, 2003.
  5. A. Dix, “Designing for appropriation,” in Proceedings of the 21st British HCI Group Annual Conference on People and Computers: HCI but not as we know it - Volume 2, pp. 27–30, BCS Learning & Development Ltd., 2007.
  6. S. Lindtner, S. Bardzell, and J. Bardzell, “Reconstituting the utopian vision of making: HCI after technosolutionism,” in Proceedings of the 34th Annual Conference on Human Factors in Computing Systems, CHI 2016, pp. 1390–1402, ACM, 2016.
  7. A. L. Toombs, S. Bardzell, and J. Bardzell, “The proper care and feeding of hackerspaces: care ethics and cultures of making,” in Proceedings of the 33rd Annual CHI Conference on Human Factors in Computing Systems, CHI 2015, pp. 629–638, ACM, 2015.
  8. F. Morreale, G. Moro, A. Chamberlain, S. Benford, and A. P. McPherson, “Building a maker community around an open hardware platform,” in Proceedings of the 2017 ACM SIGCHI Conference on Human Factors in Computing Systems, CHI 2017, pp. 6948–6959, ACM, 2017.
  9. S. Bødker, “Third-wave HCI, 10 years later - participation and sharing,” Interactions, vol. 22, no. 5, pp. 24–31, 2015.
  10. K. Kwastek, Aesthetics of Interaction in Digital Art, MIT Press, Cambridge, MA, USA, 2013.
  11. P. Sengers and B. Gaver, “Staying open to interpretation: engaging multiple meanings in design and evaluation,” in Proceedings of the Conference on Designing Interactive Systems, DIS 2006, pp. 99–108, 2006.
  12. K. Höök, P. Sengers, and G. Andersson, “Sense and sensibility: evaluation and interactive art,” in Proceedings of the CHI 2003 Conference on Human Factors in Computing Systems, pp. 241–248, 2003.
  13. W. W. Gaver, J. Bowers, A. Boucher et al., “The drift table: designing for ludic engagement,” in CHI ’04 Extended Abstracts on Human Factors in Computing Systems, pp. 885–900, 2004.
  14. F. Morreale, A. De Angeli, and S. O’Modhrain, “Musical interface design: an experience-oriented framework,” in Proc. NIME, pp. 467–472, 2014.
  15. W. W. Gaver, J. Beaver, and S. Benford, “Ambiguity as a resource for design,” in Proceedings of the CHI 2003 Conference on Human Factors in Computing Systems, pp. 233–240, 2003.
  16. G. Bell, M. Blythe, and P. Sengers, “Making by making strange: defamiliarization and the design of domestic technologies,” ACM Transactions on Computer-Human Interaction (TOCHI), vol. 12, no. 2, pp. 149–173, 2005.
  17. C. Muth, V. M. Hesslinger, and C.-C. Carbon, “The appeal of challenge in the perception of art: how ambiguity, solvability of ambiguity, and the opportunity for insight affect appreciation,” Psychology of Aesthetics, Creativity, and the Arts, vol. 9, no. 3, p. 206, 2015.
  18. T. W. Leong, “Designing for experiences: randomness as a resource,” in Proceedings of the 6th Conference on Designing Interactive Systems, pp. 346–347, University Park, PA, USA, 2006.
  19. D. Birnbaum, R. Fiebrink, J. Malloch, and M. M. Wanderley, “Towards a dimension space for musical devices,” in Proceedings of the 2005 Conference on New Interfaces for Musical Expression, pp. 192–195, 2005.
  20. A. Johnston, L. Candy, and E. Edmonds, “Designing and evaluating virtual musical instruments: facilitating conversational user interaction,” Design Studies, vol. 29, no. 6, pp. 556–571, 2008.
  21. A. McPherson, “The magnetic resonator piano: electronic augmentation of an acoustic grand piano,” Journal of New Music Research, vol. 39, no. 3, pp. 189–202, 2010.
  22. J. Harrison, R. H. Jack, F. Morreale, and A. McPherson, “When is a guitar not a guitar? Cultural form, input modality and expertise,” in Proc. NIME, 2018.
  23. J. Snyder, “Snyderphonics Manta controller, a novel USB touch-controller,” in Proc. NIME, pp. 413–416, 2011.
  24. V. Zappi, A. Allen, and S. Fels, “Shader-based physical modelling for the design of massive digital musical instruments,” in Proc. NIME, pp. 145–150, 2017.
  25. F. Morreale, A. De Angeli, R. Masu, P. Rota, and N. Conci, “Collaborative creativity: the Music Room,” Personal and Ubiquitous Computing, vol. 18, no. 5, pp. 1187–1199, 2014.
  26. G. Lepri and A. McPherson, “Mirroring the past, from typewriting to interactive art: an approach to the re-design of a vintage technology,” in Proc. NIME, Blacksburg, Virginia, USA, 2018.
  27. F. Morreale, A. McPherson, M. Wanderley et al., “NIME identity from the performer’s perspective,” in Proc. NIME, 2018.
  28. N. Schnell and M. Battier, “Introducing composed instruments, technical and musicological implications,” in Proc. NIME, pp. 1–5, National University of Singapore, 2002.
  29. R. Masu and N. N. Correia, “Penguin: design of a screen score interactive system,” in Proceedings of the International Conference on Live Interfaces, 2018.
  30. E. Aarseth, “I fought the law: transgressive play and the implied player,” in From Literature to Cultural Literacy, pp. 180–188, Springer, 2014.
  31. B. Suits, The Grasshopper: Games, Life and Utopia, Broadview Press, 2014.
  32. B. De Koven, The Well-Played Game: A Player’s Philosophy, MIT Press, 2013.
  33. Z. Menestrina, R. Masu, M. Bianchi, A. Conci, and A. Siesser, “OHR,” in Proceedings of the 1st ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play, CHI PLAY 2014, pp. 355–358, 2014.
  34. V. Braun and V. Clarke, “Using thematic analysis in psychology,” Qualitative Research in Psychology, vol. 3, no. 2, pp. 77–101, 2006.
  35. M. Csikszentmihalyi, “Toward a psychology of optimal experience,” in Flow and the Foundations of Positive Psychology, pp. 209–226, Springer, 2014.
  36. J. Marshall, S. Benford, and T. Pridmore, “Deception and magic in collaborative interaction,” in Proceedings of the 28th Annual CHI Conference on Human Factors in Computing Systems, CHI 2010, pp. 567–576, 2010.
  37. M. Chion, Audio-Vision: Sound on Screen, translated by C. Gorbman, Columbia University Press, 1994.