Title: | I-SOUNDS: Emotion-Based Music Generation for Virtual Environments |
Authors: | Lopes, Eduardo; Brisson, António; Paiva, Ana |
Editors: | Paiva, Ana; Prada, Rui; Picard, Rosalind |
Keywords: | Music; Computing; Artificial Intelligence |
Issue Date: | 2007 |
Publisher: | Springer Verlag |
Citation: | “I-SOUNDS: Emotion-Based Music Generation for Virtual Environments”, in Ana Paiva, Rui Prada, and Rosalind Picard (eds.), Affective Computing and Intelligent Interaction, Berlin: Springer, 2007. |
Abstract: | With the emergence of new interactive virtual environments, new needs at the level of user interaction demand an answer. Building Interactive Drama applications, in which users act out their roles and build a story in cooperation with virtual characters, poses several challenges. One of these challenges is to build Autonomous Affective Characters that are able to establish an affective interaction with the users. Several approaches have been taken towards this goal, most of them using gestures and facial expressions that rely solely on the Visual Channel. Other approaches use the Auditory Channel, either as the characters' speech or as background music. However, most of these approaches use pre-defined samples, which contrasts with the emergent approach taken in the Visual Channel. With I-Sounds we want to increase the Affective Bandwidth of an Interactive Drama system called I-Shadows by implementing a fully emergent system that generates affective sounds based on musical theory and on the emotional state of the characters. The project's main goal is to build a software system able to translate emotions into music generated in real time: a "virtual composer" able to deliver emotionally contextualized music. |
URI: | http://hdl.handle.net/10174/6054 |
Type: | article |
Appears in Collections: | UnIMeM - Artigos em Livros de Actas/Proceedings |
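The abstract above centres on translating a character's emotional state into musical parameters in real time. The Python sketch below is purely illustrative and is not taken from the I-Sounds implementation: it assumes a hypothetical valence/arousal emotion model and a simple heuristic mapping to tempo, mode, and dynamics.

    # Illustrative only: a hypothetical emotion-to-music-parameter mapping,
    # not the I-Sounds system described in the paper.
    from dataclasses import dataclass

    @dataclass
    class EmotionalState:
        valence: float  # -1.0 (negative) .. +1.0 (positive); assumed model
        arousal: float  #  0.0 (calm)     ..  1.0 (excited);  assumed model

    def musical_parameters(state: EmotionalState) -> dict:
        """Map an emotional state to coarse musical parameters."""
        # Higher arousal -> faster tempo (60..180 BPM).
        tempo_bpm = 60 + round(120 * state.arousal)
        # Positive valence -> major mode; negative -> minor (common heuristic).
        mode = "major" if state.valence >= 0.0 else "minor"
        # Arousal also raises loudness (MIDI velocity 40..110).
        velocity = 40 + round(70 * state.arousal)
        return {"tempo_bpm": tempo_bpm, "mode": mode, "velocity": velocity}

    # Example: a frightened character -> fast, loud, minor-mode material.
    print(musical_parameters(EmotionalState(valence=-0.6, arousal=0.8)))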