EvoWorkshops2006: EvoMUSART

4th European Workshop on Evolutionary Music and Art

The application of Evolutionary Computation (EC) techniques to the development of creative systems is a new, exciting and significant area of research. There is growing interest in the application of these techniques in fields such as art and music generation, analysis and interpretation; architecture; and design.

EvoMUSART 2006 is the fourth workshop of the EvoNet working group on Evolutionary Music and Art. Following the success of previous events, the main goal of EvoMUSART 2006 is to bring together researchers who are using Evolutionary Computation in this context, providing the opportunity to promote, present and discuss ongoing work in the area.

The workshop will include an open panel for the discussion of the most relevant questions of the field. In order to promote participation, we encourage the participants to submit topics for debate. To foster cooperation among researchers, there will also be a panel for the proposal and discussion of potential collaboration opportunities.

The event includes an exhibition and demonstration session held at the Artpool Art Research Center, giving an opportunity for the presentation of evolutionary art and music in an informal environment. The submission of works for the demonstration session is independent of the submission of papers.

Accepted papers will be presented orally at the workshop and included in the EuroGP2006 conference proceedings, published by Springer Verlag in the Lecture Notes in Computer Science series.

Web address: http://www.evonet.info/eurogp2006/

Topics include

Organising Committee

Program Chairs
Juan Romero
jj AT udc DOT es
University of A Coruña
 
Penousal Machado
machado AT dei DOT uc DOT pt
CISUC - Centre for Informatics and Systems, University of Coimbra
 
EvoWorkshops2006 Chair
Franz Rothlauf
rothlauf AT uni-mannheim DOT de
University of Mannheim, Germany
 
Local Chair
Anikó Ekárt
ekart AT sztaki DOT hu
Hungarian Academy of Sciences
 
Publicity Chair
Steven Gustafson
smg AT cs DOT nott DOT ac DOT uk
University of Nottingham, UK

Note: the e-mail addresses are masked for spam protection.

Programme Committee

Alan Dorin, Monash University, Australia
Alice C. Eldridge, University of Sussex, UK
Amilcar Cardoso, University of Coimbra, Portugal
Alejandro Pazos, University of A Coruna, Spain
Anargyros Sarafopoulos, Bournemouth University, UK
Andrew Horner, Hong Kong University of Science & Technology, Hong Kong
Antonino Santos, University of A Coruna, Spain
Bill Manaris, College of Charleston, USA
Carlos Grilo, School of Technology and Management of Leiria, Portugal
Colin Johnson, University of Kent, UK
Eduardo R. Miranda, University of Plymouth, UK
Evelyne Lutton, INRIA, France
Francisco Camara Pereira, University of Coimbra, Portugal
Gary Greenfield, University of Richmond, USA
Gerhard Widmer, Johannes Kepler University Linz, Austria
James McDermott, University of Limerick, Ireland
Janis Jefferies, Goldsmiths College, University of London, UK
Jeffrey Ventrella, Independent Artist, US
John Collomosse, University of Bath, UK
Jon McCormack, Monash University, Australia
Jorge Tavares, University of Coimbra, Portugal
Ken Musgrave, Pandromeda, Inc., US
Lee Spector, Hampshire College, USA
Luigi Pagliarini, Academy of Fine Arts of Rome, Italy & University of Southern Denmark, Denmark
Martin Hemberg, Imperial College London, UK
Matthew Lewis, Ohio State University, USA
Mauro Annunziato, Plancton Art Studio, Italy
Michael Young, University of London, UK
Niall J.L. Griffith, University of Limerick, Ireland
Paul Brown, Visiting Professor, Centre for Computational Neuroscience and Robotics, University of Sussex, UK
Paulo Urbano, Universidade de Lisboa, Portugal
Peter Bentley, University College London, UK
Peter Todd, Max Planck Institute for Human Development, Germany
Rafael Ramirez, Pompeu Fabra University, Spain
Rodney Waschka II, North Carolina State University, USA
Scott Draves, San Francisco, USA
Stefano Cagnoni, University of Parma, Italy
Stephen Todd, IBM, UK
Tatsuo Unemi, Soka University, Japan
Tim Blackwell, University of London, UK
William Latham, Art Games Ltd, UK

Accepted Papers: titles and abstracts

Using Physiological Signals to Evolve Art
Tristan Basa, Christian Anthony Go, Kil-Sang Yoo, Won-Hyung Lee

Human subjectivity has always posed a problem when it comes to judging designs. The line that divides what is interesting from what is not is blurred by interpretations as varied as the individuals themselves. Some approaches have made use of novelty in determining interestingness. However, computational measures of novelty such as the Euclidean distance are mere approximations to what the human brain finds interesting. In this paper, we explore the possibility of determining interestingness more directly by using learning techniques such as Support Vector Machines to identify emotions from physiological signals, and then using genetic algorithms to evolve artworks that result in positive emotional signals.
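
As a rough illustration of the loop described above (a sketch under our own assumptions, not the authors' implementation), the following Python fragment couples an emotion classifier to a simple genetic algorithm; render, sensors.read_features and the classifier object are hypothetical placeholders for the artwork renderer, the physiological-signal pipeline and a trained SVM-style model.

    # Hypothetical sketch: a genetic algorithm whose fitness is a trained
    # classifier's confidence that the viewer's physiological response to an
    # artwork is a positive emotion. 'render', 'sensors' and 'classifier' are
    # assumed placeholders, not components of the published system.
    import random

    POP_SIZE, N_GENES, GENERATIONS = 20, 16, 30

    def render(genome):
        """Placeholder: map a parameter vector to an artwork shown to the viewer."""
        ...

    def positive_emotion_score(genome, classifier, sensors):
        render(genome)
        features = sensors.read_features()                   # e.g. skin conductance, heart rate
        return classifier.predict_proba([features])[0][1]    # assumed: class 1 = positive emotion

    def evolve(classifier, sensors):
        pop = [[random.random() for _ in range(N_GENES)] for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            pop.sort(key=lambda g: positive_emotion_score(g, classifier, sensors),
                     reverse=True)
            parents = pop[:POP_SIZE // 2]                     # truncation selection
            children = []
            while len(parents) + len(children) < POP_SIZE:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, N_GENES)
                child = a[:cut] + b[cut:]                     # one-point crossover
                child[random.randrange(N_GENES)] = random.random()  # point mutation
                children.append(child)
            pop = parents + children
        return pop[0]                                         # best individual of the final selection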


Continuous-Time Recurrent Neural Networks for Generative and Interactive Musical Performance
Oliver Bown, Sebastian Lexer

This paper describes an ongoing exploration into the use of Continuous-Time Recurrent Neural Networks (CTRNNs) as generative and interactive performance tools, and into the use of Genetic Algorithms (GAs) to evolve specific CTRNN behaviours. We propose that even randomly generated CTRNNs can be used in musically interesting ways, and that evolution can be employed to produce networks which exhibit properties that are suitable for use in interactive improvisation by computer musicians. We argue that the development of musical contexts for the CTRNN is best performed by the computer musician user rather than the programmer, and suggest ways in which strategies for the evolution of CTRNN behaviour may be developed further for this context.
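
For readers unfamiliar with CTRNNs, the standard dynamics the paper builds on are tau_i * dy_i/dt = -y_i + sum_j w_ji * sigmoid(y_j + theta_j) + I_i. A minimal Euler-integration sketch in Python follows (ours, not the authors' code; the parameter ranges are illustrative).

    # Minimal CTRNN sketch (the standard formulation, not the authors' code):
    #   tau_i * dy_i/dt = -y_i + sum_j w_ji * sigmoid(y_j + theta_j) + I_i
    import numpy as np

    class CTRNN:
        def __init__(self, n, rng=None):
            rng = rng or np.random.default_rng()
            self.y = np.zeros(n)                      # neuron states
            self.tau = rng.uniform(0.1, 5.0, n)       # time constants
            self.theta = rng.uniform(-1.0, 1.0, n)    # biases
            self.w = rng.uniform(-5.0, 5.0, (n, n))   # weights, w[j, i]: neuron j -> i

        def step(self, inputs, dt=0.01):
            activation = 1.0 / (1.0 + np.exp(-(self.y + self.theta)))
            self.y += dt * (-self.y + activation @ self.w + inputs) / self.tau
            return activation                         # outputs drive synthesis or control

    # A GA genotype could simply be the flattened (tau, theta, w) vector, with
    # fitness judged on the musical behaviour of the resulting network.
    net = CTRNN(8)
    for t in range(1000):
        outputs = net.step(inputs=np.sin(np.full(8, t * 0.01)))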


Science of Networks and Music: A New Approach on Musical Analysis and Creation
Gianfranco Campolongo, Stefano Vena

The Science of Networks is a very young discipline whose results have rapidly influenced many different fields of scientific research. In this paper we present some experiments with a new approach to generative music based on small-world networks. The basic idea of this work is that networks can be a useful instrument for musical modeling, analysis and creation. We studied over 100 musical compositions of different genres (classical, pop, rock) by means of the science of networks, then used this data to develop algorithms for musical creation and author attribution. The first step of this work is the implementation of software that allows musical compositions to be represented and analysed; we then developed a genetic algorithm for the production of networks with particular features. These networks are finally used for the generation of self-organized melodies and scales.
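
One plausible reading of the network construction described above (our own sketch, not the authors' software) is to treat pitches as nodes and melodic transitions as edges, then check the usual small-world indicators with the networkx library; the example melody is invented.

    # Hypothetical sketch: build a pitch-transition network from a melody and
    # compute the two standard small-world indicators.
    import networkx as nx

    def melody_to_network(pitches):
        g = nx.Graph()
        for a, b in zip(pitches, pitches[1:]):        # consecutive notes become edges
            if a != b:
                g.add_edge(a, b)
        return g

    def small_world_stats(g):
        clustering = nx.average_clustering(g)
        path_length = nx.average_shortest_path_length(g)   # assumes a connected graph
        return clustering, path_length

    melody = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72, 67, 64, 60]
    print(small_world_stats(melody_to_network(melody)))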


Supervised genetic search for parameter selection in painterly rendering
John P. Collomosse
(Nominated for Best Paper Award)

This paper investigates the feasibility of evolutionary search techniques as a mechanism for interactively exploring the design space of 2D painterly renderings. Although a growing body of painterly rendering literature exists, the large number of low-level configurable parameters that feature in contemporary algorithms can be counter-intuitive for non-expert users to set. In this paper we first describe a multi-resolution painting algorithm capable of transforming photographs into paintings at interactive speeds. We then present a supervised evolutionary search process in which the user scores paintings on their aesthetics to guide the specification of their desired painterly rendering. Using our system, non-expert users are able to produce their desired aesthetic in approximately 20 mouse clicks --- around half an order of magnitude faster than manual specification of individual rendering parameters by trial and error.
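
The interactive loop implied by the abstract can be summarised roughly as follows (a schematic Python sketch with invented parameter names such as stroke_length; the paper's actual parameters and renderer are not reproduced here). The render and ask_user_score callables stand in for the painting algorithm and the user's aesthetic rating.

    # Schematic sketch of interactive evolution over painterly-rendering
    # parameters; the parameter names and ranges below are invented.
    import random

    PARAM_RANGES = {"stroke_length": (2, 40), "stroke_width": (1, 12),
                    "colour_jitter": (0.0, 1.0), "coarseness": (0.1, 1.0)}

    def random_individual():
        return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

    def mutate(ind, rate=0.3):
        child = dict(ind)
        for k, (lo, hi) in PARAM_RANGES.items():
            if random.random() < rate:
                child[k] = min(hi, max(lo, child[k] + random.gauss(0, (hi - lo) * 0.1)))
        return child

    def interactive_evolve(render, ask_user_score, pop_size=6, generations=5):
        pop = [random_individual() for _ in range(pop_size)]
        best = pop[0]
        for _ in range(generations):
            # The user scores each rendered painting (e.g. a 1-5 rating per image).
            scores = [ask_user_score(render(ind)) for ind in pop]
            ranked = [ind for _, ind in sorted(zip(scores, pop),
                                               key=lambda pair: -pair[0])]
            best, elite = ranked[0], ranked[:2]
            pop = elite + [mutate(random.choice(elite)) for _ in range(pop_size - 2)]
        return best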


Synthesising timbres and timbre-changes from adjectives/adverbs
Alex Gounaropoulos, Colin G. Johnson

Synthesising timbres and changes to timbres from natural language descriptions is an interesting challenge for computer music. This paper describes the current state of an ongoing project which takes a machine learning approach to this problem. We discuss the challenges this presents, outline various strategies for tackling the problem, and describe some experimental work. In particular, our approach is focused on the creation of a system that uses an analysis-synthesis cycle to learn and then produce such timbre changes.


Robot Paintings Evolved using Simulated Robots
Gary Greenfield
(Nominated for Best Paper Award)

We describe our efforts to evolve robot paintings using simulated robots. Our evolutionary framework considers only the initial positions and initial directions of the simulated robots. Our fitness functions depend on the global properties of the resulting robot paintings and on the behavior of the simulated robots that occurs while making the paintings. Our evolutionary framework therefore implements an optimization algorithm that can be used to help identify robot paintings with desirable aesthetic properties. The goal of this work is to better understand how art making by a collection of autonomous cooperating robots might occur in such a way that the robots themselves are able to participate in the evaluation of their creative efforts.


Modelling Expressive Performance: a Regression Tree Approach Based on Strongly Typed Genetic Programming
Amaury Hazan, Rafael Ramirez, Esteban Maestre, Alfonso Perez, Antonio Pertusa

This paper presents a novel Strongly-Typed Genetic Programming approach for building Regression Trees in order to model expressive music performance. The approach consists of inducing a Regression Tree model from training data (monophonic recordings of Jazz standards) for transforming an inexpressive melody into an expressive one. The work presented in this paper is an extension of our previous work (Ramirez et al., 2005), where we induced general expressive performance rules explaining part of the training examples. Here, the emphasis is on inducing a generative model (i.e. a model capable of generating expressive performances) which covers all the training examples. We present our evolutionary approach for a one-dimensional regression task: the prediction of the performed note duration ratio. We then show the encouraging results of experiments with Jazz musical material, and sketch the milestones which will enable the system to generate expressive music performance in a broader sense.


MovieGene: Evolutionary Video Production based on Genetic Algorithms and Cinematic Properties
Nuno Henriques, Nuno Correia, Jônatas Manzolli, Luís Correia, Teresa Chambel

We propose a new multimedia authoring paradigm based on evolutionary computation, video annotation, and cinematic rules. New clips are produced in an evolving population through genetic transformations influenced by user choices, and regulated by cinematic techniques such as montage and video editing. The evolutionary mechanisms, through the fitness function, condition how video sequences are retrieved and assembled, based on the video annotations. The system uses several descriptors as genetic information, coded in an XML document following the MPEG-7 standard. With evolving video, the clips can be explored and discovered through emergent narratives and aesthetics in ways that inspire creativity and learning about the topics that are presented.


Audible convergence for optimal base melody extension with statistical genre-specific interval distance evaluation
Ronald Hochreiter

In this paper, an evolutionary algorithm is used to calculate optimal extensions of a base melody line by statistical interval-distance minimization. Applying an evolutionary algorithm to such an optimization problem reveals the effect of audible convergence when the iterations of the optimization process, which represent sub-optimal melody lines, are combined into a musical piece. An example is provided to evaluate the algorithm and to point out the differences that arise when different musical genres, represented by different interval-distance classification schemes, are applied.
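
A minimal reading of the fitness idea (our own sketch; the genre profile below is invented for illustration) is to score a candidate melody by the distance between its interval distribution and a genre-specific reference distribution.

    # Hypothetical sketch: fitness as the distance between a melody's interval
    # distribution and a genre-specific reference distribution (the profile
    # below is invented for illustration).
    from collections import Counter

    GENRE_PROFILE = {0: 0.10, 1: 0.20, 2: 0.30, 3: 0.15, 4: 0.10, 5: 0.08, 7: 0.07}

    def interval_distribution(melody):
        intervals = [abs(b - a) for a, b in zip(melody, melody[1:])]
        counts = Counter(intervals)
        total = sum(counts.values())
        return {i: c / total for i, c in counts.items()}

    def genre_distance(melody, profile=GENRE_PROFILE):
        dist = interval_distribution(melody)
        keys = set(dist) | set(profile)
        return sum(abs(dist.get(k, 0.0) - profile.get(k, 0.0)) for k in keys)

    # Lower is better; an evolutionary algorithm would minimise this, while the
    # sub-optimal melody lines of intermediate generations are kept and played
    # back in sequence, giving the "audible convergence" the abstract refers to.
    print(genre_distance([60, 62, 64, 62, 60, 65, 64, 62, 60]))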


A Two-Stage Autonomous Evolutionary Music Composer
Yaser Khalifa, Robert Foster

An autonomous music composition tool is developed using Genetic Algorithms. The composition is conducted in two stages. The first stage generates and identifies musically sound patterns (motifs). In the second stage, methods to combine different generated motifs and their transpositions are applied. These combinations are evaluated and, as a result, musically fit phrases are generated. Four musical phrases are generated at the end of each program run. The generated music pieces are translated into Guido Music Notation (GMN) and an alternative representation in Musical Instrument Digital Interface (MIDI) format. The Autonomous Evolutionary Music Composer (AEMC) was able to create interesting pieces of music that were both innovative and musically sound.


Evolutionary Musique Concrète
Cristyn Magnus

This paper describes a genetic algorithm that operates directly on time-domain waveforms to generate electronic music compositions. The form of these pieces is derived from the evolutionary process. Recorded sounds are treated as chromosomes. The sounds evolve in a world that consists of multiple locations. Each location has its own fitness function and mutation probabilities. These can change over the course of the piece, producing musical surprises. The aesthetic motivation of the work is discussed and the results of the algorithm are described.
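
The chromosome-level operators suggested by the abstract might look roughly like the following Python sketch (assumed operators for illustration; the piece-specific fitness functions and the multi-location world are not shown).

    # Hypothetical sketch: variation operators acting directly on time-domain
    # waveforms, represented as NumPy arrays of samples in [-1, 1].
    import numpy as np

    rng = np.random.default_rng()

    def crossover(parent_a, parent_b):
        """Splice two recorded sounds at random cut points."""
        cut_a = rng.integers(1, len(parent_a))
        cut_b = rng.integers(1, len(parent_b))
        return np.concatenate([parent_a[:cut_a], parent_b[cut_b:]])

    def mutate(waveform, p_reverse=0.2, p_gain=0.5):
        """Example mutations: reverse a random segment, or rescale the amplitude."""
        w = waveform.copy()
        if rng.random() < p_reverse:
            i, j = sorted(rng.integers(0, len(w), size=2))
            w[i:j] = w[i:j][::-1]
        if rng.random() < p_gain:
            w = w * rng.uniform(0.5, 1.5)
        return np.clip(w, -1.0, 1.0)

    # Each "location" in the piece would pair operators like these with its own
    # fitness function and mutation probabilities, which may change over time.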


A Connectionist Architecture for the Evolution of Rhythms
João Magalhães Martins, Eduardo Reck Miranda

In this paper we propose the use of an interactive multi-agent system for the study of rhythm evolution. The aim of the proposed model is to show to what extent new rhythms emerge from both the interaction between autonomous agents and the self-organisation of internal rhythmic representations. The agents' architecture includes connectionist models that process rhythmic information by extracting, representing and classifying compositional patterns. The internal models of the agents are then explained and tested. This architecture was developed to explore the evolution of rhythms in a society of virtual agents based upon imitation games, inspired by research on language evolution.


Layered Genetical Algorithms Evolving Into Musical Accompaniment Generation
Ribamar Santarosa, Artemis Moroni, Jônatas Manzolli

We present a theoretical evolutionary musical accompaniment generation system capable of evolving towards different organized sounds according to an external performer. We also present a new approach to implementing the fitness functions.


Consensual paintings
Paulo Urbano

Decentralized coordination can be achieved by the emergence of a consensual choice within a group of simple agents. Work on the emergence of social laws and on the emergence of a shared lexicon provides well-known examples of the possible benefits of consensus formation in multi-agent systems. We think that in the artificial artistic realm, agreement on certain individual choices (attributes, behaviour, etc.) can be important for the emergence of interesting patterns. We describe here an effective decentralized mechanism of consensus formation and how we can achieve a random evolution of decentralized consensual choices. Our goal is to design swarm art, exploring the landscape of forms. Uncoordinated social behaviour can be unfruitful for the goal of collective artistic creation. On the other hand, full agreement over time generally leads to too much homogeneity in a collective pattern. In this way, a random succession of collective agreements can lead to the emergence of random patterns, somewhere between order and chaos. We show several applications of this transition between consensual choices in a group of micro-painters that create random artistic patterns.
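
A toy sketch of the kind of decentralised consensus formation described above (our illustration, not the authors' mechanism) follows: pairwise imitation drives the group towards agreement on a painting attribute, while occasional innovation moves it from one consensual choice to the next.

    # Toy sketch: decentralised consensus on a painting attribute (a colour value)
    # via random pairwise imitation, with occasional innovation so that the group
    # drifts from one collective agreement to the next.
    import random

    N_AGENTS = 50
    agents = [{"colour": random.random()} for _ in range(N_AGENTS)]

    def consensus_step():
        speaker, listener = random.sample(agents, 2)
        listener["colour"] = speaker["colour"]        # the listener adopts the choice

    def innovate(rate=0.01):
        for agent in agents:
            if random.random() < rate:
                agent["colour"] = random.random()     # a new choice restarts the drift

    for step in range(10000):
        consensus_step()
        innovate()
        # Each agent would also paint with its current colour here, so the canvas
        # records the succession of (near-)consensual choices over time.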