Dr Duncan Williams - Interdisciplinary Centre for Computer Music Research, Plymouth University
Music is a powerful and near-universal vector for emotion. Listeners seem to be 'hard-wired' to respond intuitively and automatically, regardless of personal preference and individual differences. What happens in our brains when we are exposed to particular types of sound? Can neuroimaging techniques help us to understand these psychoacoustic cues and the cognitive effects elicited by different pieces of music?
This talk outlines how artificial intelligence (AI) techniques built on these concepts can be harnessed for the creation of emotionally charged music. Applications include assistive technology for disabled patients, personalised composition tools, meditative and therapeutic music generation, and next-generation soundtrack creation for video games and film.
As well as examining algorithmic composition techniques using AI, the talk gives an overview of joint fMRI (functional magnetic resonance imaging) and EEG (electroencephalography) analysis techniques for studying neural responses to music, and of how emotional neurofeedback can subsequently be used to control AI processes for the real-time creation and performance of music targeting specific emotional states in an individual.
*BCI stands for Brain-Computer Interfacing, also referred to synonymously as Brain-Machine Interfacing. See http://cmr.soc.plymouth.ac.uk/bcmi-midas/ for more details about the BCMI-MIdAS project, which uses BCI technology to monitor and induce affective states by means of neurofeedback and artificial intelligence techniques.