What makes music emotional
While musicians tended to perceive syntactically irregular music events (single irregular chords) as slightly more pleasant than non-musicians did, these generally unpleasant events induced increased blood oxygen levels in the amygdala, an emotion-related brain region.
Unexpected chords were also found to elicit specific event-related potentials (the ERAN and N5), as well as changes in skin conductance (Koelsch et al.). However, specific music events associated with pleasurable emotions have not yet been examined using central measures of emotion. Broadly, a left-biased frontal asymmetry (FA) in the alpha band (8–13 Hz) has been associated with a positive affective style, higher levels of wellbeing, and effective emotion regulation (Davidson and Irwin; Davidson; Davidson et al.; Tomarken et al.).
Interventions have been demonstrated to shift frontal electroencephalographic (EEG) activity to the left; an 8-week meditation training program, for example, significantly increased left-sided FA when compared to wait-list controls (Davidson et al.). The amygdala appears to demonstrate valence-specific lateralization, with pleasant music increasing responses in the left amygdala and unpleasant music increasing responses in the right amygdala (Blood et al.; Brattico; Bogert et al.). The pattern of data in these studies suggests that this frontal lateralization is mediated by the emotions induced by the music, rather than just the emotional valence perceived in the music.
This measure therefore provides a useful objective marker of emotional response (Hausmann et al.), allowing further identification of whether specific music events are associated with physiological measures of emotion. To optimize the likelihood that emotions were induced (that is, felt) rather than just perceived, participants listened to their own selections of highly pleasurable music.
Two validation hypotheses were proposed to confirm that the methodology was consistent with previous research. It was hypothesized that: (1) emotionally powerful and pleasant music selected by participants would be rated as more positive than silence, neutral music, or a dissonant (unpleasant) version of their music; and (2) emotionally powerful pleasant music would elicit greater shifts in frontal alpha asymmetry than control auditory stimuli or silence.
The primary novel hypothesis was that peak alpha periods would coincide with changes in basic psychoacoustic features, reflecting unexpected or anticipatory musical events. Since music-induced emotions can occur both before and after key music events, FA peaks were considered associated with music events if the music event occurred within 5 s before to 5 s after the FA event. Music background and affective style were also taken into account as potential confounds.
The sample for this study consisted of 18 participants (6 males, 12 females) recruited from tertiary institutions located in Melbourne, Australia. Participants were excluded if they were younger than 17 years of age, had an uncorrected hearing loss, were taking medication that may impact mood or concentration, were left-handed, or had a history of severe head injuries or a seizure-related disorder.
Despite clearly stated exclusion criteria, two left-handed participants attended the lab; as the pattern of their hemispheric activity did not appear to differ from that of right-handed participants, their data were retained. Informed consent was obtained through an online questionnaire that participants completed prior to the laboratory session. The online survey consisted of questions pertaining to demographic information (gender, age, a left-handedness question, education, employment status, and income), music background (MUSE questionnaire; Chin and Rickard) and affective style (PANAS; Watson and Tellegen). The survey also provided an anonymous code to allow matching with laboratory data, instructions for attending the laboratory and music choices, and explanatory information about the study and a consent form.
The physiological index of emotion was measured using electroencephalography (EEG). Further spatial exploration of data for structural mapping purposes was beyond the scope of this paper.
In addition, analyses were performed for the P3–P4 sites as a negative control (Schmidt and Trainor; Dennis and Solomon). All channels were referenced to the mastoid electrodes (M1, M2). Data were collected and analyzed offline using Compumedics Neuroscan 4 software. Subjective emotion was recorded with the Emujoy software, which allows participants to indicate how they feel in real time as they listen to the stimulus by moving a cursor along the screen.
The Emujoy program utilizes the circumplex model of affect (Russell), in which emotion is measured in a two-dimensional affective space with axes of arousal and valence. Previous studies have shown that valence and arousal account for a large portion of the variation observed in the emotional labeling of music. The sampling rate was 20 Hz (one sample every 50 ms), which is consistent with recommendations for continuous monitoring of subjective ratings of emotion (Schubert) and with Nagel et al.
Four music stimuli—practice, pleasant, unpleasant, and neutral—were presented throughout the experiment. Each stimulus lasted between 3 and 5 min.
The practice stimulus was presented to familiarize participants with the Emujoy program and to acclimatize them to the sound and to the onset and offset of the music stimulus (fading in at the start and fading out at the end). The pleasant music stimulus was participant-selected.
This option was preferred over experimenter-selected music, as participant-selected music was considered more likely to induce robust emotions (Thaut and Davis; Panksepp; Blood and Zatorre; Rickard). While previous research has used both positively and negatively valenced music to elicit strong experiences with music, in the current study we limited the music choices to those that expressed positive emotions; that is, it could not be sad music that participants enjoyed.
This decision was made to reduce variability in EEG responses arising from perception of negative emotions alongside experience of positive emotions, as EEG can be sensitive to differences in both the perception and the experience of emotional valence. The music also had to be alyrical (music with unintelligible words, for example in another language or scat singing, was permitted), as language processing might conceivably elicit different patterns of hemispheric activation solely as a function of the processing of vocabulary included in the song.
Differentiating between these various causes of emotion was, however, beyond the scope of the current study. The unpleasant music stimulus was intended to induce negative emotions. This stimulus consisted of three versions of the song played simultaneously: one pitch-shifted a tritone down, one pitch-shifted a whole tone up, and one played in reverse (adapted from Koelsch et al.). The neutral condition was an operatic track, La Traviata, chosen based upon its neutrality as observed in previous research (Mitterschiffthaler et al.).
The presentation of music stimuli was controlled by the experimenter via the EmuJoy program. The music volume was set to a comfortable listening level, and participants listened to all stimuli via bud earphones to avoid interference with the EEG cap. Prior to attending the laboratory session, participants completed the anonymously coded online survey. Participants were tested individually during a 3 h session. An identification code was requested in order to match questionnaire data with laboratory session data.
Participants were seated in a comfortable chair and were prepared for fitting of the EEG cap. The structure of the testing was explained to participants and was as follows (see Figure 1).

FIGURE 1. Example of testing structure with conditions ordered: pleasant, unpleasant, neutral, and control. B, baseline; P, physiological recording; and S, subjective rating.
The testing comprised four within-subjects conditions: pleasant, unpleasant, neutral, and control. Differing only in the type of auditory stimulus presented, each condition consisted of the following.
These lasted 3 min, and participants were asked to close their eyes and relax. At every step of each condition, participants were guided by the experimenter.
Before the official testing began, the participant was asked to practice using the EmuJoy program in response to the practice stimulus. Participants were asked about their level of comfort and understanding with regards to using the EmuJoy software; experimentation did not begin until participants felt comfortable and understood the use of EmuJoy.
Participants were reminded of the distinction between rating emotions felt vs. emotions perceived. After this, the experimental procedure began, with each condition presented to participants in a counterbalanced fashion.
Electroencephalograph data from each participant were visually inspected for artifacts; eye movements and muscle artifacts were manually removed prior to any analyses. All data were re-referenced to the mastoid processes.
Data were baseline corrected to the pre-stimulus period. EEG data were aggregated across all artifact-free periods within a condition to form one data set each for the positive music, negative music, neutral, and control conditions. Chunks were extracted for analysis using a cosine window, and power values from all chunks within an epoch were averaged (see Dumermuth and Molinari). The data were log transformed to normalize their distribution, because power values are positively skewed (Davidson). Power in the alpha band is inversely related to activation.
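The chunked, cosine-windowed power computation described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the sampling rate and 2 s chunk length are assumed values (the text does not specify them), and NumPy's FFT stands in for the Neuroscan software.

```python
import numpy as np

def alpha_log_power(signal, fs=256.0, band=(8.0, 13.0), chunk_s=2.0):
    """Mean log-transformed alpha-band power for one EEG channel.

    fs and chunk_s are illustrative assumptions; a Hann (cosine) window
    is applied to each chunk, as in the text.
    """
    n = int(fs * chunk_s)
    window = np.hanning(n)  # cosine window
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    powers = []
    for start in range(0, len(signal) - n + 1, n):
        chunk = signal[start:start + n] * window
        spec = np.abs(np.fft.rfft(chunk)) ** 2
        powers.append(spec[mask].mean())
    # Log transform normalizes the positively skewed power values.
    return float(np.log(np.mean(powers)))
```

A signal dominated by a 10 Hz oscillation should yield higher alpha log power than one dominated by a 30 Hz oscillation, which provides a quick sanity check of the sketch.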
Cortical asymmetry [ln(right) − ln(left)] was computed for the alpha band. This FA score provides a simple unidimensional scale representing the relative activity of the right and left hemispheres for an electrode pair. FA scores of 0 indicate no asymmetry, while scores greater than 0 putatively indicate greater left frontal activity (a positive affective response) and scores below 0 indicate greater right frontal activity (a negative affective response), assuming that alpha is inversely related to activity (Allen et al.).
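The asymmetry score itself is a one-line difference of log powers; a sketch with hypothetical values for an F3/F4 electrode pair:

```python
def frontal_asymmetry(ln_alpha_left, ln_alpha_right):
    """FA = ln(right alpha power) - ln(left alpha power).

    Because alpha power is inversely related to cortical activation,
    FA > 0 putatively reflects greater LEFT frontal activity (positive
    affective response) and FA < 0 greater RIGHT frontal activity.
    """
    return ln_alpha_right - ln_alpha_left

# Hypothetical log alpha powers at F3 (left) and F4 (right):
fa = frontal_asymmetry(ln_alpha_left=1.2, ln_alpha_right=1.8)
# fa > 0: relatively less alpha on the left, i.e., a left-biased,
# putatively positive affective response.
```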
FA scores (differences between right and left log power densities) were ranked from highest (most asymmetric, left-biased) to lowest using spectrograms (see Figure 2 for an example). Due to considerable inter-individual variability in asymmetry ranges, descriptive ranking was used as the selection criterion instead of an absolute threshold or a statistical difference criterion.
The ranked FA differences were inspected, and those that were clearly separated from the others (on average, six peaks were clearly more asymmetric than the rest of the record) were selected for each individual as their greatest moments of FA.

FIGURE 2. (A) EEG alpha band spectrogram; (B) subjective valence and arousal ratings; and (C) music feature analysis.

A subjective method of annotating each pleasant music piece with the temporal onsets and types of all notable changes in musical features was utilized in this study.
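The descriptive ranking step might look like the following sketch. The fixed n_peaks cut-off is an assumption for illustration only; in the study, the cut-off was set per participant by visual inspection of which ranked values were clearly separated from the rest.

```python
import numpy as np

def top_fa_peaks(fa_series, times, n_peaks=6):
    """Rank FA values from most left-biased (highest) to lowest and
    return, in chronological order, the times of the top n_peaks
    (six mirrors the average number of peaks selected per person)."""
    order = np.argsort(np.asarray(fa_series))[::-1]  # descending FA
    return sorted(float(times[i]) for i in order[:n_peaks])
```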
Coding was performed by a music performer and producer with postgraduate qualifications in systematic musicology. A decision was made to use subjective coding, as it has been successfully used previously to identify significant changes in a broad range of music features associated with emotional induction by music (Sloboda). This method was framed within a hierarchical category model which contained both low-level and high-level factors of important changes.
Secondly, the low-level factor model utilized by Coutinho and Cangelosi was applied to assign the identified music features deductively to changes within six low-level factors: loudness, pitch level, pitch contour, tempo, texture, and sharpness. Each low-level factor change was coded as a change toward one of the two anchors of the feature.

TABLE 2. Operational definitions of high- and low-level musical features investigated in the current study.

Due to the high variability of the analyzed musical pieces from a musicological perspective (including the genre, which ranged from classical and jazz to pop and electronica), every song had a different frequency of changes in terms of these six factors.
Hence, we applied a third step of categorization, which led to a more abstract layer of changes in musical features comprising two higher-level factors: motif changes and instrument changes. No missing data or outliers were observed in the survey data. Bivariate correlations were run between the potential confounding variables (the Positive Affect Negative Affect Schedule, PANAS, and the Music Use questionnaire, MUSE) and FA, but no correlations were observed.
A sample of the data obtained for each participant is shown in Figure 2. For this participant, five peak alpha periods were identified (blue arrows at top). Changes in subjective valence and arousal across the piece are shown in the second panel, and the musicological analysis in the final section of the figure. A one-way analysis of variance (ANOVA) was conducted to compare mean subjective ratings of emotional valence across conditions.
Nonetheless, as ANOVAs are robust to violations of normality when group sizes are equal (Howell), parametric tests were retained. No missing data or outliers were observed in the subjective rating data. Figure 3 shows the mean ratings for each condition.
FIGURE 3. Mean subjective emotion ratings (valence and arousal) for the control (silence), unpleasant (dissonant), neutral, and pleasant (self-selected music) conditions.

Figure 3 shows that both the direction and magnitude of subjective emotional valence differed across conditions, with the pleasant condition rated very positively, the unpleasant condition rated negatively, and the control and neutral conditions rated as neutral.
Arousal ratings appeared to be reduced in response to unpleasant and pleasant music. Anecdotal reports from participants indicated that in addition to being very familiar with their own music, participants recognized the unpleasant piece as a dissonant manipulation of their own music selection, and were therefore familiar with it also.
Several participants noted that this made the piece even more unpleasant for them to listen to. Sphericity was met for the arousal ratings but not for the valence ratings, so a Greenhouse–Geisser correction was applied to analyses of the valence ratings. Two-way repeated-measures ANOVAs were conducted on the FA scores, averaged across the baseline period and averaged across each condition, for each of the two frontal electrode pairs and the control parietal site pair. The within-subjects factors were music condition (positive, negative, neutral, and control) and time (baseline and stimulus).
Despite the robustness of ANOVA to assumption violations, caution should be taken in interpreting the results, as both the normality and sphericity assumptions were violated for each electrode pair. Where sphericity was violated, a Greenhouse–Geisser correction was applied. Asymmetry scores above two were considered likely to result from noisy or damaged electrodes (62 data points) and were omitted as missing data, excluded pairwise.
Asymmetry scores of 0 indicate no asymmetry. The greatest difference between baseline and during condition FA scores was for the pleasant music, representative of a positive shift in asymmetry from the right hemisphere to the left when comparing the baseline period to the stimulus period.
The music event description was then examined for the presence or absence of coded musical events within a 10 s time window (5 s before to 5 s after) of the peak FA time-points.
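A sketch of this ±5 s matching rule (the event and peak times, in seconds, are hypothetical):

```python
def events_near_peak(peak_time, event_times, window=5.0):
    """Return the coded music-event times falling within the 10 s
    window from 5 s before to 5 s after a peak FA time-point."""
    return [t for t in event_times
            if peak_time - window <= t <= peak_time + window]

# A hypothetical FA peak at 62 s, with coded events at 45, 59, and 66 s:
matched = events_near_peak(62.0, [45.0, 59.0, 66.0])
# matched -> [59.0, 66.0]; the event at 45 s falls outside the window.
```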
The types of music events coinciding with peak alpha periods are shown in Table 3. A two-step cluster analysis was also performed to explore natural groupings of peak alpha asymmetry events that coincided with distinct combinations (2 or more) of musical features.
TABLE 3. Frequency and percentages of musical features associated with a physiological marker of emotion (peak alpha FA). High-level, low-level, and clusters of music features are distinguished.

Table 3 shows that, considered independently, the most frequent music features associated with peak alpha periods were primarily high-level factors (changes in motif and instruments), with the addition of one low-level factor (pitch). The musical features of these pieces were also examined to explore associations between key musical events and central physiological markers of emotional responding.
The first aim of this study was to examine whether pleasant music elicited physiological reactions in this central marker of emotional responding. This finding confirmed previous research findings and demonstrated that the methodology was robust and appropriate for further investigation.
The second aim was to examine associations between key musical features (affiliated with emotion) contained within participant-selected musical pieces and peaks in FA.
FA peaks were commonly associated with changes in both high and low level music features, including changes in motif, instrument, loudness and pitch, supporting the hypothesis that key events in music are marked by significant physiological changes in the listener. Further, specific combinations of individual musical features were identified that tended to predict FA peaks.
These findings are consistent with previous research indicating that music is capable of eliciting strong felt positive affective reports (Panksepp; Rickard; Juslin et al.). But why do sound waves hitting our ears transfer into real emotions, felt at the very core of our beings? A new study attempts to provide an answer.
A new study by the University of Southern California (USC) has attempted to answer one of our favourite questions: why does music make us feel the way it does?
Indeed, why do the soundwaves reaching our ears turn into physical reactions (think quickened heart rates and dampening eyelids)? Does your heart quicken a little? Can you feel your skin tingle with goosebumps as your ears are hit by those beautiful but angsty chords?
It focused on three aspects of the music listening experience: neural (how our brains respond), physiological (how our bodies respond), and emotional (whether we report feeling happy or sad during listening), and examined 74 musical variables, including rhythm, timbre and volume.
Contrasts in pulse and strength of beats, especially, were found to act on the brain. Implicit memories are memories stored in the unconscious and are a more reactive form of memory. Yet they can still be retrieved by our conscious mind, and they usually last longer than explicit memories. The key to this long-lasting memory capability is that they are generally attached to a specific emotion.
To give you an example: I remember my favourite band playing a surprise gig when I was 16 years old because I was extremely happy and paired that strong emotion with a particular song.
Another research-based theory says that music evokes memories because it is related to movement. Participants got an MRI as they listened to music, and the researchers found that certain parts of the brain that control our motor abilities (the cerebellum and cerebrum) were stimulated while listening to music.
Along with the stimulation of the limbic system in the brain, which controls emotions, this suggests that music, emotion, and movement are all interconnected. It might also explain why I can still perfectly recall biking across the canals of Amsterdam, feeling extremely happy, listening to that song.
And when your playlist strikes all the right chords, the rise of dopamine can take your body on a physiological joyride: increasing your heart rate, raising your body temperature, redirecting blood to your legs, and activating the mission control centre for body movement. These sensations also stimulate our motivation system, making us enjoy a piece of music, derive pleasure from it, want to listen to it again, and be willing to spend money on it.
It almost sounds like a drug. Music, it seems, may affect our brains the same way that sex, gambling, and chocolate do.
But we guess you already knew that. Did you know, though, that about 50 per cent of people get chills when listening to music? Music can be unpredictable, teasing our brains and keeping those dopamine triggers guessing. The greater the build-up, the greater the chill. We love chills.
You can read more about chills at Classic FM. So now you have a better understanding of your brain on music, why music evokes emotions, and why you get goosebumps while listening to Buzzfeed's Spotify playlist. If you are one of those 50 per cent, that is.