Sunday, December 11, 2005

"Wagner: The Rebel"

In 1842 Wagner began to garner approbation and fame from the immense success of his third opera, Rienzi. He and his wife, Minna, soon moved to Dresden, where the opera had premiered. The next year he was appointed Kapellmeister at the Royal Court Theater after conducting the premiere of Der Fliegende Holländer. Over the next five years he would also compose Tannhäuser and Lohengrin, works that marked his growth and maturity as a composer. His musical career and preoccupations did nothing to fetter his political activity, however.
A leftist nationalist movement had been gaining support within the independent German states. Many middle-class citizens felt downtrodden by social inequities and wanted their rights respected and improved, in addition to unifying the weak German states into a stronger nation. Wagner was an earnest participant in this movement, hosting such guests in his home as August Röckel and the Russian anarchist Mikhail Bakunin, an associate of Marx. Röckel happened to be Wagner's assistant conductor and the editor of the Volksblätter, a weekly journal. It was through this journal that Wagner and other political malcontents frequently published their diatribes against the government.
Wagner expressed much zeal not only for political reform, but for cultural and artistic reform as well. He proposed a plan to the government that entailed the formation of a German national theater with an elected director, the organization of a drama school, and the expansion and autonomy of the court orchestra. Such avant-garde ideas, however, were of the same democratic mind-set as the goals of the new nationalist movement, and were therefore rejected.
If there had been anything equivocal or uncertain about Wagner's position among the revolutionaries, it was definitively expunged from everyone's mind in June of 1848, when he gave a speech to the Vaterlandsverein, the most prominent republican group. He spoke of republican goals in relation to the ruling Saxon monarchy. He castigated the corruption frequently correlated with commerce, labeling it a hindrance to the liberties of mankind. He also foretold the downfall of the aristocracy. While Wagner and the other middle-class radicals sought a new, constitutional republic, they maintained that the Saxon king would govern as "the first and truest republican of all." Of course, this idea was propagated not out of approval of the concept, but as a necessary compromise made in light of the republican group's limited power in the face of the monarchy.
Frustration with the Saxon government came to a head in April 1849, when King Frederick Augustus II dissolved his parliament and dismissed the new constitution that his people presented to him. A revolution that would come to be known as the May Uprising broke out in retaliation, but it was soon put down by the combined strength of Saxon and Prussian forces. Warrants were quickly issued for the arrests of all the revolutionaries. Because of Wagner's past political activities and his involvement in the rebellion, he and Minna were forced to flee for their lives. With the help of Franz Liszt, they were sheltered in Weimar, then fled to Paris by means of a false passport. They soon left Paris and finally settled in Zurich, Switzerland, where Wagner spent the next twelve years in exile with Minna.

"Wagner, (Wilhelm) Richard." Microsoft Encarta Online Encyclopedia. Microsoft Corporation, 2005.

"Richard Wagner." Wikipedia: The Free Encyclopedia. Wikimedia Foundation, 2005.

"Richard Wagner." Grove Music Online. Oxford University Press, 2005.

Saturday, December 10, 2005

The Origins of Music: Innateness, Uniqueness, and Evolution

Hello, my article is titled The Origins of Music: Innateness, Uniqueness, and Evolution, and although the entire article focuses on just that, my presentation focuses on explaining how the article was written and researched.
My article was long (24 pages, to be exact). If I were to stand here and give a detailed, in-depth presentation of the entire article, I would most likely be standing here for half an hour, maybe even longer. Therefore, I will briefly review the article by talking about how it is set up. Think of it as an outline of a textbook chapter or a review of a three-hour movie. [next slide]
The article begins by pointing out that music is universal: it is found in every known human culture, past and present, and is incorporated into many cultural events, including weddings, funerals, religious services, dances, sporting events, and solitary listening sessions. The article asks why an individual has his or her own unique musical preferences: is it because of cultural upbringing or an innate mechanism? The facts that every culture in the world has some form of music and that most developed independently of each other suggest that some innate machinery motivates the production and appreciation of music. By giving a detailed account of the innate mechanisms behind music and how they interact with cultural experience, the strong constraints on evolutionary explanations of music become clearer.
Music is by no means easily defined. The article uses its own definition: music is structured sound produced directly or indirectly by humans, made to convey emotion and to produce enjoyment, and often possessing complex structure. The article also defines its use of “innate traits”: traits determined by factors present in an individual from birth, even though the trait may not appear until later in development. [next slide] As for theoretical background, the article highlights four main kinds of evidence: developmental, comparative, cross-cultural, and neural (brain) evidence. [next slide]
Developmental evidence comes mainly from studies of infants, as this is one of the most obvious ways to study whether any aspects of music perception are innate. In a classic setup for an infant study, a sample of music is played repeatedly from a speaker in front of an infant. Once the infant is used to the music, the experimenter conducts test trials, some of which introduce a change to the music sample, such as a key change or a rearrangement of the notes. If the infant is sensitive to the change, it will tend to look longer at the speaker. It is important to study infants because they lack the cultural exposure that adults have been subject to their whole lives. [next slide]
Comparative evidence comes from studies of animals, which, much as with infants, limit musical exposure and its effects. Some infant and animal tests are the same, such as placing the animal in front of a speaker and watching its reaction as the music changes. Other techniques include training animals to recognize or discriminate different types of music, then testing them with new, untrained music to see how they handle it. Why study animals and music if we are interested in humans and music? Animal studies show whether the trait in question is unique to humans. If the trait is not uniquely human, tests in multiple species can reveal whether it evolved as a homology (inherited from a common ancestor that expressed the trait) or a homoplasy (shared across two distinct lineages lacking a common ancestor with the trait). Animal studies can also help establish whether the trait evolved as an adaptation to a particular problem. Comparative studies can provide insights into the evolution of music that are difficult to obtain with other methods. [next slide]
Cross-cultural evidence deals with the studies of music perception in different cultures. The common features of different cultures provide evidence of innate constraints on what people are inclined to perceptually discriminate, remember, and enjoy. Similarities between musical styles from different periods might indicate that there are innate constraints on the music cultures are likely to produce. The features of music that have not undergone change most likely represent musical features that are stable given the brain’s tendencies or constraints. There is some risk to this hypothesis in that common features might have been simply passed down across the ages and are not indications of anything built into the brain. [next slide]
Neural evidence suggests that genetic constraints on music might also be indicated by the existence of brain circuitry dedicated to music, such as circuitry used primarily during music perception or production. The larger issue, though, is that even if there is evidence that part of the brain functions specifically for music perception, it is difficult to rule out the possibility that the music-specific structures in question emerged through a lifetime of musical experience rather than innate constraints. [next slide]
The article goes in depth by giving countless study examples for each of the four points. If any of you are interested in this topic at all, I suggest reading (or skimming, as it is very long and tends to repeat itself) this article, as it is filled with different studies and hypotheses.

Friday, December 09, 2005

"How Loud Is My Voice Inside My Mouth and Throat"

In “How Loud Is My Voice Inside My Mouth and Throat?,” one of the Journal of Singing’s articles for this year, Ingo R. Titze explains how the sound produced within the larynx is not the same sound heard by the listener. He isn’t referring to tone color, or even resonance. What Mr. Titze is talking about in this article is how the actual loudness of the sound is greatly diminished after leaving the throat.

Mr. Titze mentions two different terms usually associated with sound intensity, loudness being the more common of the two. Loudness is what is referred to as a “psycho-acoustical measurement.” This means that loudness is a subjective term that has no specific definition or precise scale, but is subject to the internal interpretation of the listener. For example, at home my mother will often complain that the television is too loud while my brother and I are watching it. We do not necessarily think that the volume is too loud, but that is the way my mother perceives it. Even when borrowing someone else’s headphones, I doubt you would leave them on the previous volume setting; we all have different opinions about what loud is. There is no specific level where loudness becomes present, nor is there a specific level where it diminishes or becomes nonexistent.

An example more pertinent to our lives as musicians is the variation in the same dynamic marking between different performers, or even in the same performer playing the same piece at different times. We are taught from the beginning of our musical training that forte means to play loudly and piano means to play softly. Yet there is no one set level of volume for either of these dynamic markings, nor for any dynamic marking. The fortes and pianos one plays can depend upon the health of the performer, the particular instrument played, the size of the performing venue, and various other factors. The actual precision of the dynamics in a piece must then be measured with respect to the dynamics of all the notes and sections in relation to each other. For example, the louder one's normal forte, the less soft the performer's piano; likewise, the player or singer's fortissimo would have to be louder in order to remain consistent with the forte and the other dynamic changes.

So in short, the loudness of sound is subjective and capricious, set in accordance with many other volatile factors. Yet there is a way scientists accurately measure the intensity of sound emission: by measuring the sound pressure level, or SPL. While loudness is a subjective judgment, the SPL is a physical measure. In spite of this, the SPL is a useful method of estimating the generally perceived loudness.

This estimation is possible because of the standard reference pressure used to measure acoustic emission, 0.00002 pascals (20 micropascals). This level of sound pressure is so minuscule that it is just barely discernible even by the best ears in the most ideal surroundings. Our ears can handle sounds over a million times this reference pressure, however, so a logarithmic equation is used to calculate the SPL: SPL = 20 log10(P/P0) dB. The SPL is always measured in decibels. Inside our vocal tract airway, the sound pressure can exceed 1,000 pascals.
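As a quick illustration of that logarithmic formula, here is a minimal sketch in Python, assuming the standard 20-micropascal reference pressure:

```python
import math

P0 = 2e-5  # standard reference pressure: 0.00002 pascals (20 micropascals)

def spl_db(pressure_pa):
    """Sound pressure level in decibels: 20 * log10(P / P0)."""
    return 20 * math.log10(pressure_pa / P0)

print(round(spl_db(P0)))    # the reference pressure itself sits at 0 dB
print(round(spl_db(1000)))  # 1,000 Pa inside the vocal tract: about 154 dB
```

The result for 1,000 Pa, around 154 dB, is far beyond the pain threshold, which is the point the article makes about the pressures inside the airway.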
Different areas of the airway naturally have different pressures as well, spanning a great range of disparate levels. For example, the pressure in the lungs can range from 800 Pa to over 3,000 Pa. The body is thus capable of producing sound well over the pain threshold.
So if our body is such a great conduit of sound, why does so much less pressure reach our listeners? One reason is that the acoustic energy radiates outward from the head in an ever-expanding sphere. The sound is distributed across more and more surface area, which greatly reduces the SPL. Another cause is that much of our sound is reflected back by the mouth, tongue, and other internal structures, also diminishing the intensity of the sound.
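The spreading effect can be quantified: as a spherical wavefront expands, the same power covers four times the area for every doubling of distance, costing about 6 dB. A hedged sketch, assuming idealized free-field spreading with no reflections or absorption:

```python
import math

def spreading_loss_db(r1, r2):
    """SPL drop as a spherical wavefront expands from radius r1 to r2 (meters).
    Idealized free field: ignores reflection and absorption."""
    return 20 * math.log10(r2 / r1)

print(round(spreading_loss_db(1, 2), 1))    # doubling the distance: 6.0 dB
print(round(spreading_loss_db(1, 100), 1))  # from 1 m to 100 m: 40.0 dB
```

Real rooms reflect sound back toward the listener, so measured losses are smaller than this free-field idealization.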

In conclusion, the SPL is greatly diminished upon exiting the body because of the reflection of sound and the increase of the area over which the sound is spread. Within our own airways the sound is significantly louder. Keep in mind that if the sound were not so affected upon exiting the mouth, one would be able to hear a singer from twenty miles away. This is, of course, a hypothetical situation, but one that gives a great appreciation for the natural, untampered power of the human voice.

Titze, Ingo R. "How Loud Is My Voice Inside My Mouth and Throat?" Journal of Singing 19 (2005): 177-178.

The Feeling of Music Past

The Feeling of Music Past: How Listeners Remember Musical Affect

Article written by Alexander Rozin

Report written by Gabriel Yonkler

The study discussed in this article was conducted to determine how listeners derive evaluations of past musical experiences from moment-to-moment increments as well as overall evaluation. The experiment suggests possible ways and reasons people feel and remember music based on intensity.

Background Knowledge
A musical experience is felt differently based on specific moments of the experience or piece. The characteristics of moment-to-moment listening that influence “remembered musical affect” are the music’s intensity and quality. Despite momentary differences in felt musical affect, listeners form an overall evaluation of an experience or piece after it has ended, reducing the many different affects felt into a single word, or even a cliché (e.g., sad, good, out of this world). This article investigates how listeners remember these musical influences; how they assess a past musically affective experience. A study of remembered musical affect and how it derives from moment-to-moment affect is a crucial element of a total understanding of musical influence. “How does a listener’s designation of a piece as ‘sad’ or ‘intense’ emerge from a continually changing experience that may last several hours?”

Past Research and Results
Experiment #1: The first experiment, led by Daniel Kahneman, offers insight on how to explore the relationship between remembered affect and moment-to-moment affect. Although this experiment did not relate to music, and Kahneman's broader program dealt with pain rather than musical intensity, the research can still aid the current study. Participants evaluated plotless movie clips, from ocean waves to an active volcano, which yielded pleasant/unpleasant responses and overall evaluations. The statistical analysis showed that the best predictor of remembered affect for positive film clips is the average of the highest moment-to-moment rating (the affective peak) during the clip and the momentary rating at the end of the clip. This is what Kahneman called the peak-end rule. For negative film clips, the affective peak rating alone appeared to be the best predictor of remembered affect. In both positive and negative examples, the duration of the film clips had no influence on the retrospective ratings. This is known as duration neglect. Although it seems rational to think that adding moments of pleasure makes an experience better and adding pain makes it worse, this experiment refutes that logic.
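The peak-end rule and duration neglect can be illustrated with a toy calculation; the rating values below are hypothetical, and the averaging form of the rule is an assumption for illustration:

```python
def peak_end(ratings):
    """Peak-end rule sketch: remembered affect tracks the average of the most
    extreme momentary rating and the final rating, not the total or length."""
    return (max(ratings) + ratings[-1]) / 2

short_clip = [2, 8, 3]               # brief clip with an affective peak of 8
long_clip = [2, 8, 3, 3, 3, 3, 3]    # same peak and same ending, much longer
print(peak_end(short_clip))  # 5.5
print(peak_end(long_clip))   # also 5.5: duration neglect in action
```

Both sequences get the same predicted remembered affect even though one lasts more than twice as long, which is the counterintuitive point of duration neglect.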
Experiment #2: This experiment further investigates the duration neglect found in the first experiment. Each participant provided on-line and retrospective ratings of two separate experiences. In the first trial, participants dipped one hand in 14˚C (57.2˚F) water and left it submerged for sixty seconds. The second trial was the same, except that after the initial sixty seconds the experimenters warmed the water from 14˚C to 15˚C (59˚F) over an additional thirty seconds. If the peak-end rule predicts remembered affect as it did for the positive film clips, then participants should have preferred the second trial, because it ended at a higher temperature, meaning lower pain or less discomfort, than the first. This proved to be the case: most participants chose to repeat the second trial. Again, the length of the trial had no effect on remembered affect, even though the longer trial contained strictly more total discomfort; it was the ending temperature that shaped remembered affect.

Hsee and Abelson (1991)
Hsee and Abelson believed that the significance of other moment-to-moment characteristics goes beyond the peak-end rule. They suggest that the positive or negative slope of change within a musical experience is perhaps the dominant factor in evaluating past experiences. Evidence shows that participants naturally prefer decreasing sequences of pain to increasing sequences.

The Main Experiment
  • “Hypothesis 1. Duration Neglect: The length of a piece of music should contribute minimally to the remembered affect.”

  • “Hypothesis 2. Peak Significance: The peak of momentary affective intensity should disproportionately influence remembered affect.”

  • “Hypothesis 3. End Significance: The last momentary affective intensity should disproportionately influence retrospective evaluations of affect.”

  • “Hypothesis 4. Slope Significance: The slope of moment-to-moment intensity experience should influence remembered intensity in some significant way.  Perhaps a larger, more positive slope translates into better memory encoding.”

The following study tests the validity of these hypotheses for music.  How do listeners derive a single remembered intensity from moment-to-moment intensities?  Do any or all of these effects hold for experiencing music?

(Turn on projector)

Procedure: Twenty participants from the University of Pennsylvania (seven male, thirteen female, average age 21, with musical training ranging from 0 to 15 years) listened to fourteen different selections (see Table 1), each played twice consecutively. The selections varied from pop music to classical, and the length of each ranged from forty seconds to three minutes. While the participants listened, a pressure-sensitive button on the right arm of the recliner in which they were seated was used to record moment-to-moment emotional intensity. To gather information on remembered affect, in addition to the pressure-sensitive button, the participants filled out a questionnaire after each selection.
Results: For eighteen of the twenty participants, the correlation between remembered intensity and liking was positive. The more intense a listener's memories of a piece of music, the more that listener likes the piece. Table 2 is a clear representation of this observation. As you can also see from Table 2, eighteen of the twenty participants had a positive correlation between remembered intensity and familiarity. Although not as influential as the connection with liking, remembered intensity does appear to depend on familiarity. Table 3 is a clear indication of duration neglect: remembered intensity is better predicted by the average of moment-to-moment affects than by their sum for nineteen of the twenty participants. (Change slide to figs. 4 and 5.)
     Data gathered from the experiment does not support the peak-end rule. Peak plus offset was the best predictor of remembered intensity for only three participants, compared with five for average, five for offset, and seven for peak. Further analysis reveals that peak and end significance are examples of more general effects. End significance results from a recency effect. Recency is a familiar property of memory: the greater the amount of time between initial learning of information and the attempt to recall it, the less accurate the recall. Peak significance is an indicator of an intensity effect. This concept can be understood through the correlations between remembered intensity and intensity-ranked values of on-line experience. Listeners derive remembered experience predominantly from the most intense moments of on-line experience; the least intense moments contribute relatively little to affective memory.
     The last effect observed in the data is a slope effect (change slides). The slope of a moment-to-moment curve is measured as the difference between one moment and its immediate antecedent, which in this experiment is the sample taken 0.1 seconds earlier (refer to figure 3).
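That slope measure amounts to a first difference between adjacent samples. A quick sketch, with hypothetical button-pressure values:

```python
def momentary_slope(curve, i):
    """Slope measure from the article: the difference between the intensity
    at sample i and its immediate antecedent, the sample 0.1 s earlier."""
    return curve[i] - curve[i - 1]

intensity = [0.10, 0.30, 0.60, 0.55]  # button pressure sampled every 0.1 s
print(momentary_slope(intensity, 2) > 0)  # True: intensity rising here
print(momentary_slope(intensity, 3) > 0)  # False: intensity falling here
```

A positive value marks a rising stretch of the intensity curve, which is the kind of moment the slope hypothesis predicts will be encoded more strongly.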

(Turn on slide showing title.) Good morning, everyone! Right now I will ask everyone to take out a scratch piece of paper and a pencil. I will play 15 seconds of a piece, and I would like you to count how many soft tones are in the melody. (Play melody, then give a few seconds for review.) I will now play it once more, and I would like you to see if you have missed any. (Talk about the answers everyone got.) This introduces the topic: Differential Brain Response to Metrical Accents in Isochronous Auditory Sequences. Within such sequences, listeners may distinguish the tones from one another and hear the accenting of the sound events. Grouping is one of the basic processes performed while listening to music.
(Change slide.) The Basic Process of Grouping: the sequence of events is segmented into units, which limits processing time and memory demands.
Regardless of how old you are or how much musical education you have received, you will conform to a general pattern in the sequence.
Perceptual and cognitive principles have been proposed by Drake & Bertrand to be universal.
(Change slide) Phenomenal Accents
“any event at the musical surface that gives emphasis or stress to a moment in the musical flow.”
“The physical properties of certain tones, the time intervals between them, or their serial position within the sequences make some events more perceptually salient (accented) than others.”
The researchers behind the article studied these differing brain reactions. They recruited a group of males and females, let them play the melody as many times as they wanted, and then had them write about what they heard.

Influences of Temporal Fluctuation on Infant Attention

In the auditory world of human life, the events that occur, and how we as listeners react to them, are based largely on predictability. Whether listening to a Mozart sonata, a clock ticking, or a bell chiming every hour, we hear these different events according to whether or not they are predictable. The article puts it simply enough: “A soothing lullaby exemplifies a highly predictable event…a sudden exclamatory vocalization exemplifies a relatively unpredictable event.”
The article by Nakata and Mitani describes an experiment testing infant attention to the regularity and irregularity of controlled sounds. Before describing the experiment, the authors mention numerous examples from various works dealing with infants' reactions to sound sequences. For instance, infants can detect subtle “auditory events,” like differences in the time between “brief tones” (Trehub, Schneider, and Henderson). Infants are also said to be able to detect acceleration from a constant-tempo sound sequence of intermediate tempo if the ending tempo is 15% faster than the original (Baruch and Drake).
The article goes on to discuss information dealing with the ability of adults and infants to differentiate meter-preserving and meter-violating musical patterns. American adults could tell a difference in simple meters that had disturbances, but not in complex meters (often found in music of Eastern Europe, Middle East, Africa, and South Asia). Adults from Bulgaria and Macedonia who are familiar with simple and complex meters were able to hear the violations in both meters. Oddly enough, American infants, who had little contact with western and eastern music, were also able to hear the disturbances contained in both the simple and complex meters. (Hannon and Trehub)
The last topic before the experiment discusses “infants’ responsiveness to the maternal speech style.” First they mention adult directed speech, normal talk, and infant directed speech, baby talk. Many feel that babies are more likely to pay attention to infant speech because of the exaggerated pitches and contour lines (Fernald and Kuhl), but further research reasons that babies have no preference of adult directed versus infant directed based on pitch modulation alone (Colombo and Horowitz). They hypothesized that the rhythm of infant directed speech plays an important role in keeping the infants’ attention. Next, they described different ideas dealing with maternal singing opposed to maternal speech. An experiment by Shenfield, Trehub, and Nakata provided evidence to suggest that singing increases the level of health and lowers stress in infants. The last idea by Drake and Bertrand says that humans are predisposed to consistent sound based on internal oscillations and expectancies of future events.
The second half of the article addresses the experiment. Nakata and Mitani decided to test attention toward regular and irregular sound sequences for infants in two age groups, 6-8 months and 9-11 months old. They theorized that regular sound sequences would keep the attention of both age groups, while irregular sequences, which require more cognitive processing, would hold the older infants' attention better than the younger infants'. They made sure to note that all infants were healthy, free from congestion or ear infections, and had no family history of hearing loss, to ensure that poor hearing would not skew the results. The participants consisted of 17 infants between 6 and 8 months of age and 15 infants between 9 and 11 months. An iBook computer presented the visual and audio stimuli. The main stimuli consisted of sound sequences resembling a xylophone tone, each lasting 30 seconds, with tones occurring either regularly or irregularly. The visual stimulus was needed so that the infants would link the sound to the image, allowing the testers to recognize and record the time the infants spent looking at the image (listening to the music) versus not listening. The results found that the younger infants looked longer at the regular sequence in the last five trials than in the first five but lost interest in the irregular sequence, showing shorter looking times in the last five trials. The older infants had similar looking times for the regular and irregular sequences, with both decreasing from the first five trials to the last five.
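The two stimulus conditions can be sketched as onset-time sequences; note that the base interval and jitter range here are illustrative assumptions, not the study's actual parameters:

```python
import random

def onset_times(n_tones, base_ioi=0.5, jitter=0.0, seed=1):
    """Tone onset times in seconds. jitter=0 gives a perfectly regular
    sequence; jitter>0 randomly perturbs each inter-onset interval, as in
    the irregular condition. All parameter values are illustrative."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n_tones):
        times.append(round(t, 3))
        t += base_ioi + rng.uniform(-jitter, jitter)
    return times

print(onset_times(4))              # regular: [0.0, 0.5, 1.0, 1.5]
print(onset_times(4, jitter=0.2))  # irregular: unpredictable spacing
```

The regular condition lets a listener predict every onset; the jittered condition defeats that prediction, which is exactly the contrast the looking-time measure probes.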
In conclusion, the authors were correct in certain portions of their hypothesis. They predicted that the regular sequence would hold infants' attention longer than the irregular sequence. For the younger infants this was exactly what happened, with their attention to the regular sequence increasing and to the irregular sequence decreasing. The older infants' attention to the two sequences, on the other hand, stayed similar, decreasing in both tests. The younger infants' results make sense in light of studies of motor development showing that regularity is very important at 6 months; infants are said to kick at regular intervals until about that age.
The article goes on further to mention a study by Deckner, Adamson and Bakeman, stating that “rhythmic maternal vocalizations were associated with good infant language outcomes.” Experiments on auditory temporal patterns and attention still have much more research to carry out, controlled tests to run, and outcomes to analyze.

Harmonious Music Helps You Keep Control

(SLIDE) Norman Cook's recent book Tone of Voice and Mind: The Connections Between Intonation, Emotion, Cognition, and Consciousness is examined chapter by chapter in this article by Klaus R. Scherer and David Sander, appearing in the journal Music Perception. (CHANGE SLIDE)
First, a quick look is taken at Cook's previous book, The Brain Code, and Scherer and Sander say that expectations are high for the new book. They call Cook "a thoughtful and provocative author in the surging domain of the neurosciences," that is, the sciences dealing with the nervous system. Cook is said to find new ways to think about things, as opposed to the traditional, "inside-the-box" thinking of some modern scientists. (CHANGE SLIDE)
The main argument of Cook's book is that the human brain does not contain two completely different hemispheres, but rather two parts with complementary components. A musical example would be that the right side of the brain picks up on pitch, harmony, and melody, while the left side focuses on processing the rhythm and tempo of tones. Cook uses another example, how humans process language, to support this idea.
One thing the authors disagree with Cook about is his claim that this asymmetry of the hemispheres does not hold for animals. Cook says that this fundamental principle dictates human nature, while the authors point out that animals share it as well, pointing specifically to songbirds. (CHANGE SLIDE) Studies have shown that canaries display song control with the left hemisphere and harmonic control with the right hemisphere. (CHANGE SLIDE) Pointing this out, the authors make the amusing comment that this conclusion points out not only "what it means to be a human but also…what it means to be a canary." (CHANGE SLIDE)
The next chapters focus on the affective, or emotion-causing, properties of pitch in spoken words. What is suggested is that when someone is in high spirits and a good mood, the intonation of their speech will reflect this by generally outlining major triads, and when in low spirits and a bad mood, the triads will be minor. The authors may agree, but they believe this information is presented in a sketchy and unclear manner. For example, they note that Cook referenced the general consensus that major chords are linked to happiness while minor tones are linked to sadness, but, according to the authors, with little reference to examples dealing with speech. (CHANGE SLIDE)
Cook's reference deals with animal calls. He says that higher, shriller animal calls are associated with inferiority in status, whereas lower, stronger calls display dominance. The authors point out that Cook does not take into account that a lion's low and high pitches are much different in range than a monkey's low and high pitches. Also, threats meaning to display dominance may sound gruff, while high pitched, shrill alarm calls do not sound minor and dark.
Another main argument is that all of these moods revolve around the fact that minor scales are differentiated from major scales by harmonic ambiguity or tension. The authors note this several times, referring back to Cook's work and to the previous situations, such as animal calls. It is not merely the pitch that determines whether a call is dominant or submissive, but subtle changes in the tones, creating harmonic consonances (pleasing sounds) or harmonic dissonances (unsettling, incomplete sounds). However, the authors find inconsistencies in this thought as well. Cook does not explain why a decrease in one of the pitches of a chord, which normally creates dissonance, actually creates a bright, happy sound, while a change in the fundamental frequency, or bass pitch, is foreboding when animals are trying to scare or dominate others. (CHANGE SLIDE)
The authors tend to agree with Cook's ideas, but frequently find them incomplete or contradictory. There are a few more chapters, but the article breezes through them toward the end, stating only a few facts about the last part of the book. In general, Scherer and Sander are saying that Cook is on the right track, but his ideas, at least in this book, are not yet complete enough to be taken seriously by the neuroscientific community.

Thursday, December 08, 2005

Is there a perception based alternative to kinematic models of tempo rubato?
By Henkjan Honing

Kinematic models of tempo rubato are commonly accepted, but Honing questions whether they are adequate, since they leave out three aspects of the music:
Number of events
Overall rhythmic structure of the piece
Overall tempo of the piece

A huge part of kinematic structure is the final ritard. Friberg and Sundberg, scholars who studied the final ritard in depth, found that it evokes a sense of inner human movement; they also modeled it with a mathematical equation based on the way runners decelerate.

Note density also plays a role in kinematic structure. The question that note density raises is whether the rhythm is syncopated or the tempo is indeed slowing.

Rhythmic structure: quantizers take the timing patterns after the tempo of the piece is presented and predict the perceived duration of the slowing tempo. The best/most useful quantizers are symbolic, connectionist, and traditional. It has been assumed that a musician intuitively follows the rhythmic structure.

The global tempo is the overall speed at which an expressive performance is played. The overall tempo of the piece constrains expressive freedom.

In conclusion, this study finds that the closest mathematical resemblance to a final ritard is a square root function; in layman's terms, the higher the tempo, the greater the chance of a grander final ritard.
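As a rough illustration, that square-root relationship can be sketched as a curve of normalized tempo against normalized score position. This is only a sketch: it assumes a kinematic model of the Friberg–Sundberg type with exponent q = 2 (the square-root case), and the final-tempo fraction `v_end` below is an arbitrary example value, not a figure from the article.

```python
def ritard_tempo(x, v_end=0.5, q=2.0):
    """Normalized tempo at normalized score position x, where x = 0 is the
    onset of the ritard and x = 1 the final note. With q = 2 the curve is a
    square-root function; v_end is the final tempo as a fraction of the
    pre-ritard tempo (0.5 here is an arbitrary example value)."""
    return (1.0 + (v_end ** q - 1.0) * x) ** (1.0 / q)

# The tempo falls gently at first and more steeply toward the end:
curve = [round(ritard_tempo(i / 10), 3) for i in range(11)]
```

The curve starts at 1.0 (full tempo) and decreases monotonically to `v_end`, which is why a faster starting tempo leaves more room for a pronounced slow-down.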

"Voicecraft" Study

The article by Alison D. Bagnall and Kirsty McCulloch is about a study on “The Impact of Specific Exertion on the Efficiency and Ease of the Voice.” Previous and recent voice literature has encouraged avoiding exertion, muscular tension, and strain, relaxing instead to produce a better sound. Speech pathologists follow the same approach with patients with voice disorders through “progressive relaxation,” which is done by tensing and relaxing certain parts of the body, one part at a time, to understand what relaxation truly feels like. Contrary to the idea that relaxation produces the best voicing is the belief that those using a great deal of energy and proper amounts of exertion will produce the best tone. “Voicecraft” is a technique based on the physiological and acoustical studies of Honda and Estill; it focuses on exertion control to help with the health, stamina, and versatility of the voice for both speakers and singers by controlling the laryngeal muscles for efficient functioning of the vocal folds. Its focus is to help the voice user become aware of the difference between the less desirable sound and the desired one. To help establish the best form of exertion, whether controlled or none at all, a pilot study was conducted.

The study was made up of 10 volunteers between the ages of 18 and 50: 7 women and 3 men, comprising 7 speech and language therapists and 3 singers with different levels of training. These volunteers participated in “Voicecraft” workshops held in 2002 in England and Greece. The workshops ranged from 3- to 6-day trainings which taught the volunteers different techniques for controlling the larynx to rid the voice of strain and remain comfortable in order to produce a clear sound. The technique helped the volunteers contract certain muscles in an orderly fashion and perceive the difference between different amounts of exertion, but it does not deal with breathing techniques and avoids all references to relaxation. To help determine whether the “Voicecraft” training is beneficial, each subject was recorded with a Sony Digital Audio Tape Walkman recorder and a JVC Binaural Headphone Stereo Microphone immediately before and after the training. A subject sang or read a passage, according to their profession, for both recordings, as well as sustaining the [i] sound for analysis, then filled out a questionnaire. In this questionnaire the subjects were asked to rate their total exertion on a 7-point scale, with 7 being the most. The second question asked them to assign percentages of that total exertion to specific body parts. Lastly, they were asked to rate their total comfort during both recordings on a 4-point scale ranging from “uncomfortable” to “extraordinarily easy and comfortable.”

To begin the analysis of the data, the digital recordings of the sustained [i] sound were sent to Kay’s Computerized Speech Laboratory in Lincoln Park, NJ. Similar 10-second sections of both the pre-training and post-training recordings were analyzed for noise-to-harmonic ratio, jitter, and shimmer, to measure the efficiency of the signal. Then the recordings of the passages and songs were analyzed by 6 experts: 3 speech pathologists and 3 experienced voice instructors. These experts were asked to compare each subject’s pair of recordings, played in random order, and decide which recording had the better tone quality with respect to “audible breathiness, strain, or roughness.”
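The article does not spell out how jitter was computed, but a common definition, and the one assumed in this sketch, is the mean absolute difference between consecutive glottal cycle periods divided by the mean period:

```python
def local_jitter(periods):
    """Local jitter: mean absolute difference between consecutive cycle
    periods, divided by the mean period. `periods` is a list of glottal
    cycle durations in seconds; multiply by 100 for a percentage."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

# A perfectly steady sustained [i] has zero jitter; wobble raises it.
steady = [0.008] * 10           # a 125 Hz voice, perfectly periodic
wobbly = [0.008, 0.0082] * 5    # alternating slightly long and short cycles
```

Shimmer is computed analogously from cycle-to-cycle amplitudes rather than periods; both measures rise as the voice signal becomes less efficient.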

The results of the questionnaires completed by the subjects show they rated “comfort and ease” significantly higher for the post-training recordings; the average rating was 3.5 on the 4-point scale. 50% of the subjects stated their voices were uncomfortable pre-training, but post-training all of the subjects reported complete comfort, with half of them calling it “extraordinarily easy and comfortable.” Overall exertion showed little change on the 7-point scale: 4 subjects said they felt an increase, 1 said it remained the same, and 5 said it had decreased. The assignment of percentages to specific body parts showed an increase in exertion only in the head and neck, while the chest, throat, and abdomen all showed a decrease. The most significant change in exertion was an increase in the back: just over 18 percentage points, from an average of 11.78% to 30%. Although 8 out of 10 subjects reported their back muscles working harder, none of them reported working any less.

The results for the noise-to-harmonic ratio of the sustained [i] sound showed a significant difference, with an audible improvement in breathiness, strain, and roughness. The perceptual analysis by the experts also reported an improvement. The experts agreed on which recording of each pair was the post-training sample, but agreed that the post-training recording was outstanding with regard to breathiness, strain, and roughness for only 8 of the 10 subjects.

These results show the “Voicecraft” training is efficient and effective, producing a significant improvement in the quality of the voice while increasing the level of comfort and ease. The subjects were never told to sing a specific way during either recording, yet the post-training recordings show that the subjects had remembered and applied the techniques. “Voicecraft” targets a reduction of strain in the throat, abdomen, and chest and an increase of exertion in the head, neck, and back. The technique’s efficiency is shown by the decrease in the average percentage of overall exertion in the throat, chest, and abdomen and the significant increase in the back. Relaxation was never mentioned during the training, yet it is evident that controlled relaxation in specific areas occurred, which allowed for the controlled exertion in the other areas. This study of “Voicecraft” training shows it is effective not to be completely relaxed when seeking the best voice quality, but it is not the only effective training: other similar techniques are also being tested and studied, in hopes of defining the best use of controlled relaxation and exertion to produce the clearest voice.

Effects of a Change in Instrumentation on the Recognition of Musical Materials


[Timbre- tone color: brass has a different timbre than woodwinds]

The main purpose of this study was to determine whether or not different instrumentation has an effect on a person’s ability to recognize a melody. The study also provided an opportunity to determine whether timbre, along with pitch and rhythm, helps to define the identity of a musical work. For example, would Moonlight Sonata be as recognizable if it were performed by an orchestra instead of a piano? In the history of Western music, the timbre of a piece of music, which is created by its instrumentation, has always been considered less important than its pitches and its rhythms.

Several previous studies contributed to the formation of this study’s hypothesis. The first, a study completed by Peretz & Kolinsky in 1993, asked participants to make various judgments about two consecutively sounded pitches. When the two pitches were performed on the same instrument, those participating in the study were able to judge them faster and with greater accuracy than when the pitches were performed on two different instruments.

A study completed by Pitt & Crowder in 1992 suggested that the timbre of a pitch can actually influence someone’s perception of the pitch. In this study, subjects were again asked to listen to two consecutive tones and determine whether the second pitch was the same as or different from the first. When the timbre remained constant, the listeners had no trouble providing the correct answers; however, when the timbre changed, the listeners’ accuracy decreased significantly.

Contrary to the Pitt and Crowder study, however, was the Semal and Demany study of 1991, which attempted to study the relationship between pitch memory and timbre. They concluded that pitch memory is independent of timbre and that there is no correlation between the pitch and the timbre of a note.

But again in 1998, a study by Peretz, Gaudreau, and Bonnel showed the strong effect of timbre change on melody recognition. Melodies that stayed in the same timbre were recognized with greater accuracy than melodies that were performed in different timbres.

Because of the results of these studies, along with a few others, it was believed that the timbre of a piece of music would be influential in the work’s recognition.

The Trials

For the first trial, 73 students from a university in France participated: 29 were regular university students, and 44 were music students. Musical excerpts from The Angel of Death, a piece composed by Reynolds in 2004, were played for them. 18 of the excerpts were performed on piano, and 18 were played by a chamber orchestra.
The experimental procedure was performed in two phases. In the first phase, which was referred to as “the learning phase,” participants were asked to carefully listen to 9 excerpts, which were a combination of piano and orchestral recordings. They were told before the examples were played that there would be a recognition test following the learning phase.

In the recognition phase of the study, the participants were asked to listen to 18 excerpts. 9 of them were those they had listened to earlier, and 9 were other sections of the same work. Sometimes, the timbre of the old sections was different than in the first hearing, and sometimes it remained the same.

They were then asked whether or not they had heard each of the 18 excerpts previously. For the non-musicians, the percentages of correct answers always hovered between 50 and 60%, or just a little over chance. Only with the trained musicians did the timbre appear to have a significant effect.

It was then decided that since the participants had been told, before listening to the original 9 excerpts, that they were going to be tested later on recognition, the results may have been misleading. So a second trial, nearly identical to the first, was prepared and set into motion, the only difference being that the students were not informed of the second phase of the study.

The results of the second trial were nearly identical to those of the first. When the same excerpt was replayed in the same timbre (both performed by an orchestra), the accuracy of the participants in recognizing it was 80%. However, when a melody was performed first by an orchestra, and then by a piano, the accuracy was only 55%.

There was also a third trial performed as part of this study. The purpose was the same, and the only difference was the category of musical excerpt chosen. Instead of contemporary instrumental music, the chosen selection was Liszt’s Symphonic Poem #3. The results of this third study were similar to the two previous ones.

In conclusion, the timbre of a musical work can influence an audience’s recognition of it.


What is sound, and how do our bodies deal with sound? That's what I'm going to talk about today. Most people would describe sound by saying it is something you hear. The word is actually used to describe two different things: 1) an auditory sensation in the ear (more precisely, at the eardrum) and 2) the disturbance in a medium that causes this sensation. But there is a lot to be explained in these two definitions.
Sound is carried through gases, liquids, and solids by sound waves. When we think of "waves," we usually think of the ocean. Sound waves create a different kind of motion, but generally all waves--light waves, water waves, shock waves, and radio waves--have some similar properties. They can be reflected or refracted, which means that they bend as they travel from one medium to another. Think of a corridor with a bend. One side has a light, and after the bend there are no lights in the hallway. You are standing in the dark portion and you can't see where the light is coming from, but you can still see light. [I'll be drawing a figure on the board.] This is because the light waves bounce off the walls until they reach the bend, travel down the perpendicular corridor, and enter your eyes. The same thing would happen if a person were talking in the same spot as our supposed light bulb--you can't see the person, but the sound waves are still reaching your ear. The waves transport information and energy through a medium without actually transporting the medium itself. In space, sound cannot travel because there is no medium for it to pass through. A sound is a disturbance, or small movement, that passes through the particles of the medium. Imagine air particles between a stereo and a person's ear. The sound waves leave the stereo and bump into the air particles closest to it. [Another figure drawn by me.] These particles quiver a little and bump into their neighboring particles. This chain reaction continues all the way to your ear, where the particles collide with your eardrum and make it vibrate. This vibration sends a signal to your brain that you are hearing something. The brain then interprets the sounds as music or words. Louder sounds are perceived as such because the particles shake and collide with each other harder and consequently hit your ear faster and harder. The particles return to their calm state after the waves have passed.
Sound waves are slower than other kinds of waves, traveling at 343 m/s (1,125 ft/s) in air, compared to light waves, which travel at 3×10^8 m/s, or about 186,000 miles per second.
The threshold of audibility is the minimum pressure fluctuation to which your ear can respond. For most people, the threshold lies at about one billionth of atmospheric pressure. On the flip side, the threshold of pain corresponds to a sound pressure change about 1 million times greater, but still less than 1/1000 of atmospheric pressure. [Table 3.2]
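These thresholds are usually expressed in decibels of sound pressure level (SPL). As a quick sketch, assuming the standard 20 µPa reference pressure:

```python
import math

P_REF = 20e-6  # Pa; the conventional reference, near the threshold of audibility

def spl_db(pressure_pa):
    """Sound pressure level in dB relative to 20 micropascals."""
    return 20.0 * math.log10(pressure_pa / P_REF)

# A pressure a million times the reference comes out at 120 dB (near the
# threshold of pain), yet 20 Pa is still well under 1/1000 of atmospheric
# pressure (roughly 101 Pa).
pain_level = spl_db(20e-6 * 1_000_000)
```

The logarithmic scale is what lets one number cover the millionfold pressure range between barely audible and painful.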
Sound pressure levels are different from "loudness" because loudness is a subjective thing. After hearing a rock concert, people usually talk louder than normal because their ears have gotten used to hearing loud noises and base all other noises on previous sounds. Loudness also depends on the frequency of the sound: high sounds will always seem louder to our ears than lower sounds, even when they have the same sound pressure level. This is important for musicians to remember in order to keep a balance of sounds, whether you are a pianist or play in a group setting. High sounds seem louder because they almost always fall in the range of 1000-4000 Hz, which is where the ear is most sensitive. Music usually falls in the middle of the range of sounds in both hertz and decibels. [Table 3.3]
Music tries to express different levels of loudness through dynamics. There are six common ones, from pp to ff, but research has shown that musicians rarely play all six dynamics throughout any given concert. As musicians, we need to remember that loudness is all relative. A piece may be marked piano, but that doesn't mean you necessarily have to start very soft; instead, you need to be able to make a much louder sound relative to the beginning sound in order for your audience to recognize the difference between p and f. Many wind and string instruments rarely span more than 20 dB, although there are multiple exceptions to this statement, for example saxophones and violins. This also highly depends on how far the listener is from the instrument: generally, it is established that the sound pressure level decreases 6 dB as the distance doubles. It also depends on the source power of the instrument. This power, like electrical power, is measured in watts; however, to keep the numbers practical, people express the power level in decibels using a reference level of 10^-12 watts. The most powerful instrument in the orchestra is the bass drum, which emits a power level of 20 W, or 133 dB, at its loudest. At a distance of 1 meter the level is 122 dB, while at 10 meters it is still loud at 102 dB. This explains why the bass drum can always be heard over the entire orchestra. It also causes a problem in the balance of the orchestra's sound, because some tones can be masked by other tones.
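The two dB rules in this paragraph, power level relative to 10^-12 W and a 6 dB drop per doubling of distance, can be sketched numerically. This is a free-field idealization; real rooms reflect sound and behave differently.

```python
import math

def power_level_db(watts):
    """Sound power level in dB re 10^-12 W."""
    return 10.0 * math.log10(watts / 1e-12)

def spl_at_distance(spl_at_1m, meters):
    """Free-field spreading: the level falls 6 dB per doubling of
    distance, which is the same as 20 dB per tenfold increase."""
    return spl_at_1m - 20.0 * math.log10(meters)

bass_drum_level = power_level_db(20.0)      # about 133 dB, matching the text
level_10m = spl_at_distance(122.0, 10.0)    # 122 dB at 1 m becomes 102 dB at 10 m
```

Running the numbers this way confirms the figures in the text: 20 W works out to roughly 133 dB of sound power, and going from 1 m to 10 m removes exactly 20 dB.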
Masking is the upward shift in the hearing threshold of a weaker tone caused by a stronger one. "Weak" and "strong" are just words used to describe the relationship and do not correspond with "softer" and "louder," although that obviously can happen. Masking can occur when two tones happen simultaneously, but more surprising is the fact that masking can happen even when they do not.
Now that I've discussed what loudness is and how it relates to the human ear, I want to discuss the difference between the loudness of a short, impulsive sound and the loudness of a steady sound at the same level. Previous experiments have shown that the ear averages sounds over about 200 milliseconds, so the perceived loudness grows over time up to that point. People cannot feel this growth because 200 milliseconds is a very short time.
The human ear has developed defenses to protect itself from the damage of very loud sounds. The ear can give effective protection of up to 20 dB through the muscles attached to the eardrum and to the bones of the middle ear, called the ossicles. When the sound level reaches over 83 dB, the middle-ear muscles stiffen the ossicular chain and pull the stapes away from the cochlea. This reaction to loud sounds is called the acoustic reflex. However, this reflex doesn't kick in until 30 or 40 milliseconds after the sound begins, and the reaction of covering the ears with your hands usually occurs only after about 150 milliseconds. This means that loud, unexpected sounds can damage the ear before any reaction can occur. An interesting concept to think about is what kind of protection the human body would have developed had the loud sounds of the modern world existed for millions of years. Maybe something like earlids?

Presentation Transcript

“Music Perception”
Differential Brain Response to Metrical Accents in Isochronous Auditory Sequences
By Donna Abecasis and others
Equidistant Tones: as it Relates to the Brain and Groupings
What makes for the meter of a song?
(4 quarter notes on the same line repeated)
Grouping in Music Introduction
1) Very Basic Process that occurs because of memory stipulations
2) Statistics
I. Children and Adults Alike
II. Leads to accents (Strong vs. Weak Beats)
3) Accents are a basic form of organization that starts through brain waves. Your brain waves cause the accent, the accent does not cause the brain wave.
Example: The way people say words sometimes depends on where they
think the accent lies in the word or phrase.
I. Phenomenal Accents--any events in music that carry a stress--were researched in this article and are its basis
II. (aside)Structural accents and Metrical accents exist but are not emphasized
III. Accents are then organized into what are called events or happenings

3b) Recap grouping to accentuation and emphasize the word events
3) Events (actions): the time that one note or “event” takes can cause it to be accented simply because of the way the line is written by the composer. Pitch and other factors can also lead to accentuation; that is, different properties can lead to a note being accented.
Some theories say: An accent is an event... An attention getting event
Theory 2 Says: Changes are better processed on accented events than on unaccented ones.

4) Now we arrive at the question of the day. What happens when there are no changes in stimuli? Can we prove that accentuation starts in the mind, not just in events that cause accentuation?
5) With the use of identical sound and time sequences (reference the quarter note example) we can begin to experiment and make some conclusions.
6) People will generally report that they hear a group of tones (usually 2,3, or 4 tones) with an accent on the first tone of each group. (Subjective Rhythmization)
7) So, we could simply ask people, but since accents are attention-getting events, we can measure brain waves to come to a more substantial answer.
8) Components of measurements for brain waves are used to measure auditory significance.
9) Now that a conclusion has been reached about holding each note constant, we can move on to changing one note to be higher or lower while holding all else constant, i.e., holding rhythm constant.
I. The beats that were tested were 8, 9, 10, and 11. This gives the individual time to hear accented rhythms in their mind.
II. A change in tone was placed on each of these beats. But each time the tone was changed, it was changed to the exact same deviant tone, just at a different position among beats 8, 9, 10, and 11.
10) The conclusion was that people's brain activity increased more when changes occurred on the oddly numbered events than when they occurred on the evenly numbered events.
Explain: 8 and 10 versus 9 and 11
Experiment Concluded
11) What if the changes occurred by making one beat longer and holding all else constant?
12) Used an alternating long-short pattern (binary) and a long-short-short pattern (ternary), holding everything else equal.
13) Used EEG recordings of brain waves; the same technique of changing tones 8, 9, 10, and 11 was applied, only with elongation being the change rather than a change in tone.
14) Two groups were separated (binary and ternary) and a control group was used.
15) The data collected supported the hypothesis that on the “weak beats” people's responses would be heightened. (Significantly larger amplitude at 9 and 11.)
Refer to page 7
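The stimulus design described in steps 9-13 can be sketched in code. The actual tone frequencies and sequence length aren't given in this summary, so the values below (a 440 Hz standard tone, a 494 Hz deviant, 12 tones per sequence) are made-up placeholders:

```python
def make_sequence(length=12, deviant_pos=None, standard=440.0, deviant=494.0):
    """Build an isochronous sequence of identical tones, represented here
    simply as a list of frequencies in Hz, with at most one deviant tone
    placed at a 1-based position (8-11 were the positions probed)."""
    tones = [standard] * length
    if deviant_pos is not None:
        tones[deviant_pos - 1] = deviant
    return tones

control = make_sequence()  # all tones identical, as in the baseline condition
probes = {pos: make_sequence(deviant_pos=pos) for pos in (8, 9, 10, 11)}
```

Contrasting brain responses to the deviant at odd positions (9 and 11) against even positions (8 and 10) is the comparison the study reports, even though the sequences are physically identical apart from where the single change falls.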

Tuesday, December 06, 2005

"Voice Onset Time"

In this study, an experiment was done to assess the differences in Voice Onset Time (VOT) between trained and non-trained male singers. The hypothesis was that the trained males would have longer VOTs, especially during singing, because of the physiological changes caused by vocal training and the alleged recurrent weakness of articulation during singing. It was also hypothesized, however, that trained and non-trained singers' articulatory timing during speaking would be similar. The experiment was conducted by having five trained and five non-trained male singers sing and speak several phrases, each time with a specific, designated vowel. The results showed a very distinct, pronounced difference in VOT between trained and non-trained singing. While the hypothesis correctly predicted that trained singers sing with longer VOTs than non-singers, it was found that this is due to more deliberate, articulate phonation, not imprecision. The hypothesis was also correct in predicting that VOT during speaking would be similar between the two groups; the difference there was negligible.

McCrea, Christopher R., and Richard J. Morris. "Comparisons of Voice Onset Time for Trained Male Singers and Male Nonsingers During Speaking and Singing." The Journal of Voice 19 (2005): 420-429.

Sunday, December 04, 2005

Influences of Temporal Fluctuation on Infant Attention

Our lives are filled with recognizable rhythms, characteristic tempos, and beginnings and endings, which make up important time-based features of our auditory environment. These auditory events can be judged by their predictability: when one rhythm is played over and over, it becomes predictable, but when it is changed with an accented note or altered rhythm, it becomes slightly less predictable. Infants, in contrast with adults, generally hear many more subtle auditory events, which makes them more capable of hearing changes in time and more capable of reacting to them. With these facts in mind, an experiment entitled "Influences of Temporal Fluctuation on Infant Attention" was done by Takayuki Nakata and Chisato Mitani at Nagasaki Junshin Catholic University; the results were presented in the Spring 2005 edition of the journal Music Perception. In this experiment, infants' attention to sequences of sound was examined as a function of the repetition and regularity of timing and the infants' ability to familiarize themselves with the sequences.

According to Takayuki Nakata and Chisato Mitani, the purpose of this experiment was "to compare infant's attentional responsiveness to sound sequences that differed in temporal coherence." In other words, they wanted to see how well an infant remembered and recalled sound sequences that had changed in some fashion. The reason they did this experiment was that no studies had been done on infants' temporal, or time-based, attention preferences for regular and irregular sound sequences. Previous studies, such as Nakata and Trehub in 2004, had done similar work based on recordings of infants' mothers speaking or singing. In that study, they found that regularity in tempo plays a key role in keeping the attention of an infant. The difference is that the Nakata and Trehub study took only regularity of tempo into account.

In order to take this difference into account, it was hypothesized by Nakata and Mitani that "regular sound would be more effective than irregular sound sequences in sustaining attention among younger infants, 6-8 months of age, and older infants, 9-11 months of age." The study itself was made up of infants from the two age groups listed above. All of the infants were healthy and had no history of ear infections or family history of hearing loss. An iBook computer was placed in front of each infant to present auditory and visual stimuli; digital sound files were played from the computer through a speaker behind it. White paper and cloth were placed over everything so the infant could see only the monitor and a puppet used to attract the infant's attention before every trial. Regular and irregular sound sequences were played for the children in 30-second intervals. The infants were tested individually in a quiet child-care facility while sitting on a female caretaker's lap. When the infant was alert and calm, the experimenter started the monitor, which flashed alternating red and black images every 1/3 of a second. Then the sound sequences began. The infants were given 10 trials of regular sequences and 10 trials of irregular sequences in random order, with no more than two of the same type played in a row at any time.

The results of this experiment supported their predictions for infants 6-8 months of age, but not for those 9-11 months of age. The younger infants paid better attention to regular than irregular sound sequences in the last five trials. The older infants did not show different degrees of listening on the basis of sequence regularity, and they showed less attention than the younger infants to both regular and irregular sound sequences. The experiment also showed that the sex of the infants and which sound sequence was presented first had no effect on attention. Overall, these results thoroughly showed that infants need regularity; not only did the results support this, but the caretakers reported noticing it during the experiment. This research can also apply to infants' language learning: if regularity is used in teaching language, students will learn it more effectively. The same goes for music in general. Repetition in music is crucial in the musical learning process.

Wednesday, November 30, 2005

Exertion on Efficiency and Ease of Voice

"The Impact of Specific Exertion on the Efficiency and Ease of the Voice: A Pilot Study" Alison D. Bagnall and Kirsty McCulloch

Previous and current voice literature encourages relaxation for better voicing, to avoid muscular tension, strain, and force, for both singers and those who speak publicly. However, evidence shows that those whose head and neck musculatures are active rather than passive have more energy while performing. This article goes through the physiological aspects of using one's voice and the process and findings of an experiment based on “Voicecraft” training. The experiment involved 10 volunteers: men and women, speech and language therapists and singers, young adults and elders. Each subject was recorded immediately before and after participating in a 3- or 6-day “Voicecraft” training, then asked to complete a questionnaire assessing their ease and comfort, overall and for specific body parts. Judges then listened to the pre-training and post-training recordings and assessed the differences. 80% of the judges' assessments matched the personal assessments, which showed the subjects' comfort and ease were significantly higher after the training, with no significant change in overall exertion. Although the results of this pilot study conflict with previous teachings and beliefs, they may help define the best way to use the voice to increase clarity and stamina and maintain vocal health.


Abstract on “The relationship between the piano teacher in private practice and music in the National Curriculum”

By Elizabeth Goddard (From the Cambridge Journal Online- Oct 2002)

This article investigates the relationship between the private music teacher and the music of the National Curriculum used in most schools. It explores present insights and practices in order to gain a clearer understanding of how the two groups coincide today. Numerous strategies for improving communication between school and private teachers are discussed, with the hope that pupils will feel their private lessons work in harmony with their music in school. The research suggests that increased awareness is needed to develop the relationship between the teaching philosophies of private teachers and the National Curriculum. The article clearly explains that the two groups need to cooperate with each other in order to further the education of today’s musicians.

Basic Saxophone Skills: Reeds Part I

This is the first in a series of articles titled Basic Saxophone Skills; the present installments are devoted to the conditioning of reeds.

The main ingredient of a saxophonist’s sound is the reed. The reed’s significance is often overlooked by many players, when it should be focused on more than any other element of the saxophone. Unprepared reeds lead to all kinds of problems, and this article will teach you how to assess your reeds as well as take care of them. Reeds played directly from the box cause many problems: most manufacturers sell them without giving each reed the individual attention it needs. Paul Berler discusses a process of “breaking the reeds in,” which conditions them to last longer and be ready for action when needed.

The Fish

Music is the force that allows humans to express what cannot be expressed in words or by other means. Music education is instruction in the symbols used for this mode of expression; this idea comes from the fact that communication occurs through symbols, most easily recognizable in human language. Several "theories" have arisen about how and why to teach music, and about how students learn these lessons. Musical content should have a conceptual focus, meaning that musical concepts should be taught rather than hard facts alone or mere skills for reproducing music on instruments or with the voice. The way students learn must allow for individual pacing, and learning deals with experiences, and the rate at which those experiences are encountered, engaging the whole person through action, emotion, and cognition. All learning, however, occurs only within one's social and cultural environment. Sharing is also a large part of teaching, and it should encompass holistic experiences, involving the whole being in musical activity. This is not the only mode of teaching, but it is a solid foundation for theories of music education and instruction.

Abstract for the article "Generating a Theory of Music Instruction" by Eunice Boardman, from the January 2001 issue of the Music Educators Journal.

Woody's Misconceptions Put to Ease

Robert Woody takes five frequent misconceptions about music research and puts them to rest with his dual explanations, an "Element of Truth" and an "Informed Perspective." Topics such as "Scientific study of music diminishes its magic" and "The artificial environment of a research experiment is nothing like a real-life music classroom" are each answered twice: once acknowledging the truth in the statement, and once offering an informed perspective on it. Both answers, by default and by the writer's choice, lead to the ultimate "yes and no." The 'misconceptions' include the following: "Research addresses esoteric topics that most music teachers wouldn't be interested in," "Scientific study of music diminishes its magic," "There are some things in music that cannot be measured by research," "The artificial environment of a research experiment is nothing like a real-life music classroom," and "Statistics can be used to prove anything."