
Posts from Brian McLaren

🔗"John H. Chalmers" <non12@...>

9/23/1995 8:36:56 AM
I will be uploading one of a new set of messages from
Brian McLaren per day for the next few weeks. At his request, I will
not title them in the Subject line. (My own posts will have titles
so they can be distinguished from his, though I imagine content and
tone will be sufficient indications of authorship.)
Also, I do not have time at the present to edit or reformat
them. So, read'em at your own risk. :-).
As Brian does not have a telephone or email, any comments or
questions that need a quick answer should be sent to him by US mail,
though I do send him the accumulated Tuning Digests on a disk about
once a month. His US mail address is the following:

Brian McLaren
2462 S.E. Micah Place
Corvallis, OR 97333-1966-17
USA

(Note the non-standard ZIP code. OR is the state of Oregon.)

--John


🔗"John H. Chalmers" <non12@...>

9/24/1995 9:55:13 AM
From: mclaren
Subject: Tuning & Psychoacoustics - Post 1 of 25
---
The rules of Western harmony, to paraphrase
Voltaire, are a lie commonly agreed upon.
Ultimately, the music we choose to make
is limited (or liberated) by our understanding
of what our ears hear, and how.
Over the course of more than a year various
forum subscribers have treated us to
a Mount Everest of misinformation about
what the ear hears, how the brain interprets it,
and how sounds change during the complex and
surprising process we call listening.
These fairy tales and "just so" stories about hearing
and the ear are common currency. They are
the misinformation about the ear/brain system
that "everyone knows." And, like giant
alligators in the sewers and detectives
photographing an image of a murderer in a
corpse's pupils, these tall tales never seem
to go away.
This post is the first of a series which
will examine the evidence about what
the ear actually hears and how. These posts
will discuss some of the *facts* of the
ear/brain system, as opposed to the
fantasies and canards that
"everyone knows are true."
---
First, it's important to understand that
some subscribers will react violently to this
series of posts.
Many composers, musicians and
performers will angrily attempt to refute the
facts listed here. These violent reactions
will arise partly out of surprise, partly from
an unwillingness to relinquish long-held
beliefs, and partly because the facts of the
ear/brain system are not yet widely known
outside the realm of psychoacoustics and
psychophysics. In fact, the majority of
today's composers and music theorists
exist in a blissful state of ignorance
about the ear/brain system--a state similar
to that which characterized clerics in the days
when Galileo first pointed his telescope
at the moon. Back then, "everyone knew"
that the stars were fixed in Aristotle's
crystal spheres; "everyone knew" that the
moon and sun belonged to a celestial
sphere unchanging and perfect; "everyone
knew" that the planets rotated around the
earth, and that no satellites circled (say)
Jupiter or Saturn; "everyone knew" that
Aristotle was the beginning and the end
of all knowledge, and "everyone knew"
that there remained only the tiniest
crumbs of knowledge yet to be gleaned
about a universe which was perfectly
ordered, perfectly simple, and--
by and large--perfectly understood.
---
When Galileo turned his lens to the
moon and discovered that it had
mountains, and when he observed
satellites around Jupiter, and when
he saw new stars in the sky,
he was called, alternately, "ignorant,"
"a charlatan," "an imposter," "well-
meaning but ignorant of Aristotle's
teachings," "too stupid to properly
interpret what he saw through
his telescope," and so on.
Many of the best-educated men and
women of Galileo's time refused
to look through his telescope at the
sky, because they *knew* that
his claims could not possibly be
true.
It's sadly easy to deduce that all of
the above antics will be duplicated
in the course of this or that subscriber's
reaction to this series of posts.
---
This sounds shocking. It is. In saying
this, I assert that most musicians and
composers today are ignorant of how
the ear hears and how the brain interprets
sound. More: I assert that they are not
only ignorant, but actively and perversely
misinformed. Lastly, I assert that much
of this misinformation hampers the
progress of music and interferes with
our ability even to conceive new
universes of harmony and melody.
What we cannot perceive, we cannot
explore; and when we cannot explore,
we stagnate.
Much of the myth and fantasy which fills
musicians' heads is promulgated by
so-called "modern" academia using
so-called "modern" music theory
textbooks (the content of which actually
hails from the 17th, 18th and 19th
centuries).
The next post will present some of
the surprising characteristics of the ear/
brain system, and some recommendations
for references.
--mclaren


🔗"John H. Chalmers" <non12@...>

9/25/1995 8:19:24 AM
From: mclaren
Subject: Tuning & psychoacoustics - post 2 of 25
---
Many and strange are the myths which
afflict so-called "modern" music theory,
particularly when it comes to the operation
of the human ear.
Most of these tall tales have been handed
down to present-day musicians from the 19th
century, although some of the myths date
from much earlier--as early as the number
mysticism of Pythagoras, the Babylonians,
the Egyptians, and of Hindu astrology.
Shockingly, typical "modern" music theory
texts cite Helmholtz, Rameau and Mersenne
as the sole authorities on acoustics and
psychoacoustics--or they cite other
"modern" music theory texts which cite
only these musty & antique sources.
This is comparable to a "modern" science
text citing Lagrange, Hamilton and
Newton as authorities on the nature of
subatomic physics. A physics professor
who wrote such a book would be laughed
out the profession--but for some reason
this practice is acceptable in music.
Together, this antique trove of musical
old wives' tales and acoustic "just-so stories"
constitutes a body of misinformation
which has been handed down through
textbooks which thoughtlessly draw on
older textbooks, until the chain of
errors reaches back through 3 centuries
or more.
Partch has illuminated a few links in this
monumental chain of fabulation and
compounded error, but the amount of
misinformation is far larger than
even he could ever have suspected.
---
Every statement presented in this series
of posts as fact will be supported
as far as possible by references from
the scientific literature. When subscribers
to this forum violently attack these
posts--as no doubt they will--the interested reader
is advised not to rely on *my* bare assertions
*or* the unsupported claims of those
who say "it's [a lie/ignorant/wrong, etc]"
Rather, the interested reader is
advised to go to the original reference
sources. Read them. Search out the audio
tapes & CDs specified below. Listen to them.
And finally: perform your
own experiments with Csound or a
synthesizer, using your own ears and
a computer.
This last point is *crucial.*
You will need to perform true double-
blind tests on your own hearing to
obtain valid conclusions. If you concoct
a set of test tones and listen to them
*knowing* what they are, your ears
will lie to you and you will literally
not be able to hear the test tones
and acoustic examples objectively.
Only by using a true double-blind
procedure can you reliably ascertain
what your ears *actually* hear, as opposed
to what you *think* you're hearing.
This is why the most common
objection to the facts of modern
psychoacoustics--"I don't hear it that
way!"--is utterly meaningless. Without
A-B-X double-blind tests, none of you
can tell what intervals you prefer (nor
can I) because the knowledge of what you
*think* you're listening to and what
you *expect* to hear contaminates
and alters what you hear.
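For readers who want to try this, here is a minimal sketch of an A-B-X trial runner in Python. It assumes two playback callables, play_a and play_b (for example, wrappers around whatever player renders your two pre-made test files); these names, the trial count, and the guessing threshold mentioned in the comments are illustrative assumptions, not part of any standard listening-test package.

# Minimal A-B-X trial sketch. play_a/play_b are assumed callables that play the
# two stimuli under comparison (e.g. pre-rendered audio of a just 3/2 fifth and
# a 700-cent fifth). A sketch, not a validated listening-test tool.
import random

def run_abx(play_a, play_b, trials=20):
    correct = 0
    for trial in range(trials):
        x_is_a = random.choice([True, False])      # hidden identity of X
        play_a()                                   # reference A
        play_b()                                   # reference B
        (play_a if x_is_a else play_b)()           # unknown X
        answer = input(f"Trial {trial + 1}: X sounded like (a) or (b)? ").strip().lower()
        if (answer == "a") == x_is_a:
            correct += 1
    # With 20 trials, about 15 or more correct is unlikely (~2%) by guessing alone.
    print(f"{correct}/{trials} correct")
    return correct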
---
This point is so important that it is
worth an example:
"The extent to which observers can persist
in the same error of observation was shown
to me by the following experiment. High-fidelity
fans complained about the nonlinear distortion
in a certain sound-transmission system. To test
the maximal distortion these listeners would
tolerate, an induction coil was made with an iron
core that was highly overloaded and produced
nonlinear hysteresis distortion. A second coil
containing no iron was combined with the first
in such a way that only the pure distortion remained.
Musicians were delighted with this system, which
made tenors' voices sound metallic and heightened
the dynamics of the orchestra. Their adjustments of
the system to optimal sound had about 70% pure iron
distortion." [von Bekesy, G., "Hearing Theories and
Complex Sounds," Journ. Acoust. Soc. Am, 35(4),
April 1963, pg. 589]
Without an objective reference and
an independent means of measurement,
*we do not know what we hear.*
Over the course of this series of posts,
it will become clear that the ear sometimes
adds to, sometimes subtracts from, and
always changes the information that
enters our auditory system as
sound waves.
For example:
[1] Highly trained symphony orchestra
musicians regularly perform intervals as
small as 683 cents and as wide as 725 cents,
yet hear them as "perfect fifths" (see the
cents sketch following this list);
["Some Aspects of Perception - I," Shackford,
Journ. Mus. Theory, Vol. 5, 1961, pp. 13-26]
[2] So-called "perfect" intervals, including
the octave and 3/2, can sound dissonant
or consonant depending on the range in
which they sound, even if they use
harmonic-series timbres; ["The Science
of Musical Sound," Sundberg, 1992, pg. 73]
[3] Pitches transposed up an octave can
be heard by musically trained listeners
as dropping slightly in pitch; ["The Science
Of Musical Sounds," Pierce, 1992, pg. 214]
[4] Tones of specific pitch can be heard
by musically trained listeners when in
fact no tones are physically present; and
tones can disappear and become inaudible
to the ear/brain system even though
they're presented to the ear at high
amplitude; ["Psychoacoustics: Facts and
Models," Zwicker & Fastl, 1993, pp. 91-135;
"The Perception of Musical Tones," Rasch
and Plomp, pp. 1-21, in "The Psychology
of Music," ed. Diana Deutsch, 1982; "The
Science of Musical Sounds," Sundberg,
pp. 48-86.]
[5] The ear/brain system can generate
audible illusions which convince the
listener that s/he is hearing paradoxical
and impossible sounds--sounds which
simultaneously speed up and slow down,
for instance, or sounds which simultaneously
rise and fall in pitch; or sounds which
rise endlessly in pitch, or fall endlessly
in pitch; ["The Science of Musical Sounds,"
Pierce, pg. 215; "Structural Representations
of Musical Pitch," Shepard, in "The
Psychology of Music," Ed. Diana Deutsch,
1982, pp. 334-373.]
[6] There is a universal human craving for
stretched intervals, which leads highly
trained musicians to perform so-called
"perfect" intervals consistently wider
than the ratios by which "everyone knows"
these intervals are defined; ["The Science
of Musical Sound," Sundberg, pp. 104-105,
"Introduction to The Physics and
Psychophysics of Music," Roederer, pg. 155]
[7] The ear/brain system detects pitch
in a complex way still not fully understood,
with the result that the pitch of a complex
sound is perceived to change with
the loudness of the sound, the amount
and onset of noise masking the sound,
the type of other harmonic sounds played
simultaneously, the degree of harmonicity
of the partials in the sound, the length
of the sound being played, the spectral
centroid of the sound, and the suddenness
of onset of the sound; ["Experiments
On Tone Sensation," Plomp, pp. 127-129;
"Introduction to the Physics and
Psychophysics of Music," Roederer, pg.
135; "Perception of Timbral Analogies,"
Wessel & Ehresman, Rapports IRCAM 1978,
pp. 1-29, Pickles, James O., "An Introduction
to the Physiology of Hearing," Academic Press,
1988, pp. 270 ff., etc.]
[8] Many of the inner workings of the
ear/brain system are still unknown, and
each of the conflicting theories of how
the ear/brain system hears is supported
by some psychoacoustic evidence, but
contradicted by the rest.
["Experiments On Tones Sensation," Plomp,
pp. 49-52; "The Science of Musical
Sound," Sundberg, pp. 100, 186; "The Science
of Musical Sounds," Pierce, pp. 101, 113-114;
"Rapports IRCAM - Musical Acoustics,"
Risset, 1978, pg. 8; "Introduction To the
Physics and Psychophysics of Music,"
Roederer, 1973, pp. 130-133]
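As a quick aid to thinking about the sizes quoted in [1] and [6], here is a small Python sketch of the cents arithmetic involved; the 3/2 figure follows from the definition of the cent, while the sample stretched-octave value is only an illustration, not a number taken from the references above.

# Cents <-> ratio arithmetic (cents = 1200 * log2(ratio)); the stretched-octave
# value in the final comment is illustrative, not a measurement from the studies.
from math import log2

def cents(ratio):
    return 1200.0 * log2(ratio)

def ratio(cents_value):
    return 2.0 ** (cents_value / 1200.0)

print(f"3/2  = {cents(3/2):.2f} cents")        # ~701.96 cents
print(f"683 cents  -> ratio {ratio(683):.4f}")
print(f"725 cents  -> ratio {ratio(725):.4f}")
# A stretched octave of, say, 1210 cents corresponds to a ratio of about 2.0116,
# roughly 0.6% wider than 2/1.
print(f"1210 cents -> ratio {ratio(1210):.4f}")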
---
All of which points to the conclusion that
the ear/brain system is *complex.*
There is no simple explanation for how we
hear. The ear/brain system generates
false information, destroys some of
what comes into our ears, and transforms
all of it, either subtly or grossly.
Yet the single common thread that will run
through all the so-called "rebuttals" and
attacks on this series of posts will be:
THE EAR IS SIMPLE. "Helmholtz explained it
all," one person will yelp, while others will
screech "Terhardt explained it all," or "Backus'
book tells you everything you need to know!"
The interested reader is advised, again,
not to believe me *or* to believe those who
attempt to rebut me. Rather, the interested
reader is advised to *study the psychoacoustic
literature,* excerpts of which and references
from which will be listed extensively in every
post.
Only by doing this can the objective reader
get a real sense of the extraordinary complexity
of human hearing, and the self-evident
falsity of claims that "the ear is simple"
and "Helmholtz explained it all in 1863,"
and "my 1939-vintage references don't say that."
Do not lend your credence thoughtlessly to
*any* statement without *testing for yourself*
the evidence (or lack thereof) for that
statement. Wisdom does not arise from
credulity, but from doubt.
--mclaren


🔗"John H. Chalmers" <non12@...>

9/26/1995 10:34:05 AM
From: mclaren
Subject: Tuning & psychoacoustics - Post 3 of 25
---
The psychoacoustic literature overwhelmingly points to the conclusion that,
first and foremost, the ear/brain system is *complex.*
There is no one simple explanation for how we hear. The ear/brain system
generates false information, throws away some of the sound waves
physically received by the ear, and transforms the rest, either subtly or
grossly, in the process first of encoding the physical rarefactions and
compressions of air into neural impulses, and subsequently processing the
neural information in higher brain centers.
While the physical process by which the organ of Corti responds to incoming
sound waves and the impulses produced in the auditory nerve are known facts, the arguments among various investigators arise from the *interpretations* of these facts. Some psychoacoustic
researchers claim a primary role for the physical acoustic transduction of sound in the ear itself, while
others claim that the operation of higher brain functions on the encoded nerve impulses is most important (the theory of tuning as a learned response).
Because of the extraordinary variation in reliability and competence among
the various authors of psychoacoustic texts over the last century, and
because of the rapid progress in the field (which has rendered many earlier
texts obsolete), the interested reader is advised *not* to unquestioningly
believe *any* references dated earlier than 1970.
Rather, the reader is advised to *study as much of the
psychoacoustic literature as possible*, excerpts of which and references to
which are listed extensively in this article.
Only in this way can the objective reader get a real sense of the
extraordinary complexity of human hearing, and the provable falsity of
claims that "the ear is simple," "Helmholtz explained it all," or "so-and-so's
book on musical acoustics written in the 1950s tells us everything we need
to know about human hearing."
Reading is not enough. Since the subject is what we hear and how, the
reader must also listen and make up hi/r own mind.
After perusing this post, the reader is strongly advised to listen to the
following tapes/CDs and study the following references:
[1] "Auditory Perception: An Audio Training Course," by F. Alton Everest. This
is the best single audio-tape set of examples of classic experiments
demonstrating the complexity of the hearing process. At $159 this 104-page
manual and 4 audio cassette set isn't cheap. However, if you read the manual
and listen to the tapes, you'll quickly learn the basics of how we *actually*
hear (as opposed to how most musical theory textbooks and all too many
outdated acoustics and psychology texts *claim* we hear).
[2] "The Science of Musical Sound," by John R. Pierce, 2nd edition, 1992, with
accompanying audio cassette, covers the simplest elements of
psychoacoustics.
The cassette is useful for elementary phenomena--binaural beats,
"streaming," the critical band, consonance of simple vs. complex tones, etc.,
but it cannot substitute for Everest's far more complete set of
demonstrations.
[3] Houtsma, Rossing and Wagenaars, 1987, "Auditory Demonstrations,"
Philips 1126-061. An 80-track CD compendium of psychoacoustic
demonstrations of many psychoacoustic phenomena.
[4] Mathews, ed., "Sound Examples: Current Directions In Computer Music,"
MIT Press, 1989. A disc with a wider range of synthesized psychoacoustic
examples than the original cassette companion [2] above.
[5] "Introduction To the Physics and Psychophysics of Music," 2nd ed. 1973
and 3rd ed. 1995 by Juan Roederer contains one of the best general
discussions of the psychoacoustic literature up to that time (1973).
Roederer covers a wide range of surprising characteristics of the ear/brain
system which are entirely ignored by less complete, and sometimes
completely misinformed or out-of-date texts published around the same
period.
[6] "The Science of Music Sounds" by Johan Sundberg (1992) is one of the
best general references on modern psychoacoustics to date. It contains more
up-to-date citations than any text other than Zwicker and Fastl, and it
quotes a wider range of sources than any other text but Sundberg
(1992) and Deutsch (1982).
[7] "The Psychology of Music," ed. Diana Deutsch, 1982, contains an excellent
cross-section of definitive summaries of various psychoacoustic phenomena
by the leaders in the field.
[8] "Psychoacoustics: Facts and Models," by Zwicker and Fastly, 1993, is the
best in-depth discussion of experimental psychoacoustics. It does not
discuss the various theoretical models of the ear/brain system and does not
cover streaming, nor does it consider the musical implications of
psychoacoustics. Within its limits, however, it's the best reference on the
experimental side of the field for the specialist.
[9] "Audition" by Pierre Buser and Michael Imbert (translated by R.H. Kay),
1995, is the most detailed book on the physical structure of the ear/brain
system to date. It also offers the most complete picture to date of the
neural structure of the ear/brain pathway, along with a micrometric
discussion of the various kinds of neurons which respond to
different frequencies, amplitudes, frequency differences, etc. passed along
the auditory nerve. This book also does not discuss large-scale theoretical
models of the ear/brain system, nor does it concern itself with such high-
level phenomena as categorical perception or auditory illusions; but on the
level of the physical neural structure of the ear/brain pathway it is
unmatched.
Lastly, readers should *avoid* the statements about psychoacoustics
contained in many of the following well-meaning but outdated or simply
erroneous texts:
"The Acoustical Foundations of Music" by Backus, 1969, contains accurate
information on acoustics and the physics of some musical instruments.
Unfortunately, almost all of Backus' statements about psychoacoustics had
been proven incomplete or incorrect by the time of publication of his book
(1969).
For example:
"The sense of pitch (related to vibration frequency) is thus partly
determined by the place along the basilar membrane where the vibration
amplitude is largest. There must be other factors also, since for sounds
close together in frequency, especially at low frequencies, the difference in
motion of the basilar membrane does not appear great enough to account for
the pitch discrimination of a good musician." [Backus, 1969, pg. 81]
This statement is correct but incomplete: the periodicity theory of hearing
can explain pitch discrimination at low fundamental frequencies but Backus
never mentions it. In fact the word "periodicity" does not appear in the index
of his book.
Again:
"Complex tones may also be built up out of harmonics but with the
fundamental omitted. The ear generally hears such tones as having the
fundamental frequency, even though there is no actual vibration of that
frequency present in the sound. This missing fundamental effect is
explained on the basis of difference tones, since any two adjacent
harmonics will have a difference tone at the fundamental frequency."
[Backus, 1969, pg. 106]
This explanation dates from Helmholtz's time (1860s) and is known to be
incorrect. "One of the experimental results in Chapter 3 was that the
detectability threshold for combination tones is significantly lower for
small than for large frequency differences between the primary tones. From
this, the conclusion was drawn that the ear's distortion cannot be
represented by a frequency-independent nonlinear characteristic." [Plomp,
1966, pg. 121]
"Helmholtz's belief that summation tones and difference tones are the most
prominent aural combination tones has become very widespread. However,
recent psychophysical and physiological experiments have revealed that this
belief is unjustified (Zwicker, 1955; Holdstein, 1970). Evidence for aural
summation tones has never been found, and difference tones arise only for
stimuli of relatively high intensity." [Houtsma, Adrian, "What Determines
Musical Pitch?" Journal of Music Theory, Vol. 17, No. 1, 1973, pp. 139-158]
Clearly Backus is misinformed and is passing that misinformation along to
his readers.
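To hear the phenomenon being argued over, here is a minimal numpy sketch (an illustration, not an experiment from the literature) that writes a two-second tone built only from the 3rd, 4th and 5th harmonics of 200 Hz; most listeners report a pitch near the absent 200 Hz fundamental. The harmonic numbers, level and duration are arbitrary choices.

# Missing-fundamental stimulus: harmonics 3, 4 and 5 of a 200 Hz fundamental,
# with no energy at 200 Hz itself. All constants are illustrative choices.
import numpy as np
import wave

sr, dur, f0 = 44100, 2.0, 200.0
t = np.arange(int(sr * dur)) / sr
tone = sum(np.sin(2 * np.pi * n * f0 * t) for n in (3, 4, 5)) / 3.0

samples = (0.5 * tone * 32767).astype(np.int16)
with wave.open("missing_fundamental.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)        # 16-bit samples
    w.setframerate(sr)
    w.writeframes(samples.tobytes())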
Other inaccuracies abound in Backus' text.
Jean-Claude Risset and Max Mathews, two of the most important pioneers in
analyzing real-world musical timbres, found in 1969 that instrument sounds
synthesized using fixed harmonic overtones sounded lifeless and artificial.
"Attempts of synthesis show how grossly inadequate it is to describe the
tone quality by a simple frequency spectrum." [Pierce, J.R., Mathews, M.
and Risset, J-C., "Further Experiments on the Use of the Computer in
Connection with Music," Gravesaner Blaetter, no. 27/28, pg. 93, 1965]
In 1963--when Backus was still writing his text--Max Mathews pointed out:
"Our experience has shown how little we now know about relation of the
quality of sound to various features of waveform." [Mathews, M., "The Digital
Computer As a Musical Instrument," Science, Vol., 142, November, 1963, pg.
554]
Risset describes the state of ignorance which prevailed in the field of
musical acoustics through 1969 (the year in which Backus' text was
published): "Despite the considerable skill and ingenuity of scientists such
as Hermann Helmholtz or Dayton C. Miller, early analyses of musical-
instrument tones have not given satisfactory results. (...) For a long time
physicists have performed analyses of musical-instrument tones, to find
out the physical correlates of their tone quality. Many results of such
analyses have been published. (...) Computer sound synthesis makes it
possible to synthesize virtually any sound from a physical description of
that sound.
"This technique provides a way to check sound analyses: a successful
analysis should yield a physical description of the sound from which one
could synthesize a sound that, to a listener, is nearly indistinguishable
from the original.
"We have tried to use the results of analyses of musical-instrument tones
that are to be found in musical-acoustic treatises as input data for
computer sound synthesis. In most cases we have obtained sounds that bear
very little resemblance to the actual tones produced by the instrument
chosen; in almost all cases the available descriptions of musical-
instrument tones fail the fool-proof synthesis test. Hence the descriptions
must be considered inadequate." [Risset, J.C. and Mathews, Max, "Analysis of
Musical-Instrument Tones," Physics Today, Vol. 22, No. 2, February 1969,
pp. 23-24.]
By Backus' own admission, his text was based on a series of lectures
developed over 10 years--which means that his psychoacoustic references
date from the 1930s, 1940s and 1950s during a critical period of upheaval
in psychoacoustics caused by the application of computers to music.
Backus does not mention computer analysis of sound: "The mechanical
method of analyzing sounds was cumbersome, slow and inaccurate; much
better equipment is available now for this kind of work. This equipment is
electronically operated and therefore much faster; an oscillogram can be
obtained in one cycle of sound and analysis of the sound wave can be made in
one second or less." [Backus, 1969, pg. 101]
As Risset points out, the use of oscillograms prevents researchers from
following the evolution of a sound's spectrum throughout a long time period.
Thus Backus' techniques are by definition inadequate and out of date.
None of Backus' references hint at the then-unpublished results of
Guttman, Shepard, Risset and Mathews, which changed the entire field of psychoacoustics.
Thus Backus' book is full of errors and misconceptions about the ear/brain system and should be ignored.
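The Risset/Mathews point above can be illustrated with a short numpy sketch that builds the same five-partial tone twice, once with a frozen spectrum and once with crude per-partial decay envelopes. The envelopes here are invented for illustration and are not analysis data from any real instrument.

# Fixed-spectrum vs. time-varying additive synthesis. The static tone keeps one
# amplitude spectrum for its whole duration; the dynamic tone lets higher
# partials decay faster, so its spectrum evolves over the course of the note.
# Envelope constants are illustrative only.
import numpy as np

sr, dur, f0 = 44100, 1.5, 220.0
t = np.arange(int(sr * dur)) / sr
amps = [1.0, 0.5, 0.33, 0.25, 0.2]          # relative amplitudes of partials 1..5

static_tone = sum(a * np.sin(2 * np.pi * (k + 1) * f0 * t)
                  for k, a in enumerate(amps))

dynamic_tone = sum(a * np.exp(-(k + 1) * 2.0 * t) * np.sin(2 * np.pi * (k + 1) * f0 * t)
                   for k, a in enumerate(amps))

# Normalize before writing or playing both versions back to back.
static_tone /= np.max(np.abs(static_tone))
dynamic_tone /= np.max(np.abs(dynamic_tone))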
"Genesis of a Music" by Harry Partch, (1947, 2nd ed. 1974) contains much
valuable information about just intonation, acoustic instrument-building
and the fundamentals of musical acoustics. Alas, virtually all of Partch's
statements about consonance, dissonance, human hearing and the ear/brain
system are claptrap. He cites no psychoacoustic literature dated later than
1945: Partch was simply unfamiliar with modern experimental evidence
about the ear/brain system.
"Musical Engineering," by Harry F. Olson is an excellent introduction to the
physics of sound production in musical instruments. The statements about
musical timbre, the ear/brain system and consonance/dissonance embodied
the best knowledge up to that time (1957). Unfortunately, most of what
Olson says about musical timbre, consonance, hearing, etc., was disproven
by the experiments of Wessel, Risset, Ward, and many others in the 1960s
and early 70s.
""[On] my arrival in the States in 1964...I elected to focus on timbre. The
palette of computer sound, potentially boundless, was in fact quite
restricted, and one did not know how to generate certain sounds. In
particular, brassy sounds resisted synthesis efforts. I had to convince
myself that the recipes of respected acoustics treatises (like H. F. Olson's)
did not work. As one may judge, from tones synthesized from such recipes,
they did not." [Risset, Jean-Claude, "Computer Music Experiments 1964...",
Computer Music Journal, Vol. 9, No. 1, Spring 1985, pg. 11]
"Fundamentals of Musical Acoustics," by Arthur Benade (1976) contains
reliable details on acoustics, but some of Benade's information on sound
production in various instruments has now been proven incorrect--
particularly, Benade's theory of "regimes of oscillation" for brass
instruments.
Benade's text, like Backus's, does not contain the word "periodicity" in its
index. And many of Benade's statements on psychoacoustics contradict the
results of modern research, although to his credit Benade himself admits
this: "The foregoing remarks disagree somewhat with the conclusions drawn
by the authors of the following thoughtfully written papers:
J.E. F. Sundberg & J. Lindqvist, "Musical Octaves and Pitch," JASA, Vol. 54,
1973, pp. 922-929," and so on.
Benade's references on psychoacoustics show strange gaps and a peculiar
selectivity. Ohm, Stumpf, Seebeck, Schouten, Plomp, von Bekesy, Sundberg,
Ward and Burns are not cited in the bibliography at the end of the chapter
"The Acoustical phenomena governing the musical relationships of pitch,"
while Pierce, Mathews, Shepard, Risset, Sundberg, Ward and Burns are not cited in the chapter "Successive Tones: Reverberation, Melodic Relationships, and Musical Scales."
Instead an article by Stevens and Volkmann dating from 1940 is cited,
along with Helmholtz--whose work dates from the 1860s--and "The
Collected papers of Wallace Sabine," another 19th-century figure.
For a researcher in the 1970s to write a text whose primary psychoacoustic
citations hail from the 1860s-1880s is peculiar, to say the least. Like Hall,
Benade is a physical acoustician whose bibliography and reference lists
betray a scanty knowledge of modern psychoacoustics, and an unsavory
penchant for discarding results with which his simplistic mathematical
models disagree.
Rossing, "The Physics of Music," 1993, is a comprehensive discussion of the
physical basis of acoustic instrument timbre, but it contains very little
information about psychoacoustics. To be fair, that was not Rossing's focus.
Still, the text contains nary a mention of Wessel's streaming phenomenon,
no reference to Ward, Corso, Pikler, Sundberg and Terhardt's findings on the
universal preference for stretched octaves, fifths and thirds; no information
on how pitch perception is affected by masking, context, tone length, etc.;
no distinction twixt physical and perceptual pitch or physical and
perceptual loudness, etc.
Other texts may prove popular. Before believing any reference on
psychoacoustics, be sure to check its date. Many pre-1970 acoustics texts
either lack important psychoacoustic data, are outdated, or contain a wealth
of outright misinformation parroted from Rameau, Mersenne and Helmholtz.
Check the bibliographies of these suspect texts--notice how few post-1945
papers are cited, and how often the authors take issue with well-known and
accepted results from the 60s, 70s, 80s and 90s verified independently by
many researchers in psychoacoustics on 4 continents. Lastly, simply
compare what you hear on F. Alton Everest's tapes and the Houtsma, Rossing
and Wagenaars disc and the Mathews "Sound Examples: Current Directions in
Computer Music" disc with claims made by the author in question.
--mclaren



🔗"John H. Chalmers" <non12@...>

9/27/1995 8:13:43 AM
From: mclaren
Subject: Psychoacoustics - post 4 of 25
---
The ear itself is the gateway to new kinds
of harmony and new forms of music. The
more we learn about the subtleties of the
ear/brain system, the wider the range of
xenharmonic musics we can make.
---
MYTH: EVERYONE KNOWS HOW THE EAR PERCEIVES
PITCH. HELMHOLTZ EXPLAINED IT IN 1863.
FACT: Since 1841 there have been 3 competing theories
of how the ear operates.
Some evidence supports, while other evidence
contradicts, each hypothesis about
how the ear perceives sound.
---
Ohm (1843) is the founder of modern hearing
theory. "Ohm introduced the view that the
analyzing power of the hearing organ may be
compared to the way in which periodic
functions can be analyzed mathematically
by applying Fourier's theorem (Ohm 1843).
Helmholtz fully recognized the significance
of this hypothesis and based his theory on
it..." ["Experiments On Tone Perception,"
Plomp, R., 1967, pg. 102]
In effect both Ohm and Helmholtz viewed
the ear as a frequency analyzer. Ohm's First
Law states that the ear detects a pitch
only if there is significant acoustic
energy at the fundamental frequency of
that pitch, which (as we all know) consists
of a set of sine wave harmonics added
together.
---
Everyone knows this, and it's wrong.
Ohm's hypothesis was dealt a series of
blows by experimental evidence.
Researchers using crude light microscopes
from the 17th through the early 19th centuries
examined the cochlea and found what appeared
to be rods. This naturally suggested a set of
tuned resonators--an idea which Helmholtz
picked up in the 1840s and elaborated into
the first (resonance) version of his theory of
hearing.
"The principle of resonance, based on early work
of Galileo, was proposed as a way for low and high
tones to have different effects on the ear. For example,
in 1683 du Verney, in his `Traite de l'organe de
l'ouie,' suggested that a ribbon-like structure
along the length of the cochlea vibrates in different
places to different frequencies through resonance
by noting that the width of the ribbon changed from
one end of the cochlea to the other. (..) In 1851 Corti
described a number of the finer structures inside the
cochlea, the most prominent of which he called teeth,
while others called them rods. The inverted V is
now called the "arch of Corti," but his vantage point
was from above, so that the arches appeared as extended
rods... He described the rods as delicate, free, and flexible,
and he supposed that their movements stimulated acoustic
nerve fibers which, at the time, were believed to end
in the vicinity of the rods." [Gulick, W. Lawrence,
George A. Geschneider and Robert D. Frisina, "Hearing:
Physiological Acoustics, neural coding and Psychoacoustics,"
Oxford University Press, 1989, pg. 59]
The mistaken picture of the arches of Corti as resonating rods
was the one used by Helmholtz to support his earliest
"resonance" theory of hearing.
However, successive improvement in the resolution of
available light microscopes--due largely to the work
of the physicist Ernst Abbe--dealt a serious blow to
Helmholtz's 1863 "Tonempfindungen," in which he stated
in detail a theory of hearing he had presented in a lecture in
Bonn during the winter of 1857.
"In his public lecture in 1857, Helmholtz proposed that
sounds reaching the cochlea would set certain of the rods
of Corti in motion by sympathetic vibration. He envisaged
the rods as a set of tuned resonators, so that only those with
a natural frequency equal to that of the stimulus would vibrate
and thus stimulate only those acoustic fibers that served them."
[Gulick, W. Lawrence, George A. Geschneider and Robert D.
Frisina, "Hearing: Physiological Acoustics, neural coding and Psychoacoustics," Oxford University Press, 1989, pg. 60]
The subsequent microscopic "...discoveries of Deiters in
1860 made it clear that the rods of Corti were unsuitable as
resonators because they were arches rather than independent
rods. So, in `Empfindungen,' Helmholtz revised his theory by
shifting the resonators to the transverse fibers of the basilar
membrane, a membrane "stretched" across the cochlear tube."
[Gulick, W. Lawrence, George A. Geschneider and Robert D.
Frisina, "Hearing: Physiological Acoustics, neural coding and Psychoacoustics," Oxford University Press, 1989, pg. 61]
In keeping with the ancient principle of likeness, according
to which features of the nervous system symbolized the external
world, Helmholtz adopted the doctrine of "specific nerve energies,"
which stated that each nerve responded to a unique stimulus
and only to that stimulus. This doctrine was proposed by Helmholtz's
teacher Johannes Mueller in 1838 in Mueller's `Handbuch der
Physiologie,' but it was first stated by Herophilus and Erasistratus
in 490 B.C. and subsequently expounded by Aristotle in 344 B.C.
"By extending Mueller's doctrine, Helmholtz claimed that each
acoustic nerve fiber had its own `quality,' so that, when activated,
it always led to the perception of a particular pitch. (..) Accordingly,
frequency was coded by the place of stimulation along the longitudinal
axis of the cochlea." [Gulick, W. Lawrence, George A. Geschneider and
Robert D. Frisina, "Hearing: Physiological Acoustics, neural coding and Psychoacoustics," Oxford University Press, 1989, pg. 61]
This did not solve the problems with Helmholtz's theory of
hearing. Instead, more and more difficulties began to appear
after its publication in 1863:
"First, the transverse fibers of the basilar membrane are
neither under tension nor independent. Therefore, they are
ill-suited to serve the function ascribed to them in theory.
(..) Second, even if the transverse fibers were under tension
and independently suspended, the variation in fiber length and
mass is so restrictive as to limit resonance to a frequency
range that is only a small fraction of the total to which we
respond. (..)
"Third, there is a serious difficulty with the principle of
resonance as a means to account for frequency discrimination.
(..) Fourth, since no resonance is wholly specific, Helmholtz's
theory also was criticized because a tone of a given frequency
would produce resonance not only in the tuned resonator but also
in those that are slightly mistuned. Accordingly, one tone would
signal a number of places, and therefore, a number of pitches.
In 1900 Gray offered his hypothesis of maximum stimulation to
counter this objection. He proposed that the exactly tuned
resonator would also show maximum resonance, and it was this
transverse fiber that signaled the place for that tone. However,
he claimed that with intense stimulation many resonators would
be responding at the practical maxima, and since the precision of
the place would thereby be lost, he predicted that differential
pitch sensitivity would worsen as a function of increasing
intensity. Psychophysical data show the opposite to be true.
(..) Fifth, Helmholtz assumed that changes in stimulus intensity
produced changes in response magnitude. However, by 1914, the
work of Adrian on the all-or-none property of neural action seemed
emphatically to deny this requirement of the Helmholtz theory.
As the current century began, the resonance-place theory of hearing
was in some trouble." [Gulick, W. Lawrence, George A. Geschneider and
Robert D. Frisina, "Hearing: Physiological Acoustics, neural coding and Psychoacoustics," Oxford University Press, 1989, pg. 63]
Other strong doubts about the Helmholtz or "place" theory of hearing
arose as a result of Seebeck's experiments in 1843.
"He constructed a siren from a forced air system, in front of which
he placed a rotating disk with small holes separated by specific
distances so as to produce short sound pulses separated by
precisely specified time intervals. (..) The pitch of this periodic
pulse was the same as that of a 500-Hz tone. This is not surprising,
since the pulse train delivered 500 pulses/sec. A more surprising
result occurred when pulses alternated between two slightly
different values. Although the timing between pulses
had been changed only slightly...the perceived pitch dropped
dramatically from that of a 500-Hz tone to that of a 250-Hz tone
despite the fact that most of the energy in this slightly
modified stimulus was still at 500 Hz. The physical property
that was clearly altered by this slight change in pulse timing was the
period. (..) This change in pitch was attributed to the change in
the period of the repeating sound pressure wave and eventually became
known as periodicity pitch." [Gulick, W. Lawrence, George A.
Geschneider and Robert D. Frisina, "Hearing: Physiological Acoustics,
Neural Coding, and Psychoacoustics," Oxford University Press,
1989, pg. 257]
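The stimulus described above can be approximated digitally; the following sketch is a software stand-in for Seebeck's mechanical siren, not a model of it. It builds a uniform click train at 500 pulses per second and a second train whose spacings alternate slightly, so that the true period doubles to 4 ms and the periodicity pitch falls toward 250 Hz even though the average pulse rate is unchanged.

# Digital stand-ins for Seebeck's siren stimuli. The uniform train repeats every
# 2 ms (500 Hz periodicity pitch); the alternating train repeats every 4 ms, and
# its pitch is heard near 250 Hz although most spectral energy stays near 500 Hz.
import numpy as np

sr, dur = 44100, 1.0

def click_train(intervals_s, sr, dur):
    """Unit impulses separated by a repeating cycle of the given intervals."""
    out = np.zeros(int(sr * dur))
    t, i = 0.0, 0
    while t < dur:
        out[int(t * sr)] = 1.0
        t += intervals_s[i % len(intervals_s)]
        i += 1
    return out

uniform = click_train([0.002], sr, dur)               # 2 ms, 2 ms, 2 ms, ...
alternating = click_train([0.0019, 0.0021], sr, dur)  # 1.9 ms, 2.1 ms, repeating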
As Gulick et alii point out, "Seebeck's work was important because it
led investigators to consider the timing of neural impulses as a
possible neural code for pitch perception. The place and neural synchrony
[periodicity] theories represent two very different views on how the
nervous system codes pitch. For the place theorist it is the frequency
spectrum of the stimulus that is important, whereas for the neural
synchrony theorist it is some aspect of the time waveform, such as
the period, that is important. (..) The controversy between proponents
of the two viewpoints gained momentum in the 1940s with the work of
Schouten." [Gulick, et al., 1989, pg. 258]
Pierce describes hearing Schouten's effect first-hand: "He had constructed
a sort of optical siren (Figure 6-4) by means of which he could produce sounds with various waveforms. Using this, he produced sounds with harmonically related partials... Then, by proper adjustments, he
could cancel out the fundamental frequency... I could hear
this fundamental frequency come and go, but the pitch of the sound
did not change at all. In some way, my ear inferred the proper pitch
from the harmonics..." ["The Science of Musical Sound," Pierce,
2nd ed., 1992, pg. 92; also see Ohm, G.S., "Ueber die Definition des
Tones, nebst daran geknuepfter Theorie der Sirene und aehnlicher
tonbildender Vorrichtungen," Ann. Phys. Chem., Vol. 59, pp. 513-565,
also see Schouten, "The Perception of subjective tones," Proceedings
of the Koninklijke Nederlandse Akademie van Wetenschappen, 1938,
vol. 41, pp. 1083-1093.]
While most of the details of Helmholtz's theory of hearing are now
known to be inaccurate, parts of the underlying idea were adopted
in the "place" theory of hearing. According to this theory, just intonation
is the ideal musical tuning and the 4:5:6 chord which stands at the
center of traditional Western harmony is a necessary outcome of
the physical structure of the human auditory system.
As will be seen, however, all of the objections to Helmholtz's
theory remain troublesome even to modern-day place theories.
The modern place theory cannot explain the simultaneously fine
frequency discrimination of the ear and its broad range of frequency response; the modern place theory cannot explain the missing fundamental;
nor can it explain how the relatively wide travelling waves on the
basilar membrane give rise to delicate pitch judgments. The
modern place theory cannot explain how the ear can detect
the pitch of tones whose fundamental lies below roughly 150 Hz.
The modern place theory cannot explain how only 3000 hair cells
account for the measured jnd of the average subject; the modern
place theory cannot explain why louder sounds are more
accurately judged in frequency when the opposite is predicted
from conventional frequency analysis; and the modern place theory cannot
explain categorical perception, the encoding of pitch and
amplitude in the auditory nerve, the universal human preference
for stretched intervals and beat rates between 4 and 6 Hz, and
so on.
Thus there is substantial reason to doubt that either small integer
ratios or 4:5:6 chords constitute either a privileged or even a
necessary outcome of the human ear/brain system.
On the other hand, some aspects of the auditory system are
convincingly explained by the modern place theory: the cocktail
party effect, for example, and the ability to perceive the pitch of tones
with a high-pitched fundamental. "Furthermore, support for the view
that perception of the pitch of the missing fundamental is not due
to the excitation of low frequency-sensitive neurons responding
to low frequency distortion products of a complex tone comes
from observations that a low frequency masking noise presented
with the high frequency complex tone does not eliminate the
perception of the missing fundamental." [Gulick, et al., 1989, pg. 259]
The next post will examine in greater detail the place theory
of hearing, and the psychoacoustic evidence for and against it,
along with the implications for tuning and music.
--mclaren


🔗"John H. Chalmers" <non12@...>

9/28/1995 10:31:12 AM
From: mclaren
Subject: Tuning & Psychoacoustics - post 5 of 25
--
As mentioned in the previous post,
the elements of the modern place
theory of hearing are found in Helmholtz's
19th-century model of the ear. Thus it's
worth taking some time to examine
the implications of that model:
"Helmholtz's hearing theory can be
considered as an elaboration of three
hypotheses. In general terms, the first
one is:
"Hypothesis I. The analysis of sound is
accomplished in the inner ear by means
of a large number of resonators tuned
to different frequencies from low to
high." ["Experiments On Tone Perception,"
Plomp, pg. 102, 1966]
Helmholtz originally ascribed the
"resonator" function to the arches
of Corti but (as mentioned in the
last post) when he put his ideas down
in his book he changed his mind
and proposed that the transverse
fibres of the basilar membrane act
as resonators. His arguments were:
[1] In the cochlea of birds, no arches
of Corti are found (Hasse, 1867); [2]
the width of the basilar membrane
varies from about 0.04 mm at its
base up to 0.5 mm at the helicotrema
(Hensen, 1863); [3] the membrane is
much more tightly stretched transversely
than longitudinally.
"On the basis of these measurements,
Helmholtz estimated the selectivity
of the resonators, amounting to about
4% of the resonance frequency, with
bandwidth proportional to logarithmic
frequency.
"Hypothesis II. A particular tone-pitch
corresponds to each of the numerous
nerve fibers in such a way that pitch
decreases gradually from the basal to
the apical end of the organ of Corti."
["Experiments On Tone Perception,"
Plomp, R., pg. 103, 1966]
While Helmholtz's hypotheses explained
some aspects of human hearing, it
did not explain others. In particular,
these hypotheses did not explain how
combination tones or beats
of mistuned consonances occur.
Thus Helmholtz proposed a third
hypothesis:
"Hypothesis III. The sound transmission
of the ear is characterized by nonlinear
distortion." ["Experiments On Tone
Perception," Plomp, R., pg. 103, 1966]
By means of these 3 hypotheses
Helmholtz was able to explain much
of the experimental data available
to him in 1863.
Other aspects of human hearing remained
unexplained. As Plomp points out, "The
Achilles' heel of his conception was
why periodic sound waves are always
characterized by a pitch corresponding
to the fundamental. ... Even so, Helmholtz's
theory became widely accepted soon
after its publication under the names
of resonance theory and place theory."
[Plomp, R., 1966, pg. 104]
There were other problems with
Helmholtz's theory. Whether in modern
form as the "place" theory or in terms
of Helmholtz's original conception, the
pitch sensitivity of the human ear is
significantly greater than the predictions
made on the basis of the place theory.
Moreover, if the ear is primarily a Fourier
analyzer, why did it respond to the irregularly-
spaced holes of Seebeck's siren (Seebeck,
1846) with a sensation of definite
pitch not present in any of the
Fourier components of the waveform
generated when the siren rotated?
Stumpf, one of the proponents of a
competing theory of hearing, pointed
out these flaws in the original
place theory:
"The view that fibres of 0.5 mm length
should be tuned to low frequencies did
not sound very credible and we may
suppose that many agreed with Stumpf's
statement: `It remains wonderful,
however, that so small particles can
resonate even on the lowest tones
that we produce by strings of enormous
size and by which we can bring into
resonance only strings of the same
size.'" [Plomp, 1966, pg. 107; see also
Stumpf, C., "Tonpsychologie," Vol. 2,
Verlag S. Hirzel, Leipzig, 1890, pg. 92]
Plomp points out: "Some investigators
tried to save the resonance hypothesis
by supposing that the resonators
must be sought in other structures of
the cochlea: the hair cells (Baer, 1872;
Hermann, 1894; Myers, 1904; Specht,
1926) or the tectorial membrane (Kishi,
1907; Shambaugh, 1907, 1909, 1911;
Leiri, 1932). Others, however, rejected
the resonance hypothesis entirely,
proposing new hearing theories in which
the frequency-analyzing power of the
hearing organ was approached in quite
a different way (Meyer, 1896, 1898,
1899, 1907; Ewald, 1899, 1903,
Wrightson, 1918, and many others)."
[Plomp, 1966, pp. 107-108]
Because of the failure of Helmholtz's
original hypothesis to explain many
auditory phenomena, many researchers
cast about during the period from the
1840s to the 1860s for another model
of human hearing. Many researchers
seized upon Seebeck's 1843 proposal
as the answer.
Namely, that "Tones give rise to
synchronous nerve impulses whose
rate determines pitch. Wundt tried
to evade the difficulty that according
to this hypothesis Bernstein's findings
would suggest a pitch limit at about
1600 cps. He explained that not the
total duration of the nerve impulses
but the much shorter duration of their
peaks might determine the highest
pitch audible..." [Plomp, 1966, pg. 105]
In favor of this competing hypothesis,
called the periodicity theory of hearing,
two pieces of early evidence were advanced
by Seebeck, Wundt, Stumpf and others:
"1. Binaural beats. Dove (1839) had
pointed for the first time to the fact
that stimulating the ears separately
with tones of slightly different frequencies
gives rise to slow "binaural beats." Usually,
they were explained as resulting from
bone conduction between the ears (Seebeck,
1846; Mach, 1875; Stumpf, 1890, p. 458;
Schaefer, 1891). Thompson, who discovered
the beats independently, found that they do
not change over into a difference tone when
the frequency difference is increased (1877,
1878, 1881). Therefore, he suggested that
binaural beats are caused by interference in
a higher centre of the auditory pathway."
[Plomp, 1966, pg. 105]
This latter was the first suggestion that the brain was
directly involved in the processing of musical
sounds. Previous theories, like Helmholtz's,
assumed that the ear did all the processing
required and that the auditory nerve simply
acted as a conduit through which the preprocessed
nerve impulses travelled. Wundt's and Stumpf's
observations made it clear, however, that the
brain was *part* of the auditory system which
determined pitch, spectral content, etc.--perhaps
*the* crucial part (as subsequent late-20th-century
"pattern transformation" hypotheses of hearing have
stressed).
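A binaural-beat stimulus of the kind Dove and Thompson described is easy to generate; this numpy sketch (the frequencies, level and duration are illustrative choices, not values from the posts) writes a stereo file with 440 Hz in the left channel and 444 Hz in the right, so that, over headphones, any slow beat heard near 4 Hz must arise in the auditory pathway rather than in the air.

# Binaural-beat stimulus: slightly different pure tones in the two channels.
# Over headphones there is no acoustic mixing, so a heard ~4 Hz beat points to
# interaction somewhere central to the two ears.
import numpy as np
import wave

sr, dur = 44100, 5.0
t = np.arange(int(sr * dur)) / sr
left = np.sin(2 * np.pi * 440.0 * t)
right = np.sin(2 * np.pi * 444.0 * t)

stereo = np.stack([left, right], axis=1)        # shape (N, 2) interleaves L/R
frames = (0.4 * stereo * 32767).astype(np.int16)

with wave.open("binaural_beats.wav", "wb") as w:
    w.setnchannels(2)
    w.setsampwidth(2)
    w.setframerate(sr)
    w.writeframes(frames.tobytes())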
The second piece of evidence supporting
the Seebeck/Stumpf/Wundt periodicity theory was:
"2. Direct stimulation of the auditory nerve.
The sensational conclusion that the cochlea
is not essential for obtaining an auditory
sensation was drawn independently by
Fano and Massini (1891) and by Ewald (1892).
They based their opinion on the positive
reactions on sound by pigeons with removed
hearing organs. The conclusion was severely
criticized by Matte (1894), Bernstein (1895),
Strehl (1895), and Kuttner (1896), and
defended by Ewald (1895) and Wundt (1895)"
[Plomp, 1966, pg. 106]
Plomp points out that although this second
competing hearing theory could explain
interruption tones and beats of mistuned
consonances much better than Helmholtz's
theory did, its influence was small, perhaps
because Wundt did not work the theory out
in nearly as much detail as did Helmholtz
in the 2nd edition of "On the Sensations of Tone."
The whole later development of physiological
acoustics can be regarded as an elaboration
of these two competing and contradictory
hypotheses, along with Fetis' 1843 learned-
response theory of hearing. Like Seebeck's
and Stumpf's periodicity theory--which was
largely ignored until Schouten in 1935 performed
a convincing series of experiments which clearly
demonstrated the inadequacy of the place theory
of hearing--Fetis' 1843 theory of learned response
was likewise ignored for many years. Starting in
the 1950s, Ward, Burns, Corso, Licklider, and others
performed a series of experiments which cast profound
doubt on many aspects of both the periodicity and
place theory and strongly supported Fetis' 1843
hypothesis.
Recently, the auditory artifacts produced by
cochlear implants have provided strong evidence against
the periodicity theory: "If we believe the
extreme position that at low frequencies information is
carried purely by the temporal pattern of nerve impulses,
then periodic electrical stimulation should produce
faithful auditory sensations and good discrimination
of frequencies. The results of electrical stimulation have
on the whole been disappointing for such a prediction.
In only a few cases do electrical stimuli seem to produce
clear tonal sensations. A typical report is that tones
sound like "comb and paper" (e.g., Fourcin et al., 1979).
[Pickles, James. O., "An Introduction to the Physiology
of Hearing," Academic Press, 2nd ed., 1988, pg. 316]
On the other hand, the place theory also conflicts with
experiment: "In a quasi-linear spectral analyzer such
as the cochlea the physical limits of frequency resolution
are limited by the duration of the stimulus, as a result
of spectral splatter: stimulus duration x spectral line
width = 1. (..) Temporal theories are not so limited...
(..) On the hypothesis that place and not temporal cues are
used, we can calculate a lower limit for the frequency
difference limen as a function of the length of the
stimulus. Moore (1973) showed that below 5 kHz frequency
discrimination for short stimuli was up to an order of
magnitude better than expected on a place basis."
[Pickles, James O., "An Introduction to the Physiology
of Hearing," Academic Press, 2nd ed., 1988, pg. 273]
As a result, "At the moment pattern hypotheses are
dominant..." Pickles, James. O., "An Introduction to the Physiology
of Hearing," Academic Press, 2nd ed., 1988, pg. 273]
The phenomenon of forward and backward masking
also directly contradicts the place theory. In forward
masking, a masking tone precedes the test tone by a
small time period--in backward masking, the masking
tone occurs *after* the test tone. If Fourier analysis
is occurring mechanically in the ear, it's difficult to
explain how a second tone appearing *after* the test
tone can interfere with the Fourier analysis. And in
any case, the fact that masking occurs is a fundamental
problem for Fourier models of hearing.
"Masking is an example of limitations of the
auditory system's ability to analyze individual frequency
components in a complex sound. If the ear were a perfect
frequency analyzer, then one sound would never mask
the detectability of another sound. Instead, simultaneously
presented sounds would be independently processed, and the
perception of one would not affect the perception of others.
Masking demonstrates that this ideal state does not exist.
Whenever masking occurs, frequency analysis fails. When
the presence of a sound of a particular frequency makes it
difficult or impossible to hear another sound of a different
frequency, the ear has failed to analyze and detect the
individual frequency components of the complex sound
created by simultaneous presentation of the two sounds."
[Gulick, W. Lawrence, George A. Geschneider and
Robert D. Frisina, "Hearing: Physiological Acoustics, Neural
Coding, and Psychoacoustics," Oxford University Press,
1988, pg. 300]
As a result of these pervasive problems with both the
place and periodicity theories of hearing,
Fetis' model of the ear/brain system
as a feedback path controlled primarily by software (viz.,
learned response) has now gained great currency,
in part because of the inadequacy of the evidence for
both the place and periodicity theories. In part this is also
probably due to the increasing use of computers
and software in the cognitive sciences and their consequent
popularity as a conceptual model for neural systems.
(As will be seen in a future post, a researcher's tools
exert a potent influence on the mental models he forms.)
If accurate, the "pattern transformation" model of hearing
implies that many different tuning systems and musical
syntaxes are appropriate. According to this theory of hearing,
no particular complex of overtones has a privileged status
in the ear, and no specific musical tuning is implied as
superior on the basis of the structure of the ear.
Because of the importance of this question for tuning and
music, the next post will examine detailed evidence
for and against the periodicity and place models of
pitch perception.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Fri, 29 Sep 1995 07:12 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id WAA06397; Thu, 28 Sep 1995 22:12:35 -0700
Date: Thu, 28 Sep 1995 22:12:35 -0700
Message-Id:
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

9/29/1995 9:24:03 AM
From: mclaren
Subject: Tuning & Psychoacoustics - Post 6 of 25
---
MYTH: THE OPERATION OF THE EAR CAN BE
EXPLAINED AS THAT OF A FOURIER ANALYZER,
WITH SOME SLIGHT MODIFICATIONS.
FACT: Today there are 3 competing models which
explain how the ear/brain system operates, and
some experimental data supports each hypothesis
and contradicts the others. Each theory has enjoyed
proponents for more than 100 years, primarily
because many aspects of the ear's behaviour cannot
be explained by means of Fourier analysis.
---
During the late 19th and early 20th
century, rapid advances in technology allowed
scientists to subject both the place theory
and the periodicity theory of hearing
to ever-more-sophisticated tests.
"On the basis of these different methods,
the fact is now well established that the
stimulated region of the basilar membrane
shifts for decreasing frequency from the
basal to the apical end." [Plomp, 1966,
pg. 108; see also Cioco, 1934; Crow et al,
1934, Oda, 1938; Stevens et al., 1935;
Walzl and Bordley, 1942; Schuknecht,
1960; Kemp, 1935, 1936, Smith, 1947,
Smith and Wever, 1949; Davis et al.
1953; Culler, 1935, Culler et al. 1937,
1943; and particularly von Bekesy,
1944, 1955, 1957.] von Bekesy
constructed a large and simplified
physical replica of the cochlea which used
the tactile sense of the arm to
simulate the organ of Corti and the
auditory pathway. He proved that
even in the case of a very broad
maximum of the pattern of vibration,
only a small section was felt
subjectively to vibrate. This lent
strong support to the view that
the place of maximal stimulation
along the basilar membrane
corresponds to pitch. (That is,
to Helmholtz's theory, with a good
deal of updating; Helmholtz's idea
of "resonators" had to be abandoned,
and many of the details of his theory
modified, to explain experimental
results, as we've seen.)
While von Bekesy's experiments
provided strong confirmation for
some of the place theory's predictions,
they contradicted other aspects
of the place theory. There
remained unresolved, for instance,
the question of how to account
for the ear's extraordinary
sensitivity to tuning differences
of individual partials and of
the fundamental frequency of
the sound wave itself. With only
3000 hair cells each spaced 9 microns apart,
this was difficult to explain. Known pitch
discrimination would demand sensitivity to
stimulation on the basilar membrane measured
in fractions of a micron, even though the measured
width of the travelling wave on the basilar membrane
is many times that width. (This objection
was originally raised to Helmholtz's
now-obsolete hypotheses of the
1860s, and it still bedevils advocates
of the modern place theory.) Moreover,
if pitch sensitivity were due solely to the hairs
lining the basilar membrane, and not to neural
processing, there would have to be far more
hairs than the known 3000 inner hair cells
(electron microscopy has shown that
the remaining 12,000 outer hair cells serve
an ancillary function, rather than a direct
frequency-detection role. This is also supported
by data from the action potentials of the
two classes of stereocilia as obtained by
microelectrodes.) "The two types of coupling can
therefore be associated with the different
roles of the two types of hair cell in cochlear
function, inner hair cells detecting the movement
of the [basilar] membrane, and the outer hair cells
helping to generate it." [Pickles, James O., "An
Introduction to the Physiology of Hearing," Academic
Press, 1988, 2nd ed., pp. 158-159]
It is also difficult to explain pitch perception of
sounds with low but missing fundamentals:
"Suppose high harmonics generate the low pitch.
They will be relatively closely spaced and will not
be resolved by the auditory system. Recognition of
the spectral pattern will not therefore be possible,
but the harmonics will be able to interact in the
nervous system to produce periodically varying
activity. Temporal theories are therefore supported."
[Pickles, James O., "An Introduction to the Physiology
of Hearing," Academic Press, 1988, pg. 273]
However, Pickles points out that "This is again an
area which is controversial, and over the years
opinions have swayed in favour of one hypothesis
or the other." (I.e., periodicity or place theory.)
Both theories suffer from the limits imposed
mathematically by their proposed mechanisms
of action, which are in each case different
from those observed: "The lower limit for the place
principle is believed to be about 150 Hz because
the excitation pattern on the basilar membrane does
not change with frequencies lower than this limit.
Some investigators think that the temporal principle
codes pitch for the very low frequencies and supplements
the place principle over the midrange frequencies. The
upper frequency limit of the applicability of the temporal
principle is uncertain. Some investigators put this limit
as low as 300 to 400 Hz, and others put it as high as
4000 to 5000 Hz." [Gulick, W. Lawrence,
George A. Geschneider and Robert D. Frisina, "Hearing:
Physiological Acoustics, Neural Coding, and
Psychoacoustics," Oxford University Press,
1989, pg. 261]
Equally troubling for advocates of the place
theory is the fact that when sine tones are used,
the ear displays a completely different consonance/
dissonance curve than that produced by complex
tones--instead of a series of peaks and troughs, a
smooth shifted-bell-curve-like response is seen
for sine tones. The Ohm/Helmholtz Fourier theory
of hearing fails to explain this result.
Moreover, von Bekesy found that
contrary to Helmholtz's presumption,
"it appeared that combination tones
are not due to nonlinear vibration of
[the tympanic] membrane. Furthermore
he discovered that the introduction
of a negative or positive static
pressure into the external meatus
changed the loudness of difference
tones. This would imply that these tones
are produced in the middle ear." [Plomp,
1966, pg. 111]
"However Wever et al, 1941, conducted
experiments which contradicted von
Bekesy's findings just mentioned. ...
Further investigations...suggested that
the main source of combination tones
must be sought in the sensory
processes, where the microphonic
potential is evoked, and not in the
mechanical part of the inner ear (Wever
and Lawrence, 1954)" [ Plomp, 1966,
pg. 111]
In addition, the experiments of von Bekesy,
who did more than any other researcher to
put the place theory of hearing on a modern
scientific basis, were also open to considerable
doubt. " Von Bekesy's observations
have been questioned on two grounds. Visual
observations mean that the vibration amplitude
had to tbe at least of the order of the wavelength
of light, and the high intensities (130 dB SPL)
necesssary make extrapolation to a more
physiological range unjustified. Secondly, his
measurements were performed on cadavers. It is
now known that not only does the experimental
animal have to be alive, but the cochlea has to be
in extremely good phisological condition, to
show a satisfactory mechanical response."
[Pickles, James O., "An Introduction to the
Physiology of Hearing," Academic Press, 1988,
end ed., pg. 40]
In short, investigations began
to suggest that many important
auditory phenomena could only be
explained by the software, not the
hardware, of the ear/brain system--
that is, by the brain itself. "[For] the
phenomenon...once called "periodicity
pitch" [there are] alternative explanations...
known as "pattern" theories. They suppose that
the auditory system, by recognizing that the tones
sounded are the upper harmonics of a low tone,
supplies the missing fundamental that would have
generated them. This is again an area which is
controversial, and over the years opinions have swayed
in favor of one hypothesis or the other." [Pickles, James
O., "An Introduction to the Physiology of Hearing,"
Academic Press, 2nd ed., 1988, pg. 273]
Clearly by the 1980s much of the support for the
place theory of hearing had crumbled. As Pierce
writes: "Helmholtz accomplished a great deal despite the
limitations of the technology available to him.
Yet he reached false conclusions. He believed the perception
of musical pitch depends on the presence of the fundamental
frequency. This is not true for low keys on the piano keyboard,
or for orchestra chimes, or for bells. His other false conclusion
was that the relative phases of sinusoidal components do not
affect the timbre of a sound." ["The Science of Musical Sound,"
Pierce, J.R. , 1992, pg. 185; see also "Tone Segregation by Phase:
On the Phase Sensitivity of the single ear," Kubovy and
Jordan, JASA, Vol. 66, No. 1, 1979, pp. 100-106]
Still, the place hypothesis accounts very convincingly for
at least a few characteristics of the ear/brain
system: it explains how the ear can resolve complex sounds into
separate pitches, accounts for the function and structure of some
of the mechanical components of the inner ear, elegantly explains
the near-logarithmic nature of pitch, and explains why
very close tones are heard as identical in pitch.
On the other hand, the place theory does *not* explain why stretched
intervals significantly larger than those predicted
by the small-whole-number-ratio (or numerological, essentially
Kabalistic) theory of consonance are universally preferred to
so-called "pure" intervals (which in psychoacoustic tests
are consistently heard as "flat" or "too narrow"). Nor does the modern
place theory explain combination tones (as mentioned above), or
the fact that two inharmonic-series tones matched to an inharmonic-
series scale sound strongly consonant (Risset, 1978, 1984, 1985;
Sethares, 1992; Geary, 1980; Pierce, 1966; Carlos, 1987; Plomp and
Levelt, 1965, Kameoka and Kuriyagawa, 1969); nor does the
place theory explain (or predict) modern auditory illusions--
Shepard tones, Risset's tone containing ten 1180-cent
intervals which when transposed UP an octave DROPS
in audible pitch by a perceived 20 cents, etc.
All of these phenomena *can* be explained by the periodicity
theory of hearing as emergent properties of an autocorrelation
system.
However, the periodicity theory itself has a number of problems.
It does not explain the universal human preference for stretched
octaves, fifths and thirds, a preference found in the earliest
experiments performed on measured intervals and in all
double-blind psychoacoustic tests performed for 150 years
since; the periodicity theory of hearing cannot account for the
fact that pitches very close together create a "chorus" effect
instead of massive dissonance. By contrast, the broad region
of general stimulation of the basilar membrane around the
much narrower region of maximal stimulation--one of the hallmarks
of the place theory--explains this effect simply and clearly.
The phenomenon of combination tones is poorly explained by *both*
competing hypotheses. As Plomp points out, "This problem applies
both to place pitch and periodicity pitch. If pitch is based on the place
of maximal vibration, it is essential for hearing a combination tone that
the corresponding place of the basilar membrane is stimulated. Then the
question may be asked of how this can be accomplished by sensory
processes of hair cells at a distant place of the cochlea. The ascertainment
of Six (1956) that cochlear microphonics corresponding to combination tones have their maximum at the same place as the primary tones, contradicts this possibility.
If, on the other hand, pitch is based on the periodicity of nerve impulses,
the problem arises how impulses that are synchronous with the frequency of combination tones can be initiated when the waveform of cochlear
microphonics is flattened (the common form of distortion)." [Plomp, 1966, pg. 121]
Summing up, James Pickles points out that "Frequency difference limens
are very much smaller than critical bands. Two mechanisms are possible.
For instance, the subject may detect shifts in the place of excitation
in the cochlea. This is called the "place theory." Or he may use temporal
information. We know that the firing in the auditory nerve is phase-locked
to the stimulus waveform up to about 5 kHz. In this theory, called the
"temporal" [or "periodicity"] theory, the subject discriminates the
two tones by using the time interval between the neural firings. It is not
clear which of the two mechanisms is used. Indeed the controversy has been
active for more than 100 years, and the fact that it is not yet settled shows
that we still do not have adequate evidence. Auditory physiologists divide
into three groups, namely those that think only temporal information is used, those that think only place information is used, and an eclectic
group, who suppose that temporal information is used at low frequencies,
and only place information at high." [Pickles, James O., "An Introduction
To the Physiology of Hearing," Academic Press, 2nd ed., 1988, pg. 271]
The implications for musical tuning are mixed. Because no clear evidence
has emerged in favor of any of the three major theories of hearing,
no single tuning can be considered "privileged" or uniquely suited
to the human ear. On the other hand, because of the mixed results
from psychoacoustic experiments, the data examined so far would
tend to support the use of any of the major categories of musical
tuning: namely, just intonation, equal temperament, or non-just non-
equal-tempered scales.
The next post will discuss several modern experiments which provide
evidence for the third "pattern recognition" hypothesis of hearing,
and the implications of all 3 of these hypothetical auditory mechanisms
for music & tuning.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Fri, 29 Sep 1995 19:07 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id KAA23266; Fri, 29 Sep 1995 10:07:19 -0700
Date: Fri, 29 Sep 1995 10:07:19 -0700
Message-Id: <9509291007.aa21406@cyber.cyber.net>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

9/30/1995 9:09:50 AM
From: mclaren
Subject: Tuning & Psychoacoustics - post 7 of 25
---
We've seen that three competing theories
of pitch perception have tried to explain
the ear-brain system since the middle
of the 1840s. No one model accounts
for all the ear's behaviour, and some evidence
contradicts each hypothesis.
"Protagonists of both place and time theories
point out how small the detectable limits are
when translated into the terms of the other theory.
Temporal [i.e., periodicity] theorists point out that a frequency
discrimination limen of 3 Hz at 1 kHz corresponds
to a shift in the pattern of excitation on the basilar
membrane of 18 microns, or the width of 2 hair cells.
Place theorists point out that the same limen
corresponds to a time discrimination of 3 microseconds,
as against some 1000 microseconds for the width of a
nerve action potential, and a variability of some hundreds
of microseconds in its initiation. (..) There are several lines of evidence
for and against these two theories, none of which is
conclusive." [Pickles, James O., "An Introduction to
the Physiology of Hearing," Academic Press, 2nd.
ed., 1988, pp. 271-272.]
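Both of the "how small is that?" translations in the Pickles passage
can be reproduced with a few lines of arithmetic. In the sketch below
the cochlear-map slope of 4.2 mm per octave is an assumed round figure,
chosen only to be consistent with the 18-micron number in the quote:

import math

f0, df = 1000.0, 3.0                 # 3 Hz frequency difference limen at 1 kHz
mm_per_octave = 4.2                  # assumed cochlear-map slope (see above)
hair_cell_spacing_um = 9.0           # spacing quoted earlier in this series

octaves = math.log2((f0 + df) / f0)
shift_um = octaves * mm_per_octave * 1000.0
print(f"place shift: {shift_um:.0f} microns, about "
      f"{shift_um / hair_cell_spacing_um:.0f} hair cells")

period_diff_us = (1.0 / f0 - 1.0 / (f0 + df)) * 1e6
print(f"time shift : {period_diff_us:.1f} microseconds per cycle, versus "
      f"~1000 microseconds for a nerve action potential")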
At this point it's instructive to step back
and recall that all cultures are conceptually
limited by their experience. The Cargo Cult
of the South Seas Islands during WW II arose
because the islanders fitted B-17s into
their experience as godlike birds from
a supernatural realm.
In the same way, the most sophisticated
means of frequency analysis available to
Helmholtz was a set of tuned glass resonators.
By putting his ear to these globes, he could hear
a particular resonant frequency amplified out
of a complex harmonic timbre. So it was
natural for Helmholtz to model the ear/brain
system as a set of millions of tuned
resonators.
As technology advanced during the late 19th
and early 20th century, high-precision machine
tools became available. So it was natural for
von Bekesy & others to model the ear/brain system as
a precision machine for performing mechanical
Fourier transforms of complex harmonic sounds.
[For more details on these hypotheses, see:
Helmholtz, "On the Senations of Tone," 1863;
Plomp, R. "The ear as a frequency analyzer," JASA,
1964, vol. 36, pp. 1628-1636; Boomsliter & Creel,
"The Long Pattern Hypothesis of Pitch and Harmony,"
Journ. Mus. Theory, Vol. 5, 1961, p. 1-12; von
Bekesy, G. "Concerning the Fundamental Component
of Periodic Pulse Patterns and Modulated Vibrations
Observed on the Cochlear Model with Nerve Supply,"
JASA, Vol. 33, 1961, pp. 888-896; von Bekesy,
"Three Experiments Concerned with Pitch
Perception," JASA, Vol. 35, pp. 602-606, 1963;
von Bekesy, G. "Hearing Theories and Complex
Sounds," JASA, Vol. 35, pp. 588-601, 1963;
Licklider, J.C. R. : "Periodicity Pitch and Related
Auditory Process Models," Intern. Audiol. Vol. 1,
pp. 11-36, 1962.]
Then in the 1940s and 1950s computers became
available, and with them software.
So it became natural for modern researchers to
model the ear/brain system as a combination
of hardware and software, with software
performing the crucial functions of pitch
detection, perception of consonance, dissonance,
etc. Thus we now see papers like: Goldstein, J.R.
"An optimum processor theory for the central formation of the
pitch of complex tones," JASA, 1973, Vol. 54, pp.
1496-1616; Wightman, F. I. "The pattern-
transformation model of pitch," JASA, 1973,
vol. 54, pp. 407-416.
So what do we have?
Quite possibly, a series of cargo cults.
The ear/brain system is viewed
by each era in terms of the most convenient
available paradigms, regardless of whether
those paradigms are actually appropriate.
In an interview with Curtis Roads, Max Mathews
summarized all 3 theories of hearing with
typical elegance and pith: "In a book first published
in 1863, Helmholtz proposed that dissonance
arises from unpleasant beats between partials
whose frequencies are too close together. [4] The
octave is the most consonant of intervals because
all of the partials of the upper tone coincide in frequency
with partials of the lower tone. (...) Rameau had
another view of harmony. [6] He observed that
in a major triad all frequencies present are integer
multiples of a basse fundamentale or fundamental
bass which, in the root position of the chord (C, E, G)
lies two octaves below the root of the chord. (...)
But, one might hold that musical harmony is merely
a matter of brainwashing; that we accept combinations
of tones that we have been taught are correct, and
reject those that we have been taught are incorrect.
We have some experimental evidence that bears on
this." [Mathews, M. and Pierce, J.R. "Harmony and
Non-Harmonic Partials," Rapports IRCAM, 1980, pp. 3
-5]
The above passage clearly describes the 3
competing hypotheses of hearing that are
still in contention today: namely, [1] that
the ear is a frequency-domain Fourier analyzer;
[2] that the ear is a time-domain autocorrelator;
[3] that the ear/brain system uses a learned
neural net system of pitch/interval recognition
and consonance/dissonance classification.
The contradictory results of experiments on
pitch sensation lead to the conclusion that
some aspects of all of these 3 models of human
hearing bear some relation to the ear/brain
system's actual operation. However, because
some of the experimental results are contradicted
by each of these 3 models of hearing, it
is inescapably clear that under various
circumstances one or more of these ear/brain
hearing systems becomes dominant, and in
some cases (particularly in the case of
auditory illusions) all 3 of the ear/brain
systems can clash and yield conflicting
results.
These 3 separate hypothetical mechanisms
for processing both vertical and horizontal
(sequential) pitch are: [1] a frequency-based or
Fourier analysis system; [2] a time-based or
autocorrelative system; [3] ear/brain "wetware"
that includes a strong learned component,
and which is capable of actively filtering
out auditory information, creating illusory
auditory information, and transforming some
or all of the information conveyed from the
basilar membrane and the hair cells to the auditory
nerve, and from there into the Sylvian fissure,
the superior medial olive and the geniculate
nucleus--all areas in the brain responsible
for dealing with aspects of auditory perception.
This last point is important, because it is now
known that musicians and non-musicians use
different brain centers when hearing the same
music. Musicians show glucose metabolism
primarily in the left brain when listening to
music, while non-musicians show glucose
metabolism in both brain hemispheres.
These PET scan results offer strong
confirmation of the third model of hearing--
what Mathews and Pierce call the "brainwashing"
hypothesis, the model of hearing as molded by
learned response (first put forward by Fetis and
Alexander J. Ellis in the middle of the 19th century).
While the Fourier analysis model of the ear/brain and
the autocorrelative (or periodicity pitch) model
have been extensively documented, what
about experiments documenting the "wetware"
component of the ear/brain system?
Auditory illusions provide strong evidence
for this hypothesis of the ear/brain
system: Shepard, R. N., "Circularity in
judgments of relative pitch," JASA, 1964,
vol. 36, pp. 2346-2353; McAdams, S.
and Bregman, A. "Hearing Musical Streams,"
Computer Music Journal, 1979, Vol. 3,
pp. 26-44; Locke, S. and Kellar, L., "Categorical
perception in a non-linguistic mode," Cortex,
Vol. 9, 1973, pp. 355-369; Cohen, A. "Inferred
sets of pitches in melodic perception,"
In R. Shepard, Cognitive structure of musical
pitch," symposium presented at the meeting
of the Western Psychological Association, San
Francisco, CA, April 1978; Burns, E. M. and
Word, W. I., "Categorical perception--phenoneon
or eiphenomenon: Evidence from experiments
int ehperception of melodic musical intervals,"
JASA, 1978, vol. 63, pp. 456-468; Blcehner,
M.J. "Musical Skill and categorical perception
of harmonic mode," Status Report on Speech
Perception, SR-51/52. New Haven, Connecticut,
Haskins Laboratories, 1977, pp. 139-174;
Balzano, G. J., "Musical versus psychoacoustical
variables and their influence on the perception
of musical intervals," Bulletin of the Council
for Research in Music Education, 1981; Bachem,
A. "Tone Height and tone Chroma as two different
pitch qualities," Acta psychogica, 1950, vol.
7, pp. 80-88; Moreno, E., "Expanded Tunings in
Contemporary Music: Theoretical Innovations and
Practical Applications," Vol. 30, Studies in the
History and Interpretation of Music, The Edwin
Mellen Press, Lewiston: 1992; Moreno, E. "The
Existence of Unexplored Dimensions of Pitch:
Expanded Chroma," Proc. ICMA, 1992, pp. 404-405;
Pierce, J.R. "Attaining Consonance in Arbitrary
Scales," JASA, 1966, pg. 249; Butler, J. W. and
Daston, P. G., "Music Consonance as Musical
Preference: A Cross-Cultural Study," Journ. of Gen.
Psych., 1968, vol. 79, pp. 129-142; Hutchinson,
W. and Knopoff, L., "The Acoustic Component of
Western Consonance," Interface, Vol. 7, 1978,,
pp. 1-29; Watkins, A. J., "Perceptual Aspects of
synthesized approximations to Melody," JASA,
Vol. 78 No. 4, 1985, pp. 1177-1186; Pikler, A. G.,
"Mels and Musical Intervals," Journ. Mus. Theory,
Vol. 10, 1966, pp. 288-298; Risset, J-C., "Musical
Acoustics," Rapports IRCAM 1978, pp. 7-8.
Why are these 3 models of ear/brain function
important to musicians in the real world?
They're of crucial concern to microtonalists because
if the place theory is right, then just intonation is
the ideal tuning. If the periodicity theory is the
correct description of how the ear hears, then
many tunings are acceptable provided that the
interval between the fundamental and the first
partial, and between each subsequent pair of partials,
is larger than the critical band at that frequency.
As Pickles points out, the periodicity theory does
*not* offer support for conventional Western
musical practice: "It might be thought that the
pleasant consonance of simple musical intervals
depends on the simple relations between their periods,
resulting in synchronous nerve firing. However,
once it is realized that most musical notes are
rich in overtones, and that consonance might depend
on a lack of beats between the harmonics, the
argument cannot be used to support the importance
of time information." [Pickles, James O., "An Introduction
To the Physiology of Hearing," Academic Press, 2nd ed.,
1988, pg. 274]
On the other hand, if Fetis/Ward/Burns' "pattern
recognition" hypothesis of the ear as a pliable active
feedback system molded by learned response is the true
picture of human hearing, then *any* type of tuning is
acceptable. After a while, the listeners will
become acculturated and learn to accept *any*
arbitrary interval as "consonant" or "dissonant."
And what if elements of all three models are
at work in the ear/brain system?
In that case, the implications for musical tuning
are more complex--a situation which will be
considered in the next post.
--mclaren
&

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Sun, 1 Oct 1995 02:44 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id RAA20743; Sat, 30 Sep 1995 17:43:51 -0700
Date: Sat, 30 Sep 1995 17:43:51 -0700
Message-Id: <9510010034.AA17220@us2rmc.zko.dec.com>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/1/1995 9:31:39 AM
From: mclaren
Subject: Tuning & Psychoacoustics - post 8 of 25
---
As xenharmonists, it's of some interest to us
exactly how the ear/brain system works. If the
human ear favors one tuning system over another,
we want to know about it.
Alas, the evidence is far from clear, and there are
problems with all three hypotheses of ear/brain
function.
Concerning the place and periodicity hypotheses
of hearing, "On current evidence, it is not possible
to decide between the temporal and place theories
of frequency distribution. (..) In any case the best
support for the eclectic [pattern recognition] view
is the rather negative one that the evidence in favour
of either of the other two theories is not conclusive,
and this may be a function of the quality of evidence
available, rather than of the actual operation of the
auditory system." [Pickles, James O., "An
Introduction to the Physiology of Hearing," Academic
Press, 2nd ed., 1988, pg. 277]
Risset summarizes the evidence for and problems
with all 3 proposed ear/brain mechanisms in
his 1978 IRCAM report: "Numerological theories of
consonance suffer difficulties. Because of the
ear's tolerance, intervals corresponding to 3:2 (a
simple ratio) and 300,001/200,000 (a complex
ratio) are not discriminated. Also psychophysiological
evidence indicates that numerical ratios should not
be taken for granted. The subjective octave corresponds
to a frequency ratio a little larger than 2, and is
reliably different for different individuals (Ward,
1954; Sundberg and Lindqvist, 1973); this effect
is increased by sleep-deprivation (Elfner, 1964).
There are also physical theories of consonance.
Helmholtz (1877) links the degree of dissonance
to the audibility of beats between the partials
of the tones. This theory is hardly tenable,
because the pattern of beats, for a given interval,
depends very much on the placement of the
interval within the audible frequency range. Recent
observations (Plomp, 1966) suggest an improved
physical explanation of consonance: listeners find
that the dissonance of a pair of pure tones is maximum
when the tones are about a quarter of a critical
bandwidth apart; the tones are judged consonant
when they are more than one critical bandwidth
apart. Based on this premise, Pierce (1966, also
in von Foerster and Beauchamp, 1969, pp. 129-132)
has used tones made up of non-harmonic partials,
so that the ratios of fundamentals leading to
consonance are not the conventional ones;
Kameoka et al. (1969) have developed an
involved method to calculate the magnitude
of dissonance...
"Whereas the explanation put forth by Plomp
can be useful to evaluate the "smoothness"
or "roughness" of a combination of tones, it
is certainly insufficient to account for
musical consonance. In a laboratory study,
Van de Geer et al. (1962) found that intervals
judged the most consonant
by laymen do not correspond to the ones
usually termed consonant. This result is
elaborated by recent work by Fuda and Wessel (1977).
The term consonance seems ambiguous, since it
refers at the same time to an elemental level,
where "smoothness" and "roughness" are
evaluated, and to a higher esthetic level,
where consonance can be functional in a given
style. The two levels are related in a culture-
bound fashion. In music, one does not judge only
the consonance of isolated tones: as Cazden (1945)
states, "context is the determining factor. (...) the
resoution of intervals does not have a natural
basis; it is a common response acquired by all
individuals within a culture area (cf. also Lundin,
1947)." Musical consonance is relative to a musical
style ( Guernesey, 1928); ninth chords, dissonant
in Mozart's music, are treated as consonant by
Debussy (Chailley, 1951; Cazden, 1962, 1968, 1972).
The cultural and contextual aspects of musical
consonance are so important that, despite nativists'
claims to the contrary, purely mathematical and/or
physical explanations can only be part of the story
(cf. Costere, 1962)." [Risset, J-C., "Musical Acoustics,"
Rapports IRCAM, 1978 pp. 7-8]
In short, all 3 hypotheses of hearing explain different
aspects of the ear/brain system. Depending on the
acoustic stimulus, different systems appear to operate
to process sound. This is most powerfully evidenced by Sethares,
W., "Local Consonance and the Interaction between
Timbre and Tuning," JASA, vol. 94 No. 3, 1993, pp. 1218-1219,
Slaymaker, F. H., "Inharmonic Tones," JASA, 1970, and
Roads, C. "An Interview with Max Mathews," Computer Music
Journal, 1980.
In the latter, Mathews points out:
"Our initial experiments were aimed at finding
out what properties of normal harmonic music
carried over to music that was made with
stretched overtones. We found some things
carried over and some things did not. The
sense of "key" carried over better than we
expected.
ROADS: So you can actually detect "keys" in
sequences of completely inharmonic sounds.
MATHEWS: That's right. You play two samples
and a person can reliably say whether they're
in the same or a different key.
Other properties do not carry over. The sense
of finality in a traditional cadence does
not carry over. A person who hears a cadence
with unstretched tones says, "That sounds
very final to me." When he hears the same
cadence played with stretched tones, he'll
say "That doesn't sound especially final." But
we have been able to make other inharmonic
materials which do convey a sense of cadence.
(...)
ROADS: If we can detect "keys" and some form
of finality within a cadence or progressions
within inharmonic tones, then some of the
theories of harmony in the past must not have
been as cogent as some of their proponents
have thought them to be.
MATHEWS: Our results are contradictory. We
looked at two theories. One was the Rameau
theory of the fundamental bass, and the other
was the Helmholtz and Plomp theory of the
consonance and dissonance of overtones. The
destruction of the cadence would support
the Rameau theory and the persistence of
the sense of key would support the Helmholtz
and Plomp theory. So we have one result which
supports one theory and one which supports the
other, with the overall conclusion that the world is
a more complicated place than we had perhaps
hoped it was. We will have to dig deeper
before we can say which is causing
the various perceptions we find meaningful
to music." [Roads, C., "An Interview with Max
Mathews," Computer Music Journal, Vol. 4, No. 4, 1980,
pp. 21-22]
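For readers who have not met "stretched overtones" before, the sketch
below shows the recipe Mathews and Pierce describe elsewhere in their
writing on stretched tuning (restated here from memory, so treat the
details as an assumption to be checked against their papers): every
factor of 2 in the partial series is replaced by a pseudo-octave A.

import math

# Stretched partials: replace each factor of 2 with a pseudo-octave A,
# so partial n of a tone on f0 lies at f0 * A**log2(n).  A = 2.0 gives
# the ordinary harmonic series; A = 2.1 is a typical "stretched" case.
def partials(f0, n_partials, pseudo_octave):
    return [f0 * pseudo_octave ** math.log2(n) for n in range(1, n_partials + 1)]

print([round(f, 1) for f in partials(220.0, 6, 2.0)])   # harmonic reference
print([round(f, 1) for f in partials(220.0, 6, 2.1)])   # stretched version

In the stretched case the 2:1 octave no longer lines the partials up;
the pseudo-octave A:1 does, which is why these experiments stretch
the scale and the timbre together.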
MYTH: PITCH PERCEPTION IS SIMPLE, AND
CONSONANCE, DISSONANCE AND HARMONY
CAN BE EXPLAINED BY EITHER RAMEAU'S
OR HELMHOLTZ'S MODELS
FACT: Musical phenomena are a complex
interaction of at least 3 ear/brain mechanisms
for recognizing and assigning pitch, and
each ear/brain system can conflict with
the other 2, leading to paradoxical results,
auditory illusions and a great deal of learned
behaviour on the part of the listener as to
what "consonance," "dissonance," and even
"pitch" are.
This latter will prompt the usual screams
of protest from those into whose brainpans
little information about modern psychoacoustics
has dripped; thus it is important to
make clear that even something as purportedly
"elementary" and "innate" as the pitch sense
displays extremely complex behaviour.
The next post will examine the meaning(s)
of the term "pitch," the psychophysical
factors which influence its perception, and
the implications for tuning & music.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Sun, 1 Oct 1995 20:53 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id LAA27839; Sun, 1 Oct 1995 11:52:54 -0700
Date: Sun, 1 Oct 1995 11:52:54 -0700
Message-Id: <199510011852.AA24025@net4you.co.at>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/2/1995 9:12:35 AM
From: mclaren
Subject: Tuning & Psychoacoustics - Post 9 of 25
---
"A theory should as simple as possible--
but not simpler." -- Albert Einstein
---
MYTH: PITCH IS THE LOGARITHMIC FREQUENCY
HEIGHT OF A MUSICAL TONE, AND ITS
PERCEPTION IS AUTOMATIC AND INNATE.
FACT: There are 3 kinds of pitch: physical,
mel and perceptual pitch. The 3 differ
significantly. Perceptual pitch is never
identical to the log of the frequency of
the fundamental of the perceived tone,
and mel pitch differs radically from both.
The perception of all 3 kinds of pitch is
strongly influenced by both context and
learned experience.
---
The perceived pitch of sine tones depends
on their duration and their loudness. "A
150-Hz tone increasing from 45 to 90 dB
drops in pitch to an extent corresponding
to a 12% frequency shift. This is close
to 2 semitones in the diatonic scale. The
sensitivity to this effect varies considerably
between individuals. (...) A funny consequence
of this is that a soft sine tone at 300 Hz
may sound as a pure octave of a loud sine
tone at 168 Hz. The mathematically pure
octave, however, has the frequency of 150
Hz. The tone that sounds as a pure octave
is 12% too high. This means that mathematically
it is a minor seventh! This is a good argument
for avoiding confusion of perceptual and physical
entities." ["The Science of Musical Sounds,"
Sundberg, 1992, pg. 46]
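The numbers in Sundberg's example repay a second look, since the
sentence reads oddly at first. A two-line check (the 12% figure is
the one given in the quote; everything else follows from it):

import math

loud_tone_hz = 168.0
perceived_hz = loud_tone_hz / 1.12      # heard ~12% flat, i.e. near 150 Hz
soft_tone_hz = 300.0

print(f"loud 168 Hz tone is heard near {perceived_hz:.0f} Hz, "
      f"so a soft {soft_tone_hz:.0f} Hz tone sounds like its octave")
print(f"physically, 300:168 = {soft_tone_hz / loud_tone_hz:.3f} "
      f"({1200 * math.log2(soft_tone_hz / loud_tone_hz):.0f} cents: "
      f"a wide minor seventh, not a 1200-cent octave)")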
Pierce explains that "By asking naive subjects
to relate frequency changes of sine waves to
a halving of pitch, psychologists found a mel
scale of pitch (for sine waves). In the mel scale
there is no simple relation between frequency
and pitch; nothing like the octave shows up. (...)
The sounds of orchestral bells are not periodic,
and these sounds do not have all the properties
of periodic musical sounds. One can play tunes
with bells, and the pitches that are assigned
to bells can be explained largely in terms
of the frequencies of prominent almost-
harmonic partials.
"Clucking sounds and shushing sounds (bands
of noise) have brightness, but no
periodicity. Oddly, *we can play a
recognizable tune with these sounds,
even though they cannot be heard as
combining into chords or harmony.*
Apparently, in the absence of a clear pitch,
brightness can suggest pitch." ["The Science
of Musical Sound," Pierce, J.R. 1992,
pg. 37]
"Systematical series of experiments have been
carried out in which listeners have been
asked to adjust the frequency of a variable
tone so that it sounds "twice as high"
or "half as high" as a reference tone. (...)
The mel scale is constructed such that
a halving of the number of mels corresponds
to a halving of the pitch perceived. As
shown in the figure, a tone with the pitch
of 1,000 mel sounds twice as high as
another tone with the pitch of 500 mel.
Examination of the figure tells us this
corresponds to a frequency shift from
approximately 1,000 to 380 Hz." ["The
Science of Musical Sounds," Sundberg,
1992, pg. 47]
The mel scale of pitch measures what
is also sometimes called "ratio pitch."
This is drastically different from the
ordinary scale of perceptual musical
pitch, since the mel scale applies only
to sine tones.
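Sundberg reads his mel values off a published figure. For readers who
want numbers to play with, one widely used analytic approximation to
the mel scale is mel(f) = 2595 log10(1 + f/700); using it here is an
assumption on my part (Sundberg gives no formula), and it lands near
390 Hz rather than the ~380 Hz read from his figure:

import math

# A common analytic approximation to the mel scale (assumed; see above).
def hz_to_mel(f):
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

m = hz_to_mel(1000.0)
print(f"1000 Hz -> {m:.0f} mel")
print(f"half as high ({m / 2:.0f} mel) -> {mel_to_hz(m / 2):.0f} Hz, "
      f"nowhere near the 500 Hz that 'half the frequency' would suggest")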
However, the plot thickens as soon as
we realize that *many musical timbres
can be modelled as sums of sine waves.*
Thus the conceptual basis for
conventional harmonic-series models
of consonance, as well as for the
purported acoustic superiority of
small whole-number ratios,
comes into doubt as soon as we
begin to examine the psychoacoustic
evidence in detail. If perceptual
pitch is always different from
physical logarithmic pitch, how
can either equal-tempered or
just intonation tunings offer a
valid model for musical harmony
and musical melody?
To make matters even more complex,
"In certain cases the amplitude
dependence of the pitch of complex
tones is the opposite of that shown
in Figure 3.4. If the loudness of a
complex tone of about 100 Hz
fundamental frequency is increased,
its pitch may rise rather than drop."
["The Science of Musical Sounds,"
Sundberg, 1992, Pg. 46]
"The pitch of pure tones depends not
only on frequency, but also on other
parameters such as sound pressure
level. (...) Pitch shifts of pure
tones can also occur if additional
sounds that produce partial masking
are presented. Pitch shifts produced
by a broad-band noise masker are
shown in Fig 5.4, and are given as
a function of both frequency and
critical-band rate of the pure tones,
the level of which is 50 dB. (...)
The results displayed in Fig. 5.5 show pitch
shifts up to 8% at low frequencies
near 300 Hz, and a pitch shift
of only 1% at higher frequencies
between 1 and 4 kHz, due to the octave
ratio of partial-masking tone
and test tone." ["Psychoacoustics:
Facts and Models," Zwicker
and Fastl, 1993, pp. 105-107]
The pitch of pure tones is also
dependent on their duration.
Doughty and Garner (1948) found
that pitch is unchanging for tones of
25 msec and longer, but that
12-msec and 6-msec tones have
a lower pitch. [See Doughty, J. M. and
Garner, W.R. "Pitch Characteristics
of short tones. II: Pitch as a function
of tonal duration," J. Exp. Psychol.,
Vol. 38, pp. 478-494, 1948]
Corso summed up the situation when
he concluded that "the pitch of musical
sounds is not directly proportional to
the logarithm of the frequency and is
probably complexly conditioned." [Corso,
J.F., "Scale Position and Performed Musical
Octaves," Journal of Psychology, Vol. 37,
1954]
In short, most of what musicians
"know" about pitch is untrue,
at least as far as pure sine tones
are concerned. This casts strong
doubt on tuning theories which ascribe
to various pitch relationships "special"
characteristics--in particular, the
data adduced in this post cast strong
doubt on both just and equal-tempered
tuning systems, and would tend instead
to favor non-just non-equal-tempered
tunings.
What about complex tones made
up of many sinusoidal components
and the influence of learning and
context? What are the implications
for tuning and for music?
That's the topic of the next post.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Mon, 2 Oct 1995 19:10 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id KAA17686; Mon, 2 Oct 1995 10:10:07 -0700
Date: Mon, 2 Oct 1995 10:10:07 -0700
Message-Id:
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/3/1995 8:22:15 AM
From: mclaren
Subject: Tuning & psychoacoustics - post 10 of 25
---
So far we have seen the complexity of the ear's response to pitch; far from being a
quantity linearly proportional to the logarithm of frequency, pitch appears
to be influenced by many aspects of timbre--amplitude, masking tones,
overtone harmonicity, and range of the note played.
The ear's perception of complex sounds is equally complicated.
"Both von Helmholtz and Wundt based the development of harmony and
melody on the coinciding harmonics for consonant intervals." [Plomp, R. and
Levelt, W.J.M., "Tonal Consonance and Critical Bandwidth," Journ. Acoust.
Soc. Am., Vol. 6, No.1, April 1965, pg. 549]
Simple experiments, conducted in the 1960s, showed this not to be the case.
"On the basis of more recent and more sophisticated experiments (Plomp and
Levelt, 1965) on consonance judgment involving pairs of pure tones and
inharmonic complex tones, it became apparent that the beats between
harmonics may not be the major determining factor in the perception of
consonance. Two pure tones an octave or less apart were presented to a
number of musically naive (untrained) subjects who were supposed to give a
qualification as to the "consonance" or "pleasantness" of the superposition.
A *continuous pattern* was obtained, that did not reveal preferences for
any particular musical interval. Whenever pure tones are less than about a
minor third apart, they were judged "dissonant" (except for the unison);
intervals equal to or larger than a minor third were judged as more or less
consonant, irrespective of the actual frequency ratios. The shape of the
curve really depends on the absolute frequency of the fixed tone." [Roederer,
J., "The Physics and Psychophysics of Music," 1973, Pg. 142]
Plomp and Levelt called this interval, somewhat smaller than a minor third,
"the critical bandwidth." It changes size slightly at lower frequencies.
"...maximal tonal dissonance is produced by intervals subtending 25% of the
critical bandwidth, and maximal tonal consonance is reached for interval
width of 100% of the critical bandwidth." [Plomp, R. and Levelt, W.J.M.,
"Tonal Consonance and Critical Bandwidth," Journ. Acoust. Soc. Am., Vol. 6,
No.1, April 1965, pg. 549]
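The Plomp-Levelt curve for a pair of sine tones can be written down
compactly. The sketch below uses the parameterization Sethares fit to
Plomp and Levelt's data in the 1993 JASA paper cited earlier in this
series; the specific constants are his, and the 440 Hz reference is
arbitrary:

import math

# Roughness of two sine partials, Sethares' (1993) fit to the
# Plomp-Levelt data: the bump peaks near a quarter of a critical band
# and dies away past a full critical band, with no special treatment
# of "musical" ratios.
def pair_roughness(f1, f2, a1=1.0, a2=1.0):
    b1, b2, dstar, s1, s2 = 3.5, 5.75, 0.24, 0.021, 19.0
    s = dstar / (s1 * min(f1, f2) + s2)   # scales the curve to the critical band
    x = abs(f2 - f1)
    return a1 * a2 * (math.exp(-b1 * s * x) - math.exp(-b2 * s * x))

f_low = 440.0
for cents in (0, 25, 50, 100, 200, 400, 700, 1200):
    f_high = f_low * 2 ** (cents / 1200.0)
    print(f"{cents:4d} cents: roughness {pair_roughness(f_low, f_high):.3f}")

For sine tones the printout is exactly the smooth, ratio-blind curve
described in the Roederer quote above: one bump of roughness peaking
near a semitone (roughly a quarter of a critical band at 440 Hz) and a
steady slide toward smoothness beyond it, with nothing special at 3:2
or any other "pure" ratio.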
"An interesting consequence of the significance of the critical bandwidth is
that the degree of dissonance of a dyad depends on many factors other than
the frequency ratio. If the tones have many strong overtones, the
consonance quality is reduced. A consonant dyad becomes increasingly
dissonant when it is transposed downward on the frequency scale, just as
for sine tones. For example, a major third sounds reasonably consonant
around A4, but if played close to C2 it sounds quite dissonant on most
instruments. The relation between consonance and frequency ratios is also
entirely dependent on whether the tones have harmonic spectra.
"Consonance is apparently a highly conditioned phenomenon. It is stimulating
to realize that the dissonance/consonance concept in music theory would
have been entirely different if our musical instruments had not provided us
with harmonic spectra!" [Sundberg, J., "The Science of Musical Sounds,"1992,
pg. 85]
This finding lends support to all three tunings (just, equal tempered, non-
just non-equal) provided that the partials of the musical notes are changed
so as to fit the tuning.
As mentioned above, Plomp's and Levelt's findings (extended and refined by
Kameoka and Kuriyagawa's formula for calculating the consonance of
complex tones) also casts doubt on many conventional "rules" of harmony
and melody--even if just intervals and perfectly harmonic overtones are
used: "As an application of the consonance theory, effects of harmonic
structure on the consonance characteristics are discussed. (...)
(...) It became clear that the fifth was not always a consonant interval. A
chord of two tones that consists of only odd harmonics, for example, shows
much worse consonance at the fifth (2:3) than at the major sixth (3:5) or
some other frequency ratios. This was proved true by psychological
experiments carried out in another institute (Sensory Inspection Committee
in the Japan Union of Scientists and Engineers) with a different method of
scaling. Thus, the fact warns against making a mistake in applying the
conventional theory of harmony to synthetic musical tones that can take
variety in the harmonic structure." [Kameoka, A., and Kuriyagawa, M.,
"Consonance Theory Part II: Consonance of Complex Tones and Its
Calculation Method," Journ. Aoucst. Soc. Am., Vol. 45, No. 6, 1969, pg. 1460]
Because consonance and dissonance depend not on the harmonicity of two
complex tones, but on the coincidence (or lack thereof) of their component
partials within the critical bandwidth for that frequency range, "...these
examples refer to a vast domain opened up by digital synthesis, namely that
of inharmonic tones. Most sustained instrumental tones are quasi-
periodic, and their frequency components are harmonically related, which
stresses certain intervals like the octave and the fifth. With the freedom of
constructing tones from arbitrary frequency components, one can break the
relationship between consonance-dissonance aspects and fixed, privileged
intervals (Pierce 1966). In his piece Stria (1977), Chowning has thus been
able to make rich textures permeate each other without dissonance or
roughness, by controlling the frequencies constituting these textures. This
is also a case where spectra not only play a coloristic role (see Roads 1985)
but actually perform a quasi-harmonic function." [Risset, J.C., "Digital
Techniques and Sound Structure in Music," in "The Music Machine," ed. Curtis
Roads, 1985, pg. 122]
"By using a digital computer, musical tones with an arbitrary distribution of
partials can be generated. Experience shows that, in accord with Plomp's
and Levelt's experiments with pairs of sinusoidal tones, when no two
successive partials are too close together such tones are consonant rather
than dissonant, even though the partials are not harmonics of the fundamental.
For such tones, the conditions for consonance of two tones will not in general
be the traditional ratios of the frequencies of the fundamentals. (...) It
appears that, by providing music with tones that have accurately specified
but nonharmonic partial structures, the digital computer can release music
from the tyranny of 12 tones without throwing consonance overboard."
[Pierce, J.R., Journ. Acoust. Soc. Am., Vol. 6, No. 12, 1966, pg. 249]
"I suggest that the nonharmonic domain of frequency relationships may in
some way contain a necessary system of hierarchical structural functions."
[Dashow, J., "Spectra As Chords," Computer Music Journal, 1980]"
"The hypothesis has been made that perceived effects similar to the
consonance and dissonance experienced with harmonic tones should exist
for inharmonic tones. Clearly, it cannot be claimed that the perceptions are
exactly the same, since inharmonic and harmonic tones themselves sound
different to the ear. However, the experiments do establish a similarity
between the consonance dissonance phenomenon in harmonic and inharmonic
sounds." [Geary, J.M., "Consonance and Dissonance of Pairs of Inharmonic
Tones," J.Acoust. Soc. Am, 67 (5), May 1980]
"The chords sounded smooth and nondissonant but strange and somewhat
eerie. The effect was so different from the tempered scale that there was
no tendency to judge in-tuneness or out-of-tuneness. It seemed like a peek
into a new and unfamiliar musical world, in which none of the old rules
applied, and the new ones, if any, were yet undiscovered." [Slaymaker, F. H,
"Chords From Tones Having Stretched Partials," J. Acoust. Soc. Am., Vol. 47, pp. 1469-1471, 1970]
"We have to compose real music of many kinds within all and any of our new
tuning schemes, if this work is to have any lasting value at all, or be taken
seriously by the music community..."[Carlos, W., "Tuning: At the Crossroads,"
Computer Music Journal, 1987]
In short, "Experiments with inharmonic partials (Slaymaker, 1970; Pierce,
1966) have shown that consonance or dissonance is indeed dependent on the
coincidence of partials and not necessarily on the simple frequency ratio
between the fundamental frequencies..." [Rasch, R.A. and Plomp, R., "The
Perception of Musical Tones," in "The Psychology of Music," ed. Diana
Deutsch, 1982, pg. 21]. Thus all three tuning systems appear equally viable
on the basis of the evidence considered in this post, given a digital or acoustic instrument whose partials are matched to the tuning system
in question.
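The same pairwise curve extends directly to complex tones: add up the
roughness of every pair of partials sounding at once. The sketch below
does this (again following Sethares' 1993 paper cited above), with
made-up 1/n amplitudes and the stretched-partial recipe sketched a few
posts back, to show in numbers why an interval sounds smooth exactly
when the partials of the two tones line up:

import math

def pair_roughness(f1, f2, a1, a2):
    # Sethares' (1993) fit to the Plomp-Levelt sine-pair data (his constants).
    b1, b2, dstar, s1, s2 = 3.5, 5.75, 0.24, 0.021, 19.0
    s = dstar / (s1 * min(f1, f2) + s2)
    x = abs(f2 - f1)
    return a1 * a2 * (math.exp(-b1 * s * x) - math.exp(-b2 * s * x))

def total_roughness(tone_a, tone_b):
    # Sum over every pair of partials sounding together (within and across tones).
    both = tone_a + tone_b
    return sum(pair_roughness(f1, f2, a1, a2)
               for i, (f1, a1) in enumerate(both)
               for f2, a2 in both[i + 1:])

def tone(f0, n=6, pseudo_octave=2.0):
    # (frequency, amplitude) pairs; the 1/k amplitudes are made up for the demo.
    return [(f0 * pseudo_octave ** math.log2(k), 1.0 / k) for k in range(1, n + 1)]

f0 = 220.0
print("harmonic tones at 2:1    ",
      round(total_roughness(tone(f0), tone(2.0 * f0)), 3))
print("stretched tones at 2:1   ",
      round(total_roughness(tone(f0, pseudo_octave=2.1),
                            tone(2.0 * f0, pseudo_octave=2.1)), 3))
print("stretched tones at 2.1:1 ",
      round(total_roughness(tone(f0, pseudo_octave=2.1),
                            tone(2.1 * f0, pseudo_octave=2.1)), 3))

With harmonic partials the 2:1 octave is the smooth interval; with
partials stretched to a 2.1 pseudo-octave, 2:1 turns rough and 2.1:1
becomes the smooth one--which is the sense in which matching the
partials to the tuning rescues consonance for non-just non-equal
scales.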
The next post will discuss the important phenomenon of categorical
perception, and its implications for tuning and music.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Tue, 3 Oct 1995 19:30 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id KAA04880; Tue, 3 Oct 1995 10:30:18 -0700
Date: Tue, 3 Oct 1995 10:30:18 -0700
Message-Id:
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/4/1995 10:18:47 AM
From: mclaren
Subject: Tuning & Psychoacoustics - Post 11 of 25
---
MYTH: "I KNOW WHAT MY EARS HEAR AND
I KNOW WHAT A 'PURE' OCTAVE, A 'PURE'
FIFTH, AND A 'PURE' THIRD IS."
FACT: Because of the phenomenon of
categorical perception, none of us knows what
we actually hear--as opposed to what
our ear/brain system brainwashes us
into *believing* we hear. The only way to
actually *determine* what you're
hearing (rather than what you *think*
you're hearing) is to use double-blind
psychoacoustic tests.
Until 1969, such tests were seldom
used. Computers were unheard-of;
and prior to computer-generated
psychoacoustic test tones, the only
available readily controllable test
tones were those generated by analog
circuits whose frequency drifted
by significant amounts as the temperature
of the sound-generating circuit changed.
As a result, ignorance of the
ear/brain system's behaviour was
near-absolute prior to Max Mathews'
creation of the acoustic compiler in 1959.
As a result of Mathews' innovation, many
surprising properties were discovered in
the ear/brain system.
One of the most surprising of these properties
is known as "categorical perception."
---
The phenomenon of categorical perception
is familiar to linguists.
Everyone pronounces phonemes,
vowels, and consonants slightly differently--
and in different regional dialects the
sound of a word may be entirely
transformed. In New England, "pahk my cah,"
in the MidWest, "park my car," down south,
"purk m' cuhr." The ear/brain system
has a learned mechanism for dealing with
these differences--different sounds are
heard as the same semantic unit. This
ear/brain system of learned categorization
is known as categorical perception, and it
operates so efficiently that people in a given
region of the country cannot even "hear" their
own accent. As far as they can tell, they're
speaking "standard English"--everyone else is
slurring or pinching or warping their words "with
some strange kind of accent."
Categorical perception has been proven to
operate in the perception of musical sounds,
and it gives rise to many of the same distortions
of the auditory system.
In the paper "Categorical Perception--Phenomenon
or Epiphenomenon: Evidence from experiments in
the perception of melodic musical intervals,"
by E. M. Burns and W. D. Ward [JASA, vol. 63, No. 2,
1978, pp. 456-468], the authors point out:
"An experiment on the perception of melodic
intervals by musically untrained observers
showed no evidence for the existence of
"natural" categories for musical intervals."
The authors also found that for trained
musicians "musical intervals are also
rather unique in that musicians are able to
perfectly identify more than 30 categories
of musical intervals..." These results
strongly contradict both the standard 12-tone
dogma that only the intervals of the 12-TET
scale are recognizable or musically
significant, and the standard "natural
interval" dogma which holds that some
musical intervals are [fill in your own
preferred propaganda] "natural," "pure,"
"preferred," "rational," etc.
Among other interesting conclusions, Ward
and Burns found that "The average difference
limen (based on the 75% correct
points from the psychometric functions)
for three subjects at the physical octave
was 16 cents. The DL's at other ratios in the
vicinity of the octave were not
significantly different. A DL of 16 cents is in
good agreement with the DL estimated from
the standard deviation of repeated
adjustments of sequential octaves (about 10
cents) in the same frequency region found by
Ward (1954). (...) As in Moran and Pratt's
experiment, large differences were found for
DL's at different ratios, but the range of
DL's (14-25 cents) was in good agreement
with their results." [Burns, E. M. and Ward,
W.D., JASA, 63(2), Feb. 1978, pg. 456]
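Since this series leans heavily on the cent as a unit, the conversion
is worth one line: an interval of c cents is a frequency ratio of
2^(c/1200). A quick check of what a 16-cent limen means in Hz at an
880 Hz octave (the 440 Hz reference is an arbitrary choice of mine):

import math

def cents(f1, f2):
    return 1200.0 * math.log2(f2 / f1)

octave_hz = 2.0 * 440.0                              # arbitrary reference
dl_hz = octave_hz * (2.0 ** (16.0 / 1200.0) - 1.0)   # 16 cents, in Hz
print(f"a 16-cent limen at {octave_hz:.0f} Hz is about {dl_hz:.1f} Hz")
print(f"check: cents({octave_hz:.0f}, {octave_hz + dl_hz:.1f}) = "
      f"{cents(octave_hz, octave_hz + dl_hz):.1f}")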
This preference for stretched as
opposed to purportedly "natural" intervals
is not a new discovery. As will be seen
in the post after this one, the preference
for stretched vertical intervals--and for
significantly *wider* melodic than
vertical intervals--was discovered by
the very first researchers who
investigated the operation of the
ear/brain system.
What are the implications of these particular
psychoacoustic data for tuning and music?
First, these data explain clearly and convincingly
why there are so many different tuning systems
and timbres used in the various musics of cultures
throughout the world. Because of the influence of
categorical perception and the implied importance
of learned response on the ear/brain system, any
system of pitches can be learned as "preferred"
by the ear. Thus a Mongolian Buddhist using the r-gynd-stad
tuning can with equal justification claim that the pitches
of his musical system enjoy a privileged status in the
ear/brain system as can a Javanese gamelan performer.
According to the psychoacoustic results adduced above,
both musicians are correct--because the cultures in
which their pitch preferences were formed characterize
those particular pitches as "special." And because of the
known effects of learned response and categorical perception,
a wide variety of pitches can equally be perceived as
"special" or "uniquely privileged."
These psychoacoustic data would also tend to support current
Western musical practices, at least to the extent that the Western
12-tone tuning system is acculturated into Western musicians
and composers, and to which Western performers and audiences
perceive departures from those pitches as falling within the
range of variability which (as Moran and Pratt point out) characterize
all pitches.
Categorical perception strongly favors all three classes of tuning--
just intonation, equal temperament and non-just non-equal tunings--
since once the pitches are learned and perceived as "special" or
"privileged" both audience and performers strongly tend to
perceive departures from those pitches as "ornamental," if
indeed the departures are heard as different pitches at all. If
these psychoacoustic results are accurate, all tuning systems
are self-reinforcing feedback systems, with "errors" heard as
slight variations of base pitches (as in the case of Jaipongan,
where slendro or pelog are used as base scales for ornamental
extra-scalar variations, or as in the various sruti of East Indian
practice, where the remaining 22 pitches are used as ornamental
extra-modal variational pitches, or as in the vocal inflexions of
Sinéad O'Connor, Louis Armstrong or Ella Fitzgerald, all of whom
consistently range microtonally outside the 12-tone equal-tempered
scale in which their songs purport to reside).
The next post will consider the effects of possible interactions
between the various ear/brain processes discussed to date,
and the evidence for such interactions, along with the implications
for tuning and music.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Thu, 5 Oct 1995 21:58 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id MAA22353; Thu, 5 Oct 1995 12:58:04 -0700
Date: Thu, 5 Oct 1995 12:58:04 -0700
Message-Id: <9510051256.aa03198@cyber.cyber.net>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/5/1995 12:58:04 PM
From: mclaren
Subject: Tuning & psychoacoustics - post 12 of 25
---
The psychoacoustic evidence for the periodicity and Fourier-analysis-based
models of hearing has now been examined.
But what about interaction between these two ear/brain systems?
Evidence for this comes from David Wessel's examination of a psychoacoustic
effect known as "streaming" in the mid-70s:
"Consider a melodic line of eleven tones where the even-numbered tones and
the odd-numbered tones are separated in register. As shown...at a rate of 5
or 6 tones per second, a listener would hear the sequence as a coherent
succession. At a faster tempo--10 to 12 tones per second--the high tones
group together to form a separate stream from the low tones. At an
odge, C., Jerse, T. "Computer Music: Synthesis, Composition and Performance," 1985, pg. 47]
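What follows is a minimal synthesis sketch in Python (numpy and scipy
assumed), not part of Wessel's own stimuli: it renders an eleven-tone line
whose odd- and even-numbered tones sit in different registers at roughly 5
and 11 tones per second, so the two percepts can be compared by ear. The
pitches, envelope and output filenames are arbitrary illustrative choices.

# Sketch of Wessel's streaming demo: an eleven-tone line whose
# even- and odd-numbered tones sit in different registers, rendered
# at two tempi.  All parameters are arbitrary illustrative choices.
import numpy as np
from scipy.io import wavfile

RATE = 44100

def tone(freq, dur):
    t = np.arange(int(RATE * dur)) / RATE
    env = np.minimum(1.0, np.minimum(t, dur - t) / 0.005)  # 5 ms ramps
    return np.sin(2 * np.pi * freq * t) * env

def sequence(tones_per_second):
    low = [262, 294, 330, 349, 392, 440]        # odd-numbered tones, low register
    high = [1047, 1175, 1319, 1397, 1568]       # even-numbered tones, high register
    order = [low[0], high[0], low[1], high[1], low[2], high[2],
             low[3], high[3], low[4], high[4], low[5]]   # eleven tones, alternating
    dur = 1.0 / tones_per_second
    return np.concatenate([tone(f, dur) for f in order])

# At ~5 tones/s most listeners hear one coherent line; at ~11 tones/s
# the high and low tones tend to split into two separate streams.
for tps in (5, 11):
    sig = sequence(tps)
    wavfile.write(f"streaming_{tps}tps.wav", RATE,
                  (0.5 * sig * 32767).astype(np.int16))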
Time is a crucial factor in pitch perception, implying further feedback
between the periodicity and Fourier-analysis systems of pitch detection:
"The data...show that the just-noticeable relative frequency difference
increases with decreasing test-tone duration. (...) At long durations (around
500 ms) a critical-band rate difference of 0.01 Bark represents the just-
noticeable difference for pitch. At a duration of 10 ms, the JNDF amounts
on average to 0.2 Bark. For a decrease of the test-tone duration by a factor
of 10, the magnitude of the JNDF, expressed in critical-band rate,
increases by a factor of 10. Thus pitch differences which are easily
detected at long durations are no longer distinguishable at short durations.
This effect is well known to musicians: inaccuracies in intonation, easily
detected in sustained tones, almost disappear if the tones are considerably
shortened in duration, for example by playing `spiccato.'" [Zwicker, E. and H.
Fastl, Psychoacoustics: Facts and Models, 1990, pg. 116]
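A rough numerical reading of the figures just quoted, as a short Python
sketch; the 200 ms cutoff and the inverse-proportion form below are
assumptions made for illustration, not Zwicker and Fastl's model.

# Rough interpolation of the quoted figures: the pitch JND is about
# 0.01 Bark for long tones (>= ~200 ms here, an assumed cutoff) and
# grows roughly in inverse proportion to duration for shorter tones,
# reaching ~0.2 Bark at 10 ms.
def pitch_jnd_bark(duration_s):
    long_jnd = 0.01                  # Bark, long-duration value (quoted)
    if duration_s >= 0.2:
        return long_jnd
    return min(0.2, long_jnd * 0.2 / duration_s)   # ~0.2 Bark at 10 ms

for d in (0.5, 0.2, 0.1, 0.05, 0.01):
    print(f"{d * 1000:5.0f} ms -> JND ~ {pitch_jnd_bark(d):.3f} Bark")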
Other results make it clear that the perception of pitch is dependent on the
temporal order of tones:
"When presented witha group of spectral components, a listener may or may
not fuse them into the percept of a single sound. One of the determining
factors is the "onset asynchrony" of the spectrum which refers to the
difference in entrance times among the components. (...) Rudolf Rasch has
noticed a related phenomenon with regard to the synchronization of tones in
chords in polyphonic music. He has found that the amount of asynchrony in
starting times of chord tones actually improves our ability to perceive the
individual tones while we continue to perceive the chord as a whole.
Rasch has shown that the effect obtains best when the attacks of the tones
are spread out over a time span of 30 to 50 msec." [Dodge, C. and Jerse, T.,
"Computer Music: Synthesis, Composition and Performance," 1985, pg. 59]
Because of this unexpected interdependence of time with frequency
perception, it seems likely that all 3 of the ear's mechanisms for processing
sound interact to some degree. There is further evidence for this interaction
between all 3 ear/brain systems in the form of auditory paradoxes:
"Paradoxical effects can be obtained thanks to the precision and flexibility
inherent in computer synthesis. Shepard produced a sequence of 12 tones in
chromatic succession which seem to rise indefinitely in pitch when they are
repeated. I extended this paradox and generated, e.g., ever-ascending or
descending glissandi, and sounds going down the scale and at the same time
getting shriller. These paradoxes are not merely "truquages"--artificial
curiosities: they reflect the structure of our pitch judgments. Pitch appears
to comprise a focalized aspect, related to pitch class, and a distributed
aspect, related with spectrum, hence with timbre--and the paradoxes are
obtained by controlling independently the physical counterpart of these
attributes, which are normally correlated. I have even manufactured a
sound which goes down in pitch for most listeners when its frequencies are
doubled--i.e., when one doubles the speed of the tape recorder on which it is
played; this shows how misleading mere intuition can be in predicting the
effect of simple transformations on unusual sounds." [Risset, J.C., "The
Development of Digital Techniques: A Turning Point for Electronic Music?"
Rapports IRCAM No. 9, 1978, pg. 7]
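A minimal Python sketch (numpy and scipy assumed) of the Shepard
construction Risset extends: each chromatic step is built from octave-spaced
components under a fixed bell-shaped spectral envelope, so the looped
sequence seems to climb endlessly although no component ever leaves the
same frequency region. The envelope, durations and partial count below are
arbitrary illustrative choices.

# Shepard sequence sketch: 12 chromatic steps, each made of octave-spaced
# components weighted by a fixed log-frequency envelope.
import numpy as np
from scipy.io import wavfile

RATE = 44100
STEP_DUR = 0.25
CENTER = np.log2(440.0)        # envelope centred near 440 Hz (log-frequency)
SPREAD = 1.5                   # envelope width in octaves

def shepard_tone(semitone):
    t = np.arange(int(RATE * STEP_DUR)) / RATE
    tone = np.zeros_like(t)
    base = 27.5 * 2 ** (semitone / 12.0)          # start deep in the bass
    for k in range(8):                            # eight octave-spaced components
        f = base * 2 ** k
        if f > RATE / 2:
            break
        amp = np.exp(-0.5 * ((np.log2(f) - CENTER) / SPREAD) ** 2)
        tone += amp * np.sin(2 * np.pi * f * t)
    return tone

cycle = np.concatenate([shepard_tone(s) for s in range(12)])
loop = np.tile(cycle, 4)                          # repeated, the pitch seems to climb forever
wavfile.write("shepard.wav", RATE,
              (0.3 * loop / np.abs(loop).max() * 32767).astype(np.int16))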
These effects make clear the importance of context and the subtlety of
interaction among the ear's various methods of sound processing.
This would tend to militate against tuning theories which stress absolutes
like beats or the convenience of easy modulation, and would support instead
the use of non-just non-equal-tempered tunings whose context-driven
character mirrors this aspect of the ear/brain system.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Fri, 6 Oct 1995 07:40 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id WAA01139; Thu, 5 Oct 1995 22:40:25 -0700
Date: Thu, 5 Oct 1995 22:40:25 -0700
Message-Id: <951006053855_71670.2576_HHB33-5@CompuServe.COM>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/6/1995 11:06:52 AM
From: mclaren
Subject: Tuning & psychoacoustics - post 13 of 25
---
One of the oldest of old wives' tales about the ear/brain
system is the fairy tale that the ear/brain system responds
in some special (viz., magical) way to intervals described
by small whole numbers. Sometimes the superstition is overt,
betraying clearly its occult origins in Cabbalism, gematria,
Hindu astrology and Babylonian extispicy--as in Zarlino's choice of
the senario (small whole numbers less than 6) as the
basis for tuning because of the Medieval system of number
mysticism, according to which 6 was "a perfect number" because
its proper divisors (1, 2 and 3) add up to the number itself.
Or as in the case of Johannes Avianius' view of the 4:5:6 triad
as "Unitrisona omnis Harmoniae," a mystical "three in one"
musical trinity deriving its supernatural potency from
the Christian Father, Son and Holy Ghost.
In other cases the superstition is gussied up with references
to Fourier analysis--the mathematics of which (as has been seen)
do not explain many properties of the ear/brain system.
Contrary to this long-running musical myth, psychoacoustic
evidence shows that listeners hear stretched intervals as
"pure" and intervals with the small whole number ratios
predicted by numerological theories of consonance as "too
flat" and "impure."
The evidence is extensive:
In his 1976 thesis for the Dept. of Speech Communication and Music
Acoustics at the Royal Institute of Technology at Stockholm, K. Agren found
the average size of the major second to be 199 cents; the major third 402
cents; the purportedly "perfect" fifth 704 cents; and the octave 1204 cents.
The standard deviation for all subjects was 14 cents for the M2nd, 9 cents
for M3rd, P5 10 cents, and octave 10 cents. In accord with all other
experiments on perception of musical intervals, Agren found that subjects
uniformly preferred *stretched* intervals on average 5-8 cents wider than
their purportedly "natural" counterparts.
This preference for stretched as opposed to purportedly "natural" intervals
is not a new discovery. The preference for stretched vertical intervals--
and for significantly *wider* melodic than vertical intervals--was
discovered by the very first researchers who investigated the operation of
the ear/brain system.
C. J. Delezenne, in "Memoires sur les valeurs numeriques des notes de la
gamme," Recueil des travaux de la Societe des Sciences de Lille, 1826-
1827, was the first researcher to identify the preference for thirds wider
than the purportedly "natural" 5:4. Delezenne's data also showed a
preference for stretched octaves: "In fact, it is a daily experience that in
climbing to the octave from the tonic, the musical ear demands the octave
so strongly that in order to get away from the leading tone, and to arrive
more quickly at the octave, the latter is raised involuntarily." (Pg. 24,
ibid.) Delezenne used an adjustable monochord and gave his values in
fractions of a Pythagorean comma rather than in cents.
His results are especially striking because when starting the experiment he
explicitly rejected the Pythagorean system, but was eventually forced to
admit the now-universally-recognized preference for stretched intervals
wider than the so-called "natural" intervals.
In 1869 Cornu and Mercadier built a phonautograph, an early method of
recording waveforms. Messrs. C. & M. directed the sound waves with a
parabolic dish toward a membrane, which in turn engraved the waveform on
a smoked drum of camphorated cellulose by means of a needle.
Simultaneously an electric chronometer marked the drum at
intervals of 1 second. By measuring the wavelength, Cornu and Mercadier
could precisely determine pitch--as opposed to the usual wild guess about
what pitch was actually played, or heard (the "wild guess" method is still
used by many contemporary music theorists). Many musicians assert on the
basis of nothing other than some undefined extrasensory perception that
they "can tell a Pythagorean third when they hear it," etc.
As these experiments unmistakably prove, this is not so.
Again, to make the point clear, categorical perception leads us to recognize
as "perfect" or "natural" intervals which differ greatly from small-whole-
number ratios; moreover, there is at work a universal human craving for
stretched intervals.
The amount of evidence for human preference for stretched octaves is so
voluminous that no digressions can be afforded. The subject will be dealt
with later in this series. Thus, to return to the early scientific record:
Cornu & Mercadier found that the mean intonation for a vertical dyad
yielded the ratio 1.251, close to but sharper than the 5/4; however, a
consecutive-tone test yielded 1.2666 as the mean value.
Notice that this shows a marked preference not only for stretched vertical
intervals, but for even more widely stretched melodic intervals.
The Pythagorean third is 1.2656, so the experiment showed that in the
successive intonation of 2 tones, the subjects preferred a sharped
Pythagorean third. Cornu and Mercadier interpreted their results to mean
that musicians customarily use 2 different thirds for vertical and
successive intervals, and that they also prefer intervals somewhat sharper
than would be expected by contemporary models of human hearing. [Cornu &
Mercadier, "Sur les intervalles musicaux," Comptes Rendus de l'Academie
Royale des Sciences, 1869a, pp. 301-308.]
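Converting the measured ratios to cents makes the comparison explicit. The
short Python helper below is ordinary cents arithmetic (1200 times the
base-2 logarithm of the frequency ratio), applied to the figures quoted
above.

# Convert the measured frequency ratios to cents so they can be compared
# directly with 5:4 (~386 cents) and 81:64 (~408 cents).
import math

def cents(ratio):
    """Size of a frequency ratio in cents: 1200 * log2(ratio)."""
    return 1200.0 * math.log2(ratio)

for label, r in [("vertical mean (Cornu & Mercadier)", 1.251),
                 ("melodic mean (Cornu & Mercadier)", 1.2666),
                 ("just major third 5:4", 5 / 4),
                 ("Pythagorean major third 81:64", 81 / 64)]:
    print(f"{label:35s} {cents(r):6.1f} cents")
# The vertical mean (~388 cents) is slightly sharp of 5:4, and the melodic
# mean (~409 cents) is slightly sharp of even the Pythagorean third.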
The evidence for a preference for wider-than-"rational" intervals doesn't
end in 1869, however. It scarcely begins there:
In 1876 Preyer found that what he called the "index of sensitivity" for the
major thirds was 158; this corresponds to 11 cents. So intervals as large
as 397 cents were still identified as major thirds. This is midway between
the 5/4 and the 81/64 (about 11 cents short of the 408-cent Pythagorean
major third). [Preyer, W. T., "Ueber die Grenzen der Tonwahrnehmung," Jena, 1876]
In 1897-8 Carl Stumpf studied both the minor third and the major third in 3
forms: ascending, descending and simultaneous intervals. His results showed
a marked asymmetry in the spread of sharp vs. flat errors. For the minor
third, the listeners were more inclined to accept considerable flatting than
slight sharping. Stumpf stated that the "point of subjective purity" shifted
toward flat minor thirds. Analysis of data from his second experiment with
major thirds showed exactly the opposite tendency. In this case, the point of
subjective purity shifted to sharp thirds. [Stumpf, C. and M. Meyer,
"Massbestimmungen ueber die Reinheit consonanter Intervalle," in Beitraege
zur Akustik und Musikwissenschaft, Vol. 2, 1898]
Moran and Pratt's investigations in 1926 found an average error of 18 cents
for the intervals of the 12-TET scale. "There is a range of about half an
equal semitone midway between each musical interval, within which an
interval should be recognized by D as neither of the familiar intervals, next
above or below it." This meant that major thirds sharped by as much as 25
cents were still recognized by their subjects as pure thirds. [Moran, H. and
C. C. Pratt, "Variability of Judgements on Musical Intervals," Journal of
Experimental Psychology, Vol. 9, 1926]
Comprehensive statistics showing a preference for melodic Pythagorean
thirds were first compiled by P.C. Greene in 1937. [Greene, P.C., "Violin
Performance with Reference to Tempered, Natural and Pythagorean
Intonation," Iowa Studies Music 4, pp. 232-251, 1937]
Bolt was the first to use pure sine waves and electronic instrumentation. In
1947 he observed mistuned major thirds and found the same results--
although his main interest was in spectral cues for identifying mistuned
intervals. [Bolt, R.H., "Masked Differential Pitch Sensitivity of the Ear for
Musical Intervals, " JASA, vol. 19, 1947]
In 1948 Nickerson observed the same melodic preference for widely
stretched (viz., near-Pythagorean) thirds. [Nickerson, J.F., "A Comparison of
Performance of the Same Melody in Solo and in Ensemble with Reference to
Equi-Tempered, Just, and Pythagorean Intonation," Ph.D. thesis, Univ. Minn.,
1948]
Ward studied the octave sense along with preferred values for thirds and
fifths starting in 1954.
Ward's 1970 chapter in "Music Perception" lends further support to the
preference for wider-than-5:4 thirds, both vertically and melodically.
Additional papers which amass evidence for this universal human preference
include Ward, W.D., "Music Perception," in "Foundations of Modern Auditory
Theory," ed. J. V. Tobias, 1970; see also Ward, W.D., "Subjective Musical
Pitch," JASA, Vol. 26, 1954. Corso and Pikler & Harris reproduced these
results in the late 50s-early 60s. [Corso, J.F. "Scale Position and Performed
Musical Octaves," J. Pysch., Vol. 37, 1954; Pikler, A. G. & J.D. Harris,
"Measurement of the Musical Interval Sense," JASA, Vol. 33, pg. 862, 1961.]
Pikler's article "History of Experiments on the Musical Interval Sense,"
Journal of Music Theory, 1966, pp. 55-95, summarizes these results at
length.
During the late 60s and throughout the 70s Johan Sundberg used computer
analysis of recordings of live performances to determine performers'
intonational preference. His difficult-to-obtain report STL-QPSR 2-3,
Speech Transm. Lab., Stockholm 1970, "Statistical computer measurement
of the tone-scale in played music," by Fransson, F., Sundberg, J. and
Tjernlund, P., is summarized in Sundberg's 1992 text as confirming
Delezenne's, Vos', Corso's, Ward's, Lichte's, Pikler & Harris', et al.'s
results; Sundberg, J. "The Science of Musical Sounds," 1992, pg. 105.
Roederer's 1973 text quotes Ward's confirming data for Pythagorean thirds.
Roederer ascribes the preference to Terhardt's hypothesis of universally
stretched intervals rather than Pythagorean intonation as such.
In 1975 Terhardt and Zick published a paper showing that for tone
sequences emphasizing melody over accompaniment, subjects preferred all
intervals stretched. [Terhardt, E. & Zick, M. "Evaluation of the Tempered Tone
Scale in Normal, Stretched and Contracted Intonation," Acustica, Vol. 32,
1975, pp. 268-274.]
The most recent paper supporting the stretched-third preference appears to
be Vos, J., "Quality Ratings of Tempered 5ths and Major 3rds," Music
Perception, Vol. 3, No. 3, 1986.
By this point it should be clear that the evidence for musicians preferring
intervals *wider* than the 81:64 as a "major third" goes back at least 125
years.
What about the octave?
This subject is so enormous and so important that the purported "perfect"
octave of 2:1 will be dealt with separately in the next two posts.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Fri, 6 Oct 1995 22:52 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id NAA12599; Fri, 6 Oct 1995 13:52:09 -0700
Date: Fri, 6 Oct 1995 13:52:09 -0700
Message-Id: <009977C6B6BABBDC.42AF@ezh.nl>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/7/1995 7:42:06 AM
From: mclaren
Subject: Tuning & psychoacoustics - post 14 of 25
---
"In the case of the octave, the craving for stretching has been noticed for
both dyads and melodic intervals. The amount of stretching preferred
depends on the mid frequency of the interval, among other things. The
average for synthetic, vibrato-free octave tones has been found to be about
15 cents. Thus, subjects found a just octave too flat but an octave of 1215
cents just." [Sundberg, J. "The Science of Musical Sounds," 1992, pg. 104]
MYTH: "The interval is just or not at all." [Harrison, "Lou Harrison's Music
Primer," 1971, pg. 48]
FACT: "For centuries, musical folklore has held that the simplest ratios are
the best ratios, in musical intonation. Thus the interval between two
frequencies having a ratio of 3:2 is the "perfect" fifth; 4:3 gives a "perfect"
fourth, etc. (...) These philosophical and a priori views of temperament,
however, are hardly supported by empirical evidence." [Ward, W.D. and
Martin, W.D., "Psychophysical Comparison of Just Tuning and Equal
Temperament in Sequences of Individual Tones," JASA, Vol. 33, No. 50, 1961,
pg. 586]
MYTH: "This 2 to 1 relationship is a constant one...the fact is that nature
does not offer one tone and its doubling (200 to 400) as a given quality of
relationship, and the same quality of relationship in two tones which are
not a ratio of doubling (200 to 600, for example)" [Harry Partch, "Genesis of
a Music," 2nd. Ed., 1974, pg. 77.]
FACT: "If a frequency of 8 kHz is chosen for f1, subjects produce for the
sensation of `half pitch' not at a frequency of 4 kHz, but a frequency of about
1300 Hz." [E. Zwicker and H. Fastl, "Psychoacoustics: Facts and Models,"
1993, pg. 103.]
MYTH: "If, through some terrestrial disaster, our [equal-tempered] musical
system were completely lost, it would sooner or later be inevitably
rediscovered, just as it exists today, after having passed through
transformations identical or similar to those it has undergone." [Ducup
de Saint-Paul, quoted in Matthys Vermeulen, "Hic et Nunc, Jacobe," Djawa,
Vol. 12, 1932, pp. 146-149]
FACT: "It is quite remarkable that musicians seem to prefer too wide or
"stretched" intervals." [Johan Sundberg, "The Science of Musical Sounds,"
1992, pg. 103.]
MYTH: "Notice that these frequency clumps are arranged in a harmonic series
based on a fundamental frequency half that of tone M, and also that any lack
of accuracy in setting an exact 3/2 frequency ratio will be called to our
attention... " [Benade, A. H., Fundamentals of Musical Acoustics, 1975, pg.
272]
FACT: "The experimental results very convincingly show that, on the
average, singers and string players perform the upper notes of the major
third and the major sixth with sharp intonation (Ward 1970)...The same
experiments revealed that also fifths and fourths and even the almighty
octave were played or sung sharp, on the average! (A reciprocal effect
exists. Pure octaves are consistently judged by musicians to sound flat!)
Rather than revealing a preference for a given scale (the Pythagorean), these
experiments point to the existence of a previously unexpected *universal
tendency to play or sing sharp all musical intervals.* (italics in original
text)" [Juan Roederer, "Introduction to The Physics and Psychophysics of
Music," 1973, pg. 155.]
MYTH: "Consequently, these statements can be conclusively made; the ear
consciously or unconsciously classifies intervals according to their
comparative consonance or comparative dissonance; this faculty in turn
stems directly from the comparative smallness or comparative largeness
of the numbers of the vibrational ratio..." [Harry Partch, "Genesis of a Music,"
2nd. Ed., 1974, pg. 87.]
FACT: "Therefore it must be concluded that even just or pythagorean
intonation cannot be considered as ideal. Rather, optimum intonation of a
diatonic scale probably depends on the structure of the actual sound in the
same manner as has been previously discussed with respect to tempered
scales." [E. Terhard and S. Zick, "Evaluation of the Tempered Tone Scale In
Normal, Stretched, and Contracted Intonation," Acustica, Vol. 32, 1975, pg.
273.]
MYTH:"The reason that the ratio does not change is simply and wholly because physiogically the ear does not change excpet over a period of thousands and millions of years." [Partch, Genesis of a Music, 2nd ed.,
1974, pg. 97]
FACT: "The degree of consonance depends on the quality or spectrum of the
component tones, i.e., the relative intensity of dissonant vs. consonant upper
harmonics." [Juan Roederer, "The Physics and Psychoacoustics of Music," pg.
143.]
MYTH: "Long experience in tuning reeds on the Chromelodeon convinces me
that it is preferable to ignore partials as a source of musical materials.
The ear is not impressed by partials as such. The faculty--the prime
faculty--of the ear is the perception of small-number intervals, 2/1, 3/2,
4/3, etc., etc., and the ear cares not a whit whether these intervals are in or
out of the overtone series." [Harry Partch, "Genesis of a Music," 2nd. Ed.,
1974, pg. 87.]
FACT: "In 1987 IPO issued a wonderful disc by Houtsma, Rossing and
Wagenaars...illustrating the effects of a moderate stretching...of scale
frequencies and/or partial spacings. Part of a Bach chorale is played with
synthesized tones. When neither scale nor partial frequencies are stretched,
we hear the intended harmonic effect. When the scale is unstretched but the
partial frequencies are stretched, the music sounds awful. Clearly, intervals
in the ratio of small whole numbers are in themselves insufficient to give
Western harmonic effects." [John R. Pierce, "The Science of Musical Sound,"
2nd Ed., pp. 91-92.]
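The demonstration Pierce describes depends on tones whose partials are
uniformly stretched. The Python sketch below shows one standard way such a
spectrum can be generated: partial n is placed at f0 times n raised to
log2(A), so that the second partial lands at A times the fundamental (a
"pseudo-octave" of A instead of 2). The stretch factor A = 2.1, the partial
count and the 1/n amplitudes are assumptions made for illustration, not
details taken from the IPO disc.

# Uniformly stretched spectra: partial n at f0 * n**log2(A).
# With A = 2 the spectrum is harmonic; with A = 2.1 every partial is
# pushed sharp by the same "stretch".
import math
import numpy as np

def partial_frequencies(f0, n_partials=8, pseudo_octave=2.0):
    exponent = math.log2(pseudo_octave)
    return [f0 * n ** exponent for n in range(1, n_partials + 1)]

def stretched_tone(f0, dur=1.0, rate=44100, pseudo_octave=2.0):
    t = np.arange(int(rate * dur)) / rate
    tone = np.zeros_like(t)
    for i, f in enumerate(partial_frequencies(f0, pseudo_octave=pseudo_octave)):
        tone += (1.0 / (i + 1)) * np.sin(2 * np.pi * f * t)
    return tone

print(partial_frequencies(220.0, 4, pseudo_octave=2.0))   # harmonic: 220, 440, 660, 880
print(partial_frequencies(220.0, 4, pseudo_octave=2.1))   # stretched: 220, 462, ~713, ~970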
MYTH: "In the previous section of this chapter, I made a definition: we would
henceforth reserve the word *tone* to refer to sounds having harmonic
partials. For emphasis, I will often refer to such sounds as musical
tones...to underline the fact that harmonically related complexes of partials
have a very special perceptual status that happens also to make
them useful in music." [Benade, A.H., Fundamentals of Musical Acoustics,
1975, pg. 264]
FACT: "Clearly the timbre of an instrument strongly affects what tuning and
scale sound best on that instrument." [Wendy Carlos, "Tuning: At the
Crossroads," Computer Music Journal, 1987.]
"Most instruments in our music culture produce harmonic spectra, as
mentioned. However, in the contemporary computer-aided electroacoustic
music studios, this is not a necessary constraint any longer. One would then ask
if this does not open up quite new possibilities also with respect to
harmony. If one decides to use one particular kind of inharmonic spectra for
all tones, it should be possible to tailor a new scale and a new harmony to
this inharmonicity." - Johan Sundberg, "The Science of Musical Sounds," pg.
100.
"By using a digital computer, musical tones with an arbitrary distribution of
partials can be generated. Experience shows that, in accord with Plomp's
and Levelt's experiments with pairs of sinusoidal tones, when no two
successive partials are too close together such tones are consonant rather
than dissonant, even though the partials are not harmonics of the
fundamental. For such tones, the conditions for consonance of two tones
will not in general be the traditional ratios of the frequencies of the
fundamentals... [The 8-TET scale] is, of course, only one example of many
possible scales made up of tones whose upper partials are not harmonics of
the fundamental and having unconventional intervals, which nonetheless can
exhibit consonance and dissonance comparable to that obtained with
conventional musical instruments (which have harmonic partials) and the
diatonic scale. It appears that, by providing music with tones that have
accurately specified but nonharmonic partial structures, the digital computer
can release music from the constraint of 12 tones without throwing
consonance overboard." [John R. Pierce, "Attaining Consonance in Arbitrary
Scales," Journal of the Acoustical Society of America, 1966, p. 249.]
"The physical correlate of an interval is not a ratio, anymore than the
physical correlate of a pitch is a frequency. Intervals and pitches both have
thresholds, ranges of variability," [Moran, H. and C. C. Pratt, "Variability of
Judgments on Musical Intervals," Journal of Experimental Psychology, Vol. 9,
1926]
Evidence for this conclusion is so voluminous and so detailed that it
cannot be contained in a single series of 22 posts. However, the next post
scratches the surface of this body of evidence, and hints at the enormous
extent of the experimental data showing a universal human preference for
stretched octaves, fifths, thirds, etc.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Sun, 8 Oct 1995 03:26 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id SAA22621; Sat, 7 Oct 1995 18:25:50 -0700
Date: Sat, 7 Oct 1995 18:25:50 -0700
Message-Id: <951008012346_71670.2576_HHB23-1@CompuServe.COM>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/8/1995 8:12:29 AM
From: mclaren
Subject: Tuning & psychoacoustics - post 15 of 25
---
MYTH:"In any given range of pitch the comparative consonance of an interval
is determined by the relative frequency of the wave period in the sounding
of the interval." [Harry Partch, "Genesis of a Music," 2nd. Ed., pg. 151.]
FACT: "Systematic measurements show that people tend to find that an
interval traditionally classified as "consonant" sounds progressively
dissonant the farther down in the bass it is played...In the very low bass,
even the octave sounds dissonant!" [Johan Sundberg, "The Science of Musical
Sounds," pg. 73.]
"Even the *order* in which two instruments define a musical interval is
relevant. For instance, if a clarinet and a violin sound a major third, with
the clarinet playing the lower note, the first dissonant pair of harmonics
will be the 7th harmonic of the clarinet with the 6th harmonic of the violin
(because only odd harmonics of the clarinet are present). This interval
sounds smooth. If, on the other hand, the clarinet is playing the upper tone,
the 3rd harmonic of the latter will collide with the 4th harmonic of the
violin tone, and the interval will sound `harsh.'" [Roederer, J., "Introduction
to The Physics and Psychophysics of Music," 1973, pg. 143.]
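Roederer's clarinet/violin example can be made concrete with a crude
roughness estimate. The Python sketch below is a simplification, not
Roederer's own calculation: the "clarinet" is idealized as having only odd
harmonics, the "violin" all harmonics, both with 1/n amplitudes, and each
ordering of a 5:4 major third is scored by summing a simple kernel over
pairs of partials whose spacing is a moderate fraction of a Zwicker-style
critical bandwidth. The spectra, bandwidth formula, kernel shape and 220 Hz
reference are all assumptions. The fundamentals contribute identically to
both orderings, so any difference comes from the upper partials; under
these particular assumptions the clarinet-above ordering tends to score
rougher, in line with the quoted description.

# Crude roughness comparison of the two orderings of a clarinet/violin
# major third, using idealized spectra (odd harmonics vs. all harmonics).
import math

def partials(f0, odd_only, n=10):
    ks = range(1, n + 1, 2) if odd_only else range(1, n + 1)
    return [(k * f0, 1.0 / k) for k in ks]

def critical_bandwidth(f):
    # Zwicker-style approximation of the critical bandwidth in Hz.
    return 25.0 + 75.0 * (1.0 + 1.4 * (f / 1000.0) ** 2) ** 0.69

def roughness(spec_low, spec_high):
    total = 0.0
    for fa, aa in spec_low:
        for fb, ab in spec_high:
            x = abs(fa - fb) / critical_bandwidth(min(fa, fb))
            # crude kernel: zero for coincident partials, peak near x ~ 0.25
            total += aa * ab * (x / 0.25) * math.exp(1.0 - x / 0.25)
    return total

f_low = 220.0
f_high = f_low * 5 / 4      # major third

clar_below = roughness(partials(f_low, odd_only=True), partials(f_high, odd_only=False))
clar_above = roughness(partials(f_low, odd_only=False), partials(f_high, odd_only=True))
print(f"clarinet below, violin above: roughness ~ {clar_below:.2f}")
print(f"violin below, clarinet above: roughness ~ {clar_above:.2f}")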
In his 3-part article, "Some Aspects of Perception," Shackford reveals how
widely so-called "perfect" intervals can deviate when performed by trained
symphony-caliber performers from major orchestras--yet these intervals
are still heard as "perfect." Because Shackford's measurements offer such
remarkable proof of the ear/brain's categorical perception mechanism at
work, the article deserves an extended quote:
"Mean Values (MV) and quartile deviations (QD) of interval sizes measured in
a string trio [composed of members of the New York Philharmonic] as
compared with just, equally tempered and Pythagorean tunings (J,ET and P):
--------------------------------------
Dyads
Interval MV (cents) QD (cents)
Major 2nd 204 197-211
minor 3rd 305 287-318
Major 3rd 410 402-418
Fifth 707 699-714
---------------------------------------
Melodic
Interval MV (cents) QD (cents)
Minor 2nd 93 86-101
Major 2nd 204 199-209
Fourth 501 408-510
Fifth 701 692-708
---------------------------------------
[Shackford, "Some Aspects of Perceptions," Journ. Mus. Theory, Vol. 6, 1961]
"Fifths and twelfths are shown in Examples 17 and 18; and then a statistical
analysis of the sizes of these intervals in performance is represented in
Example 19. The spread of 50 cents, a quarter tone, between the largest and
smallest fifths played is surprisingly large for the interval that is supposed
to be the most sensitive to inaccuracies of intonation." [Shackford, C. "Some
Aspects of Perception - I," Journ. Mus. Theory, Vol. 6, 1961, pg. 185]
"Thirds and tenths are shown in Example 20 and are then analyzed in
Example 21. Major thirds and tenths have about the same spread [50 cents]
but the median and mean is larger for thirds. (...) As with fifths, long-held
thirds and tenths show a narrower spread than those used in passing. Though
sizes approaching the "natural" value of 386 cents were used, the
Pythagorean interval of 408 cents appears to be the most representative of
actual practice." [Shackford, C., "Some Aspects of Perception - I," Journ. Mus.
Theory, Vol. 6, 1961, pg. 189]
"For barbershop quartets, a Major 3rd of 403 cents was preferred, while a
fourth of 493 cents and a fifth of 705 cents was preferred." [Sundberg, J.
"The Science of Musical Sounds," 1992, pg. 100]
"It is quite remarkable that musicians seem to prefer too wide or 'stretched'
intervals in many cases. Above we have seen several examples of interval
stretching: the barbershop singers' fifth and just minor seventh; string trio
players' melodic major and minor thirds and fifths; music listeners
preferred sizes of fifth and octave; and a professional musician's settings
of melodic intervals that contain ascending fifths. In the case of octaves,
the craving for stretching has been noticed for both dyads and melodic
intervals. The amount of stretching preferred depends on the mid frequency
of the interval, among other things. The average for synthetic, vibrato-free
octave tones has been found to be about 15 cents. Thus, subjects found a
just octave too flat but an octave of 1215 cents just. (...) A German
acoustician, Ernst Terhardt, developed an interesting theory that proposes
an explanation for why we are so eager to stretch octaves. He departed
from the fact that the pitch of a sine tone is changed if another sine tone
starts to sound simultaneously. The net result is that both tones push the
pitch of the other tone away from its own pitch to increase the distance
between the two. In this way, the pitch interval between the first partials
in a harmonic spectrum becomes a bit stretched. (...) Terhardt believes that
this stretched octave follows us from cradle to casket and that it is this
octave that musicians model when they play." [Sundberg, J. "The Science of
Musical Sounds," 1992, pp. 103-105]
Additional papers which adduce evidence for the universal preference for
stretched octaves include Terhardt, E., "Pitch, Consonance and Harmony,"
JASA, Vol. 55, 1970, pg. 410. Sundberg does not cite Terhardt, E., "On the
perception of periodic sound fluctuations (roughness)," Acustica, Vol. 30, 1974, pg.
201. Nor does he cite Terhardt, E. and Zick, M., "Evaluation of the Tempered
Tone Scale in Normal, Stretched and Contracted Intonation," Acustica, Vol.
32, 1975, pp. 269-276, in which Terhardt points out: "Therefore it must be
concluded that even just or Pythagorean intonation cannot be considered as
ideal. Rather, optimum intonation of a diatonic scale probably depends on
the structure of the actual sound in the same manner as has been previously
discussed with respect to tempered pianos." [Terhardt, E. and Zick, M., op
cit.]
Terhardt goes on to conclude: "It is remarkable, however, that stretched
intonation is distinctly preferred to contracted intonation. Probably, the
pitch interval established by the simultaneous complex tones fits better
with the mentally stored octave interval in stretched intonation than in
contracted intonation. This is well in line with the psychoacoustic
phenomenon of octave enlargement." [Terhardt, E. and Zick, M. , op cit.]
Various musicians who find these results inconvenient have claimed that
the preference for stretched intervals occurs not because Western
musicians "prefer" non-just sharp intervals, but because they learn to play
them; a wealth of evidence disproves this claim.
In "Octave adjustment by non-western musicians," Edward M. Burns
states "The results were essentially the same as found with Western
musicians, that is, small intrasubject variability, large intersubject
variability, and a small but statistically significant frequency-dependent
"stretch" of the physical octave when adjusting the subjective octave.
(...) As suggested by Terhardt, they do seem to rule out the explanation
that this stretch is learned from the "stretched" tuning of the piano
to which Western musicians are universally exposed. This stretched
tuning of the piano is due to certain physical characteristics of the
piano strings and is not found in most instruments to which the Indian
musicians are exposed." [Burns, E. M., op. cit., Session M: Musical acoustics,
88th meeting of the Acoustical Society of America; in JASA, vol. 56,
Supplement, pg. S 26]
In the same session, Burns' paper "In Search of the Shruti," cites evidence
from Indian musicians confirming "earlier experiments [which] indicate
that the phenomenon of "categorical perception" is present in the
perception of musical intervals." [Burns, op. cit., in JASA, Vol. 56,
Supplement, pg. S 26]
This explains why various just intonation fans hear the perceptually
non-octave 2:1 ratio as being a "pure" octave while listeners not
indoctrinated by constant exposure to just intonation accurately
perceive the distorted 2:1 ratio as smaller than the perceptual octave.
Exactly the same effect is at work among Javanese musicians who
hear many different gamelan as being tuned to "pelog" even though "the
majority of large Balinese gamelan are tuned with five pitches to the
octave, having some intervals larger than others, in a general pattern
that has come to be called pelog: but no two gamelan have exactly the
same pattern of intervallic structure. This is not for want of skill.
It is because the tuning pattern has been composed." [Erickson, Robert,
"Timbre and the Tuning of the Balinese Gamelan," Soundings, pg. 98, 1984]
Categorical perception also explains why equal tempered intervals are
accepted with such equanimity by listeners; a wide variety of different
intervals are perceived as "thirds" and "fifths" and "fourths" and "sixths"
in an actual musical performance setting: a psychological mechanism is
at work whereby listeners unconsciously process the sounds they hear
and fit unfamiliar intervals into familiar categories. Listeners
literally *hear what they expect to hear*--regardless of what is
*actually played.*
More evidence for this phenomenon comes from "Categorical Perception--
Phenomenon or epiphenomenon: Evidence from experiments in the perception
of melodic musical intervals," Burns, E. M. and Ward, W.D., JASA vol. 63, No.
2, Feb. 1978, pg. 456.
"In marked contrast to the extreme accuracy of musical-interval judgments
in the experimental situations cited above is the large variability found in
measurements of intonation in musical performance. The results of several
studies on intonation in performance of western classical music have been
summarized by Ward (1970). They show large variations in the tuning of
individual intervals (ranges of almost a semitone) in a given performance.
Similar variability has been found in measurements of intonation in
nonclassical western music (Stauffer, 1954; Fransson, Sundberg and
Tjernlund, 1970; Owens, 1974) and in nonwestern music (Jairazbhoy and
Stone, 1963; Callow and Shepherd, 1972; Spector, 1966). Of course this
large variability is not in itself surprising since variability in production of
tones is involved. The important point, however, is that in the above-cited
studies, all listeners, including the performing musicians, agreed that the
compositions were performed correctly and the large variability in
intonation was not detected.
"This apparent inability to detect large variations in interval size in certain
situations suggests that a phenomenon associated with the perception of
speech tokens, "categorical perception," may be involved." [Burns, E., and
Ward, W.D., "Categorical Perception--Phenomenon or epiphenomenon:
Evidence from experiments in the perception of melodic musical intervals,"
JASA vol. 63, No. 2, Feb. 1978, pg. 456.]
Again, in "Categorical Perception of Musical Intervals," Burns and Ward point
out "A body of evidence indicates that certain speech units are perceived in
in a special mode called `categorical perception,' the characteristics of
which are (1) the existence of well-defined identification functions and (2)
the ability to predict precisely the discrimination functions from the
identification functions, based on the assumption that the subjects can
discriminate two stimuli better than they can differentially identify them.
(...) A study of the perception of musical intervals by experienced musicians
was performed using the procedures associated with categorical perception
experiments;...the results show that the musicians exhibit categorical
perception, some to a degree approaching that shown in the perception of
stop consonants." [Burns, E. and Ward W., "Categorical Preception of Musical
Intervals," JASA, Vol. 55, No. 2, Feb. 1974]
The powerful evidence for categorical perception in listeners and
performers alike lends equal support to just, equal-tempered and non-just
non-equal-tempered tunings. Because listeners unwittingly brainwash
themselves to hear the intervals they expect regardless of what intervals
are actually performed, all three tunings should prove equally acceptable to
listeners.
Of course there is more evidence for a difference between the stretched
intervals heard as "pure" and the perceptually-distorted intervals
characterized by small whole numbers, and universally heard as
"too narrow" and "flat." The next post will examine some more
of this enormous body of evidence.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Sun, 8 Oct 1995 20:46 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id LAA28440; Sun, 8 Oct 1995 11:45:50 -0700
Date: Sun, 8 Oct 1995 11:45:50 -0700
Message-Id: <951008184423_71670.2576_HHB28-1@CompuServe.COM>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/9/1995 7:52:48 AM
From: mclaren
Subject: Tuning & psychoacoustics - post 16 of 25
---
The evidence for a universal human preference for stretched intervals is
so overwhelming that it appears throughout the length and breadth
of the psychoacoustic literature, both with Western and non-Western
musicians:
"Dowling has reported that measurements of Western and non-Western fixed
pitch instruments support Ward's conclusion that the perceptual octave is
some 15 cents larger than the physical or mathematical octave. Western
musical practice supports these conclusions (play sharp in higher octave).
Balinese gamelan tunings take advantage of this apparently widespread
characteristic of pitch perception to create a multi-octave beating complex
in their fixed pitch instruments." [Erickson, Robert, "Timbre and the Tuning
of the Balinese Gamelan," Soundings, pg. 100, 1984]
Particularly revealing is "The 1215-Cent Octave: Convergence of Western
and Non-Western Data on Pitch Scaling," W. J. Dowling, Abstract QQ5, 84th
meeting of the Acoustical Society of America, Friday, December 1, 1972,
p. 101 of program.
Yet more evidence for a universal preference for stretched octaves comes
from Sundberg, who found that the octave was as a rule played
significantly sharp by performing musicians, and was also preferred sharp
of the 2:1 in adjustment tests:
"Evidently the octave intervals in such stretched scales will exceed a 2:1
frequency ratio slightly. Thus, it is necessary to distinguish between the
*physical octave * (PO) which is defined as a 2:1 frequency ratio, and the
*subjective (musical) octave * (MO) that is perceived as pure. (...) As a rule,
the perceptual octave corresponds to a fundamental frequency ratio
exceeding 2:1." [Sundberg, J. and Lindqvist, J., "Musical Octaves and Pitch,"
JASA, 54(4), 1973, pp. 922-929]
Among the many implausible arguments which attempt to explain away this
mountain of experimental evidence for a preference for an octave interval
larger than the purportedly "pure" 2:1, most prevalent is the claim that
these "laboratory experiments do not represent real musical practice."
If this objection is correct, why does computer analysis of the frequencies
of pitches played during actual performances show the same uniform stretch
of the octave as the laboratory psychoacoustic
experiments? And why do psychoacoustic measurements and experiments
stretching back over 150 years uniformly produce the same results?
"This disparity between the physical and subjective octaves is not a new
discovery. Stumpf and Meyer, using the method of constant stimuli, had 18
subjects judge pairs of successive tones as greater than, less than, or equal
to an octave. The lower tone was 300 cps and the upper tone was varied
around 600 cps. They found that 602 cps (the highest upper tone used)
received 52 percent "less," 43 percent "equal," and 5 percent "greater"
responses from the group, indicating that the mean subjective octave of 300
cps was somewhere above 602 cps (the present Fig. 4 gives about 605 cps).
Later von Maltzew, in an investigation on the identification of intervals in
the upper frequency range, found that a physical octave was more often
called a major seventh or below than a minor ninth or above. See C. Stumpf
and M. Meyer, Beit. Akust. Musikw., Vol. 2, pp. 84-167, 1898. C. v. Maltzew, Z.
Psychol., Vol. 64, pp. 16-257, 1913. [Ward, W.D., "Subjective Musical Pitch,"
Journ. Acoust. Soc. Am. , Vol. 26, No. 3, May 1954, pg. 374]
"The average standard deviation of repeated adjustments of sequential or
simultaneous octaves composed of sinusoids is on the order of 10 cents
(Ward, 1953, 1954; Terhardt, 1969; Sundberg & Lindquist, 1973). A range of
average deviations from 4 to 22 cents for adjustments of the other
intervals of the chromatic scale (simultaneous presentation) has been
reported by Moran and Pratt (1926). Rakowski (1976) reports variability--in
interquartile ranges--of 20 to 40 cents for both ascending and descending
melodic versions of the 12 chromatic intervals. Other general trends
evident from the results of adjustment experiments are...a tendency to
'compress' smaller intervals (adjust narrower than equal-tempered
intervals) and "stretch" wider intervals (adjust wider)." [Burns, E. M., and
Ward, W.D., "Intervals, Scales and Tuning," in The Psychology of Music, ed.
Diana Deutsch, 1982, pg. 250.]
"A number of measurements have been made of the intonation of musicians
playing variable-tuning instruments under actual performance conditions
(e.g., Greene, 1937; Nickerson, 1948; Mason, 1960; Shackford, 1961, 1962, a,
b). The results of these measurements have been summarized by Ward
(1970). They show a fairly large variability for the tuning of a given
interval in a given performance--ranges of up to 78 cents, interquartile
values of up to 38 cents. The mean values of interval tunings, in general,
show no consistent tendency to either just intonation or Pythagorean
intonation in either melodic or harmonic situations. The general tendency
seems to be to contract the semitone and slightly expand all other intervals
relative to equal temperament. There is also some evidence of context-
dependent effects [e.g., to play F# sharper than Gb (Shackford, 1962 a,b)].
Those results mirror, to a certain extent, the results of the adjustment and
identification experiments using isolated intervals (discussed in Sections
III A and III B) which showed a tendency to compress the scale for small
intervals and stretch the scale for large intervals, in both ascending and
descending modes of presentation.
"The above measurements were obtained for Western classical music, but
the same general tendencies are evident in intonation from a military band
(Stauffer, 1954), Swedish folk musicians (Fransson, Sundberg & Tjernlund,
1970), and jazz saxophonists (Owens, 1974). Measurements of intonation
in performance for Indian (Hindustani) classical music (Jairazbhoy & Stone,
1963; Callow and Shepard, 1972) show similar variability." [Burns, E. M., and
Ward, W.D., "Intervals, Scales and Tuning," in The Psychology of Music, ed.,
Diana Deutsch, 1982, pg. 258.]
"Even the ubiquitous 5th itself is played, on the average, sharper than the
702 cents predicted; indeed, in Shackford's study, it is played sharpest in a
harmonic context, where the minimization-of-beat forces would be
expected to be the most active." (...) Thus evidence indicates strongly that in
musical performances the target pitch for frequencies actually produced in
response to a given notation is one that is just a shade sharper than that
called for by ET. In the 500 and 1000 Hz regions, even the subjective octave
(sacrosanct 2:1 in all theoretical systems) is about 1210 cents for pure
tones (Ward, 1954). In his studies, Shackford (1962 a,b) measured harmonic
10th, 11th and 12th and found that they were sharped to about the same
extent as 3rd, 4th and 5th.
"Boomsliter and Creel (1963) too have provided striking confirmation of this
theory. (...) ...it is clear from the sample data they present that the preferred
scale almost always is composed of tones consistently higher in frequency
than those of ET. For example, in three classical numbers (the Marseillaise,
a Bartok dance, and Mozart's Serenata Notturna), all notes above "do" are
preferred 4 to 23 cents sharp." [Ward, W.D., "Musical Perception," in
"Foundations of Modern Auditory Theory," ed. J.V. Tobias, Vol. 1, pp. 420-
421]
All of these results point to a subjective octave consistently wider than
the 1200-cent 2:1.
The next post will examine data bearing on the third theory of hearing--
a model of the ear so far not dealt with as extensively as the other two.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Tue, 10 Oct 1995 02:20 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id RAA12768; Mon, 9 Oct 1995 17:20:34 -0700
Date: Mon, 9 Oct 1995 17:20:34 -0700
Message-Id: <951010001440_71670.2576_HHB24-3@CompuServe.COM>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/11/1995 2:29:05 PM
From: mclaren
Subject: Tuning & psychoacoustics - post 17 of 25
---
While Ohm's/Helmholtz's view of the ear as Fourier analyzer has staggered
under the blows of psychoacoustic research, so has the
Seebeck/Stumpf/Schouten model of the ear as a neural periodicity pitch
detector.
True, 50 years of research has unearthed many results which cannot be
explained by the frequency-domain model or "place theory" of hearing... But the
periodicity model of hearing has *also* shown many shortcomings.
In particular, both models of the ear/brain system fail to explain or predict
the important phenomenon of categorical perception. Neither model of
hearing explains the existence or pitch of the Zwicker Tone, or Risset's and
Shepard's auditory paradoxes (sounds which when transposed upward by an
octave, drop in perceived pitch; sounds which appear to rise or fall
indefinitely in pitch yet whose fundamental frequency never changes;
sounds which are heard as speeding up/slowing down constantly yet whose
rate never changes).
Moreover, if the structure of the human ear is responsible for the music we
make and the tunings we use and the harmonies we prefer, how is it possible
to explain the fact that different cultures make entirely different kinds of
music?
If the ear is either a Fourier analyzer or a time-based autocorrelator which
responds most powerfully to either small-integer ratios played on
instruments with integer harmonics (the Fourier analysis model of hearing)
or fixed pitches played on instruments whose partials are matched to the
tuning--for example, the Railsback stretch of the grand piano matched to
the stretch of the partials of the piano strings--(this is the autocorrelation
model of hearing in which tuning & timbral partials, even if stretched, will
autocorrelate so as to yield uniform time-domain periodicities and thus a
sense of definite pitch with fundamental)...
If either of these models of hearing is accurate, why do the Javanese play
inharmonic instruments using stretched octaves and non-just non-equal-
tempered tunings?
Are other cultures deranged? Are their ears physically different from ours?
Musicologists of the 19th century dismissed the Javanese, the Balinese, and
other non-western music with references to "the degree of aural
development among races as well as individuals."
The variability in Javanese and Balinese tunings was dismissed by De Lange
and Snelleman in 1922 with the memorable phrase: "for those whose ears
are insufficiently developed, the perfect fourth is not divided into two
whole tones and a semitone, but rather as the sixth, seventh and eighth
harmonic partials." [De Lange, Daniel and Snelleman, J.F., "La Musique et les
instruments de musique dans les Indes Orientales neerlandaises," in
Lavignac, "Encyclopedie de la musique et dictionnaire du Conservatoire:
Histoire de la musique, premiere partie" (Paris, 1922), Vol. 5, pg. 3148.]
Such racism has become less fashionable over the last three quarters of a
century, but much of the debate over psychoacoustics and tuning takes place
in a cultural vacuum: only western music is considered, and psychoacoustic
arguments for or against this or that tuning are often made only in the
context of western equal temperament or just intonation tuning systems.
However, such ethnocentrism is slowly changing.
"One of the revelations of modern psychoacoustic and etnomusicological
reserach has been the extraordinary complexity of intonation as used by
Western and non-Western musicians." [Perlman, Marc, "American Gamelan in
the Garden of Eden: Intonation in a Cross-Cultural Encounter," Musical
Quarterly, 1995, pg. 532]
"There are...a number of musical cultures that apparently employ
approximately equally tempered 5- and 7-interval scales (i.e., 240 and 171
cent step-sizes, respectively) in which the fourths and fifths are
significantly mistuned from their natural values. Seven-interval scales are
usually associated with Southeast Asian cultures (Malm, 1967). For
example, Morton (1974) reports measurements (with a Stroboconn) of the
tuning of a Thai xylophone that "varied only + or - 5 cents" from an equally
tempered 7-interval tuning. (In ethnomusicological studies measurement
variability, if reported at all, is generally reported without definition.)
Haddon reported (1952) another example of a xylophone tuned in 171-cent
steps from the Chopi tribe in Uganda. The 240-cent step-size, 5-interval
scales are typically associated with the "gamelan" (tuned gongs and
xylophone-type instruments) orchestras of Java and Bali (e.g., Kunst, 1949).
However, measurements of gamelan tuning by Hood (1966) and McPhee (1966)
show extremely large variations, so much so that McPhee states: "Deviations
in what is considered the same scale are so large that
one might with reason state that there are as many scales as there
are gamelans." Another example of a 5-interval, 240-cent step tuning
(measured by a Stroboconn, "variations" of 15 cents) was reported by
Wachsmann (1950) for a Ugandan harp. Other examples of equally tempered scales
are often reported for pre-instrumental cultures...
For example, Boiles (1969) reports measurements (with a Stroboconn,
"+ or - 5 cents accuracy") of a South American Indian scale with equal
intervals of 175 cents, which results in a progressive octave stretch.
Ellis (1963), in extensive measurements in Australian
aboriginal pre-instrumental cultures, reports pitch distributions that
apparently follow arithmetic scales (i.e., equal separation in Hz).
Thus there seems to be a propensity for scales that do not utilize perfect
consonances and that are in many cases highly variable, in cultures that
either are pre-instrumental or whose main instruments are of the xylophone
type. Instruments of this type produce tones whose partials are largely
inharmonic (see Rossing, 1976) and whose pitches are often ambiguous (see
de Boer, 1976)." [Burns, E. M. and Ward, W. D., "Intervals, Scales and Tuning,"
in The Psychology of Music, 1982, ed. Diana Deutsch, pg. 258]
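The step sizes quoted follow from simple cents arithmetic, as the short
Python sketch below makes explicit; the 175-cent example reproduces the
progressive octave stretch Boiles reports.

# Equal-step scale arithmetic: 1200/5 and 1200/7 divide the 2:1 octave
# exactly, while seven steps of 175 cents overshoot it.
def equal_step(octave_cents, steps):
    return octave_cents / steps

print(f"5 equal steps: {equal_step(1200, 5):.1f} cents each")
print(f"7 equal steps: {equal_step(1200, 7):.1f} cents each")

step = 175.0
print(f"7 steps of {step:.0f} cents span {7 * step:.0f} cents "
      f"({7 * step - 1200:+.0f} cents relative to a 2:1 octave)")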
These empirical data would seem to indicate that of the three tunings
(equal tempered, just intonation, and non-just non-equal-tempered)
non-just non-equal-tempered and equal temperament are most widely
used by other cultures. This is a conclusion directly opposite to that
implied by the idea of small whole numbers as uniquely preferred by
the ear/brain system.
The next post will examine in detail the third theory of hearing, first
proposed by Fetis in 1843, and in later life espoused by Helmholtz: namely,
the view that the ear prefers a set of intervals determined by learning
and culture.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Thu, 12 Oct 1995 03:16 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id SAA00081; Wed, 11 Oct 1995 18:15:55 -0700
Date: Wed, 11 Oct 1995 18:15:55 -0700
Message-Id: <951012011313_71670.2576_HHB40-2@CompuServe.COM>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/12/1995 6:33:22 AM
From: mclaren
Subject: Tuning & psychoacoustics - post 18 of 25
---
So far little has been said about the third theory of hearing, first proposed
by Fetis in 1841 and supported by Helmholtz and Ellis, as well as by vast
amounts of psychoacoustic data from Ward, Dixon, Terhardt, et alii.
Fetis ascribed musical intervals and musical conventions to "education"
rather than to "nature," mathematics, or the structure of the ear.
Although few of the proponents of equal temperament or just intonation who
quote him will admit it, Helmholtz echoed this sentiment when he wrote
that the Western musical system "does not rest solely upon inalterable
natural laws, but is also, at least partly, the result of esthetical principles,
which have already changed, and will still further change, with the
progressive development of humanity." [Helmholtz, Hermann, "On the
Sensation of Tone," 2nd. Dover ed., 1863, pg. 235]
As previously noted, after exhaustive study of the musics of many cultures,
Alexander James Ellis concluded that "the Musical Scale is not one, not
`natural,' nor even founded necessarily on the laws of the constitution of
musical sound, so beautifully worked out by Helmholtz, but very diverse,
very artificial, and very capricious." [Ellis, A. J., "On the Musical Scales Of
Various Nations," Journal of the Royal Society of the Arts, Vol. 3, 1885, pg.
536]
"The evidence presented thus far implies that musical-interval categories
are learned rather than are the direct result of characteristics of the
auditory system. This evidence includes: (1) the variability found in
measured scales and intonation, even when possible contextual effects are
taken into account; (2) the intrasubject variability, large intersubject
variability, and consistent deviations from small-integer-ratio categories
found in category-scaling and adjustment experiments; (3) the absence of
small-integer-ratio singularities in frequency-ratio-JND functions and
absence of small-integer-ratio confusions in absolute-identification
experiments; and (4) the relative inability of musically untrained subjects to
perform musical interval identification or discrimination experiments."
[Burns, E.M., and Ward, W.D., "Intervals, Scales, and Tuning," in The Psychology
of Music, ed. Diana Deutsch, 1982, pg. 261]
Moreover "since stimulus uncertainty in `real world' perception is, in
general, high, it might be expected that categorical perception of musical
pitch would be the normal situation. This conclusion is supported by the
results of the various investigations of intonation in performance. Second,
the lack of evidence for the existence of natural categories for musical
intervals implies that individuals in a given culture learn the scales of their
culture from experience, not because of any innate propensity of the
auditory system for specific intervals." [Burns, E. M., and Ward, W.D.,
"Categorical Perception," Journ. Acoust. Soc. Am., Vol. 63, No. 2, February
1978, pg. 466.]
"To the casual glance, the range and variability figures of Table III may
suggest that the performers were not very proficient, which is not at all
true. Even when tuning an instrument to the *same* pitch as a standard,
the typical musician will show a standard deviation, in repeated settings, of
about 10 cents (Corso, 1954)." [Ward, W.D., "Musical Perception," in
"Foundations of Modern Auditory Theory," ed. J.V. Tobias, 1970, Vol. 1, pg.
418]
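To put Corso's 10-cent figure in concrete terms: a cent is 1/1200 of an octave, so converting between cents and frequency is a one-line calculation. The sketch below is my own illustration, not part of the cited study; the 440 Hz reference pitch is an assumption.

import math

def cents(f1, f2):
    """Size of the interval from f1 to f2 in cents (1200 cents = one octave)."""
    return 1200.0 * math.log2(f2 / f1)

def shift_by_cents(f, c):
    """Frequency reached by moving f upward by c cents."""
    return f * 2.0 ** (c / 1200.0)

ref = 440.0                      # assumed reference pitch (a'), in Hz
high = shift_by_cents(ref, 10)   # a typical 10-cent tuning deviation
print(round(high, 2))            # ~442.55 Hz: only about 2.5 Hz away from a'
print(round(cents(ref, high)))   # 10, recovering the deviation in cents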
Again, these psychoacoustic results provide equal support for all three
major systems of tuning--equal-tempered, just intonation, and non-just
non-equal-tempered--inasmuch as the evidence strongly suggests that once
the ear accepts a given set of musical intervals, those intervals are
quickly learned and categorized as "pure," "preferred," "natural," etc.
--mclaren


Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Thu, 12 Oct 1995 15:58 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id GAA08250; Thu, 12 Oct 1995 06:58:00 -0700
Date: Thu, 12 Oct 1995 06:58:00 -0700
Message-Id: <00997C43D9CD945A.4CBC@ezh.nl>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/13/1995 8:00:23 AM
From: mclaren
Subject: Tuning & psychoacoustics - post 19 of 25
---
We return to the parable of the three blind men and the elephant. Because of
the powerful prejudicial effect of mathematical and physical models on
our perceptions and preconceptions, it's vital to understand some of the
drawbacks of those mathematical models--in particular, the Fourier
transform, far and away the most popular analysis technique for dealing
with sounds.
As it turns out, the complexity of the ear's behavior is mirrored by the
complexities and limitations which beset the Fourier transform.
"The measurement of sound spectra is complicated by the fact that the
spectra of almost all sounds change both rapidly and drastically as time
goes by. This situation is worsened by the fact that the accuracy with
which we can measure a spectrum inherently decreases as we attempt to
measure it over smaller and smaller intervals of time.
"The spectral content of any instant during the temporal evolution of a
waveform does not even exist; for example, we could scarcely tell anything
at all about the frequency components of a digital signal by examining a
single sample! We can measure what happens to the spectrum only on the
average over a short interval of a sound--perhaps a millisecond or so. The
longer the interval, the more accurate our measurement of the average
spectral content during that interval, but the less we know of the variations
that occurred during that interval. Thus the problem of spectral
measurement may be seen to be one of finding the best compromise between
the opposing goals. Just how much accuracy is needed is still an open
question in the realm of musical psychoacoustics: in some cases our ears
seem to be more tolerant of approximations than in others. The historical
model of spectra as measured by Hermann Helmholtz (see References) is
clearly inadequate for believable resynthesis..." [Moore, F. R., "An
Introduction to the Mathematics of Digital Signal Processing," Part II,
Computer Music Journal, Vol. 2, No. 2, pg. 43]
Claims of "perfect reconstruction of the input signal" are often made for
Fourier transform analysis. These claims are true only insofar as the
continuous Fourier transform is involved, a mathematical operation demanding infinite amounts of data and infinite numbers of frequency
components.
Claims of "perfect reconstruction of the input signal" are *untrue* for the discrete short-time Fourier transform applied to real-world signals.
"The test of any analysis/synthesis system is how little it distorts the
signal. The phase vocoder, or short-term Fourier system, as it is also
called, is capable of zero distortion. (...) Consequently, what comes out of
each channel is a pure sinusoid. We can measure its frequency, phase, and
amplitude. Thus, we can recreate the signal exactly by producing a new
sinusoid of that frequency, phase, and amplitude.
"In the electronic music literature this is called additive synthesis.
Needless to say, when the assumption is violated, and the input signal is
something like white noise, the output of each channel is not a sinusoid and
cannot be represented with only phase, frequency, and magnitude data. You
must use a different representation. But for harmonic sounds in which the
pitch and amplitude are not changing, categorizing the signal by these three
numbers yields a system which is an identity." [Moorer, J.A., "A Conversation
with James A. Moorer," Roads, C. in "The Music Machine," 1989, pg. 14]
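Moorer's point--that a steady harmonic tone is captured compactly by a handful of sinusoids while noise is not--can be checked with a short NumPy sketch. This is my own illustration, not Moorer's code; the sample rate, frame length and the "ten strongest bins" cutoff are arbitrary assumptions.

import numpy as np

fs = 16000                          # assumed sample rate, Hz
n = 1024                            # one analysis frame
t = np.arange(n) / fs

# a steady harmonic tone (200 Hz fundamental and its first few harmonics) versus white noise
harmonic = sum(np.sin(2 * np.pi * 200.0 * k * t) / k for k in range(1, 6))
noise = np.random.randn(n)

def energy_in_top_bins(frame, k=10):
    """Fraction of a frame's spectral energy captured by its k largest FFT bins."""
    mags = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    return np.sort(mags)[-k:].sum() / mags.sum()

print(energy_in_top_bins(harmonic))  # most of the energy: a few sinusoids suffice
print(energy_in_top_bins(noise))     # a small fraction: no compact sinusoidal description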
In the real world all sounds contain noise. And not just one kind
of noise: many different kinds of noise are present. There is always some
white noise; there is always some 1/f noise, represented by jitter in the
fundamental frequency; there is always a stochastic component in the
individual harmonic envelopes; and there is always some band-limited noise,
as for instance from the scrape of a violin bow or the background breath-
noise in a flute. Thus the claim of "perfect reconstruction of the input
signal" is untrue for real-world Fourier analysis applied to real-world
sounds. The exact amount of distortion introduced and the precise
amount of data lost by doing a real-world short-time Fourier analysis of a
real-world sound varies with the sound.
In some cases, the distortion and data loss is negligible, while in other
cases the short-time Fourier analysis renders the sound unrecognizable.
So why do many people continue to act as though the Fourier perspective
is the only valid one for dealing with acoustics?
Because engineering courses are saturated with the Fourier perspective, it
is often assumed that frequency/time duality (or in optics,
contrast/periodicity) is the only possible way of looking at any physical
phenomenon--particularly acoustics.
This is often true, but by no means always.
In fact multiple methods of sound analysis have long been suggested: in
1947 Dennis Gabor suggested an alternative to the Fourier method of analysis
of sounds in his paper "Acoustical Quanta and the Theory of Hearing." [Gabor,
D., "Nature," Vol. 159, 1947, pg. 303.]
Both Morlet and Daubechies wavelets have offered new paradigms for
analyzing sounds: see Grossmann, A. and Kronland-Martinet, R.,
"Time-and-scale representation obtained through continuous wavelet
transforms," Signal Processing IV: Theories and Applications, Amsterdam:
Elsevier Science Publishers, 1988.
The Fourier transform viewpoint is a linear parametric method of analysis.
Thus it is strictly limited by the assumptions inherent in parametrized
analysis, and by the assumption that input functions will obey the
superposition principle.
Other non-linear non-parametric models for signal processing exist. See
Maragos, P., "Slope Transforms: Theory and Application to Nonlinear Signal
Processing," IEEE Trans.Sig. Proc., Vol. 43, No. 4, April 1995, pp. 864-877.
Each of these mathematical analysis methods has its own unique drawbacks:
"Even though the Morlet wavelet analysis seems to compact information in a
way that is well suited to the characteristics of hearing, it does not work
as easily as the use of Gabor grains for altering independently frequency and
speed. Hence the value of this or that method is contingent on the music
purpose. Indeed to realize various types of intimate sonic modification, one
may have to go out of the general framework and resort to more specific
methods." [Risset, J.C., "Timbre Analysis by Synthesis," in "The Music
Machine," 1992, pg. 37]
Frequency-based mathematical models of analysis are not the only kind
possible: time-based autocorrelation methods of pitch detection have long
been implemented in computers. Both the Average Magnitude Difference
Function (a time-based relative of autocorrelation) and the zero-crossing
detection function (another time-based period-detection method) are
typically used to extract a fundamental-frequency track from a digitized
waveform prior to employing a Fourier transform to dissect the sound into
sinusoids.
Thus, ironically, the best-known frequency-domain algorithm for analyzing
musical sounds is crucially dependent on a *prior time-domain
autocorrelation algorithm* when we want to apply it in the real
world. Given this incestuous connection between frequency- and time-
domain methods of analysis in the real world, it is not clear why one
viewpoint (the Fourier transform) ought to dominate the discourse of signal
processing.
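As a concrete illustration of the time-domain half of this marriage, here is a minimal Average Magnitude Difference Function pitch estimator. It is a generic textbook-style sketch, not code from any system cited above; the search bounds and test tone are assumptions, and a production pitch tracker would need safeguards against octave errors.

import numpy as np

def amdf_pitch(frame, fs, fmin=100.0, fmax=500.0):
    """Estimate a frame's fundamental with the Average Magnitude Difference
    Function: the lag at which the signal best matches a delayed copy of itself."""
    lags = np.arange(int(fs / fmax), int(fs / fmin))
    amdf = np.array([np.mean(np.abs(frame[lag:] - frame[:-lag])) for lag in lags])
    period = lags[np.argmin(amdf)]        # deepest AMDF valley = period in samples
    return fs / period

fs = 16000
t = np.arange(2048) / fs
tone = np.sin(2 * np.pi * 220.0 * t) + 0.5 * np.sin(2 * np.pi * 440.0 * t)
print(amdf_pitch(tone, fs))               # roughly 220 Hz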
The progressive discovery of the many limitations of the Fourier transform
as a tool for analyzing real-world sounds has led to a search for alternative
methods of analysis: Jean-Claude Risset, the pioneer of analysis-by-
synthesis in 1965 on mainframe computers at Bell Labs, states: "From these
studies I draw two conclusions: first, there does not seem to be any general
and optimal paradigm to either analyze or synthesize any type of sound. One
has to scrutinize the structure of the sound--quasiperiodic, sum of
inharmonic components, noisy, quickly or slowly evolving--and also
investigate to find out which features of sound are relevant to the ear."
[Risset, Jean-Claude, "Timbre Analysis by Synthesis," in "The Psychology of
Music," ed. Diana Deutsch, 1982, pg. 18]
Because the Fourier mindset dominates and shapes so much
of the discourse of Western music, it's important to point
out *all* of its limitations and inaccuracies. Thus the
next post will continue the discussion of the many conditions
under which the Fourier viewpoint is inappropriate for
real music in the real world.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Fri, 13 Oct 1995 17:07 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id IAA05441; Fri, 13 Oct 1995 08:07:19 -0700
Date: Fri, 13 Oct 1995 08:07:19 -0700
Message-Id:
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/14/1995 7:39:20 AM
From: mclaren
Subject: Tuning & psychoacoustics - post 20 of 25
---
As is now evident, the Fourier transform is at best poorly suited
to the analysis of real-world sounds.
Noise, inharmonic partials and radical phase changes ensure that
many real-world instrument tones are poorly modelled as a
sum of pure sinusoids near-constant in magnitude and phase.
This is complicated by the fact that "...much of the characteristic sound of
an instrument is in its transient regions, such as the attack portion of its
tone." [Moorer, J. A., "Signal Processing Aspects of Computer Music - A
Survey," Computer Music Journal, Vol. 2, No. 2, pg. 7]
To date, the aperiodic and chaotic attack portions of instrumental notes
have consistently resisted analysis, as have the residual stochastic
portions of the sound which cannot be analyzed successfully by current
techniques: a variety of work-arounds are generally used to "smooth-over"
the radical phase and magnitude discontinuities generated by Fourier
analysis of the attack transients, or to parametrize the chaotic-attractor
"residual" and mimic it with some kind of bandlimited noise generator added
to the resynthesized signal.
"Each...spectrogram tells how amplitude and phase vary as a function of
frequency. This process of analysis and resynthesis has been called a phase
vocoder. Serra made use of the process to his ends, taking successive
spectra at intervals of around 10 milliseconds.
"Such successive spectra do not in themselves give a deep insight into
musical sounds. Serra's innovation was to use successive spectra in
dividing the signal into two parts--a deterministic, or predictable, part,
and a stochastic, or unpredictable, noisy part. The deterministic part Serra
took to be clear peaks which in several successive spectra change just a
little in amplitude and phase. This part of the spectrum Serra resynthesized
by generating the individual sinusoidal components whose amplitudes,
frequencies, and phases changed with time in the fashion indicated by the
successive spectra. (...) [Serra] replaced this [non-deterministic] part of the
spectrum with a noise that had roughly the same overall spectrum as his
stochastic part but that didn't match it in waveform.
"Serra tested this division of the signal into a deterministic and stochastic
part...by listening separately to the deterministic and stochastic parts, and
then adding them and listening to their sum. A piano sound reconstructed
from the deterministic spectra alone didn't sound like a piano. With the
stochastic or noise portions added, it sounded just like a piano. The same
was true of a guitar, a flute, a drum, even the human voice." [Pierce, J.R.,
"The Science of Musical Sound," 2nd ed., 1992, pg. 106]
Moreover, Fourier techniques work even to a rough approximation only with
a small class of harmonic-series instruments (Western brass instruments,
double reeds and strings, the harp).
"Most percussion instruments (drums, bells) are inharmonic. This means that
to describe them with sinusoids often requires a large number of such.
There are some synthesis techniques for creating what often turn out to be
quite convincing drum-like or gong-like sounds, but to date there are few
analysis techniques that can be used with inharmonic sounds." [Moorer, J. A.,
"Signal Processing Aspects of Computer Music - A Survey," Computer Music
Journal, Vol. 2, No. 2, pg. 7]
Bearing in mind that this puts all of Javanese and Balinese and Thai and most
African and South American music off-limits to Fourier analysis (because
these cultures use mainly inharmonic instruments), the value of
Fourier analysis becomes questionable in the context of world music.
The net result is that conventional FFT analysis *sometimes* tells us
*something* about what is going on in *portions* of *a few notes* played by *a few instruments.*
However, the Fourier transform is far from the universal mathematical
Swiss Army Knife it has been touted as being.
For example, suppose we try to increase frequency resolution by taking an
FFT of tens of thousands of points--we have only 2 ways of doing this. [1]
Add tens of thousands of zeroes padded at the end of each wavecycle, which
merely refines the accuracy with which each bin's frequency is specified but does
not tell us anything about what's going on between the frequency bins
(where most of the interesting and complex behavior of real-world
instruments takes place); or [2] we can extend our FFT over multiple
wavecycles, which does give us some information about what's going on
between frequency bins because the fundamental of the wavecycle is apt to
change over the course of several periods--but this dodge lumps all the
spectral changes in 2, 3, or more wavecycles into a single analysis frame
and thus "smears out" the spectral changes of the sound in time.
If this sounds like "Catch-22," guess what?
It is.
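The dilemma is easy to demonstrate numerically. In the sketch below (my own toy example; the 400 Hz test tone, sample rate and window lengths are arbitrary), zero-padding samples the spectrum more densely but leaves the width of the spectral main lobe--the actual frequency resolution--untouched, while a longer stretch of signal genuinely narrows it.

import numpy as np

fs = 8000.0

def mainlobe_width_hz(duration, pad=16):
    """Width (in Hz, at half the peak magnitude) of the spectral main lobe of a
    400 Hz sinusoid observed for `duration` seconds, with `pad`-times
    zero-padding so the underlying spectrum is sampled densely."""
    n = int(duration * fs)
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * 400.0 * t)
    spec = np.abs(np.fft.rfft(x, n=pad * n))
    freqs = np.fft.rfftfreq(pad * n, d=1.0 / fs)
    above = freqs[spec > 0.5 * spec.max()]
    return above.max() - above.min()

print(mainlobe_width_hz(0.05))           # tens of Hz: 50 ms of data cannot separate close partials
print(mainlobe_width_hz(0.05, pad=64))   # same width: more zero-padding does not help
print(mainlobe_width_hz(0.50))           # a few Hz: ten times the data, ten times the resolution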
Heisenberg's Uncertainty Principle is actually an outgrowth of a basic
characteristic of wave motion: to wit, you can accurately measure
*either* the frequency content *or* the moment-to-moment time behavior of a
changing waveform, but you cannot measure *both* precisely at the *same
time.* When you increase the precision with which you pin down the
wavetrain's behavior in time, you consequently decrease the precision with
which you measure the wavetrain's frequency.
This has profoundly important consequences for the FFT.
[1] Increasing your time resolution (that is, making the FFT snapshots closer
together in time) decreases your frequency resolution (because you must
therefore take smaller FFTs over those smaller time-lengths).
[2] Increasing your frequency resolution (that is, taking the FFT snapshot
over a bunch of different spectrally-evolving wavecycles) decreases your
time resolution (because all the spectral changes in each frequency line are
lumped into a single frequency and averaged over the number of wavecycles
you're looking at).
[3] Increasing the number of phase points, to give you a more precise
measurement of the exact amount by which each partial is detuned from the
others, also increases the number of frequency bins--and to do this you
must extend the FFT over a longer time-period, which in turn means you're
lumping your phase changes together and averaging them out, which entirely
defeats your purpose.
[4] Decreasing the number of frequency points, to narrow down the time
window over which you take the FFT "snapshot," also decreases the number
of phase points--which divides the fundamental frequency into a smaller
number of divisions and defeats your purpose by making the measured
frequency changes coarser in the time domain.
Because the discrete Fourier transform is cyclic and imposes an infinite
periodicity (both supersonic and subsonic) on your spectrum, you must use a
limited fixed sampling rate and band-limit your input samples to avoid
aliasing (that is, to keep the lowest supersonic and the highest subsonic
frequency bins from bleeding into your audible frequency bins and
contaminating them with inharmonic-sounding spurious garbage).
But once you fix your sampling rate, you've thrown out all information
above half that rate. This means that there's no way to reconstruct any
detail in the waveform faster than that band limit, because it's gone. You've dumped it
out. You've thrown out the baby with the bathwater. The price you pay for
perfect reconstruction of a signal is that the signal that spews out of your
mathematical analysis algorithm is sometimes quite different from the real
analog signal that came in.
"Sometimes" because if there's very little noise and the partials are almost
perfectly harmonic, you get good results with the Fourier transform even in
its discrete version.
Alas, most sounds are *not* noise-free and perfectly harmonic.
"If you have a hammer, everything in the world looks like a nail." This is
nowhere more true than of the Fourier transform.
Many writers have begun articles on tuning or acoustics with statements
along the lines of: "Musical sounds are made up of sinusoidal frequency and
phase components..."
No!
Wrong!
Completely false!
Classic error.
These people have mistaken the *map* for the *territory.*
They have confused the *mathematical model* of the physical
phenomenon with the *physical phenomenon* itself.
*Sounds* are displacements of air molecules.
*Sinusoids* are ideal mathematical entities infinite in temporal extent
and perfectly periodic.
Sometimes one or another mathematical model works well in analyzing
acoustical phenomena; other times they all work well; sometimes *none*
yields useful results.
Different mathematical techniques and different conceptual models are
required for different acoustical phenomena, as Risset points out.
There is no "one size fits all." Yet this this is exactly what the Fourier
tykes would have us believe.
The universe exhibits what the mathematicians call "the inexhaustibility of
the real." Goedel's incompleteness theorem of 1931 points in the same
direction: our mathematical descriptions are ultimately less complete than
the things they describe.
Or, to put it another way, there are always true but unprovable
propositions.
Clearly proponents of this or that tuning system who write circularly-
reasoned arguments a la "that's the beauty of mathematics--it has an
inescapable logic" haven't collided with the real world in the form of a
sound that turns to junk when you Fourier analyze it.
For example, breathy vocal sounds. Or flute multiphonics, or a cymbal clash,
or a gamelan bar note.
It's worth remembering that Fourier never came up with his transform to solve
the problem of frequency analysis. He used it as a clever dodge to solve the
problem of heat conduction in a metal bar.
It worked well on that problem, but it has since been greatly extended--in
some cases, overextended.
The wavelet transform, a probably superior mathematical method for
analyzing acoustic phenomena, dates from only 1987.
Since then we've had very few useful & powerful transforms.
Mathematical progress has been slow. The Walsh Transform is no big help--
it is acoustically "brittle," and if too few Walsh components are used in
reconstructing a sound, the output is intolerably buzzy and distorted-
sounding. Bart Kosko's fuzzy Kalman filter promises some insight into
acoustic transformations but the amount of ground gained has been
small...and the rate of progress slow.
In the end, the real world is very *very* VERY complex.
Our linear parametric mathematical models apply with accuracy only
to a tiny class of physical phenomena.
It has become increasingly (distressingly!) clear over the last few years
that sounds are not "infinite harmonic wavetrains containing perfectly
harmonic overtones with a few stochastic ergodic noise components mixed
in."
Rather, real computer analysis of actual instrument timbres shows with
brutal clarity that real-world sounds run the entire gamut from chaotic
strange attractors to pure noise to semi-noise/semi-pitched to strongly
pitched narrowband noise to mostly harmonic sounds with a rock-steady
fundamental.
No one mathematical analysis technique is adequate to model all these
ranges of behavior. And when you realize that something as simple as the
flute can exhibit the entire range (overblown multiphonics, semi-pitched
"breathy" whispery notes, flutter-tongued notes, strongly pitched notes in
the lower registers with a steady fundamental) you start to realize that
the FFT is useful for only a very limited class of musical sounds. Thus it is
hardly surprising that the to-date largely FFT-obsessed model of the ear as
frequency analyzer has had extremely limited success in explaining real-
world musical preferences, the effects of real-world intervals on listeners,
and the real-world propensity of composers, performers and audiences to
prefer intervals not well defined by the integer eigenvalue vocabulary of
the Fourier transform.
--mclaren


Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Sun, 15 Oct 1995 19:43 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id KAA28157; Sun, 15 Oct 1995 10:42:44 -0700
Date: Sun, 15 Oct 1995 10:42:44 -0700
Message-Id: <951015174059_71670.2576_HHB28-1@CompuServe.COM>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/16/1995 8:42:42 AM
From: mclaren
Subject: Tuning and psychoacoustics - post 21 of 25
---
Throughout this series of posts, we have seen that the ear/brain system
is not simple.
As is now clear, the behavior of the auditory system when presented
with complex tones is different than that predicted by Ohm/Helmholtz:
"Helmholtz's working hypothesis has been put aside by later
investigators, both those who worked in music and those who worked
in psychoacoustics. Several reasons for this can be given. First,
before the introduction of electroacoustic means of tone production
and control in the 1920s, it was not possible to carry out the
necessary psychoacoustical experiments, while Helmholtz's observations
proved insufficient in many ways. Second, it turned out that musical
theory has its own rules apart from the perceptual relevance of the
characteristics of the sounds it makes." [Rasch, R. A., and Plomp, R.,
"The Perception of Musical Tones," in "The Psychology of Music," ed.
Diana Deutsch, 1982, pg. 21]
But the behavior of the ear/brain system is also different from that
predicted by Seebeck, Stumpf and Schouten:
"It sounds obvious if we say that we hear a note a' if the fundamental
of the frequency is 440 c.p.s. But what happens if we remove this
fundamental by electrical means, leaving on ly the harmonics with
frequencies of 880, 1320, 1760 cps, etc? Or if we take away the
fundamental 440 cps and the second harmonic 880 cps?
We learn from experiments that the perceived pitch level remains the
same: a'. One may take away many of the lower harmonics without
altering this. If this `mutilated' note is interrupted for only an
interval of a second, the sensation is completely altered. Instead of
the `residual tone' on a' we now hear another pitch which lies
approximately in the region of the strongest remaining harmonics and
is called `formant pitch.'" [Werner Meyer-Eppler, "Statistic and
Psychologic Problems of Sound," Die Reihe, Vol. 1, No.1, 1955, pg. 55
(English edition, 1957)]
This is the simplest and most compelling example of the contradictory
behavior of the ear/brain system.
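The stimulus Meyer-Eppler describes is easy to generate and audition for oneself. The sketch below is my own (sample rate, duration and the particular harmonics retained are assumptions): it writes a "mutilated" a' containing only harmonics 3 through 8 to a WAV file, with the 440 Hz fundamental and the 880 Hz second harmonic never generated at all.

import numpy as np
from scipy.io import wavfile

fs = 44100
t = np.arange(2 * fs) / fs                       # two seconds
f0 = 440.0                                       # a'

# harmonics 3..8 only: the fundamental and second harmonic are simply absent,
# yet listeners still report the pitch of a'
tone = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(3, 9))
tone = tone / np.max(np.abs(tone))               # normalize to avoid clipping

wavfile.write("residue_pitch.wav", fs, (tone * 32767).astype(np.int16))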
The ability of the ear to extract an unambiguous fundamental frequency
from a complex tone argues strongly in favor of the Ohm/Helmholtz
hypothesis of the ear as a Fourier analyzer; but the ability of the
ear to still hear a "residual" fundamental even after the fundamental
and lower harmonics have been removed electronically cannot be
explained by the ear-as-Fourier-analyzer model of hearing. Instead,
"beats of mistuned consonances and periodicity pitch can be perceived
even if each component tone is fed dichotically to a different ear.
In such a case, of course, the complete vibration pattern never
arises--what must arise is a superposition or interaction of the
neural signals from both cochleas after they have been combined at
the medullar or midbrain levels. (...) The fundamental neural message
is given by the rate and the distribution in time with which
individual impulses are fired along the axon." [Roederer, Juan,
"The Physics and Psychophysics of Music," 1973, pg. 45]
Thus the ability to hear such residual pitch argues against the
Ohm/Helmholtz Fourier analyzer model of hearing, and for the
Seebeck/Stumpf model of the ear as time-domain autocorrelator in
which "the actual time distribution of [auditory nerve] impulses
codes the information on repetition rate or periodicity pitch (see
below)." [Roederer, Juan, "The Physics and Psychophysics of
Music," 1973, pg. 45]
However, the fact that after a brief interruption the ear hears
exactly the same tone from which fundamental and lower harmonics
have been electronically removed as having an entirely different
pitch argues strongly against BOTH the Ohm/Helmholtz frequency-domain
and the Seebeck/Stumpf time-domain model of the ear, and instead
supports the Fetis/Burns/Ward model of the ear as an adaptive
system molded by learned responses.
Thus in one simple experiment we have compelling evidence both
for and against all 3 major theories of hearing.
Other compelling evidence for the "contextual" behaviour of the
ear/brain system abounds.
"...it is to be emphasized that sound elements which are juxtaposed
in time can have the effect that identical physical vibration
procedures give rise to totally different sensations. The phenomenon
has been particularly observed in the case of synthetic explosive
sounds such as `p',`t', `k' which may be perceived in a totally
different manner, depending on the vowels which are juxtaposed
to them (3). To explain this one cannot attribute it to masking
which has already been known for a long time because the
influencing is effected by the following vowel (regressive
dissimilation) as well as by the preceding." [Werner Meyer-Eppler, "
Statistic and Psychologic Problems of Sound," Die Reihe, Vol. 1,
No.1, 1955, pg. 55 (English edition, 1957)]
Further evidence of complex phenomena possibly produced by
interaction between the ear's frequency-domain and its
time-domain processing functions was brought forward by
Strong and Clark. "...in order to evaluate the relative significance
of spectral and temporal envelopes, [they] resorted to an
interesting process: they exchanged the spectral and temporal
envelopes among the wind instruments and asked listeners to
attempt to identify these hybrid tones. The results indicated
that the spectral envelope was dominant if it existed in a unique
way (as in the oboe, clarinet, bassoon, tuba and trumpet); otherwise
(as in the flute, trombone, and French horn), the temporal envelope
was at least as important." [Risset, Jean-Claude, "Exploration of
Timbre by Analysis and Synthesis," in "The Psychology of Music,"
ed. Diana Deutsch, 1982, pg. 36]
Some results are simply inexplicable by any of the above models:
"Doughty and Garner (1948)...concluded that pitch was unchanging
for tones of 25 msec and longer, but that 12-msec and 6-msec
tones have a lower pitch. However, Boomsliter et al., (1964)
emphasize that the transition from "click" to "tone" depends on
the intensity. Swigart (1964) has reported a perhaps related
phenomenon for repeated short bursts of tone. If one presents
successive 8-msec bursts of 1000-Hz tone with 1-msec pauses
between (i.e., if one cuts out every ninth cycle), the pitch is
significantly lower than that of a continuous 1000-Hz tone.
Just why, however, is still unclear." [Ward, W.D., "Musical
Perception," in "Foundations of Modern Auditory Theory," ed. J.V.
Tobias, Vol. 1, pg. 429.]
"An aspect of pitch perception that is still regarded as
somewhat mysterious despite a goodly amount of experimentation
is "aboslute" (or "perfect") pitch." [Ward, W.D., "Musical Perception,"
in "Foundations of Modern Auditory Theory," ed. J.V. Tobias, Vol. 1,
pg. 429.]
"In 1973 David M. Green published some interesting results on
temporal acuity. He measured the ear's ability to discriminate
between two signals that have different waveforms but the same
energy spectrum. An example of such signals is any short waveform
and the same waveform reversed in time, such as those in figure
10-5. Here Part A is a sound of decreasing frequency, Part B is a
sound of increasing frequency. Green found that the ear can tell the
difference between two such waveforms if their duration is greater
than 2 milliseconds." [Pierce, J.R., "The Science of Musical Sound,"
2nd ed., 1992, pg. 149]
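Green's stimuli are easy to reproduce in outline: a brief glide and its time reversal have, sample for sample, the same energy spectrum. The sketch below (my own construction; the glide parameters are arbitrary) verifies that identity numerically.

import numpy as np

fs = 44100
dur = 0.004                                   # a few milliseconds, as in Green's stimuli
t = np.arange(int(fs * dur)) / fs

# a short downward frequency glide, and the same waveform reversed in time
down = np.sin(2 * np.pi * (3000.0 - 200000.0 * t) * t)
up = down[::-1]

same = np.allclose(np.abs(np.fft.rfft(down)), np.abs(np.fft.rfft(up)))
print(same)     # True: identical magnitude spectra, yet the ear can tell them apart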
"On the hypothesis that critical band filters can be identified with
auditory nerve filters, the model of Zwicker and Scharf was tested by
Pickles (1983). (...) The results agreed with the psychophysical data, in
that the summed activity increased with stimulus bandwidth, for wider
stimulus bandwidths (Fig. 9.13 B). However, there was no clear sign
of a flat portion in the function at narrow bandwidths. The reason for
this is not known..." [Pickles, James O., "An Introduction to the
Physiology of Hearing," Academic Press, 2nd ed., 1988, pg. 283]
No model of the ear/brain system convincingly explains any of
the above results, or for that matter one of the best-known
quirks in musical perception: perfect pitch. Musical aesthetics is
yet another abyss which has swallowed many a psychoacoustic
researcher.
"Why some vibration patterns appear more "beautiful" than others
is not known. A great deal of research (good and bad) has been
attempted, for instance, to find out what physical characteristics
make a Stradivarius violin a great instrument. Many of these
characteristics are dynamic in character, and most of them seem
more related to the major or minor facility with which the player
can control the wanted tone "color" (timbre), than to a "passive"
effect on a listener. To a large extent the impression on the listener
is based on learned experience..." [Roederer, Juan, "The Physics and
Psychophysics of Music," 1973, pg. 134]
But by far the most striking gaps in current knowledge of the
ear/brain system involve the question of how the human auditory system
performs 3-dimensional sound localization.
A Head-Related Transfer Function can be measured empirically for
each individual by means of microphones placed in the ear canals
and an extremely high-speed computing engine, but to date no one
has offered a general mathematical model which predicts the HRTF,
or the location of a specific sound using that mathematically-
modelled HRTF.
The best example of this complete gap in our psychoacoustic
knowledge is the inadequacy of current stereo speakers. Even a
6-year-old child easily detects the difference between a recording
reproduced from digital media on high-quality stereo speakers, and
a live performance: but to date no one has succeeded in
formulating a mathematical model of hearing which details the
difference in hard numbers. [See Moeller, H. K., Soerenson, M. F.,
Hammershoei, D., and Jensen, C. B, "Head Related Transfer Functions
of Human Subjects," J. Audio Eng. Soc., 43(5), 1995 May, pp.
300-321; also see Appleton, J., "Machine Songs III: Music In the
Service of Science-- Science in the Service of Music," Computer
Music Journal, 16(3), 1992, pp. 17-21]
Clearly, there remain many unexplained aspects in the behavior of
the ear/brain system.
The next post examines the mass of evidence gleaned from
the many psychoacoustic results confirmed and supported
by a wealth of modern research. Although the modern psychoacoustic
data is complex, it does support some conclusions. These will
be given in the next post, after which the various biases and
prejudices of the psychoacoustic researchers themselves will
be examined...with an eye to determining how much or how little
their prejudices affected the conclusions each major researcher
drew from hi/r research.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Tue, 17 Oct 1995 01:35 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id QAA13399; Mon, 16 Oct 1995 16:34:38 -0700
Date: Mon, 16 Oct 1995 16:34:38 -0700
Message-Id: <199510162332.AA046446328@athena.ptp.hp.com>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/17/1995 8:40:51 AM
From: mclaren
Subject: Tuning & psychoacoustics - post 22 of 25

---
In the course of this series of posts much experimental evidence has been
reviewed. Forum subscribers open-minded enough to have read the original
references now understand the degree of contradiction and complexity
which attends the human auditory system.
As we've seen, the evidence offered by psychoacoustic research is not
simple and straightforward. Some results conflict, and others support
none of the three accepted models of human hearing.
There is, however, a preponderance of evidence to support a number of basic
conclusions about the ear/brain system:
The fact that different aspects of the ear/brain's sound processing function
are evoked by different experiments opens up the likelihood that at least 3
different ear/brain systems operate to process sound. Wever (1964) was
the first to suggest that more than one mechanism exists to process sound;
von Bekesy strongly implies this conclusion in his 1966 paper. Plomp (1967)
strongly disagrees, but gives no basis for his objection. Pickles (1989)
points out that "the best support for [this] view is the rather negative
one that the evidence in favour of either of the other two [place and
periodicity] theories is not conclusive, and this may be a function of
the quality of the evidence available, rather than of the actual operation
of the auditory system." [Pickles, James O., "An Introduction to the
Physiology of Hearing," Academic Press, 2nd ed., 1988, pg. 277]
The leading candidates for different mechanisms for detecting pitch and
processing sounds are: [1] the Fourier analyzer action of the
basilar membrane; [2] the encoding of pitch and spectral content by
repetition rate of neurons firing along the auditory nerve; [3] the
combination of received neural impulses in medullar and higher brain
areas and the consequent active feedback pathway between the Sylvian
fissure, the superior medial nucleus, and the different classes of neurons
in the primary, secondary and third- and fourth-order nerve fibers of
the auditory nerve.
What does this imply?
First, the role of learning and the possible conflict twixt periodicity and
Fourier analysis make it clear that "the pitch of musical sounds is not
directly proportional to the logarithm of the frequency and is probably
complexly conditioned." [Corso, J.F. "Scale Position and Performed Musical
Octaves," Journal of Psychology, Vol. 37, 1954] This renders suspect tuning
theories which ascribe absolute and invariant qualities to this or that
scale degree or interval. Instead, pitch and intervallic quality appear
to be strongly influenced by musical context, as well as by the overtone
content of the intervals.
This conclusion would tend to support all three tunings equally, depending
on musical context and learning: just intonation, equal temperament, and
non-just non-equal-tempered tunings are all equally supported by the
brain's learned sound processing capacities.
With instruments using strictly harmonic spectra, this conclusion weakly
supports the use of just intonation tuning--weakly, because of the
important role of learned response in the ear/brain system. For
instruments which do not use strictly harmonic spectra, non-just non-
equal-tempered systems are weakly supported: this conclusion does not
support use of equal temperament except insofar as education and
acculturation can sufficiently indoctrinate listeners into accepting that
tuning.
Second, from Plomp and Levelt's and Kameoka and Kuriyagawa's work it is
clear that the roughness or sharpness of musical intervals is largely
determined by the proximity of individual overtones within the critical band for
that frequency range.
It is also clear from Mathews', Pierce's, Geary's, Sethares' and Dashow's
work that timbres can be matched to tunings using digital technology so as
to precisely control the degree of audible roughness or smoothness within
the intervals of the scale.
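A sketch of how such a roughness calculation typically goes is given below. It follows the parametric fit of the Plomp-Levelt curve popularized by Sethares; the constants are quoted from memory and should be treated as an assumption, as should the middle-C test case and the use of the amplitude product for weighting.

import numpy as np

def pair_roughness(f1, f2, a1, a2):
    """Roughness contributed by one pair of partials (Plomp-Levelt-style curve,
    Sethares-style parameterization; the constants are an assumption here)."""
    s = 0.24 / (0.021 * min(f1, f2) + 19.0)
    d = abs(f2 - f1)
    return a1 * a2 * (np.exp(-3.5 * s * d) - np.exp(-5.75 * s * d))

def dissonance(partials1, partials2):
    """Total roughness between every partial of one tone and every partial of another."""
    return sum(pair_roughness(f1, f2, a1, a2)
               for f1, a1 in partials1 for f2, a2 in partials2)

def timbre(f0, npartials=6):
    """A harmonic timbre: partials at k*f0 with 1/k amplitudes."""
    return [(f0 * k, 1.0 / k) for k in range(1, npartials + 1)]

c = timbre(261.6)                                     # roughly middle C
print(dissonance(c, timbre(261.6 * 5.0 / 4.0)))       # just major third above
print(dissonance(c, timbre(261.6 * 2.0 ** (4 / 12)))) # equal-tempered major third above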
However, it is less clear from the psychoacoustic evidence that audible
roughness and sharpness of musical intervals equates with "consonance" and
"dissonance," much less with the more sophisticated quality of
"concordance" and "discordance" put forward by Easley Blackwood.
Much of the music-theory literature on consonance and dissonance merely
redefines those terms to favor the author's preferred tuning. By
redefining consonance as the absence of "beats," just intonation emerges as
the preferred tuning; by redefining consonance as the absence of
"roughness," the periodicity theory is favored, with the proviso that the
partials of the tones be warped to fit the tuning (so that all the partials
share sub- and super-multiples of the same periodicity); and if consonance
is redefined as "learned preference," or "experimental results of interval
preference," non-just non-equal-tempered tuning emerges as the "best"
tuning, based on the empirical evidence of preference for stretched
non-just non-equal-tempered intervals.
A good example of this process can be found in the following quote from
Plomp:
"In conclusion, Helmholtz's theory, stating that the degree of dissonance is
determined by the roughness of rapid beats, is confirmed. It appears that
maximal and minimal roughness are related to critical bandwidth..." [Plomp,
"Experiments on the Tone Sensation," 1967, pg. 58] By discussing only
harmonic tones and by defining dissonance as "the roughness of rapid beats,"
just intonation emerges as the de facto winner.
A different set of assumptions produces an entirely different conclusion.
For example, Risset's composition Inharmonique is based on inharmonic
tones which also exhibit a lack of "roughness of rapid beats." Thus Risset's
harmonic practices would appear to be equally acceptable according to
Plomp's definitions, yet Risset's composition does not employ small-integer
ratios--but the compositions still exhibits marked examples of dissonance
and consonance.
Moreover, musical intervals are often not heard as isolated units: "there is
considerable evidence that melodies are perceived as *gestalts* or
patterns, rather than as a succession of individual intervals, and that
interval magnitude is only a small factor in the total percept." [Burns,
E.M., and Ward, W.D., "Intervals, Scales and Tuning," in "The Psychology
of Music," ed. Diana Deutsch, 1982, pg. 264]
As Plomp points out, "A clear relationship exists between these data,
justifying the conclusion that consonance is closely related to the absence
of (rapid) beats, as in Helmholtz's theory. The critical-bandwidth curve
fits the data rather well." [Plomp, "Experiments In the Tone Sensation,"
1967, pg. 58]
Thus, while the psychoacoustic evidence on "roughness" and "smoothness" of
musical intervals is clear, the musical implications are less
straightforward. As von Bekesy points out, "...linearity can be assumed of
the mechanical part of the [auditory] stimulation; but from a physiological
point of view, the question of linearity in the nervous system is still
open to speculation." [von Bekesy, G., "Hearing Theories and Complex
Sounds," Journ. Acoust. Soc. Am., Vol. 35, No. 4, 1963, pg. 588]
Although "beats" are castigated by one group of theorists and used as the
justification for one set of tuning theories, strong evidence exists that a
beat rate of 6 to 7 Hz adds to the perceived musicality of a performance.
This is true of both non-Western and Western music: "Even a cursory
acquaintance with the sound of a Balinese gamelan uncovers some puzzling
aspects of musical timbre. The beating complex that is the result of all
the beats produced in the gamelan, and especially dependent upon the beats
of paired metallophones, resembles in its effect the quality of a string,
woodwind or brass section in a Western orchestra. (...) It appears that
some type of pulsation at rates between 6 and 7 times per second -- and
slightly irregular -- is musically desirable, both in Bali and in the West,
that the effects of beats and the effects of vibrato are similar so far as
the quality of richness is concerned, and that the unification of the
sound (section sound) can be accomplished with either technique." [Erickson,
R., "Timbre and the Tuning of the Balinese Gamelan," Soundings, 1984,
pg. 100]
In fact the argument for this or that tuning according to beat rate is not
supported by the psychoacoustic data, except insofar as the data strongly
support a universal preference for low-level beats in the 6-7 Hz region.
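To connect Erickson's 6-to-7 Hz figure with tuning numbers: the beat rate between two nominally unison tones is simply their frequency difference, so a given detuning in cents produces different beat rates in different registers. A minimal sketch (my own arithmetic; the example frequencies and detunings are assumptions):

def beat_rate(f, detune_cents):
    """Beats per second between a tone at f Hz and the same tone detuned upward by detune_cents."""
    return f * (2.0 ** (detune_cents / 1200.0) - 1.0)

# beat rates (Hz) for a few detunings, in three registers
for f in (220.0, 440.0, 880.0):
    for cents in (10.0, 25.0, 50.0):
        print(f, cents, round(beat_rate(f, cents), 1))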
The ability of the ear to extract individual components from complex
sounds, however, argues strongly in favor of just intonation.
JI tunings are the only tunings which accord with the ear's Fourier
analysis mechanism.
It is, however, clear that Fourier analysis is only one method used by the
ear to process sounds, and in many situations it is the least important
system. Both Plomp & Levelt's and Kameoka and Kuriyagawa's findings strongly
support all three general kinds of tunings, provided that the individual
partials of the notes are matched to the scale as suggested
by Sethares, Risset, Pierce and McLaren.
We've seen from Shepard's and Risset's auditory illusions and from Wessel's
streaming phenomenon that all 3 ear/brain systems of pitch processing
interact; sometimes they conflict. This provides opportunities for
composition using digital media, and tends to support non-just non-equal-
tempered tunings (albeit weakly).
At first glance, the surprising and universal human preference for stretched
intervals, including a stretched octave of between 1210 and 1215 cents on
average, strongly favors non-just non-equal-tempered scales.
However Terhardt has brought forth convincing evidence that much of this
effect is due to learning: and in that case, since any musical tuning can be
learned, all three tunings are supported by this body of evidence to the
degree that acculturation is involved.
The wide variability of interval size in live concerts by expert performers
tends to support this conclusion; however Terhardt's findings in comparing
stretched, compressed and equal-tempered tunings found a differential
preference for all three types of tunings--depending on context.
This evidence gives superiority to either equal-tempered, stretched or
compressed tunings, depending on whether harmony or melody predominates.
The body of psychoacoustic evidence as a whole clearly shows a difference
between the size of preferred melodic and harmonic intervals called by the
same name; this also provides strong evidence for categorical perception of
musical intervals, which in turn tends to vitiate the superiority of any
given tuning. If, once learned, an interval can be recognized even though
significantly altered in size, then many different tunings should prove
equally musical and effective provided that Rothenberg's propriety criteria
are observed.
The experiments also show a clear dichotomy between the preferred size of
vertical intervals and sequential intervals. Both preferred interval sizes
as measured in adjustment tests are significantly larger than small-integer
ratios predicted by conventional Western theory, but the sequential
intervals heard as "true thirds," "true fifths," "true octaves," etc. are
even wider than the already wider-than-just vertical intervals. This body
of data strongly supports the use of non-just non-equal-tempered tunings,
inasmuch as the psychoacoustic data support neither a preference for
tempered nor just intervals.
Lastly, the limitations on the applicability of the Fourier transform do
not argue against just intonation or for any other tuning, since the FFT is
entirely appropriate for strictly harmonic sounds; however an awareness of
the limitations of the FFT as a tool for explaining and analyzing musical
sounds serves as a warning against generalizing the results of this or that
psychoacoustic experiment beyond its proper range.
Thus the data is mixed and, as promised, there exists considerable
psychoacoustic evidence to support each major type of tuning, as well as a
significant body of findings which tend to *refute* the arguments for each
particular tuning.
To date, the psychoacoustic data amassed by various researchers has been
considered as though it were a perfect array of unbiased results. However,
all the major psychoacoustic researchers have exhibited strong biases
toward this or that particular tuning philosophy. In some cases this
musical bias had little effect on the conclusions they chose to draw from
their data; in other cases, these researchers deliberately buried or
ignored results which did not conform to their tuning prejudices.
The next post will examine in detail the biases of each major researcher,
and the degree to which it warped his conclusions.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Tue, 17 Oct 1995 18:17 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id JAA08685; Tue, 17 Oct 1995 09:16:38 -0700
Date: Tue, 17 Oct 1995 09:16:38 -0700
Message-Id:
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗JOHNSON@SKSOID.dseg.ti.com

10/18/1995 5:55:00 AM
Greetings!

I've read with great interest the topics of this list for some time...

Recently there was mention made of Chinese bells and I remember an excellent
article in the April 1987 issue of Scientific American which describes an
instrument made of 65 bronze bells recovered by archaeologists in 1978 that
dates back to the fifth century B.C.

An unusual (and rather charming) discovery about these old chime bells is that
they were constructed such that each will produce two separate pitches
depending upon where they are struck (The interval is always a minor or major
third.). To quote the article, "The design of the bells requires a
theoretical grasp of physics and engineering formerly thought to have evolved
only in the late 18th century."


Thoughts??? Observations???

regards,

dj

dana-johnson@ti.com

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Wed, 18 Oct 1995 17:09 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id IAA09635; Wed, 18 Oct 1995 08:09:28 -0700
Date: Wed, 18 Oct 1995 08:09:28 -0700
Message-Id: <9510180805.aa17347@cyber.cyber.net>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/18/1995 8:09:28 AM
From: mclaren
Subject: Tuning & psychoacoustics - post 23 of 25
---
Psychoacoustics is an empirical science. Mathematical theories abound,
but they are attempts to explain experimental evidence. There are no valid "a priori" theories of hearing.
Psychoacoustics uses choice and adjustment methods, along with
computer analysis of live performances. The first methods involve a laboratory setting, while the second generally does not.
Overall results from both settings tend to agree.
Psychoacoustic researchers are prone to bias. The goal of scientific
investigation is not to reduce or eliminate the prejudice of the
investigator, but to reduce or eliminate its effect on the measured
results. All of the psychoacoustic researchers cited so far have tuning
preferences.
Some of these researchers warp the presentation of results to favor a given
tuning system, while others do not.
Ohm (1843), Helmholtz (1863) and Pikler (1898) strongly favored just
intonation tunings and the ear-as-Fourier-analyzer model. To explain away
the empirical evidence that 19th-century composers and performers
universally used equal tempered tunings, Helmholtz claimed that modern
performers were incompetent and that "true" (i.e., just) intonation was a
lost art, requiring superlative skill. To refute the empirical evidence that
most people who listened to music seemed to like equal-tempered intervals,
Helmholtz claimed that modern listeners had been brainwashed by equal
temperament.
In order to determine a listener's "true" preference among musical
intervals, he contended, one needs must find an ear uncontaminated by
debased modern equal-tempered music. Since ears accustomed to pure just
intonation and unsullied by equal temperament were not to be found in 19th
century Europe, Helmholtz's hypothesis ran afoul of the first demand on any
scientific hypothesis: namely, that it be testable.
However, in the second edition of his book "On the Sensations of Tone,"
Helmholtz modified his original position and wrote that music "does not
rest solely upon inalterable natural laws, but is also, at least partly, the
result of esthetical principles, which have already changed, and will still
further change, with the progressive development of humanity." [Helmholtz,
Hermann, "On the Sensation of Tone," 2nd. Dover ed., 1863, pg. 235]
Helmholtz is thus a contradictory figure, on the one hand explaining away
empirical evidence with various pieces of circular reasoning, and on the
other hand openly suggesting that the third, or learned, model for hearing
had a strong influence on the auditory system.
Pikler and Ohm proved less open-minded, consistently burying results
which conflict with just intonation as the "best" tuning.
Seebeck (1843), Stumpf (1896), Schouten (1938), Boomsliter & Creel (1965),
Plomp (1967) and Roederer (1973) are strongly biased toward the
periodicity model of the ear. Of these, Seebeck, Schouten and Stumpf do not
attempt to explain all pitch processing or auditory phenomena on the basis
of the periodicity theory: they simply give their results and omit mention of
data which would tend to contradict their conclusions.
Plomp's approach deserves special mention. He argues tenaciously and at
length against von Bekesy's place theory near the end of his magnum opus,
Experiments In the Tone Sensation (1967). After describing a model of the
ear's pitch and spectral-analysis functions as a set of 1/3-octave bandpass
filters, Plomp goes on to state: "This model of the ear's frequency-analyzing
mechanism must be considered as a simplified representation, since the ear
does not contain a limited number of fixed filters but is a continuous
system of overlapping filters. Nevertheless, Figure 60 elegantly accounts
for both the discrimination of the lower harmonics and the preservation of
periodicity. The way in which these waveforms give rise in the ear to
periodic nerve impulses, however, is still rather unknown." [Plomp,
"Experiments In the Tone Sensation," pg. 128, 1967]
Plomp mentions experiments by de Boer (1956), Schouten (1962), Ritsma
and Engel (1964) and Licklider (1956, 1959, 1962)--all staunch advocates
of the periodicity theory. Significantly, Plomp does not mention any of the
experiments which tend to contradict the periodicity theory of hearing--in
particular, Flanagan and Guttman's 1959 experiments and the removal of the
fundamental from the Seebeck click series. These experiments *are*
described by von Bekesy.
By neglecting to cite the full range of experimental results, Plomp creates a
false impression that all psychoacoustic data support the periodicity
theory. Moreover, he neglects to mention that the Fourier-analysis (or
place) theory of hearing could equally well account for the fact that pitch is
primarily determined by the lower harmonics of a sound, and that even when
those lower harmonics are masked by noise the ear still detects a definite
pitch. "Apart from the question of how a model spanning only two octaves
can throw any light on the perception of a complex tone, we may ask how
this observation can explain why the pitch of a complex tone is not altered
when the lower harmonics are masked completely by noise... In that case,
the contribution of the place corresponding to the fundamental is eliminated."
[Plomp, "Experiments In the Tone Sensation," 1967, pg. 130]
Oddly, Plomp neglects to mention the obvious place-theory explanation of
this effect. This phenomenon is well known from the perception of definite
pitch ascribed to bells: the ear operates as a Fourier analyzer and fits the
partials of the bell into the higher members of a harmonic series, then
assigns the bell a perceived pitch given by the assumed fundamental. If the
ear's Fourier analysis can assign a nonexistent fundamental to a bell, why
not to a strictly harmonic sound whose lower harmonics are masked by
noise?
Houtsma summarizes this explanation concisely: "De Boer (1956) reported
pitch matches in which inharmonic complex tones comprising five or seven
partials with uniform frequency spacing were aurally matched to periodic
complex tones with a fundamental that differed systematically from the
spectral spacing of the inharmonic sound. Schouten, Ritsma and Cardozo
(1962) produced similar data for AM complexes (three partials)...and
Smoorenburg (1970) found essentially the same results using complexes
consisting of only two tones." [Houtsma, A.J.M., and Goldstein, J.L., "The
Central Origin of the Pitch of Complex Tones: Evidence from Musical Interval
Recognition," Journ. Acoust. Soc. Am., Vol 51, No. 2, 1971, pg. 525]
Moreover, Plomp glosses over the most serious objection to the periodicity
theory--namely, that because of the duration imposed on the impulses from
nerve fibers in the inner ear, pitches above 1600 Hz should not be
perceptible. "The physiological process which sets an upper limit to the
frequency of impulses in each fiber is the refractory period. For a brief
interval of approximately 1 msec after each impulse the nerve-fiber is not
excitable and cannot transmit another impulse." [Boring, E. G. and A. Forbes,
Hearing: Its Psychology and Physiology, 1983, 2nd ed., pg. 401] Plomp's
response to these objections is worth noting:
"The hypothesis that hte pitch of a tone is based on the period of the sound
waves may be criticized ont eh ground that this periodicity is preserved up
to 3000-4000 cps, whereas we are able to distinguish tones up to about
16000 cps. This discrepancy is one of the most serious arguments against
periodicity pitch. It is obviated by Wever's assumption (Chapter 7) that the
ear is provided with two pitch-detecting mechanisms: one, based on
periodicity, for low frequencies and one, based on place of maximal
stimulation along the basilar membrane, for high frequencies. This
conception is not very attractive, however." [Plomp, "Experiments In the
Tone Sensation," 1967, pg. 130]
This is a remarkable statement. Plomp saves the periodicity model of pitch
by dragging in another model of ear/brain function, then criticizing his own
conclusion!
Plomp never explains why the idea of multiple auditory pitch detection
mechansisms is "not very attractive." Moreover, his arguments for the
periodicity theory lose much of their power because he's forced to appeal to
the very theory he's arguing against (the place theory) in order to save the
periodicity theory for notes with high fundamentals.
In several cases where both the periodicity and place theories offer equal
explanatory power, Plomp consistently comes down on the side of the
periodicity theory. Consider the following example:
"Upon presenting tone intervals with the same freuqnecy ratios to the ear,
for instance the mistuned interval 200 + 601 cps, the same phenomena were
observed as with (von Bekesy's) model. This correspondence was considered
by von Bekesy as an affirmation that in both cases the same mechanism is
involved. In his opinion, it seems clear that the periodicity of the nerve
impulses does not play an important part in the production of beats.
Although this reasoning appears rather attractive, I prefer an alternative
explanation of the beats which is based on the assumption that pitch is
related to the periodicity of nerve impulses." [Plomp, "Experiments In the
Tone Sensation," 1967, pg. 125]
Perhaps even the most diligent researchers become confused when
attempting to explain away their own biases. At least Plomp admits his
prejudice: "In the writer's opinion, this explanation of pitch perception in
terms of periodicity is more satisfactory than those based on the place
principle. If the pitch of complex tones is derived from periodicity, it is
difficult to see why this should not be the case for simple tones, too."
[Plomp, "Experiments In the Tone Sensation," 1967, pg. 129]
Aside from the fact that many of his objections to the place theory are
easily answered, and that he neglects to point out some of the most serious
problems with the periodicity theory, Plomp is reasonably straightforward
about his leanings toward the periodicity theory.
Boomsliter & Creel, in "The Long Pattern Hypothesis in Music," Journ. Mus.
Theory, 1965, are not so forthcoming.
They are strongly prejudiced toward just intonation, and their article does
not cite results which tend to contradict either periodicity or JI tunings
(viz., combination tones, the universal preference for stretched intervals,
etc.), and they explicitly attempt to derive all of the ear's functions from the
periodicity theory. For example, there's no mention of the body of
psychoacoustic evidence for a universal preference for stretched intervals,
and Boomsliter and Creel explain away their own data showing a general
preference, on their "search organ," for intervals significantly larger
than those predicted by the just intonation theory of small whole numbers.
The same is true of Roederer in his book's section on the physical makeup of
the ear/brain system: there is a great deal of discussion of neurons and
neural firing patterns, but none at all of processing in the medulla and higher
brain areas, nor any detailed discussion of von Bekesy's experiments, the
physical action of the organ of Corti, etc. However, to his credit, Roederer
does cite extensively the results of Ward, Corso, Terhardt, Sundberg and
Lindqvist for the universal preference for stretched intervals.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Wed, 18 Oct 1995 18:02 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id JAA13167; Wed, 18 Oct 1995 09:01:51 -0700
Date: Wed, 18 Oct 1995 09:01:51 -0700
Message-Id:
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/19/1995 9:37:24 AM
From: mclaren
Subject: Tuning & psychoacoustics - post 24 of 25
---
Juan Roederer is one of the more influential psychoacoustic researchers.
Like many others in the field, he is biased toward a particular model of
the ear: in Roederer's case, the periodicity theory of hearing.
In "Introduction to Physics and Psychophysics of Music," the section on the
physical makeup of the ear/brain system shows this bias clearly.
In this section there is a great deal of discussion of neurons and
neural firing patterns, but none at all of processing in the medulla and higher
brain areas, nor any detailed discussion of von Bekesy's experiments, the
physical action of the organ of Corti, etc. However, to his credit, Roederer
does cite extensively the results of Ward, Corso, Terhardt, Sundberg and
Lindqvist for the universal preference for stretched intervals.
Thus Roederer, like Helmholtz, is a contradictory figure--deliberately
overemphasizing some of the evidence, while remaining open-minded about
other contradictory results. Fetis (1943), Ellis (1885), Corso (1954), Ward
(1970), Burns (1970), Hood (1975) and Erickson (1983) are all strongly
biased toward the ear-as-controlled-by-learned-preference model of
hearing. Terhardt and Ward/Burns tend to bury evidence which does not
favor their view by dumping a superflux of additional results on top of the
relevant data; this has the same effect as squirreling the relevant data
away in a bibliography at the back of the article, but it achieves the same
end by opposite means. Fetis, Hood, and Ellis tend to stress the multiplicity of
results and musical cultures, rather than concentrating on empirical data
from specific experiments.
Von Bekesy demonstrates a bias toward the place theory but his bias does
not appear to affect his willingness to bring forward contradictory results.
In particular, he cites both the periodicity and place theories as worthy of
additional investigation in his 1966 article "Hearing Theories and Complex
Sounds," Journal of the Acoustic Society of America.
Terhardt & Zick, Kameoka & Kuriyaga, Pierce, Mathews, Green, Wessel,
Risset and Sundberg do not exhibit a bias toward any specific theory of
hearing. These authors all cite competing hypotheses and suggest lines of
further experimental inquiry into all 3 ear/brain models.
Throughout this article the intent has been to present psychoacoustic
data as clearly as possible.
In many cases this meant extracting experimental results from layers of
refutation or from citations buried in bibliographies because this or that
psychoacoustician preferred not to bring the inconvenient result out into
the body of the text, where it might raise embarrassing questions.
In other cases, viz., Risset or von Bekesy or Pierce, extensive direct quotes
of secondary sources were used because these sources offered the most
detailed survey of the evidence.
Thus the casual reader must be wary of Roederer, Helmholtz, Plomp and
other widely cited sources because of their covert (sometimes overt) bias
toward this or that tuning or theory of hearing.
As mentioned at the outset, acousticians have fared far worse than
psychoacoustics researchers in this regard.
Backus (1969), like von Bekesy, is biased toward just intonation--but
unlike von Bekesy he neglects to mention any of the psychoacoustic experiments
which cast doubt on either just intonation as the ideal tuning
system or the place theory as the sole explanation of hearing. In the
chapter "Intervals, Scales, Tuning, and Temperament," Backus lavishes 3
pages on just intonation and 2 pages on Pythagorean intonation but only 1
page on equal temperament. No tunings are mentioned other than
Pythagorean, meantone, just intonation and equal temperament: there is,
for example, no reference to pelog, slendro, the Indian srutis, or any other
non-European tuning.
Moreover, Backus buries or is not aware of many psychoacoustic data which
contradict his view of the ear as simple Fourier analyzer. In part (as
mentioned earlier) this is because Backus had the bad luck to publish his
book "The Science of Musical Acoustics" just before computers introduced
an immense upheaval into acoustic and auditory research. In part the
problem appears to be overt prejudice against acoustic results which do
not favor just intonation. As mentioned earlier, texts by Rossing and Hall
supersede the acoustics in Backus, and the psychoacoustics in Backus (where
he refers to them at all) are incorrect as well as out of date. Thus Backus'
entire text should be ignored.
Benade (1975) cannot excuse his lapses on the basis of bad timing. In the
period 1970-1974 much of the data cited throughout this series of posts
was already well known; Benade chooses not only to ignore it, but actually
to argue with a number of independently-confirmed results, particularly the
universal preference for stretched musical intervals and the accumulated
evidence for the periodicity theory.
For example: "Experiments by Paul Boomsliter and Warren Creel give us very
important information on what a musician actually does about tuning. My
discussion in this chapter is strongly influenced by these data, although I do
not completely accept their published interpretation. Paul C. Boomsliter
and Warren Creel, "The Long Pattern Hypothesis in harmony and hearing," J.
Mus. Theory, Vol. 5, No. 2, 1961, pp. 2-30, and Paul C. Boomsliter and Warren
Creel, "Extended Reference: An Unrecognized Dynamic in Melody," J. Mus.
Theory, vol. 7, No. 2, 1963: pp. 2-22." [Benade, A., "Fundamentals of Musical
Acoustics," 1975, pg. 303]
In short, Benade agrees with Boomsliter and Creel's claim for "small-
integer-ratio" detectors in the ear, but rejects the evidence they provide
for the periodicity theory of hearing. Benade's commentary is notable
because [1] it is hidden away in a footnote and [2] it ignores the fact that
a wealth of additional evidence supports the periodicity theory of hearing--
evidence which cannot be easily explained by the place theory, which Benade
espouses.
Again: "Everywhere in our experiments we have found indications that our
nervous system processes complex sounds coming to it by seeking out
whatever subsets of almost harmonically related components it can find."
[Benade, A., "Fundamentals of Musical Acoustics," 1975, pg. 68]
This statement is as deceptive as it is true. The accuracy of Benade's claim
depends on experiments he chooses to perform: and by failing to perform
auditory experiments which would cast doubt on the place theory of hearing,
Benade creates the impression that no such doubt exists. As has been seen
throughout this article, there is ample evidence both to support and
contradict the place theory of hearing (and to support and contradict the
other two models of hearing as well). By omitting mention of any of this
additional evidence, Benade creates a profoundly misleading impression in
the unwary reader. The implication of Benade's statement is that all
psychoacoustic experiments support the place theory of hearing--entirely
untrue, as we have seen.
Again: "The situation with tones having harmonic partials is much more
straightforward. We have already learned that pitch-matchings between
successive and superposed tones are in agreement when the tones consist of a
few strong partials." [Benade, A., "Fundamentals of Musical Acoustics,"
1975, pg. 302]
The psychoacoustic data do not support this claim. On the contrary: every
psychoacoustic experiment since the 1830s shows a distinct and
measurable difference between intervals heard as "perfect" when played
successively and when played simultaneously, with a strong tendency for
successive tones to be played wider than simultaneous tones, and a
consistent tendency for both categories of tones to be played wider than
small-integer ratios. Clearly in this case Benade is unaware of (or has
chosen to ignore) 150 years of psychoacoustic data.
For these reasons Benade's text is unreliable insofar as it bears on
psychoacoustics. Some of Benade's acoustic results remain valid, others
have been disproven--particularly Benade's discussion of oscillation
patterns in woodwinds and his theory of regimes of oscillation for brass
instruments. On balance the entire text should be ignored in favor of more
detailed and far more accurate treatments by Rossing, Fletcher, Askill and
Lord Rayleigh.
Rossing and Fletcher's "The Physics of Musical Instruments," and Askill's
"The Physics of Musical Sound" remain excellent surveys of the state of the
art in musical acoustics. As mentioned, however, little information on
psychoacoustics can be gleaned from these texts because psychoacoustics
is not their concern. These texts do not cite appreciable amounts of
psychoacoustic data and should not be quoted to support this or that tuning
or theory of hearing.
"Musical Acoustics: An Introduction," by Donald Hall, 1980, contains an
accurate precis of the acoustics of piano strings, metallophones and
woodwinds, as well as a good survey of brass instruments, et alii. Hall is
strongly biased toward just intonation and he buries or calls into question
psychoacoustic data which do not accord with his prejudices. The acoustic
portion of Hall's text is impeccable and worth reading, while the section of
the book which bears on tuning systems and psychoacoustics is incomplete,
outdated, full of errors of omission, and should be ignored.
The single best overall survey of psychoacoustic experiments performed up
to the 1960s is Plomp's 1966 text "Experiments In the Tone Sensation." It
contains an exhaustive bibliography unmatched anywhere else, and
constantly uses extensive direct quotes from the original sources. Plomp
exhibits a constant and strong bias toward the periodicity theory; however,
he readily admits his prejudice.
He is also conscientious in pointing out behaviours of the human ear which
are not well explained.
Georg von Bekesy is biased toward the place theory; not surprising,
inasmuch as his work put the place (Fourier) theory of hearing on a firm
foundation. He cites both the periodicity and place theories as deserving
further study, however, and (like Plomp) also cites results which
contradict all three models of the ear/brain system.
"Experiments in Hearing," New York: Robert E. Krieger, 1960 and republished
in 1980, is the single best source of references for the place theory of
hearing.
Harvey Fletcher's "Speech and Hearing in Communication," 1953, is dated
but unlike Backus and Benade it is not rendered worthless by overt bias.
Better texts now exist (Sundberg, Rossing, Pierce, Terhardt & Zick) but the
results Fletcher cites tend to be accurate.
Diana Deutsch's 1982 "The Psychology of Music" summarizes key
psychoacoustic results by many of the researchers who made the original
findings. Most of the contributors are biased toward one or another tuning
and the reader must take care to separate experimental results from the
conclusions drawn by the various authors. As has been seen, the
conclusions of various researchers are on occasion mere opinions,
unsupported by the facts. The data cited in Deutsch's compilation are
extensive and accurate, although the bibliographies for each section prove
distinctly selective.
"Psychological Acoustics," edited by E.D. Herbert, is a collection of the
original papers in psychoacoustics from the 1870s to the 1970s. This is the
only text which amasses all the original results in the original authors'
own words. Much of the material is now dated, however, and therefore
provides an incomplete picture of the ear/brain system.
"Auditory Scene Analysis," by Albert S. Bregman, 1990, is a disappointment.
It is vague on crucial points and does not cite enough psychoacoustic
references. While Bregman does not exhibit major biases toward any
specific tuning system, he appears to gloss over many difficult areas of
psychoacoustics; viz., the contradictory evidence for various theories of
hearing, unexplained ear/brain phenomena, the role of musical illusions in
the auditory path, etc.
On the whole Bregman's text is useful as a quick overview but should not be
cited as a primary source.
The next and last post of this series will discuss the higher-level
interaction of tuning, timbre and structural tonality as considered by
Rothenberg, Keislar and Douthett, and as examined in the work of Pierce,
Risset, Sethares, Carlos, et al.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Thu, 19 Oct 1995 19:38 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id KAA07308; Thu, 19 Oct 1995 10:37:43 -0700
Date: Thu, 19 Oct 1995 10:37:43 -0700
Message-Id: <009981E2B053B9DF.5D10@ezh.nl>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/20/1995 7:57:41 AM
From: mclaren
Subject: Tuning & psychoacoustics - post 25 of 25
---
As an empirical science, psychoacoustics is largely concerned with
measuring the reactions of the ear/brain system to specific acoustic
stimuli. However, human hearing is a hierarchical process made up of many
layers of abstraction.
Small acoustic stimuli shade imperceptibly into larger ones, leading
inexorably to such large-scale percepts as "key center," "cadence,"
"discordance," and "concordance." As Eberhard Zwicker points
out, "It is clear that psychoacoustics plays an important role in musical
acoustics. There are many basic aspects of musical sounds that are
correlated with the sensations already discussed in psychoacoustics.
Examples may be different pitch qualities of pure tones and complex sounds,
perception of duration, loudness and partially-masked loudness, sharpness
as an aspect of timbre, perception of sound impulses as events within the
temporal patterns leading to rhythm, roughness, and the equivalence of
sensational intervals. For this reason it can be stated that most of this
book's contents are also of interest in musical acoustics. At this point we
can concentrate on two aspects that have not been discussed so far: musical
consonance and the Gestalt principle." [Zwicker, E. and H. Fastl,
Psychoacoustics: Facts and Models, 1990, pg. 312]
Zwicker characterizes the hierarchical perception of musical tones by
drawing a distinction between sensory consonance (perceived roughness,
sharpness, and noisiness of the tone) and harmony (perceived tonal
affinity, tolerability, and root relationship of tones or sequences of tones
to a scale).
In so doing, he posits that both modes of perception are hierarchically involved
in the sensation of musical consonance.
Both experience and experiment tell us that the process of listening to
music involves levels of neural organization above the purely physical
acoustic operation of the inner ear. While the point of maximal stimulation
on the basilar membrane indicates a simple mechanical Fourier analysis of
sounds entering the ear, the firing pattern of neural fibers in the auditory
nerve encodes pitch and spectral information in the nerve system in a
complex way.
The path between primary auditory nerve and cerebral cortex is not a simple
one. Many feedback loops control the processing of auditory information,
and there are many opportunities for higher brain centers to alter the raw
input travelling up the auditory nerve--and vice versa.
The anatomy of the pathway between the auditory nerve and the cerebral
cortex is complex: the cells of the primary neurons (that is, those in the
auditory nerve) are located within the modiolus of the cochlea; these
primary neurons terminate in the cochlear nucleus, a mass of gray matter
located in the dorsal and lateral portion of the medulla oblongata. Here the
physical nerve connection breaks. From this point there is a synaptic
connection (mediated by neurotransmitters) to the neurons of the inferior
colliculus. After another synaptic gap in the neural pathway, the third-
order neurons converge on the medial geniculate body, the final relay station
on the auditory path to the cerebral cortex. It's worth noting that the
medial geniculate body not only collates fibers from the auditory nerve, but
also from other sensory systems and from the cerebral cortex as well. Thus
the geniculate body serves not so much as a passive relay station as an
active filtering and integrating locus.
>From the geniculate body, the fourth-order auditory neurons connect with
the cerebral conrtex by way of a thin sheet of radiating nerve fibers. These
radiations include corticofugal fibers running from the cortex back to the
medial geniculate body.
Thus the auditory neural pathway contains a complex feedback loop,
controlled by several sets of higher brain loci, running between the auditory
nerve and the cerebral cortex.
Most of the fourth-order neurons enter a small region of the posterior half
of the horizontal wall of the Sylvian fissure, which acts as a focal zone for
the entire auditory cortex. The complexity of the auditory region of the
Sylvian fissure is daunting: each cochlear fiber makes connections with
thousands of other neurons grouped in at least thirteen regions, and
populated by many different types of neurons. To make the process even
more complex, not all of these neurons respond identically. Some produce
strong signals when presented with tones in a
particular frequency range but do not respond to tones in other frequency
ranges. A small fraction of neurons emit strong signals when two different
frequencies are sounded together, but these same neurons produce little or
no response when either frequency sounds alone. Some neurons are most
strongly stimulated by sounds at specific amplitudes: sounds outside this
narrow amplitude window cause no response from such neurons. For yet other
auditory nerve fibers, the higher the sound's amplitude, the stronger the
response, until a saturation point is reached. Some neurons respond best to
amplitude-modulated tones, others to frequency-modulated tones. Some
neurons respond with particular vehemence to sounds coming from a
particular region of space, and some neurons respond best to sounds that
are moving in space.
Because these cortical loci consist of neural pathways, they are formed by
learned response and can be changed. Thus, the impact of culture and
experience on musical perception is at least as great as the physical
sensory correlates of musical tone--if not greater.
"I once attended...a concert in Bangkok that was totally mystifying. I could
see that the audience was utterly enraptured, swooning at moments of
apparently overwhelming emotional beauty that made no impression on me
whatsoever; not only that, I couldn't distinguish them from any other
moments in the piece." [Eno, Brian, "Resonant Complexity," Whole Earth
Review, May 1995, pg. 42]
This points to an important caveat. While the results adduced so far provide
evidence for this or that musical tuning system on the basis of sensory
consonance, psychoacoustics cannot describe or validate the higher levels
of musical organization implicit in a tuning system.
Thus the internal structure of a tuning is different from the sensory
consonance produced by intervals within that tuning. For example: Risset's,
Pierce's and Sethares' timbral mapping procedures, following the
implications of research by Plomp and Levelt and by Kameoka and Kuriyagawa,
allow a composer to control the level of *sensory consonance* in a given
tuning, but mapping the component partials of a sound into a maximally
consonant set for a specific scale does *not* change the scale's inherent
tonality, its Rothenberg propriety, its Barlow harmonicity, or its Wilson
efficiency.
In short, by changing timbre, note duration, and compositional style one can
change the surface affect of music produced in a given tuning: but the
deeper structural elements of the tuning remain invariant.
Ivor Darreg described one of the deeper structural invariants in a given
tuning as its "mood:" "In my opinion, the striking and characteristic moods
of many tuning-systems will become the most powerful and compelling
reason for exploring beyond 12-tone equal temperament. It is necessary to
have more than one non-twelve-tone system before these moods can be
heard and their significance appreciated." [Darreg, Ivor, "Xenharmonic
Bulletin No. 5," 1975, pg. 1]
David Rothenberg proposed that the Rothenberg propriety of a scale
explains some aspects of the scale's deep structure; Clough and Douthett
duplicated some of this work in their article "On Well-Formed Scales."
John Chalmers has speculated that Rothenberg propriety explains the sense
of tension in such tunings as Ptolemy's intense diatonic.
In addition to the "mood" or overall "sound" of a given tuning, Darreg and
McLaren (1991) pointed out that each tuning exhibits some degree of
inherent bias toward melody or harmony. The Pythagorean intonation and
13-tone equal temperament, for example, are both strongly biased toward
melody, while 31-tone equal temperament and 13-limit just intonation are
strongly biased toward harmony.
Douglas Keislar made this same point in his 1992 doctoral thesis. In it,
Keislar describes research which demonstrates that altering the surface
characteristics of the music--timbre, tempo, spatialization--does not
change the deeper structural characteristics of the tuning. Thus, while
mapped overtones will make a composition in 13-tone equal temperament
sound more acoustically smooth, it does not change the essentially atonal
character of the 13-tone scale, nor does it materially affect the scale's
"mood." Similarly, changing the timbres of a composition in Ptolemy's
intense diatonic tuning will alter the degree of sensory roughness or
smoothness; adding reverberation will mask to greater or lesser degree
some of the overall "sound" of the composition. But the sense of aesthetic
tension created by scale intervals which are, in Rothenberg's usage,
improper, will remain unchanged.
Thus the implications for tuning suggested by psychoacoustic research
must be viewed as separate from larger musical and perceptual questions.
Because current psychoacoustic experiments focus on questions of sensory
perception, there remains a dichotomy between what Easley Blackwood has
called "concordance and discordance" and sensory consonance and
dissonance. In fact sensory consonance is a misnomer: the effects are more
accurately described as sensations of auditory roughness or smoothness.
Depending on the tuning or the composition, intervals which are perceived
as rough may prove concordant, while intervals which produce the auditory
sensation of smoothness may strike the listener as discordant--that is, out
of place musically. In Western music, the best example of this phenomenon
is the perfect fourth, which sounds acoustically smoother than the major
third but which by itself generally constitutes an unstable and musically
discordant interval.
In Balinese and Javanese music, the best example is the stretched 1215-
cent octave, which sounds acoustically rough but which produces a sense
of musical concordance when performed by a gamelan.
The most striking example in my own experience was a 1990 concert by
the Women's National Chorus of Bulgaria. One of the duets (a folk song
from the Thracian plains) ended on a large just major second (9/8). The
Western audience sat without moving for what seemed a long time: only
when the singers bowed did the audience realize the duet was over, and
applaud. In this case the contradiction between learned perceptions of
concordance and cadence, and the sensory perception of roughness in the
cadential intervals, prevented the audience from correctly perceiving the
cadence.
It is important not to confuse sensory roughness or smoothness, as
measured by psychoacoustical experiments, with higher-level perceptions
of musical consonance and dissonance. Many advocates of just intonation
have baselessly conflated the two categories, while advocates of Fetis'
model (viz., all auditory responses are predominantly learned responses)
excessively emphasize the abstract levels of hierarchical auditory
perception while unjustifiably discounting the purely physical processes at
work in the human ear/brain system--in particular the frequency-analysis
operations of the basilar membrane and the periodicity-extraction
mechanism of the neurons in the auditory nerve.
Ultimately, what Zwicker calls Gestalt musical perception is mediated not
only by the physics and acoustics of the inner ear, but also by primary,
secondary, third-order and fourth-order neurons, a variety of different
brain locations, and the operant conditioning imposed by experience, culture
and musical tradition. The conclusions of this series of posts must be
taken in that context, and understood in that larger framework.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Sat, 21 Oct 1995 01:16 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id QAA08509; Fri, 20 Oct 1995 16:16:08 -0700
Date: Fri, 20 Oct 1995 16:16:08 -0700
Message-Id:
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/21/1995 7:38:10 AM
From: mclaren
Subject: large numbers of tones/oct on
sample-playback MIDI modules
---
It came to my attention recently that
a professor at the University of
Mississippi teaches a college course
about Elvis' Hawaiian films.
This was a relief to me. It proved that
no matter how bizarre and nonsensical
the posts of certain folks on this forum,
they're actually quite rational compared
to the *truly* looney denizens of the
so-called "real world."
Speaking of the real world, the yakademics
have now returned from sabbatical and are
no longer giving "special tutoring" to those
special coeds.
Since this forum is still largely an academic reserve,
it empties out during the summer, leaving only
the lone desultory student or two burned out
by a McJob at 2 am but not yet ready to lobotomize
hi/rself with The Weather Channel.
Thus there was little point in my posting anything
during the summer months.
Now that the rich white PhDs and the dirt-poor
overworked students are both back from summer
hiatus, there's an audience capable of hoisting
torches & pitchforks and screaming "Somebody get a rope!"
In short, an audience fully up to the high standards
of academic open-mindedness and insight we've
all come to respect so deeply.
Various claims to the contrary, it is in fact possible
to use sample-playback MIDI modules with only 1 pitch
table for either equal-tempered multiple divisions of the
octave or high-limit just intonation *without* the dreaded
"chipmunking" effect.
As you'll recall, "chipmunking" occurs when a sample
recorded at one pitch is played back at a drastically
different pitch. At first glance, you'd think this
unavoidable when dealing, say, with one of Erv Wilson's
70-pitch hebdomekontanies.
Chipmunking typically occurs on the Proteus modules,
the VFX, and all the other fixed-wavetable synths.
Whence arises chipmunking?
Consider: starting with some 1/1 pitch--say, A 440 Hz--
the pitch table of the MIDI module will contain a progressively
lower playback pitch than the frequency at which the sound was
originally recorded.
Take the following Wilson 70-note [1,17,41,67,97,127,157,191]
hebdomekontany (4 out of 8 CPS):
Scale degree 1: C + 0.000 cents
Scale degree 2: C + 14.9691 cents
Scale degree 3: C + 28.5475 cents
Scale degree 4: C + 32.3370 cents
Scale degree 5: C + 57.9854 cents
Scale degree 6: C# + 7.8545 cents
Scale degree 7: C# + 33.5029 cents
Scale degree 8: C# + 45.3184 cents
Scale degree 9: C# + 55.7040 cents
Scale degree 10: C# + 70.9667 cents
Scale degree 11: C# + 72.2992 cents
Scale degree 12: C# + 97.9476 cents
Scale degree 13: D + 35.0111 cents
Scale degree 14: D + 48.5894 cents
Scale degree 15: D + 50.2738 cents
Scale degree 16: D + 60.6594 cents
Scale degree 17: D + 63.8521 cents
Scale degree 18: D + 74.2378 cents
Scale degree 19: D + 77.2546 cents
Scale degree 20: D + 90.8330 cents
Scale degree 21: D# + 53.5448 cents
Scale degree 22: D# + 82.0924 cents
Scale degree 23: E + 19.5562 cents
Scale degree 24: E + 46.5370 cents
Scale degree 25: E + 95.0737 cents
Scale degree 26: E + 98.8633 cents
Scale degree 27: F + 12.4416 cents
Scale degree 28: F + 22.0545 cents
Scale degree 29: F + 24.5116 cents
Scale degree 30: F + 25.8441 cents
Scale degree 31: F + 38.0900 cents
Scale degree 32: F + 39.4224 cents
Scale degree 33: F + 51.4924 cents
Scale degree 34: F + 65.0708 cents
Scale degree 35: F + 74.3808 cents
Scale degree 36: F + 87.9591 cents
Scale degree 37: F# + 0.0291 cents
Scale degree 38: F# + 1.3616 cents
Scale degree 39: F# + 13.6075 cents
Scale degree 40: F# + 14.9400 cents
Scale degree 41: F# + 17.3970 cents
Scale degree 42: F# + 27.0100 cents
Scale degree 43: F# + 40.5883 cents
Scale degree 44: F# + 44.3779 cents
Scale degree 45: F# + 92.9145 cents
Scale degree 46: G + 19.8954 cents
Scale degree 47: G + 57.3592 cents
Scale degree 48: G + 85.9067 cents
Scale degree 49: G# + 48.6186 cents
Scale degree 50: G# + 62.1970 cents
Scale degree 51: G# + 65.2138 cents
Scale degree 52: G# + 75.5994 cents
Scale degree 53: G# + 78.7921 cents
Scale degree 54: G# + 89.1778 cents
Scale degree 55: G# + 90.8621 cents
Scale degree 56: A + 4.4405 cents
Scale degree 57: A + 41.5040 cents
Scale degree 58: A + 67.1524 cents
Scale degree 59: A + 68.4848 cents
Scale degree 60: A + 83.7475 cents
Scale degree 61: A + 94.1332 cents
Scale degree 62: A# + 5.9487 cents
Scale degree 63: A# + 31.5970 cents
Scale degree 64: A# + 81.4662 cents
Scale degree 65: B + 7.1145 cents
Scale degree 66: B + 10.9041 cents
Scale degree 67: B + 24.4824 cents
Scale degree 68: B + 39.4516 cents
Scale degree 69: B + 53.0300 cents
Scale degree 70: B + 86.4216 cents
(Those of you unfamiliar with a Wilson CPS
or the hebdomekontany will want to review
topic 2 of Tuning Digest 17 from 17 February
1994, also topics 1 and 2 of Tuning Digest
30 from 3 March 1994. The latter, by Paul
Rapoport, comprise perhaps the finest intro
to the subject yet written.)
As you can see, entering the above 70-note
just array into an E-Mu Proteus synth,
starting with 1/1 = 440.0 Hz + 0 cents,
produces a progressive transposition of sounded
pitch vs. originally-sampled pitch as the
scale rises. By the time we reach pitch 70,
the synth is playing a note at 440 Hz + 1186.4 cents
which was originally sampled playing at
440 Hz + 7000 cents! In other words note
70 of the just array is being played almost 5
octaves (5813.6 cents) *lower* than its original pitch. This
produces wildly bizarre sonic artifacts--growls,
wah-wah-wah effects, low- or high-pitched
background noise, etc., collectively known
as "chipmunking."
Is there any way to avoid these weird sonic artifacts
when playing either just arrays with a lot of different
notes, or small just arrays which modulate extensively,
or equal temperaments with lots of notes per octave?
Yes, there is.
The solution is twofold:
[1] Use a SMPTE-locked MIDI interface with a multitrack
tape deck *OR* any ordinary MIDI interface with a
hard disk recorder; and...
[2] Write a program which re-maps the MIDI notes in
your composition to different MIDI channels depending
on their MIDI note number.
Using this combination, you can now lay down just arrays
or equal tempered scales up to 127 notes per octave without
any significant chipmunking.
To see how this works, let's take a concrete example. First
we write a program--perhaps using MAX to re-map the notes
in real time, or using the Cal portion of Cakewalk on IBM
machines, or even using the MIDI file routines available
for $40.00 from Sound Quest to write your own C or BASIC
or PASCAL program--that reads in the MIDI notes of our
composition. Note X is remapped as note X on channel 1,
note X + 1 is remapped as note X on channel 2, note
X + 2 is remapped as note X on channel 3, note X + 3 is
remapped as note X on channel 4, note X + 4 is remapped as
note X on channel 5, and note X + 5 is remapped as note
X on channel 6. Then note X + 6 is remapped as note X + 1 on
channel 1, note X + 7 is remapped as note X + 1 on channel 2...
and so on. You get the idea.
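If you'd rather roll your own remapper, here is a minimal Python sketch of
that round-robin scheme. The note representation (bare integer offsets
counted from the key that plays scale degree 1) and the six-channel split
are assumptions for illustration; this is not the CAL, MAX, or Sound Quest
MIDI-file API.

```python
# Minimal sketch of the round-robin note remapping described above.
# Notes are plain integer offsets counted from the key that plays scale
# degree 1 -- an invented representation for illustration only.

NUM_CHANNELS = 6          # one tuning table (and one recorded pass) per channel

def remap(note_offset):
    """Map a scale-degree key offset to (channel, remapped key offset)."""
    channel = note_offset % NUM_CHANNELS + 1    # channels numbered 1..6
    new_offset = note_offset // NUM_CHANNELS    # 12-per-octave key on that channel
    return channel, new_offset

# Degree 1 (offset 0) -> channel 1, key 0; degree 2 -> channel 2, key 0;
# ... degree 7 (offset 6) -> channel 1, key 1, and so on.
for offset in range(8):
    print(offset, remap(offset))
```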
Next, make up 6 different tuning tables for, say,
your Proteus II module. All 6 tuning tables use 12
notes per octave out of the hebdomekontany. Tuning
table 1, for instance, would be:
Scale degree 1: C + 0.0000 cents
Scale degree 2: C# + 7.8545 cents
Scale degree 3: C# + 97.9476 cents
Scale degree 4: D + 90.8330 cents
Scale degree 5: D# + 82.0924 cents
Scale degree 6: E + 98.8633 cents
Scale degree 7: F# + 0.0291 cents
Scale degree 8: F# + 92.9145 cents
Scale degree 9: G + 85.9067 cents
Scale degree 10: G# + 48.6187 cents
Scale degree 11: A + 4.4405 cents
Scale degree 12: A# + 5.9487 cents
The second tuning table would be:
Scale degree 1: C + 14.9691 cents
Scale degree 2: C# + 33.5029 cents
Scale degree 3: D + 35.0111 cents
Scale degree 4: D# + 53.5448 cents
Scale degree 5: E + 46.5370 cents
Scale degree 6: F + 12.4416 cents
Scale degree 7: F# + 1.3616 cents
Scale degree 8: G + 19.8954 cents
Scale degree 9: G# + 62.1970 cents
Scale degree 10: A + 41.5040 cents
Scale degree 11: A# + 31.5970 cents
Scale degree 12: B + 10.9041 cents
and so on.
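One way to generate such tables programmatically is sketched below: it
deals every sixth scale degree to a channel so the tables line up with the
round-robin remapping sketch above. The 70 cents values here are a
stand-in equal division so the script runs on its own; substitute the
hebdomekontany values listed earlier, and note that the hand-built tables
in this post group the degrees somewhat differently.

```python
# Sketch: build one tuning table per channel by dealing every sixth scale
# degree to a channel (matching the remapping sketch above).  The cents
# list is a stand-in equal division so the script is runnable; substitute
# the 70 hebdomekontany values given earlier.  With 70 degrees the last
# two tables come up one entry short of 12.

NUM_CHANNELS = 6
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

scale_cents = [i * 1200.0 / 70 for i in range(70)]   # stand-in values

def name_plus_cents(cents):
    """Express cents above 1/1 in the post's 'note name + cents' format."""
    semitone = int(cents // 100) % 12
    return NOTE_NAMES[semitone], cents % 100.0

for table in range(NUM_CHANNELS):
    degrees = scale_cents[table::NUM_CHANNELS]       # every sixth degree
    print(f"Tuning table {table + 1}:")
    for key, cents in enumerate(degrees):
        name, offset = name_plus_cents(cents)
        print(f"  scale degree {key + 1:2d}: {name} + {offset:.4f} cents")
```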
Now copy your remapped MIDI file to 5 different
files with similar names; then delete
all notes on channels 2-6 for file 1,
delete all notes on channels 1 & 3-6 for
file 2, etc. If you're dealing with an
orchestration using multiple MIDI files
you may want to write into your program a
routine which automatically deletes all
but the 12 transposed notes/oct on the
channels you need for that MIDI file.
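A sketch of that channel-splitting routine, again with an invented event
representation (channel, note, tick) rather than any particular
sequencer's file format:

```python
# Sketch of the per-channel split: keep only one channel's notes in each
# copy of the remapped sequence.  Events are (channel, note, tick) tuples,
# an invented representation for illustration only.

def keep_channel(events, channel):
    """Return only the note events that live on the given channel."""
    return [ev for ev in events if ev[0] == channel]

remapped = [(1, 60, 0), (2, 60, 120), (1, 61, 240), (3, 60, 360)]
file_1 = keep_channel(remapped, 1)    # record this pass with pitch table 1
print(file_1)                         # -> [(1, 60, 0), (1, 61, 240)]
```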
Now lock your sequencer via SMPTE to
your multitrack tape recorder and lay
down channel 1 using pitch table 1.
Run the tape recorder back and
lay down channel 2 using pitch table 2,
and so on until you've laid down all 6
channels using all 6 pitch tables.
If you don't have an Alesis ADAT or
a Tascam DA-88, don't despair--my
experience shows that for compositions
of less than 15 minutes or so you can
lock WITHOUT SMPTE to a hard disk
recorder and still maintain perfect sync
with prerecorded parts, provided your
hard disk recorder has a "trigger playback
on MIDI" feature (most do). Since hard
disk recorders boast unmeasurably low
wow and flutter, you can actually lock multiple
recorded parts in sync *without* using SMPTE
sync. I've done this with 3 different stereo
tracks, equivalent to 6 mono tracks, so there's
absolutely no reason why you can't do it with as
many tracks as your hard disk recorder will allow.
Of course the different parts will eventually
drift apart if they last long enough...20, 30,
40 minutes or so. But for pieces less than about
15 minutes my experience is that the parts
remain in perfect sync. And since hard disk
recorders are getting dirt cheap, this is good
news for all of us.
As you can see, this process is laborious--but it
produces excellent sonic results. The worst
chipmunking you'll get with a 70-note hebdomekontany
is a note transposed about 85 cents up or down from
the pitch it was originally recorded at: this is
a stretch or compression of less than 5%, and produces
virtually no sonic artifacts. And 70 notes is an acid
test! Very few just arrays use that many notes!
One further refinement you might want to consider
is to compose a piece with lots of notes per octave
using a "sketch" set of timbres generated by a synth
which doesn't suffer from chipmunking--say, a
TX81Z or a VL-1. Once you've got the skeleton of
the composition laid down using these approximate
timbres, orchestrate the final version with a box
like one of the E-Mu Protei or even one of the Korg
synths limited to retuning within 12 tones per octave.
Then lay down multiple tracks with the final timbres
for a full version of the composition.
Although it's more trouble than merely entering a simple
pitch table and playing all the MIDI notes with a single
sequencer track, this method has the advantage of
allowing you to make full use of the high-quality samples
available in contemporary MIDI boxes. And as we all
know, synthesizer technology lurched to a grinding halt
sometime around 1989 and the synthesizer industry
decided to commit financial suicide. As a result virtually all
modern synths are nothing but sample-playback boxes.
Thus the method described here is the only really
effective way to deal with modern so-called synths
(which aren't really synthesizers any more, they're all
just fancy effects boxes with prerecorded sounds burned
into ROM) when using large numbers of tones (just or equal
tempered) per octave.
JI composers, take special note--by using the above process
with one channel for each key into which you want to modulate,
you can modulate to a virtually unlimited number of different
just 1/1s by laying down different tracks with SMPTE sync
or via hard disk recorder playback-on-MIDI-note sync. (Number
of keys is virtually unlimited because remember--there's no
limit to the number of different pitch tables you can load
into your synth *sequentially.* Using this method you could
modulate into 300 successive 1/1s if you were so inclined!) This
completely obviates the purported "difficulty" of modulation
when using just intonation, and renders utterly moot all the
various "practical" objections to just intonation raised
throughout the 19th and early 20th century on the basis of the
"impossibility" of modulating between just key centers.
Best of all, as hard disk recorders drop in price and hard disks
grow bigger and cheaper, you'll have more and more capability
on hand as time goes on. God I love the 90s!
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Sat, 21 Oct 1995 19:32 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id KAA17301; Sat, 21 Oct 1995 10:32:29 -0700
Date: Sat, 21 Oct 1995 10:32:29 -0700
Message-Id: <199510211731.KAA17136@eartha.mills.edu>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/22/1995 7:42:36 AM
From: mclaren
Subject: custom MIDI controller
---
One of the biggest stumbling blocks in
the path of the microtonal revolution is
the lack of a generalized MIDI keyboard.
If MIDI users could buy a keyboard like
the one on the Secor generalized
Scalatron, microtonality would take
an enormous step forward. Suddenly,
it would become simple and easy to perform
keyboard music in 31, 19, Partch 43, Wilson
41, Secor 17, or Helmholtz 24-note tunings.
Well, guess what?
Now you can build your own generalized MIDI
keyboard--and for less than $275!
PAVO, 10. S. Front St., Philadelphia PA 19106,
makes a black box to which you can hook 64
momentary normally-open switches and get
MIDI notes out.
What kind of switches can you use?
The sky's the limit: light switches, doorbell
switches, reed switches, infrared or cadmium
selenide photocells, contact switches, touch
switches, proximity switches, ultrasound-
activated switches, moisture-activated
switches, odor-activated switches...you name it.
Among the ideas PAVO suggests for novel
MIDI controllers: [1] sewing reed switches into
your clothing and generating a MIDI composition
by your movements; [2] attaching multiple
photoelectric switches to various parts of a
room and generating MIDI notes by shining a
laser; [3] building your own custom MIDI
percussion controller or keyboard.
Of particular interest for folks on this forum
is [3].
The PAVO MIDI computer (a black box with a
64-lead ribbon cable coming out of it and MIDI
IN and MIDI OUT connectors) comes in kit form.
The cost is $265 U.S. and with that you get the
PROM of your choice. (PAVO black boxes can do
a *lot* of things; the 64-note MIDI controller PROM
is only *one* of many PROMS they offer. You can,
for example, turn the black box into a "translating
randomizer" merely by substituting another PROM
into the ZIF socket inside the black box. In this
mode the PAVO black box selectively translates one
type of MIDI message into any other type of MIDI message,
randomizing them within user-selected parameters
entered into the front panel. However this post
concerns only the 64-note custom MIDI controller
EPROM, so if you want more info on the many *other*
exotic applications of their MIDI computer black
box, write PAVO directly. That address again:
PAVO, 10. S. Front St., Philadelphia PA 19106)
The limitations of the PAVO box are, to be fair,
numerous: first, you're limited to 64 input switches.
(If you need a 128-key controller, buy two black
boxes and hook their outputs through a MIDI merge
box; ditto 192-key controller, etc. At $250 per
black box, this isn't all that expensive--especially
compared to the *outrageous* highway-robbery cost
of $2000 for a 36-pad MIDI marimba bought
commercially!)
Second, the switches can't be more than 25 feet from
the PAVO black box (and should probably be a lot
closer for reliable operation). Most limiting of all,
the black box does not accept velocity information
from the switches. Thus you must set the velocity
of the MIDI notes either via a pot on the front panel
of the black box, or by means of a MIDI controller
foot pedal hooked into the data stream with a MIDI
merge box. (It's possible that you might be able
to wire an 8-bit A/D converter to the frame of the
MIDI controller and feed it into the front pot of the
PAVO black box, but since it's not part of the original
design...you'd have to do it at your own risk.)
All in all, these limitations are minor compared to
the advantages offered by this widget. For the first
time, you can build your own custom MIDI percussion
setup. One of the first and most obvious applications
that comes to mind would be to set up a set of plywood
or pine squares in the form of a 64-note Bosanquet
keyboard and attach the squares to reed or touch
switches, then solder the switches to the ribbon
cable leads. Plug in the black box, turn it on, and presto!
You've got your own Bosanquet-style percussion controller.
This would be especially ideal for those of us who yearn to
work in large JI arrays (say, Wilson 31 or 41 or 43, not
to mention D'Alessandro or the hebdomekontany) with
a percussion-type controller.
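For what it's worth, the logic inside such a box is trivial. Here's a
Python/mido sketch of the idea--NOT the PAVO firmware, just an
illustration, with BASE_NOTE and FIXED_VELOCITY as made-up stand-ins for
the hardware settings:

import mido

BASE_NOTE = 36        # made up: lowest pad sounds MIDI note 36
FIXED_VELOCITY = 96   # stands in for the front-panel velocity pot

port = mido.open_output()   # default MIDI output port

def switch_event(index, closed):
    # pad `index` (0-63) closed -> note-on; opened -> note-off
    note = BASE_NOTE + index
    if closed:
        port.send(mido.Message('note_on', note=note, velocity=FIXED_VELOCITY))
    else:
        port.send(mido.Message('note_off', note=note, velocity=0))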
By fitting the switch array with a DB-25 connector, one
could easily disconnect one percussion controller and
reconnect another one, thus allowing a performer to move
in less than a minute from, say, Partch 43 to 53-tone
equal temperament. Since switches are cheap (check the
latest JDR MICRODEVICES catalog) and plywood or pine
even cheaper, it should be a breeze to build half a dozen
different percussion arrays, each suited to a given tuning.
(Remember that since we're dealing with MIDI, one need
only saw the pieces of wood and glue switches to 'em--
all tuning's done in the MIDI synth.)
Neoprene coverings on the plywood or wooden percussion
pads would help give the percussion pads a more lifelike
"feel," and again it's cheap and easy.
Building the PAVO black box sounds pretty simple.
They offer a videotape showing the complete process (for
those of you who haven't dealt with a soldering iron and
a voltmeter before), as well as a diagnostic EPROM. Plug
in the EPROM and it'll automatically check your solder
connections, chips, glue logic, and run a test on the MIDI
ins and outs as well as the 64-lead ribbon connector (via
loop-through).
PAVO claims that assembly of one of their black boxes
takes 5-7 hours, and given the apparent simplicity of
the circuitry inside their box, that sounds reasonable.
It's basically nothing but an antique 6809 8-bit
microprocessor with an EPROM, a kilobyte or so of
static RAM and some glue logic and hex buffers to
keep the inputs and outputs from fricasseeing if you
accidentally hook a MIDI out to the black box's MIDI OUT.
All told, this is a spectacular development for those of us
who burn with the bestial desire to build Bosanquet
MIDI controllers!
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Sun, 22 Oct 1995 23:59 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id OAA00139; Sun, 22 Oct 1995 14:58:49 -0700
Date: Sun, 22 Oct 1995 14:58:49 -0700
Message-Id: <951022215607_71670.2576_HHB46-1@CompuServe.COM>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/23/1995 7:54:49 AM
From: mclaren
Subject: xenharmonic notation in Finale
---
The estimable Paul Rapoport has created a
microtonal music font for use with Finale.
This was a significant advance, hampered by
Finale's generally 12-centric mode of operation.
Well, times have changed. The latest version
of Finale--rev 3.5--just arrived, and it has
features to delight the heart of the most
hardened microtonalist.
Most notably, the NONSTANDARD KEY SIGNATURE
DIALOGUE BOX now contains an enhanced KEY STEP
MAP DIALOG BOX. What, pray tell, does this signify?
Buckle up, chromedomes--time for a trip into
the bowels of Finale's infinite set of dialog boxes...
First, click on the KEY SIGNATURE TOOL. (It's easy
to tell; this is the icon that looks like a rodent
having sex with a VCR. As opposed to the icon
that looks like a cyclotron swallowing a squid--
that's the Speedy Note Entry tool.)
When the Key Signature dialog box appears,
choose NONSTANDARD from the drop-down menu.
Click NEXT twice, then click the KeyMap icon.
Okay.
Now you're into a dialog box which allows you to
define a microtonal key signature.
First, you need to decide whether you want what
Finale mystifyingly calls a "Linear Key Format"
or "Nonlinear Key Signature." At this point permit
me to quote from the Finale Reference (Vol. 3,
ver. 3.0, which also applies to Finale ver. 3.5):
"In this dialog box, you specify how many notes
will constitute an "octave" (it's twelve notes in the
traditional system). You also specify how many of
these are "diatonic" (seven in the traditional system),
and where the "chromatic" steps occur in the scale. (In
the traditional system, the chromatic steps occur between
every pair of diatonic steps except steps 3 and 4 and steps
7 and 8.)
"If you're creating a linear key format, note that your work
in the Key Step Map dialog box must follow certain rules in
order to meet the definition of a linear key format. The
total diatonic steps, for example, must be an odd number.
Furthermore, the bottom and top halves of the scale must
contain the identical arrangement of diatonic and
chromatic steps. These principles ensure that there is a
progression of keys, although it may not be a circle of fifths
as there is in traditional key structures. (Finale will
correctly interpret, transcribe, and play back music in
a format that hasn't been constructed according to these
rules. The format, however, won't be technically and
musically correct; you may get unexpected results when
you transpose--or add chord symbols to--music in such a
key system).
"You may wonder what the relationship is between your MIDI
keyboard and the unusual key maps you can construct in this
dialog box. The principle is simple: each key on your keyboard
*always* corresponds to a note in your key map. If you've
established a quarter-tone key system, for example, you'd
have to drastically alter your playing style in order to input
a simple C scale, because Finale now thinks that the first
four notes on your keyboard are C, C-quarter-sharp, C-sharp,
and C-three-quarter-sharp. You'd have to play C, E and G#
"keys" on your keyboard to *notate* the C, D and E on the
screen."
Well, there it is, kiddies. Just what we've needed lo these
many years. With this upgrade, Finale appears to support
most of what's required for notating rationally a wild
microtonal piece performed on the MIDI keyboard.
The dialog box labelled KEY STEP MAP is fairly straightforward.
It includes a control for TOTAL STEPS (the total number of
steps in the scale) and DIATONIC STEPS. The remainder left
over by subtracting diatonic steps from total steps is,
obviously enough, the number of chromatic steps.
This begs the question of what to do in a JI scale when
faced with, say, the 11/9, or the 11/8...are these
diatonic or chromatic steps? The question doesn't
appear to have much meaning to me, but those of you
who've delved into Finale for the purposes of notating
high-limit JI may have a different opinion. Be interested
to hear from those of you who've done this!
By creating a nonlinear key signature you can generate
a notation for a tuning with no circle of fifths, and
no sequence of keys. This is useful both for equal temperaments
with no fifths (6,8,9,11,13,16,18,23 equal tones per octave)
and for non-just non-equal-tempered scales like the free-free
metal bar scale, scales formed from ratios of infinite continued
fractions, etc.
The ticket to getting Paul Rapoport's nonstandard accidentals
to appear next to the correct notes is the ATTRIBUTE dialog
box.
To quote the Finale manual again (Volume 3, page 300):
"For any such key system you create, you can specify a number of
special attributes, such as the symbols you want to use in the key
signature (instead of the b and # symbols).
*Harmonic Reference. This number identifies the note that all other
dialog boxes in Finale's key system will consider to be the C, or
fundamental root tone. Enter zero for C, 1 for D, 2 for E, and so on.
There's little reason ever to change the default setting in this box (zero,
or C). [Note: Unless you're notating Harry Partch's music, with his
1/1 of G!]
*Middle Key Number. This number specifies the MIDI key number
that corresponds to the Harmonic Reference Number.
(...) You can use this parameter to good advantage if you want
to transform your synthesizer into a transposing synthesizer (as
far as Finale is concerned). For example, if you set the Middle
Key Number to 48 (C below middle C), Finale will interpret
every note you play as a note an octave higher (...)
*Symbol Font. This number corresponds to the font from which
the symbols you want to use for accidentals are drawn. To
choose a new font, click Symbol Font; Finale displays
the Font dialog box, from which you can choose the new font.
*Symbol List ID. This number identifies a symbol list you've created--an
array of accidental Amounts (where one sharp has an Amount of 1,
one flat has an Amount of -1, and so on) and corresponding characters you
want to appear in the key signature to represent them. To create a
symbol list, click Symbol List ID; the Symbol List dialog box appears,
in which you can define the character you want to appear in
place of the usual sharp, flat, double-sharp, or other standard symbol.
(See SYMBOL LIST DIALOG BOX).
*Go to Key Unit. Enter a number in this text box to specify the
number of scale steps Finale should consider to be between
each pair of keys on your MIDI keyboard. In other words, if you've
specified a quarter-tone scale, tell Finale that the Key Unit is 2--
there are two scale tones, not one, between one synthesizer key and
the next. (If your synthesizer *can* produce quarter tones, however,
leave the key unit at 1, so that Finale will correctly play back your
quarter-tone score.)
"If you've specified the correct Key Unit value, Finale will transcribe
and play any music performed in the usual way correctly. If you
created a quarter-tone scale without changing the Key Unit, by contrast,
you'd have to drastically modify your playing style, because Finale would
treat your keyboard as shown here." [Note--the diagram in the
manual shows a quarter-tone keyboard replete with microtonal
accidentals]"
Hot rats, boys!
This is a real breakthrough. It's what we've needed for years.
It's what Lippold Haken's LIME *should* have included, but didn't.
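The Key Unit idea is easier to see in a few lines of code than in the
manual's prose. A hypothetical (decidedly non-Finale) Python sketch:

# Keyboard key k, counted in semitones up from the Harmonic Reference,
# maps to scale step (k * key_unit) in the key map.
STEP_NAMES = ['C', 'C+', 'C#', 'C#+', 'D', 'D+', 'D#', 'D#+',
              'E', 'E+', 'F', 'F+', 'F#', 'F#+', 'G', 'G+',
              'G#', 'G#+', 'A', 'A+', 'A#', 'A#+', 'B', 'B+']  # 24 steps/octave

def notated_step(key, key_unit):
    return (key * key_unit) % len(STEP_NAMES)

# Key Unit = 1: every physical key is one quarter-tone step, so the keys
# C, E and G# (0, 4 and 8 semitones up) notate C, D and E--the manual's example.
print([STEP_NAMES[notated_step(k, 1)] for k in (0, 4, 8)])   # ['C', 'D', 'E']

# Key Unit = 2: an ordinary 12-tone keyboard notates "normally" again.
print([STEP_NAMES[notated_step(k, 2)] for k in (0, 2, 4)])   # ['C', 'D', 'E']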
One final point: for those of you whose wee widdle hearts go pit-a-pat
at the thought of a musical staff with more than 5 lines, Finale
also allows this. You can completely redefine the staff, with
as many lines (within practical limits) as you like. Folks like
Leo de Vries, whose twinline 31-tone notation used more than 5
staff lines, will jump for joy at this feature in Finale 3.5.
Yes, Finale is still not quite as simple as quantum
electrodynamics. Those dialog boxes are not easy to find,
or easy to use correctly. But they're there. And for the first
time, they make possible transcription of xenharmonic music
via computer.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Mon, 23 Oct 1995 20:00 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id KAA20043; Mon, 23 Oct 1995 10:59:33 -0700
Date: Mon, 23 Oct 1995 10:59:33 -0700
Message-Id: <199510231752.NAA26693@freenet5.carleton.ca>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/24/1995 7:33:51 AM
From: mclaren
Subject: The accuracy of tunable MIDI synthesizers
---
From time to time a recurrent gripe surfaces throughout the microtonal
community. It centers on the supposedly "crude" tuning accuracy of
the typical synth module. Ezra Sims is the most vocal proponent
of this notion: he claims that the coarse tuning resolution of modern
MIDI synths makes them unusable for just intonation.
Does this claim have any basis in fact?
First, a word about tuning tables and synths. Tuning tables on MIDI
synths have standardized on two accuracies: 1024 TU per octave and
768 TU per octave. By far the most common tuning table standard is
768 TU/oct (64 steps per 12-TET scale step). This is the standard
used for tuning tables in the E-Mu Proteus I, Proteus II, Proteus
3, UltraProteus and Morpheus synths, along with the Yamaha TX11 and
TX81Z and the Waldorf Wave and MicroWave synths. (Dick Lord also
states that a tuning table accuracy of 768/oct is buried inside the
Ensoniq VFX, VFX-SD, TS-10, EPS, EPS-16+ and ASR-10. Most post-1988
Ensoniq synths are microtunable and let the user input values to the
nearest cent, but the tuning tables when entered have a tendency to
jump around after you've set 'em. This would be consistent with a
tuning table quantum of 1.5625 cents, or 768 tuning units per octave.)
In any case it's clear that 768 tuning units per octave has become
a de facto standard. So it's worth discussing in some detail the
question of the purported "crudity" of this TU (tuning unit).
The first important point is that there is a big difference between
the *SIZE* of the TU in the synth's tuning table, and the
*ACCURACY* of a specific tuning entered into that tuning table.
Everyone knows, for example, that the accuracy of a synth with a tuning
table having 768 parts per octave is 1.5625 cents: that is, 1200/768
cents.
Everyone knows this, and it's wrong.
In fact, the accuracy of the synth depends on the particular tuning
chosen. For example: if you tune your synth to 12 equal tones per
octave, the worst error is zero cents--and the average error is zero
cents. This is because 768 is evenly divisible by 12 (768/12 = 64),
so each step of 12/oct falls exactly on a TU in the synth's tuning
table. Thus for 12/oct (every 64 TUs), 24/oct (every 32 TUs), 48/oct
(every 16 TUs), 96/oct (every 8 TUs), 192/oct (every 4 TUs), 384/oct
(every 2 TUs) and 768/oct, the average error = the worst error of
the worst-tuned note...namely, zero cents. For these scales, you
get perfect tuning accuracy within the limits of the jitter of the
synth's clock crystal and the sample-and-hold in the D/A converter.
---
What about *other* tunings than power-of-two multiples of 12?
To a first approximation, the *average accuracy* of any given
tuning should be somewhere around (1.5625/2) cents = 0.78125 cents.
This assumes that in the worst case a scale-step lies no more than 1/2
a tuning unit away...a logical enough assumption, as it turns out.
Remember: if a given note is *more* than 50% of a TU above the
one chosen, accuracy could be gained by jumping up to the next
higher tuning unit.
For example: if a desired scale-step were 1.6 tuning units away from,
say, 324 TU, we could gain accuracy by setting the synth to 326 TU
so that the error would be -40% of a TU instead of +160% of a TU. The
same argument applies if the desired tuning step is more than 50%
below the set tuning unit. This cuts our 1st-approximation guesstimate
of the average error down to 1/2 of 1.5625 cents, for an average error
of 0.78125 cents. But even this average error is actually too large.
A moment's thought tells us that the accuracy is likely to be
significantly better than that on average, because it's *highly* unlikely
that every single step in our scale will be maxed out at the worst
possible 50% error of a TU error. Thus, at first glance, the average
accuracy is likely to be much less than 50% of a TU-- that is,
<< 0.78125 cents.
How likely?
Well, at this point I'll diverge into a brief paragraph on the integrals
of Gaussian vs. non-normal distributions over a given region. The
integral of such a statistical distribution within an interval gives
the probability of an event falling within those numerical limits;
for a normal or Gaussian distribution this is generally expressed
in terms of standard deviations from the mean. The measurement is
non-linear, so that (if memory serves) 2 standard deviations out from
the mean excludes nearly 98% of the probable outcomes.
Alas, this logic presumes a perfectly Gaussian distribution of
scale-steps.
Clearly no musical tuning follows this distribution; a graph of the step
size of equal-tempered scales, for instance, shows a rising line
y = mx + b, not a bell curve. Similarly, a graph of the free-free metal
bar scale successive step size is proportional to the square of the
hyperbolic cotangent (see Rossing, 1992, for details), while a graph
of the distribution of step-sizes of Harry Partch's 43 tones is a
stepped histogram with a bunch of near-equal clumps (since the 21/20's
84.5 cents minus the 33/32's 53.2 cents falls within 0.5 cents of the
33/32's 53.2 cents minus the 81/80's 21.5 cents...and so on). That is,
a number of the successive step-sizes in Partch's 43-tone scale are
nearly identical, so their distribution again does not look anything
like a bell curve. (Again, it's a shame there are no graphics available
on this tuning forum. Since the internet is hurtling us into the future
at the speed of light, naturally we're stuck in 1970 with dark ages
ASCII-only on-line technology. Naturally!)
Using back-of-the-envelope guesstimates which would make even the
most reckless mathematician cringe, a flat-line ET scale-step
distribution by the halfway point would have considerably less area than
a Gaussian bell curve, while beyond the halfway point it'd have lots
more area...so call it (1.8 + 0.25), or maybe 2.05 or so times the total
Gaussian area, while a hyperbolic cotangent's area over the domain would
at a wild guess come out to perhaps 70% of the area of a bell curve...
So the expected average error in an ET scale should be about twice that
of the bell curve average error, while for the free-free metal bar
scale it should be perhaps 70% of the (1.5625/2) cents/step average.
(End of digression. Bet you never thought a discussion of tuning
accuracy on synths would involve probability integrals!)
What does all this mean?
It means that the actual tuning accuracy for a 768 TU/oct synth is
likely to be completely different from what we'd expect from our
simplistic argument above (which led to the conclusion that the accuracy
should be << 0.78125 cents per scale step on average).
On reflection, this is obvious. Most real-world tunings will exhibit
a mix of large and small errors, with the largest error being a bit
less than 0.78125 cents and the smallest errors (excluding the always-zero
root note or 1/1) probably hovering around 0.01 cents or so. So to a second
approximation the average error should range twixt 0.2 cents and 0.4
cents, depending on the exact shape of the statistical distribution
of step-sizes in the tuning, and the exact location of the mean
step-size.
This tells us that to get real concrete answers, we'll need to do
some actual number-crunching and try a mini-Monte Carlo analysis.
So let's see what the actual worst scale-step, best scale-step and
average (total cents error divided by total number of scale-steps)
is for a number of different tunings:

[1] For Partch's Monophonic fabric of 43 just tones the error for
each step of the scale is:
SCALE STEP THEORETICAL TUNED ON SYNTH CENTS ERROR
NUMBER IN CENTS IN CENTS
0 (1/1) [0.0] [0.0] [0.0]
1 (81/80) 21.5062896 21.87 0.368710403
2 (33/32) 53.272943 53.125 -0.147943229
3 (21/20) 84.4671934 84.375 -0.092193469
4 (16/15) 111.7312853 112.5 0.768714742
5 (12/11) 150.6370585 150.0 -0.6370585
6 (11/10) 165.0042285 165.625 0.620771502
7 (10/9) 182.4037121 182.8125 0.40878787
8 (9/8) 203.9100017 204.6875 0.777498271
9 (8/7) 231.1740935 231.25 0.075906482
10 (7/6) 266.8709056 267.1875 0.316594408
11 (32/27) 294.1349974 293.75 -0.384997396
12 (6/5) 315.641287 315.625 -0.016287
13 (11/9) 347.4079406 346.875 -0.532940629
14 (5/4) 386.3137139 385.9375 -0.376213864
15 (14/11) 417.5079641 417.1875 -0.320464092
16 (9/7) 435.0840953 434.375 -0.709095252
17 (21/16) 470.7809073 470.3125 -0.468407332
18 (4/3) 498.0449991 498.4375 0.392500873
19 (27/20) 519.5512887 520.3125 0.76121127
20 (11/8) 551.3179424 551.5625 0.244557637
21 (7/5) 582.5121926 582.8125 0.3003074
22 (10/7) 617.4878074 617.1875 -0.300307392
23 (16/11) 648.6820576 648.4375 -0.244557637
24 (40/27) 680.4487113 679.6875 -0.761211264
25 (3/2) 701.95500 701.562 -0.39250086
26 (32/21) 729.219092 729.6875 0.468407348
27 (14/9) 764.9159047 765.625 0.709095272
28 (11/7) 782.4920359 782.8125 0.320464118
29 (8/5) 813.6862861 814.0625 0.376213871
30 (18/11) 852.5920594 853.125 0.53294064
31 (5/3) 884.358713 884.375 0.016287012
32 (27/16) 905.8650026 906.25 0.384997406
33 (12/7) 933.1290944 932.8125 -0.316594386
34 (7/4) 968.8259065 968.75 -0.075906467
35 (16/9) 996.0899983 995.3125 -0.777498257
36 (9/5) 1017.596288 1017.1875 -0.40878786
37 (20/11) 1034.995771 1034.375 -0.62077149
38 (11/6) 1049.362941 1050.0 0.63705851
39 (15/8) 1088.268715 1087.5 -0.768715
40 (40/21) 1115.532807 1115.625 0.09219348
41 (64/33) 1146.727057 1146.875 0.14794324
42 (160/81) 1178.49371 1178.125 -0.3687104

The average error per scale step is 0.40561 cents. The worst error
is 0.777 cents--roughly 3/4 of a cent--and the best-tuned scale step
has an error of 0.016 or about 1/60 of a cent. These numbers are
well within the range of our second approximation back-of-the-envelope
calculation.
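For the suspicious, the number-crunching behind these tables is easy to
reproduce. A minimal Python sketch, rounding each ratio to the nearest of
768 tuning units per octave:

import math

TU_CENTS = 1200.0 / 768               # 1.5625 cents per tuning unit

def tu_error(ratio):
    # (ideal cents, nearest-TU cents, error in cents) for one just ratio
    ideal = 1200.0 * math.log2(ratio)
    tuned = round(ideal / TU_CENTS) * TU_CENTS
    return ideal, tuned, tuned - ideal

errors = []
for n, d in [(81, 80), (33, 32), (16, 15), (3, 2), (16, 9)]:   # spot checks
    ideal, tuned, err = tu_error(n / d)
    errors.append(abs(err))
    print('%3d/%-3d %12.6f %12.6f %+10.6f' % (n, d, ideal, tuned, err))
print('average |error| = %.5f cents' % (sum(errors) / len(errors)))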

[2] For Wilson's first stellated tetrachordal hexany (Chalmers, "Divisions
of the Tetrachord," 1993, pg. 124):

SCALE STEP THEORETICAL TUNED ON SYNTH CENTS ERROR
NUMBER IN CENTS IN CENTS
0 (1/1) [0.0] [0.0] [0.0]
1 (28/27) 62.960903 62.5 -0.460903
2 (16/15) 111.731285 112.5 0.7687147
3 (784/729) 125.921807 126.5625 0.6406922
4 (448/405) 174.692189 175.0 0.3078108
5 (256/225) 223.462570 223.4375 -0.0250705
6 (35/27) 449.274617 450.0 0.72538227
7 (4/3) 498.044999 498.4375 0.39250087
8 (48/35) 546.815380 546.875 0.05961948
9 (112/81) 561.005903 560.9375 -0.068403
10 (64/45) 609.776284 609.375 -0.40128439
11 (1792/1215) 672.737188 673.4375 0.70031172
12 (224/135) 876.64719 876.5625 -0.08469
13 (16/9) 996.089998 995.3125 -0.77749825

A median has little meaning for these sets of numbers, since
no error appears more than twice and there are many errors
in that category. A median decile might have more significance,
but its musical meaning is debatable--at least as debatable
as that of the mean or average error.
The average error is 0.41635 cents/step, similar to that of the Partch
scale. As before, absolute cents are summed and divided by the total
number of scale degrees, and the 1/1 is ignored so as not to artificially
lower the average error.
[3] Archytas' enharmonic (Mixolydian, B-b) from Chalmers, "Divisions
of the Tetrachord," 1993, pg. 104:

SCALE STEP THEORETICAL TUNED ON SYNTH CENTS ERROR
NUMBER IN CENTS IN CENTS
0 (1/1) [0.0] [0.0] [0.0]
1 (28/27) 62.960903 62.5 -0.460903
2 (16/15) 111.731285 112.5 0.7687147
3 (4/3) 498.044999 498.4375 0.39250087
4 (112/81) 561.005903 560.9375 -0.068403
5 (64/45) 609.776284 609.375 -0.40128439
6 (16/9) 996.089998 995.3125 -0.77749825

The average error per scale step is in the same ballpark: 0.47818 cents
per step.

[4] Ptolemy's intense chromatic (from Chalmers, "Divisions of the
Tetrachord," 1993, pg. 102:

THEORETICAL THEORETICAL TUNED ON SYNTH CENTS ERROR
NUMBER IN CENTS IN CENTS
0 (1/1) [0.0] [0.0] [0.0]
1 (28/27) 62.960903 62.5 -0.460903
2 (10/9) 182.403712 182.8125 0.4087878
3 (4/3) 498.044999 498.4375 0.39250087
4 (3/2) 701.955001 701.5625 -0.39250086
5 (14/9) 764.915904 765.625 0.70909527
6 (16/9) 996.089998 995.3125 -0.77749825

In this case the error's somewhat higher than previously: average
error of 0.523515 cents per step. Withal, still quite small.
For all the just intonation scales considered above, the average of
the average errors/scale step is 0.458 cents. This implies that if
you tune an arbitrary JI scale, your average error/step will be around
0.4 cents, with the average error increasing slightly as the number
of steps in your JI scale decreases.
As is readily apparent, 0.4 cents is a far cry from the 1.5625 cents
generally quoted. Clearly for JI scales the tuning *accuracy*
(for real-world scales) of a 768 TU/octave synth is *far* better
than has been bruited about by microtonalists who don't know their
math.
So much for the tuning accuracy of a JI scale on a 768 TU/octave
synth. But what about equal tempered scales?
Rather than stupefy you with a recitativo of the error per scale step
for various equal temperaments, here's the output from my simple
BASIC program for various ETs:

TONES/OCT AVERAGE ERROR/STEP WORST ERROR BEST ERROR
IN CENTS IN CENTS IN CENTS
[5] 5/oct 0.375 0.625 0.0
[6] 13/oct 0.3883115 0.600952 0.0
[7] 19/oct 0.3895362 0.65789 0.0
[8] 31/oct 0.3902204 0.705688 0.0
[9] 41/oct 0.3040017 -0.7241209 0.0
[10] 53/oct 0.39048162 -0.7370605 0.0
[11] 72/oct 0.34722140 0.5208282 0.0

Total average error: 0.369 cents, slightly lower than in the JI scales
considered above, probably because the ET scales have a fixed step
size and thus a more even distribution of error sizes.
As you can see, the average error per scale step is just about as
low for ETs as for the just intonation scales above. And in all
cases less than 0.4 cents. Again, a damn small tuning error.
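The same rounding logic produces the equal-temperament figures. Here's a
Python sketch of that computation (not my BASIC program itself, and the
exact digits--particularly in the worst-error column--can shift a notch
depending on rounding conventions):

TU_CENTS = 1200.0 / 768                 # 1.5625 cents per tuning unit

def et_error(n):
    # average and worst absolute error (cents) for n/oct rounded to 768 TU/oct
    errs = []
    for k in range(1, n + 1):
        ideal = 1200.0 * k / n
        tuned = round(ideal / TU_CENTS) * TU_CENTS
        errs.append(abs(tuned - ideal))
    return sum(errs) / n, max(errs)

for n in (5, 13, 19, 31, 41, 53, 72):
    avg, worst = et_error(n)
    print('%2d/oct  average %.7f  worst %.7f cents' % (n, avg, worst))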
To give you an idea of how small: at 100 Hz the ratio twixt two notes
using harmonic-series overtones (one note tuned with perfect accuracy
in the target scale, one note tuned in the best approximation allowed
by the synth) would amount to a whopping 2^[0.3040017/1200] = 2^
[2.833475 exp -4] = 1.0001964 for the 41/oct case. At a fundamental
of 100 Hz this would produce an out-of-tune note of 100.01964 Hz, with
a fundamental beat rate of 1 beat every 50.911 seconds. (Of course,
the 2nd harmonic would beat at twice that rate, with additional sum
and difference tones...and so on.)
Now, come on, people... Can you *really* hear the difference
between 1 beat every 57 seconds and the normal internal beating and
vibrato from a real-world acoustic instrument?
Is that reasonable?
Is an average beat rate of 1 per 57 seconds really something to get
all het up about? Is anyone in your audience going to jump up and
shout "I can hear it! It's out of tune! There was a full two
thousandth of a beat in that sixteenth note at metronome marking 100!"
Please.
How many notes in the average JI or xenharmonic equal-tempered
composition would last long enough to hear even *one fourth* of a full
beat-complex?
Even assuming (that is) that your acoustic-instrument performers
were exactly, perfectly, precisely 100% in absolute theoretical tune
with the ideal frequency of the note required, to a full 6 or 7 digits
of precision!
Now, let's think about this, people. This cursory little statistical
analysis tells us unequivocally that even with a purportedly "coarse"
tuning grid like 768 TU per octave, any audible error when playing
a synth with a live ensemble is *far* more likely to be caused by
the *live ensemble* than by the synth.
Bearing in mind that Partch's tuning accuracy (which he required in
tuning his acoustic instruments) was "better than 2 cents," how likely
do you think it is that any of the performers *or* the audience
will hear errors in the synth tuning averaging around 1/3 of a cent?
That's a tuning precision 6 times better than the accuracy Partch
demanded! The only possible conclusion to be gained from this little
investigation into the statistics of tuning accuracy is that concerns
about the "coarseness" and "inaccuracy" of a 768 TU synth are wildly
overblown. Now, some of you might argue about the significance of an
"average" tuning error (isn't the audience likely to hear and remember
the *worst-tuned* notes, rather than some numerical "average"?).
Rather than dispute the point, let's grant it.
Even so...the worst tuning error in most cases is still likely to be
no more than twice the averages cited above. That is, about 0.68
cents--*WORST CASE*.
This means that the absolute worst-tuned note
played with a 768 tuning-unit synth would be mistuned by a ratio of
2^[0.68/1200] = 2^[5.66 exp -4]
= 1.0003929:1. For a properly-tuned note of 100 Hz this would
produce an out-of-tune note of 100.03929 Hz, and a fundamental beat
rate of 1 beat every 25.45 seconds.
Can anyone seriously contend that this kind of so-called "inaccuracy"
is anything that even the keenest-eared member of the audience would
ever hear in anything but an hour-long La Monte Young drone composition?
From now on, let's have no more of these preposterous claims that "768
TU per octave is a very coarse and inadequate grid for [fill in the blank...just
intonation, equal temperaments, meantone, whatever]." The facts do
not support such a contention.
In fact, if Johnny Reinhard can produce a computer-analyzed solo from
any of his live acoustic performances in which the average note
was no farther off than 1/3 of a cent, I'll eat his entire collection
of microtonal manuscripts. With Worcestershire sauce. In one sitting!
In real-world live acoustic solos, performed notes are generally played
*much* farther off than 0.3 to 0.4 cent...especially at rapid tempi
above 200 bpm. Sundberg's computer analyses show accuracies for
professional musicians in the neighborhood of 10 cents, while Shackford's
computer analyses show average accuracies for symphony performers in the
neighborhood of 15 cents. This is *worlds* away from the average 1/3
cent error we've found here, and indeed the coefficients of thermal
expansion for wood and metal are such that an acoustic instrument would
almost certainly detune by *more* than 0.3 to 0.4 cents simply by
being moved from a cold car into a warm concert hall.
So let us have no more claims that "768 TU per octave are inadequate
for microtonal music."
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Tue, 24 Oct 1995 19:51 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id KAA05819; Tue, 24 Oct 1995 10:50:32 -0700
Date: Tue, 24 Oct 1995 10:50:32 -0700
Message-Id: <9510241742.AA03583@danhicks.math.nps.navy.mil>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/25/1995 10:38:45 AM
From: mclaren
Subject: Live xenharmonic recording
---
As more and more of you get together and
produce live microtonal music (whether with MIDI
instruments, acoustic instruments, or
a combination of the two), sooner or
later you'll want to record your xenharmonic
performances on DAT.
Here are the fruits of my experience in
recording live to DAT:
[1] The first thing to remember is that
whenever a "pro" is involved, the recording
will be crap.
In my experience "professional sound
engineer" is a code phrase that actually
means "incompetent toad." Most of these
people are ex-stereo salesmen who wouldn't
know a balanced line from their own bum.
Among the examples of expertise I've seen
demonstrated by "pro recording engineers:"
using ordinary drugstore rubbing alcohol to
clean the heads of a 16-track tape deck;
using a Calrec soundfield microphone to
zero in on someone snapping a rubber band
while 5 other xenharmonic instruments were
playing live; accidentally forgetting to
record one of the tracks of a critical stereo
master during a live take; and clipping most
of a digital recording.
This last is worth a mention or two. One of
the hallmarks of a "professional" recording
is that the digital audio clips a lot. A WHOLE lot.
Fortunately, all these problems can be
eliminated very simply: if there's a "pro"
involved in the recording process, get rid
of him. (It's always a him. Women don't
seem to be able to plumb such Stygian
depths of ineptitude.)
Do the recording yourself. The results will
be *infinitely* superior.
[2] The first problem you'll encounter is
the ground loop. This is a pesky 60 Hz hum that
can drive you out of your mind. It's caused
by two grounds at slightly different
potentials; 60 Hz wall current flows between
the 2 grounds, producing an audible buzz.
Ground loops can arise at many points in the
audio chain. The 1st and most obvious place is
the wall socket. In this case, 2 different
pieces of audio equipment are hooked together
electrically (say, a synth plugged into a mixer)
but plugged into 2 different wall sockets.
A ground loop can occur if the ground
on one wall socket is faulty; in that case
current will flow from one wall socket to the
other through your equipment (an electrical
signal always seeks the lowest level of
electrical potential).
Another particularly interesting cause of
ground loop is a connection between your
computer's MIDI port and your synth. This
is NOT supposed to cause a ground loop
because the MIDI's supposed to be optoisolated;
my best guess is this sometimes occurs because
the metal shell of the MIDI cable is connected
both to the digital ground of the computer and
the analog and digital grounds of the synth,
causing current to flow twixt the analog
ground of the synth and the computer's
backplane. In this case you'll also get a
distinctive whining noise--clock noise
from the computer.
The way to solve all ground problems is with
an isolation transformer. The Ebtech Hum
Eliminator handles both balanced and
unbalanced lines and does a good job. It's
also dirt cheap: < $60 for 2 stereo
channels. I keep 3 of these on hand at all
times whenever I do live recordings, since
(as usual) if you ask any "pro" for an
isolation transformer, he'll fumble around
like a lobotomized trilobite & come up with a
single mono balanced line version. (As
always, the "pro" is too inept to realize that
in the real world most synths use stereo
unbalanced line outs.)
If all else fails, you *can* fix ground loop
hum in the mix. Digitize the audio to your
hard disk, feed .25 seconds of the hum into
the Dolson DNOISE program, then filter
it out of the recording on your hard disk
by running DNOISE with the Fourier
filter generated by the hum. Warning: this
usually adds considerable reverb, since the
notch filters generated by DNOISE are usually
set at harmonics of 60 Hz, and as we all know
running a bunch of different FFT bins at once
through a soundfile always & unavoidably adds
reverb due to the nature of the short-time FFT.
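If you don't have DNOISE handy you can roll your own comb of notch
filters. A Python/SciPy sketch (not the DNOISE program; file names
made up) that knocks out 60 Hz and its first few harmonics:

import numpy as np
from scipy.io import wavfile
from scipy.signal import iirnotch, filtfilt

rate, audio = wavfile.read('hummy_take.wav')     # made-up file name
audio = audio.astype(np.float64)

for harmonic in range(1, 9):                     # 60, 120, ... 480 Hz
    f0 = 60.0 * harmonic
    if f0 >= rate / 2:
        break
    b, a = iirnotch(f0, Q=30.0, fs=rate)         # narrow notch at each harmonic
    audio = filtfilt(b, a, audio, axis=0)        # zero-phase; handles stereo too

wavfile.write('dehummed_take.wav', rate,
              np.clip(audio, -32768, 32767).astype(np.int16))

Narrow notches like these add far less smearing than wideband FFT-bin
filtering, though they obviously only help with hum, not hiss.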
[3] When live instruments are combined
with synths, it's important that everyone
be able to hear hi/rself. Often the
live instrumentalist will use outboard
effects and a small speaker to enhance the
sound; thus as a recording engineer you'll
often be faced with the problem of recording
a live instrument with a speaker nearby.
If you feed the output of your recording to
reference speakers so that everyone can
hear what the mix sounds like, you'll
inadvertently recreate Robert Ashley's classic
electronic music composition "The Werewolf."
The only reliable way to deal with feedback
in my experience is with a feedback
eliminator. Turning down the monitor
speakers doesn't work because no one
can hear the total mix; moving the small
speaker back from the mike doesn't work
because the live performer can't hear
what hi/r output sounds like.
The Sabine feedback eliminator is in my
experience the best way to deal with the
problem. Because it's a digital widget,
it's especially effective in killing screech
& howl before it starts. There are other
feedback eliminators on the market, but
this one is cheaper and does a better job
than anything I've come across. The cost
is about $250.
Naturally a "pro's" solution to the problem of
feedback is: "Uh, uh, hm, uh, play lower."
Typically useless; typically incompetent.
[4] When recording to DAT or ADAT, you'll
discover that Robert Fripp's rules of
recording apply:
*The SPL level of the group during live
performance will always be at least
twice the level during the sound check.
*The percussionist will always hit the
microphones at some point during the
performance.
*If the monitor speakers are placed close
enough to the performers that they can
hear themselves, there will be feedback;
if the monitor speakers are placed far
enough away from the performers to
avoid feedback, they won't be able to
hear themselves.
*The tape will always run out during the
best part of the performance.
To avoid these inevitable problems,
try the following:
*Set the DAT record level during a
sound check, then back it off by half, then
drop it by another 3 dB. Most DATs
are actually at -10 dB when they read
0 dB, so this will save your bacon during
really loud sections. If the sound levels
are generally within recording limits
but momentarily rise by extraordinary
amounts (this can occur with metallophone-
type percussion instruments), add a compressor
between the mixer and the DAT.
*Place a coincident XY pair of mikes
directly above the percussionist. Very
few percussionists will try to play
empty air.
*Instead of using monitor speakers, use
headphones for everyone in the group
whenever possible.
[5] Placement of microphones is
critical for live recording. It differs
with each room, acoustic instrument,
and type of mike.
You might be surprised to learn that
with acoustic instruments, different
frequencies radiate in different directions.
Thus placing a mike front center and 30
degrees below a cello will pick up one
set of low frequencies, while moving
the mike to the left or right will pick
up another set of mid-frequencies. This
is one of the reasons why you'll often
need a whole passel of microphones in
different locations around one
acoustic xenharmonic instrument.
Cardioid condenser mikes should be
placed as close as possible unless
the source is metal bars or tubulongs;
in that case they should be placed at
least 8 feet to 10 feet away.
When miking a guitar, you'll find that
you get more realistic sound if you
aim the mikes slightly away from the
guitar's center hole. The entire body
of the guitar radiates sound, and for
best results place one mike up by the
fingerboard and another mike down
below the center hole, aimed at the
instrument's body.
The same appears to be true of the cello,
the viola and the violin. In those instruments
the bridge is one of the "hottest" sources
of sound, and a mike moderately distant
from the bridge, aimed at the body of
the instrument, often needs to be balanced
by another close-in mike aimed at another part
of the instrument to get a recording that
sounds like the live instrument.
Again, as noted, to get the "full" sound
of the instrument you may have to combine
the output from a bunch of different mikes
in different positions. (This is one reason
why so many sampling CDs sound different
from one another; only 2 mikes were used,
and on each CD the mikes were placed in
different positions! Yes, Virginia, mike
placement is CRITICAL!)
Wind instruments can exhibit sharp
transients, so *always* use a puff filter.
This can be something as simple as a clean
sock placed over the mike or as elaborate
as a $50 screen from the local music
store. The acoustic result tends to be
the same.
If you're using PZM mikes, you can
increase dynamic range by hot-wiring
two 9 volt batteries in parallel. This
will increase the apparent noise of the
mike, so you'll have to use an additional
hiss eliminator between the PZM mike
and the mixer. (More on hiss elimination
in a moment.)
PZM mikes operate unlike other mikes--
they produce the best results if they're
firmly attached to large rigid slabs of wood,
metal or glass. If possible, bolt the PZM
mike to the floor or the wall! (Warning: if
your recording site is near a street, you'll
pick up low frequencies from traffic through
the wall. A notable problem on recordings
made with PZMs in London, since in London
every recording studio is near a street with traffic.)
The PZM mike also has a peculiar pickup pattern:
totally hemispherical. Whereas cardioid mikes
will do an excellent job of rejecting
off-axis sounds, the PZM mike picks up
everything indiscriminately within 180
degrees of its soundfield. This means
that the PZM mike is suited only
for an extremely quiet recording
environment. By contrast, many of
my recordings with condenser mike
have been made in houses over which
jet planes have been flying--yet the
jet plane rumble isn't audible in the
final recording because of the cardioid
condenser's superb rejection of off-
axis sound.
The PZM mike is apt to overload during
high transients. It should be placed
farther away than a condenser mike.
While a condenser mike will suffer
drastic bass rolloff if it's farther than
about 1 foot from the sound source,
a PZM mike (firmly attached to a
rigid plane of wood or metal) will
exhibit excellent flat frequency
response at almost any distance.
The Calrec soundfield mike is a special
case. It's noisy, but 4 of 'em output
to an ADAT will allow you to "zoom"
in after the recording on any of the
corners of an imaginary tetrahedron.
This can actually let you fix some
recording problems in the mix.
[6] Different engineers prefer
different philosophies of mike
placement.
The 3 most common are the
coincident XY pair, the binaural
pair, and the widely separated pair.
Coincident XY produces excellent
results with a group in which everyone
has about the same SPL; the binaural
pair produces remarkable stereo
verisimilitude but only if the listener
uses headphones. The widely separated
pair (or quad, or octet) is useful for
instruments physically distant and with
very different SPLs, but tends to
produce a final recording without
a convincing soundstage.
In general, the smaller the number of
mikes, the more realistic the soundstage
of the recording. Some of my CDs are reissues
of Mercury Living Presence recordings
made with coincident XY pairs in
1957-1959 and they sound more true
to life than almost any recordings made today.
With instruments which exhibit
significant sustain (like a psaltery)
you may want to place a mike
inside the body of the instrument. If so,
leave the sides open. Otherwise the
transients will blow the top out of
your mix and force you to record at
such a low level as to produce junk. Mixing
the output with the sound picked up
from 2 nearby mikes can help to
capture the live reverberant sound better
than a pure coincident XY or binaural pair.
It's important to realize that
microphones are acoustic
cyclopses: they see *only and perfectly*
that tiny bit of the soundfield at
which they're aimed. To get a
recording that sounds like what your
ears hear at a live performance, you'll
often have to jump through hoops and
use bizarre and irrational mike placements
and mixes.
One of the greatest fallacies in dealing
with microphones is the old canard: "If
your ears can hear it, the mike will
pick it up." This is NEVER true.
Sounds which are clearly audible to your
ears during a live performance will
invariably be ignored by the mikes;
sounds which your ears cannot hear
during a live performance will be
magnified unbearably by the mikes.
Small changes in mike placement can
produce extreme changes in the
recording.
Above all, trust your ears by listening
to the test recordings on your DAT:
if the test recording doesn't sound
like the live performance, change
things--no matter how conceptually
perfect your mike placement or
mix levels.
[7] All live recordings have hiss.
Digital microphones, digital mixers,
digital synths, digital recorders--
doesn't matter. There will STILL be
hiss.
In my experience the biggest source
of frying bacon is the effects unit.
Even if it's a 24-bit reverb, it will
hiss like crazy. The only solution to
this is to put a hiss eliminator on
the effects unit BEFORE the stereo
returns come back into the mixer.
There are a lot of models of hiss
eliminator on the market. They all
work on the same principle: a
level-sensing circuit activates
a voltage-controlled filter which
closes down depending on the
sensitivity setting.
The Hush IIcx is fairly cheap and
works well; a used KLH Burwen
Dynamic Noise Reduction unit
will also work well; dbx makes
(or used to make) a single-ended
noise reduction unit that worked
well; and other companies make
similar widgets. They're all in
the range of $150-$200.
You'll need at least 2 of these
for any recording session. 1 to
kill the hiss from the effects
unit(s) coming into the mixer,
another to kill the hiss after
it leaves the mixer and goes
into your DAT or ADAT.
[8] Fripp's rule of thumb that
the tape always runs out during
the best part of the performance
can be easily end-run. I always
record with 2 digital recorders--
one recording live from mikes,
another recording the straight
mix output from the mixer. As
Warren Burt can testify, this
approach works--what one
recording misses, the other
gets. Starting one machine
later than the other assures that
you'll never be burned by
lack of tape.
[7] "Pro" engineers will place great
stock in balanced vs. unbalanced
lines. In my experience balanced
lines are useful mainly in running
audio cables from the mixer to
the DAT. As long as your other cable
runs are short & you're not recording
next to a microwave relay tower or
a commercial radio tower, there's
no practical advantage in using balanced
lines. (Unless the stage is a long ways
away from the mixing console! A cable run
of more than 15 or 20 feet demands a balanced
line, no question.)
"Pro" engineers will claim that balanced
lines eliminate ground loops. Naturally,
this isn't true. Balanced lines are just
as prone to ground loops as unbalanced
lines when the digital grounds of digital
synths are involved. The sad fact is that
you'll still need isolation transformers
even if you use balanced lines on all
of your equipment. One further point:
if at any point in the audio chain you
introduce an unbalanced line, the whole
audio chain might as well be unbalanced.
A single unbalanced line can (& usually
will) introduce a ground loop. Bear in
mind that the unbalanced part of the
audio loop can be inside a wall socket
with 3 holes but no true ground!
So much for the myth that "balanced lines
eliminate ground loops."
[10] You should bring along your own
speakers, your own preamp, your own
surge suppressor, your own power strips
and your own line conditioner/backup UPS
to any public recording session. Last year
I had the delightful opportunity to handle
audio on a live performance in which
the "pro" sound engineer in charge thought
it would be a real smart idea to let
20 people plug coffee machines and hot
plates and toaster ovens into the same
outlet the performers were using to
power their digital synths.
The result was interesting
(as in the Chinese curse).
[11] Always bring 4 of everything--4
3-prong-to-2-prong plugs (because
the place where you perform won't
have 3-prong plugs), 4 1/4-inch phone
jack-to-RCA cables, 4 XLR-to-phone-jack
transformers, 4 MIDI mergers, 4 mini-plug-
to-headphone adapters, 4 durable
equipment bags, 4 everything. 2 are
never enough and someone always needs
one extra.
[12] All these pieces of advice deal
with digital recordings rather than
analog. Alas, there's just no comparison
twixt DAT and analog reel recordings.
The reel recordings don't measure up.
Despite the frenzied claims of the
LP and reel-to-reel fanatics, DAT
offers infinitely superior sound
quality to any possible reel recording.
--mclaren

Received: from sun4nl.NL.net [193.78.240.1] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Thu, 26 Oct 1995 09:25 +0100
Received: from eartha.mills.edu by sun4nl.NL.net with SMTP
id AA06609 (5.65b/CWI-3.3); Thu, 26 Oct 1995 06:30:36 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id WAA09429; Wed, 25 Oct 1995 22:29:20 -0700
Date: Wed, 25 Oct 1995 22:29:20 -0700
Message-Id: <199510260528.WAA03929@hopf.dnai.com>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗Allen Strange <STRANGE@...>

10/25/1995 11:32:27 AM
Folks:

In Brian McLaren's last post (when does he find time to write all this stuff?)
reference is made to Robert Ashley's "The Werewolf" - the analogy is great,
but I believe the title is "Wolfman" - it was published in one of the wonderful
Source Magazines of years-gone-by.

=========================================================================
| Allen Strange |
| http://cadre.sjsu.edu/music/strange.html |
|_______________________________________________________________________|
| Electro-Acoustic Music | International Computer Music Association |
| Studios | 2040 Polk St., Suite 330 |
| School of Music | San Francisco, CA 94109 |
| San Jose State University | Telephone + (408) 395-2538 |
| 1 Washington Square | Fax + (408) 395-2648 |
| San Jose, CA 95192-0095 | Email: icma@sjsuvm1.sjsu.edu |
| Telephone +(408) 924-4646 | |
| Fax +(408) 924-4773 | We hope to see you at the ICMC96 |
| | On the Edge in Hong Kong |
| | ICMC96@cs.ust.hk for info |
|=======================================================================

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Thu, 26 Oct 1995 11:12 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id CAA10774; Thu, 26 Oct 1995 02:12:04 -0700
Date: Thu, 26 Oct 1995 02:12:04 -0700
Message-Id: <199510260253.XAA09265@chasque.apc.org>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/26/1995 7:01:00 AM
From: mclaren
Subject: New xenharmonic CD
---
"Ping-Pong Anthropology," by The 13th Tribe, is yet
another microtonal CD by yet another world music
group well worth hearing.
In this case the members of the avant ensemble are:
Erik Balke, a Norwegian folk musician and student of
African and Balinese music; Werner Durand, a German;
and Silvia Ocougne, a Brazilian.
Balke and Durand perform on harmonic-series PVC pipes
while Ocougne uses prepared acoustic guitars whose
strings are hammered, plucked and pulled. Skin drums
and plexiglas tubes are also used.
The music uses a "call and answer" technique in
combination with digital delays, and the result is
something like ethnic music for a tribe of children
living in the rafters of a geodesic dome. The first
track, "Dream Hunters," along with track 3, "Hazar," and
track 7, "Ping-Pong Anthropology," are particularly
memorable. All three performers (along with French
guest performer Pierre Berthet on tracks 8 & 9) use
higher members of the harmonic series to weave
extended melodies. The effect is reminiscent of some
of the earlier compositions of Denny Genovese, although
with greater rhythmic density and a concomitant use
of digital electronics to generate a sleet of ricocheting
"ghost" notes.
This CD is available for a limited time from Experimental
Intermedia. Not nearly as expensive as you'd expect
for an import--about $17. Highly recommended.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Thu, 26 Oct 1995 16:03 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id HAA15182; Thu, 26 Oct 1995 07:03:28 -0700
Date: Thu, 26 Oct 1995 07:03:28 -0700
Message-Id: <9510260700.aa23177@cyber.cyber.net>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/26/1995 7:03:28 AM
From: mclaren
Subject: trivial errata in the series of psychoacoustics posts
---
As some of you might have guessed, my series of 25 psychoacoustics
posts constitutes a distilled and simplified early version of a significantly
longer and more detailed monograph on tuning and psychoacoustics.
This paper should appear in Xenharmonikon 18, perhaps in 1997. Until
then, it behooves me to point out that some errors crept into
the posts--some apparently due to end-of-line word deletions during
uploads, etc.
Other errors are entirely my own fault.
In Topic 8 of Digest 508 a dropped word at the end of a line
produced a significant distortion of the intended meaning.
The full and accurate text is:
---
It is well known that the description of the ear to which Doty,
Worrall and Alves (known as the place theory of hearing) refer is
incomplete and conflicts with much of the psychoacoustic
evidence. "A second difficulty with the place theory lies in
the fact that, in complex sounds, components are often heard
that are NOT present in the Fourier analysis. Or loudness
judgments of components may be made which do not agree
with the amplitudes obtained for Fourier components. It is
certainly true that there are phenomena which cannot at the
present time be explained by the place theory of hearing."
[von Bekesy, Georg, "Hearing Theories and Complex Sounds,"
Journ. of the Acoust. Soc. Am, 35(4), April
1963, pg. 589] Be it noted that von Bekesy is the researcher
most responsible for compiling experimental evidence for
the place theory. (In fact he won the Nobel prize for it.)
---
Leaving out the italicized word "NOT" from the posted text
produces a distinctly false impression. Mea culpa.
---
Several other errata appeared in Digest 511, topic 6:
The title of the reference listed as [5] should be "The Physics
and Psychophysics of Music" by Juan Roederer, 2nd ed., 1973.
(There is now a third edition, 1995, same author, same title.)
My post incorrectly listed the title as "Introduction to the
Physics and Psychophysics of Music," etc. Important if you
try to look up the book in a computerized library system!
Under the reference listed as [6] the sentence should read
"...more detailed than any text but Pierce (1992)." This is my
flub. Because John R. Pierce's "The Science of Musical Sound"
from 1992 has a virtually *identical* title to Johan Sundberg's
"The Science of Musical SoundS" (S on the end) *ALSO*
published in 1992, I conflated the two here. The sense of the
paragraph is that Sundberg's 1992 book offers more complete
references and a wider consideration of psychoacoustic ear/brain
models than any general-reading text other than Pierce's book
(also from 1992).
--mclaren



Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Thu, 26 Oct 1995 18:33 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id JAA22990; Thu, 26 Oct 1995 09:32:38 -0700
Date: Thu, 26 Oct 1995 09:32:38 -0700
Message-Id:
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/27/1995 8:21:21 AM
From: mclaren
Subject: A new way of regarding non-octave
scales; Jose Wuerschmidt's categorization:
implications for non-octave composition
---
As Erv Wilson has remarked, "The field of
microtonal scales is absolutely infinite."
The closer one looks at any given category
of tunings, the more detail one sees.
This is nowhere more apparent than with
non-octave scales: specifically, that subset
defined by the Nth root of K.
As everyone realizes, non-octave scales are
so new and so unfamiliar to Western ears that
their properties remain largely unexplored.
While Enrique Moreno (1992) and myself (1989,
1992, 1993, 1995) have made a start, much
theoretical work remains.
One of the more striking characteristics of
non-octave scales is that they often exhibit
a marked disjunction between what Jose
Wuerschmidt called the "defining interval"
and the "constructing interval." Indeed, this
perceptual dissonance can be so great in
non-octave scales as to turn the ordinary rules
of harmony and melody upside-down and inside-
out, and often leads to puzzlement and frustration
among those who hope to compose with Nth roots
of K.
Gary Morrison has alluded to this problem in posts
on his non-octave 13.6363/oct scale (what he
calls "88 CET," an appelation which while accurate
stresses its defining rather its constructing interval;
and in the case of 13.6363/oct the constructing intervals
are actually the ones with most musical importance).
However I propose to discuss the issue on a more general
basis, covering all Nth roots of K.
First, a word about the terminology. Jose Wuerschmidt was
a microtonal theorist who did seminal work during the
1920s: his article "Die Quinten- und Terzengewebe" ("The
Web of Fifths and Thirds") had a huge impact on subsequent
thinking about scale generation, and to a large degree
underlies Rothenberg's, Wilson's, Fokker's and Lucy's
approach to xenharmonics. (Although they themselves are
often not aware of this; W's ideas diffused far & wide, often at
2nd-, 3rd-, and 4th-hand.) Wuerschmidt's essential idea
was that tunings are characterized by two kinds of intervals:
one, which he called "constructing intervals," which generate
the tuning via an implied underlying harmonic root progression
and which define the tuning's tonality--and a second kind of
interval, which Wuerschmidt called "defining intervals."
This is the interval which (in what Erv Wilson calls "logarithmic
space," as opposed to "ratio space") linearly defines the melodic
modes of the scale.
In the case of 12-TET, the constructing interval is clearly 2^[7/12].
Underlying this approximation one can reach back to a Pythagorean
(one might say archetypal) constructing interval 3/2, which places
12-TET firmly in the camp of the positive scales. (I.e., those whose
fifths exceed the 3/2 in size. Technically, Bosanquet's positive/negative
classification was originally intended to relate scales to the fifth of
12-TET, as anal-retentive detail-obsessives will
doubtless point out, but modern usage relates equal tempered
scales to the 3/2. ) Other constructing intervals are equally
plausible: Erv Wilson has generated JI tunings based on cycles
of an interval given by the harmonic mean twixt 4/3 and 11/8,
and he has also generated JI tunings based on cycles of 6/5s and
5/4s. One could equally well imagine JI tunings based on cycles
of 7-limit, 11-limit, 13-limit and other constructing intervals;
Johnny Reinhard has generated a tuning based on the squares of
prime numbers. Other constructing intervals are possible.
Returning to equal-tempered scales, clearly harmonic progressions
other than the 3/2 have been used in other cultures. The Javanese and
Balinese do not appear to use a 3/2 at all, nor do the East Indian
srutis. Even in this culture, some instruments favor the 5/4
rather than the 3/2--as for example vibes.
In the realm of the Nth roots of K, subdivisions of the 3:1--most
notably the Bohlen-Pierce scale, 13th root of 3--tend to preserve
the 3:1 as a constructing interval when the division is a small
number of scale-steps, but as the number of steps rises, other
constructing intervals can appear (depending on the exact Nth root
of K).
By contrast, the defining interval of 12-TET is the semitone of
100 cents. This interval defines the melodic structures, the leading
tones and modes possible in 12-TET. Because of the size of the
12-TET defining intervals, many characteristic melodic structures
of antiquity cannot be accurately rendered; in 12-TET there is
no distinction between the diatonic and the chromatic semitones,
for example--nor can the Hellenic enharmonic genus' characteristic
plangent near-quartertone be rendered at all accurately. The sharped
leading-tone favored by fretless string players (very well rendered
by 17-TET) cannot be faithfully reproduced in 12. Ditto the string
player's flatted II in the root of a typical I-IV-V-II-I progression.
In fact, string players will tend to make a consistent pitch distinction
twixt II and IIb, while recovering pianists (and other musically
challenged individuals) will perceive no difference between the
two root notes.
However, the defining interval of the 13th root of 3 is the single
scale-step of 146.304 cents, very close in melodic size (and effect)
to a single scale-step of 8-TET. Thus, while 12-TET uses a
whole-tone very similar to the familiar tonal 9/8, 13th root
of 3 uses a defining interval not close to anything very tonal.
In fact the scale-step of 13th of 3 is a good approximation to
3 scale-steps of 24-TET, which forms an entirely anti-tonal
interval, lying as it does halfway between one 24-tone circle of
12 fifths and another; again, 146.304 cents is a reasonable counterfeit
of the neutral third formed by the geometric mean between
the 6/5 and 5/4, but again this is hardly a tonal interval in
the just intonation sense (since 350 cents corresponds to an
irrational number 1.224053543).
Thus 13th of 3 boasts quite tonal harmonic progressions, especially
if one deliberately mis-spells the chords with a 13-scale-steps
3:1 on the outside and a 292.608 2-scale-step third on the inside.
But the melodic defining intervals and thus the modes of 13th of
3 are utterly anti-tonal and inharmonic, and produce an
interesting clash with the constructing intervals.
In the Nth roots of 2, this kind of war twixt defining and
constructing intervals is rare. Above 48 tones per octave it
does not exist; and below there are only a few examples. 35-TET
is one example, 26-TET another. The most notable exemplar is
19-TET, in which the 189.47-cent whole-tone defining interval
clashes headlong with the very good 694.7368-cent constructing
interval of a fifth, unless purely diatonic progressions are used.
(That strategy quickly wears out its welcome unless the listeners
harbor a particular love for Christmas carols.)
Wendy Carlos' alpha and beta scales are characterized by virtually
just constructing intervals almost bang-on the just 3:2, but their
defining intervals are nothing like the 200-cent approximation of
the 9/8 with which we're familiar.
Thus the clash twixt defining and constructing intervals must
be considered a particular resource of non-octave Nth root of K
scales.
Moreover, because many Nth roots of K share identical constructing
intervals while boasting entirely different defining intervals,
the adroit scale designer can fix the constructing interval and
generate new scales by searching among alternate Nth roots of K with
a different N but the same K.
For example, one might decide one wanted a just 5/4. In that
case one would fix the constructing interval (a chain of 5/4s)
and use a successive set of Ns to generate alternate
non-octave scales and then explore their characteristics.
The most obvious example is of course the set of equal divisions
of the 5/4 ordinally greater than and ordinally less than the
familiar 4-equal-part division of the approximate 5/4 used
in 12-TET.
Maintaining a just 5/4 and using 5 divisions gives a scale-step
of [386.31371/5] cents, which yields a non-octave scale of
1200/77.26274 tones/oct = 15.5314 tones/oct. Using 3
divisions of the 5/4 (one less than the familiar 4, just as we
above explored one more than the familiar 4) we obtain
a scale-step of [386.31371/3] cents, for a non-octave scale
of 1200/128.77123 tones/oct = 9.31885 tones/oct. In both
cases the constructing interval will be a 5/4, but the defining
intervals are quite different.
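For those who want to check the arithmetic, here's a
minimal illustrative Python sketch (the function names
are arbitrary, not from any published package) that
reproduces the figures above: the 146.304-cent step of
the 13th root of 3, and the step sizes and tones-per-
octave counts for equal divisions of a just 5/4.

    import math

    def cents(ratio):
        """Size of a frequency ratio in cents."""
        return 1200.0 * math.log2(ratio)

    def nth_root_of_k_step(n, k):
        """Step size in cents when K is divided into N equal parts."""
        return cents(k) / n

    def tones_per_octave(step):
        """How many such steps fit in a 2:1 octave."""
        return 1200.0 / step

    # 13th root of 3: the defining interval and the 2-step "third"
    bp = nth_root_of_k_step(13, 3)
    print(round(bp, 3), round(2 * bp, 3))      # 146.304  292.608

    # fix the constructing interval at a just 5/4 and vary N
    for n in (3, 4, 5):
        step = nth_root_of_k_step(n, 5 / 4)
        print(n, round(step, 5), round(tones_per_octave(step), 5))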
One result of this procedure is to generate a family of harmonically
related non-octave scales among which one can "transfer" (to
use Ivor Darreg's, and earlier still, Augusto Novaro's, terminology)
in a non-octave analogy to traditional tonal modulation. In the
case of standard Western modulation, changing to another key
maintains the defining interval while moving by the constructing
interval; non-octave "modulation" of the kind described above
turns this process on its head by maintaining the constructing
interval but often moving by the defining interval.
Other obvious elaborations abound, but that must be left for
another post.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Sat, 28 Oct 1995 04:08 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id TAA15674; Fri, 27 Oct 1995 19:08:53 -0700
Date: Fri, 27 Oct 1995 19:08:53 -0700
Message-Id: <951027220824_78406650@mail02.mail.aol.com>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/28/1995 12:45:59 AM
From: mclaren
Subject: The Gleichniszahlen-Reihe Monster
---
Along with the Sierpinski gasket and the
Weierstrass curve, the Gleichniszahlen-Reihe
Monster is one of the more interesting
number-theoretic constructs.
Consider the sequence:
1
1 1
2 1
1 2 1 1
1 1 1 2 2 1
..
Is there a pattern here?
The answer's embarrassingly simple: row 2
is "one 1," referring back to the previous
row. Row 3 is "2 ones," referring to row 2.
And so on.
This "likeness sequence" (a loose translation
of the German name) grows quickly. Row 27,
for example, contains 2017 entries.
Clearly the Gleichniszahlen-Reihe Monster can
be generalized to any two relatively prime
numbers. The monster then takes the form:
p q
1 p 1 q
1 1 1 p 1 1 1 q
3 1 1 p 3 1 1 q
1 3 2 1 1 p 1 3 2 1 1 q
..
What does this have to do with tuning?
Well, the Gleichniszahlen-Reihe Monster
offers an attractive method of generating
modes from a tuning. It also provides a
scheme for traversing ratio space to generate
scales (if the units of the sequence are
considered as coordinates in ratio space).
Lastly, the Gleichniszahlen-Reihe Monster
could be used as a melodic engine for algorithmically
generating note-sequences in any tuning (if
the units of the sequence are considered as indices
of an array containing xenharmonic pitches).
For more info, see Hilgemeir, M., "Die Gleichniszahlen-
Reihe," in Bild der Wissenschaft, vol. 12, 1986,
pp. 194-195.
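Since the construction is so simple, here's a minimal
illustrative Python sketch (not from the article cited
above) that generates the likeness sequence and, as one
hypothetical use, reads a row as indices into an
arbitrary table of pitches. Seeding it with ['p', 'q']
reproduces the generalized rows shown earlier.

    from itertools import groupby

    def next_row(row):
        """Run-length describe a row: [1, 1] -> [2, 1] (two ones)."""
        out = []
        for value, run in groupby(row):
            out.extend([len(list(run)), value])
        return out

    def likeness_sequence(seed, rows):
        """The first `rows` rows, starting from `seed`."""
        result = [list(seed)]
        for _ in range(rows - 1):
            result.append(next_row(result[-1]))
        return result

    for row in likeness_sequence([1], 6):
        print(row)   # [1], [1,1], [2,1], [1,2,1,1], [1,1,1,2,2,1], ...

    # hypothetical melodic engine: entries index an arbitrary
    # 5-entry pitch table, given here in cents
    scale = [0.0, 150.0, 300.0, 450.0, 600.0]
    melody = [scale[i % len(scale)] for i in likeness_sequence([1], 8)[-1]]
    print(melody)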
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Sat, 28 Oct 1995 18:53 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id CAA24365; Sat, 28 Oct 1995 02:54:13 -0700
Date: Sat, 28 Oct 1995 02:54:13 -0700
Message-Id: <9510280952.aa18004@cyber.cyber.net>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/29/1995 1:17:07 AM
From: mclaren
Subject: Xenharmonic uses of FFTs
---
The FFT (to paraphrase DSP expert F. Gump)
is like a box of chocolates: you never
know what you're going to get out.
Because of the close connection between
the micro-level of harmonic or inharmonic
partials, and the macro-level of just,
equal-tempered and n-j n-e-t tunings,
the FFT deserves a more detailed and more
knowledgeable discussion than it has to
date received on this forum.
As everyone knows, today's digital
time-stretching and pitch-shifting
algorithms (exemplified by SoundHack
and the Dolson vocoder) have reached a
high state of perfection. This kind of
software allows you to reliably change
the pitch (or length) of any sound with
digital exactitude, under exquisitely
precise software control.
Everyone knows this, but it isn't true.
A number of folks on this forum have gone
so far as to claim that the FFT "tells you
what's going on inside a sound."
Alas, not so.
In fact the short-time FFT analysis of
a signal changing in time gives you
a series of snapshots. Each snapshot is
an average of what goes on during that
time interval. You get *NO* information
whatever about what goes on *BETWEEN* spectral
snapshots.
To get around this severe limitation in our
knowledge about the waveform being analyzed,
scientists and acousticians use a sophisticated
scientific technique. It's called "a wild guess."
Between spectral snapshots spat out by the STFFT
analysis, audio analysis programs make a set of
arbitrary assumptions. In effect, guesses.
Namely: they guess (arbitrarily) that the frequency
components do not change very much, or very fast,
and that they remain nearly harmonic all the time.
None of these assumptions are true for any sound
in the real world, but they are sometimes *almost*
true for some sounds, some of the time.
Some sounds have nearly harmonic overtones that
change relatively slowly except for the initial
10 milliseconds of the attack. Alas, during the
first 1/100th of a second of almost every sound, short-
time FFT analysis produces garbage output and the
results must invariably be fudged with various
ad hoc rules of thumb and guesstimates.
This occurs because--during the initial attack of
a sound--both the frequency and the magnitude
of the partials change very rapidly, and the FFT
completely falls apart when applied to spectra
changing rapidly in both phase and magnitude
at the same time. (This is inherent in the nature
of wave phenomena, and is in fact the basis for
the Heisenberg Uncertainty Principle when applied
to de Broglie matter waves.)
The net result is that the FFT in fact does *not*
tell you "What's going on inside a sound." Instead,
the FFT *sometimes* gives you a hint under
*some* circumstances of *some* of what
*might* be going on inside *certain* sounds
at the micro-level...part of the time.
However, the short-time FFT also introduces
a great many distortions into the information
it generates. The short-time FFT also destroys
some of the information being analyzed.
Some people have claimed otherwise: memorably,
Mark Dolson. The statement is always incorrect
when applied to the short-time FFT, and often
incorrect when applied to the simple FFT.
Under no circumstances can
the output of an FFT be taken as gospel,
and in many cases the FFT lies outright.
Doubtless many of you will be shocked to
learn this. "But the textbooks say..."
Nope.
All too many textbooks are written by
folks without a lot of hands-on practical
experience, and they usually deal with ideal
theoretical cases ONLY. For good reasons,
engineering and physics texts are written
to make the subject matter as transparent
and comprehensible as possible. As a result,
few introductory texts go so far as to tell
you the remarkable number of ways in which
today's sophisticated DSP algorithms can
turn your input into total trash.
Perhaps you've been seduced by the siren sound
of that cognomen "digital." Anything that's
"digital" *must* be accurate--right? It *must*
be reliable--right? After all--the Fourier transform
is a classic mathematical technique...how can
it *lie* to us?
To repeat one of my favorite riffs, all methods
of analysis are reliable only insofar as the
quantity being analyzed fits the assumptions
on which the analysis is based. Aiming a
telescope at a virus doesn't tell you much;
using a microscope to study galaxy M31 is
futile.
In particular, mathematical techniques produce
accurate results only when the input honors
the boundary conditions of that technique.
This is no surprise to those of us on the physics
side of the fence, but it seems to come as a
constant shock to some other folks. While
physics dudes understand & accept that
all mathematical models of reality are merely
*approximations* of the extremely complex
real world--and usually crude approximations
at that--this is a lesson that has yet to percolate
into everyone else's consciousness. Thus one
often sees engineers with a mathematical hammer
looking at everything in the universe as though
it's some kind of nail.
The FFT is a classic example.
Surprisingly, this can be a good thing. To an engineer,
distortion is a no-no...but to a composer, distortion's
*dandy.* If a mathematician puts a sound
into the FFT and gets out weird digital glop
that strikes the ear as xenharmonic & unspeakably bizarre
& nothing like the input...well, that's bad news for
a math nerd but great news to a xenharmonist.
So rejoice: not only does the FFT often produce
utter junk on output, it often spits out sonically
*interesting* junk.
Think of it as "found art," DSP-style.
This makes DSP pitch-shifting and time-stretch
algorithms (like SoundHack) splendid synthesis
modules! Because of the FFT's propensity to
act in a highly non-linear and bizarre manner,
it can mixmaster even the most mundane inputs
into utterly fascinating digital grunge.
But don't take my word for it.
Instead, let's consider 3 hands-on examples that
show just how wildly the FFT can misbehave:
[1] Try this one--take a short noisy percussive
sound (say, a breaking light bulb, or the thwack
of a hammer, or a recorded gunshot) and time-
stretch it by a factor of 100. Surprise! Although
intuition would lead you to suspect that the output
would simply be a very long very loud rumble,
such is not the case! Instead, what you get out
bears *no sonic relation whatever* to the input:
the output is a wild series of rising and falling
sine waves, a strange and xenharmonic surf of
exotic glissandi and portamenti--1000 violins
run amok with tremolo from hell.
[2] Now take a short 100-sample burst of noise--
say, from an arc welder, or off a radio being
tuned--and time-stretch it by a factor of 100,
then another factor of 100 (10,000 all told).
Surprise! The output will be a long digital
shriek composed of many sine waves rising
and falling. Surprise #2: the shriek will
gradually die away into silence after several
minutes. Yes, even though the input was
noise of a constant level, the output is not
only a set of pitched gliding sine waves, but
one which fades away into *nothing!*
[3] Digitally pitch-shift a perfectly harmonic sound
(say, a triangle wave or a square wave) by
an irrational factor...say, a 12-TET minor third.
Surprise! The output sound will suffer from
periodic "blips" in its amplitude and
severe overall phasing--in fact the original
boring triangle or square wave will sound as
though it's been passed through a Leslie
speaker simulator and then a phaser, along
with a chopper modulator that makes
the sound's envelope flutter audibly.
---
How can such things happen???
Listen up: this may be the only time anyone
tells you the whole truth about the FFT.
In case [1], time-stretching a sharp percussive
noise produced a series of wild rising
and falling sine waves because we tried to
use the FFT to analyze and resynthesize a
noise impulse as though it were made up of
a collection of sine waves.
Remember the admonition at the start of this
post? A mathematical technique is *only*
useful for analyzing those inputs which
adhere to the assumptions underlying that
technique. And in this case, our assumptions
were fatally flawed--we tried to model the
sound of a light bulb smashing as a set of
sine waves.
Sorry: this doesn't work in the real world.
As opposed to pie-in-the-sky theoretical
inputs made up of perfectly harmonic sinusoids,
many common everyday sounds in the
real world are not even remotely well
modeled as collections of sine waves.
What took place inside the FFT algorithm that's
at the heart of SoundHack's time-stretch
algorithms is worth examining in detail,
because it will demonstrate just how
badly such so-called "digitally perfect"
algorithms as the FFT can misbehave.
To see what happened, think back on the
way you got bell sounds out of an
old analog synth. Remember? First you'd
crank up the voltage controlled filter to
a whopping big Q, until it was just about
to warble into oscillation. Then you backed the
filter's Q knob off just a bit and plugged
a short sharp impulse into the filter's
input. On the Arp 2600 a trigger pulse
(which is generally *never* used as an
audio output--it's meant to trigger an
outboard synth or an analog sequencer or
envelope generator!) made a dandy candidate.
What happened, of course, is that the
short sharp impulse hit the filter with
a whole bunch of frequencies. (Remember,
the frequency and time domains are inverse
to one another: an impulse narrow in the time
domain is broad in the frequency domain.
Thus a sharp short spike like a trigger
pulse contains a wide band of sine
waves in the frequency domain.) The filter
stored those frequencies and smeared
them out over time. Moreover, because
the filter had a very high Q it was near
the conditions necessary for self-oscillation,
and the burst of energy from the trigger
pulse pushed the filter over the edge and
made it "ring" for a while.
So the output was a rich clangy bell sound,
with lots of bizarre spurious sine waves
generated by the filter itself. The capacitors
in the filter circuit delayed each of the many,
many sine waves in the trigger pulse by
a different amount, spreading them out
over time: and the self-oscillation of the
filter added huge numbers of overtones
at the natural resonance frequency of the
filter itself (a frequency determined by
the capacitance, inductance, and resistance
of the filter circuit).
The end result is that a perfectly ordinary
little click, when fed into that analog
filter, generated a completely unexpected
result: a deep inharmonic bell clang that
lasts several seconds.
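As a quick numerical check of the time/frequency
inversion mentioned in the parenthetical remark above,
here's a minimal illustrative Python sketch: a signal
that is narrow in time spreads its energy across
essentially every frequency bin of its transform.

    import numpy as np

    n = 1024
    impulse = np.zeros(n)
    impulse[0] = 1.0                     # a one-sample "trigger pulse"

    mags = np.abs(np.fft.rfft(impulse))
    print(mags.min(), mags.max())        # both ~1.0: equal energy in every bin

    burst = np.zeros(n)
    burst[:100] = np.random.randn(100)   # a 100-sample noise burst
    burst_mags = np.abs(np.fft.rfft(burst))
    # fraction of bins holding at least 10% of the peak magnitude:
    print((burst_mags > 0.1 * burst_mags.max()).mean())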
If you'll remember that a filterbank is
at the heart of the FFT, you'll now realize
what was happening when you fed the poor
defenseless FFT that short sharp breaking-
light-bulb sound.
Extreme values of time-stretch are analogous
to raising the Q of an analog filter. In this
case, the energy from each time-slice of
the FFT was carried out into many, many
other frames because the extreme
time-stretch cranked down drastically
the rate at which the short-time FFT was
permitted to change its output.
The net result is that the energy from
each FFT frame built up and built up,
until finally it sent the individual filters
of which the FFT is made into a brief
period of self-oscillation.
At the same time, the digital filterbank
which lies at the heart of the FFT
spread those sine waves out in time.
This explains the wild rising and falling
sine waves--whose amplitudes rise
and fall as their phases coincide
constructively or destructively.
This last phenomenon is the clue
to case [2], in which a noise burst
of constant level is changed into a
shriek which decays into silence.
Because of the extreme time-
stretch (here, a factor of 10,000!)
the many sine waves generated
by the FFT's filterbank bled
away at a rate controlled by the
internal "circuitry" of the FFTs
digital filters--in this case, the
various constants used inside
the FFT itself, which correspond
to delay times and filter gains and
tap coefficients. In effect, we
"whacked" the FFT with a
100-sample impulse and it
reverberated for a while, then
died away. This is in fact a
classic example of the impulse
response of an FIR filterbank at
the heart of a real-world FFT.
In case [3] the filter tried to
approximate an irrational
quantity with the ratio of 2
integers. In this case, the
attempt failed because the length
of the FFT wasn't very large, and so
we tried to approximate an irrational
number with 2 *small* integers.
Remember: an FFT algorithm
does pitch-shifting by changing the
decimation factor of the input FFT
of the sound and then changing the
output sample rate conversion filter
so that the ratio of the two quantities
is as close as possible to the desired
pitch shift ratio. However, in the
case of a ratio as irrational as 2^(4/12),
the FFT algorithm falls apart--because
ultimately it's limited by the fundamental
frequency of the sound itself. If the
sound's fundamental is, say, 100 samples
in length, then the FFT *must* use a 128-point
window to preserve the fundamental of
the input sound. This limits the granularity
by which the decimation factor/sample rate
conversion filter can be changed. And thus
(as usual with the FFT) we get a trade-off:
to pitch-shift a sound with extreme precision,
we must reduce the window size of our
FFT--but this amounts to throwing away most of
the frequency information in the original sound.
Conversely, the more precise our frequency analysis
of the input, the more coarse and granular the
amount by which we can ratchet the pitch
up or down via DSP methods.
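To see the small-integer problem in isolation, here's a
minimal illustrative Python sketch--it says nothing about
how SoundHack itself is coded--showing how closely a
12-TET minor third can be approximated by a ratio of two
integers of limited size, with the error given in cents:

    import math
    from fractions import Fraction

    target = 2 ** (4 / 12)        # the irrational 12-TET minor third

    for max_denominator in (8, 32, 128, 1024):
        approx = Fraction(target).limit_denominator(max_denominator)
        error = 1200 * math.log2(float(approx) / target)
        print(max_denominator, approx, round(error, 3), "cents")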
---
Now that you've more insight into just how
bizarrely the FFT can behave, you might want to
try your hand at using the FFT to make microtonal
music. This is a happy hunting ground for
the imaginative xenharmonist! Talk about "found
scales..." With a sound tool like SoundHack or
the Dolson vocoder, you can obtain "found scales"
from almost any kind of percussive sound--nor
do you need access to a digital audio workstation
or a mainframe! You can do impressive and
wildly microtonal-sounding compositions with the FFT
right on your home Mac or PC.
Here are some suggestions for ways to produce
*highly* xenharmonic music using the FFT:
[1] Try ring modulating a harmonic-timbre sound
& then pitch-shifting or time-stretching it.
(To ring modulate a sound, just multiply it by
a fixed sine wave or set of sine waves. Most
DSP tools allow the user to generate a fixed
signal with an arbitrary set of frequencies
and amplitudes, and also allow you to multiply
any signal by any other signal; see the sketch
after this list.)
You'll discover that the more inharmonic a
sound, the wilder the output from the FFT--
regardless of the particular technique you
use.
[2] Try extreme time-stretching or pitch-
shifting of conversations. Only a small part
of any given vocalisation consists of pitched
material: the rest is made up of glottals,
fricatives, plosives, labials, and the like.
Because these are *wildly* inharmonic
impulses, they're superb candidates
for teasing wacky microtonal goobidge from the
FFT.
[3] Try pitch-shifting or time-stretching
a sound by some large amount, then
save the sound in reverse order and
repeat the process. As long as the
time-stretch or pitch-shift isn't a
power of 2, the result will be a
gobbling gibbering wobbling modulation
that's caused by the ratio of the two
incommensurate decimation rate-vs.-
FFT-window-sizes. This procedure
can generate some extremely unusual
timbres & highly xenharmonic pitches.
[4] Try multiplying one sound by another;
try dividing one sound by another.
You'll discover that division is equivalent
to high-pass filtering of one sound
ring-modulated by the other; it's incredibly
noisy. Multiplying one sound by another
is equivalent to ring-modulating one
sound by the other.
Results are especially interesting if, say,
one input is Mozart and the other input
is The Sex Pistols.
[5] Try using an envelope follower to
control the amount by which you ring
modulate a sound; also try making
the relationship inverse. (That is,
division instead of multiplication.)
For pitched highly harmonic material,
this can turn a western symphony
orchestra into something like a
demented gamelan; it's endlessly
entertaining, particularly with the
100 strings of Mantovani.
[6] Some DSP packages allow you
to use a phase vocoder to generate
a set of time-varying filters. This
is a real gold mine. The mundane
usage is to analyze speech and
apply the time-varying filters to
a flute, an electric guitar, etc.,
ad nauseam.
But it becomes *really* interesting
if you abuse & misuse the algorithm
by "analyzing" the sound of a
thunderstorm, say, or a recording
of bugs frying on an electric bug
zapper, and then apply the weird
non-linear time-varying filters thus
obtained to, say, the sound of a
ditch witch, or a building being
demolished. The end result, as
Carter Scholz put it, is akin to
"Lloyd Bridges giving a lecture on just
intonation while wearing a scuba
regulator and breathing helium."
*Highly* xenharmonic!
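Here's a minimal illustrative Python sketch of the ring
modulation mentioned in suggestion [1]: multiplying a
harmonic test tone by a fixed sine wave moves each
partial at frequency f to the pair f+fm and f-fm, which
is exactly what makes the result inharmonic.

    import numpy as np

    fs = 44100                             # sample rate in Hz
    t = np.arange(fs) / fs                 # one second of time

    # a harmonic test tone: 220 Hz plus its first two overtones
    x = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in (1, 2, 3))

    fm = 173.0                             # arbitrary modulator frequency
    ring = x * np.sin(2 * np.pi * fm * t)  # the ring-modulated signal

    # for a 1-second frame the bin index equals the frequency in Hz
    mags = np.abs(np.fft.rfft(ring))
    print(sorted(np.argsort(mags)[-6:]))   # peaks at 220k +/- 173 Hz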
--mclaren


Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Sun, 29 Oct 1995 18:03 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id BAA05391; Sun, 29 Oct 1995 01:04:04 -0800
Date: Sun, 29 Oct 1995 01:04:04 -0800
Message-Id:
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/29/1995 8:35:42 AM
From: mclaren
Subject: Our friends at Ensoniq
---
To date little has been said regarding the
Ensoniq corporation. Someone needs to point
out how much we owe people like Steve Curtin,
and his fellow Ensoniq engineers.
Let's start with the fact that if you want a
sampler with built-in tuning tables, were it
not for Ensoniq you'd be in deep guano.
It's incredibly important that a sampler include
a tuning table. Without one, the only
reliable way to break out of 12 on a sampler
is to use a separate sample for each MIDI note.
This gobbles memory at a *staggering* rate. For a 2-
second stereo sample at 44.1 kHz spread over (say)
36 notes, this would demand 6.05 megs of RAM
*per MIDI channel.*
For 16 MIDI channels, that's a mind-boggling
96.89 megs of RAM!
By contrast, if your sampler allows you to
use a tuning table you save an *incredible* amount
of RAM. Moving to more than 12 tones
per octave lets you spread each sample over
even *more* keys than in 12--for a 31-tone
equal tempered tuning, for instance, you can
spread each sample over 31/12 times as
many MIDI notes. This means that if you've
got a multi-sampled sound that spreads
each sample over (say) 3 MIDI notes in 12,
a tuning table allows you to *drop* memory
requirements by letting you spread each sample
over 8 MIDI notes.
Compare the two situations. In one case, the
lack of a tuning table forces you to gobble RAM
by using 36 separate samples; in the other case,
a built-in tuning table lets you *save* that RAM
by using only 36/7.8 = 5 separate samples. The
savings in RAM is ENORMOUS: a full
[(36 - 5)/36] * 100 = 86%!
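Here's a minimal illustrative Python sketch of that
comparison. The per-channel figures above correspond to
counting one 16-bit channel of the stereo sample; set
CHANNELS to 2 to count both sides.

    SECONDS = 2.0
    RATE = 44100                  # samples per second
    CHANNELS = 1                  # one 16-bit channel (see note above)
    BYTES_PER_SAMPLE = 2          # 16-bit words

    bytes_per_multisample = SECONDS * RATE * CHANNELS * BYTES_PER_SAMPLE

    def megs(num_samples):
        """RAM in megabytes for this many separate multisamples."""
        return num_samples * bytes_per_multisample / 2**20

    print(round(megs(36), 2))     # no tuning table: ~6 megs per MIDI channel
    print(round(megs(5), 2))      # with a tuning table: under 1 meg
    print(round(1 - 5 / 36, 2))   # fraction of RAM saved, about 86%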
This really matters.
We're not talking about just 10% or
even 20% or 30% savings of RAM here...a tuning
table makes an ENORMOUS difference. It lets
you use 1/2-1/6 of the memory you'd otherwise use.
This is *crucial* because the problem with every
sampler is lack of RAM. No matter how much
you've got it's never enough. And forcing the
microtonal user to burn up gobs and gobs of RAM
by duplicating each sample on every single MIDI
note is so wasteful as to defy description.
Clearly, Ensoniq's inclusion of a tuning table (a
separate tuning table for each layer of each sample,
in fact) on the ASR-10 qualifies as one of
the biggest gifts to microtonalists ever.
Period.
Of course, there's more.
Ensoniq builds reliable products. They sound pretty
good. Other companies have flashier technology,
but Ensoniq's synths sound about as excellent as
anything out there. The big *musical* advantage
Ensoniq enjoys, of course, is that they've supported
microtonality by including built-in tuning tables
ever since 1989. I'm not sure why they made that
decision--there doesn't seem to be a lot of
economic benefit in tuning tables. However, I can
say that I've bought a bunch of Ensoniq gear ever
since they started building tuning tables into their
synths. As long as they keep including tuning tables,
I'll keep buying their equipment.
This is particularly noteworthy behavior for a large
synth manufacturer because many of the other clueless
and brain-dead synth companies still refuse to include
full-keyboard microtuning. Especially the Japanese.
Weird, when you think about it. Which culture uses
a non-12 pentatonic scale? (The Japanese) And which
culture is more frenziedly committed to 12-TET?
(The Japanese!) Kawai, Akai and many others
fanatically refuse to let their instruments be retuned.
They're adamant about it to the point of psychosis. The
behaviour is almost Manson-like in its self-destructiveness,
yet they persist.
As a result I won't buy Kawai or Akai synths. Let 'em
rot: they can come out with a synth that does
everything but make coffee and I still wouldn't even
use it as a boat anchor.
Ensoniq's commitment to microtonality deserves special
mention on this forum. I have nothing but good things to
say about their synths. (It would be nice if they dropped their
prices, but then I wish all mfrs would drop their prices;
what else is new?) Bottom line: if you're serious about
microtonality, eventually you will discover that kludges
and gyrations and contortions and weird clumsy work-
arounds like tuning each note with pitch-bend, etc.,
just don't work after a certain point. You run out of
MIDI bandwidth, notes start to get dropped, attacks
sound twangy, and the logistics of such kludges just
become insupportable for even moderately complex
compositions.
In the end, a built-in full keyboard tuning table is
an absolute necessity for serious MIDI microtonality.
And Ensoniq and Yamaha are the only two synth
manufacturers that have consistently stuck to
their support for tuning tables.
Best of all, Ensoniq--unlike Yamaha--has refused to
throw away its best technology. Yamaha's insane
decision to stop making FM synths is a mistake Ensoniq
would never have made: instead, Ensoniq just keeps
making its instruments bigger, better and more
capable. Ensoniq refuses to give up on a product.
They just keep adding features and upgrading the
ROMs until the synth does everything you could reasonably
want. That's a good attitude for a synth company to
have, especially nowadays when the Japanese are
coming out with a new model every 6 months that
renders obsolete all previous RAM cartridges, editors,
voice disks, etc. but doesn't sound any different or
do any more than the Japanese synths of 5 years ago.
Bottom line?
Everyone on this forum should offer Steve Curtin and
his fellow engineers at Ensoniq a very long,
very appreciative round of applause.
We owe them a lot.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Mon, 30 Oct 1995 07:07 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id VAA17788; Sun, 29 Oct 1995 21:07:07 -0800
Date: Sun, 29 Oct 1995 21:07:07 -0800
Message-Id: <951030050412_71670.2576_HHB30-6@CompuServe.COM>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

10/31/1995 10:27:18 AM
From: mclaren
Subject: Models of reality, the Fourier
mindset, and negative eigenvalues in
Sturm-Liouville problems
---
Charles Lucy has been regularly pilloried
in this forum for his doubts about the
Fourier analysis model of acoustic
systems. Several months ago, in Topic
1 of Digest 404, John Chalmers also stated
that "I can think of no physical principle
which would favor flat fifths over natural..."
This is a concise expression of a general
attitude among acousticians and music
theorists, most of whom are not fully
conversant with the physics behind the
conventional textbook description of
vibrating strings and air columns.
In fact there are good solid physical reasons
why no simple harmonic mechanical oscillator
ever produces strictly harmonic oscillations,
and will instead tend to generate partials
which are either systematically stretched
or shrunk by comparison with the expected
harmonics. There are also excellent physical
reasons why 2- and 3-dimensional physical
oscillators will in general not exhibit either
linear or harmonic oscillatory behavior at all.
This point bears on Lucy's criticism of the
Fourier analysis mindset that characterizes
much of the discussion of this forum, and
in a larger sense it also bears on the question
of which tunings we use--or even contemplate
using.
Ultimately, the musics we choose to make are
limited by our model of the world. Whenever
we tune an instrument, we inevitably begin
to impose a structure on physical reality:
and the type of structure we choose to impose
is determined by our preconceptions.
In the West and Mideast, ever since the time
of Pythagoras and almost certainly (before
that) the Babylonians and the Egyptians, we
(in the West) have conceived of the universe as ruled
by number. The predictive value of number is a
touchstone of classical Hellenic thought.
This had profound implications
for the kind of music made in Greece and
the European cultures.
However, other cultures bring other
preconceptions to the process of tuning.
There is no evidence (as Marc Perlman has
pointed out) that the Javanese, the Balinese,
or any of the other cultures of Southeast
Asia conceive of the universe as a manifestation
of number, except perhaps in the Kabbalistic
sense implied by gematria, the geomancy of the
Dogon peoples of Mali, etc. Thus their tuning systems do
not arise from mathematical or physical-acoustic
considerations, and as a result Javanese and
Balinese music does not employ the 2:1 octave,
4:5:6 harmonies or harmonic-series timbres. The
same is true of the musics found in Africa,
Central Asia, most of the South Seas islands and
South America.
None of these cultures appear to conceive of
music in the mathematical framework typical
of Western thinking; indeed, as Jon Appleton
has pointed out, in many other cultures
music is often not analyzed at all, but regarded
as belonging to the same realm as magic.
Thus one's preconceptions exert a potent
influence on the kind of music one chooses
to make, and different models of reality
produce different kinds of tuning.
The simple harmonic oscillator equation
is one model of reality.
There are many others even within the
confines of Western mathematics.
According to the simple harmonic motion
equation, the displacement
x = A*sin(sqrt(k/m)*t) + B*cos(sqrt(k/m)*t)
where k is the spring constant, m is
the mass, and t is the elapsed time
(A and B are constants set by the initial conditions).
This is a recipe for perfectly harmonic
behaviour.
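Here's a minimal illustrative numerical sketch of the
point: integrate the equation above, then add a small
cubic stiffening term to the restoring force--a crude
stand-in for the real nonlinearities discussed below--
and watch the period shift away from the textbook value.

    import math

    def period(epsilon, k=1.0, m=1.0, x0=1.0, dt=1e-4, steps=200000):
        """Time between the first two upward zero crossings of x(t)."""
        x, v = x0, 0.0
        crossings = []
        for i in range(steps):
            a = -(k / m) * x - epsilon * x**3  # epsilon = 0 is pure SHM
            v += a * dt                        # semi-implicit Euler step
            x_next = x + v * dt
            if x <= 0.0 < x_next:              # upward zero crossing
                crossings.append(i * dt)
                if len(crossings) == 2:
                    return crossings[1] - crossings[0]
            x = x_next

    print(period(0.0), 2 * math.pi)  # linear case: period = 2*pi*sqrt(m/k)
    print(period(0.2))               # stiffened case: measurably shorter period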
But the equation which describes
simple harmonic motion is a drastic
simplification of reality, and there
are many other (more sophisticated)
mathematical models which describe
the same physical oscillatory system.
For example, when a simple tube is
excited with a laminar airflow the air
in the tube will oscillate at a characteristic
frequency determined by the length of
the tube, the density of the air, the
location and number of tone holes, etc.
Increasing the airflow produces a stronger
second harmonic, and so on, according
to this model.
But in the real world two modes of
oscillation (fundamental and second harmonic)
in a tube tend to "pull in" toward each other
so that the fundamental rises slightly in
frequency, while the second harmonic drops
slightly in frequency. The simple harmonic
motion model of reality cannot explain this
because it views air molecules as billiard
balls connected by springs, and
the system cannot behave in any but
a perfectly harmonic way.
If instead we look at the air-filled tube as
an energy exchange system from a
thermodynamic viewpoint, or if we
view it as a state-space system, or
if we consider it from the standpoint
of continuum mechanics, the reasons
for the slight inharmonicity of the
oscillations become obvious. Setting
up a fundamental mode of oscillation in the
tube changes its acoustical admittance;
a second oscillation mode will then lose energy
or gain energy, depending on the phase
of its compressions and rarefactions
relative to that of the fundamental mode.
Since the system will tend to minimize its
overall potential energy the two acoustic
modes will couple; and because the
system is closed (negligible energy is
lost via acoustic radiation), the energy lost
from the second mode of oscillation has
to go somewhere and not all of it will be
either transmitted out the bore of the
tube or lost as friction against the walls
or in the friction created by turbulent non-laminar
flow around the tone holes, etc. Thus the
energy lost from the second mode of oscillation
will partly flow into the fundamental mode of
oscillation, which in turn forces a
change in its period.
This example serves as a warning about
the models we use to explain physical
system, and consequently to justify the
tuning systems we use. In the case
just considered, the inharmonic behaviour
is slight--as long as flow of air through
the tube remains slow and laminar. Thus
it can be handled by perturbation theory,
as in lord Rayleigh's "Acoustics" of 1896. But
as the airflow increases, the flow becomes
non-laminar. Multiphonics appear;
then the oscillation patterns become
quasi-periodic, start to break up, and
finally become completely aperiodic.
The oscillation turns to noise.
Viewed as a simple energy exchange
system, our model cannot explain this.
But if we step back yet again and realize
that the energy-exchange system we've
been picturing is a drastic simplification
the reason for this behaviour becomes clear.
If, instead, we view the partial differential equations
which describe the interaction of non-linear
acoustic admittance, non-laminar airflow and
radiated, transmitted and absorbed mechanical
energy, and boundary conditions from the
viewpoint of complexity theory... Then it becomes
obvious why the system behaves as it does.
"Little is known concerning boundary value
problems for general nonlinear differential
equations..." [Courant and Hilbert, "Methods
of Mathematical Physics," 1962, pg. 367.
For the class of hyperbolic PDEs which
describe one-dimensional physical
oscillatory systems, perturbation theory
is the classical method of dealing with
the increased airflow described above.
But as we've seen this does not work
beyond a very limited regime. Nonlinear
dynamics must be invoked beyond the region
of non-laminar airflow, and in this regime
the oscillations in the tube move from
being ordinary attractors in phase space
to being strange attractors, whose behaviour
not only jumps back and forth between
aperiodicity and quasi-periodicity, but
also spans the complete gamut from
pure noise to harmonic oscillation. Viewed
in phase space, the operation of the compressions
and rarefactions in the air of the tube follow
orbits which can be bounded, yet not precisely
predicted, and which depend crucially on
initial conditions.
Lest you imagine such chaotic behaviour is
restricted only to airflow in tubes,
note that in the case of the wave equation
describing the general nonhomogeneous
vibrating string "whenever an eigenvalue is
negative an aperiodic motion occurs
instead of the corresponding normal
mode." [Courant and Hilbert, "Methods of
Mathematical Physics," Vol. 1, pg 292]
Thus chaotic oscillation can occur
even in vibrating strings--in fact
in any one-dimensional oscillator
described by a Sturm-Liouville
eigenvalue problem.
This is not commonly known.
Why?
Because every textbook on acoustics
which treats the wave equation and
the vibrating string assumes
that the eigenvalues will never become
negative. As a result, the true complexity
of the behaviour of even so-called
"simple" 1-D physical oscillatory systems
is masked from the unquestioning students.
(These simplifying assumptions are made
so that the equations can be quickly and
easily solved. The usual dodge is to claim
that "negative eigenvalues have no physical
significance." As so often in physics and
engineering, this is a fudge. In fact
the problem is just systematically
restricted and our viewpoint successively
limited until we arrive at equations which
undergrad-level mathematics can dispatch
with elegance in a neat closed form.)
And what does all this have to do with
tuning?
The debate on this forum has mostly centered
around a restricted set of musical tunings--
JI, meantone, a few equal temperaments which
well approximate the lower members of the
harmonics series. And this tiny subset of
tunings is derived from simplistic physical
models (as we've seen). These acoustic models
are described by the usual textbook
solutions of the wave equation, simple
harmonic motion, and the rest of the
18th- and 19th-century baggage.
But the reality can be very different from
the universe predicted by these 18th-century
mathematical models.
"For three centuries science has successfully
uncovered many of the workings of the
universe, armed with the mathematics of
Newton and Leibniz. It was essentially a
clockwork world, one characterized by
repetition and predictability. (..) Most of
nature, however, is nonlinear and is not
easily predicted. (..) In nonlinear systems
small inputs can lead to dramatically
large consequences." [Lewin, Roger,
"Complexity," 1993, pg. 12]
By now it should be clear why the Fourier
transform has gained such popularity, and also
why Charles Lucy's doubts about it are
well-founded. In "a clockwork world,
one characterized by regularity and
predictability," a mathematical technique
which breaks all physical oscillations
down into sets of perfectly periodic
sinusoidal functions with no beginning
and no end makes a lot of sense. In a
clockwork 18th-century universe, the
Fourier transform enjoys enormous descriptive
power.
But we now know that the 18th-century clockwork
model of the world is not a complete picture. In the
real world, where chaotic strange attractors
characterize the action of real oscillatory
systems, the Fourier transform often falls apart.
And instead of describing reality, the Fourier
transform can put blinders on us and prevent
us from seeing the world as it really is.
In some cases, this is unimportant--because
the oscillations of real physical systems are
sometimes only a little different from the simplistic
clockwork-universe description of the Fourier
Theorem. In instruments like strings, winds
and brasses (after the initial attack of the
tone is over, and if the instruments aren't
played too loudly, and if we're talking only
about the notes in the middle range of these
instruments) a short-time Fourier analysis
tells us *something* about what's going on
in *some* of the notes, during *part* of
their duration. But even for this restricted
class of sounds, Fourier techniques fail
during the first 10 milliseconds or so of the
note's attack. FFTs fail because when t < 10ms
the amplitude and frequency of the component
partials are both changing with great rapidity,
and the Fourier transform *cannot* provide
accurate information about both the period
*and* the spectrum of an input function. An
increase in resolution in one parameter forces
a decrease in resolution in the other. This is
inherent in the mathematics of the FFT, and
*cannot* be sidestepped.
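The trade-off is easy to put numbers on. A minimal
illustrative Python sketch, for analysis at 44.1 kHz:
longer windows buy narrower frequency bins only by
averaging over a longer slice of time, and the product
is fixed.

    fs = 44100.0                          # sample rate in Hz

    for n in (64, 256, 1024, 4096):       # FFT window lengths in samples
        time_span_ms = 1000.0 * n / fs    # time one frame averages over
        bin_width_hz = fs / n             # spacing between frequency bins
        print(n, round(time_span_ms, 2), round(bin_width_hz, 2),
              round(time_span_ms * bin_width_hz))   # always 1000 ms*Hz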
A much greater limitation on the Fourier mindset
is the fact that most of the musical instruments used
by most of the cultures in the world are not violins
or French horns or flutes. Most of the musical
instruments used by other cultures are two-
or three-dimensional oscillators which
generate inharmonic partials & noise, and store
energy from one vibrational cycle to the next...
so that their behaviour is often extraordinarily
non-linear. See Rossing's discussion of the
energy storage from one cycle to the next in
a tam-tam, for example. [Rossing, 1992]
For such instruments, the Fourier description
is a hindrance rather than an aid to understanding.
And the class of tunings to which we in the west
have systematically restricted
ourselves--namely JI, meantone and equal
temperaments with good approximations of
the lower harmonics--are also inadequate for
such instruments.
This perhaps addresses John Chalmers' doubts
about the validity of any "physical system that
would favor flat fifths over natural." Simply
moving from one-dimensional physical oscillators
to two- and three-dimensional oscillators
generates fifths which are either *very* flat or
*very* sharp (depending on the oscillatory
geometry)...indeed, the whole 18th- and 19th-century
armamentarium of Western acoustic terminology is
inapplicable to such physical oscillators: "fifth" and
"third" and "natural harmonics" are terms without
meaning for such physical systems. The tam-tam
or the metallophone or the vibrating drumhead
or (as we've seen) even purportedly "simple"
one-dimensional systems like the vibrating
string often operate in the region of complexity...
a region of oscillation that lies between the
complete chaos of noise and the clockwork perfection
of perfect harmonicity. The Fourier view of
the universe does not yield useful information
when applied to such acoustic systems, or
even when applied to a vibrating string characterized
by negative eigenvalues in the associated
Sturm-Liouville equations.
One of the central revelations of complexity
theory is that patterns lie hidden in chaos.
Emergent order appears ex nihilo when
systems reach the edge of chaotic behavior,
as in woodwind multiphonics, etc. In fact
the Brookhaven National Laboratory physicist
Per Bak has developed the hypothesis that
dynamical systems naturally evolve toward
a critical state in which the edge of chaos
spontaneously generates order. [See Bak, P.
and Chen, K., in Scientific American, Jan. 1991;
also see Packard, N., "Adaptation Toward the
Edge of Chaos," Technical Report, Center for
Complex Systems Research, University of
Illinois, CCSR-88-5, 1988.]
What does this have to do with tuning &
music?
It seems possible (if not probable) that
non-just non-equal-tempered tunings
represent an adaptation toward spontaneous
order generated by the non-linear dynamical
systems used in so many other musical
cultures (i.e., two- and three-dimensional
physical oscillatory systems: drums,
flat or curved metal plates, non-linearly
coupled oscillators like those used in
parts of Africa in conjunction with resonant
strings, etc).
Interestingly enough, throwing away the
Fourier transform does not mean a loss
of predictive power. Many other analytic
models for acoustic systems exist: Walsh
transforms, Daubechies wavelets, Gabor's
acoustic quantum, and even more recent
non-linear mathematical transforms such
as the slope transform [See Maragos, P.,
"Slope Transforms: Theory and Application
to Nonlinear Signal Processing," IEEE
Trans. Sig. Proc., 43(4), April 1995, pp.
864-877.]
The fact that so astute and insightful
a thinker as John Chalmers could fall
into the trap of looking at all tuning
systems and physical oscillators through
the narrow distorting lens of the
Fourier transform is an indication of
the power of the Fourier mindset to
brainwash the unwary. While the Fourier
transform is a marvellous mathematical tool,
it does not describe all of acoustic reality--
only a small part of it.
Thus Charles Lucy's doubts about the
universal value of the FFT are well
founded, and the attacks he has suffered
for voicing these doubts in this forum
are an indication of just how thoroughly
the Fourier mindset can blind us to the
wonderfully complex nature of real
instruments, real tunings and real music
in the real world.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Tue, 31 Oct 1995 20:59 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id KAA08873; Tue, 31 Oct 1995 10:58:57 -0800
Date: Tue, 31 Oct 1995 10:58:57 -0800
Message-Id:
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

11/1/1995 10:05:06 AM
From: mclaren
Subject: Music & western science
---
On encountering the interesting book
"Measure for measure: a musical history
of science," by Thomas Levenson, one
passage in particular caught my eye:
"Music and science have been intertwined in
Western thinking from the moment of their
shared origins, of course: the first even
vaguely scientific theory of the universe
was a musical one, Pythagoras' arrangement
of the planets on the scaffolding of his musical
intervals, with every heavenly body sounding
out its note in what became known as the music
of the spheres." [Levenson, T., 1994, pg. 13]
Even though this is probably incorrect (the
Babylonians, Egyptians & Sumerians likely
originated most of the musical and geometric
discoveries attributed to Pythagoras), Levenson
makes a cogent point.
To a large extent the current schism in music can be
described by the three stages of Western science:
Newtonian science, quantum theory and nonlinear
dynamics.
The Newtonian model of the universe is a giant
clockwork mechanism. Harmonic cycles naturally
arise from such a scheme: the well-known
example in elementary physics textbooks of
planetary orbits as clocks naturally suggests the
idea of ratios between oscillating cycles of
both planetary and (inside the atom) electron-
orbital motion.
In the macroscopic everyday world, tides,
rates of chemical reaction, the compounding
increase in velocity produced by uniform Newtonian
acceleration, as well as the densities of
Maxwell's electric and magnetic fields as a
function of distance from the charge center all
produce sets of ratios. In a Newtonian universe, it's
hard to escape from ratios--and many of them
involve small integers.
Such a worldview inevitably tilts toward just intonation.
This bias is not necessarily conscious. It is so pervasive
that it often shows up as an unconscious or even
as a subsconscious assumption--a "well, of
course...obviously" set of musical axioms
from which all subsequent musical theorems derive.
In the quantum universe, however, particles are
replaced by probability waves--exotic
critters often called "wavicles." The best way
to deal with a quantum universe is a pragmatic
approach: sum the probabilities, assume the
most likely interaction, calculate the likely result.
This sort of approach suggests equal temperament--
not a perfect intonation, but given the pragmatic
realities (and the uncertainties in performed
pitch and actual tuning) the best compromise.
Chaos theory views the universe as a playground
for nonlinear dynamics. Here, bifurcation and
period-doubling render predictions useless even
when made by the most powerful computers:
an *infinite* number of digits is necessary
to represent initial conditions accurately.
In the real universe of nonlinear mechanics,
planetary orbits grow chaotic and cannot be
used as celestial clocks over more than a few
hundred million years. (See "Newton's Clock,"
Ivars Peterson, 1994.)
This view of the universe stresses the nonlinearity
of real physical processes and naturally gives rise
to non-just non-equal-tempered tunings (that is,
tunings not generated either by Nth roots of K or
by ratios of integers). These tunings are just
as "natural" as the harmonic series...yet such tunings
are profoundly alien to Western music.
It occurs to me that a good deal of the friction on
this tuning forum between advocates of this or that
tuning system derives from a deeper cognitive
dissonance twixt contrary worldviews. One of the implicit
goals of the JI crowd seems to be a clear distinction
between consonance and dissonance: this implies, presumably,
that consonance can be unambiguously defined as
*sensory* consonance, and that more complex notions
like concordance and ambisonance can be derived
directly therefrom. In a universe thus ordered,
the JI view is that of a cosmos capable of balance,
rationality, harmony and what the Hellenes called
"taxis," along with the related concept of "logos"
(which only very rarely means "word"; more often
"logos" means "underlying order behind," as in
meteorology = "the underlying order behind the
sky.")
The main goal of the equal-tempered crowd, by contrast,
seems to be "to get something that works with usable
instruments." Equal temperament aficionados stress
the ease of modulation, the ready-to-use simplicity of
their tunings. No infinite sea of commas, no troubling
hard-to-classify intervals like the 11/9 or the 9/7.
This accords with a subconscious view of the universe
as a place in which uncertainty and chance conspire to
defeat efforts to attain harmony, balance, simplicity:
instead, the best we can hope for (according to this
view) is a workable compromise.
The chaos-theory worldview is by far the most radical.
It's one that's still working its way through our culture.
While the Newtonian cosmos produced Baroque music
and Christopher Wren's architecture, and the quantum
worldview produced modernism and Bauhaus glass-cube
monopitch-roof architecture, the chaos-theory universe hasn't
yet made its full impact felt on art and music and
literature. A few compositions like Bruno de Gazio's
algorithmic works of the early 1990s and Mark Trayle's
Mattel Power Glove compositions have filtered into
our consciousness...but by and large the *Weltanschauung*
suggested by chaos theory seems hallucinogenic to
Western artists and composers and writers. The idea
that the universe is a place shaped by violent, complex,
unexpected events which grow out of microscopic
chance events...well, it's not a comfortable one.
The notion of huge effects blossoming from trivial causes
is not something with which Aristotelian dramatic
theory is well equipped to cope. It's as though Oedipus
and his entire family were to die horribly from infections
caused by scratching mosquito bites(!) In music, however,
the seeds of this kind of exponential and uncontrolled
growth of emergent structure have always been nascent
in Western tradition. Ever since composers began to
generate huge compositions from small cellular
motifs, the notion of order boiling out of chaos seems
to have lurked just outside the peripheral vision of
Western music theory.
Of course, non-just non-equal-tempered tunings
are particularly alien to our (read: white European)
concept of music.
And so it's fascinating to note the historical Western
response to the Indonesian gamelan, which uses a
classic n-j n-e-t tuning (in fact, each gamelan
uses a different one).
The first time a Western composer appears to have
encountered the gamelan was when Debussy heard
one at the Paris Exposition in 1889. He was struck
most forcefully by the rhythms, which he ecstatically
described as "complex enough to put the finest
Western composers to shame" (or words to that effect;
this is from memory).
Mantle Hood's importation of a gamelan in 1956 appears
to have sparked Lou Harrison's interest, and a general
American gamelan movement--ironically founded on
tunings using just intonation. As Marc Perlman has
pointed out, this is a radical departure from actual
Javanese/Balinese tuning practice... And it indicates
just how completely *unable* Western composers are
to assimilate the Javanese tuning in its *own terms.*
Indeed, Lou Harrison himself admitted to being
terrified of the non-just non-equal-tempered intervals
of "slippery slendro;" without the Western rationalistic
landmark of small integers, he found himself at sea.
And finally, the digital signal processing compositions
of the 80s and early 90s produced with the spectra
and tones of gamelans (for example, Robert Valin's
"Tat tvam asi," 1990, UMUS CD "Bali In Montreal,"
UMM CD 104) again emphasize Fourier manipulations and
transformations. *Again* the Western composer is
reduced to grasping at harmonics and integer-ratio
frequencies *even* when manipulating the raw
non-integer, inharmonic, non-just non-equal-tempered
partials and spectra of Javanese/Balinese
gamelan: *again,* there is a complete inability to
incorporate the gamelan worldview into the composer's
milieu and assimilate it as part of Western compositional
process. Instead, the Javanese n-j n-e-t tuning and
chaos-theory worldview of clashing rhythms producing
a mysteriously regular emergent order can *only* be
assimilated by the Western composer/theorist
if *first* coated with the antibodies of Fourier theory
and harmonic overtones.
Thus it's fascinating to observe the clashes between
these three factions on this tuning forum. My
observations concerning non-just non-equal-tempered
tunings matched to n-j n-e-t additive-synthesis timbres
have provoked incomprehension, with some outright
hostility and no little puzzlement thrown in; meanwhile,
the main hot spot seems to be the flash point between
equal temperament advocates and JI enthusiasts.
This is particularly revealing because it shows not
only the tremendously long lead time for new ideas
to percolate from the sciences into the arts, but it
also clearly demonstrates the enormous staying power of
classic worldviews. Many writers on just intonation
evoke a view of Apollonian poise and balance, and a
yearning for perfect order. Indeed, the title of one
of the best current compilation series of JI music is in
itself revealing: "Rational Music For An Irrational
World." A vision of literally classical order in a disarrayed
universe. And (also revealingly) JI composers seem
to have a fondness for classical Hellenic subject
matter: from Partch's exceptional series of settings
of Greek drama to Fonville's setting of poems by
Sappho, the nostalgic quest for order and balance
harks back to the Italian Renaissance, the early part of
the 19th century in England, and the early 20s
of this century in England and America. In such a
worldview, Keats' Greek vase is the ideal: "Heard melodies
are sweet, but those unheard/are sweeter; therefore,
ye soft pipes, play on;/ not to the sensual ear, but,
more endeared,/Pipe to the spirit ditties of no tone..."
[Keats, John, "Ode On A Grecian Urn," lines 11-14]
(Sounds almost as though Keats yearned for a yet-
unheard xenharmonic music far outside the 19th
century 12-tone equal tempered scheme of things...)
Modernism rarely evokes such longings. Instead, the
emphasis in modernist music is often on statistical
and probabilistic effects. From the regular distribution
of pitch classes in the Second Viennese School to the
thermodynamically- and quantum-theory inspired
stochastic sound-clouds of Xenakis, much modernist
music might almost be called "quantum probability-
clouds made audible." Subsequent refinement of these
procedures in algorithmic composition programs
changed the emphasis, but not the essential inspiration--
nor the worldview implied.
And thus the best of modernist compositions conjure
up for me a statistically determined universe of
strange and terrifying beauty...and notably one in which
tuning is a secondary consideration. Modernist
music appears to have emphasized the processes by
which pitches were *ordered,* rather than by which they
were *derived.*
Perhaps the existence of this forum signals a shift
toward the third or nonlinear worldview. As the ideas
of chaos theory and complexity theory seep
into our culture, the notion of emergent order generated
at the edge of nonlinear musical processes becomes
more "natural" (the single word most fiercely
argued over on this forum, and perhaps the one word
used by the largest number of subscribers with the
largest numbers of different meanings) and more
acceptable.
Pushing forward, it's hard to see where this worldview
might take music... The terrain ahead is indeed
alien. One imagines algorithmic compositions generated
by nonlinear processes in which even the tuning is
produced at run-time, and is a different non-just non-
equal-tempered set of pitches in each performance.
On the level of the microstructure of musical tones,
Sethares', Pierce's, Carlos', Dashow's and (yes) my
notion of matching partials to tuning raises many
possibilities for hierarchical order in n-j n-e-t
compositions: notes whose overtone structure changes
kaleidoscopically as the pitches run through various
timbral strange attractors. Jean-Claude Risset, Paul Lansky,
John Chowning, James Dashow, William Schottstaedt,
Richard Karpen, Mark Trayle, Cindy McTee, Richard Boulanger,
Hugh Davies, Jonathan Harvey, Warren Burt, William Sethares
and others have already produced compositions which
give a glimpse of this brave new musical world: and they
are indeed breathtakingly beautiful.
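For readers who want something concrete, here is a
deliberately simple sketch (in Python; my own illustration,
not anyone's actual piece) of one version of the
partial-matching idea: remap the harmonic series so that
the "octave" of the spectrum lands on an arbitrary
pseudo-octave A instead of on 2/1.

from math import log2

def matched_partials(f0, pseudo_octave, n_partials=8):
    # Remap harmonic partial n from f0*n to f0 * A**log2(n), so that the
    # spectrum repeats at the tuning's pseudo-octave A rather than at 2/1.
    return [round(f0 * pseudo_octave ** log2(n), 2) for n in range(1, n_partials + 1)]

print(matched_partials(220.0, 2.0))   # A = 2 gives ordinary harmonic partials
print(matched_partials(220.0, 2.1))   # the same partials stretched to a 2.1/1 pseudo-octave

With A = 2 the formula returns the plain harmonic series;
any other A yields an inharmonic set of partials matched
to a tuning that repeats at A rather than at the octave.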
Still, very little work has been done in this area. To quote
Ivor Darreg, "It will require the work of many composers
for many years to map out the vastness of xenharmonic
territory."
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Wed, 1 Nov 1995 20:55 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id KAA19194; Wed, 1 Nov 1995 10:55:39 -0800
Date: Wed, 1 Nov 1995 10:55:39 -0800
Message-Id: <199511011848.AA22720@net4you.co.at>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

11/2/1995 8:38:00 AM
From: mclaren
Subject: The Gelfond-Schneider Theorem and
non-just non-equal-tempered scales
---
It occurs to me that n-j n-e-t scales undoubtedly
seem more mysterious than either just or equal
tempered tunings because the mathematical
basis of n-j n-e-t scales is not as obvious.
This post attempts to clarify that mathematical
basis.
First, a word about traditional tunings:
If we describe the scale steps of any given
tuning as a number between 0 and 1, the
mathematical basis of just tunings is simple
and straightforward: k[n] = A/B where both
A and B are integers. The only ambiguity here
is the question of whether or not A and B are
"small." This is clearly a matter of personal
taste. John Chalmers and Your Humble E-Mail
Correspondent consider many of Erv Wilson's
CPS tunings to be examples of just intonation, and
thus made up of the ratio of "small" integers:
however, many of those integers are not small
in the usual sense defined by Partch, Doty,
Johnston, et al. For example, the [1,7, 19, 37]
hexany consists of ratios of numbers to the
generator 7*19*37 = 4921. This number is
"small" compared to a googol, or to the
number of atoms in the Local Cluster of
galaxies: but it is large compared to, say,
31 or 43. Thus, some tuning theorists
would consider this Wilson CPS tuning *not*
to be a just tuning, while others *would.*
It is a matter of taste.
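For concreteness, here is a minimal sketch (Python) of one
common way to realize a 2-out-of-4 CPS: form all pairwise
products of the four factors, pick one product as the 1/1,
and octave-reduce the rest. The normalization to the 1*7
product is my own arbitrary choice and may differ from
Wilson's own presentation.

from fractions import Fraction
from itertools import combinations

def octave_reduce(x):
    # Fold a ratio into the octave [1, 2).
    while x >= 2:
        x /= 2
    while x < 1:
        x *= 2
    return x

factors = [1, 7, 19, 37]
products = [a * b for a, b in combinations(factors, 2)]   # 7, 19, 37, 133, 259, 703
hexany = sorted(octave_reduce(Fraction(p, products[0])) for p in products)
print([str(r) for r in hexany])   # ['1', '37/32', '19/16', '37/28', '19/14', '703/448']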
For equal-tempered tunings, the scale-
step is described with equal simplicity:
k[n] = 2^[n/M]
In this case every scale-step of every
equal-tempered tuning (apart from the unison
and the octave itself) is described by
an irrational number.
So far, so good. Just tunings have scale
steps described by ratios of integers:
equal-tempered tunings have scale steps
described by irrational numbers.
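(To make the contrast concrete, a short Python sketch;
the cents helper is simply 1200 cents to the 2/1 octave,
and the fifth comparison is my own example.)

from math import log2
from fractions import Fraction

def cents(ratio):
    # Interval size in cents: 1200 cents to the 2/1 octave.
    return 1200 * log2(ratio)

just_fifth = Fraction(3, 2)      # a just scale step: a ratio of integers
et_fifth = 2 ** (7 / 12)         # an equal-tempered step, 2^(n/M): an irrational number
print(round(cents(just_fifth), 3), round(cents(et_fifth), 3))   # 701.955 700.0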
The irrational numbers that arise in equal
temperament are algebraic irrationals: real roots
of algebraic (polynomial) equations with integer
coefficients. An integer, or a ratio of integers,
is the solution of an ordinary arithmetic equation.
Thus tunings can be defined by the class
of equations to which the ratios which define
the scale-steps of the tuning form solutions.
For example:
Algebraic equations are polynomial equations
with integer coefficients and integer
powers: say, A + Bx^2 + Cx^5 = 0.
Ordinary arithmetic equations involve only integers
and the four arithmetic operations: X = A/C + D or X = A*B, etc.
(You might have noticed that my definition
of integers is circular. This is because the
question of what an integer is happens to
be a very deep one. It is indeed very,
*very* difficult to define an integer in
abstract terms without using an equation
which involves integers. If memory serves,
the Bourbaki collective devoted an entire
volume to the definition of an integer.)
This gives us a handle on what is meant
by a "non-just non-equal-tempered scale."
Clearly, if the scale-steps of a just scale
are *ratios of integers* and if the scale-steps of
an equal-tempered scale are *algebraic irrationals,*
then one natural way to guarantee a non-just non-
equal-tempered scale is to take scale-steps given by
transcendental numbers.
What is a transcendental number?
How is such a number defined?
Is there a general mathematical procedure
for obtaining transcendental numbers?
This, as it happens, is also a deep question.
In fact it is often *VERY* difficult to *prove*
mathematically that a given number is
transcendental.
For instance, e^pi is known to be
transcendental--but pi^e has never
been proven transcendental (though most
mathematicians believe it to be).
In fact the number pi itself was not
proven transcendental until 1882.
(F. Lindemann, in the paper "Ueber die
Zahl Pi," took that honor.)
One of the perplexities which attend
transcendental numbers is the fact
that while there is a general criterion
for determining whether a number is an
integer or a ratio of integers (Does it satisfy an
ordinary arithmetic equation?) and for determining
whether a number is an algebraic irrational (Does it
*NOT* satisfy an ordinary arithmetic equation while
still being a real root of an algebraic equation?) there
appears to be NO general criterion for
determining whether a number is
transcendental.
Consequently, many different (non-obvious,
counterintuitive) equations generate
transcendental numbers.
For example, the number i^i is
transcendental--in fact it is equal to
e^[-pi/2] = 0.2078795. ("i" here refers
to the square root of -1.)
This sounds absolutely insane, but it
happens to be true...and provable!
Like quantum mechanics, many of
the results of mathematics follow
the rule: "If it makes intuitive sense
to you, then you don't really understand it."
The imaginary part of log(i) is also
transcendental: log(i) = i*pi/2.
Thus, while there's no general
formula or method for generating
transcendental numbers, quite a
few transcendental numbers have
been discovered over the last few
millennia...mainly by chance.
Here are a few:
Liouville numbers have been
proven transcendental. They were
discovered in 1851 (much later
than pi or e) and are given by the
formula: the sum from k = 1 to
infinity of a[k]*r^[-k!], where
"!" means "factorial" and each a[k]
is an integer twixt 0 and r - 1.
There are infinitely many
Liouville numbers. For instance,
if all a[k] = 1 and base r = 10,
we get 1/10 + 1/(10^[1*2]) +
1/(10^[1*2*3]) + ... =
0.110001000000000000000001000...
Depending on how the a[k] are
chosen, many different Liouville
numbers arise. One might pick the
a[k] to be the successive digits of the
decimal expansion of e, or pi,
or of an irrational number such
as 2^[1/3], etc.
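Here's a minimal sketch (Python, using the decimal module
so the 1 at the 24th decimal place doesn't vanish into
floating-point fuzz) of the classic case with all a[k] = 1
and r = 10:

from decimal import Decimal, getcontext
from math import factorial

getcontext().prec = 40   # enough digits to see the 1s at decimal places 1, 2, 6 and 24

def liouville_partial(terms=4, r=10):
    # Partial sum of  sum over k >= 1 of a[k] * r**(-k!),  with every a[k] = 1.
    return sum(Decimal(r) ** -factorial(k) for k in range(1, terms + 1))

print(liouville_partial())   # 0.110001000000000000000001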
Euler's constant gamma is widely believed to be
transcendental, though this has never been proven
(it has not even been proven irrational). It's given
by the limit as n goes to infinity of the series
1 + 1/2 + 1/3 + 1/4 + ... + 1/n - ln(n).
Gamma = 0.577215...
Catalan's constant is another lesser-known
number. It has not yet been proven
transcendental, but mathematicians
widely believe it to be. It's given by
the formula G = sum of (-1)^k/(2k+ 1)^2
= 1 - 1/9 + 1/25 - 1/49...
Champernowne's number has actually been proven
transcendental (by Mahler, in 1937). It is constructed
by concatenating the digits of the
positive integers: C =
0.1234567891011121314151617181920...
As mentioned in my series of posts on
generating non-just non-equal-tempered
scales last year, the zeta function also yields
transcendental numbers. However, my
post did not specify where the zeta function
must be evaluated: zeta(K) can be real or
complex, since K can be real or complex, and
for K not equal to a positive even integer the
transcendence of zeta(K) is in general an open
question. However this still leaves us with
zeta(2), zeta(4), zeta(6) and the other even
arguments, all provably transcendental (each is
a rational multiple of a power of pi); zeta(3) is
known to be irrational, though its transcendence
remains unproven.
Of particular interest in the construction
of non-just non-equal-tempered scales is
the Gelfond-Schneider theorem. According
to this theorem, any number of the form a^b is
transcendental where a and b are algebraic (a
<> 0, a <> 1) and b is not a rational number.
This formula spews out an infinite number
of transcendental numbers, since (for example)
Hilbert's number, 2^[sqrt(2)] is clearly
transcendental, ditto 2^[sqrt(5)], 3^[fifth root of 7],
etc.
Feigenbaum numbers are also widely believed to be
transcendental, though this too has yet to be proven.
These numbers arise from chaos theory and
are related to properties of dynamical systems
which exhibit period-doubling and other chaotic
behaviour. The Feigenbaum number is
4.66920160910299067185320382046620161725...
Alas, equations involving transcendental numbers
do not necessarily produce solutions which are
transcendental. e^[i*pi] = -1, an integer, while (e^[i*pi]) +
2*phi = sqrt(5) - 2, an irrational number.
Clearly phi, the Golden Ratio, is not transcendental
since it is the solution of an algebraic equation
(phi^2 + phi - 1 = 0): explicitly, phi = [sqrt(5) - 1]/2.
One of my own amateur mathematical discoveries
(I've not seen it published elsewhere, at any rate)
is an infinite number series given by the iterated absolute
log of K, where K is an integer. I believe (but cannot
prove) that the scale-steps given by the terms of
this series form a non-just non-equal-tempered
scale.
This is a peculiar and interesting series of numbers
since the terms oscillate between 0 and 1. The first
term is transcendental, but I have not been able to
prove that the succeeding terms are (or are not)
transcendental.
For instance, the first eleven terms of the iterated
abs log (base 10) of 2 are:
i[1] = abs(log(2)) = 0.30103...
i[2] = abs(log(i[1])) = 0.5213902...
i[3] = abs(log(i[2])) = 0.2828372...
i[4] = abs(log(i[3])) = 0.5484636...
i[5] = abs(log(i[4])) = 0.2608521...
i[6] = abs(log(i[5])) = 0.5836057...
i[7] = abs(log(i[6])) = 0.2338806...
i[8] = abs(log(i[7])) = 0.6310057...
i[9] = abs(log(i[8])) = 0.1999666...
i[10] = abs(log(i[9])) = 0.6990424...
i[11] = abs(log(i[10])) = 0.1554964...
and so on.
There does not appear to be an obvious
pattern to the terms.
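For anyone who wants to play with the series, a minimal
sketch (Python, assuming base-10 logarithms, as the
opening value 0.30103 = log10(2) implies):

from math import log10

def iterated_abs_log(k, terms=11):
    # i[1] = |log10(k)|, and thereafter i[n+1] = |log10(i[n])|.
    values, x = [], float(k)
    for _ in range(terms):
        x = abs(log10(x))
        values.append(round(x, 7))
    return values

print(iterated_abs_log(2))   # [0.30103, 0.5213902, 0.2828372, 0.5484636, ...]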
If one were so inclined, one might call
this the McLaren series: this is surely
the first post on this tuning forum to
feature an original mathematical
discovery, albeit a trivial one.
As can be seen, all of the above methods
for generating transcendental numbers
can produce an infinite variety of 'em.
Depending on the pattern of generators
(the numbers you plug into the various
equations to produce transcendental
numbers), you get an endless variety of
non-just non-equal-tempered scales.
The choice of whether to terminate the
series with a 2/1 or not is a matter
of taste. (One might call it "terminating
the series with extreme prejudice.")
In that case, one obtains a non-just
non-equal-tempered scale which repeats
at the octave. Choosing another termination
integer (or irrational) would produce a
non-just non-equal-tempered scale without
octaves.
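By way of a minimal sketch (Python; the particular
Gelfond-Schneider generators and the 2/1 termination are
my own arbitrary choices), one could build a small
octave-repeating n-j n-e-t scale like this:

from math import sqrt, log2

def octave_reduce(x):
    # Fold a positive number into the octave [1, 2); dividing a transcendental
    # number by a power of 2 leaves it transcendental.
    while x >= 2.0:
        x /= 2.0
    while x < 1.0:
        x *= 2.0
    return x

# A few numbers of the form a**b with a, b algebraic and b irrational --
# each transcendental by the Gelfond-Schneider theorem.
generators = [2 ** sqrt(2), 2 ** sqrt(5), 3 ** (7 ** 0.2), 5 ** sqrt(3)]
scale = sorted(octave_reduce(g) for g in generators) + [2.0]   # terminate at the 2/1
print([round(1200 * log2(s), 1) for s in scale])   # the scale degrees, in cents

Swap the terminating 2.0 for some other number and the
same handful of generators yields a non-octave version
of the scale.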
As can readily be seen, these are the obverse
of the equal tempered class of octave and
non-octave scales. My limited experiments
with non-just non-equal-tempered octave
and n-j n-e-t non-octave scales appear to
show that there is a marked difference in "sound"
between the two classes of tunings. As Gary
Morrison so aptly put it, "non-octave scales
sound like rich thick chocolate milk shakes."
They are very exotic and harmonically rich,
and seem to have a sonic colouration which
lends an eldritch quality to the compositions
one produces in such scales.
For n-j n-e-t non-octave scales, the same
appears to be true, only more so. They are
among the most exotic and sonically luxuriant
of all tunings in my experience, and an
extraordinary realm for new music exploration.
--mclaren


Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Thu, 2 Nov 1995 18:40 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id IAA29513; Thu, 2 Nov 1995 08:40:23 -0800
Date: Thu, 2 Nov 1995 08:40:23 -0800
Message-Id: <9511020838.aa10245@cyber.cyber.net>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

11/3/1995 9:00:50 AM
From: mclaren
Subject: Larry Polansky & adaptive tuning
---
Long ago in a galaxy far away, Larry Polansky
wrote an article entitled "Paratactical tuning."
In it he posited a system of dynamic tuning which
adaptively changes the size of interval classes
depending on musical context.
While in one sense this is a sophisticated
software method for facilitating modulation
in JI (similar to such arrangements as
Harold Waage's logic-gate just intonation system
which detected fingering patterns and retuned
intervals so as to produce maximally beatless
chords regardless of key)...in another sense
Larry was getting at something much deeper.
Paratactical tuning is far more than mere dynamic
retuning to produce the "best" chords according
to the location of the current 1/1, because Larry P.
leaves it pretty much open-ended as to what
the criteria are. They could be "which 1/1 are we
on?" but they could easily be something else. Once
it's all in software, the rule-set that determines
how JI intervals shift can respond to a lot more than
local key center...for instance, the software could
respond to the player's dynamics, or the shape of
the melody line, or the nature of the harmonies
being played. More: Larry suggests that the
criteria for the rule-set governing the adaptive
tuning might be open-ended as well--possibly even
dynamic.
One could imagine, for instance, an adaptive
tuning which morphs JI intervals from "plaintive"
to "luminous" or from "tense" to "resolute." (As
William Schottstaedt has so dextrously done
in an intuitive fashion in the fourth movement
of his composition "Water Music," albeit in
that case moving from 13-TET to Pythagorean.
If you haven't heard this composition, you're
missing a truly spectacular piece of music.)
As a first advance, this leads directly to
a systematization of Lou Harrison's "free style"
of composition, in which successive intervals
are tuned not with reference to a fixed 1/1, but
with respect to the previous note. Clearly such a
practice produces a much more complex system of
just tonality--but, as Boomsliter and Creel pointed
out, this might also more closely mirror the way
the average person hears music. Our hearing appears
to be relative rather than centered on an absolute
1/1 (except for those rare listeners with absolute
pitch), if the psychoacoustic evidence is to be
credited.
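A toy Python sketch of the difference (the stacked fifths
are my own example, not Harrison's): tune each new note as
a just ratio of the *previous* note and the pitch drifts
away from what the same note would be if reckoned from a
fixed 1/1.

from fractions import Fraction

# "Free style": four just fifths, each tuned 3/2 above the previous note.
pitch = Fraction(1, 1)
for _ in range(4):
    pitch *= Fraction(3, 2)

# The "same" note reckoned from a fixed 1/1: two octaves plus a 5/4 major third.
fixed = Fraction(2, 1) ** 2 * Fraction(5, 4)

print(pitch, fixed, pitch / fixed)   # 81/16  5  81/80 -- a syntonic comma of drift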
However, Larry's idea of adaptive tuning goes
much further than simply the B&C idea of the
"the long pattern hypothesis." While useful,
B&C's concept of melodic sections moving between
separate and distinct 1/1s is in itself limiting
because it posits simplistic patterns of linear
movement twixt straightforward JI melodic
contours, overtone structures, etc.
When combined with the concept of morphological
metrics, Larry Polansky's paratactical (or adaptive)
tuning really begins to open a *lot* of new doors.
For instance, changing one morphological metric
into another introduces (on the level of the melodic
phrase) a complex set of requirements between
vertical and horizontal interval-relations. As
Larry so insightfully realized, this is an ideal
case for adaptive tuning: and with sufficiently
complex software, paratactical tuning does
indeed seem the ideal solution to this ever-present
tension twixt vertical and horizontal relationships
in the context of changing from one vertical or
horizontal morph to another.
But when you throw in the concept of changing
from one *timbral* morphological metric to another
at the same time....! Then, adaptive tuning--in
this case extended downward into the micro-level
of the individual overtone--is the *only* practical
solution to the complexities introduced. Because
of the extraordinary explosion of interactions
between timbre, vertical and sequential structure
in this case, a set of adaptively tuned overtones
offers by far the simplest solution. (Jean-Claude
Risset and John Chowning and Bill Sethares and
James Dashow have produced some compositions
which use elements of this technique: the pieces
are astonishingly beautiful, yet only a preliminary
step toward the total integration of timbre with
harmony. Because of the incredibly cumbersome nature
of "doing it all by hand" in Csound, further advances
along this line seem likely to be made only with
the aid of some sort of automated morphological
metric software.)
However, there's even more to Larry's idea than this.
At the highest level, adaptive tuning can be
thought of as a way to generate a single "composite"
instrument from a number of subinstruments. If
you think of each dynamically-retuned section
of the score as a specific subinstrument, the real
depth of Larry's idea becomes clear. In order to
wander over the entire solution space you'd need
huge numbers of actual physical instruments--not
a reasonable solution. Thus Larry substitutes *virtual*
instruments. And adaptive tuning shows its true
power by, in effect, switching between those virtual
instruments instantaneously: the net result is
infinitely more efficient & flexible than either the
Partch solution of limiting the composition to a
single key, or the Harrison solution of requiring the
performer to be painstakingly exact in moving from one
just ratio to another as well as keep in his/her head
where the melody has been and where it's going
to (in terms of moving 1/1s).
Larry has written a composition that puts some of
these ideas into practice. Due to be premiered
in Japan, it looks to this old score-reader like
another remarkably beautiful JI composition, yet
(as mentioned) it's more than that... Larry's
upcoming piece promises to significantly
advance the state of the microtonal art.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Fri, 3 Nov 1995 19:14 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id JAA00103; Fri, 3 Nov 1995 09:14:02 -0800
Date: Fri, 3 Nov 1995 09:14:02 -0800
Message-Id:
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

11/4/1995 11:53:34 AM
From: mclaren
Subject: Bach wars
---
Many forum subscribers have added their 2 cents
to the long-running "Bach wars." For my part,
permit me to say that the posts by all concerned
were absolutely superb. Full of detailed quotes,
specific references, cogent reasoning. Paul Hahn,
Gary Morrison, Manuel Op de Coul, Aleksander Frosztega,
Johnny Reinhard and Helmut Wabnig did a marvellous
job of distilling the references and citing competing
sources.
As to who is right or wrong, that is not nearly
as interesting as the citations themselves. This
controversy, raked over in admirable detail, gives
*all* of us the references and direct quotes required
to decide for ourselves. That should be the goal of
scholarship...and in the "Bach wars" series of posts,
all subscribers concerned have adhered to high
standards of scholarship and logical reasoning.
Unlike so many posts in which sarcasm, appeals
to authority, or sheer naysaying substituted for
a reasoned debate, the "Bach wars" have proven
enormously enlightening. Reading this series of
posts has taught me a good deal about an important
question in the history of tuning. Congratulations
to everyone who posted on the subject. You've all
shown us how interesting and educational this forum
can be at its best.
---
As to the specific question of the "Bach wars" posts--
"What tuning did Bach use?"--it does not behoove me
to speak directly, given the abysmal nature of my
ignorance about the period and the people involved.
However, some general observations seem in order:
[1] "Statisticum radix scientiae malorum." It is my
firm belief that misuse of statistics is the root of all
bad science. (As opposed to pseudo-science, like the
N-rays which destroyed Rene Blondlot's reputation
in 1906 and the abominable E-rays which constitute
such a blot on the credibility of German science
today. E-rays are nothing but dowsing performed on
purported electromagnetic radiations, which radiations
can neither be detected by any known instruments nor
cut off by any known form of shielding. Yet they
"cause cancer." What's the German word for "scam"?)
One of the worst uses of statistics is what I call
"stripmining the noise floor." When you've got too
little data to form a reliable representation of a
statistical universe, or when you've got dribs and
drabs of data collected at time A, time B, time C,
under wildly different conditions and with dubious
controls...you're basically pushing linear parametric
statistical methods beyond their useful limits. You're
trying to statistically analyze noise...trying to
make soup out of dishwater, mathematically speaking.
The result?
Hard numbers that look convincing but turn out to be
"junk science."
My best guess is that the "What tuning did Bach use?"
controversy is undecidable because all participants
concerned are stripmining the noise floor.
---
Let me give some concrete examples:
Statistical analysis of Bach's harpsichord compositions
looks like a reasonable strategy at first glance. However,
Bach's collected harpsichord compositions do not form
a reliably complete representation of an underlying
statistical universe for the following reasons:
[1] Bach wrote his harpsichord compositions over a number
of years. Some were penned in Cothen, some earlier, some
later. If Bach's style of composition changed, this throws
into doubt one of the underlying assumptions of a statistical
analysis: namely, that all samples derive from the same
statistical universe. To use an acoustics analogy, this is
like taking half your measurements of reverberation time
in a closet and half your measurements in Carnegie Hall.
You've got a mixed set of data and you're lumping all the
data points together willy-nilly. A guaranteed shortcut
to "junk science." (A toy numerical illustration of this
kind of lumping appears at the end of this list of objections.)
[2] Bach may have written some compositions for a harpsichord,
others for clavichord. Does the statistical analysis take
this possibility into account? Do scholars know for *certain*
which compositions were written for which instrument? Did
Bach (or his patron, the prince) use different tunings on
different keyboard instruments? Are we *sure*? Are
scholars *sure* that many of Bach's compositions weren't
written to be played on *both* a harpsichord *and* a
clavichord (whichever might have presented itself and
been available) and might thus have represented Bach's attempt
to compose music which sounded good *regardless* of the
particular tuning used?
For example, the clavichord might have used one well temperament,
the harpsichord another--or the clavichord might have used, say,
Kirnberger III, while the harpsichord might have used equal
temperament because harpsichord continuo often had to
accompany an instrumental ensemble at Cothen. (Remember,
Bach's harpsichord concerti were written at Cothen.)
Did the statistical analysis take these issues into account?
If not, why not?
[3] Many assumptions are inevitably implicit in any linear
parametric statistical analysis. In order to calculate r values,
you have to fix parameters and make guesstimates about their
influence and constancy. If you correlate verbal IQ test scores
with length of stay in the US for new immigrants and assume
strong causality, you come to the bizarre conclusion that
emigrating to the United States raises people's
intelligence.
You get this garbage answer out of linear parametric statistics
because you put garbage *into* the equations: namely, garbage
assumptions. Without basis, you assumed two parameters to
be deeply causally connected for the wrong reason, and ran too
far ahead of yourself with the results.
This raises questions about the Bach statistical study. Questions
of *both* causation *and* correlation. What are the *specific* r values
by interval category for Bach's compositions? Do they imply
causation? If so, what *kind* of causation? For a classic
example of garbage science, see the r values buried in the appendix
of Charles Murray's "The Bell Curve." You'll find r values between
0.4 and 0.6. As a rule, an r below about 0.75-0.8 is a sure-fire
indicator of smoke & mirrors. The correlation is so weak that the
researcher had better answer some *very* tough questions or
risk being called slipshod, or worse, a fraud.
So I for one want to know those r values on the Bach statistics.
Did the Bach statistical analysis use linear regression?
Quadratic? Cubic? Least-squares?
What's the mean, median and the standard deviations for the r values
of each interval broken down by year of composition? What do these
profiles tell us about causation? Was multiple regression used on
*different variables*? Were the results compared? What did *that*
say about causation as opposed to correlation or
even mere coincidence (AKA low r values)? Did the
statistical analysis even bother to consider such issues?
If not, I want to know why. Bach may in some compositions have
been interested in exploring unusual dissonances: thus at certain
points in his career his compositions might have *deliberately*
used "bad" intervals in a given well temperament (if indeed he used
a well temperament). But at other times in his career, he might
have been more interested in exploring unusually perfect consonances
in a given well temperament. This would change the intervals
Bach tended to use over time: did the statistical analysis take
this into account?
We know for a fact that Bach's 7th chords were considered wildly
dissonant and highly outre in his day. Therefore it seems
reasonable to posit that he might have systematically explored
sets of intervals unusual for composers of the period. Did the
statistical analysis take this into account? Or did the statistical
analysis arbitrarily assume that Bach would have used the most
consonant (read: beatless) intervals in a given well temperament
most often, and the least consonant intervals least often?
More complexly, Bach (being the genius he was) might have switched
his interests constantly, exploring one set of unusual dissonances in
one composition, another set of exotic consonances (available only
in a given well temperament) in another composition. Conflating
all of the interval data into a single linear regression would destroy
all of this information and produce *wildly* misleading answers.
In this case, multivariate analysis would be called for. Was it
applied? Was multidimensional statistical analysis used? Did
the researcher test for correlation between (say) timbre *and*
interval, or only twixt interval and interval?
[4] Bach might have preferred certain intervals (even if he used
equal temperament) for numerological or ecclesiastical reasons.
We know with surety that he considered C minor and D minor "special"
keys. Only a few of his compositions use these keys: they are
statistically underrepresented. Bach clearly invested these keys
with some special significance, because he reserved these keys
for his most ambitious works. The Chaconne (of which Bach wrote
exactly ONE), for instance, uses d minor: so does the famous prelude
and fugue. The passacaglia & fugue (again Bach wrote only ONE
passacaglia) uses c minor...and so on.
Does the statistical analysis take this into account? Depending
on the well temperament (if such a tuning was indeed used),
d minor and c minor might well have exhibited special intervallic
properties. We can be reasonably certain from statistical analysis,
for example, that a number of Buxtehude's later compositions use
keys which when transposed down on the meantone Luneborg organ offer
unusually consonant sets of intervals.
Did the person who performed the statistical analysis take this
possibility into account? Did s/he run a separate analysis on
this assumption? Were the results compared with the straightforward
statistical analysis?
If not, why not?
Bach could have had many reasons for using certain intervals.
We know, for example, that Bach was numerologically inclined.
In one of his chorales the melody enters 10 times, representing
the 10 commandments: other examples abound. Does the statistical
analysis of Bach's use of intervals take into account the possibility
that he might well have used a given interval-set for reasons *other*
than considerations of acoustic consonance and dissonance?
This same objection applies to fugue subjects and counter-subjects.
One fugue of the 48 deliberately uses all 12 tones of the chromatic
scale in succession, for instance: does the statistical analysis
take into account the effects of such part-writing?
[5] Some of Bach's klavier compositions were originally written
for organ, some are alterations or emendations of works written
for instrumental ensemble, and some are greatly modified versions
of other composers' works--specifically, the "transcriptions" of
pieces by Vivaldi, which are very much more than mere
transcriptions. Does the statistical analysis take this into account?
If not, why not?
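To put some toy numbers behind objection [1] above (a
Python sketch with invented data, not Bach's): two sets of
measurements, each showing a perfect *negative* correlation
between two variables, produce a strong *positive*
correlation the moment they are pooled.

from math import sqrt

def pearson_r(xs, ys):
    # Plain Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two different "statistical universes" -- the closet and Carnegie Hall, if you like.
closet_x, closet_y = [1, 2, 3, 4], [10, 9, 8, 7]
hall_x, hall_y = [11, 12, 13, 14], [20, 19, 18, 17]

print(round(pearson_r(closet_x, closet_y), 2))                      # -1.0
print(round(pearson_r(hall_x, hall_y), 2))                          # -1.0
print(round(pearson_r(closet_x + hall_x, closet_y + hall_y), 2))    # about +0.9 when pooled

And even when an analysis is done honestly, recall that r
must be squared to get the fraction of variance explained:
an r of 0.5 accounts for only a quarter of the variance,
an r of 0.6 for barely more than a third.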
---
Statistical arguments for or against this or that historical trend
fill me with foreboding. They're a fertile breeding ground for "junk
science" (without the researcher *intending* to do junk science, of
course--or even realizing it). So many assumptions and
presuppositions are implicit in any linear parametric statistical
analysis of historical data as almost to force me to proclaim:
"a pox on all historical statistical studies!"
Classic examples of "junk science" from statistics abound
in economics. For instance, those bogus United States GNP
charts going back to 1876--charts which completely ignore the fact
that the U.S. switched from an agrarian economy in the early 19th
century to a steam-driven Bessemer-furnace economy in the late 19th
century to an oil-and-steel-driven machine-tool economy in
the early 20th century to an information-driven service
economy in the late 20th century. What's the net discounted dollar
value of a bushel of tobacco in 1876 compared to the net discounted
dollar value of a megabyte of computer code in 1996? The
question is unanswerable. You're not just comparing apples
and oranges, you're comparing apples and *mu-mesons.* The
question doesn't even make *sense.*
Again, sociological historical studies purporting to show
improvements in "quality of life" as the century progressed
are equally flawed. While in 1880 there were no antibiotics,
it was also standard for a middle-class family to have 2 or 3
live-in servants. If you're a healthy person, would you be
willing to trade lack of antibiotics for being waited on hand
and foot and having your meals cooked for you and your washing
done by a crew of servants? Would this be an overall improvement
or decline in your standard of living, as opposed to today?
The answer isn't obvious to me. Again, apples compared
with oranges. Again, garbage science produced by a misuse of
statistics.
All told, the value of historical statistical studies of Bach's
interval-usage seems at best right on the borderline of
junk science, and at worst little more use than
examining bird entrails.
---
This leaves us with the written historical record.
Does any specific numerical record exist of the frequencies
to which Bach tuned each of the keys on his harpsichord?
Clearly not.
Thus we are left with inadequate data. *Grossly* inadequate
data. Regardless of what tuning you think Bach used, the
fact remains that (unless we want to swim in the VERY
murky waters of statistical historical numerology) we must
fall back on vaguely-worded hearsay testimony about Bach's
tuning.
My standard for this kind of historical hearsay is: would you
convict someone of murder on the basis of this stuff?
In this case, no way. You don't need O.J.'s Dream Team
on this one. The testimony is so weak and so open to
interpretation that even a grand jury would no-bill the
defendant. It wouldn't even get to trial.
Thus my sense here (reading the "Bach wars" posts) is again
that the question is undecidable on the basis of the hearsay
testimony from Bach's relatives and acquaintances.
Many of the quotes supposedly come from "eyewitness"
accounts--but can we be *sure* it was *actually*
an eyewitness account, or was Forkel remembering
long after the event? Or did Forkel miss the incident
entirely, and perhaps have to rely on C.P.E. Bach's
recollection? Or was it one of those "a friend of my
cousin's brother told me he heard someone say..." things,
gussied up in first-person narrative form?
What's that?
Did someone mention "false memory syndrome"...? Meaning:
people tend *not* to remember the event itself, but what
someone else *told* them about it...?
We know *some* generalities with reasonable certainty:
Bach was considered "old-fashioned" during his lifetime,
and his style was out of date long before he reached middle
age. The homophonic, racier, faster-paced "Italian
style" was much more in vogue by the 1720s than Bach's
almost quattrocentric polyphony.
Did this influence what friends and acquaintances
remembered about Bach's tuning? Did they unconsciously
exaggerate the "meantone" quality of Bach's music because
of the old-fashioned nature of his polyphony? Or did they
instead unconsciously redact their memories of his tuning
procedures so as to "modernize" Bach and unwittingly
make his music more fashionable to fit in with the new
music everyone was used to?
I don't know the answer to these questions. Before making
up my mind about "what tuning did Bach use?" I'd sure like
to.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Sun, 5 Nov 1995 04:39 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id SAA29690; Sat, 4 Nov 1995 18:39:15 -0800
Date: Sat, 4 Nov 1995 18:39:15 -0800
Message-Id: <951105023715_71670.2576_HHB54-5@CompuServe.COM>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

11/5/1995 2:30:11 PM
From: mclaren
Subject: academia, jibes, hurt feelings, the search
for truth
---
With his usual insight and common sense, Bill Alves
offered some tart rejoinders to my posts on
psychoacoustics.
As to the question of who is or is not ignorant, it is
sufficient to observe: "Hic ego barbarus sum quia non
intelligor illis." But Alves' point about my "bigoted
statements about academics" is right on the money.
Your Humble E-Mail Correspondent pleads guilty...
with an explanation.
To paraphrase Richard Preston, "Music is a lesser activity
than religion in the sense that we've agreed not to kill each
other but to discuss things." To the extent that any of
my statements about academia or people with PhDs has
hindered the discussion on this forum, or poisoned it
with unnecessary bad feeling, I certainly apologize.
For making bigoted statements about no-talent academics
who try to maintain a brain-dead status quo, however,
I do NOT apologize. I *AM* bigoted against such people.
I intend to *continue* my bigotry against such people,
and wherever possible to deepen and broaden the scope
of my bigotry against these no-talents. I make no apologies
whatsoever for this kind of bigotry. On the contrary: I
*celebrate* this kind of bigotry. Indeed, in an apotheosis
of political incorrectness, I *glory* in my bigotry
against people with big reps and no competence, big
degrees and no talent, big grants and no imagination.
I exult in my capacity to distinguish twixt crap and
the beaux arts. If this is bigotry, *great.* Sounds like
a PLAN!
Lemme at 'em!
Come get some, !@#%#+#@%$ers!
It's hard to say whether the no-talent duffers are in the
majority or the minority in music departments. Many folks
with PhDs in music use a kind of protective coloration: they
keep a low profile, don't make waves, and quietly go about
their radical xenharmonic explorations while giving an
outward appearance of conformity to the Perversions of
New Music ideal. (Namely, that the very model of the modern
composer is one who talks and talks and talks and talks, and
hardly ever produces any music...and then only dribs and drabs
of warmed-over Webern and kludged-up Cage. Above all, ya
gotta have a theory! Theeee-ory! Theeee-ory! Git yer red-hot
theee-ory! Ice cold muuu-sic, red hot theeee-ory!
Can't enjoy the music without a theee-ory!)
Many of the academics who subscribe to this forum are
extraordinarily talented polymaths. For example, Larry
Polansky fills me with awe: this guy not only writes
original & groundbreaking articles, he's not only a terrific
teacher & thesis adviser (by all accounts), he's not only
a fine amateur ethnomusicologist, but he's also a world-class
computer programmer *AND* a startlingly talented composer.
Is there *anything* this guy can't do? Is he in next year's
Olympics? Has he climbed Mount Kilimanjaro yet? Will
he be the first xenharmonist in space?
William Schottstaedt is another example. A top-notch
programmer, able to design entire object-oriented music
languages in a single bound, debug cranky antique computer
systems faster than a speeding bullet, but then he ducks
into a phone booth and becomes...a world-class composer.
Amazing.
David Cope is not only a fine composer, by all accounts a
splendid teacher, an excellent writer (the "New Directions
In Music" books are classics of their kind in each of their
various editions), but he's also a world-class AI music
programmer whose LISP syntactic models of composition
have broken genuinely new ground in the field.
Astounding.
Johnny Reinhard is not only one of the world's great
scholars of microtonality, a bibliomane with the most
impressive library of microtonal scores on the planet,
an administrative wizard who single-handedly forced
the New York concert scene to bow to microtonality,
AND a living link between thousands of xenharmonic
composers, researchers and performers...but Johnny
is *also* the reincarnation of Jimi Hendrix as a
bassoon player. Virtuoso par excellence.
And speaking of universally talented people...
William Alves himself is an extremely impressive guy.
Not only a first-rate JI composer, but a scholar of
tuning history, a crack computer programmer, a DSP
wizard, *and* a fine teacher (by all reports). Another
polymath.
To continue in this vein would embarrass *a great many*
PhDs who subscribe to this forum... Suffice it to say that
this tuning forum represents an extraordinary confluence
of exceptionally talented academics. It's fashionable to claim
that "the day of the Renaissance man (woman? person?) is
past." Whoever believes this hasn't logged onto the tuning
forum.
Having said all that, permit me to add not my own words,
but those of Norbert Wiener:
"What sometimes enrages me and always disappoints and
grieves me is the preference of great schools of learning
for the derivative as opposed to the original, for the
conventional and thin which can be duplicated in many
copies rather than the new and powerful, and for arid
correctness and limitation of scope and method rather
than for universal newness and beauty, wherever it
may be seen." [Weiner, Norbert, "The Human Use of
Human Beings," 2nd edition., 1954, pg. 135]
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Mon, 6 Nov 1995 00:48 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id OAA10844; Sun, 5 Nov 1995 14:48:24 -0800
Date: Sun, 5 Nov 1995 14:48:24 -0800
Message-Id: <199511052247.RAA08615@freenet3.carleton.ca>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

11/6/1995 9:18:32 AM
From: mclaren
Subject: RATED NC-17: ALL COMPOSERS OVER AGE
17 MUST BE ACCOMPANIED BY THE GHOST OF JOHN CAGE
---
"Modernism is now being seriously challenged for the first
time in almost a century or more. Which, considering the
really awful degree of narcissism, nihilism, inanity and
self-indulgence that late modernism has allowed itself,
is probably the best thing that could happen to it. What
has been permanently lost is the sense of the absolute
that the modernist movement once gave to its loyal
followers. And to that we can say: good riddance.
We are none of us now--either artists or critics or the
public--quite as susceptible as we once were to the
idea that at a given moment in time, history ordains
that one and only one style, one vision, one way of
making art or one way of thinking about it, must
triumph and all others be consigned to oblivion."
[Kramer, Hilton, The New York Times, 28 March 1982,
Section 2, pg. 32]
Most awful of all modernist excesses, naturally,
are those perpetrated by John Cage and Pierre
Boulez.
Of course everyone already knows this. There's no
controversy about it. The facts have long since been
admitted in those code phrases so beloved of the
New York critical establishment.
When TIME magazine (in its Cage obit, 1 November 1993,
pg. 87) oozes: "There was always the whiff of the
charlatan about John Cage," we all know what they
REALLY mean. ("Cage was a con artist without a shred
of musical talent.") They just don't want to come right
out and *admit* it because (after all) it would prove
embarrassing to explain why so many New York
critics kow-towed to Cage for so many years.
And when Roger Reynolds oohs and ahhs in an
interview-cum-suck-up with the Great Mountebank, "Your main
contribution has been to expand the idea of what it is
reasonable to do in music," we all know what Reynolds
is *really* saying: "John Cage gave audiences the
musical equivalent of a golden shower for 40 years."
Of course, Reynolds doesn't want to come right out
and actually *say* that. "Epater le bourgeoisie" doesn't
go over too well in the land of Oprah and Geraldo
unless you sugar-coat the pill.
And thus, while we all know and covertly admit that
John Cage was a stunt man whose musical fame is
conducive to an understanding of how the Egyptians
could have worshipped insects...even so, none of us
really want to *admit that.*
This is peculiar, especially on a microtonal discussion
forum like this one. After all, Cage's early prepared-
piano works flirted with the edges of the 12-tone equal
tempered scale. It's hard to say that Cage's early
prepared piano pieces are in any particular tuning--
least of all 12--and certainly the "Imaginary Landscape"
for radios skirted the idea of departing from 12 via
electronic sounds.
Naturally, none of these early gimmicks provoked
enough critical attention: and so stunt man Cage was
obliged to find some really TALL buses over
which to jump his musical Evel Knievel act. 4
minutes 33 seconds...a burning piano...whatever.
The end result, naturally, was that pathetic orgy of
gimmickry, fetishism and sheer silliness that
characterized Cage's so-called "musical" output
post-1948.
And so, instead of doing something to advance music,
he vanished into the tarpit of "narcissism, nihilism,
inanity and self-indulgence" so aptly described by
Kramer.
Boulez is a different story.
While Cage displayed dazzling early sparks of musical
talent in his "Three Constructions" and his prepared
piano pieces only to throw away his abilities in favor
of a career scamming the gullible (the compositional
equivalent of L. Ron Hubbard's reign as Dianetics
guru), Boulez never betrayed any such rudiments of
compositional talent.
Boulez's music created a tremendous impression in
the 1950s--until people actually heard it. Thereafter,
his popularity dropped off sharply.
To be sure, Boulez's "acknowledged masterworks"
(acknowledged by the other dry-as-dust theorists,
all of whose judgments and compositions are now
equally inconsequential and outdated) sound pretty,
albeit in an inoffensive Muzak-y sort of way...
Boulez had a gift for orchestration.
But after about 5 minutes of "Le Marteau sans Maitre,"
or "Pli Selon Pli," you realize it's just warmed-over
Webern with a Chet Baker arrangement.
Why listen to a pale imitation?
Why drink from the toilet, instead of the tap?
Why listen to Boulez when the original--
and much more interesting--Chet Baker is available
on CD? To say nothing of Webern.
All told, it's a shame that Boulez had no compositional
talent, nor any original ideas. Because in the early 1950s
Boulez (unlike Cage) actually thought seriously and at
length about microtonality:
"In considering his electronic means, the composer
has first to free himself from the conception of absolute
interval. This can certainly be done. The tempered system
of twelve equal semi-tones seems to lose its necessity
at the very moment at which it passes from chromatic
organization to the Series. There have already been
experiments with intervals of less than a semi-tone: of
quarter-, third- and even sixth-tones. (...) In fact, to
select a fundamental unit other than the semi-tone,
means to conceive a kind of temperament peculiar to
a single composition; all intervals are to be heard as
deriving from this fundamental tempering, thus affecting
the listener's conditions of perception. (...) This tempering
may take place within the octave...or, it is equally possible
to construct in such a way that the interval with which
the demarche of the scale commences is other than the
octave. (...) In this way it would be possible to derive from
one structure based on wide intervals, i.e., having a wide
compass and a semi-tone as the unit, a corresponding
structure based upon micro-intervals, in which the compass
would be greatly reduced and where the unit would be either
a very small interval or irregular intervals defined by a
series. [Boulez, P., "At the End of Fruitful Lands..." Die Reihe,
Vol. 1, No. 2, 1955: English translation 1957]
Alas, such ideas would have resulted in an actual *expansion*
of available musical resources... And that could not be
permitted. Like rock music, the modernist avant garde was
always a fanatically reactionary cult cloaked in the image
of a revolutionary vanguard. Any *real* emancipation of
the dissonance leading to a break with the sacred 12 tones
would have thrown into disarray the whole Tammany Hall-style
patronage system of orchestras, conductors, concert halls,
the Beaux Arts, circle-jerk New York music critics, and the
rest of the corrupt musical machine. Without the Tammany Hall
of 12 equal tones per octave, those who benefited from the
patronage system of the Sacred 12 Tones (like Boulez, who
now makes megabucks recycling tired 12-equal dribs and
drabs as a conductor) would find themselves
out of a job.
Boulez on a street corner?
Begging for dimes? Holding up a sign WILL CONDUCT IN 12
FOR FOOD???
Ye gods.
Such could not be permitted. Thus, after flirting with
the idea of actually breaking free of his pathological
dodecaphilia, Boulez threw in the towel and made the
obligatory obeisance to the Sacred 12 Tones.
The result was predictable:
"Just as Marxist-Leninist thought led to forms of
government meant to remedy the excesses supposedly
caused by the exhaustion of capitalism, so Schoenbergian-
Boulezian practice was touted as the alternative to
an exhausted system called `tonality.' These attempts
to revolutionize, respectively, our economic and musical
worlds had several other things in common besides
the Germanic origin. The application or enactment of
both ideologies required that their alternatives--and
those who would support them--be publicly denounced
and discredited, and a form of double-speak was
employed in support of these `revolutionary' ideas.
The apologists writing in Pravda held sway in support
of a failing system in the same way that Herbert
Eimert, Milton Babbitt, and Charles Wuorinen dominated
the pages of Die Reihe and Perspectives of New Music
for many years. What is so interesting is the
suddenness with which these applications of
science--some have said pseudo-science--to economics
and music have been rejected and are now seen as
merely interesting experiments that failed
because they denied basic human realities:
economic and cultural diversity in the political
realm and the necessity for perceptual forms of
organization and the power of intuitive processes
in the world of music." [Appleton, Jon, "Machine Songs
III: Music In the Service of Science--Science in the
Service of Music," Computer Music Journal, vol. 16,
No. 3, Fall 1992, pp. 17-21]
Which leaves us back where we started. Now that
everyone has tacitly admitted that Cage and Boulez
were mere pimples on the rear end of 20th
century music, it's time to look around for a new
graven idol. The next Great Composer (now that
we've realized that the most famous so-called "Great
Composers" of the 1950s didn't produce anything of
lasting interest)...akin to the Next Great Rock Star.
In both cases the focus is the same: keep the rubes
gawking, wow 'em with glitter and glitz, dazzle 'em
with music videos & half-naked girls (or, in the
case of prestigious New Music Journals, ritzy-
looking hypercomplex diagrams and equations)
and hit 'em with jargon....anything to keep the rubes
from realizing that it's all just a dog and pony
show. (Meanwhile, the REAL great composers like
Nancarrow, Risset, Chowning, et alii, go all but unnoticed
and all but unremarked.)
And so the focus in new music has again turned
toward the cheery cherub with the cheekiest charts,
the wildest word-count, the most scrumptious-
looking (read: indecipherable) scores:
Namely, Ferneyhough.
This is an interesting aberration, and it spotlights
one of the deepest ruts into which post-
modernist music has fallen. Namely, the
obsession with *intellectualizing* music.
Why do Western composers and critics and
music theorists so fanatically chart and
diagram and plot out and schematize modern
compositions?
Primarily (one suspects) in order to justify
the long-held euroschlock "doctrine that
Western European art music is superior
to all other music of the world," which
"remains a given, a truism. Otherwise
intelligent and sophisticated scholars
continue to use the word `primitive'
when referring to the music of Africa,
American Indians, aboriginal Australians,
and Melanesians, among others." [Becker,
Judith, "Is Western Art Music Superior?"
Musical Quarterly, Vol. 72, No. 2, 1986, pg.
341]
Yo!
Western composers and performers might
not be able to produce rhythms as complex
as those of the Balinese gamelan, or tunings
as subtle as those used in the sub-Saharan
ugubhu, or to move audiences as deeply
as do the "weeping" pitches of Kaluli
gisalo songs, but...hey!
At least *we* euro-dudes can ALWAYS
come up with bigger, better, more
impressive *charts* of our compositions
than any other musical culture on earth!
(A typically priapic male obsession. "Mine
is bigger than yours..." My compositional
diagram, that is. No wonder there are
so few famous women composers. Can anyone
imagine a *woman* wasting 6 months of her
life straight-edging a bunch of chicken-scratches
that explain something everyone can already
*hear*???)
Thus the bizarre and otherwise incomprehensible
elevation of such duffers as Ferneyhough...whose
scores are, indeed, quite impressive--as graffiti.
Indeed, nary a subway train in New York or a wall
in South Central L.A. is as crammed with in-group
jargon and chock-a-block with meaningless verbiage as
one of Ferneyhough's articles. (In fact one very
prominent member of this tuning forum laughed
out loud while perusing one of Ferneyhough's
ludicrous "Perversions of New Music" articles,
chuckling: "Looks like the guy follows the same
aesthetic when writing as when composing... Or
should one say, the same lack thereof?" NO, folks,
it wasn't this little lad, but someone much better
known.)
This teaches an important lesson to microtonalists:
if ya wanna get famous, ya gotta make diagrams.
1/1 has made a start at this--ratio-space
charts look impressive, and to infants
or the mentally retarded or the average new music
doyen they'll doubtless exert an irresistible
attraction.
Baby go goo-goo at pwetty pitchah!
Of course, this is the Motown approach to
popularizing microtonality. According to this
guerilla strategy (practiced extensively in New
York), the objective is a "crossover" composition
that "breaks through" into the white male
New York critical establishment.
As with the de-funked un-gotten-down R&B of
Motown records, all potentially controversial and
threatening aspects of the music must be
shaved off and polished away, leaving a
bland whitebread generic product sufficiently
"mainstream" to attract a mass audience.
And while the New York composers/performers
represent the Motown approach to microtonality,
those of us on the West Coast represent the
Stax approach.
"F*** 'em!" is the West Coast philosophy with
regard to the New York critical establishment:
if they can't stand the microtonal heat, let
'em flee the concert hall. This alternative
approach to popularizing microtonality
relies on the rasty nasty snazzy sound of
strange intervals and unfamiliar musical
forms to attract the adventurous concert-goer
and CD buyer. While the New York crowd blows
dust off musty scores like Dick Stein's 1906 1/4-tone
cello piece for a concert at Juilliard, the
West Coast crowd blasts the audience with
full-bore hard-core microtones from the
git-go in exotic tunings like 13-TET and
13-limit JI and harmonic series 1-60.
Each approach has its merits. Stax or Motown,
both seem to attract their share of "mainstream"
"crossover" audience from standard bland 12.
Regardless of the approach, it remains an
unfortunate fact that "I have learned that if
I produce a complex structural diagram of a piece
of music from anywhere, the students will listen
to the piece more carefully and will regard it with
greater respect. A structural diagram gives the
music a legitimacy it does not have without
the analysis." [Becker, Judith, "Is Western Art
Music Superior?" Musical Quarterly, Vol. 72, No
2, 1986, pg. 346.]
So here's a helpful suggestion: when giving lectures or
concerts, microtonalists should project an overhead
transparency of the New York subway system and
throw in some gibberish about "pitch class
matrices" and "all-interval sets" and "maximally
symmetric stochastic distributions." This will
wow the eurogeeks and ensure that the microtonal
music is listened to with *great* attention.
After all, it requires hardly *any* skill or
intelligence to perpetrate this kind of musico-
theoretic scam, and the rewards are VAST...as
Cage and Boulez have so amply demonstrated.
--mclaren


Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Mon, 6 Nov 1995 19:21 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id JAA25588; Mon, 6 Nov 1995 09:21:02 -0800
Date: Mon, 6 Nov 1995 09:21:02 -0800
Message-Id: <9511060919.aa20661@cyber.cyber.net>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

11/7/1995 9:55:58 AM
From: mclaren
Subject: More about n-j n-e-t scales
---
"As physicians have always their instruments
and knives ready for cases which suddenly
require their skill, so you must have principles ready
for the understanding of things..." [Marcus
Aurelius, "Meditations"]
Because of the general incomprehension
and puzzlement at my mention of non-just non-
equal-tempered scales, it's clear that some
further info is required.
In general, I'm not talking about linear or
meantone tunings. While it's true that these
are technically non-just non-equal-tempered
scales, most of 'em are just one or another
way of bending twelve tones to obtain a
more consonant third or fifth in a traditional
Western triad. This surely has its value,
but meantone scales decidedly represent the
most conservative extreme
of non-just non-equal-tempered scales.
Instead, the kind of n-j n-e-t tunings I'm
concerned with--and have been since the
git-go--are those tunings which break
completely with the Western mold. Some
of these kinds of tunings have octaves, others
don't. In general, they're so wildly foreign
to any conventional harmonic or scalar
vocabulary that there is hardly any intelligible
way to talk about such scales as yet, except
as raw numbers.
For example:
One of the simplest ways of generating such
a completely non-Western non-just non-equal-
tempered scale is by taking the natural logarithm of
the factorial of a set of integers:
1! = 1, ln(1) = 0
2! = 2, ln(2) = 0.693147
3! = 6, ln(6) = 1.7917595
4! = 24, ln(24) = 3.17805383
5! = 120, ln(120) = 4.78749174
6! = 720, ln(720) = 6.5792512
7! = 5040, ln(5040) = 8.52516136
and so on.
Dividing each value by ln(2!) = ln(2), so as to
eliminate dependence on the base of the logarithm,
we have:
scale step 1: 1.0
scale step 2: 2.5849625
scale step 3: 4.5849625
scale step 4: 6.9068906
scale step 5: 9.4918531
scale step 6: 12.299208
These values can be octave-reduced or not.
If not, the scale will have no octaves.
If octave-reduced, carrying out the procedure
will produce ever larger numbers of scale-
steps within the octave, never overlapping.
This is an inherently fractal process, first
described by Thorvald Kornerup in his Golden
Section scale.
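For concreteness, here is a minimal Python sketch of this
log-factorial construction (the function name, and the
reading of octave reduction as folding the log-2 value back
into a single octave, are illustrative assumptions, not part
of the original procedure):

import math

def log_factorial_steps(n_max, octave_reduce=False):
    # scale step for each n: ln(n!)/ln(2), i.e. the log-factorial referred to ln(2!)
    steps = []
    for n in range(2, n_max + 1):
        value = math.lgamma(n + 1) / math.log(2)   # lgamma(n + 1) = ln(n!)
        if octave_reduce:
            value = 2 ** (value % 1.0)             # fold into one octave, as a frequency ratio
        steps.append(round(value, 7))
    return steps

print(log_factorial_steps(7))        # [1.0, 2.5849625, 4.5849625, 6.9068906, 9.4918531, 12.299208]
print(log_factorial_steps(7, True))  # the same steps folded into a single octave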
In a sense the procedure is analogous to
that of just intonation, in which successive
addition and subtraction of various small-
integer-ratio intervals produces an ever-larger
profusion of unequal divisions of the octave.
However, there are a number of differences.
At this point it's useful to introduce the
concept of the "inharmonic series." By
analogy with the harmonic series, an
inharmonic series serves as the backbone
of a non-just non-equal-tempered scale.
In this case, the inharmonic series is
Log(N!) where N runs from 1...infinity.
Of course choosing N by some other criterion
(perhaps by some recurrence relation: say,
N4 = N1 - sqrt(N2 + N3^2)*N1) would produce
an entirely different inharmonic series. There
are an infinite number of inharmonic series,
each generated by choosing N by a different
method and then applying some non-linear
operation to N.
By contrast, the familiar harmonic series is
obtained by setting N = the class of integers
and performing the simplest possible linear
operation on them--namely, the unary
operation (which leaves the operand unchanged).
Inharmonic series are important as a source
of modulation and of vertical structures
in non-just non-equal-tempered scales.
One of the most interesting implications
of non-just non-equal-tempered tunings,
however, is the prospect of generating
a scale of note durations (read: rhythms)
derived from the scale steps, by analogy
with the comparable procedure in traditional
Western music.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Tue, 7 Nov 1995 20:04 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id KAA19700; Tue, 7 Nov 1995 10:04:49 -0800
Date: Tue, 7 Nov 1995 10:04:49 -0800
Message-Id: <9511071803.AA09055@ ccrma.Stanford.EDU >
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

11/8/1995 4:18:10 PM
From: mclaren
Subject: Rhythm, tuning & the extension of Charles
Seeger's 1930 article into n-j n-e-t scales
---
"Look within. Let neither the peculiar quality of
anything nor its value escape you." [Marcus Aurelius,
"Meditations"]
With typical insight, Ivor Darreg once commented that
the rhythms of Baroque music arose from the 18th-century
love of clockwork and chronometry. And in fact one
of the greatest achievements of Enlightenment technology
was the development of a ship's chronometer capable of
keeping sufficiently accurate time to allow Royal Navy
ships to determine their longitude at sea. The inventor won
a 10,000-pound prize--at the time a huge fortune.
In a larger sense, the infrastructure of music (the tuning) has
probably always exerted a potent influence over its overt
superstructure--specifically, over the rhythms used.
It's only in the last century, however, that the relationship
appears to have been restored to the approximate level
of complexity characteristic of the music of the late 14th
century.
Just intonation music naturally lends itself to a
complementary arrangement of ratios of small integer
numbers of beats. Ben Johnston pioneered this idea in
the rhythms of his quartets. Thus a 4:3 will often
be mirrored with 4 beats against 3, a 6:5 with 6
beats against five, and so on. Toby Twining picked
this procedure up from Johnston, and employs it in
his own just intonation choral compositions: the
polyrhythms consistently mirror on a larger time-scale
the vibrational ratios produced at the waveform level.
This is not a new idea. The masters of the ars subtilior
in the 1380s played with this idea extensively and
with great subtlety: and in the 1920s Leon Theremin
implemented it with his now-lost instrument "the
rhythmicon." Theremin's instrument responded to
the presence of dancers in a space (sensed by
changes in capacitance as detected by 3 sensors)
and produced polyrhythms accompanied by just
intonation ratios. As it turns out, just intonation
intervals are very much easier to generate with
analog electronic circuits than equal-tempered
intervals. So Theremin's rhythmicon was
an early JI synthesizer as well as an interesting
example of very early integration between rhythm
and tuning.
In this century the massive strip-mining and
subsequent exhaustion of the 12-tone equal-
tempered tuning appears to have led to an
increasing dissatisfaction with chronometric
18th-century rhythms. Thus the history of avant
garde music in the 20th century is a history of
ever less regular rhythm. First 3 against 4...
most famously in the section of "Rite of Spring"
where the conductor is supposed to conduct 3
beats with one hand and 4 beats in the other.
Then, onward and upward to other beat-ratios.
"As tonally in 900, so rhythmically in 1900,
the relations 2:3 and 3:4 represented the
ultimate in harmonic comprehensibility."
[Seeger, Charles, "On Dissonant Counterpoint,"
Modern Music, Vol. 7, No. 4, 1930, pp. 25-31]
Copland's 1924 piano concerto and
Gershwin's Rhapsody in Blue from the same
period both use extensive syncopation and
hemiolas. Elliott Carter's metric modulation
procedures extended the irregularity of the
basic pulse, as did Bartok's essentially barline-
less compositions, ditto Varese's "Density 21.5,"
"Arcana," "Ionisation," etc.--in many cases
using a new time signature every
barline. The real break came when Kaikhosru
Shapurji Sorabji introduced multiple embedded
n-tuplets during the 30s and 40s, and when
Nancarrow started stretching the tempi
of multiple polyphonic lines by different
rates simultaneously. You get multiple
simultaneous tempo-shift curves going on
that deform time in a completely plastic
way: rhythmic regularity has completely
disappeared. The basic pulse is continuously
changing even within the individual note.
This has led to rhythmic complexities
like those of Michael Gordon's "Yo, Shakespeare!",
or Trimpin's computer-controlled vorsetzer works,
or Warren Burt's "I Have My Standards"
and "Notes From the Jungle of Intonational
Complexity."
It seems likely that the reason for this
increasing irregularity and complexity in
rhythm is that composers simply beat the
12-tone equal tempered scale to death.
By the 9th century A.D. they had introduced
the third as a consonance, by the 18th
century they'd started using sixths as
consonant intervals, by the 1920s and 1930s
the major and minor 2nd and major and
minor 7ths as part of the spectrum of
consonance. Schoenberg's "emancipation
of the dissonance" effectively placed all
intervals on a continuous scale--there
were no longer any "forbidden" intervals.
Everything could be a consonance, depending
on context--and the "rules" of consonance and
dissonance could be turned upside-down, if
desired. Charles Seeger's "dissonant counterpoint,"
introduced in 1916, was a typical example:
"Dissonant Counterpoint...is essentially an
inverted species counterpoint, the species of
the older discipline remaining intact but
*dissonance* (seconds, tritones, sevenths,
ninths) becoming the norm and *consonance*
(thirds, fourths, fifths, sixths, octaves)
requiring preparation and resolution." [Nelson,
Mark, "In Pursuit of Charles Seeger's Heterophonic
Ideal: Three Palindromic Works by Ruth Crawford,"
Musical Quarterly, Vol. 72, No. 4, 1986, pg. 459.]
The above prescription is a blueprint for
post-Webern modernism up to the late 1970s:
and indeed, the exhortations of Kyle Gann's
music professors to "use more good solid
20th century intervals--tritones, minor
seconds, major sevenths," are of course nothing
but standard Palestrina species counterpoint
turned inside out: the "good" intervals of the
1500s have become the "bad" intervals of the
1920s-1970s, and vice versa. Modernism did
not expand the language of music, of course:
there were no new intervals introduced. The
list of preferred intervals had merely been
swapped for the list of intervals formerly
proscribed. Thus, by the 1930s there was nothing
left to do with harmony or melody in the 12-tone equal
tempered system. The harmonic resources
had been played out. The 12-tone tuning had been
strip-mined, leaving a hole in the ground and an
enormous amount of bad wannabe-Webern.
That left rhythm.
So, starting circa 1948, composers began to explore
ever more complex, ever more irregular divisions of
the beat.
More than one commentator has suggested that
Nancarrow represents some kind of "ultima
thule" for rhythmic complexity in this progression
toward ever more complex time-relationships.
However, this is obviously incorrect.
One of the most interesting frontiers in xenharmonic
composition is, in fact, the extension of rhythm
in accord with the tuning of non-just non-equal-
tempered scales. This is nothing more than a self-
evident expansion of Charles Seeger's 1930
suggestion of "a recognition of rhythmic harmony
as a category on a par with tonal harmony." [Seeger,
Charles, "On Dissonant Counterpoint," Modern Music,
Vol. 7, No. 4, 1930, pp. 25-31.] (Since this is an
obvious extension of Seeger's classic modernist
insight, naturally no academic has yet suggested
it. Score another one for the same no-talent PhDs who
barred the greatest tape music composer of the
20th century from the Columbia-Princeton Electronic
Music Center because of his "lack of credentials"--
the composer being, of course, Tod Dockstader.)
As we've seen, just intonation compositions naturally
lend themselves to small integer ratios of beats (as
Johnston, Twining, Partch, et al., have skilfully shown).
Indeed, Kenneth Gaburo produced a composition, "Lemon Drops,"
using a bank of sine wave oscillators at the U. of Illinois,
which uses the same principle of moving micro-ratios
on the waveform level into macro-ratios on the level
of time of the individual measure. (Circa 1972?)
And, as we've seen, equal tempered compositions appear
to lend themselves to much more complex
divisions of the beat: embedded n-tuplets, metric
modulation, and so on. On the macro-level of
the individual measure this is very similar to approximating
an irrational number by a ratio of two large integers. If
you've heard a complex rhythm like 7 in the time of
4 inside 11 in the time of 9 inside 3 in the time of 2
inside 17 in the time of 13, you realize that the end
result is a set of timings that sound nearly like
ratios of irrational numbers--above a sufficient
level of embedded n-tuplets, no underlying pulse is
audible at all. This is obviously akin (on the macro-level)
to the ratio of irrational numbers on the micro-level of
the individual waveform which defines an equal-tempered
scale, in which all pitches are some Nth root of K.
By analogy, the next step in rhythmic complexity is
self-evident: move to ratios of transcendental numbers
on the macro-level of the beat, mirroring the non-just
non-equal-tempered pitches of n-j n-e-t scales.
Our present system of notating music has no way of
dealing with such divisions of the beat. They are really
impossible to notate with anything like conventional
musical notation.
While regular pulses like cut time or 5/8 or 11/8 + 17/8 can
be easily notated, and even very complex ratios made up of
embedded n-tuplets *can* be written down in conventional
notation, the kind of rhythmic pulsations I'm talking
about here lie completely outside the range of Western
notation. The system just breaks down. Conventional
notation can't handle these rhythms *at all.*
Let me give an example, so you can get an idea of what
I'm talking about here:
At a tempo of 60 each quarter note lasts exactly one
second. So common time (4/4) produces measures
lasting 4 seconds, with each quarter note lasting one second,
each eighth note lasting 1/2 second, each 16th note lasting
1/4 second...and so on. A triplet 8th note would put 3
8ths in the time of 2, so each 8th note would last 1/3 second.
This is a simple extension of micro-ratios at the
waveform level into macro-ratios at the level of the beat.
A more complex embedded tuplet might require, say,
a measure in 4/4 to have 11 eighths in the time of 8 eighths,
with 5 in the time of 4 inside it, with 3 in the
time of 2 inside that. If we had a measure like this:
4  |------------------------ 11:8 ------------------------|
4  |--------- 5:4 ---------| 8th 8th 8th 8th 8th 8th 8th
   8th 8th 8th |-- 3:2 --|
               8th 8th 8th
Working from the outside in, the divisions of the beat are:

the last 7 8th notes have a duration of 8/11 of 1/2 second =
8/22 of a second; the first 3 8th notes have a duration of
4/5 of that, or 4/5*8/22 = 32/110 or 16/55 of a second;
and the 3 eighth notes of the innermost n-tuplet have a
duration of 2/3 of that, or 32/165 of a second. This is
numerically complex enough that any underlying pulse (if
audible) is quite obscure and irregular-sounding. Again,
a reasonable analogy to the irrational Nth root of K
ratios of equal-tempered pitches.
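A short Python check of the arithmetic just given (the note
values and tempo come from the example above; the variable
names are illustrative):

from fractions import Fraction as F

eighth = F(1, 2)                   # a plain eighth note at quarter = 60, in seconds
e_11_8 = F(8, 11) * eighth         # eighth inside the 11:8 group -> 8/22 = 4/11 s
e_5_4  = F(4, 5) * e_11_8          # eighth inside the 5:4 group  -> 16/55 s
e_3_2  = F(2, 3) * e_5_4           # eighth inside the 3:2 group  -> 32/165 s

durations = [e_5_4] * 3 + [e_3_2] * 3 + [e_11_8] * 7
print(e_11_8, e_5_4, e_3_2)        # 4/11 16/55 32/165
print(sum(durations))              # 4 -- the whole 4/4 bar at quarter = 60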
However, moving on to non-just non-equal-tempered
tuning produces a new level of rhythmic complexity,
when the individual scale pitches are projected
upwards into the macro-level of the individual measure.
A typical n-j n-e-t scale is one of the subset of tunings
produced by taking infinite continued fractions as frequency ratios.
The fraction N1 + N2/(N3 + N4/(N5 + ...))
in general produces numbers which are neither simple
integers nor Nth roots of K. For example, if N1...NJ
= 1, the result is 1.61803399, or phi (the Golden Ratio).
By using a very simple computer program (3 lines)
to evaluate such a continued fraction out to, say,
N20, it's easy to calculate the frequencies of
such scale-steps to an accuracy of 7 figures.
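The three-liner itself isn't reproduced here, but a Python
sketch along the same lines might look as follows (the
function name is an assumption, and the fraction is taken in
the form x = n + 1/(n + 1/(n + ...)), i.e. with unit
numerators, which reproduces the values listed below):

def cf_constant(n, depth=20):
    # evaluate n + 1/(n + 1/(n + ...)) to 'depth' levels
    x = float(n)
    for _ in range(depth):
        x = n + 1.0 / x
    return x

print([round(cf_constant(n), 6) for n in range(1, 7)])
# [1.618034, 2.414214, 3.302776, 4.236068, 5.192582, 6.162278]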
The first six such continued fractions, for n = 1 through 6
(that is, x = n + 1/(n + 1/(n + ...))), are:

f1 = 1.618034
f2 = 2.414213
f3 = 3.302775
f4 = 4.236067
f5 = 5.192582
f6 = 6.162277

If we let f0 = 1.0, this gives 6 scale-steps. Now,
notice that these ratios when expressed as rhythms
*cannot be notated in any conventional way.*
There is just no reasonable method of writing
down a rhythmic system in which the longest
note lasts 1.0, the next longest note lasts 1/1.618034,
the next longest note lasts 1/2.414213, the next
longest note lasts 1/3.302775, and so on. The
concept is totally alien to anything in our
notational convention.
These kinds of rhythms just blow Western notation
right out of the water. In fact, not only can we
not *write music* that *notates* such divisions
of the measure, at present we cannot even *talk*
about such divisions of the measure with a
comprehensible vocabulary. We do not even have
the *words* to begin a discussion of such temporal
divisions. We are, literally, mute.
Why does this matter?
It matters because one of the clearest and simplest
compositional strategies involving non-just non-equal-
tempered scales is to work with such rhythms on
the level of the individual measure, then in blocks of
phrases which last for times proportional to these
ratios, then in sections of the work which last for
times also proportional to these ratios on a larger
time-scale. This continues traditional musical practice
in a sensible and straightforward way: namely, by
systematically extending the micro-level
of frequencies up into the macro-level of the beat.
In such a non-just non-equal-tempered composition,
all timbres could (using Csound) easily be made up of
additive sets of frequencies described by a non-just
non-equal-tempered tuning, and all the durations
of the notes *also* described by the same
ratios, along with durations of sections, movements,
etc.
However!
Trying to keep track of the rhythms is mind-bending
and maddening. Because of the total inadequacy of
notational systems or even vocabulary, I have been
forced to notate these rhythms as delta start times:
that is, as note durations--which must then be
summed into the absolute start times demanded by
Csound. It's *incredibly* meticulous, and requires
a *great deal* of bothersome calculation.
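As a rough illustration of that bookkeeping (the duration
list and the instrument number i1 are invented for the
example; only the start-time/duration layout of a Csound
score line is taken as given):

from itertools import accumulate

durations = [1.0, 1 / 1.618034, 1 / 2.414213, 1 / 1.618034, 1.0]
starts = [0.0] + list(accumulate(durations))[:-1]   # running sum of the preceding durations
for start, dur in zip(starts, durations):
    print(f"i1 {start:.6f} {dur:.6f}")              # instrument, start time, duration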
Interestingly, the rhythms sound jazzy and almost
improvisational. They are not conventional. And
in particular, when two or more strata of such rhythms
are going at once, made of notes broken down
into subvalues of these infinite continued fraction
convergents, with each note-stream at a tempo
also described by an infinite continued fraction,
the results are truly exotic.
Next post, some suggestions for a generalized
rhythmic vocabulary that would at least allow
an approach to coherent manipulation of time-
streams and durations derived from non-just
non-equal-tempered tunings.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Thu, 9 Nov 1995 06:50 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id UAA26952; Wed, 8 Nov 1995 20:50:30 -0800
Date: Wed, 8 Nov 1995 20:50:30 -0800
Message-Id: <951109043956_71670.2576_HHB32-6@CompuServe.COM>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

11/9/1995 10:28:53 AM
From: mclaren
Subject: A new rhythmic vocabulary
---
"All things come from thence, from
that universal rule either directly proceeding
or by way of sequence." [Marcus Aurelius,
"Meditations"]
The previous post explored the outer
edges of rhythm by suggesting an
extension of the waveform periodicity
in the frequency domain of non-just
non-equal-tempered scales up into
the beat level of the individual measure.
As mentioned, there is a complete and
utter lack of generalized rhythmic
vocabulary to talk about these kinds
of divisions of the beat. We can talk
sensibly in Western music about "half notes,"
"3 eighth notes in the time of 2," and so on,
and we can even (with some difficulty)
talk about "40% of a half note" (notated
as a quarter note under a `5 in the time of 4' bracket),
but when it comes to something like
1/1.61803399 of a half note, conventional
Western notation falls mute. Indeed, there
is not even the ghost of a clue where to
start talking about such divisions of the
beat except in raw numbers--which are
tremendously hard to deal with intuitively,
or manipulate as ensembles without huge
amounts of gratuitous calculation.
What does this have to do with tuning?
It seems clear that the natural way to compose
in non-just non-equal-tempered scales is to
extend the frequencies into divisions of the
beat. But what we want is a simple way of
manipulating such rhythms. Ideally, we should be
able to easily and simply apply such familiar
concepts as rhythmic augmentation and diminution
to sets of rhythms derived from the pitches of
non-just non-equal-tempered scale frequencies.
If we can't do this, it cripples us at the start
in composing with non-just non-equal-tempered
scales.
First, let me suggest a generalization of the
standard Western rhythmic vocabulary.
Traditionally, divisions of the beat are
handled with a descriptive vocabulary that
directly specifies the division of the beat:
half note lasts one half a whole note, quarter
note lasts one quarter of a whole, triplet
quarter packs 3 quarters into the time of 2,
and so on. This is fine as far as it goes. But
extending to anything other than simple
integer divisions of the beat is impossible:
there's no such thing as a "1.618034-note."
Instead, permit me to suggest what the
computer programmers have christened
a "call by reference," rather than the
"call by value" of conventional Western
rhythmic vocabulary.
Instead of using words for the divisions
of the beat that describe the actual
values, suppose we use a rhythmic
vocabulary which describes the successive
position of the rhythm in a hierarchy from
long to short. This kind of rhythmic
vocabulary could be applied to an unlimited
range of different divisions of the beat, rather
than the extremely limited set of integer
divisions of the beat which can be described
by traditional Western usage.
PRIMARY -- longest duration within the measure
SECONDARY -- next longest duration
TERTIARY -- next longest
QUATERNARY -- next longest
..and so on.
With this change of vocabulary, it suddenly
becomes possible to write down a set of
rhythms derived from our non-just non-equal-
tempered scale:
P S S P T P
Given the non-just non-equal-tempered
scale described in the previous post, this
is a set of notes of duration:
1.0 1/1.618034 1/1.618034 1.0 1/2.414213 1.0
In the context of this new rhythmic vocabulary,
all of the traditional techniques of Western
rhythm can be applied. Here, augmentation
refers to multiplying all notes by the value
of 1/SECONDARY beat duration.
In traditional Western usage, the secondary
beat duration is always 1/2, so augmentation
is always a simple doubling of note durations.
Contrariwise, diminution is a simple halving
of note durations.
In the context of the rhythms derived from our
non-just non-equal-tempered scale, however,
augmentation means multiplying all note
durations by 1.618034, while diminution
means multiplying all note durations by
1/1.618034.
Embedded tuplets can also be carried over
into the new rhythmic scheme, with a
concomitant increase in rhythmic
complexity. A triplet in traditional Western
usage is obtained by adding the primary to
the secondary duration; here it's obtained
by doing the same thing, but the result
(instead of being a 3:2 duration) is a 2.618034:1
duration. And so on.
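A minimal Python sketch of this "call by reference"
vocabulary (the names PHI, SILVER and HIERARCHY, and the
particular three-member hierarchy, are illustrative
assumptions):

PHI = 1.618034
SILVER = 2.414213

# duration hierarchy: PRIMARY = 1.0, SECONDARY = 1/1.618034, TERTIARY = 1/2.414213
HIERARCHY = {'P': 1.0, 'S': 1.0 / PHI, 'T': 1.0 / SILVER}

def durations(pattern):
    # map a rank pattern such as "P S S P T P" onto durations
    return [HIERARCHY[symbol] for symbol in pattern.split()]

def augment(durs, secondary=1.0 / PHI):
    # augmentation: multiply every duration by 1/SECONDARY (here, by 1.618034)
    return [d / secondary for d in durs]

line = durations("P S S P T P")
print([round(d, 6) for d in line])           # [1.0, 0.618034, 0.618034, 1.0, 0.414214, 1.0]
print([round(d, 6) for d in augment(line)])  # each value multiplied by 1.618034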
This gives us at least some reasonable way
to *talk* about rhythmic divisions derived
by time-scaling our non-just non-equal-tempered
micro-level of frequency up into the level
of the individual measure. Because of the
obvious implications for new kinds of compositional
techniques, this derivation of rhythm from
non-just non-equal-tempered scale
frequencies clearly demands further
exploration.
The next post will discuss generalizations of
vertical structures in non-just non-equal-tempered
scales, along with an examination of consonant
vertical structures in a representative non-just
non-equal-tempered scale, along with several
kinds of near-equivalents to conventional modulation
between keys (equal temperaments) or 1/1s (JI).
---mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Fri, 10 Nov 1995 00:38 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id OAA22718; Thu, 9 Nov 1995 14:38:39 -0800
Date: Thu, 9 Nov 1995 14:38:39 -0800
Message-Id:
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

11/10/1995 10:56:58 AM
From: mclaren
Subject: A generalized approach to harmony
in non-just non-equal-tempered scales
---
"Observe constantly that all things take place
by change, and accustom yourself to consider that
the nature of the universe loves nothing so much
as to change the things which are to make new things
like them. For everything that exists is in a manner
the seed of that which will be." [Marcus Aurelius,
"Meditations"]
Bill Sethares and Your Humble E-Mail Correspondent
have both struggled for more than a year with the
vertical implications of non-just non-equal-tempered
scales.
While Bill's spectral mapping procedure offers a
way of adroitly controlling *sensory* consonance
and dissonance in n-j n-e-t scales, this is quite a
different matter from the tonal substructure implicit
within the scale.
Douglas Keislar's PhD thesis makes this clear. By
rendering a scale like (say) 13/oct more consonant
on the level of individual partials interacting
within the critical band, we do *not* produce
any greater sense of overall "tonality" for the listener.
Even with smoothly beatless chords, 13-TET *still*
sounds profoundly anti-tonal and non-cadential. Even
though a I-IV-V-I may be made beatless through the
miracle of modern digital signal processing, it still
doesn't sound like "the way the scale wants to behave."
And thus other compositional strategies must be
employed, different from those dragged out of 12-TET
and press-ganged into service.
In short, "I don't think we're in Kansas anymore, Toto."
And outside of 12-TET, you'd better take that into
account...or your compositions will sound like very
badly out-of-tune 12-TET leftovers.
These complexities are sufficient to give pause to
a composer contemplating a work in 19- or 53-tone
equal, or (say) 13-limit or 31-limit JI; but when
it comes to non-just non-equal-tempered tunings,
what's a xenharmonic composer to do?
The first and most important point in generating
vertical structures in non-just non-equal-tempered scales
is to recognize that some general procedures *do*
carry over from other tunings, albeit in highly
modified form.
John Chalmers has elaborated a classical method
for scale construction which he calls "tritriadic"
scale generation. The basic idea (based on a very
ancient principle) is that scales have typically
been generated by taking the Tonic, forming a triad;
then a Mediant, and forming a triad, and finally
a Dominant, and forming a triad. The set of tones
formed by the union of all the pitches in the triads
has conventionally produced the scales characteristic
of Western music. John's inspiration was to vary
the frequency ratios of the T, D and M chords to
generate variant scales. (John Chalmers' tritriadic
techniques have been unjustly neglected as a topic
for this tuning forum; if something doesn't change,
clearly I shall have to author several future posts
on the subject.)
Interestingly, something akin to this procedure can be
employed with non-just non-equal-tempered tunings.
One of the more obvious n-j n-e-t scales is the
set of modes of the ideal vibrating cylinder free at
both ends. These are given by Lord Rayleigh (1896)
in his "Theory of Acoustics," Vol. 2, pg. 25, refining
the result obtained by Hoppe in 1871:
The frequency f of each partial is proportional to
sqrt[[s^2]*[(s^2 -1)^2]/(s^2 + 1)]
Setting s successively to 2, 3, 4... and referring each
tone to the fundamental of the inharmonic series,
the partial frequencies are:
f0 = sqrt(4*9/5) = 2.68328/2.68328 = 1.0
f1 = sqrt(9*64/10) = 7.58946/2.68328 = 2.82842
f2 = sqrt(16*225/17) = 14.55197/2.68328 = 5.4232
f3 = sqrt(25*576/26) = 23.5339/2.68328 = 8.77057
f4 = sqrt(36*1225/37) = 34.5329/2.68328 = 12.8696
f5 = sqrt(49*2304/50) = 47.5173/2.68328 = 17.7086
f6 = sqrt(64*3969/65) = 62.5135/2.68328 = 23.297422
f7 = sqrt(81*6400/82) = 79.510699/2.68328 = 29.631905
f8 = sqrt(100*9801/101) = 98.508682/2.68328 = 36.71204
f9 = sqrt(121*14400/120) = 120.00417/2.68328 = 44.72294
f10 = sqrt(144*20449/143) = 143.49913/2.68328 = 53.47899
&c.
Reducing these values to cents gives
Pitch 1 = 0 cents
Pitch 2 = 1799.99 cents
Pitch 3 = 2926.97 cents
Pitch 4 = 3759.204 cents
Pitch 5 = 4423.074 cents
Pitch 6 = 4975.653 cents
Pitch 7 = 5450.5181 cents
Pitch 8 = 5866.8954 cents
Pitch 9 = 6237.8177 cents
Pitch 10 = 6579.5317 cents
Pitch 11 = 6889.0807 cents
&c.
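For reference, a minimal Python sketch that reproduces the
table above directly from the mode formula (the function
name is an assumption; only pitches 1 through 9 are printed):

import math

def cylinder_mode(s):
    return math.sqrt(s**2 * (s**2 - 1)**2 / (s**2 + 1))

f_fundamental = cylinder_mode(2)                 # 2.68328..., taken as the 1/1
for i, s in enumerate(range(2, 11), start=1):
    ratio = cylinder_mode(s) / f_fundamental
    cents = 1200 * math.log2(ratio)
    print(f"Pitch {i}: {ratio:.5f}  ({cents:.3f} cents)")   # Pitch 2 -> 2.82843, 1800.000 cents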
The primary consonant vertical structure in this
system will be the complex Pitch 1 + Pitch 2 + Pitch 3.
This is the *least* compact vertical structure available;
notice that, because of the wide spacing between
members of this inharmonic series, Plomp & Levelt's
findings tell us that this primary vertical structure
in this n-j n-e-t tuning will be consonant (provided
that the timbre is made up of partials tuned to this
scale) because no two partials will fall within the
same critical band.
However, there are other consonant vertical structures
than the primary: for example, members 2, 3 and 4
of this inharmonic series could be used as a chord.
This would produce:
Secondary vertical structure:
Pitch 1 = 0 cents
Pitch 2 = 1126.98 cents
Pitch 3 = 1959.214 cents
A third-order vertical structure comes from
members 3, 4 and 5 of the inharmonic series:
Pitch 1 = 0 cents
Pitch 2 = 832.234 cents
Pitch 3 = 1496.104 cents
And a fourth-order vertical structure comes
from members 4, 5 and 6 of the inharmonic series:
Pitch 1 = 0 cents
Pitch 2 = 663.87 cents
Pitch 3 = 1216.449 cents
A fifth-order vertical structure comes from
member 5, 6 and 7 of the inharmonic series:
Pitch 1 = 0 cents
Pitch 2 = 552.579 cents
Pitch 3 = 1027.4441 cents
A sixth-order vertical structure comes
from members 6, 7 and 8 of the inharmonic series:
Pitch 1 = 0 cents
Pitch 2 = 474.858 cents
Pitch 3 = 891.2424 cents
A 7th-order vertical structure comes from
members 7, 8 and 9 of the inharmonic series:
Pitch 1 = 0 cents
Pitch 2 = 416.3773 cents
Pitch 3 = 787.2995 cents
And an 8th-order vertical structure comes
from members 8, 9 and 10 of the inharmonic series:
Pitch 1 = 0 cents
Pitch 2 = 370.9223 cents
Pitch 3 = 712.6363 cents
This is a slightly stretched neutral triad with
both the fifth and the third somewhat sharper
than the values of Erv Wilson's hypermeantone
scale.
A 9th-order vertical structure comes
from members 9, 10 and 11 of the inharmonic series:
Pitch 1 = 0 cents
Pitch 2 = 341.714 cents
Pitch 3 = 651.263 cents
This vertical structure is not consonant because
the distance between Pitch 2 and Pitch 3 is less
than a critical bandwidth (290.5 cents) in the midrange
of human hearing. (At extremely high frequencies this
vertical complex would sound consonant, primarily
because the upper partials lie above the range of human
hearing.)
Subsequent nth-order vertical complexes will
clearly be less consonant.
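A small Python sketch of this consecutive-member
construction, using the cents values tabulated earlier (the
loop bound and variable names are illustrative choices):

pitch_cents = [0, 1799.99, 2926.97, 3759.204, 4423.074, 4975.653,
               5450.5181, 5866.8954, 6237.8177, 6579.5317, 6889.0807]

for order in range(1, 10):
    root, mid, top = pitch_cents[order - 1:order + 2]   # three consecutive series members
    print(f"order {order}: 0, {mid - root:.3f}, {top - root:.3f} cents")
# order 2 -> 0, 1126.980, 1959.214   order 8 -> 0, 370.922, 712.636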
This gives us a set of harmonies which can be
transposed to different steps of the scale to
produce inharmonic progressions. (Again, we
assume the partials are matched to the tuning.)
Several points of note:
First, John Chalmers' tritriadic scale generation
techniques can be employed with the 8th-order
consonant vertical structure. It will produce
modes significantly different from those familiar
from the harmonic series.
Second, the inharmonic series considered here
requires us to travel farther up to find a familiar
vertical structure than does the ordinary harmonic
series. In the classical Western case, the 4th-order
vertical structure using harmonic series members
4, 5 and 6 forms the basis of Western harmony. Here,
the 8th-order vertical structures using inharmonic
series members 8, 9 and 10 form the basis of
n-j n-e-t harmony in this particular inharmonic
series.
Third, different inharmonic series can easily be
generated by modifying the equation for the modes
of a vibrating tube. For instance, instead of
the frequency f of each partial being proportional to
sqrt[[s^2]*[(s^2 -1)^2]/(s^2 + 1)] , we could set
f proportional to
sqrt[[(s + 1)^2]*[(s^2 -1)^2]/s^2]
or
sqrt[[(s + 3)^2]*[(s^2 -1)^2]/(s+1)^2]
or
sqrt[[(s + 5)^2]*[(s^2 -3)^2]/(s+1)^2]
or
cube root of[[(s + 1)^2]*[(s^2 -1)^2]/s^2]
or
cube root of [[(s + 1)^3]*[(s^2 -1)^2]/s^3]
or
sqrt[[(s - 1)^2]*[s^3]/s^2]
and so on. Clearly there are an infinite
number of equations describing non-just
non-equal-tempered scales, which can be obtained
merely by varying the equation for the modes
of a vibrating cylinder.
A larger question is: What physical oscillator
geometry corresponds to a given arbitrary
equation? This is an extraordinarily difficult
problem. It may be insoluble. While the inverse
problem--given an arbitrary oscillator geometry,
can the equation describing the modes of the system
be found?--can at least be attacked numerically
(if all else fails), the problem of obtaining an
oscillator geometry from an inspection of the
equations describing the modes of a cylinder may
not have a single-valued solution. That is, different
physical oscillatory systems may produce the same
frequency spectrum.
This has been proven true in several cases, particularly
the case of different drum geometries, and may
be true for all physical oscillators. (For discussion of
a general mathematical proof of this proposition, see
Gordon, C., D. Webb, and S. Wolpert, "One Cannot Hear
the Shape of a Drum," Bulletin of the American
Mathematical Society (New Series), Vol. 27, No. 1,
July 1992, pp. 134-138.)
Thus far, we have examined only the overtone-
equivalent members of the inharmonic series.
What about subinharmonic vertical structures?
That is the subject of the next post.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Fri, 10 Nov 1995 23:33 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id NAA16690; Fri, 10 Nov 1995 13:33:38 -0800
Date: Fri, 10 Nov 1995 13:33:38 -0800
Message-Id: <9511101332.aa06606@cyber.cyber.net>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

11/13/1995 9:45:55 PM
From: mclaren
Subject: n-j n-e-t vertical structures - part 5
---
"The safety of life is this: to examine everything
all through, what is it of itself, what is its
nature, what is its form..." [Marcus Aurelius,
"Meditations"]
Of the nature of vertical structures in a
typical subinharmonic series, some has
been said: more remains.
By analogy with just intonation, the subinharmonic
series formed from the vibrational modes of
an ideal vibrating cylinder is obtained
by inverting the values
f0 = sqrt(4*9/5) = 2.68328/2.68328 = 1.0
f1 = sqrt(9*64/10) = 7.58946/2.68328 = 2.82842
f2 = sqrt(16*225/17) = 14.55197/2.68328 = 5.4232
f3 = sqrt(25*576/26) = 23.5339/2.68328 = 8.77057
f4 = sqrt(36*1225/37) = 34.5329/2.68328 = 12.8696
f5 = sqrt(49*2304/50) = 47.5173/2.68328 = 17.7086
f6 = sqrt(64*3969/65) = 62.5135/2.68328 = 23.297422
f7 = sqrt(81*6400/82) = 79.510699/2.68328 = 29.631905
f8 = sqrt(100*9801/101) = 98.508682/2.68328 = 36.71204
f9 = sqrt(121*14400/120) = 120.00417/2.68328 = 44.72294
f10 = sqrt(144*20449/143) = 143.49913/2.68328 = 53.47899
&c.
to obtain
fsub0 = 1/f0 = 1/1.0 = 1.0
fsub1 = 1/f1 = 1/2.82842 = 0.353554
fsub2 = 1/f2 = 1/5.4232 = 0.1843929
fsub3 = 1/f3 = 1/8.77057 = 0.1140176
fsub4 = 1/f4 = 1/12.8696 = 0.0777024
fsub5 = 1/f5 = 1/17.7086 = 0.0564697
fsub6 = 1/f6 = 1/23.297422 = 0.0429232
fsub7 = 1/f7 = 1/29.631905 = 0.0337474
fsub8 = 1/f8 = 1/36.71204 = 0.027239
fsub9 = 1/f9 = 1/44.72294 = 0.0223598
fsub10 = 1/f10 = 1/53.47899 = 0.0186989
&c.
Forming the subinharmonic series starting on inharmonic series
member 10 produces:
faleph = 53.47899*0.027239 = 1.4567142
fbeth = 53.47899*0.0223598 = 1.1957795
fgem = 53.47899*0.0186989 = 1.0
Reducing, this becomes
faleph = 831.77662 cents
fbeth = 309.52791 cents
fgem = 0 cents
Of particular interest here is the quasi-sixth formed by
faleph, which happens to be identical with phi, the golden
ratio. This vertical complex is very similar to one discussed by
Thorvald Kornerup, and it is particularly interesting
in this context to observe that this one arises *naturally*
out of an ordinary physical process--namely, a subinharmonic
series based on the modes of an ideal vibrating cylinder.
Since Kornerup's Golden Scale is well known and
discussed in detail in Mandelbaum's thesis (among other
references), further discussion of this similar scale is
of less interest than a consideration of general principles
for organizing vertical progressions in n-j n-e-t scales.
Thus, this example may serve to show the subtle
links between relatively familiar non-just non-equal-tempered
tunings and the purely mathematical derivation of n-j n-e-t
scales from combinations of inharmonic and subinharmonic
series.
Clearly, both the inharmonic series vertical structures *and* the
subinharmonic series structures may be formed on any scale
member.
Equally clearly, one might imagine "modulating" between
entirely different inharmonic or subinharmonic series.
In that case, one could bring along the vertical complexes
derived from one inharmonic series into another, entirely
different, inharmonic series. For example, a series of
vertical structures derived from the inharmonic series
of the clamped metal bar might be played first in
the n-j n-e-t scale derived from the modes of the clamped
bar, then the same progression of vertical structures
might continue to play while the tuning changed to
that of an ideal vibrating sphere, and the tuning might
then change into that of an ideal vibrating cylinder, and
so on.
In fact one could just as easily "modulate" between different
modes of vibration of a sphere: zonal harmonics, torsional
vibrations, etc., each giving rise to a different non-just
non-equal-tempered scale.
This is a process conceptually akin to "modulation" in
JI and equal temperament, but more complex: for the
subinharmonic series formed on a given n-j n-e-t are,
as we have seen, in general not as closely related to
the vertical structures formed from inharmonic series
as is the minor triad to the harmonic series of the major
triad in traditional Western harmony.
Regardless, this set of posts may have given a glimpse
of the universe of new harmonies and melodies awaiting
the composer adventurous enough to dare composing
in non-just non-equal-tempered tunings.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Tue, 14 Nov 1995 07:55 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id VAA08993; Mon, 13 Nov 1995 21:55:42 -0800
Date: Mon, 13 Nov 1995 21:55:42 -0800
Message-Id: <951114005507_105935759@mail04.mail.aol.com>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

12/17/1995 8:31:26 AM
From: mclaren
Subject: Xenharmonic CDs
---
Time once again to give brief reviews of
the latest xenharmonic laser cookies...
First and most impressive, Marc Battier's
"Transparence." This CD is available from
EMF, Joel Chadabe's Electronic Music
Foundation. Their catalog is available
on-line from emusic@aol.com.
Battier uses a single sentence from Henri
Chopin as source material for a virtuoso
set of digital signal processing
manipulations. This CD is beautiful,
non-12-sounding, and endlessly varied
and interesting.
Many of the manipulations appear to center
around resonant filters: the effect is to pick
out distinctly non-12 pitches and generate
effects which sound nothing like the original
material.
The entire CD can be thought of as a giant set
of "variations on a theme" of a single acoustic
input. Highly recommended!
Next, Anna Homler's "Do Ya Sa Di Do." Ms.
Homler specializes in singing what sound like
Japanese or Korean chants against sampler-
manipulated electronic backgrounds. The
net effect is often impressive, and distinctly
outside the standard 12-tone equal-tempered
scale.
The most xenharmonic tracks on this CD are
No. 1, in which she fringes her chant with
an aureole of xenharmonic "inflexional"
pitches (as in Balinese and Javanese vocal
music, where slendro and pelog are generally
used as a point of departure, or as in the
vocal xenharmonies of Sinead O'Connor,
Louis Armstrong, Ofra Haza, et alii).
Most wildly microtonal, however, is track
9--where Ms. Homler produces squeaks and
whistles which the human vocal tract does
not appear to have been designed to
accommodate. The effect is xenharmonic,
beautiful, and altogether exotic.
The CD is available from:
Homler, PO Box 48770, Los Angeles CA
90048.
Ben Johnston's "Calamity Jane to her Daughter"
on the CD "Urban Diva" is an impressive example
of extended just intonation. This just array
appears to extend upward to around 31-limit,
and offers a formidable challenge to the
aspiring po-mo vocalist.
Fortunately, soprano Dora Ohrenstein is more
than up to the challenge. Her rendition proves
both sonically iridescent and emotionally
compelling.
*Highly* recommended!
The CD "Urban Diva" is available from Composer's
Recordings Inc., 73 Spring St., New York NY 10012-
5800, phone # (212) 941-9673.
Last and decidedly least: Chris Brown's
"Lava." This composition combines electronically
manipulated sounds with live percussion and
digital keyboards.
Alas, the piece quickly grows unbearably
repetitive and wearisome. Most of the electronic
manipulations involve uninteresting delay or
filter effects applied to inherently ugly percussive
sounds. The result is less than impressive.
My patience did not extend to the end of this CD.
Not recommended.

--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Sun, 17 Dec 1995 17:48 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id IAA24152; Sun, 17 Dec 1995 08:48:09 -0800
Date: Sun, 17 Dec 1995 08:48:09 -0800
Message-Id: <9512170815.aa22014@cyber.cyber.net>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

12/17/1995 8:48:09 AM
From: mclaren
Subject: Sonic Arts Gallery concert
---
On Friday, 5 November, Your Humble E-Mail
Correspondent gave a concert along with Jonathan
Glasier at the Sonic Arts Gallery in San Diego.
The program consisted of:
[1] Dual pianos tuned to different 12-pitch subsets
of 34/oct. With J. Glasier on one piano and myself
on another, we explored 34-tone equal temperament
in a piece which sounded as though one person was
playing, albeit with superhuman facility. By judiciously
switching MIDI channels during the piece to obtain the
third 12-tone set of 34-equal, the entire 34/oct scale
was covered.
[2] The second piece also featured Glasier and YHC on
dual MIDI keyboards--this time using a 5 and a 7-pitch
subset of 34-TET with gamelan-like timbres. The
general effect was that of a digital trip to Bali.
[3] The third piece used harmonic series 1-60 for
an old favorite, also played at last year's sonic arts
gallery concert series. A staple concert piece, extremely
xenharmonic.
[4] The fourth piece used a TX802 harmonium timbre
and a cello to explore Harry Partch's 43-tone just array.
[5] In the fifth piece, a vibraphone timbre counterpointed
pizzicato cello--also in Partch 43.
[6] The sixth piece juxtaposed exotic synth timbres in
the Carlos Gamma non-octave scale against Tibetan bells,
waterphone, and struck pieces of scrap metal to generate
an eerie and unfamiliar sound-world. This may be the
first time Carlos Gamma has been performed at a public
concert.
All told, the concert was a success. In attendance: Ted
Melnechuk, the xenharmonist Ralph David Hill, and a variety
of other sonically adventurous folk.
The placement of the contact mike on the cello did not produce
good recordings of the Partch pieces, but the other concert
pieces were recorded on digital tape and will form part of
a forthcoming compilation tape.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Mon, 18 Dec 1995 16:29 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id HAA03300; Mon, 18 Dec 1995 07:28:56 -0800
Date: Mon, 18 Dec 1995 07:28:56 -0800
Message-Id: <9512180728.aa02212@cyber.cyber.net>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

12/18/1995 7:28:56 AM
From: mclaren
Subject: Chalmers' Constant and a
family of constants giving rise to
yet another non-just non-equal-tempered
scale
---
As John Chalmers has pointed out, the
iterated absolute log function produces
a set of unpredictable numbers unless it
starts with a certain number.
In that case, the series enters a fixed point.
If the function is called the mclaren series,
clearly this constant should be named
Chalmers' Constant. It lies between
0.399012979 and 0.399012978.
This constant is different for each
logarithmic base. Thus a family of
constants is implied. For base e,
the constant lies between 0.567143289
and 0.567143291. For other bases,
the constant is different.
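A minimal Python sketch for locating these fixed points
(bisection is used here because, as noted above, direct
iteration wanders unless started exactly on the constant;
the function name is an assumption):

import math

def iterated_log_fixed_point(base, lo=1e-9, hi=1.0, tol=1e-12):
    # for 0 < x < 1, the fixed point of x -> |log_b(x)| is the zero of x + log_b(x)
    f = lambda x: x + math.log(x, base)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(iterated_log_fixed_point(10))        # ~0.3990129  (the base-10 constant above)
print(iterated_log_fixed_point(math.e))    # ~0.5671433  (the base-e constant above)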
This implies in turn an infinite number
of non-just non-equal-tempered scales.
One could, for example, generate one
such scale from the constants for the
logarithmic bases of the primes:
C[1] = fixed point for iterated log base 3
C[2] = fixed point for iterated log base 5
C[3] = fixed point for iterated log base 7
C[4] = fixed point for iterated log base 11
C[5] = fixed point for iterated log base 13
C[6] = fixed point for iterated log base 17
C[7] = fixed point for iterated log base 19
C[8] = fixed point for iterated log base 23
and so on.
Another scale could be generated by
taking
C[1] = fixed point for iterated log base pi
C[2] = fixed point for iterated log base e
C[3] = fixed point for iterated log base F
C[4] = fixed point for iterated log base L
C[5] = fixed point for iterated log base C
C[6] = fixed point for iterated log base G
and so on, where F is Feigenbaum's
constant, L is the first Liouville number,
C is Champernowne's number, G is the
Euler gamma constant, and so on.
As musical intervals all these values
appear to be particularly intractable
from a just intonation point of view;
large rational fractions are needed to
approximate virtually all of them. Clearly
these musical intervals do not fit into
the just scheme of things.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Mon, 18 Dec 1995 16:31 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id HAA03909; Mon, 18 Dec 1995 07:31:02 -0800
Date: Mon, 18 Dec 1995 07:31:02 -0800
Message-Id: <9512180729.aa02219@cyber.cyber.net>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

12/18/1995 7:31:02 AM
From: mclaren
Subject: a new iterated function for
generating non-just non-equal-tempered
tunings
---
While boring holes for resonators in a
pentatonic percussion instrument to be
installed at the Exploratorium, a new
iteration function occurred to me. Since
these functions are a fertile breeding
ground for non-just non-equal-tempered
scales, this one might prove of interest
to forum subscribers.
Operating an industrial drill press is
extremely peaceful work--excellent for
mathematical contemplation.
The function is an alternating series:
start with a number and repeatedly take
abs(tan(x)); whenever the result drops
below 1.0, take e^x instead.
The first 10 terms of the function are:
i[1] = abs(tan(sqrt(2))) = 6.334119167...
i[2] = abs(tan(6.334119167)) = 0.05097795...
i[3] = abs(e^(0.05097795)) = 1.05229964...
i[4] = abs(tan(1.05229964)) = 1.752641506...
i[5] = abs(tan(1.752641506)) = 5.438434336...
i[6] = abs(tan(5.438434336)) = 1.126353452...
i[7] = abs(tan(1.126353452)) = 2.099871982...
i[8] = abs(tan(2.099871982)) = 1.710348942...
i[9] = abs(tan(1.710348942)) = 7.1119178021...
i[10] = abs(tan(7.1119178021)) = 1.106677438...
I believe but cannot prove that all of these
numbers are transcendental. Octave-reducing the
numbers greater than 2.0, and adding 1.0 to those
less than 1.0, produces a musical scale:
p[1] = 795.7728117 cents
p[2] = 86.07888146 cents
p[3] = 88.25468083 cents
p[4] = 971.4371163 cents
p[5] = 531.8296507 cents
p[6] = 205.991463 cents
p[7] = 164.8027358 cents
p[8] = 929.1488291 cents
p[9] = 996.2863799 cents
p[10] = 175.4817393 cents
As usual, there does not appear to be any
obvious pattern in these numbers.
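A minimal Python sketch of the iteration and the conversion
to cents described above (function names are assumptions;
the octave-reduction and add-1.0 steps follow the
description given):

import math

def tan_exp_series(seed=math.sqrt(2), count=10):
    terms, x = [], seed
    for _ in range(count):
        # take |tan(x)| while the running value stays >= 1.0, otherwise take e^x
        x = abs(math.tan(x)) if x >= 1.0 else abs(math.exp(x))
        terms.append(x)
    return terms

def to_cents(x):
    while x > 2.0:          # octave-reduce values above 2.0
        x /= 2.0
    if x < 1.0:             # add 1.0 to values below 1.0
        x += 1.0
    return 1200 * math.log2(x)

for i, t in enumerate(tan_exp_series(), start=1):
    print(f"i[{i}] = {t:.9f}  ->  {to_cents(t):.4f} cents")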
Another number arises from this series:
1 + the number of successive iterations
required for switchover between e^x
and abs(tan(x)), or vice versa. That
number is:
1.21311151312121913141313111313...
This number also appears to be
a transcendental. Can you prove it?
As a musical interval this equates to
a neutral third of 334.4597716 cents.
This is a third which has not to
my knowledge appeared previously
in the musical literature.
--mclaren


Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Mon, 18 Dec 1995 16:33 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id HAA04502; Mon, 18 Dec 1995 07:33:07 -0800
Date: Mon, 18 Dec 1995 07:33:07 -0800
Message-Id: <9512180731.aa02233@cyber.cyber.net>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

12/18/1995 6:19:28 PM
From: mclaren
Subject: transcendental numbers, the entropy
of musical scales, and Shannon's theorems
applied to tunings
---
Some time ago, Gary Morrison very unjustly
impugned his own mathematical abilities (which
are considerable) in the process of asking the
question: What is a transcendental number?
Looking back at my previous posts on the subject,
the source of the confusion is easy to spot.
My statement: "In general, it is extremely
difficult to determine whether a given number
is transcendental or not" could be interpreted
in several ways.
One could take the statement to mean that
no general algorithm--Turing-computable or
otherwise--has been exhibited that sieves the
field of reals and always produces all the
transcendental numbers.
Or one could take the statement to mean:
"It's impossible to define the term
'transcendental number.'" This is surely
untrue.
There are many ways to define a
transcendental number. One way is:
a transcendental number is the solution
of a transcendental equation--that is,
an equation involving logarithms or
antilogarithms.
This is not always true, since the equation
e^[i*pi] + X = 0 has the solution 1. However,
perhaps one could say that a transcendental
number is one which arises *only* as the
solution of an equation involving logarithms
or antilogarithms. (Manuel and John Fitch may
jump on me for this one; it may not be 100%
true all the time. Can you think of a counter-
example?)
Another way of defining a transcendental number
is: It's a number which is not the solution of an
ordinary arithmetic equation or an algebraic
equation with rational-fraction coefficients and
exponents and a finite number of terms.
Algebraic and arithmetic equations can, of course,
also involve an infinite number of terms: pi and e
both arise as a result of many different infinite
series, the most spectacular of which were
discovered by Srinivasa Ramanujan.
Pi and e also arise from equations involving
logarithms and antilogarithms: X^[i*pi] + 1 = 0
defines e, while e^[i*X] + 1 = 0 defines pi.
This latter definition is close to Grolier's,
although strictly speaking Grolier's definition
is incorrect since it appears to leave out the
requirement that a transcendental number
cannot be the solution of an arithmetic or
algebraic equation with a *finite* number of
terms. (Many of Ramanujan's series involve
algebraic products & quotients but an infinite
number of terms.)
Transcendental numbers also tend to arise when
the exponents of an algebraic equation are
imaginary.
Another way of defining a transcendental number
is by the amount of information required to
describe it. How complex an algorithm is
needed to generate the number? How long does
it take to run?
Claude Shannon proved elegantly that the amount
of information required to generate (or parse) a
message is proportional to the log to the base 2
of the number of possible messages. By this
standard, an integer requires very little
information to parse (or generate). Even if very
long, any finite integer can be entirely written
down. Once the last digit is written, you're
finished. A simple "copy this array" algorithm
suffices.
A rational fraction requires somewhat more
information to parse (or generate). However,
the algorithm required to describe the digits
in the number 1/9 (for example) is still simple:
"Keep writing ones." I.e., 1/9 = .11111...
The information contained in an algebraic
irrational is somewhat greater. (There are
two kinds of irrational reals: algebraic
irrationals and transcendental irrationals.
Algebraic irrationals are those numbers which
arise as real roots of equations involving rational
coefficients and rational exponents.
Transcendental irrationals arise when the
algebraic exponent, for example, is itself
an algebraic irrational--as in Hilbert's number,
2^[sqrt(2)]. )
In the case of an algebraic irrational, the algorithm
required to generate the number is lengthier
than that required to generate the decimal
expansion of a rational fraction--thus the
algebraic irrational contains more information
than does a finite rational fraction.
However, transcendental numbers appear to
require the most information of all. To my
knowledge, there is as yet no known algorithm
by which the entire field of reals may be sieved
and by which all transcendental numbers will
always be found. Yet, since we live in a Goedelian
universe, such an algorithm might well exist--
and worse still, it might be very simple.
Another way of stating this proposition is that
the simplest description of a transcendental
number appears to be...itself. If true, this
makes transcendental numbers unique.
It has been speculated that the digits in the
decimal expansion of transcendental numbers
never settle into any pattern. One subscriber has even stated as
much on this forum. However, this proposition
has never been proven mathematically. (Most
mathematicians believe this supposition to be
true, but belief is *not* the same thing as proof.)
Thus it is not yet known how random the digits of (say)
pi or e really are. Many functions, graphs and plots
seem random from one perspective, but when rotated
they reveal hidden patterns. The same might be
true of pi or e. So it's entirely conceivable that
out beyond a googol decimal places, the
digits of pi might become overwhelmingly dominated by 1's, for example.
A disturbing thought...yet one which cannot be
dismissed until a proof of the true randomness
of pi's digits is found.
In view of this possibility, the randomness of
a transcendental number's digits cannot simply be
assumed. If the overwhelming majority of pi's digits
are, say, 1, or 3, or 9, or what-have-you, after
a given point, then clearly the number is hardly
random at all.
However, in order to determine this by brute
force we would have to calculate an *infinite*
number of digits in pi's decimal expansion.
For no matter how far we go, there's always
the possibility that at the next digit, the
digits fall into a predictable and eternally
repeating pattern.
Thus the devilish undecidability in so many
cases of the question: "Is a given number
transcendental?"
For example: is the number pi + e
transcendental? Is pi * e?
I don't know. You don't know. It cannot be
settled by calculation. No proof exists that either number is
transcendental (or not transcendental).
Thus we may never know.
Mathematicians widely believe the digits
in pi to be completely random. If so, this
provides another way of defining transcendental
numbers: by the power spectral density of
their decimal expansion.
By taking the Fourier transform of a number's
decimal expansion, its spectrum can be
determined. The spectrum will contain Dirac
delta functions at those frequencies which
define a periodicity. Thus the number
1.212121212... will have a sharp spike in
its spectrum at the frequency corresponding to
its 2-digit period. There will be little
energy anywhere else.
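Here is a small numpy sketch of that idea: take a digit
sequence with a 2-digit period and look for the spike in its
power spectrum. The digit string and helper names are mine,
purely for illustration.

import numpy as np

def digit_spectrum(digits):
    d = np.array(digits, dtype=float)
    d -= d.mean()                              # remove the DC component
    return np.abs(np.fft.rfft(d)) ** 2         # power spectrum of the digits

# 1.212121... : the digits after the decimal point repeat with period 2
periodic = [2, 1] * 64
spec = digit_spectrum(periodic)
peak = int(np.argmax(spec[1:])) + 1
n = len(periodic)
print("dominant frequency:", peak / n, "cycles/digit  (period", n / peak, "digits)")

The script reports a dominant frequency of 0.5 cycles per
digit--a period of 2 digits--with essentially no energy
anywhere else, exactly as described.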
Integers have an FFT which forms a narrow
or broad Gaussian, depending on the length
of the integer. Rational fractions have
broader spectra: algebraic irrationals have
spectra which are broader still.
Transcendental numbers (if the
mathematicians are right) likely have flat
spectra: that is, their digits exhibit
no periodicities. (This has not been proven, but
is widely believed.)
Since the Wiener-Khinchin theorem tells us that the
Fourier transform of the autocorrelation
function is the power spectral density, this
is only as we would expect--it is, after all,
merely another way of saying that the digits of
transcendental numbers appear to exhibit
no correlation with one another. There is
no pattern hidden in the decimal expansion.
Returning to the question of musical scales,
this gives us another way of defining tunings:
by their entropy.
Since statistical mechanics teaches us that
entropy is a measure of the total number of
available states in a system, clearly the
number of states available in a number's
decimal expansion is greatest for transcendental
numbers and least for integers.
The next digit in a transcendental number could
be anything: the number has maximum entropy.
Thus the entropy of a musical scale can
be defined by summing the entropies of the
numbers which comprise it.
The harmonic series clearly has least entropy;
next come JI scales made up of rational
fractions, next equal tempered scales made
up of algebraic irrationals, and finally
non-just non-equal-tempered scales made up
of transcendental numbers.
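One way to make this notion concrete--my own sketch, with the
choice of decimal digits as the "states" and the number of
digits sampled both being assumptions not specified above--is
to estimate the Shannon entropy of the first couple of
hundred digits of each scale degree and sum over the scale:

from collections import Counter
from decimal import Decimal, getcontext
import math

getcontext().prec = 220                  # enough digits to sample from

def digit_entropy(x, n_digits=200):
    # empirical Shannon entropy (bits/digit) of the decimal expansion
    s = str(+x).replace("-", "").replace(".", "")[:n_digits]
    counts = Counter(s)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

just_fifth = Decimal(3) / Decimal(2)             # 1.5       (JI)
et_fifth   = Decimal(2) ** (Decimal(7) / 12)     # 2^(7/12)  (12-TET)
print("3/2      :", digit_entropy(just_fifth))
print("2^(7/12) :", digit_entropy(et_fifth))

The rational degree comes out with a tiny entropy, the
equal-tempered degree with something close to the maximum of
log2(10) bits per digit, in line with the ranking above.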
In this sense, transcendental numbers can be
thought of as micro-universes, containing an
infinite possible number of available states,
and requiring an infinite amount of energy
to parse.
Although the full workings of the ear/brain
system are not yet completely understood,
it is safe to assume that however it operates,
the human auditory system can be modelled
as some sort of state-space machine.
In this case, we have a possible explanation
for the response of the ear to the octave
as well as Enrique Moreno's extended chroma
phenomenon. Since information is logarithmically
proportional to energy (another of Shannon's
theorems), it requires the least amount of
information/energy to parse a musical
interval which is an integer ratio of another
interval. Next most energy is required to
parse JI scales, still more to parse scales
involving algebraic irrationals (ET scales),
and the most energy/information is required
when parsing non-just non-equal-tempered
scales.
This accords well with Gary Morrison's and
my own findings about non-octave scales.
JI scales sound more "bland" than equal
tempered scales--or one might prefer
to put it the other way around and say
that Nth root of 2 tunings sound muddier
and more turbulent than JI tunings. Non-octave
scales sound "like thick rich chocolate milk
shakes," as Gary has pointed out, and my
own experience with non-just non-equal-
tempered scales indicates that these
tunings sound most complex and sonically
luxuriant of all.
However, this hypothesis is not supported
by the psychoacoustic data, which demonstrate
clearly that listeners consistently hear intervals
about 15 cents wider than the 2:1 octave as "pure octaves"
and exact 2:1 octaves as "too flat" and "out of
tune." Moreover, this assumes that the
human auditory system can be modelled as
a Turing machine which obeys linearity
and the superposition principle. However,
the data appears to indicate that many parts
of the human auditory system are non-linear
and do not obey the superposition principle.
In this case, the analogy with finite
automata may not be apt.
Lastly, if one wanted to go completely over
the edge, one could describe numbers in terms
of their dB signal-to-noise ratio by comparing
the normalized power spectral density of
the integer 1 to the PSD of the target number.
In this case transcendentals would exhibit
a zero dB signal-to-noise ratio, while integers
would exhibit no noise whatever and thus an
infinite signal-to-noise ratio. (Describing
numbers in terms of their signal-to-noise
ratio sounds absolutely insane until you
realize that this explains the extreme
noisiness of digital reverb systems; the
input signal becomes progressively degraded
by roundoff error during its trip through the
recirculating delay lines of the reverb
algorithm, and thus each sample of the
input suffers a progressive randomization
of its bits and thus a progressive decrease
in its signal-to-noise ratio.)
Incidentally, Gary's purported "mathematical
idiocy" pales before my own. Alert readers
will still be guffawing at my statement that
"i is the square root of -1." Obviously untrue,
since -i is also the square root of -1... As
Manuel op de Coul so delicately pointed out
during our meeting across the street from
Disneyland (an apt venue for wild-eyed
microtonalists).
Moreover, e^[i*pi] = -1, not 1.
No duh dude, as Gauss would doubtless say.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Tue, 19 Dec 1995 15:20 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id GAA28265; Tue, 19 Dec 1995 06:20:47 -0800
Date: Tue, 19 Dec 1995 06:20:47 -0800
Message-Id: <199512191519.QAA09965@elevator.source.co.at>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

12/19/1995 2:32:36 PM
From: mclaren
Subject: A window of opportunity for synth
manufacturers
---
Csound allows users to generate csound
timbres in real time from MIDI input. At
present, this capability is limited. But as
time passes and massively parallel
P7 desktop machines become typical,
this will change.
This means that synthesizer manufacturers
have a limited window of opportunity. They
can either get off their rear ends and start
building *real* synthesizers--instead of digital
sample playback boxes full of canned sounds
burned into ROM--or they can go the way of
the dinosaurs.
Within 5-10 years, the average computer
user will be able to generate complex
and interesting csound timbres in real
time via MIDI using a standard desktop
computer.
This brings up the issue of Csound's support
for real-time microtonality.
There isn't any.
Having mentioned this to John Fitch and
having received no reply, it seems appropriate
to bring it to the attention of the rest of the
forum subscribers. If you examine the source
code for Csound, you'll discover that Csound
translates MIDI note input into real-time
frequencies by using a 2^(N/12) function.
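To make the point concrete, here is a minimal Python sketch
(not Csound's actual source) contrasting that hard-wired
2^(N/12) mapping with a user-supplied tuning table. The table
format here is my own assumption, purely for illustration.

import math

def midi_to_hz_12tet(note, a4=440.0):
    # the mapping described above: reference * 2^(N/12)
    return a4 * 2.0 ** ((note - 69) / 12.0)

def midi_to_hz_table(note, cents_table, base_hz=261.625565):
    # cents_table: cents above the base pitch for each degree of an
    # octave-repeating scale, keyed here from MIDI note 60.
    degrees = len(cents_table)
    octave, degree = divmod(note - 60, degrees)
    return base_hz * 2.0 ** (octave + cents_table[degree] / 1200.0)

quarter_tones = [i * 50.0 for i in range(24)]      # 24-TET, for example
print(midi_to_hz_12tet(69))                        # 440.0
print(midi_to_hz_table(69, quarter_tones))         # about 339 Hz: nine 24-TET steps up from middle C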
Yes, having thrown off the shackles of
the piano keyboard and all limitations on
scale and tuning, Csound now makes it
possible for us to...play in 12 tones per
octave via MIDI.
Unbelievable. Disgusting. Outlandish.
Yet true.
Csound is locked into 12-TET for MIDI
playback until someone, somewhere
changes the source code. Naturally, since
this is the prime venue for non-12
computer applications in musical tuning,
not a single person on this forum has
ventured to deal with the problem.
Thus, as always, we head forward into
the past at the speed of light! Soon,
extraordinary sounds will issue from
our desktop computers...sounds locked
into 12-TET, thanks to the crippled
artificially limited MIDI-to-Csound
routines frozen into the current
generation of Csound.
There's another issue of concern to
microtonalists:
Has anyone noticed that frequencies
in the HETRO output are specified as
16-bit integers? With a frequency
range of 20 kHz, this gives a precision
of 20000/32768 Hz--about 0.6 Hz--not adequate to
describe the fine frequency shifts
that take place within individual
partials during the course of real-
world musical notes. 2/3 of a cent,
for instance, is 4% of a 72-TET scale
step. While this kind of tuning accuracy
is perfectly acceptable for the overall
musical scale-step, it is surely INadequate
for specifying the fine frequency shifts that
give each changing overtone its "lifelike"
sound during resynthesis, especially when
the resynthesized timbre plays in a microtonal
tuning.
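A quick Python sketch of the arithmetic (mine, not from the
Csound sources): quantize to steps of 20000/32768 Hz and see
how large the worst-case error is, in cents, at various
partial frequencies.

import math

STEP_HZ = 20000.0 / 32768.0          # about 0.61 Hz per code value

def worst_case_cents_error(freq_hz):
    # rounding to the nearest grid point can be off by half a step
    return 1200.0 * math.log2(1.0 + (STEP_HZ / 2.0) / freq_hz)

for f in (55.0, 110.0, 440.0, 1760.0):
    print(f"{f:7.1f} Hz : up to {worst_case_cents_error(f):.2f} cents")

Low partials fare worst: at 55 Hz the grid error approaches
10 cents, while at 1760 Hz it is a fraction of a cent.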
Naturally, no one has bothered to mention this
and naturally, no one has bothered to correct it.
More neglect of microtonality in the very field
(computer music) which ought to be most
congenial to new tunings & new scales.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Tue, 19 Dec 1995 23:36 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id OAA04729; Tue, 19 Dec 1995 14:36:47 -0800
Date: Tue, 19 Dec 1995 14:36:47 -0800
Message-Id: <9512191434.aa18824@cyber.cyber.net>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

12/19/1995 2:36:47 PM
From: mclaren
Subject: The dirty truth about Csound
---
Since no one has ever bothered to respond
to any of my e-mail about the problems
with Csound, time to go public.
Around 40% of the features of Csound do
not work and appear to be unimplemented.
Most of these features deal with spectral
modification of input signals, but the delay
line features also do not work.
One at a time, Csound's current problems are:
[1] One of the most obvious reasons to use
Csound is to analyze input timbres with
the Csound HETRO program and resynthesize
them with the partials warped into the
frequencies maximally consonant for a given
tuning.
Naturally, HETRO does not work and Csound's
ADSYN routines do not work. Way back
when, HETRO was written as an add-on to
analyze input sounds with a heterodyne filter.
It blew up if the input exceeded 32 kilobytes
in length. The problem was inherent to the
HETRO code, not the machine on which it ran.
Barry Vercoe fixed this problem in December
1994 but in so doing changed the format in
which HETRO stores numbers. The old format
stored partials in triplets: frequency, amplitude,
timeslice, frequency, amplitude, timeslice...&c.
The new format uses the same data structure
but the timeslice is now variable and indicates
a delta-t in milliseconds to the next envelope
breakpoint.
In the old HETRO, the user specified the number
of breakpoints and the spectral analysis output
automatically took care of giving a spectral
"snapshot" every length/(number of breakpoints)
milliseconds.
In the new HETRO, an internal algorithm minimizes
the number of breakpoints. This number is thus
entirely variable and the time between spectral
"snapshots" for each harmonic is unpredictable.
Alas, Csound cannot read spectral analysis files
in the new HETRO format.
This is of some concern to xenharmonists on this
tuning forum, inasmuch as one of the main areas
of interest for microtonalists working in computer
music is modifying acoustic spectra to render them
maximally consonant in a given tuning.
Thus, the two primary routines designed to analyze
and resynthesize Csound timbres are vaporware.
ADSYN and HETRO no longer work.
(Naturally, various forum subscribers will
deny this, and naturally they'll be--let us say--
"deliberately mistaked.")
This is a classic example of academic garbageware,
a subject to which I shall return in future posts.
Garbageware promises wonderful results, doesn't
work, and is always undocumented and bug-ridden.
Garbageware is always written by someone with 3
PhDs as a diversion, and naturally no support is
ever available for the software, since the programmer
is now in the Arctic studying the mating habits of the
krill shrimp under the permanent ice pack. Garbageware
always *almost* works--it does *just enough* to
tantalize the unwary user, but *never* enough to
be useful.
Naturally, no one has bothered to mention
garbageware in Computer Music Journal, ARRAY, or
anywhere else--so it's up to me (as usual).
Csound's "spectral modification" routines are
a classic example of garbageware.
If you try to use HETRO, you'll get an output
file, all right--an output file unreadable by
Csound. (Various people will now claim this
is not so, and they'll be--let us say--"deliberately
mistaken.")
Try it. Feed a sound into the HETRO program.
You'll get a .HET file out. Now feed it into
your Csound program using the ADSYN command.
Guess what? You'll get the message "bad
file read."
Yes, Barry Vercoe has changed the Csound
HETRO format without telling anyone. Thus
Csound's ADSYN command now cannot be used,
and has not run since the December 1994
build of Csound.
(Again, many forum subscribers will claim
this is not so, and again they'll be--let us
say--"deliberately mistaken.")
[2] Let's talk about ADSYN. This is an almost
entirely undocumented feature of
Csound. Vercoe's docs from a 1989 release
of Csound (not present in the current 1994
release of Csound available on John Fitch's
bulletin board) claim that "more details about
ADSYN will be given later." They never are.
ADSYN's command syntax is a mystery. No
one knows what it is. Why? Because only
Vercoe knows the full command syntax for
ADSYN, and he isn't telling.
Thus ADSYN is useless and might as well
not be a part of Csound. (Again, various
forum subscribers will deny this, and
again they'll be--let us say--"deliberately
mistaken.")
[3] One of the most grotesque paradoxes
of Csound is that the program is theoretically
capable of generating the world's most
impressive reverb. Naturally, no diagrams or
sample reverb programs exist. Of course
many diagrams and sample reverb programs
exist for MUSIC11 or other ancient programs,
and of course all these sample reverb programs
require special instructions not present in
the C version of Csound. Naturally, no one has ever
made public the code for any of the high-quality
reverbs used in Csound instruments from places
like CCRMA or IRCAM. Therefore (naturally!)
the only reverb currently available to users of
Csound is the 1970-vintage reverb using
4 all-pass filters and 2 comb filters. This is
one of the world's worst-sounding reverbs:
it sounds like a tile bathroom.
The result?
If you want to add high-quality reverb
to your Csound composition, you must
compile the Csound composition dry and
then play it through a commercial digital
reverb unit, then re-record the reverberated
Csound composition.
(Naturally, various forum subscribers will deny
this, and naturally they'll be--let us say--"deliberately
mistaken." There is a less polite term.)
This is so grotesque and so unthinkably
bizarre as to defy credibility. Yet there it is.
You want high-quality Csound reverb? Record
your Csound composition through a PCM-80
or an Alesis Quadraverb and re-record it.
When you realize that this means many, *many*
layers of re-recording to get different levels of
reverberation on a Csound composition, you
begin to realize the utterly insane nature
of the situation. Yet it persists. No high-
quality digital reverb instrument has ever been
published, no source code for a Csound
high-quality reverb is available anywhere,
at any time, in any way, for any reason.
(Naturally, various forum subscribers will deny
this, and naturally they'll be--let us say--
"deliberately mistaken.")
[4] Aside from a hi-fi type "tone control"
and a sharply resonant filter called reson,
Csound offers no facilities whatever for
spectral modification of input sounds. (Naturally,
various forum subscribers will deny this,
and naturally, they'll be--let us say--"deliberately
mistaken.")
Mark Dolson's LPC and the Lansky LPC routines
built into Csound produce unbearably distorted
output with so many artifacts as to be musically
unusable. Cutting down the input volume doesn't
help. Naturally, Paul Lansky's own LPC-
processed sounds exhibit *none* of these
artifacts...so (naturally) Paul Lansky must
be using a bunch of special C code he hasn't
bothered to make public.
However, not only do the Dolson and Lansky
LPC modules in CSound produce unlistenable
garbage audio output, there are no other
less sophisticated spectral modification
routines in Csound (aside from the reson
and tone modules).
There is, for example, no way to apply
a 50-peak formant filter to an input sound,
or to a Csound instrument on output. There is,
for example, no way within Csound to apply the
equivalent of a graphic or parametric
equalizer to the sound. There is assuredly
no way to apply anything like 256 bands
of boost and cut to various frequencies,
with the boost and cut specified to the fraction
of a dB.
Naturally, this would be trivial given Csound's
processing capabilities. So, naturally,
it's impossible.
(Again, various forum subscribers will deny this,
and again they'll be--let us say--"deliberately
mistaken.")
[5] The IRCAM fof module is almost completely
undocumented. I've never been able to get it to
work.
[6] The -f option to output floating point format
soundfiles doesn't work. Output is a blaring roar.
Pure high-quality noise.
[7] On the GCC build of Csound for the IBM PC,
instrument .orc files crash when the number of
variables exceeds 65535. It's fairly easy to
exceed that number with a single large additive
synthesis instrument--say, 128 partials with
128-point amplitude and frequency envelopes.
So let's see:
Fof doesn't work, HETRO and ADSYN don't work,
there's no way to build reverb in Csound, the -f
flag doesn't work, and there are no spectral
modifiers other than RESON and TONE--and the
Dolson and Lansky LPC produce unlistenable
distorted garbage output when used inside
Csound. You can't compile large additive
synthesis .orc files. And the Dolson PVOC routines
produce the message MATH ERROR and a register
dump after time-stretching long files, but it
doesn't appear to affect the soundfiles.
That's a good 40% of Csound that doesn't work.
While this sounds awful, it's actually a tremendous
achievement. A full 60% of Csound actually WORKS.
By contrast, the winner and all-time champion of
academic garbageware, F. R. Moore's cmusic, is
100% non-functional.
Carrying on the UCSD music department's tradition
of producing unusable junk software, cmusic is *classic*
garbageware. In the words of one of the members of
this tuning forum: "Even I know better than to download
that crap. With more than 3000 files in hundreds of
directories, it would probably take a month of recompiles
just to get cmusic to work on the machine it was
written for--much less port it."
As world-class garbageware, cmusic exhibits
all 4 of that species' salient characteristics:
[1] It's totally undocumented, and totally
unsupported. A vaguely-worded ASCII
file always arrives with the garbageware
executables--"You can do X, Y and Z with
this wonderful piece of software developed at
IRCAM!" And naturally, there's not a ghost
of a clue *how* to use the software. Naturally,
the command syntax always involves something baroque
and outlandish--CTRL-ALT-SHIFT-LEFT PARENTHESIS-
+-BACKSLASH-SUB-COLON-COMMAND-F7, or
some such. Naturally, you'll *never* find this out
from the "docs" which arrive with the
garbageware, so naturally the software
is useless. Inputting a "?" or "HELP COMMAND"
message always produces the message "INCORRECT
SYNTAX."
[2] Garbageware always needs 5 special hidden install files
to run on *your* computer. Meanwhile, it comes with 5693
different install files for OTHER computers--a PRISM compiler
for the Connection Machine, an assembly loader for the
Commodore PET, and an INSTALL routine that runs on the
mercury delay line of a 1948 ENIAC--but if you want
to run the Garbageware on *YOUR* computer (a Mac or a
PC), hey! Guess what?
Yep.
You're out of luck.
Naturally!
Yes, indeedy, the special install files *aren't* included
with YOUR version of the garbageware. And where can
you find them?
You can't!
The programmer wrote the garbageware to run only on
(say) his Kaypro Robby, and never anticipated that anyone
would run the software on an exotic outre machine...
like, say, a Macintosh or an IBM PC.
[3] Once you get the garbageware up and running,
you'll discover some delightful bugs. The
garbageware goes south on Tuesday during full
moons but only when it's raining. Naturally,
the developer at IRCAM or wherever
will describe this as a "special feature."
One of the most interesting of the "special
features" of the latest GCC compile of
Csound is that it never stops compiling on
long scores. Yes, the compile continues from
forever to forever--infinite compilation! What
an advance! Why not devote an issue of Confuser
Music Urinal to this marvellous feature???
Do you have 400 megs free on your hard disk? Want
to use it up? Compile a 7-minute Csound score under
the GCC Csound--you'll use up all 400 megs in 1 file!
Meanwhile, if you want to end the compile session,
you do so by hitting CTRL-BREAK. Clever command
syntax, eh?
[4] Last and best of all, you'll finally come
across docs for the garbageware--docs to version
2.19 from Carnegie-Mellon. However,
all ftp sites currently carry v. 5.62 from
CCRMA. And guess what?
Yep! The command syntax has changed totally!
The command CSOUND -D -H %1 %2 now does
something interesting and different--perhaps, say,
wiping your hard disk and reformatting it. (Gosh.
What a useful feature...)
Hurrah for academic garbageware! Without
it, where would we be?
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Wed, 20 Dec 1995 06:05 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id VAA07412; Tue, 19 Dec 1995 21:05:36 -0800
Date: Tue, 19 Dec 1995 21:05:36 -0800
Message-Id: <951220000424_59067722@mail06.mail.aol.com>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

12/20/1995 1:02:56 PM
From: mclaren
Subject: The collapse of innovation
in post-1988 synthesizer technology
---
In a previous post, Your Humble E-Mail
Correspondent mentioned the general
excellence of Ensoniq's synths. They
sound about as good as anything else out
there, Ensoniq synths are rock-solid
reliable, and their sampler operating
systems are particularly intuitive and
easy to use. Anyone who battled the
hellish TX16-W Yamaha sampler operating
system or the botched E-Mu sampler OS's
from the late 80s is well qualified to
appreciate the excellence of the EPS/
EPS-16/ASR-10's operating system.
However, there's still plenty of room
for improvement in the Ensoniq line.
Someone posted a query about that--how
can anyone say Ensoniq isn't up to date
technologically?
Here's how:
[1] The ASR-10 needs more RAM. A *LOT*
more RAM. Right now, the Kurzweil 2500
series can take up to 128 megs. This
is a reasonable minimum amount: 256 megs
would be more like it. But 128 megs is a
start.
The ASR-10, by contrast, is stuck with 16 megs.
This is around 90 seconds of sampling time at
44.1 kHz stereo. Completely unacceptable. Far
too small an amount of RAM to be useful.
Part of the problem is the kbd controller chip
Ensoniq uses to address the RAM, part of the
problem is the burden of maintaining backwards
compatibility with the EPS/EPS-16 &c.
Backward compatibility must go. The ASR-10's
successor needs more RAM. A *LOT* more.
With EDO, the people at Ensoniq need to start
thinking in terms of *gigabytes* of RAM.
(As always, readers will call this "insane"
today, "a little over the top," in 6 months,
and "very sensible, but somewhat conservative"
in a year.)
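For the record, the sampling-time figures in point [1] above
follow from simple arithmetic; a quick Python check of my own:

def stereo_seconds(ram_bytes, sr=44100, channels=2, bytes_per_sample=2):
    # seconds of 16-bit linear stereo audio that fit in a given amount of RAM
    return ram_bytes / (sr * channels * bytes_per_sample)

print(round(stereo_seconds(16 * 2**20)))    # 16 MB  -> about 95 seconds
print(round(stereo_seconds(128 * 2**20)))   # 128 MB -> about 761 seconds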
[2] A rule of thumb is that a decent saxophone
or clarinet needs 25-40 multisamples. The
ASR-10 allows 8 layers per instrument,
8 instruments total. This is utterly inadequate.
Backward compatibility must go. Dump the 8
layer limit. At least 127 A-B crossfades should
be allowed per multisampled instrument, at least
16 MIDI channels/instruments at a time.
[3] David Doty made a point about accessing various
layers during performance. Clearly Ensoniq needs
to upgrade the ASR-10's successor to allow MIDI
access to each of the 127 layers within an
instrument. Since these would often be used for
alternate tunings, it's a particular priority.
[4] Ensoniq may want to think about implementing
some new technology.
Ever since innovation ground to a screeching halt
in the synthesizer industry, somewhere around 1988,
industry pundits have wondered why sales of
digital keyboard instruments have dropped
steadily.
There's no mystery.
The Korg M-1 provided the original and ever since
then all the keyboard manufacturers have concentrated
on cranking out endless xerox copies of that instrument,
all using exactly the same antique technology:
sample playback.
With the exception of the Yamaha VL-1M/VL-7 and
the E-Mu Morpheus, all current synthesizers are
nothing but sample playback machines that spit back
digital recordings burned into ROM when you press
a key.
Now, there's nothing wrong with sample playback. It's
a fine technology. But after a while, you get tired of
hearing nothing but sample playback.
Even today's samplers use exactly the same technology--
with the only wrinkle being that *you* get to choose
what digital recording is regurgitated when you press
a key, instead of *the synth company* choosing the sound.
All today's synths are basically nothing but digital
Mellotrons. Where's the synthesis?
Does anyone remember the origin of the term "synthesizer"?
These instruments are supposed to *generate new sounds.*
Instead, we get yet another canned B-3 sample
burned into ROM. And no matter how much reverb, phasing,
ring modulation, flanging or delay you slather on top of
a sound burned into ROM, it all sounds pretty much the
same.
Around 1988, synth companies stopped making synthesizers.
Instead, they all followed the cattle stampede toward
the easy dollar.
The net result is that there is today hardly any reason
to buy one synth from one manufacturer rather than another.
Ensoniq's tuning tables make a difference. But if they *really*
want to increase sales, how about building some actual
synthesizers for a change?
Even Yamaha has dropped the ball. Today, if you want to
buy an FM synth you're out of luck. You have to buy one
used, or pay for a Kyma.
The brutal reality is there are *dozens* of synthesis
methods: digital additive, subtractive, frequency
modulation, amplitude modulation, Chebyshev
distortion, Miller Puckette's formant synthesis,
Lansky's LPC synthesis, phase vocoder analysis/
resynthesis, Daubechies wavelets, Walsh function
analysis/resynthesis, Dashow's exponentiation
synthesis, Hiller & Ruiz's physical modelling
synthesis, waveguide synthesis, many others.
Yet aside from Yamaha's VL-7/VL-1M physical
modelling synth, not a single manufacturer has
implemented *any* of these synthesis techniques
in a currently available commercial synthesizer.
Amazing.
Shocking.
Yet true.
If Ensoniq wants to jump-start synth sales,
they might think about implementing some of
these synthesis techniques.
Now that Korg's OASYS synthesizer has been
discontinued--yet another case of classic
vaporware--and the Gibson/G-WIZ labs' FAR
Fourier resynthesizer cannot be purchased
by anyone, anywhere, for any reason, at any time,
any way, shape or form...well, now that these
vaporware instruments have bitten the dust,
what else is there on the horizon?
Zero.
Zilch.
Zip.
Squat.
Diddly.
Nada.
These synths have joined the parade of
vaporware formed by the Prism synthesizer
(remember that one?), the additive synth
built from the Amiga's Amy sound chip
(256 additive partials--it gobbled the Amiga's
entire CPU and memory so the company dumped
it and licensed the rights to a start-up which
was promptly sued out of existence), and the
marvellous Technos 16pi...a synthesizer which,
if it had ever existed, would have been
superlative.
Well, chances are this is all blue-sky
fabulation. Chances are Ensoniq won't bother
to actually build synthesizers. Too much work.
And the MR rack tends to bolster that
viewpoint. Yet another wannabe sample-playback
box, yet another digital Mellotron.
It's ironic that Ensoniq is giving up the opportunity
to crush the Japanese synth companies. What with
their little Yen crisis and teetering financial
system, this is an ideal chance for American synth
companies to steal back the initiative that was
lost when the Japanese licensed FM technology
and ground the American analog synth manufacturers
into the dirt back circa 1983.
In any case, these remarks should be understood
in the context of making Ensoniq's excellent products
better. Rather than angering the engineers and
management at Ensoniq, perhaps these words
will irk them into improving already fine
synths.
N.B.: Even though the Kurzweil 2500 series offers
gobs of RAM, the sampler does *NOT* have a
full-keyboard tuning table. Thus my next sampler
will be an ASR-10. Also, Dave Rossum at E-Mu
needs to take a look at Ensoniq's multiple tuning
tables and realize the *immense* importance of
more than one tuning table. JI compositions
which change key centers demand multiple tuning
tables, as does work with (say) a Wilson 70-note
hebdomekontany in only 128 MIDI notes. Allen
Strange has mentioned that he gets only 3 octaves
of Partch's 43-tone just scale in MIDI's 128 notes;
using the same timbre on 3 MIDI channels tuned
3 octaves apart would increase his range to
9 octaves. And, as usual, Ensoniq is the *only*
current manufacturer to support multiple tuning
tables.
Thus, for many microtonal applications, Ensoniq
synths remain the only real choice.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Wed, 20 Dec 1995 22:57 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id NAA26137; Wed, 20 Dec 1995 13:56:54 -0800
Date: Wed, 20 Dec 1995 13:56:54 -0800
Message-Id:
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

12/21/1995 4:00:01 PM
From: mclaren
Subject: unsavory habits of the musical intelligentsia -- or --
the 12-TET musical elite have a point, but if they comb their hair
properly, it won't show
---
The savage dictatorship of the concert-hall-and-conservatory
musical establishment, which Greg Taylor is pleased to
imagine the product of my paranoid delusions, will brook
no deviation from the totalitarian musical status quo.
The clearest example of this Orwellian and ruthless
state of musical conformity is of course the string
quartet. Offhand, there's no reason at all why 4 fretless
string instruments couldn't perform in any scale desired--
19-TET, 31-TET, 53-TET, 11-TET, the free-free metal
bar scale, the Bohlen-Pierce scale, or any other tuning.
Naturally, any string quartet that gets handed such a score
will burn the offending sheaf of music paper and
bury the ashes. Naturally, all string players have been
programmed to perform in 12, only 12, always 12, forever
12.
Another example of ruthless 12-TET tyranny is the computer
music scene. Given the opportunity to explore any possible
musical scale, any possible set of harmonies, any conceivable
set of melodic pitches, the contributors to Confuser Music Urinal
choose to...explore 12 tones per octave. (For the most part. There
are a few exceptions. Dashow, Schottstaedt, a few others. All
have been ostracized and marginalized for their troubles.)
This is reminiscent of aged Devil's Island convicts who, being
set free of their ball-and-chain, still drag one foot and move
with snail-like gait. In this case the musiKKKal establishment
has admirably attained its implicit goal of brainwashing
all & sundry into the use of 12 tones per octave: indeed, the Red
Chinese during the Korean war could hardly have hoped for better
results with U.N. prisoners. Such a state of mindless (musical)
conformity would bring joy to the heart of Stalin, and
send Hitler to sleep with an ecstatic smile on his face.
After composing some pieces recently in the Greek enharmonic
genus for an upcoming performance in a San Diego experimental
music club, the true bestiality of the current musiKKKal status
quo really began to become clear to me. Think of it: for hundreds
of years, it would have been a simple matter to retune a harpsichord
or a piano to the Greek enharmonic...yet *no one* did so. This is a
case of conformity unexampled in the history of mind control.
Such machine-like zombification would make even Mussolini cringe--
yet, somehow, we accept this grotesque and obscene lockstep
mentality as "the progress of music." ("From 12 to 12...in the beginning
was 12, and 12 was the Word, and the Word was 12..." One's puke-
meter pegs. One's gag reflex kicks in. And *still* it continues...)
The enharmonic genus is, as Ralph David Hill has pointed out, "almost
the most beautiful of all the Greek genera," and simple to
obtain on a 12-TET retunable instrument. No note would need to be
retuned by more than about half a semitone. Yet not a single composer,
not a single adventurous soul, dared to compose a piano sonata in
the Greek enharmonic genus. Too, with 12 pitches available, more
than one key center could have been explored: yet this was apparently
too terrifying an extremum for generation upon generation of
composers to contemplate.
In the words of the Outer Limits episode "O.B.I.T.": "O savage
despairing planet! When we come here to live, you will fall
without a single shot. Enjoy the few years left to you..."
Naturally these brutal and disgusting facts will be prestidigitated
away by gtaylor and his cronies with yet more smoke and mirrors,
reason upon complicated and unlikely reason why the iron fist of
12 equal tones doesn't *actually* rule inside the velvet glove of the
musiKKKal concert-hall-and-conservatory establishment.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Fri, 22 Dec 1995 16:29 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id HAA22330; Fri, 22 Dec 1995 07:29:49 -0800
Date: Fri, 22 Dec 1995 07:29:49 -0800
Message-Id: <9512220730.aa20418@cyber.cyber.net>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

12/22/1995 7:29:49 AM
From: mclaren
Subject: Transfinite systems Nugget or Gold Brick
---
Having obtained a used Mattel Power Glove from
one of the members of this tuning forum, I
recently learned that the company which makes
the widget that goes twixt the Power Glove
and the Mac ADB port has changed its address.
Called the number for Transfinite Systems.
Someone answered. Not Transfinite Systems.
Didn't know what had happened to the company.
Folks, it shouldn't be this goddamn hard. You
try and you try and you try, and the net result
is: zero.
Does anyone out there have a new address or
phone number for Transfinite Systems?
Does anyone out there have a lead on a Gold Brick
or a Nugget interface box?
--mclaren


Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Fri, 22 Dec 1995 16:32 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id HAA22937; Fri, 22 Dec 1995 07:32:34 -0800
Date: Fri, 22 Dec 1995 07:32:34 -0800
Message-Id: <9512220730.aa20424@cyber.cyber.net>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

12/22/1995 7:32:34 AM
From: mclaren
Subject: Yamaha VL-1M and VL-7
---
The Yamaha VL-7/VL-1M physical modelling
synthesizers implement tuning in a weird way.
Here's the skinny:
You can't edit the 2 user tuning tables, I01 and I02.
Instead, you have to edit them on a TG-77 or SY-77
or on JICalc, then do a sys-ex dump to the VL-7/
VL-1M.
Why Yamaha chose to implement microtuning this
way is beyond me. It certainly makes it less
convenient to retune the instrument. As a plus,
user tunings are reportedly stored with the
instrument patches and are loaded from the disk
automatically.
The way Yamaha implements physical modelling
on the instrument is also peculiar and worth a
mention. The physical model is fixed: a blown tube.
To get a Karplus-Strong plucked string sound (typified
by the fretless bass and sitar patches), the physical
model's mouthpiece is connected to its output. A
kludge--but one that works. A recirculating system
is created which, with appropriate losses for
acoustic admittance, mimics the Karplus-Strong
algorithm pretty well.
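Since the comparison is to the classic Karplus-Strong plucked
string, here is a bare-bones Python sketch of that algorithm
for reference--my own illustration, nothing to do with
Yamaha's actual implementation.

import numpy as np

def karplus_strong(freq_hz, duration_s=1.0, sr=44100, damping=0.996):
    period = int(round(sr / freq_hz))
    rng = np.random.default_rng(0)
    buf = rng.uniform(-1.0, 1.0, period)        # burst of noise = the "pluck"
    out = np.empty(int(sr * duration_s))
    for n in range(len(out)):
        out[n] = buf[n % period]
        # average two adjacent samples and feed back: a gentle low-pass loop
        buf[n % period] = damping * 0.5 * (buf[n % period] + buf[(n + 1) % period])
    return out

tone = karplus_strong(220.0)
print(len(tone), float(np.abs(tone[-4410:]).max()))   # length and decayed tail level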
To get a vibrating string, the tube is apparently
shrunk down to near-zero width. The resulting
one-dimensional tube subs for a vibrating string
and apparently also allows the user to apply a
mouthpiece with "embrouchure" to the vibrating
string--something not possible with a standard
Hiller-Ruiz vibrating string physical model or
the classic Julius Smith waveguide physical
model of the string.
Rumor has it that Yamaha has a MAX patch
available that'll allow users to completely
change the internal physical model. Instead
of being limited to a blown tube, the user can
tinker with internal VL-7 parameters and specify
any acoustical system desired. Apparently, the
MAX patch comes with a WARNING -- KNOWLEDGE
of PHYSICAL ACOUSTICS IS REQUIRED TO USE
THIS EDITOR. Apparently it's easy to specify
an acoustic system which *cannot* produce
sound output. (Arthur Benade called these
things "tacit horns." Nice design, no sound.)
Does anyone have any knowledge of this
mythical MAX patch? As a xenharmonizing
future VL-7 owner, I find this question of
some interest.
--mclaren


Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Fri, 22 Dec 1995 16:51 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id HAA23570; Fri, 22 Dec 1995 07:51:24 -0800
Date: Fri, 22 Dec 1995 07:51:24 -0800
Message-Id: <0099B415E1E82CFE.5467@ezh.nl>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

1/4/1996 8:23:26 AM
From: mclaren
Subject: Paul Rapoport's article
"The Notation of Equal Temperaments"
---
Xenharmonikon 16, available now from Frog Peak
Music or from John Chalmers, contains Paul's
latest essay on non-12 notation.
This is a subject littered with caltrops. To date, no
one has succeeded in producing a universal notation
which allows both easy performance AND analysis
of xenharmonic music.
Nonetheless, Paul makes many good points in
this article.
"Despite the considerable history of ETs, few
theorists have studied their properties or created
musical notations for them systematically. Because
most of the ad hoc solutions in notation of one ET
do not extend easily to many other ETs, such solutions
obscure similarities in related temperaments."
[Rapoport, P., XH 16, pg. 61]
However, the worm soon munches its way free of the
apple: "This article explores almost exclusively ETs
of the octave and implications based on harmonics
no higher than 5."
The implicit presumption? That one ought to strive for
a unified system of notation which relates all the
especially 12-like tunings with good fifths.
While this is a laudable goal, one needs must ask:
why spend so much effort on the 12-like tunings?
By far the most interesting equal temperaments
are those which have *little or nothing* in common with
traditional Pythagorean and 5-limit tunings. There
exist many oddball equal temperaments which
cannot be notated usefully by a 5-limit scheme,
yet whose "sound" compels ever-growing interest.
In this regard, 9-TET, 10-TET, 23-TET, 22-TET
and 21-TET stand out in particular.
9-TET, for example, violently abjures traditional notation,
yet it remains an endlessly fertile breeding ground
for xenharmonic compositions. In fact Erv Wilson has
called it one of his favorite tunings.
Paul derives several guiding principles:
[1] Additional signs must be perceptually distinct
from one another.
[2] Notation must reflect the determined nature of
the temperament. If there's more than one
way of deriving the temperament, there should
be more than one way of notating it.
[3] Additional signs beyond bb, x, etc., should be
created for kommata representing the differences
between pairs of multiples of just intervals.
The first 2 points are eminently sensible.
Point 3 seems incomprehensible to me. If you're
going to relate your equal temperament explicitly to
just intonation via the notation used, you might as
well abandon equal temperament and go straight to
just intonation. Few equal temperaments < 48 are awfully
JI-like, and all of 'em contain internal structures
which inevitably frustrate the attempt to view
them as this or that outgrowth of just intonation.
As a result, many of the intervals which Paul
defines as "kommata" do not function as such in
equal temperaments below 48-TET. For instance, in 22-TET
a single scale-step is numerically a fair approximation
of the Pythagorean komma, but it does NOT REMOTELY
function like a just interval in context. (Significantly,
Paul does not discuss 22-TET.)
Again, the 1/17 octave interval in 17-TET, while
numerically near-indistinguishable from the
difference twixt the 6/5 and the 5/4, sounds in
practice like a semitone. In an actual 17-TET
composition, this interval contains NO audible
implications of just intonation whatever.
31-TET contains an interval quite close to the
41.059 cent "diesis," yet a single scale-step of
31 conjures up no auditory ghost of just
intonation. In fast chromatic runs, the 31-TET
scale-step functions like a very strange semitone,
and when used as a passing-tone twixt chords
its function is a somewhat small chromatic
passing-tone.
The idea of imposing on any but a few ETs with very
good fifths (all > 48, 46, 43, 41, 39, 37, 36, 34,
31, 29, 24, 19, 17, 12) a set of just kommata purporting
to represent the substructure of the tuning
seems suspect to me. Many intervals numerically
close to JI kommata do not function as such
in the context of the equal tempered scales below
48-TET, and the futile attempt to force them to do so
produces mediocre music.
Examples of such music include some of Easley
Blackwood's "Twelve Microtonal Etudes For Electronic
Music Media," a project only partially successful.
The etudes which hew most closely to Blackwood's
theoretical and notational principles prove LEAST
successful as music: in particular, 17-TET, 16-TET,
19-TET and 20-TET. Blackwood jams 19-TET into a
12/oct straitjacket, and his 19-tone etude
suffers greatly from his refusal to showcase 19's
exotic non-Pythagorean intervals.
In the 16-TET etude, Blackwood derives an awkward
pseudo-tonal mode which goes against the inherently
anti-tonal, non-cadential grain of the tuning,
and the result sounds bad. The 17-TET etude suffers
grievously from Blackwood's unwillingness to recognize
& use the 5-unit neutral third as the ONLY functional
triadic third, while the 20-TET etude is mangled
by Blackwood's relentless insistence on generating
diatonic modes from 20/oct, rather than exploiting its
interlocking circles of NON-diatonic pentatonic 5ths
(as he did in 15-TET).
All of these problems stem from Blackwood's
doomed attempt to extend 12-like structures
into entirely xenharmonic tunings. He could have
avoided these faux pas if he'd ignored his
Pythagorean notational kommatic calculations and
relied on his ear instead of v, a, k, t and p.
To put it succinctly, Easley Blackwood concentrated
far too much on the "harmonic" and far too little
on the "xen" aspects of these xenharmonic scales.
By contrast, those etudes in which Blackwood
pretty much tossed out his theories and just sort
of gave up & winged it seem by FAR the most successful
musically: 13-TET, 23-TET, 14-TET.
Moreover:
In calling Blackwood's project "the most substantial
non-improvised recorded project in different
equal temperaments," Paul renders a decidedly
peculiar definition of "notation." Most of us would
argue that William Schottstaedt's and Jean-Claude
Risset's and James Dashow's and John Chowning's
and Richard Karpen's and Richard Boulanger's
computer compositions are FULLY notated...they
simply use a notation radically at odds with Paul's
cherished Pythagorean common-practice-period-based
paradigm.
Moreover, my own MIDI compositions and those of
many other xenharmonic composers use a notation
which is also perfectly standardized, entirely
reproducible, and which allows others to examine
and analyze the compositions--we use the MIDI
file format in which notes are represented as numbers
from 0 to 127. Again, presumably because this
notation is something that Beethoven wouldn't have
used, somehow our compositions don't exist and
aren't worthy of audition, analysis or (presumably)
mention.
Peculiar indeed.
This leaves aside, of course, algorithmically
composed xenharmonic music. That's a whole 'nother
passel o' varmints, chilluns.
Yet all the pieces of music mentioned above are carefully
and painstakingly composed, note by note, with *AT
LEAST* as much attention to detail as shown in
Blackwood's scores. The main difference is not
that "many such [electronic/computer music] works
are created without a score," but rather that they use
musical scores which Paul chooses *NOT* to recognize
as a valid symbology.
Thus, while Paul mentions "this article does not
address the issue of the utility of scores," it ALSO
(much more glaringly) does not address the issue of
whether a piece of carefully-composed, closely-
worked-out music notated in a completely untraditional
way (e.g., Csound .sco file or MIDI file) ought to be
treated as though it doesn't exist merely because
the notation cannot be viewed as a variant of
some 19th-century central European concoction.
(Slippery slope time: do Gregorian chants notated
with neumes qualify as musical scores? If so, why
don't MIDI files printed out in piano-roll notation?
And why aren't sonograms of Risset's and Chowning's
and Dashow's and Schottstaedt's and Lansky's
computer music pieces *also* scores? ...We're on the
slippery slope, and it's gettin' slipperier by the minute...
In desperation, one MIGHT claim that 19th century musical
scores allow acoustic performers to reproduce the music.
This (such logic goes) distinguishes them from MIDI
files or Csound .sco files.
But how many live acoustic xenharmonic concerts did
YOU attend last year? And didn't ALL of 'em use one-of-
a-kind exotic homebuilt acoustic microtonal instruments?
So what good is a 19th-century-type score if there's only
one set of microtonal instruments in the world that
can play 'em?
..Slippery slope time, kiddies! Let's face it:
essentially no one attends or gives live acoustic
concerts any more, and since 99.999999% of the music we
all hear is now delivered via electronic media, this is
a VERY weak and flabby and etiolated argument for ANY
particular flavor of musical notation.)
Paul's chart of kommata is admirably clear and his
ranking exemplary; he is probably right that,
for ETs with good fifths, the syntonic komma is
most important for quasi-19th-century notation.
Paul makes a good point in dealing with 17:
"the third in question (5 u) happens to lie very close
to half way between the actual just major third
(386.314 cents) and just minor third (315.64 cents).
It may therefore be interpreted as either or neither,
depending on musical treatment of the temperament."
This leads to an alternate notation which does not
involve sharps or flats, and proves much superior to
Easley Blackwood's notation for 17.
However, those of us who've worked extensively
with 17 would go even farther--many of us would
contend that 17 has only ONE functional triadic third:
the 5-unit third. The so-called "major" third in 17
is unbearably dissonant and useful only in melodies,
or vertically as a passing tone or a cambiata. Thus
many of us would urge that the so-called "major" third
of 17 be notated as a species of fourth--since, like
that interval, it is functionally unstable when employed
vertically.
Paul's 2nd notation of 53 seems as good as any other.
It avoids the problem of too many flats and sharps,
as usual by substituting odd new symbols. Again,
this eases clutter but reduces sight-readability. New
symbols instead of the familiar sharp & flat will
always prove more ambiguous for sight-reading,
since they're by definition unfamiliar.
Paul's first notation for 25-TET seems less than
successful, since it uses note-names E & F.
25-TET's most obvious audible characteristic is
its pentatonic "mood." This, because 25 boasts
not one but 5 circles of identical 5-TET fifths.
The 25-TET fifth sounds unmistakably pentatonic--
it's the same 720-cent fifth found in all multiples
of 5-TET up to 45-TET.
Thus, a notation which implies that there are
more tones than sharped- or flatted-versions
of C, D, E, G and A proves less than useful.
Paul's 2nd proposed notation admirably corrects
this problem and exposes the five pentatonic
circles of fifths, as does Paul's third
suggested notation. This is a big improvement
over Blackwood's notation, which retained
E and F and B and C as exact enharmonic
equivalents (talk about willful obfuscation!).
Paul also points out that even *his* generalized
notation breaks down for ETs without fifths.
As an example, he gives 13...which has certainly
resisted any attempt to force it into a traditional
notational mold.
This is inevitable. No notation can cover all
equal temperaments. The main question is:
where ought the notation to break down?
And how?
Curiously absent in this regard are tunings with
good fifths but absolutely no point of contact to
traditional tunings: 26-TET, 22-TET, 35-TET, etc.
Paul's treatment of 13-TET as every other note
of 26-TET strikes me as peculiar, inasmuch
as the two tunings bear not even the most remote
audible relation to one another. Any relationship
is a purely augenmusik calculated-numbers-on-
paper sort of thing, and does not strike me as
productive. Similarly, notating 11 as every
other note of 22 would be equally fruitless--the
mind can calculate, but the ear cannot hear,
a relationship between the two. In both cases,
one must *listen* to the tuning and *throw out* the
numbers if they conflict with common sense.
In both cases, notation for 11-TET ought to bear
NO resemblance to notation for 22, ditto
notation for 13 and 26. If the two tunings sound
utterly different, they should be notated utterly
differently.
Perhaps Paul should add this as a 4th general
principle...?
Paul's treatment of negative kommata (33-TET)
seems eminently reasonable. By avoiding sharps
and flats, many notational paradoxes are averted.
Of course, dispensing with sharps and flats also
eliminates much of the analytic value of a
musical notation. If you can't tell at a glance
whether one note is higher or lower than another,
it automatically poses problems for musical
analysis. In this case, one might be better off
studying a printout of the MIDI note numbers, or
the Csound .sco file Hz values. But for Paul's
purposes it is obviously better not to raise such
unsettling questions.
His treatment of 14-TET proves less satisfactory.
Alas, in 14 (as in 7-TET) the modes collapse back
into the keys. There is no major or minor: 3
scale-steps give 257.1 cents, too small to
sound or function as a minor third, while 4
scale-steps yield 342.8 cents, a neutral
third antithetical to Pythagorean theory. 5
units = 428.6 cents, too large to function
as any kind of recognizable major third.
This situation proves so puzzling to devotees of
19th-century-style symbology that it forces
the unwary notation-theorist to twist hi/rself
into knots to get out of the problem.
Notating 14 by taking every other note from 28
begs the problem. In fact, the issue is that
14 has two circles of 7 identical fifths, whereas
28 has 4 of them, and they ALL use neutral intervals
as building-blocks. The 2-out-of-28 dodge
obscures this basic fact, and tends to dupe the
inexperienced xenharmonist into imagining
that 14 has something like a major or a minor
mode when in fact it has neither: merely two
overlapping neutral 7-tone scales melding into
a neutral 14-tone scale--and not a diatonic 14,
either. Logic would indicate two simple
overlapping sets of identical A B C D E F G
symbols. Perhaps A A* B B* C C* D D* E E*
F F* G G*? (Ivor Darreg's notation.)
This example illustrates the problems that Paul's
generalized notation creates when there are
NO Pythagorean landmarks--in this case, because
the multiples of 7-TET up to 42-TET are constructed
from completely non-diatonic building blocks.
A Pythagorean musical paradigm fails when faced
with 7 anti-diatonic utterly equidistant tones:
it flails like a moth caught in a searchlight.
The product of a xenharmonic notation based on
inappropriate diatonic kommatic assumptions is bound
to falter & collapse for multiples of 7-equal.
In fairness, Paul points out the problems his notation
encounters with 50-TET, which is certainly no
surprise: no proposed notation has dealt adroitly
with such an oddball tuning. (Ditto 32, 27, 29, and
particularly 35, which is probably the ultimate
nightmare from notation hell!)
Paul's introduction of numbers as superscripts is
clearly UNsuccessful. The entire reason for using
symbols to notate notes, rather than Arabic numerals,
is that the human brain has evolved a marvellous
pattern-recognition facility which operates at vastly
greater speed than any possible number-calculation
facility.
Once memorized, symbols are instantly processed
by a huge glob of visual cortex. Ergo, the
miracle of sight-reading. Not so numbers. Numeric
stuff crawls through the forebrain, where it clogs
everything up and bogs everything down.
Thus, it is impossible to instantly sight-read
columns of numerals, whereas one can easily sight-read
and musically analyze a bunch of visually striking
symbols.
Combining numerals with symbols forces the
brain's spiffy pattern-recognition wetware to slow
down to the pace of the number-recognizing forebrain
(a much more recent and thus less efficient evolutionary
addition), auguring ill both for the prospective
sight-reader AND the would-be music theorist.
Paul points out that F. R. Herf's and E. Sims' 72-TET
notation is a one-off chimera, not useful for other
temperaments. True, alas, and typical of all too many
xenharmonic notations based on but a single tuning.
The same could be said of Joseph Yasser's 19-TET
notation, etc.
Overall, the article is refreshing and offers
excellent insights. While many of us would quarrel
with the issue of what constitutes notation, Paul
appears to have generally succeeded in producing
a notation flexible enough to accommodate non-weird
non-oddball equal temperaments below about 53--
or at least, those which boast good fifths.
In my judgment the "weird" tunings like 26
and 19 and 22 and 32 demand entirely separate
treatment. Ideally, these tunings ought to have their
OWN notations--preferably as distinct as possible
from any others.
It seems clear that sharps and flats are most useful
in those tunings which *sound* as though they
have recognizable semitones. Thus, use of sharps and
flats in 19 is wilfully perverse--and hellishly confusing
in 9 or 10. There may be no way out of this conundrum.
The issue of ETs without fifths was deftly resolved
by Augusto Novaro, who simply proposed using
numbers instead of noteheads on a single staff
line.
Incidentally, by proposing a notation for 171-TET
Paul has also notated the non-octave scale Carlos
Gamma, since it is audibly identical to every 5th note
out of 171-TET.
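The arithmetic, for the curious: Gamma's generator is about
35.1 cents--one common derivation splits a pure 3:2 into 20
equal parts--while 5 degrees of 171-TET come to 6000/171 cents.
A rough check, with that derivation of Gamma taken as an
assumption on my part:

import math

gamma_step = 1200 * math.log2(3 / 2) / 20   # ~35.098 cents (assumed derivation)
tet171_x5 = 5 * 1200 / 171                  # ~35.088 cents
print(round(gamma_step, 3), round(tet171_x5, 3))   # 35.098 35.088

About a hundredth of a cent per step--far below anything the
ear can resolve.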
A much larger issue is the question of whether 7 basic
note-names is or should be the be-all and end-all of
music notation. Miller's article "The Magical Number Seven,
Plus or Minus Two" (Psychological Review, 1956) makes it clear that
the human brain can efficaciously process as few as 5
or as many as 9 different note-names. Yet there have,
to date, been almost NO suggestions for augmenting
the basic A B C D E F G seven note names by
including up to two more--say, H and I (NOT to
be confused with the German H for "B natural").
There have also been NO discussions whatever
to my knowledge about 6-note or 8-note modes,
especially in prime-number ETs.
(How about it, Manuel? Any chance of your
writing a computer program to find & list all
the 5-, 6-, 7-, 8-, and 9-note modes of every
relatively prime ET with fifths less than 21
cents away from 3/2, from 5/oct through 53/oct?)
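Here is a sketch of how such a program might look--emphatically
my own guess, not anything Manuel has written. Two interpretive
choices are baked in: "modes" are counted up to rotation rather
than listed exhaustively (a full listing for 53-TET would run
to billions of entries), and the "relatively prime" filter is
left out because it's ambiguous--prime n? n coprime to 12?--and
easy to bolt on:

import math
from math import comb, gcd

def fifth_error(n):
    # cents error of the best fifth in n-TET relative to a just 3/2
    just = 1200 * math.log2(3 / 2)
    steps = round(n * just / 1200)
    return abs(steps * 1200 / n - just)

def phi(d):
    # Euler's totient by trial division (plenty fast for d <= 53)
    result, m, p = d, d, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def mode_count(n, k):
    # number of k-note scales in n-TET counted up to rotation (necklace count)
    total = sum(phi(d) * comb(n // d, k // d)
                for d in range(1, gcd(n, k) + 1) if n % d == 0 and k % d == 0)
    return total // n

for n in range(5, 54):
    if fifth_error(n) < 21:
        print(n, {k: mode_count(n, k) for k in range(5, 10)})

As a sanity check, calling mode_count(12, 4) by hand returns 43,
the familiar number of four-note chord types in 12 under
transposition.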
Why such 12-centric thinking?
Why must ALL xenharmonic equal temperaments
employ always and only SEVEN note names?
Why must ALL xenharmonic musical modes use always
and only 5 or 7 notes?
True, 5 and 7 are relatively prime to 12--and so what?
6 and 8 and 9 are relatively prime to 19, or 17, or
29.
Why only 5- or 7-note modes when we move outside
of 12 tones/oct?
Why not 6?
Why not 8?
Why not 9?
Paul will probably take issue with some of
these points, particularly where music-theory students
analyzing pieces of xenharmonic music are concerned.
However, I would point out that so few xenharmonic
pieces of music have been composed--and so few
music students have gotten together to analyze them!--
that to date the issue remains a pie-in-the-sky
abstraction at best.
It is entirely possible that xenharmonic composition
will demand such a schismatic break with the past that
previous 19th-century notational paradigms must
be thrown out. However, we must be wary of such
proposals. John Cage and other foolish folk made
similar noises in the 50s about *their* brand
of foofaraw, and--as the magazine "The Wire" put it
so concisely in its November 1995 issue--"John
Cage's music was intensely theoretical and centered
around the cult of personality of John Cage, and as a
result most of it is today unlistenable."
Claims that "THIS musical revolution requires a COMPLETE
break with the past!" are perennial, and have never proven
true. Thus we must view with the utmost skepticism any such
pronouncements made on behalf of microtonality.
For the moment, until this issue is resolved, Paul's article
seems an admirable advance in the state of the art of
microtonal notation.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Fri, 5 Jan 1996 08:43 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id XAA29081; Thu, 4 Jan 1996 23:43:23 -0800
Date: Thu, 4 Jan 1996 23:43:23 -0800
Message-Id: <01HZMVBYZ5YA9D7TAS@delphi.com>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

1/5/1996 2:16:09 PM
From: mclaren
Subject: novelty, craftsmanship and microtonality
---
Among its many blessings, musical modernism bequeathed
post-1945 composers the freedom to explore new
musical forms and new musical languages. Among its
many sins, musical modernism elevated novelty as
sole yardstick of musical value.
Like Marxism, musical modernism is now defunct. Each
ideology discredited itself after failing its promise to
engineer the ultimate state of human affairs.
In the case of Marxism, history had reached
its end...or so its followers were told.
Any day, "real soon now," world communism would produce
a workers' paradise.
Similarly, in the case of musical modernism,
musical history supposedly ended in the 1920s with
Schoenberg's invention of the tone row and the various
fetishes concocted by post-Webern serialism. After 1930,
(according to the modernists) no further musical
evolution was possible. All future serious music, from 1930
until the end of time, could consist only of successively
more subtle refinements of serial atonal technique.
(Some would call them successively more bizarre
perversions of the basic fetish, but this is a matter
of terminology. Whether one calls one's attire "a stylish
informal outfit" or "leather S&M bondage gear" depends on
one's point of view.)
Of course the existence of *this* microtonal tuning forum
disproves musical modernism's claims. If 12-tone
serialism was in fact the beginning and end of all musical
wisdom, why did subsequent generations of composers
bother to reach outside the 12 sacred tones? After all,
serial atonality stood at the very apex of musical evolution--
so any deviation from that orthodoxy constituted a fall from
grace. Microtonality can only be viewed by the modernists
as, in John Cage's words, "just another wing on the chapel,"
one of the most breathtakingly short-sighted faux pas
by a Zen master of short-sightedness.
Thus anything other than the standard 12 tones per octave
constitutes a debased state of musical practice, according
to the Holy Writ brought down from Mount Princeton by
Milton Babbitt and his toadies.
This, of course, shows up one of the most glaring flaws
of modernist dogma: in idolizing novelty for its own
sake, modernism creates a self-destructive paradox.
To wit: if serial atonality is the ultimate endpoint of musical
evolution, then any other kind of music cannot be taken seriously.
But if novelty is the exclusive measure of musical value,
then music MUST constantly change in order to be taken
seriously.
Thus modernism demands that, in order to measure
up, new music simultaneously remain the same and
constantly change. Since this is obviously
impossible, modernist music faced irreconcilable internal
conflicts.
One harks back to the berserk computers in old Star Trek
episodes: "ERROR! ERROR! ILLOGICAL! ERROR!"
There remains the question of which of the three tenets
of musical modernism can be salvaged for future
generations of composers--if indeed *any* of its tenets
can be salvaged.
The existence of this forum would tend to undermine
the odd notion that there is something sacred about
the number 12 when applied to divisions of the octave.
The whole idea is reminiscent of those alleged "666"s
in Procter & Gamble's logo.
But what about the value of atonal serialism, and
of using novelty as the exclusive basis for judging
the quality of new music?
Like most late 20th-century trends, the reaction against
serialism has gone overboard. Some excellent serial
music was composed early in this century--almost
all of it prior to 1945. Perhaps with a reduction
in the total number of tones (Schoenberg's first serial
composition used 11 out of 12) or a change to new
tuning systems, or the separation of the yoked requirements
of atonality and serialism (in 19-tone equal temperament,
for instance, a 12-tone serial row can modulate from one
key center to another--see M. Joel Mandelbaum's Prelude
No. VI, 1961) serialism will provide a useful direction
for future composers.
This leaves the question of using novelty as a yardstick
for quality.
By itself, novelty is a dead end. One of the most peculiar
and interesting experiences I've had recently is in
making up a computer hard disk file of 3 CD recordings
interleaved at random. The three compositions are
the second movement of Schoenberg's "Five
Orchestral Pieces" from 1915, Stockhausen's "Gruppen"
from 1958 and Elliott Carter's "Orchestral Variations"
from 1989. These compositions were created roughly
30-40 years apart, yet the overall effect of listening
to them is that they're basically the same piece of music.
This illuminates the paradox of using novelty as the standard
for judging new music. After a generation of trying
all possible new combinations of instruments and
musical structures, new music got caught in a rut.
Very quickly all possible wacky schemes for generating
new shock-value stunts are used up: scraping phono cartridges,
shooting a machine gun at a piece of manuscript paper,
rolling naked women in paint on a graphic score,
performing a score without sound so that only the
finger-clicks of the woodwinds and rustle of string players'
sleeves make noise; composing huge textural
pieces in which every player in the orchestra performs a
different melody; notating impossible-to-play solo
instrumental pieces with far too many embedded
tuplets and extended techniques for humans
to perform; ad nauseam.
Whether it's whipped cream and hamsters, or flipping coins
and burning pianos...the whole sorry spectacle tends to blur
after a while.
After a few years, every possible three-card musical
monte trick that could be tried *had* been tried.
Thereafter, so-called "serious" modern composers ran
up against the limits of the human perceptual system.
While they continued to produce music that *looked*
ever more complex on paper, to the human ear it *sounded*
the same as last week's purportedly "breakthrough" new
composition, and as next week's supposedly "groundbreaking"
new composition, because the human perceptual system
had saturated.
Beyond a certain level of complexity, all those notes
lumped into a big random glob; beyond a certain
level of rhythmic subtlety, all the embedded n-tuplets
sounded like a Parkinson's patient playing
"chopsticks."
Thus novelty (paradoxically) when pushed to its outermost
limit forced modern composers away from so many perceptible
and comprehensible musical structures that the only
structures and techniques left were imperceptible.
The result?
Random-sounding junk.
This is the state at which so-called "serious" composers
(most of whose compositions could not be taken seriously)
had arrived by the late 70s, and it is also the reason for
the existence of this tuning forum.
As a result, novelty is not a useful yardstick of compositional
value. Any more than the length of a composition is useful as a
measure of value... Other measures of compositional
quality must be found.
I would suggest craftsmanship and competence, at
the risk of being burned at the stake--since these values
are even more discredited nowadays than atonal
serialism.
As witness young composers like Alison Cameron--folks
with plenty of raw musical talent who haven't yet mastered
elementary musical skills like learning when to take a
breath (metaphorically speaking), or constructing a
musically interesting dramatic arc... Much less the
arcane and forgotten art of counterpoint.
Oddly, although serialism and atonality have been completely
devalued by the doyens of today's musical avant garde, most
composers who call themselves post-modernists still worship
at the musty altar of novelty and still obsess over the
length of their compositions.
This is true even in microtonality, and it's proven a real
surprise to me. More than one person has dismissed
this or that just intonation composition on the grounds
that "it's just another 7-limit piece," or "it's just another
example of 13-limit."
We who compose outside the 12 tone scale should take note
(all puns intended) of this lamentable trend
and be on our guard against it. Just as novelty was
ultimately self-destructive and trivializing when misused
as a measure of musical quality, it is equally self-
destructive when applied as the gauge of
a microtonal composition's worth.
One of the greatest sins of musical modernism was the
devaluation of basic competence in favor of stunts and
scams. This ultimately led to the eradication of a whole
spectrum of basic skills from an entire generation
of composers.
Until the recent advent of the MAX composition
language and the widespread use of MIDI in
post-modern music, counterpoint was a lost
art among modern computer composers. (With notable
exceptions: Lansky, Schottstaedt, et alii.) The ability
to write an interesting melody, add another equally
interesting melody on top of it, turn them both upside
down and add another interesting melody on top, then
reverse the whole front-to-back and add another interesting
melody on top, ad infinitum...
This is a forgotten skill.
Just as few post-modern artists have any aptitude at
draughtsmanship because drawing is no longer
emphasized in modern art classes, today's generation
of composers have virtually no skill at counterpoint--
because it is a subject no longer emphasized in modern
music classes. Instead, elaborate formal methods
are the focus of contemporary composition courses--
beginning (naturally) with pitch class matrix trivia and
progressing through ever-more-convoluted, ever-more-novel
algorithmic contortions. (I should add here that the
current species counterpoint exercises used in
composition classes are not only useless in teaching
real-world contrapuntal skills, but probably destructive.
Students get the idea that counterpoint is a dusty 16th-
century academic exercise without redeeming practical
value; the only way to *truly* teach counterpoint is to
require students to compose *real* pieces of music
using the techniques perfected in the era of the ars subtilior.
Since few music teachers are nowadays qualified to do this,
it's hardly any surprise that counterpoint is a lost art.
After all--how many of today's music professors can
even *pronounce* "ars subtilior," much less demonstrate
expertly the contrapuntal techniques perfected in that
era?)
I have not addressed the question of serial counterpoint
as such since with more than two widely-separated notes
serial counterpoint is neither interesting nor perceptible,
and thus cannot be said to exist save in an abstract sense.
To his great credit, Webern understood this; the bulk of
his middle-to-late works use no more than two notes
(lines) at once. To their great discredit, subsequent
generations of serialists ignored this lesson.
For proof of my contention one need look no farther
than the alleged compositions of John Cage, John Corigliano,
Brian Ferneyhough and Larry Austin. These duffers
demonstrate a complete lack of contrapuntal
skills--indeed, their level of contrapuntal ability
is so remedial as to embarrass even a junior high
school student.
Fortunately, MIDI and MAX have radically changed
the character of avant garde music. Composers
of more recent vintage are beginning to discover
that some rudiments of contrapuntal craftsmanship
are helpful when algorithmically combining
separate melodic strata.
Oddly enough, formal gyrations and contortions
with this or that fractal or this or that chaotic
attractor do not suffice to produce interesting
melodies combined and manipulated in interesting
ways.
Gosh... What a shock, eh?
In the same way, the extinction of the short
composition is a trend much to be lamented. Indeed,
short pieces of music survive nowadays only as
commissions for large orchestra--the truism being that
if you compose anything too long and too hard,
it will take more than 1 rehearsal to learn and the
orchestra won't play it properly at its one and
only public performance.
The idea that a 2-minute composition is inherently
less "weighty" or less "substantial" than a 2-hour
composition is a bizarre notion, and one I'm at a loss
to explain. One would expect that the collapse of the
romantic-composer-as-titan myth would also have
discredited enormous complex multi-hour-long
pieces of new music as the ultimate ideal for
the po-mo composer... But no.
Oddly enough, po-mo compositions seem to have
suffered *more* hypertrophy of late, rather than *less*--
po-mo works have grown even *more* Wagnerian
as the 20th century winds to a close. Thus
Stockhausen's wacky unlistenable multi-
day-long opera "Licht," LaMonte Young's
preposterous day-long drones, and the rest
of the sorry spectacle of longer-is-better
snore-a-thons.
The idea that a 2- or 3-minute-long composition
isn't a serious piece of music seems to have taken root
even in this tuning forum.
Amazing!
It's a weird and outlandish delusion...
According to this off-kilter topsy-turvy logic,
Bach's inventions aren't "real music," they're
just "sketches" or "demonstrations." This is
a concept so strange that it just bounces off
my brain...I cannot imagine the hebephrenic state
in which such a conclusion makes sense.
The quality of a composition depends, one
would expect, on the quality of the composition itself...
not on its tuning, its length, its instrumentation,
or any other incidental factor.
It seems to me that this is an especially mischievous
misconception, and one against which we must be
ever-vigilantly on guard. Particularly in
the case of microtonal compositions. With so
many tunings to explore, xenharmonic
composers are especially liable to produce
many short compositions rather than a few
long ones. ("So many tunings...so little time.")
Thus the pathological and fetishistic
worship of sheer length--the more minutes,
the better the piece--is particularly pernicious
when misguidedly applied to microtonality.
--mclaren


Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Sat, 6 Jan 1996 06:20 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id VAA19051; Fri, 5 Jan 1996 21:20:24 -0800
Date: Fri, 5 Jan 1996 21:20:24 -0800
Message-Id: <960106001847_33235881@mail04.mail.aol.com>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

1/6/1996 11:33:49 AM
From: mclaren
Subject: CLM, Linux, and microtonality
---
In graciously responding to some of my
gripes about Csound, William
Schottstaedt mentioned that his CLM
environment runs under Linux.
This was news to me. Since CLM is by all
accounts one of the most deluxe composition/
synthesis environments, it's welcome news
indeed.
This forum has, since its inception, served
as hang-out for 2 kinds of subscribers:
ordinary schmucks with IBM or Mac desktop
systems and mostly MIDI synths and software,
and the few, the rich, the tenured--who generally
use some flavor of UNIX (NeXTstep, typically)
to run software-based synthesis apps like CMix,
CLM, or Csound.
Relatively few of the ordinary yutzes (like,
say, moi) compose much with software-based
synthesis languages. In part, this is because
it's so much faster & more efficient to use MIDI.
In part it's because the academic freeware
is so damn hard to use and almost completely
undocumented, but let's not beat *that*
dead horse.
This leads to a huge echoing chasm twixt
the academic microtonalists and their
dirt-poor scumsucker "just folks" counterparts.
The two groups literally speak different musical
languages.
MIDI is a dumb, slow protocol that basically
tells dedicated hardware when to goose-step.
99% of MIDI's messages revolve around
modulation--vibrato, filter cutoff, tremolo
settings, reverb depth, envelope bias, etc.
There is no MIDI continuous controller
dedicated, for example, to making a timbre
more inharmonic or adjusting the Nyquist rate
of the synth.
MIDI commands reflect this concern
with note start times. There's a note-on, but
no message that tells the synth how long the
note will be--a clear indication of MIDI's design
as a real-time performance protocol. 90% of the
parameters in all MIDI synth patches control
real-time modulation--response to aftertouch,
vibrato depth as a function of wheel position,
attack rate as a function of key velocity, etc.
MIDI is a coarse-grained protocol. If you
send more than 20 or 30 note-ons, you'll
get arpeggios instead of chords.
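The arithmetic behind that claim, assuming no running status:
MIDI runs at 31,250 baud, ten bits per byte on the wire, three
bytes per note-on. A back-of-the-envelope check:

BAUD = 31250          # MIDI wire speed, bits per second
BITS_PER_BYTE = 10    # 8 data bits plus start and stop bits
NOTE_ON_BYTES = 3     # status, key number, velocity

ms_per_note_on = NOTE_ON_BYTES * BITS_PER_BYTE / BAUD * 1000
print(round(ms_per_note_on, 2))        # 0.96 ms per note-on
print(round(30 * ms_per_note_on, 1))   # 28.8 ms to squeeze out a 30-note "chord"

Nearly 30 milliseconds of smear--enough that the attacks no
longer land together.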
In contrast, the software synthesis languages
like CLM, Csound and CMix are relatively smart
and very finely granular. While MIDI doesn't
know or care how a synth responds to a note-on
message, software instruments can change
their behaviour depending on the note's
pitch, its duration, the number and type of
other notes playing at the same time, the
location in space or in the overall composition
at which the note appears, etc.
Software synthesis instruments are typically
specified at the level of the individual overtone
and involve no real-time modulation parameters.
Computer composers often write programs to
explicitly generate dozens or even hundreds of
parameters for each software-synthesized note
outside of real time, so responsiveness to
real-time modulation parameters is a
non-issue when composing in Csound or CLM
or CMix. The parameters are "built in" for each
individual note and can be exquisitely fine-tuned.
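As a toy illustration of that workflow--generating score lines
outside of real time, with every parameter spelled out per
note--here's a sketch in Python. The p-field meanings past p3
are pure assumption about a hypothetical Csound instrument "i1",
not anything standard:

import math

def et_freq(base_hz, n, degree):
    # frequency of a degree of n-tone equal temperament above base_hz
    return base_hz * 2 ** (degree / n)

# hypothetical instr 1: p4 = amplitude, p5 = fundamental in Hz,
# p6 = a per-note inharmonicity factor (assumed, for illustration only)
lines = []
start = 0.0
for degree in (0, 3, 7, 10, 13):          # an arbitrary 17-TET figure
    freq = et_freq(261.63, 17, degree)
    lines.append("i1 %.3f 1.500 0.2 %.3f 1.003" % (start, freq))
    start += 0.75

with open("piece.sco", "w") as sco:
    sco.write("\n".join(lines) + "\n")

Nothing here is fast or clever; the point is only that every
p-field of every note gets computed ahead of time--which is
exactly what MIDI never lets you do.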
This has important consequences for microtonal
composition.
Software-synthesized xenharmonies tend to be
very finely wrought, with layer upon layer of
acoustic complexity. Software-synthesis
microtonality appears to be centered as much
around timbre as around notes. Even in the works
of those composers notable for their contrapuntal
skill (Paul Lansky, William Schottstaedt, Bill
Alves, Jonathan Harvey) timbre remains uppermost
as a factor in the "non-12" sound of the music.
By contrast, MIDI microtonality centers almost
entirely on harmony and melody. Timbre is of minimal
concern, because commercial synthesizers offer
nothing in the way of detailed fine-grained
control over that parameter.
Retuning individual overtones remains outside of
the purview of the MIDI composer, *especially*
during the course of the composition. (William
Sethares has written some custom FORTRAN
routines to read LEMUR analysis files and warp
FFT'd sounds into a given tuning, but this is
a rare exception, takes gobs of time, and demands
a sampler with enormous amounts of RAM. Thus Bill
Sethares remains the lone exception in this regard.)
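To make the general idea concrete--and this is only my toy
sketch of the *principle*, snapping each analyzed partial to the
nearest degree of a target ET, not Bill Sethares' actual
routines:

import math

def snap_partial(freq_hz, ref_hz, n):
    # move a partial to the nearest degree of n-TET anchored on ref_hz
    cents = 1200 * math.log2(freq_hz / ref_hz)
    degree = round(cents / (1200 / n))
    return ref_hz * 2 ** (degree / n)

# a roughly harmonic set of analyzed partials, forced into 11-TET on a 220 Hz anchor
partials = [220.0, 441.2, 659.8, 882.5, 1101.3]
retuned = [snap_partial(p, 220.0, 11) for p in partials]
print([round(p, 1) for p in retuned])   # [220.0, 440.0, 642.2, 880.0, 1132.3]

The real work, of course, lies in the analysis and resynthesis
on either side of that one line of arithmetic.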
The very model of the MIDI microtonal composer
is Warren Burt, whose compositions are essentially
contrapuntal and harmonic. Timbre is a non-
issue.
This gives an interesting 18th-century "Gebrauchsmusik"
sound to Warren's compositions, even though they
use rhythms, melodies and harmonies no 18th-century
composer would ever contemplate, while it gives
the compositions of someone like Paul Lansky an
oddly modernistic sound--despite the fact that Lansky
often uses conventional 12-tone equal temperament
(as in the "idle chatter" series of compositions, all
in g minor, whitebread 12/oct).
To date there's been little contact twixt the two
camps of microtonalists. Thus, many MIDI xenharmonists
confidently make statements about timbre, consonance
and dissonance which are simply untrue when software
synthesis (allowing individual partials to be retuned)
is involved. In like manner, many academic microtonalists
lose sight of the musical forest by exploring mathematical
partition-function and pitch-class set/chaotic note generator
minutiae, rather than asking the larger questions: What kind
of intervals does this tuning have? How does it "sound"?
How can harmony and melody be used in this tuning in
ways which differ productively from harmony and melody
in 12/oct?
Thus, the investigations of John Clough, Gerald Balzano
and Carlton Gamer appear to have had much more impact
on MIDI microtonalists (for example) than on software
synthesis composers.
Meanwhile, the ideas of folks like William Sethares and
James Dashow appear to have percolated more thoroughly
into the software synthesis camp than the MIDI contingent.
This has led to further confusion and miscommunication.
Academics often write some of the most insightful
discussions about the internal structure of non-12 tunings,
while non-academics often write some of the most
useful monographs on the "sound" of non-12 tunings
and the interaction of tuning with timbre.
NeXTStep might have bridged the gap twixt these two
microtonal factions--but alas! That operating system
is now dead and buried. It's been overpriced FAR out
of reach both of academics *and* the rest of us, and is
now essentially defunct except on antique legacy
machines like the NeXT cube. (A machine roughly
25 times slower than the P6 @ 200 Mhz.)
Recently, however, Linux appeared...and this
operating system might finally offer a bridge twixt
the two worlds of microtonal composition.
Linux is extremely stable, according to my UNIX-
guru friend. Under X Windows, it's reportedly easy
to use. The big stumbling block right now appears
to be setting up Linux to run under X Windows
with your particular monitor... The process is
more complex than superstring theory.
(Do YOU know the "dot rate" of YOUR monitor?
Not me!!)
Only within the last 5 years have desktop
IBM machines grown fast enough to fully shoulder
the burden of software synthesis. But now that
it's happened, IBM PC prices have dropped so
far so fast that it's hard to imagine anyone in or
out of academia will be using NeXT cubes or
other legacy antique machines for very much longer.
The InfoMagic 4-CD set of Linux with complete
X Window support and all utilities now runs a
whopping $20 down at your local software store.
It's very hard for me to believe that NeXTStep-486,
at a cost of 5 thousand dollars (yes, $5000.00),
will survive the competition with the $20.00 Linux
operating system.
Linux can read DOS disks and apparently
offers the user the option of setting aside one
partition of the hard disk for DOS files. This
clearly would go quite a ways in bridging the
gap between software and MIDI synthesis.
Ideally, as a microtonal composer, I want total
control over EVERY aspect of the composition--
and reasonably easy, fast, efficient control.
Combining Linux-based spectral modification
programs with CMIX, a sampler, and MIDI sequencers
might accomplish this.
My ideal system would let me tear apart an
acoustic sound, retune the individual partials,
then generate real-time scores with the ease
and simplicity of a MIDI sequencer.
Obviously this goal lies some years in the future,
but the Linux and DOS/Windows combo seems at
least a step in that direction.
Right now this kind of integration is essentially
impossible. Even the NeXT cube didn't offer
MIDI sequencing with audio synchronization,
nor analysis with real-time resynthesis.
But now, with Linux, there might finally be a
bridge between these two styles of microtonal
composition. Especially if & when someone
produces a microtonal synthesis program like James
McCartney's SuperCollider that runs in real
time on an *affordable* computer (the Power Mac
is not currently affordable and probably never
will be--everyone with $4000 to spend on
a Power Mac raise your hand, please).
Such a program would bode well for interactive
real-time (those 90s buzz-words!) acoustic-and-
digital microtonality.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Sun, 7 Jan 1996 00:16 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id PAA00718; Sat, 6 Jan 1996 15:16:46 -0800
Date: Sat, 6 Jan 1996 15:16:46 -0800
Message-Id:
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

1/6/1996 7:36:15 PM
From: mclaren
Subject: A wish list for microtonal
synthesizers
---
Let me tell you a story...
The tale starts back in 1975, when Hal Alles
at Bell Labs designs a spiffy board that
spits out 256 separate additive sine waves &
uses it to build the Bell Labs Digital
Synthesizer.
Fast forward to 1978:
Crumar (then a big synth manufacturer, now
defunct) decided to build a digital synthesizer
based on Alles' board.
4 years later, Crumar rolled out the Synergy.
This instrument was controlled by (gasp!)
two Z-80 chips. Super hi-tech, eh? Wow!
A full eight bits of computing power! And
running at the awesome speed of 4 Mhz!!!
Well, now that the laughter's over, here's
a sobering thought:
The Synergy STILL has, even TODAY, by far
the most complex architecture of any digital
synthesizer ever built.
Hello!
Ensoniq?
Are you there?
Q: What's the most important part of any
synthesizer?
A: Envelopes, envelopes, envelopes!
The complexity of the synthesizer's envelopes
ENTIRELY determines how complex and subtle its
sounds can be.
A synth with the world's most elaborate synthesis
algorithm, the most exotic & beautiful wavetables
ever designed, and the most sophisticated effects
buss in christendom, still sounds like crap if it
uses crude 4-stage ADSR envelopes for the
oscillators.
It's shocking and alarming to me that the Synergy,
designed in 1978, STILL has not been approached in
the sophistication and flexibility of its envelopes.
The Synergy allowed up to 16 envelope points
for each oscillator. You could set any two of
the points as loop points for sustain
when the key was held down.
But wait! There's more!
You voiced each oscillator TWICE--one 16-point
envelope for minimum key velocity, the other
16-point envelope for maximum key velocity.
Then the synth interpolated between those
2 envelopes in real time for all other
key velocities.
This gave the oscillators a remarkably subtle
"lifelike" quality.
But wait! There's more!
You also voiced each oscillator TWICE for
the frequency envelopes--which could also
contain up to 16 points, for both min and max
key velocity.
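For anyone who hasn't met this kind of voicing, here's the bare
*idea* in a few lines of Python--two breakpoint envelopes,
cross-faded by key velocity. This is my own sketch, not a
reconstruction of the Synergy's firmware:

def envelope_value(points, t):
    # piecewise-linear breakpoint envelope; points is a list of (time, level) pairs
    if t <= points[0][0]:
        return points[0][1]
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return points[-1][1]

def velocity_envelope(env_soft, env_loud, velocity, t):
    # interpolate between the minimum- and maximum-velocity envelopes (velocity 0..1)
    return ((1 - velocity) * envelope_value(env_soft, t)
            + velocity * envelope_value(env_loud, t))

# up to 16 (time, level) pairs per envelope on the real instrument; a few shown here
soft = [(0.00, 0.0), (0.10, 0.4), (0.60, 0.3), (1.50, 0.0)]
loud = [(0.00, 0.0), (0.02, 1.0), (0.40, 0.7), (1.20, 0.0)]
print(round(velocity_envelope(soft, loud, 0.75, 0.05), 3))   # 0.782

Do that independently for amplitude AND frequency on every
oscillator and you begin to see why the thing sounded alive.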
The synth gave you a pool of 32 oscillators.
You could assign 'em any way you liked.
You could add oscillators or use 'em to
modulate one another--MORE flexibility. Not
only that, but you could choose from 8 different
waveforms--for each individual oscillator.
But wait! There's more!
Lastly, the Synergy let you set the amplitude
of each oscillator for each group of 3 keys--this
was essentially what Yamaha now calls "fractional
scaling."
In effect, a digital formant filter.
The result?
Unparalleled subtlety and complexity of sound.
The Synergy has NEVER been equalled by ANY
other digital synthesizer in this regard.
It had aperiodic vibrato--that is, it allowed you
to mix a controllable amount of digital noise
with the LFO...again, giving the Synergy
a remarkable lifelike vibrato or tremolo.
Now, let's fast-forward 20 years...
Desktop supercomputers...cheap 1 gig disk drives...
magneto-optical storage...DSP chips cranking out
hundreds of MIPs...Csound on desktop machines
running at lightning speeds...
And guess what?
NO synthesizer manufacturer has YET implemented
envelopes REMOTELY as flexible and complex
as those on the antique 2-Z-80-controlled
Synergy of 1978.
C'mon, folks!
The problem CAN'T be hardware! We've got hardware
up the wazoo. We can handle such a synthesis
architecture with elan. Today's synths could eat
those kinds of envelopes for breakfast.
Yet no one, absolutely NO synthesizer manufacturer,
has implemented such flexible envelopes.
To its credit, Ensoniq has done slightly better
than the rest of the synth manufacturers in this
regard.
Ensoniq's 8-point interpolating amplitude envelopes
are the closest I've seen...but that ain't
too close.
As a microtonal composer, my most basic need
is for a synthesizer with complex, flexible envelopes
each of whose oscillators can be precisely detuned.
The ideal would be a synth that can do what my
Csound instruments do: a synth that allows
20-point frequency AND amplitude envelopes with
30 to 60 oscillators at a time in real time.
Offer me such a synth with a tuning table, and I'll
fight through a nest of amphetamine-crazed
echidnas to buy it.
Until then, I gotta ask myself: why does the latest
issue of Confuser Music Urinal make a big deal
about an article that describes a chip with 127
additive oscillators?
C'mon, folks. The Synergy offered 32 oscillators
(2 banks of up to 16 each) back in 1978.
Gimme a break.
It's time we moved up to the level of sophistication
attained 20 years ago with a pair of 4 Mhz eight-
bit Z-80s, don't you think?
--mclaren


Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Mon, 8 Jan 1996 16:48 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id HAA13530; Mon, 8 Jan 1996 07:48:32 -0800
Date: Mon, 8 Jan 1996 07:48:32 -0800
Message-Id: <9601080749.aa17311@cyber.cyber.net>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

1/8/1996 7:48:32 AM
From: mclaren
Subject: Bruce Kanzelmeyer's post,
Neil Haverstick's ideas about Bach,
guy xenharmonists vs. girl xenharmonists
---
As usual, Bruce Kanzelmeyer makes a number
of excellent points. His post of some
weeks ago questioned the utility of
ever-more abstract investigations into
the mathematics behind various tuning
systems.
On balance, I agree. In the end the music's
what matters. If the elaborate theories lead
to music that sounds like crap--as in the case
of IRCAM, Pierre Boulez, Brian Ferneyhough,
John Cage, and Milton Babbitt, well, hell.
Dump the theory.
Go back to noodling on the B3 in a
lounge in Schenectady.
Questions of musical quality are necessarily
subjective. My best take on the state of
the art in xenharmonics is that a whole lotta
good music's being produced. Some of it
comes out of highly theoretical considerations--
some of it comes out of a wing & a prayer.
Given the unsavory tendency of the 20th century
to take every trend to its wildest possible
extreme, obviously there WILL be theorists who
run amok with math at the expense of music, sanity,
and general good taste. This is surely a trend
we need to keep an eye on in microtonality,
as in the rest of contemporary music.
The 20th century will NOT be remembered as an
era of moderation, but instead as a profligate and
outlandishly undisciplined period of wild excess
where every possible harebrained scheme was pushed
to the outermost edge of its wackiest implications,
and then several light-years beyond.
Well, what else is new? What other century gave
us both a Hitler AND a Gandhi? What other century
boasted BOTH a Piet Mondrian AND a Pablo
Picasso?
But there's room for a lot of different
approaches in a field as large as non-12
composition... As Ted Melnechuk put it,
"Music's house has many mansions." It seems
unfair to penalize or ostracize good microtonal
composers just because they arrived at their
end product via the route of mathematics,
rather than intuition--or whatever other approach
is the fashion du jour.
Incidentally, this is similar to the schism twixt
guy & girl xenharmonists on this forum. Women
consistently get repelled by the math, the relentless
theory, the endless terminology. Their usual
objection seems to be:
"Where's the music???"
Guys seem to be afflicted with a certain amount of
"number macho." Something along the lines of, "Hey!
MY list of intervals is bigger than YOURS!"
Women have persistently called for more discussion of
aesthetic issues in microtonal music... A topic
which male subscribers to this forum appear to be
unwilling to touch.
Not sure why.
(There are exceptions. Laudably, the Scarlet Aardvark.)
Maybe aesthetic concerns are too touchy-feelie?
Maybe talking about the MUSIC, rather than the
NUMBERS, would leave the males a wee bit...vulnerable?
Or perhaps such colloquy might even touch upon
(horror of horrors!) the EMOTIONAL impact of
xenharmonic music, rather than this or that
komma or skhisma?
Well...hard to say.
But it do bear thinkin' on, chilluns.
Typical of a sizable contingent on this forum, Bruce
evokes the horrors of musical chaos and paints
a glowing picture of Apollonian order. Of course,
some of us LIKE chaos. Some of us view chaos as
a fertile maelstrom wherein brew dandy new
worlds of harmony & melody... As Ilya Prigogine
points out, order only appears in thermodynamic
systems at the edge of disorder. Without uncontrolled
and promiscuous chaos, a system tends to enter
stagnant cyclic states.
Of course the JI crowd will view this notion with the
utmost horror..."son cosas de la vida."
Re: Bruce's objection that listeners will inevitably
disagree, etc. etc., & there's a margin of error
in all auditory systems, therefore the psychoacoustic
data can be explained away...
Nope.
If we were talking about ranges of error or some
such, the complexities of the human auditory system
COULD be swept under the rug. But it ain't
that simple.
Alas, the psychoacoustic data clearly show that
many auditory stimuli produce contradictory
results when applied under different circumstances.
This is a conclusion that CANNOT be swept under
the rug. NO amount of talk about "ranges of error"
or "imperfections in the ear/brain system" will
suffice to explain away outright paradoxes and
contradictions in the human auditory system.
This kind of argument was put forward in the 1870s
to bolster Helmholtz's crumbling Fourier analysis
model of the ear. The argument didn't explain away
Seebeck's siren experiment, the extreme contradiction
twixt calculated and observed just noticeable
differences, nor did it explain combination tones,
the Zwicker tone, or any of the other paradoxes
and puzzles which continue to plague modern
psychoacousticians.
Alas, such arguments failed in the 1870s and they
still fail today. In the end, the only reasonable
conclusion is that a LOT of complex phenomena are
taking place in the ear/brain system, many of
which appear to require contradictory and mutually
exclusive explanations...and no one model of hearing
suffices to explain even a significant fraction of
the extant data.
As for Neil Haverstick's deification of Bach...well,
permit me to demur. Bach was a fine composer who
also churned out a fair amount of Muzak. His cantatas
are classic Muzak, the notebook of Anna Magdalena
Bach is pure make-work, and many of his chorales
and even a few of the 48 constitute mere busywork
noodling-around.
Bach was certainly an excellent composer when he
was at the top of his form. Equally certainly,
he wasn't always.
In particular, the destructive idea that no human
can excel Bach is bizarre and outlandish.
In fact, Bach's lute suites (which are transcriptions
of his cello suites, by the way) are good music...
But infinitely inferior to the far more impressive
lute masterpieces of the late renaissance.
John Dowland in particular wrote many superb pieces
for lute which put Bach's lute suites to shame.
The idea that Bach is some sort of unapproachable
god strikes me as just plain silly.
As for Neil's contention that there are very few
great pieces of xenharmonic music, well,
bosh and twaddle. I can name 20 masterpieces
of xenharmonic music without breaking a sweat:
[1] Easley Blackwood's 15-tone etude
[2] Easley Blackwood's 23-tone etude
[3] Paul Lansky's "Late Autumn"
[4] Richard Boulanger's "In Slow Glass"
[5] Gary Lee Nelson's "Fractal Mountains"
[6] William Schottstaedt's "Water Music"
[7] Jean-Claude Risset's "Inharmonique"
[8] John Chowning's "Stria"
[9] Larry Polansky's variations on "My Funny
Valentine"
[10] Ivor Darreg's "Prelude No. 1 for 19-tone
guitar"
[11] Ezra Sims' "Quintet" (1987)
[12] Charles Ives' "Three Pieces for Quarter-Tone
Piano"
[13] Julian Carrillo's "Prelude A Cristobal Colon"
[14] Ivan Vyshnegradsky's "Quartet en quarts a tons."
[15] Alois Haba's opera "Die Mutter"
[16] Edgard Varese's "Arcana"
[17] Louis & Bebe Barron's tape score for "Forbidden Planet"
[18] Ben Johnston's Quartet No. 4
[19] Mayumi Reinhard's "Peach"
[20] Jonathan Harvey's "Mortuos Plango Vivos Voco"
And I could rattle off 30 or 40 other masterworks of
non-12 music without any trouble at all. But, as always,
there's neither time nor space.
The idea that there are "very few xenharmonic masterpieces"
just doesn't jibe with the facts. On the contrary: it seems
clear that xenharmonic composition as a field has
generated a hugely disproportionate number of masterworks--
and at a rate that seems to be *constantly increasing.*
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Tue, 9 Jan 1996 02:48 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id RAA21888; Mon, 8 Jan 1996 17:48:11 -0800
Date: Mon, 8 Jan 1996 17:48:11 -0800
Message-Id: <199601090147.BAA22400@smtp-gw01.ny.us.ibm.net>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

1/21/1996 4:07:15 PM
From: mclaren
Subject: The adoration of the Cage-i (Round 2 of 15)
---
A recent recording session involved overdubs of
a cassette tape of harmonic series members 1-44
made 10 years ago by Jeff Stayton with megalyra
solos recorded last week.
After hearing the resulting composite, Bill
Wesley offered some choice comments.
"That sounds like an interesting use of chance. As
opposed to John Cage's *stupid* use of chance."
"You mean the idea of flipping coins to choose notes?
What's stupid about that, Bill?"
"You get a bunch of random pitches. White noise.
But the human ear is specifically adapted to detect
and filter out noise--because if you can't hear a
sabre-tooth tiger's roar through the noise of the
jungle, you're dead. So what did John Cage elevate
to highest status in his brain-dead musical theories?
Just what the human ear is best designed to
throw out...noise. Another word for chance."
"But wait a minute, Bill. Cage worshippers like to
point out that he didn't just pick notes at random--Cage
*constrained* the choice of notes and phrases. In
'Music of Changes,' for instance, he used coin flips
to choose among constrained start-times and
transpositions of a set of pre-composed phrases."
"It doesn't matter, Brian. If you throw the aces
out of a card deck, that doesn't make the distribution
of cards any less random after you shuffle 'em."
"That's a good analogy. I wonder why none of these
people seem to understand that?"
"Because the cult of personality tends to impair judgment.
In the end, no matter how you constrain it, noise is noise.
Even if you narrow down the choice to two notes, it's
still random. And you'll detect and be bored by that
randomness."
"I've noticed that, Bill."
"It's why, even if you alternate between only two
phrase start-times, your ear will hear the random
distribution and find it trivial."
"You make a good point there, Bill. I have to admit
it certainly is easy to pick a conversation out of the
noise of a crowded room. And it certainly explains
why 'Music of Changes' sounds like complete crap."
"Exactly. Noise is noise, no matter how you constrain
it. The notion that there's anything interesting about
a random distribution is a stupid idea--pure and simple.
It's dumb. But then, that's the whole point."
"Eh?"
"It's what always happens in a group of monkeys. Whenever
some lesser monkey shows imagination or initiative, the
alpha males always crush him. They've got to--to maintain
the status quo."
"I don't understand, Bill."
"The whole idea behind Cage's music is that he wasn't part
of a revolution at all. He was part of a *suppression* on the
part of the aristocracy. The rich people want to play golf
all day and swig martinis. They don't want *anyone* to rock
the boat. You start adding extra pitches to the octave, and who
knows what's next? Everything could come unglued. So the
aristocracy hire someone like Cage to crush people with
*real* imagination and drive them out of music."
"People with imagination? You mean...like, microtonalists?"
"Yep."
"Well... Cage *did* call microtonality 'just another wing
on the academy...'"
"Sure. The whole point of Cage's music was to take anyone
with enthusiasm and originality and horrify him to the
point where he gets out of the field and goes to work at
McDonald's."
"Isn't that kind of harsh, Bill? After all, people like Warren
Burt are already accusing me of 'showing bad manners
freely, and displaying an appalling lack of gentleness and
generosity' to people like Cage. God only knows what they'd
say if I repeated *your* comments on the tuning forum."
"Of course they accused you of showing 'bad manners.' Creativity
is considered the *ultimate* in bad manners by the aristocracy.
It's another way of maintaining the status quo."
"How's that, Bill?"
"You call imagination 'rude' and then hire people like Cage to
make sure that only cynical, empty people have any power
in the field of music. All that talk about 'removing intention
from the music' and 'letting the music be itself' was nothing
but another way of making music students into a bunch of
monks who flagellate themselves on command."
"Flipping coins never did seem like much of a musical
revolution, I must admit... But, gosh, Bill--this is pretty
blunt talk, isn't it?"
"That's a laugh! These people bleat about 'gentleness
and generosity'--and the minute you suggest emperor Cage
has no clothes, they come after you with a bowie knife
and murder in their eye. So much for 'gentleness.' So
much for 'generosity.'"
"It *is* strange... You'd think the microtonalists would stick
together, despite their minor differences."
"Well, remember what Harry Partch said in his program
notes to 'Water, Water':
'The creative man is not specialized by inclination, but
by the autocracy of modern education. (..) Ordinarily,
however, he is so closely intimidated by his specialty
that if he decides to make some slight deviation
from the norm, in some creative work, it will seem
like a 'revolution,' both to him and to others, and
he can easily become the progenitor of a 'new'
movement. But the deviation must be slight, because a
large deviation is not only incredible, it isn't even
recognizable. In the end, it is just ridiculous.'"
"You mean a deviation like throwing out octave
equivalence, Bill? Or pointing out that the
psychoacoustic evidence doesn't support just
intonation?"
"Exactamundo."
"Gee...when you put it that way, Bill, it *does* sound
as though Cage was just another toady."
"Right. Another errand boy elevated by the musical
aristocracy into a position of godhood to keep the
*really* creative people from rocking the boat."
"Creative people? Like Partch? Or Carrillo?"
"You got it."
"But if I ever dared to post something like *that* on the
tuning forum...ye gods. They'd go ballistic, Bill."
"Maybe. But from what you've said, most of the academics
on that tuning forum are too lazy even to write a letter.
So I don't think *I* have anything to worry about."
"Come to think of it, Bill, you're probably right."
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Mon, 22 Jan 1996 07:38 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id WAA10477; Sun, 21 Jan 1996 22:38:53 -0800
Date: Sun, 21 Jan 1996 22:38:53 -0800
Message-Id: <9601212325167216@csst.com>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

1/22/1996 2:39:03 PM
From: mclaren
Subject: 10 smart ideas in late
20th century music theory
---
Everybody knows that the most important
composers and music theorists are those
who influence the largest number of
people.
Everybody knows this, and it's wrong.
The rock group KISS influenced far more people
than any avant-garde theorist or composer. Should
we devote a chapter of every modern music textbook
to KISS?
I propose a different (and more sensible) definition of
"influential."
If a dumb fad influences 500 composers to write
rotten music, the idea is NOT influential. It's a craze,
like pogs, or hula hoops, or pet rocks.
And it has no importance.
But if a smart idea influences 5 composers to write
excellent music, then the idea IS influential. And it
has *vast* importance.
As I've mentioned before, Top 40 corporate rock and
the avant garde are evil twins. They both use identical
means of promotion: hoopla, ballyhoo, shock-value stunts
and the cult of personality. They both use identical means of
measuring a composer's importance: SHEER QUANTITY.
If rock star X sells 8 squillion CDs, he's a "genius" and
a "brilliant composer." If avant garde music theorist
X inveigles 50,000 gullible students to spew out
reams of bad music derivative of the latest craze,
he's a "genius" and a "brilliant composer."
Obviously, a new definition of "influential composer"
is required.
Now, what this has to do with microtonality is obvious:
the two most influential (as opposed to faddish) composers
of the second half of the 20th century are clearly Conlon
Nancarrow and Harry Partch.
Neither of these guys were on the cognitive elite's TOP
TEN charts during the 50s or 60s. On the contrary. As
Joel Mandelbaum has pointed out, "It is a matter of
everlasting shame that the musical establishment
gave Harry Partch the back of the hand treatment."
This alone should give all music students
pause.
When you read the conventional history texts of 20th
century music, ask yourself: Why isn't microtonality
mentioned? And why isn't music by "the big names" of
the post-1950s played any longer?
The answers to these questions are connected.
And they can be found in the following list of the 10
best ideas of post-1950 music theory:
---------------------------------------------
SMART IDEA #1:
Joseph Yasser pointed out in his 1932 book "A Theory
of Evolving Tonality" that musical tunings change
with time. Intonation fashionable in one era becomes
unfashionable in another.
Any composer who, in the 1940s or 1950s, had read
Yasser's book would have realized that serialism was
just another fad...neither better nor worse than
Venetian antiphonal brass choirs, Baroque quodlibet,
or late Medieval mensuration canons. Yasser's realization
that tunings are not static, and that musical cultures
influence one another and tend to blend and intermingle
over time, is a lesson that STILL hasn't been absorbed
by the writers of conventional music theory texts.
---------------------------------------------
SMART IDEA #2:
Harry Partch proposed abandoning the conventional
12 tones. Instead, he built his own instruments
and trained his own performers. It doesn't much
matter what you abandon those 12 tones for...
just intonation? Non-12 equal temperaments?
The original tunings of Dowland and Byrd and Bach?
Non-just non-equal-tempered scales?
The crucial decision is to kick over the chess board
by building your own instruments. This alone
changes the rules of the conservatory-and-concert-
hall con game.
The fact of the matter is that Partch's decision to
step on the 12-tone anthill breathed much-needed
life into post-1950s music... And the existence of
this tuning forum is testimony to the continuing
power of that idea.
--------------------------------------------
SMART IDEA #3:
Jean-Claude Risset's idea (following John Pierce's
and Max Mathews' 1966 & 1969 papers, & later taken
up by John Chowning, James Dashow, William Sethares
and most recently Parncutt and Strasburg in the
1995 PNM article "'Harmonic' Progressions of
Inharmonic Tones") of basing non-12-tone methods
of tonal and timbral organization on the findings
of modern psychoacoustics was a brilliant one.
It has consistently led to beautiful music.
-----------------------------------------
SMART IDEA #4:
Erv Wilson's notion of augmenting with permutation
techniques the conventional Partchian organization
by harmonic and subharmonic series (viz., the tonality
diamond). Erv's technique offers a more tonally
efficient alternative to Partch's tonality diamond,
and it has proven exceptionally useful to just intonation
composers.
As Kraig Grady wrote in 1/1, "With the introduction
of Erv Wilson's combination product set, Just intonation
took a giant leap forward." [Grady, K., "Erv Wilson's
Hexany," 1/1, 7(1), 1991, pp. 8-11.]
If anyone needs further proof, Warren Burt's superb
composition "Vingt Enflures Sur L'Enfant Melvin" is
a vivid demonstration of the musical value of Erv's ideas.
----------------------------------------------
SMART IDEA #5:
Fokker's introduction of ratio space has influenced
generations of composers to produce interesting
and impressive music. The idea has been extended by
Tenney, Polansky, Johnston, Chalmers, Scholz and
many others. One of the very best po-mo xenharmonic
compositions, "Lattice [2237]" by Carter Scholz, would
be impossible without Fokker's original organizing
principle.
----------------------------------------------
SMART IDEA #7:
Lou Harrison's notion that all music students should
be trained in at least one other culture's musical
traditions. If this were done, it would end at one
stroke the onanistic over-theorizing, the bizarre
yearning to convert music into a species of
mathematics... Yes, it might even straighten out the
tortuous verbiage that has made a bottomless
chum bucket of 12-tone music theory.
Lou's idea is a brilliant one, long overdue.
When will someone put it into practice?
----------------------------------------------
SMART IDEA #8:
Ben Johnston's idea of training conventional performers
in non-12 techniques. Just as dinosaurs turned into
birds, the smart post-Webern serialists turned into
extended JI composers working with conventional
performers. If Webern had lived, he'd obviously have
given up 12 by 1950 at the latest.
-----------------------------------------------
SMART IDEA #9:
Max Mathews did what all geniuses do when he
applied the computer to music: something
that at first looked bizarre, then became
blindingly obvious, and finally seemed inevitable.
With its binary precision and enormous speed,
the computer was and is an ideal musical
instrument. Max Mathews gave a huge impetus
to 20th century composition in general (and non-
12 composition in particular) by writing the
first acoustic compiler.
------------------------------------------
SMART IDEA #10:
Ivor Darreg pointed out in 1975 that every kind of tuning
has its own "sound" or "mood" or "sonic fingerprint."
Choosing the "sound" of a composition by choosing
the tuning has proven an endlessly productive idea,
and inspired a wide variety of xenharmonic
composers.
------------------------------------------
It's worth noting that every one of these post-1950
ideas is inherently xenharmonic. That ought to tell
us something about the direction of the vital
currents of late 20th century music...
Attention, music students! How about asking your
professors why THESE ideas aren't mentioned in
your textbooks?
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Tue, 23 Jan 1996 08:01 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id XAA26305; Mon, 22 Jan 1996 23:01:20 -0800
Date: Mon, 22 Jan 1996 23:01:20 -0800
Message-Id:
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

2/1/1996 3:56:58 PM
From: mclaren
Subject: melodic modes in 19 & 17
---
As mentioned some months ago,
efforts to force-fit the 19 tone
equal tempered scale into the
melodic patterns familiar throughout
Western music are doomed to
failure.
This, because there's nothing like a
semitone in the 19-tone system.
Since 19 is a member of the third-tone
family of scales, this is obvious. Yet
many notation schemes and suggested
melodic modes depend on the mistaken
idea that 19 has something that sounds
or functions like a semitone.
In actual musical practice, two 19-tone
scale-steps do not sound anything like
a semitone--the resulting interval,
126.315 cents, sounds a full quarter
of a semitone larger than the semitone
found in 12. Alternating this interval
with the 3-step whole-tone in 19
produces a queasy effect that cannot
be described as either "major" or
"minor"--instead, the overall impression
is that of a 7-out-of-48-tone mode
which sounds distinctly out of tune.
Since major and minor melodic modes
do not exist as such in 19, this creates
a substantial conflict. After all, major
and minor *vertical* triads are easy to
play in 19--but a major or minor melodic
mode are not to be found.
So what's the solution?
The experience of the Southern California
Microtonal Group has shown definitively
that the concept of major and minor
melodic modes must be abandoned in 19.
To avoid producing a bizarre and queasy
out-of-joint melody, 12-tone melodic
paradigms must be thrown overboard and new
forms used.
Recently, Jeff Stayton recorded a duet
with me in the 19-tone system. Stayton
used one melodic mode, while Your Humble
E-Mail Correspondent used another. Neither
mode, however, had any point of contact
whatever with 12--and as a result, the
combination sounded entirely natural in
19.
My melodic mode used ascending and
descending subsets of the following:
LssLssssLsL, where L = 3 scale-steps in 19,
s = 1 scale-step in 19
This "super-mode" appeared as two
different modes, one ascending, the
other descending:
ASCENDING:
LssLsLLsL (intervals shown as number
of 19-tone scale-steps)
I.e.,
1 4 5 6 9 10 12 15 16 (20=1)
(scale degrees played: numbered
from 1 to 19, with 20 = 1)
DESCENDING:
LsLsLsLsL (intervals shown as number
of 19-tone scale-steps)
I.e.,
20=1 17 16 13 12 11 8 5 4 (20=1)
(scale degrees played: numbered
from 1 to 19, with 20 = 1)
Variety was added by borrowing additional
steps from the "super-mode" and adding
them ornamentally to either the ascending
or descending mode.
Jeff Stayton's guitar accompaniment used
an entirely different mode:
(Descending)
LsmssmsL where L = 3 scale-steps in 19
s = 1 scale-step in 19
m = 2 scale-steps in 19
I.e.,
4 20=1 19 17 16 15 13 12 9
(scale degrees played: numbered
from 1 to 19, with 20 = 1)
Stayton varied his mode by substituting 2
single scale-steps for a 2-degree "m" step:
thus he might play
LsssssmsL instead of LsmssmsL, for
example.
The combination of these two melodic modes
worked smoothly and sounded entirely natural
in 19. However, they do NOT derive from any
12-tone or Pythagorean paradigm.
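For readers who want to play with these step
patterns, here is a minimal sketch--assuming Python,
with the step sizes L = 3, m = 2, s = 1 taken from
the definitions above, and a starting degree chosen
only as an example--of how to expand a pattern into
scale degrees and interval sizes in cents:

STEPS = {"L": 3, "m": 2, "s": 1}

def step_cents(n_steps, edo):
    # size in cents of an interval of n_steps in an edo-tone equal temperament
    return n_steps * 1200.0 / edo

def expand(pattern, edo, start=1, descending=False):
    # walk a step pattern and return the scale degrees it visits,
    # wrapping around the octave so degrees stay in the range 1..edo
    sign = -1 if descending else 1
    degrees = [start]
    for symbol in pattern:
        d = degrees[-1] + sign * STEPS[symbol]
        degrees.append(((d - 1) % edo) + 1)
    return degrees

# Stayton's descending guitar mode in 19-tone equal temperament
print(expand("LsmssmsL", 19, start=4, descending=True))
# prints [4, 1, 19, 17, 16, 15, 13, 12, 9]

# the 2-step and 3-step intervals of 19, in cents
print(round(step_cents(2, 19), 3), round(step_cents(3, 19), 3))
# prints 126.316 189.474

The same function works for 17 (or any other
division) by changing the edo argument.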
More to the point, both of these modes use
*more* than 7 tones. It has been my experience
when improvising in or writing scores in 19
that more than 7 tones are required for a
natural-sounding melodic mode.
This should not come as a surprise. In 1956,
Miller's paper "The Magic Number Seven, Plus
or Minus Two" pointed out that the human
cognitive system is capable of easily
assimilating as few as 5 or as many as 9
"units." If each step of the melodic mode
is thought of as a unit, this explains why
9-tone modes come so naturally to 19--
even though such modes use 2 more steps
than the Pythagorean-based 12-tone
melodic modes familiar from Western music
theory, a 9-tone mode still fits comfortably
within the limits of the channel capacity of the
human sensorium.
(By contrast, serialism's 12 notes are too
many.)
Thus my experience indicates that at least
8 and usually 9 steps are needed to create a
convincing and natural-sounding melodic
mode in 19.
On the other end of the perceptual scale,
the 17-tone equal-tempered scale sounds
best melodically when used with 5-step
or 6-step modes:
1 6 4 1 5 (ascending: intervals in
number of 17-tone scale-steps)
I.e.,
1 2 8 12 13 17 (18=1)
or
1 6 1 4 4 1 (ascending: intervals in number
of 17-tone scale-steps)
I.e., 1 2 8 9 13 17 (18=1) (ascending: scale
degrees played--numbered from
1 to 17, with 18 = 1)
Not all xenharmonic equal temperaments
require a complete rejection of 12-tone
melodic modes. 22, 24, 27, 29, 31, 36 and so on
all have familiar 7-note major and minor melodic
modes.
19, however, does not. This startling contrast
between the familiar-sounding major and minor
triads available in 19 and the utterly alien-sounding
melodic modes is one of the greatest resources
of the 19 tone equal temperament. Composers who
ignore this fact do so at the peril of producing
awkward-sounding out-of-joint music that
gives the impression of 12 badly mistuned.
--mclaren


Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Fri, 2 Feb 1996 04:20 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id TAA26453; Thu, 1 Feb 1996 19:20:00 -0800
Date: Thu, 1 Feb 1996 19:20:00 -0800
Message-Id: <960201221740_412518206@emout10.mail.aol.com>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

2/2/1996 9:57:52 AM
From: mclaren
Subject: A saga of low cunning and
feral persistence
---
As a chronically unrepentant electronics
gonzo, permit me a small confession:
Yes, I finally built myself a duplicate of
Harry Partch's Harmonic Canon I (minus
Harry's bad design features).
And it sounds INCREDIBLE.
Johnny Reinhard has roundly chided me
for slighting acoustic music. Of course,
he's right.
In the end, there's no substitute for live
acoustic music played by good performers
on real acoustic instruments. The richness
and subtlety of the sound is nonpareil.
On the other hand, some of us are interested
in exploring worlds of timbre and massed
sonority which would not be possible to
realize (xenharmonically, anyway) without
the aid of "pushing a button on electronic
boxes."
Thus circumstances will doubtless force
me to continue "pushing buttons." If you
can get together a 100-piece orchestra
that can play accurately in 15-TET, though,
Johnny, let me know. In that case my
computer will definitely be mothballed.
:-)
The Harmonic Canon is a marvel, though. A
universe of subtle & gorgeous xenharmonies
lie within its bridges and pinblocks.
For example, entirely different timbres can
be gotten by stroking the strings with one's
fingers; by plucking them with guitar picks;
by tapping them with a knitting needle; by
thumping them with a piece of a piano action
(Harry gets his revenge against the piano--
50 years late!); and by whanging the strings
with a soft paint roller.
Moreover, non-Partchian string-bending
koto-style performance techniques
bring out an entirely different side
of the instrument.
The great virtue of the Harmonic Canon
lies in its potential as a kind of mechanical
sequencer. You set up various justly-intoned
melodies by moving the many independent
bridges, and you can get triplets, repeated
notes, single notes, entire melodic chains
playing at a rate entirely controlled by
the rate at which you move your finger or
your pick across the strings. Add to this
the potential gestural effects--pitch-bent
ji chords, for instance, or dissonant clusters
obtained by rapidly brushing groups of
strings--and a single player has got a
whole galaxy of microtonal sounds at
hi/r beck and call.
Harry called this instrument his "blank
canvas," and after playing it for a while it's
easy to see why. He also mentioned that
placing the bridges was almost as much of
an art as playing the instrument--another
truism which becomes even clearer with
personal experience in sliding 37-odd
bridges around.
The Harmonic Canon is to my mind the most
impressive of Partch's instruments. It's one
of the few that can't be approximated by a sampler
or a DX7. To everyone who's interested in
composing acoustic xenharmonic music, my first
suggestion would be: build a Harmonic Canon I.
Costs less than $100, and it'll open your
ears to a new cosmos of xenharmonic
harmonies and melodies.
To construct one, follow the plans in Harry's
Genesis Of A Music--sans the bad design ideas.
N.B.: Harry's bad design ideas were 1) Using
guitar tuning gears; 2) using glued wooden
pegs to anchor the guitar strings on the
other pinblock; 3) sliding that wacky plexiglas
pitch-bender under the strings.
Instead of tacking triangular wooden tongues
onto the end of the left-hand pinblock and
then mounting guitar tuning gears on 'em,
just anchor 44 piano tuning pins directly in the
left-hand pinblock. It works fine. The problem
with the wooden tongues is that they will
inevitably crack under all the tension from
those 44 guitar strings--Harry himself had
to bolt metal supports under the wooden
tongues to keep 'em from splitting off
entirely. Moreover, the guitar tuning gears
never stay in tune long. So the blasted original
Partch-design Harmonic Canon was *always*
going out of tune during performances.
By contrast, our Harmonic Canon stays in
tune for days at a time and can support
a much higher tension--thus the sound is
louder, and the plucked or struck guitar
strings will ring much longer than Harry's
strings did.
Also: avoid the wooden pegs. Bad idea.
Instead, use 1/4" machine screws on the
right-hand pinblock. (Make sure both
pinblocks are hardened rock maple.)
Under the screws, settle 5/16" washers.
Between the screws and washers thread the
guitar string, and voila! The brass loop end
of the guitar string will automatically catch
tight when you sink the screws with a screwdriver.
This was Bill Wesley's inspiration, and it's
infinitely simpler and less trouble-prone than
Partch's original design.
Q: What did Partch do when he went surfing?
A: He used to "hang eleven."
--mclaren


Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Fri, 2 Feb 1996 19:01 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id KAA16240; Fri, 2 Feb 1996 10:01:20 -0800
Date: Fri, 2 Feb 1996 10:01:20 -0800
Message-Id: <0099D528F3AA41A5.9363@ezh.nl>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

2/3/1996 2:42:50 PM
From: mclaren
Subject: Xenharmonics on a broken
shoestring
---
While it's easy to gripe about the lack
of this or that sophisticated synthesis
algorithm on today's MIDI synths, it's
sobering to realize how far we've come.
In many ways we live in the golden age of
xenharmonics.
A whole lot of dirt cheap fully retunable
MIDI synthesizers are available used--and
most of 'em for a pittance. For example,
the prospective microtonalist can today
grab a used TX81Z for about $250, or a
used VFX for about $600. These are both
excellent synthesizers. Add an antique DOS
286 machine, used, for another $200 or
so, tack on a DOS sequencing program
like Cakewalk or Texture, a cheap MIDI
interface, and you can do an astounding
amount of sophisticated microtonal
composition.
Move up to a Windows 386 machine (for
about $100 more) and a program like
Finale, and you've got the ability to
score and perform compositions of
a complexity unthinkable a few years
ago. You can record xenharmonic scores
that world-class ensembles would have
had to practice for 6 months to perform!
None of this was possible just 10 years
ago.
So much inexpensive high-quality
equipment has washed up in the USED
section of the classified ads today
and in the backs of cheapo guitar shops
that it's mind-boggling.
A look at a 1986 issue of "Keyboard" puts
the situation in focus--back then, analog
MIDI synths were the state of the art.
2,000 note sequencers running on the
Commodore 64 were considered "powerful."
The only affordable sampler was the Mirage,
and to detwelvulate *that*, you had to
buy an alternative operating system from
Dick Lord in New Hampshire.
Recently, US Snail brought me the latest tapes
by Warren Burt and Gary Morrison. While both
of these composers have asked me not to
review their work in public, they probably
wouldn't mind my saying that their
latest work is excellent.
And in both cases what's especially
impressive is how much they were able
to do with modest resources.
Gary Morrison, for instance, has an obsolete
68030-vintage Macintosh with a DAT
machine, a two-track Sound Tools setup,
and an Ensoniq ASR-10. Yet he's been able
to simulate a very convincing orchestral
wind and percussion ensemble.
Warren Burt has an equally modest set-up.
An "obsolete" 286 DOS laptop, a MIDI
interface, a little A/D-D/A box that hooks
into the computer's parallel port (cost
$150, maximum stereo output rate 22.05
khz), the public-domain program US from
the U of Illinois, a commercial DOS sample
editor, the Buchla Lightning MIDI controller,
a Proteus I, a Roland SCC-1 Sound Canvas
sound card, and a couple of reverb and delay
boxes, along with an obsolete 13-bit EPS
sampler. (Still an *extremely* useful synth--
as I can testify, since I still use one myself!)
Yet with this modest setup--which wouldn't
even rate a sneer from a Keyboard or Electronic
Musician reviewer--Warren manages to tease out
a kaleidoscope of interesting music.
His latest work ranges from digital musique
concrete, to algorithmic music which
uses William Sethares' idea of matching
partials to the microtonal tuning, to pastiches
which employ public-domain algorithmic composition
programs processing musical material from
neoclassical composers in a 19-tone extension
of serialism.
This kind of fine work done with so-called "obsolete"
MIDI equipment should tell us something important.
In the end, you don't really need a DigiDesign TDM
48-track Power Mac system. You don't really
need NeXTStep-486 running on an 80686 machine.
You don't really need a monster 16-bit sampler
with 64 or 128 megs of RAM.
This kind of bleeding-edge technology is nice--
but it's not *necessary* to produce good microtonal
music.
In the end, what matters most is imagination and
ingenuity. My own computer music never uses
a sampling rate higher than 20 khz; and you
can do a surprising amount with a 20 or 30
khz sampling rate on a 13-bit 1 megaword
sampler like the EPS.
Moreover, vintage synths like the 1986
TX81Z or the 1989 VFX have so many features
that it's difficult to believe *anyone*
has come close to exhausting their sonic
potential, even though they've been in use for
years and years.
Beyond that, there remains the largely unexplored
option of combining live acoustic home-built
xenharmonic instruments with digital synths
in live and recorded performances.
For some reason, microtonalists have long faced off
into opposing groups: the "acoustic only!" camp
and the "digital only!" camp. But why not mix
and match instruments of both kinds?
Why not combine *both* sound-worlds?
This is a direction we in the Southern California
microtonal group have been pursuing for years,
and it has so far proven fruitful.
Recently I finished building my own copy of a
Harry Partch-style harmonic canon I.
Essentially a monochord multiplied times 44,
with movable bridges for each string, the
instrument turned out to be much simpler
to construct than my forebodings indicated.
Best of all, it cost less than $100.
Yet with a harmonic canon you can tune up
all the tetrachords listed in John Chalmers'
magnum opus "Divisions of the Tetrachord"--
four or five at a time, simultaneously.
Or you can tune up a single tetrachord with
a variety of harmonizations.
You can also get Partch's 29-note, 37-note, 39-note,
41-note and 43-note just scales, or 43-tone
or 41-tone equal temperament. Not to mention
multiple courses of strings with lesser divisions
of the octave. The harmonic canon is an
endlessly useful instrument--it sounds splendid,
yet it's easy to maintain (run emery cloth along
the strings to get rid of rust once a week, and dust
and oil the wood) and almost trivial to
build.
Building my own megalyra has proven even
easier, and cost considerably less than $100.
These instruments require no special carpentry
skills--even a duffer like myself can cut and
plane maple and pine planks, drill holes, and
screw in 44 piano pins. Making a megalyra
is literally no more complex than drilling
11 pairs of holes, sinking 11 pairs of
piano pins, and winding tight 11 pairs of
piano strings across a piezoelectric pickup.
That's essentially all there is. (Hint: get
used rusted piano pins from a piano repair
shop. You can emery-cloth the rust off the
pins, and they work just fine. The only real
expense is the piezo pickup, the 11 piano
strings, and the piano tuning hammer. Once
again, rusted used piano pins work fine.)
Anyone can build these simple xenharmonic
instruments, yet the resulting music
sounds impressive in live performance--
especially when combined with digital MIDI
synths and/or playback of computer-generated
soundfiles.
When I see the latest issue of Computer Music
Journal, I have to wonder: Why aren't these
people talking about "more is less" in computer
music?
As Warren Burt has pointed out, in the age of
downsizing, most of our incomes are dropping
even more rapidly than the price of computer
hardware--so getting the mostest out of the
leastest is a matter of real concern.
It's also a fun challenge.
So every time that latest, greatest new
synth beckons to me from the music shop
window, a little voice in the back of my head
whispers: You still haven't used more than
1/10 of the capabilities of the equipment
you've already got!
Frankly, both Gary and Warren would do us
a favor if they'd post more about getting
"the mostest from the leastest."
Like Ivor Darreg's, their work has produced
impressive results with very modest resources,
and we could all learn a thing or two from
these fine composers.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Sun, 4 Feb 1996 19:47 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id KAA23868; Sun, 4 Feb 1996 10:47:46 -0800
Date: Sun, 4 Feb 1996 10:47:46 -0800
Message-Id: <9602041045.aa27022@cyber.cyber.net>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"John H. Chalmers" <non12@...>

2/4/1996 10:47:46 AM
From: mclaren
Subject: miscellany
---
Congratulations to Steven M. Miller, Harold Fortuin,
Matthew Puzan, Enrique Moreno and Neil Haverstick.
Even after a month, Neil's amazing MicroStock continues
to reverberate in the form of the recordings of the
concert. A first-rate event. Utterly superb!
Way to go, Neil.
Steven M. Miller deserves kudos for reaching outside of
academia to solicit tapes for the U. of Santa Fe's upcoming
electronic music concert series. In so doing, he discovered
a remarkable fact: some of the best music out there is
being done by ordinary schmucks with NO university
affiliation, NO resources, and NO predilection for
elaborate mathematics.
Congrats also to Harold Fortuin for building a
xenharmonic generalized MIDI keyboard (see the Huygens-
Fokker 1994 Yearbook for more info). Any chance of
commercializing the widget?
Good idea also to offer to exchange copies of your
music.
All too few forum subscribers seem to be interested
in making and sharing MUSIC, as opposed to words
about numbers about theories about words about
numbers about...
Especially praiseworthy: the scholarly labors of
Mssrs. Moreno and Puzan, whose theses promise a
comprehensive survey of xenharmonic incunabula.
Speaking of which--
Several articles touching on xenharmonics have
recently been published. Of particular interest
is Contemporary Music Review, Vol. 10, 1994.
The issue is *entirely* devoted to "composition
with timbre." As we all know, composing with
timbre is the gateway to non-12. Thus this issue
is of inherent promise to xenharmonists.
Also of interest: "Musical Scales In Central Africa
and Java: Modelling by Synthesis," by Frederic Voisin,
Leonardo Music Journal, Vol. 4, 1994, pp. 85-90.
Voisin describes the results of a series of
experiments in which indigenous tuning experts
were allowed to tune up their own scales on
a DX7II: the process was recorded with a MIDI
sequencer, and the researchers returned after
several days in each case and asked the same
tuning expert to evaluate the tuning again (so
as to confirm the reliability of the results).
When the DX7II was tuned correctly, the researchers
report that the tuning expert and a group of
bystanders would spontaneously perform a piece
from their repertory.
This methodology sounds like a significant advance
in ethnomusicology. It's orders of magnitude
beyond the relatively useless previous practice
of asking the indigenous tuning expert a series
of questions--if the questions are phrased in
terms of Western European musical assumptions,
what possible use can the answers be? It's as
though a bunch of Javanese music theorists came to
the U.S. and asked Western symphony orchestra
musicians: "What bem do you use? And how
do you arrive at your rasa?" The answers could
not possibly make sense.
And the previous practice of simply measuring
tunings in terms of cents and trying to fit them
into Western equal-tempered or ji tuning models
was also less than satisfactory. After all, a set of
measurements of native instruments does not
*by itself* tell us what the local tuning experts
were trying to achieve...while a digital record of the
tuning process *might just do that*.
"Applying Psychoacoustics in Composition: 'Harmonic'
Progressions of 'Nonharmonic' Sonorities," by Richard
Parncutt and Hans Strasburger, in Perspectives of New
Music, Vol. 32, No. 1, 1995, is also likely to prove
interesting to the prospective xenharmonic composer.
Finally--finally!--music theorists are beginning to wake
up to the reality Plomp and Levelt revealed in 1965:
--An INharmonic series of vertical partials will sound
entirely transparent and consonant as long as the distance
between each successive partial is greater than the
critical bandwidth at that frequency.
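As a rough illustration of that criterion--a sketch,
not anything from the article under review--one can
check whether the partials of an arbitrary spectrum
are spaced more than a critical bandwidth apart. The
bandwidth estimate below is an assumption (the
Glasberg & Moore ERB approximation,
ERB(f) = 24.7 * (4.37*f/1000 + 1) Hz), standing in
for the critical-band data Plomp and Levelt worked from.

def erb(freq_hz):
    # approximate critical bandwidth (ERB), in Hz, at a given frequency
    return 24.7 * (4.37 * freq_hz / 1000.0 + 1.0)

def well_separated(partials_hz):
    # True if every adjacent pair of partials is more than one
    # (approximate) critical bandwidth apart
    p = sorted(partials_hz)
    return all(f2 - f1 > erb((f1 + f2) / 2.0)
               for f1, f2 in zip(p, p[1:]))

# a stretched, inharmonic series with widely spaced partials
print(well_separated([220.0 * n ** 1.1 for n in range(1, 9)]))  # True
# three components crowded inside one critical band near 440 Hz
print(well_separated([440.0, 450.0, 460.0]))                    # False

By this measure the stretched series passes while the
10-Hz cluster fails--which is all the Plomp & Levelt
result speaks to: roughness, not musical "consonance"
in any larger sense.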
While visionary composers like Jean-Claude Risset,
John Chowning, James Dashow and Jonathan Harvey
have long taken advantage of this psychoacoustic fact,
for just as many decades the music theorists have
ignored this reality.
Parncutt & Strasburger's paper is thus a welcome addition
to the literature. By basing their compositional theories
on Ernst Terhardt's model of hearing, they've gone
back to basics--to the way the human ear hears, rather
than abstract set theory, or partition functions, or
abstract algebra, or combinatorics, or matrix operations,
none of which has any necessary connection to what
the ear actually HEARS.
Among other praiseworthy ideas, P & S suggest:
"(i) the model takes as its starting point the spectrum of
a sonority, rather than its musical notation." This is
a VITAL advance in music theory. Now that arbitrarily
complex and inharmonic timbres can be generated by
computer, the composer needs to consider the implications
of timbre as well as such familiar concepts as root note,
voice-leading, the relationship twixt harmony and melody,
etc.
Moreover, P & S note that "The model is not octave
generalized; it is based on pitch height...rather than pitch
class." Another extremely important point. Octaves
certainly have their uses, but many of us compose in
tunings without octaves. Why ought we to be hamstrung
by 600-year-old dogmas obsessed with subdividing a
2:1 interval...which interval does not exist in many of
the tunings we happen to use?
P & S also point out many subtleties often unrecognized
by music theorists: for instance, the psychoacoustic
effects produced by mistuned harmonics, etc. Listeners
still perceive substantially mistuned harmonics as
making a contribution to the formant--and so on.
P & S offer C code for an algorithm to evaluate vertical
sonorities according to their theory--something sure to
be useful to many xenharmonists who compose with timbre.
Alas, the article also suffers from a number of
omissions and oversights.
First and most important, P & S base their compositional
theory solely on Ernst Terhardt's theory of hearing.
Terhardt is a well-known psychoacoustician who's done
excellent work in the field. However, Terhardt's theory
of hearing is merely one among many. More: Terhardt is
a dyed-in-the-wool place theorist, and this tends to
bias some of his conclusions. There is, for
example, *no* discussion in any of Terhardt's papers
of the contradictions and paradoxes bedeviling place
theory--for example, that measured frequency discrimination
substantially exceeds what the physics of the place
theory of hearing predicts.
Thus there is good reason to believe that the model of
hearing on which Parncutt & Strasberg have based their
compositional theory is incomplete, and does not explain
many important aspects of the human auditory system.
P & S also make statements which tend to mislead
the unwary reader: "The perception of the pitch of a
complex tone such as a musical tone (piano, violin, voice,
and so on) involves pattern recognition (Goldstein
1973, Terhardt 1972 and 1974)."
In fact there is no consensus, nor yet any convincing
body of evidence, as to the exact processes involved in
the perception of the pitch of a complex tone. While
Parncutt & Strasburger claim that musical perception
is a matter of "pattern recognition," in actual fact
this is merely one of three competing theories of
the way the ear/brain system operates.
The other theories are that the inner ear performs
a mechanical Fourier transform, and that the
auditory nerve running from the inner ear to the
brain extracts & encodes the underlying periodicity
of sound waves detected by the inner ear.
While there is some evidence in support of the
pattern recognition theory of hearing (first advanced
by Wightman, by the way--not Goldstein!),
there is also some evidence AGAINST the
pattern recognition theory of hearing. There is
also a great deal of evidence FOR the other two
theories of hearing, neither of which Parncutt &
Strasburger seem to be aware of.
On top of these lacunae, P & S proceed to collapse
their model down to 12 pitch-classes. This is a
pretty low blow. After much fine talk about freeing
the composer from octave equivalence, etc., they
wind up giving another recipe for producing pretty
sounds in 12-equal.
File that one under the category "The beatings were
scientifically designed to enhance creativity."
Lastly, P & S betray a profound lack of familiarity
with the microtonal and psychoacoustics literature.
Their bibliography is full of extremely glaring
gaps and omissions: for instance, they omit completely
most of the CLASSIC articles on composing with inharmonic
sonorities: John R. Pierce's letter "Attaining Consonance
in Arbitrary Scales," In JASA, 1966; Mathews' and Pierce's
"Control of Consonance and Dissonance With Nonharmonic
Overtones" in "Music by Computers," ed. Beauchamp & Von
Foerster, 1969; Jean-Claude Risset's "Digital Experiments:
1964...." in Computer Music Journal, 1984; James Dashow's
"Spectra As Chords" in Comptuer Music Journal, 1980;
William Sethares' "Local Consonance and..." in JASA, September
1992, my own "The Uses and Characterisics of Non-Just
Non-Equal-Tempered Scales" in Xenharmonikon 15, 1993,
and last (but not least!) Slaymaker, "Chords From Tones
Having Stretched Partials," JASA, 1970 and Geary, J. R.,
"Consonance of Pairs of Inharmonic Tones," JASA, 1980.
If a nudnik like myself can rattle off this many obvious
references from memory, shouldn't Parncutt & Strasburger
have been able to do the half an hour or so of research
required by minimal standards of scholarship? Ought
not these two distinguished researchers to have been able
to dredge up adequate citations for their article?
Such negligence bespeaks more than mere laziness; it
augurs a total lack of interest in looking outside their own
narrow little sphere of interest--namely, the theories
developed "by Ernst Terhardt and his colleauges at the
Institute of Electroacoustics, Technical University of
Munich."
Thus, while Parncutt & Strasburger's article is a promising
start, it falls short on many counts.
Withal, still a worthwhile and useful article.
--mclaren

Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Mon, 5 Feb 1996 00:17 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id PAA29226; Sun, 4 Feb 1996 15:16:58 -0800
Date: Sun, 4 Feb 1996 15:16:58 -0800
Message-Id: <960204231257_71670.2576_HHB69-6@CompuServe.COM>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗emoon@netvoyage.net (Eric Moon)

6/7/1996 10:50:23 AM
>>>>>This attitude is the natural consequence of a causal relationship
>>>>>where the great works are first and the theoretical interest
>afterwards.
> It would seem equally natural, then, that there is no interest for
>a generalized practice of educating music students in
>the *how-to*s of some "exotic" intonation systems for which there is no
>significant corpus of real masterpieces.<<<<<<


To the extent that this is true, it certainly underscores the irrelevance of
academia to the creative process.

However, I was "turned on to", if not educated in, microtonalism through my
university composition teacher.

Even in a purely historical context, it would be nice to see a broader
understanding of the nature and use of pre-ET in the Renaissance and
Baroque. I am amazed at how few music students are aware of the existence
of anything but ET.




Eric Moon
Temiqui Music




Received: from eartha.mills.edu [144.91.3.20] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Fri, 7 Jun 1996 21:13 +0100
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id MAA16266; Fri, 7 Jun 1996 12:13:20 -0700
Date: Fri, 7 Jun 1996 12:13:20 -0700
Message-Id:
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗"Jonathan M. Szanto" <jszanto@...>

10/25/1996 12:28:10 AM
Buddies,

Since I have been lurking, a couple of comments came together to get me
babbling again, and they be:

>and boy, this list is sort of dull lately...too much polite math
>talk...we need a good controversy to get things rolling...Hstick

.. followed closely by ...

>Or perhaps people don't use ANY notation if they can avoid it. Those of
>us who use non-realtime instruments might need some sort of chart or
>table to help in making a piece, but a detailed score seems superfluous.
>Of course, the situation may be different where performers are involved
>but I find it a tremendous relief not to have to bother with it.
>
>Paul Turner

Seems to me that the last few weeks have been looking as if sponsored by the
old HP Calculators. Question in general: being that most all involved here
are interested in tuning alternatives to 12TET, and sounds like most wish
that more people could experience the lovely/ugly/amazing worlds that these
intonations open up, wouldn't we all be forwarding the cause if more music
were created that *involved* people, rather than spanking the
software-monkey another time?

I know the pleasures myself of having the control of every note and nuance,
or letting an algorithm do its thing. Nonetheless, and cognizant of the
wonderful pieces that have been done in 'non-realtime', does any of it
compare to the community experience of, say, playing in a gamelan? And
doesn't Paul's last bit, "the situation may be different where performers
are involved but I find it a tremendous relief not to have to bother with
it" read almost like "bother with them" (i.e., live musicians performing the
work)?

[NOTE: apparently the PC police have been attempting to control the terms
here on the tuning list, too: is it non-12TET? allotonal? microtonal?
xenharmonic? what???]

Given the choice of *doing* intonational music or *having it done to me*, I
know which I would choose, and it is the same one that I propose would be
most, um, beneficially nurturing to a larger audience and/or new performers.

..or something like that. Gad, it's true: Neil H. and I are twin
love-children of Elvis and Nadia Boulanger!

Cheers,
Jon

PS: What I find lacking in the list of late are the laconic and luxurious
linguistics of G. Taylor, but maybe that's just me...
*--------------------------------------------------------------------*
Jonathan M. Szanto | If spirits can live online . . . . . . . . .
Backbeats & Interrupts | . . . then Harry lives in Corporeal Meadows
jszanto@adnc.com | http://www.adnc.com/web/jszanto/welcome.html
*--------------------------------------------------------------------*


Received: from ns.ezh.nl [137.174.112.59] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Fri, 25 Oct 1996 14:56 +0200
Received: by ns.ezh.nl; (5.65v3.2/1.3/10May95) id AA08319; Fri, 25 Oct 1996 14:57:58 +0200
Received: from eartha.mills.edu by ns (smtpxd); id XA08303
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id FAA06190; Fri, 25 Oct 1996 05:57:55 -0700
Date: Fri, 25 Oct 1996 05:57:55 -0700
Message-Id:
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗bte@MIT.EDU

11/4/1996 10:15:39 PM
i'm looking for a chinese musical scale in cents,
if anyone knows one :)

thanks.

Ben Erwin




--<-@--<-@--<-@--<-@--<-@--<-@--<-@--<-@--<-@--<-@
it is then unconditional positive regard or love which releases
the infinite potential of creativity.
- Paul W. Dixon




Received: from ns.ezh.nl [137.174.112.59] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Tue, 5 Nov 1996 07:16 +0100
Received: by ns.ezh.nl; (5.65v3.2/1.3/10May95) id AA00495; Mon, 4 Nov 1996 03:27:31 +0100
Received: from eartha.mills.edu by ns (smtpxd); id XA00493
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id SAA15377; Sun, 3 Nov 1996 18:27:29 -0800
Date: Sun, 3 Nov 1996 18:27:29 -0800
Message-Id:
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗Manuel.Op.de.Coul@ezh.nl (Manuel Op de Coul)

1/2/1997 9:08:17 AM
Gary says:
> Manuel refuted my claim that there is no historical precedent for
> "pure" being synonymous with "just", referring to small-whole-number-ratio
> (SWNR) pitch relationships. (Or more specifically, I claimed only that I
> did not know of any such precedent.)

Ok, I was merely saying that the connotation of "pure" has a meaning in
tuning theory. Maybe I misread your statement in that.

> Adding a new definition to the synonym stew only risks novices thinking
> that the two mean two subtly different things.

Hmm, I don't see that. Many words have an inherently vague meaning,
including "pure". If you take every word literally, life would become
difficult.

> So I personally think that we ought to accept the hand of vocabulary
> cards the English language deals us whenever possible.

Languages are constantly changing. It happens all the time that words get
new meanings; sometimes they even get the opposite meaning of what they had.

Manuel Op de Coul coul@ezh.nl

Received: from ns.ezh.nl [137.174.112.59] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Thu, 2 Jan 1997 18:36 +0100
Received: by ns.ezh.nl; (5.65v3.2/1.3/10May95) id AA00768; Thu, 2 Jan 1997 18:38:59 +0100
Received: from eartha.mills.edu by ns (smtpxd); id XA00766
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id JAA07064; Thu, 2 Jan 1997 09:38:56 -0800
Date: Thu, 2 Jan 1997 09:38:56 -0800
Message-Id: <62970102173726/0005695065PK2EM@MCIMAIL.COM>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗Heikki Jamsa <hjamsa@...>

1/7/1997 2:12:07 PM
>From: William Sethares
>To: tuning
>Subject: stretching of strings

>A couple of days ago, someone asked
>how close to harmonic real strings are.
>One early article that addressed this is

>Young shows that the partials of piano wire are
>``stretched" by a factor of about 1.0013, which is
>about 2 cents per octave.

>-Bill Sethares

For small amplitudes the formula is

f = a ( n + k n^2 ) / ( 1 + k )

where
f is the frequency of the n-th partial,
a is the frequency of the fundamental, i.e. the 1st partial,
n is an integer, the number of the partial,
k is a small constant that depends on the stiffness, tension and mass of
the string.
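To make the formula concrete, a small numerical sketch (Python; the
fundamental a and the constant k below are made-up example values, not
measurements):

def stretched_partial(n, fundamental_hz, k):
    # frequency of the n-th partial of a stiff string:
    # f = a * (n + k * n^2) / (1 + k)
    return fundamental_hz * (n + k * n ** 2) / (1.0 + k)

a, k = 220.0, 0.0005   # example values only; k depends on the actual string
for n in range(1, 6):
    f = stretched_partial(n, a, k)
    print(n, round(f, 2), round(f / (n * a), 5))   # partial, Hz, stretch factor

For n = 1 the formula returns exactly the fundamental, and the stretch
factor (1 + k*n)/(1 + k) grows with n, which is the sense in which the
partials are "stretched".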

Heikki Jamsa




Received: from ns.ezh.nl [137.174.112.59] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Wed, 8 Jan 1997 02:39 +0100
Received: by ns.ezh.nl; (5.65v3.2/1.3/10May95) id AA19476; Wed, 8 Jan 1997 02:42:04 +0100
Received: from eartha.mills.edu by ns (smtpxd); id XA19474
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id RAA23110; Tue, 7 Jan 1997 17:42:00 -0800
Date: Tue, 7 Jan 1997 17:42:00 -0800
Message-Id: <199701072039_MC1-E45-7C84@compuserve.com>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗Heikki Jamsa <hjamsa@...>

1/8/1997 12:56:36 PM
>From: Gary Morrison <71670.2576@compuserve.com>

>> Yes, 41 is simplest, but the 53 system is much better. Fifths are even better,
>> and thirds are much better than in the 41 system.

> That's certainly true for the traditional thirds, but 41 fits 9:7 and
>7:6 better, although only slightly better. 41 is however significantly
>better at the 11:9 neutral third. Then again, 11:9 isn't 9-limit of
>course.

So we see that, in the 9-limit, the 53 system is better.
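A quick way to check claims like these (a sketch, not part of the
original exchange) is to compute the error, in cents, of the nearest
scale step to each just ratio in the two temperaments:

from math import log2

def cents(ratio):
    return 1200.0 * log2(ratio)

def best_error(ratio, edo):
    # signed error, in cents, of the nearest step of an edo-tone
    # equal temperament to the given just ratio
    target = cents(ratio)
    step = round(target * edo / 1200.0)
    return step * 1200.0 / edo - target

for num, den in [(3, 2), (5, 4), (7, 6), (9, 7), (11, 9)]:
    print(num, ":", den,
          round(best_error(num / den, 41), 2),
          round(best_error(num / den, 53), 2))

Running this, 53 comes out ahead for 3:2 and 5:4, while 41 is slightly
better for 7:6 and 9:7 and clearly better for 11:9--which matches what
was said above.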

Heikki Jamsa






Received: from ns.ezh.nl [137.174.112.59] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Thu, 9 Jan 1997 18:05 +0100
Received: by ns.ezh.nl; (5.65v3.2/1.3/10May95) id AA12433; Thu, 9 Jan 1997 18:08:45 +0100
Received: from eartha.mills.edu by ns (smtpxd); id XA12436
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id JAA03966; Thu, 9 Jan 1997 09:08:42 -0800
Date: Thu, 9 Jan 1997 09:08:42 -0800
Message-Id: <32970109170523/0005695065PK2EM@MCIMAIL.COM>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗Judith.Parkinson@anu.edu.au (Judith Parkinson)

1/15/1997 4:22:35 PM
I would like to unsubscribe. How do I do it?



Received: from ns.ezh.nl [137.174.112.59] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Thu, 16 Jan 1997 16:48 +0100
Received: by ns.ezh.nl; (5.65v3.2/1.3/10May95) id AA12870; Thu, 16 Jan 1997 16:51:54 +0100
Received: from eartha.mills.edu by ns (smtpxd); id XA12932
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id HAA26757; Thu, 16 Jan 1997 07:51:52 -0800
Date: Thu, 16 Jan 1997 07:51:52 -0800
Message-Id: <52970116154525/0005695065PK1EM@MCIMAIL.COM>
Errors-To: madole@mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu

🔗From: iann@inch.com (Ian Nagoski)

12/30/1996 9:03:39 PM
Hey, David - Would you do me a big favor and post this to the tuning list?
Thanks!

MELA Foundation Inc. 275 Church Street New York, NY 10013
212-925-8270

January 1997

Mela Foundation is seeking interns for unpaid volunteer positions
of Monitor for Dream House exhibition.

Dream House: Seven Years of Sound and Light, a collaborative
Sound and Light Environment by composer La Monte Young and
visual artist Marian Zazeela, is presented in an extended exhibition
at MELA Foundation, 275 Church Street, 3rd Floor. Young and
Zazeela characterize the Sound and Light Environment
as a "time installation measured by a setting of continuous
frequencies in sound and light."

POSITION: MONITOR for DREAM HOUSE exhibition
(Volunteer Interns)

Hours: Exhibition is open Thursdays and Saturdays from 2:00
PM to Midnight. Time slots of four to six hours need to be filled
on those days.

Description: Monitor will open or close exhibition; turn on
electronic sound equiptment and turn up light environment;
make sure all technical equiptment is running properly; greet
visitors; distribute information; answer questions concerning
the environment; sell books and recordings.

Contact: Call Ian Nagoski, MELA Foundation, 212-925-8270,
or email at iann@inch.com. If you call, leave a message on the
answering machine with your phone number and times we can
reach you. Or come to 275 Church Street, 3rd Floor, Thursdays
and Saturdays, 2:00 PM to Midnight, and experience the environment
and speak to the monitor on duty.

Press Commentary on Exhibition:

"... the multifaceted form of the 35-frequency construction of Young's
current installation is the principal reason it changes hallucinogenically
with every shift in perspective and why the tones freeze in place as long as
one is perfectly still while the slightest gesture will startle forth
unnamable, wildly plumed melodies from the luxuriant harmonic foliage.
Zazeela's light sculptures have invariably, teasingly refused to surrender
their entire secret to photographic reproduction, so much do they depend on
the retinal impact of activated photons in real time and so much do they
exploit, in ways analogous to Young's techniques, the creation of visual
combination tones and an accumulation of after-images."
-- Sandy McCroskey, 1/1, The Journal of the Just Intonation
Network

"Young's newest sine-tone sculpture shimmers and swirls as you walk
around the room and, amazingly, when you freeze, it does too.
Stay at least long enough to stare at Zazeela's Imagic Light and
Ruine Window, which will imprint your retina with blues and
purples you haven't felt before."
-- Kyle Gann, The Village
Voice

"The visitor with an acute ear can actually 'play' the room like an
instrument: explore the sound close to the wall, close to the floor,
in the corner, or just standing still. Or lie on the floor and allow
the sound to float you into heaven, slide you into hell, or transport
you wherever you want to go. See if you agree with those who
call Young's sound sculpture a precursor of ambient music.
Zazeela's light installation, "Imagic Light," offers an intriuging
complement to the sound, even though it is equally effective when
viewed in silence. Using pairs of colored lights and suspended
aluminum mobiles cut out in calligraphic shapes, Zazeela explores
the relationship between object and shadow, making the tangible
intangible, and vice versa. Enjoy the installation for its
mesmerizing beauty, or try to analyze how the different colors
are achieved, how the mobiles create the resulting shadows, or
perceive the infinite number of symmetrical patterns in the room."
-- David Farneth,
Metrobeat

Music Eternal Light
Art

---------------------------------------------------------------------------------

Received: from ns.ezh.nl [137.174.112.59] by vbv40.ezh.nl
with SMTP-OpenVMS via TCP/IP; Wed, 1 Jan 1997 05:55 +0100
Received: by ns.ezh.nl; (5.65v3.2/1.3/10May95) id AA06186; Wed, 1 Jan 1997 05:58:17 +0100
Received: from eartha.mills.edu by ns (smtpxd); id XA06079
Received: from by eartha.mills.edu via SMTP (940816.SGI.8.6.9/930416.SGI)
for id UAA21343; Tue, 31 Dec 1996 20:58:06 -0800
Date: Tue, 31 Dec 1996 20:58:06 -0800
Message-Id: <970101045521_75023.2426_GHJ56-1@CompuServe.COM>
Errors-To: madole@ella.mills.edu
Reply-To: tuning@eartha.mills.edu
Originator: tuning@eartha.mills.edu
Sender: tuning@eartha.mills.edu