
on MIDI realizations and microtonality

🔗D.Stearns <STEARNS@CAPECOD.NET>

11/21/2000 6:52:59 PM

Much has been made here and elsewhere about the evils and the
glories of MIDI realizations; and by MIDI realizations I mean MIDI
realizations of original or others' scores intended for existing
instruments (and here that generally means orchestral instruments),
and not the self-sustaining, freestanding electronic classical
tradition as personified in, say, Schultz, Carlos, Vangelis, et al.

I've weighed in with my own view that while greatly liberating -- and
especially so for the microtonalist -- I also almost always find MIDI
realizations gnawingly problematic as well. It is my opinion that most
of these "problems" stem from the exclusive and rote use of
uninteresting canned instrument simulations, resulting in varying
degrees of static (and generally unlistenable) realizations.

The strengths of MIDI realizations are all stacked in the direction of
ease of execution and realization, but this is all at the expense of
interesting, detailed sonic tapestries... it's not impossible to
overcome these inherent obstacles, but a piece of music/composer has
to either work eighty-seven times harder to do so, or strategically or
intuitively point the music towards the inherent strengths and
characteristic compass of the medium. (Luck, talent, and a strong
"personality" go a ways as well of course...)

I've posted two ideologically likeminded articles here that I think
show some relevant ways to approach the inherent blandness of all MIDI
realizations (the Myers article is a repost)... When Myers writes
"milquetoast sounds for milquetoast music" in his article "Limited by
Your Imagination", he's bemoaning the encroaching loss of sonic
personality at the dawn of the MIDI era. On the large scale, i.e., pop
music, things got worse before they got better... but in the curious
world of the modern DIY home-composer, things would appear to be dead
smack in the middle of the finding-the-way, worse phase... note
Moore's parenthetical, "Debussy wrote somewhere that he could spend
days trying to decide between two piano chords", quoted in the
following "The Recording Studio as Musical Instrument" article...
shouldn't the same care and important deliberations go into the sonic
carriers of one's ideas? I believe so.

"The Recording Studio as Musical Instrument"
by Steve Moore

I

Using the studio as a musical instrument begins when the
transformations of sound, i.e., effects, become essential to the piece.
In classical music, the recording studio is exactly a recording
studio--a sonic camera. In rock, the role of the studio has progressed
from this over the years, but for most conventional groups it is
usually not much more than a set of very sophisticated effects pedals;
the nature of the transformations is peripheral. The true musical (as
opposed to reproductive) potential of the studio is tapped when groups
and composers build sounds and structures which are not possible to
realize outside the studio, and which truly contribute to a work, to
the spirit of a work in particular, even though that work may contain
more conventional musical material recorded in "photographic"
passivity.

For example, to underline the idea of symmetry in a text, that text
may be reversed on tape and edited together with the original--a
device used, for instance, on the Art Bears song "First Things First."
This truly contributes to the spirit of the text, and is something
that can only be achieved in the studio (unless you have learned to
speak your own language backwards). Of course there are disadvantages
to making the studio essential to the music in this way--most obvious
is that live performance is not possible. In my own experience even
the simplest studio transformations are very difficult to set up live;
it is probably best not to attempt it, but to start from the other
direction--exploit the particular nature of live performance,
including its unpredictability, even its imperfection, and write for
live performance alongside studio composition.

I have been using the studio "as a musical instrument" for the last
seven or eight years, and I probably learned more about musical
composition in general in the first few years than I would have if I'd
taken a course. Every week is different and poses new problems, the
solutions to which I usually find by instinct more than anything
else--it is only afterward that I can see intellectually why one thing
was right and another wrong. Instinct is an underestimated authority.
It can be sensitive to very fine distinctions, making aesthetic
choices difficult but inescapable. (Debussy wrote somewhere that he
could spend days trying to decide between two piano chords.) Studio
composition in particular poses new problems--for example, how to
organize textures. In fact it poses far more problems than a composer
who has only ever written notes down on manuscript paper can imagine.
Traditional composition involves determination of the basic parameters
of pitch, rhythm, harmony, and (since the end of the last century)
orchestral color, but the studio composer is required to determine
everything beyond these, and moreover, to determine them once and for
all--a daunting task.

II

My own work method is disorderly and unmethodical. Pieces may start
life after "accidents" happen in the studio, or I may have a definite
idea for either the structure of the piece, its content, or its
"spirit"--usually the latter--supported sometimes by definite musical
ideas, such as melodic phrases or chord sequences or sets of concrete
images. However, I have never done a piece which has not changed
considerably from my original conception of it by the time it is
finished. I find that the music tends to "write itself"; I guide it in
the directions it seems to want to go, and don't worry too much if my
original conception has not been preserved intact. I have worked
particularly with field recordings, and these are as good an example
as any of something about which traditional composition has nothing to
teach us, but which can be treated as musical events in
themselves--indeed, must be if a piece using them is to work. Here I
should say that I mean a piece using them without any modification,
rather than a musique concrete work that uses them as starting points
for the morphological transformations which are the real material of
pieces in this genre. Using field recordings, untreated, subjecting
them only to collage and juxtaposition, teaches you a great awareness
of their psychological as well as musical characteristics. In the same
way that a melody may produce an emotional response, sounds from the
environment all have their own psychological "color," which has to be
considered alongside their pitch, rhythm, duration, texture, dynamic,
and so on. One learns in particular how such sounds can function in
the structure of a piece; how, for example, the ambience inside a
cathedral can "drive forward" short, isolated sounds--which is much
the same function harmony performs in "driving forward" melody in
traditional music.

I prefer to work at night on my pieces, mainly because the studio
environment is very distracting during the day--with people going to
and fro--and I find increasingly that I'm at my best creatively when I
can have complete privacy and it is quiet and still. I may make
substantial progress on a piece during an eleven- or twelve-hour
overnight session (most pieces take between fifty and two hundred
hours to complete, depending on their length and the amount of studio
treatments involved). Composition can be very dull at times; it can
look like cat's meat when you don't seem to be getting anywhere--but
then something unexpected happens and it suddenly all seems
worthwhile.

"The Recording Studio as Musical Instrument"
by Steve Moore

I

Using the studio as a musical instrument begins when the
transformations of sound, if effects, become essential to the piece.
In classical music, the recording studio is exactly a recording
studio--a sonic camera. In rock, the role of the studio has progressed
from this over the years, but for most conventional groups it is
usually not much more than a set of very sophisticated effects pedals;
the nature of the transformations is peripheral. The true musical (as
opposed to reproductive) potential of the studio is tapped when groups
and composers build sounds and structures which are not possible to
realize outside the studio, and which truly contribute to a work, to
the spirit of a work in particular, even though that work may contain
more conventional musical material recorded in "photographic"
passivity.

For example, to underline the idea of symmetry in a text, that text
may be reversed on tape and edited together with the original--a
device used, for instance, on the Art Bears song "First Things First."
This truly contributes to the spirit of the text, and is something
that can only be achieved in the studio (unless you have learned to
speak your own language backwards). Of course there are disadvantages
to making the studio essential to the music in this way--most obvious
is that live performance is not possible. In my own experience even
the simplest studio transformations are very difficult to set up live;
it is probably best not to attempt it, but to start from the other
direction--exploit the particular nature of live performance,
including its unpredictability, even its imperfection, and write for
live performance alongside studio composition.

I have been using the studio "as a musical instrument" for the last
seven or eight years, and I probably learned more about musical
composition in general in the first few years than I would have if I'd
taken a course. Every week is different and poses new problems, the
solutions to which I usually find by instinct more than anything
else--it is only afterward that I can see intellectually why one thing
was right and another wrong. Instinct is an underestimated authority.
It can be sensitive to very fine distinctions, making aesthetic
choices difficult but inescapable. (Debussy wrote somewhere that he
could spend days trying to decide between two piano chords.) Studio
composition in particular poses new problems--for example, how to
organize textures. In fact it poses far more problems than a composer
who has only ever written notes down on manuscript paper can imagine.
Traditional composition involves determination of the basic parameters
of pitch, rhythm, harmony, and (since the end of the last century)
orchestral color, but the studio composer is required to determine
everything beyond these, and moreover, to determine them once and for
all--a daunting task.

II

My own work method is disorderly and unmethodical. Pieces may start
life after "accidents" happen in the studio, or I may have a definite
idea for either the structure of the piece, its content, or its
"spirit"--usually the latter--supported sometimes by definite musical
ideas, such as melodic phrases or chord sequences or sets of concrete
images. However, I have never done a piece which has not changed
considerably from my original conception of it by the time it is
finished. I find that the music tends to "write itself"; I guide it in
the directions it seems to want to go, and don't worry too much if my
original conception has not been preserved intact. I have worked
particularly with field recordings, and these are as good an example
as any of something about which traditional composition has nothing to
teach us, but which can be treated as musical events in
themselves--indeed, must be if a piece using them is to work. Here I
should say that I mean a piece using them without any modification,
rather than a musique concrete work that uses them as starting points
for the morphological transformations which are the real material of
pieces in this genre. Using field recordings, untreated, subjecting
them only to collage and juxtaposition, teaches you a great awareness
of their psychological as well as musical characteristics. In the same
way that a melody may produce an emotional response, sounds from the
environment all have their own psychological "color," which has to be
considered alongside their pitch, rhythm, duration, texture, dynamic,
and so on. One learns in particular how such sounds can function in
the structure of a piece; how, for example, the ambience inside a
cathedral can "drive forward" short, isolated sounds--which is much
the same function harmony performs in "driving forward" melody in
traditional music.

I prefer to work at night on my pieces, mainly because the studio
environment is very distracting during the day--with people going to
and fro--and I find increasingly that I'm at my best creatively when I
can have complete privacy and it is quiet and still. I may make
substantial progress on a piece during an eleven- or twelve-hour
overnight session (most pieces take between fifty and two hundred
hours to complete, depending on their length and the amount of studio
treatments involved). Composition can be very dull at times; it can
look like cat's meat when you don't seem to be getting anywhere--but
then something unexpected happens and it suddenly all seems
worthwhile.

some thoughts on MIDI realizations: Part III

"Limited by Your Imagination"
by David Myers

A number of factors can be credited for the development of the
independent, sub-indy, and "underground" music scene from the late
seventies through the late eighties. Experiencing lean times from the
excesses and bloated financial expectations of the early seventies,
major record companies tightened their belts, investing in nothing but
sure-bet megagroups or their clones. Free-form FM radio died and
programming fell completely under the control of paid "consultants"
who dictated what musical mush would yield the maximum number of
listeners. Under these conditions, musicians even marginally outside
the plain vanilla mainstream had no choice but to create and circulate
their music via self-developed means and channels.

Equally significant during this period, technologies emerged which
allowed musicians to create increasingly sophisticated recordings
without relying on large commercial studios. Audio cassettes and
recorders developed into a surprisingly high-fidelity medium and
provided an accessible means of distributing independent music.
Musician-produced recordings have since thrived in an environment of
four-track cassette portastudios, affordable synthesizers with
remarkably detailed sound, digital delay and reverb units, sampling
machines, and of course the Musical Instrument Digital Interface
(MIDI). Along with digital audio technologies, the much-lauded
phenomenon of MIDI has brought equipment costs down and allowed
greatly enhanced compatibility between instruments, processors, and
even recording machines.

Being an "electronic musician" and shameless tech-head, I might be
expected to join the ranks of those praising such developments to the
heavens. But since I consider myself a troublemaker first and a
musician second, and considering the fact that there appear to be more
than enough praise-singers here and about, you'll have to indulge my
playing devil's advocate in the present instance. My task here is to
examine the neglected underside of these developments.

What underside? Since hardware has only become
"faster-cheaper-better," and standards for equipment compatibility and
control have exceeded any expectations, one may well wonder what there
is to bitch about. Ten years ago independent musicians certainly found
life more difficult and more expensive. In the area of electronic
music, synthesizers were only slowly coming to programmability; one
might work for hours to attain a particular sound on a
patch-cord-style machine, and have only a dim possibility of
recreating it with any accuracy at a later time. Such synths were
typically monophonic, time-consuming to program, bulky, and pricey. The
workhorse "home" multi-track tape recorder at the time, the Teac 3440,
was considerably more costly than later portastudio-type machines, and
of course additionally required an external mixing board. As far as
the most essential sound processor--reverb--the situation was even
worse; manufacturers continually struggled to produce the least
crappy-sounding spring reverberation, few having much success, and not
many musicians self-producing their recordings could afford the cost
or space requirements of the much-preferred plate reverbs. In
retrospect it all seems to have been unbearably difficult; every
production was a challenge to one's resourcefulness and innovative
craftiness. Many times things seemed held together with spit and
exhibited a fidelity to match, by today's standards. But at least in
the days of modular and other non-programmable synthesizers, we didn't
turn on the radio and hear the same patch over and over again. The
3440 did and still does offer superior sound to any cassette
portastudio. And as far as reverb units...well, that one is a bit
tougher to justify, but let's at least say that the truly adventurous
who built their own plates or pressed unoccupied bathrooms and
stairwells into service certainly had something special to show for
their efforts.

Today, of course, much higher sound fidelity is attainable, we have
access to more, ah, beautiful sounds, the price of outfitting a home
studio is within reason for many, and naturally of greatest importance
these days--it is all so much easier. The current hot item is the
"workstation," a piece of gear incorporating keyboard, multiple-voice
synthesizer, realistic sampled instruments (usually including several
full drum kits), delay, reverb, and other processors, and a digital
multi-track sequencer/recorder. As the ads say, "Now you can get down
to the real business of making music," the dirty job of creating sound
and schemes for assembling it into unique music now out of the way!

But it seems that innovation is now an optional ingredient rather than
a necessary one. Current digital wonders are relatively cheap and easy
to use, but without the requirement of innovative approaches, many
musicians leave out the innovation part altogether. The most frequent
and damning criticism of today's "home studio" music productions is
that they all sound so similar. With vastly increased possibilities,
and manufacturers telling us that we are "limited only by our
imagination," why does one hear so many independent cassette
recordings that sound virtually identical?

The fact is that most of our expanded technical capability has come
about through that mixed blessing of modern civilization, mass
production. The palm-top digital reverb unit only exists because the
potential sale of a hundred thousand of the little suckers subsidized
the whole affair. (I own one myself, but what are the implications of
a dozen or so sounds spread amongst tens of thousands of users?) Out
of necessity large corporations design products for Mr. Average
User--as average and bland as possible, in fact. But does that
describe you and your music? If so, you've probably read too far
already.

Perhaps most curious of all is that musicians not only accept this
middle-of-the-roadism, but often don't even meet manufacturers' modest
expectations of them. A common story in synthesizer repair departments
is that units brought in for work frequently contain only the original
factory preset sounds, the famously creative musicians never having
even tried to come up with any of their own. And the manufacturers
listen! Now we find, in many cases, newer models of equipment which,
rather than employing more developed soundcrafting possibilities,
actually revert to less capable (easier to use) versions, even of
inferior sound quality.

I have met musicians who own home studios which almost literally
contain "one of everything." And within each of the devices which are
programmable, guess what you'll find? Yup--factory junk, milquetoast
sounds for milquetoast music. My argument here is that one does not
need to be a major stockholder in Yamaha or Roland to produce
interesting, vital music. (They even dragged me into it for a couple
of years, promising heaven-with-a-MIDI-cord-attached, but I got wise.)
We don't need one of everything, we don't need stacks of MIDI synths
and $10,000 samplers. We need innovation and original approaches.
Don't use factory synthesizer patches; don't buy factory disks for
your sampler--it was made to use your sounds. Look into "obsolete"
gear like non-programmable synths, ring modulators, and analog delays.
Use a transducer driver to run your voice through a suspended
refrigerator rack on its way to your vocal track, put some alligator
clips on your guitar strings, use a mixer to add more regeneration to
effects than the manufacturer would normally allow.

What's most difficult to obtain today is interesting music. Been to a
record store lately? I haven't found much. Oh, and one last thing:
Anybody have a spring reverb for sale?

--Dan Stearns

🔗M. Edward Borasky <znmeb@teleport.com>

11/21/2000 11:23:56 PM

Unfortunately, because of its length, I can't quote significant chunks of
your post. But I do not share your pessimism, nor do I share the pessimism
of Moore and Myers. First of all, MIDI is nothing more or less than a
standard for digital control of electronic instruments. The standard defines
little more than the pitches and durations of notes, with enough flexibility
to allow, for example, pitch bending. There is little difference between a
MIDI file and a printed score for a conventional ensemble or orchestra.
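To make that concrete: a MIDI "score" is just a stream of short byte messages. Here is a minimal sketch in plain Python (no MIDI library; the helper names are mine, not from the standard's text) of the two channel messages that carry most of what a printed score carries:

```python
# Minimal sketch of raw MIDI channel messages, built by hand.
# Status byte 0x90 = note-on, 0x80 = note-off; data bytes are 7-bit (0-127).

def note_on(channel, note, velocity):
    """Build a 3-byte MIDI note-on for channel 0-15, note/velocity 0-127."""
    return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

def note_off(channel, note):
    """Build a 3-byte MIDI note-off (release velocity 0 by convention)."""
    return bytes([0x80 | channel, note & 0x7F, 0])

# Middle C (note 60) at velocity 100 on channel 0:
msg = note_on(0, 60, 100)   # three bytes: 0x90, 60, 100
```

Timing is not in the messages themselves; in a Standard MIDI File each message is preceded by a delta-time, which is how durations get encoded.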

> I've weighed in with my own view that while greatly liberating -- and
> especially so for the microtonalist -- I also almost always find MIDI
> realizations gnawingly problematic as well. It is my opinion that most
> of these "problems" stem from the exclusive and rote use of
> uninteresting canned instrument simulations, resulting in varying
> degrees of static (and generally unlistenable) realizations.

The state of the art in emulation of well-known instruments (flute,
clarinet, violin, piano, etc.) is well beyond where it was 20 years ago, but
it is still fairly easy for a sophisticated listener to tell whether a
recording was produced by a real musician on a real instrument or on a
digitally controlled simulation. Yet we move closer to this goal, if indeed
it is a goal, every time a faster signal processing chip is created, and it
is only a matter of time before we reach it. Would I buy a completely
synthesized performance of, say, the Shostakovich 4th Symphony? If the
person or team that did the synthesis provided an equivalent musical
experience to an orchestral performance, I would. I chose this example
because it strikes me as an extraordinarily difficult task, both to
synthesize this work and to perform it well with a real-world orchestra. It
is rarely performed because of its length and difficulty.

On a smaller scale, I rather like the "experimental animal" I have been
working with in the MPEG-4 Structured Audio example set, a piano piece by
Granados called "El Pelele". When played with a rather simple sample-based
piano simulator coded in the MPEG-4 SAOL language, driven by a MIDI file, it
sounds better than the Windows Media Player MIDI piano. Does it sound like a
concert grand? Not really ... not as bad as Crazy Otto, though :-). A little
more code, a faster computer, a higher sample rate, a wider D/A converter --
all these would make it sound better and raise the cost.

> The strengths of MIDI realizations are all stacked in the direction of
> ease of execution and realization, but this is all at the expense of
> interesting, detailed sonic tapestries... it's not impossible to
> overcome these inherent obstacles, but a piece of music/composer has
> to either work eighty-seven times harder to do so, or strategically or
> intuitively point the music towards the inherent strengths and
> characteristic compass of the medium. (Luck, talent, and a strong
> "personality" go a ways as well of course...)

Part of this is the "pitch and duration" mindset that MIDI inherited from
Western music. Trevor Wishart argues elegantly against this mindset in "On
Sonic Art"; it should be noted that his music is "musique concrete" realized
on a digital computer rather than with tape recorders as in the past. It
seems to me, though, that the concept of "notes" with a definite pitch and
duration is rather essential in microtonal music. It's not clear to me how
one combines microtonality and "musique concrete"; since I am interested in
both of them this is a problem I will need to solve. My solution may or may
not involve MIDI, since I'm not a keyboard player.
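For what it's worth, the usual escape hatch for microtonal "notes" under plain MIDI is the 14-bit pitch-bend message, typically with one note per channel. A small sketch of the arithmetic (my own helper; it assumes the receiving synth's bend range is known and already set):

```python
def cents_to_bend(cents, bend_range=200.0):
    """Map a deviation in cents to a 14-bit MIDI pitch-bend value.

    8192 is center (no deviation); bend_range is the synth's configured
    full-scale bend range in cents (commonly +/-200, i.e. two semitones).
    """
    value = 8192 + round(cents / bend_range * 8192)
    value = max(0, min(16383, value))     # clamp to the 14-bit range
    lsb, msb = value & 0x7F, value >> 7   # the two 7-bit data bytes
    return value, lsb, msb

# A quarter-tone up (+50 cents) with the default +/-200-cent range:
# value 10240, sent on the wire as LSB 0, MSB 80.
```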

> On the large scale, i.e., pop
> music, things got worse before they got better... but in the curious
> world of the modern DIY home-composer, things would appear to be dead
> smack in the middle of the finding-the-way, worse phase... note
> Moore's parenthetical, "Debussy wrote somewhere that he could spend
> days trying to decide between two piano chords", quoted in the
> following "The Recording Studio as Musical Instrument" article...
> shouldn't the same care and important deliberations go into the sonic
> carriers of one's ideas? I believe so.

The DIY home-composer is not limited to MIDI by any stretch of the
imagination. There are wonderful tools out there like CSound, CDP and many
others. The learning curves are steep, and the discipline of programming is
vastly different from the discipline of music.

I recall many years ago, a friend of mine was writing a rock opera. He
managed to borrow an Arp 2600 from a musician, and the two of us sat down
with it and a tape recorder. Basically, we just poked buttons, tried this
and that, twiddled knobs, and captured about six hours of tape. Then the two
of us gave the Arp back to its owner and set up a pair of tape recorders and
a splicing block. The final result was, if I remember correctly, about 11
minutes long.

If I were going to do something like this today, my approach would not be
fundamentally different. I'd sit down with the composer at my laptop, fire
up CSound, load some of the sample instruments from "The CSound Book" and
start poking keys and adjusting parameters. Rather than capturing the output
as wave files (10 megabytes a minute!) I'd simply check the CSound code into
my revision control system and play the snippets we liked back with Perl
scripts.

At some point, there would be pieces of audio on the hard drive or on the
CD-RW drive, and eventually the entire piece would be assembled and
converted to a WAV file. Instead of the ARP there is computer-generated and
transformed audio; instead of the tape there are hard disk files. But the
process is the same ... play with the instrument, explore what it can do,
keep careful records of what you liked and didn't like, play some more,
transform the sounds, play some more, have fun with it, use an infinite
amount of recording medium and then distill everything down to the music you
want.
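The record-keeping half of that workflow is easy to see in miniature: a CSound score is plain text, so the snippets live happily in revision control. A hypothetical sketch (the p-field layout here is my own assumption, not an instrument from "The CSound Book"):

```python
def to_score(events):
    """Render (start, dur, amp, freq) tuples as CSound score i-statements.

    Assumes a hypothetical 'instr 1' reading amplitude from p4 and
    frequency in Hz from p5 -- the shape of many simple CSound
    instruments, not any particular published one.
    """
    lines = ["i1 %g %g %g %g" % (start, dur, amp, freq)
             for start, dur, amp, freq in events]
    return "\n".join(lines) + "\ne\n"   # 'e' ends the score section

snippet = to_score([(0, 1, 0.5, 440.0), (1, 0.5, 0.4, 466.16)])
# A few dozen bytes of text like this, checked in and re-rendered on
# demand, stands in for ten megabytes per minute of captured audio.
```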
--
M. Edward Borasky
mailto:znmeb@teleport.com
http://www.borasky-research.com

Cold leftover pizza: it's not just for breakfast any more!

🔗Joseph Pehrson <pehrson@pubmedia.com>

11/22/2000 6:23:23 AM

--- In tuning@egroups.com, "M. Edward Borasky" <znmeb@t...> wrote:

http://www.egroups.com/message/tuning/15746

> Unfortunately, because of its length, I can't quote significant
chunks of your post. But I do not share your pessimism, nor do I
share the pessimism of Moore and Myers. First of all, MIDI

Actually, this is a little funny... but I read Dan's articles and got
an ENTIRELY different impression. They seemed very optimistic
regarding the use of studio technique as an integral part of the
composing process. Sure, Dan and others "dumped" on MIDI
instrumental realizations, but, for me, that wasn't the gist of the
articles...

Just as amusing, and actually true... is the fact that many of the
poppers really feel that "classical music" is behind the times, in
using the recording studio only as a "classical snapshot," with no
"plastic" manipulation of the medium through electronics.

They are probably right there. The "classical" folks really *DO*
seem to lag behind the curve generally... probably due to an inherent
conservatism linked to the closer associations with past musics...

Joseph Pehrson

🔗M. Edward Borasky <znmeb@teleport.com>

11/22/2000 10:03:04 AM

> Actually, this is a little funny... but I read Dan's articles and got
> an ENTIRELY different impression. They seemed very optimistic
> regarding the use of studio technique as an integral part of the
> composing process. Sure, Dan and others "dumped" on MIDI
> instrumental realizations, but, for me, that wasn't the gist of the
> articles...

Well, using studio techniques aka musique concrete certainly resonates with
me, since in my early days as a composer it was all I had. My three biggest
early influences were Lejaren Hiller (algorithmic composition), Harry Partch
(y'all know his stuff better than I do :-) and Alwin Nikolais (dance set to
musique concrete). Musique concrete has been around a long time, though --
since the early 1950s. Musique concrete / studio techniques have been a part
of American popular music since David Seville aka Ross Bagdasarian (the
*father* of the current one!) created the Chipmunks!

> Just as amusing, and actually true... is the fact that many of the
> poppers really feel that "classical music" is behind the times, in
> the use of the recording studio as a "classical snapshot" and no
> "plastic" manipulation of the medium through electronics.

Of course classical music is behind the times :-). I went to a board meeting
of our classical station, KBPS-FM, last week. They play more 20th century
music than any other classical station I've ever heard. Still, what the
audience wants is Bach, Mozart, Beethoven, Brahms, Wagner and of course
Vivaldi and Copland. These are the people who can hear the difference
between a vinyl recording and a CD and prefer the vinyl. These are the
people who heat their homes in the winter with vacuum tube stereo equipment.
These are the people who spend more money on Shostakovich CDs than they do
on lunch. These are the people who would prefer to hear a monaural Toscanini
Beethoven symphony remastered with a high-frequency cutoff at 10 kHz than an
uninspired performance recorded with perfect audio quality.

And they tend to run away full speed ahead when "challenged" with even mild
experiments like Varese. There are composers who can write for these people
and still be true to their own microtonal or experimental muses; Lou
Harrison is one that immediately comes to mind. I don't write music for
conventional instruments and ensembles because I haven't put in the study
necessary to do it.

If I wanted to, I suppose I could con some code into doing it, sketching out
the performance with *shudder* MIDI emulations of conventional instruments,
converting the finished MIDI score into a printed one. This would have the
advantage of letting people know what the piece was supposed to sound like
without having to read the score. This is something that couldn't be done 25
years ago -- sketch out a conventional instrumental piece with electronic
emulations. The best most composers could do was sketch out their pieces on
the piano, making it virtually necessary for the composer to be a pianist.
I'm not a pianist; I studied it for a year when I was very young and never
progressed beyond "Chopsticks" and "The Marine Hymn".

As far as I'm concerned, with the exception of "Switched on Bach", there
wouldn't even *be* experimental / electronic music today outside of academia
if it had not been for the adoption of first the synthesizer and later MIDI
by the rock / pop culture. And just as "mainstream" classical composers all
over the world incorporated jazz in the early and middle 20th century,
"mainstream" composers today are incorporating rock into their music. It is
an exciting time to be alive and to be composing, and I for one am very glad
the pop culture kept the synthesizer and MIDI alive for me to use.
--
M. Edward Borasky
mailto:znmeb@teleport.com
http://www.borasky-research.com

Cold leftover pizza: it's not just for breakfast any more!

🔗D.Stearns <STEARNS@CAPECOD.NET>

11/22/2000 4:54:08 PM

M. Edward Borasky wrote,

> Unfortunately, because of its length, I can't quote significant
chunks of your post.

Yeah, and I messed 'em up too... somehow I doubled one, etc. Boy, do I
hate when that happens!

> But I do not share your pessimism, nor do I share the pessimism of
Moore and Myers.

Hmm... I wouldn't exactly call it pessimism! Yes, it's a point of view
tied to strong aesthetic beliefs, and as such can probably be offset
by its opposite point of view, but "pessimistic"? I don't think so.

> First of all, MIDI is nothing more or less than a standard for
digital control of electronic instruments.

Right, I know, but what people are doin' with MIDI, what sonic reality
MIDI most often falls in your ear as, is *not* beside the point,
right?

> There is little difference between a MIDI file and a printed score
for a conventional ensemble or orchestra.

Well, I'd have to say that your "little difference" is quite an
aesthetically removed and forgiving leap!

> Would I buy a completely synthesized performance of, say, the
Shostakovich 4th Symphony? If the person or team that did the
synthesis provided an equivalent musical experience to an orchestral
performance, I would.

Maybe I haven't made myself clear: All I'm interested in is a
rewarding end result where "result" and sound are not mutually
exclusive terms! I'm interested in a paradigm and a mindset that
fosters this... it is my contention that MIDI, while extremely
liberating, is by and large not fostering this condition.

> It seems to me, though, that the concept of "notes" with a definite
pitch and duration is rather essential in microtonal music. It's not
clear to me how one combines microtonality and "musique concrete";
since I am interested in both of them this is a problem I will need to
solve.

Whether manipulated or not, sonic hunks of the real world are almost
always going to involve very interesting microtonal clouds and slivers
of sound. The reworking of part I of William Schuman's "Orpheus With His
Lute" that closes out my piece "Day Walks In" relies heavily on all
manner of artificial and natural sounds as part of its polyglot
microtonal approach. My piece "In the Shade of a Birch", a little
fantasy on the intrepid hunter as personified in the story of Carl
Higdon, includes a tape-manipulated duet for a live bee and an equally
live spring reverb feedback entity! Acutely microtonal -- just not the
sort I'm going to be able to post a lattice about...

> ... play with the instrument, explore what it can do, keep careful
records of what you liked and didn't like, play some more, transform
the sounds, play some more, have fun with it, use an infinite amount
of recording medium and then distill everything down to the music you
want.

Can't argue with that! Sounds like the perfect recipe...

happy harvesting,

--Dan Stearns

🔗D.Stearns <STEARNS@CAPECOD.NET>

11/22/2000 5:29:26 PM

Joseph Pehrson wrote,

> I read Dan's articles and got an ENTIRELY different impression.
They seemed very optimistic regarding the use of studio technique as
an integral part of the composing process. Sure, Dan and others
"dumped" on MIDI instrumental realizations, but, for me, that wasn't
the gist of the articles...

I think the likeminded thread is probably something like a protest in
the direction of a type of "surrender"... now everyone's not going to
see it that way of course, and to some MIDI is to say the piano what
the word processor is to the typewriter -- a wonderfully functional
utility in the direction of progress. I'm really not a technophobe,
and my gripes and worries are pretty specific.

> Just as amusing, and actually true... is the fact that many of the
poppers really feel that "classical music" is behind the times, in the
use of the recording studio as a "classical snapshot" and no "plastic"
manipulation of the medium through electronics.

Well if you believe that music develops along the lines of what
instruments avail themselves to it, then surely the Bomb Squad
represents a much more impressive and important step in the right
direction than any legit post MIDI composer! (BTW, when I first heard
"It Takes a Nation of Millions to Hold Us Back", I thought it was
absolutely the most exciting thing I'd heard in a long long while. It
really did just blow me away. Most every musician I knew, and from
teaching and working at the music store that was quite a few back
then, thought it was utter garbage and definitely not music...)

--Dan Stearns

🔗D.Stearns <STEARNS@CAPECOD.NET>

11/22/2000 6:22:56 PM

M. Edward Borasky wrote,

> Musique concrete has been around a long time, though -- since the
early 1950s.

I think it dates a little earlier than that... I know Schaeffer's
first gramophone studies are from the late 1940s for instance.

> This is something that couldn't be done 25 years ago -- sketch out a
conventional instrumental piece with electronic emulations. The best
most composers could do was sketch out their pieces on the piano,
making it virtually necessary for the composer to be a pianist.

Yes, for better or worse this is a very good example of MIDI's
extremely liberating impact (and if you're a "microtonalist" with an
itching desire to check out many different systems, all the more so).

FWIW, mp3.com is loaded with composers who are taking advantage of
this liberation... Steve Layton has one of the most continually
interesting, regularly rotated stations at mp3.com, "Going Places".

<http://stations.mp3s.com/stations/81/going_places_30_minute_ra2.html>

The current show's first featured artist is Eleanor Dimoff. I'll quote
Steve as it's most definitely relevant to this discussion...

"Eleanor Dimoff is one of the "new amateur" composers, made possible
in a large part by today's new technology. Now retired from her career
as a psychologist and living in New York, she's found time to pursue
again her earlier interest and love of classical music. The point is
that now she's able to actually HEAR what she creates, thanks to the
power of midi-based instruments and tools. And, thanks to the
internet, she's also able to share the sound of her music with the
world. I'm sure that Eleanor would be one of the first to admit that
she's not even close to trying to become another Beethoven! Yet that's
hardly the point; what IS important is that this kind of way to pursue
your dreams exists in today's world. That she now has the chance to
offer this kind of gift to others, and receive real returns for her
effort, is a truly remarkable and wonderful thing."

Pretty damn hard to argue with that... and I certainly find myself in
a position where I want to be supportive, but most of the time I just
wish I could be half as enthusiastic about the end results.

--Dan Stearns

🔗Monz <MONZ@JUNO.COM>

11/23/2000 12:02:44 AM

--- In tuning@egroups.com, "D.Stearns" <STEARNS@C...> wrote:

> http://www.egroups.com/message/tuning/15777
>
> Well if you believe that music develops along the lines of what
> instruments avail themselves to it, then surely the Bomb Squad
> represents a much more impressive and important step in the right
> direction than any legit post MIDI composer! (BTW, when I first
> heard "It Takes a Nation of Millions to Hold Us Back", I thought
> it was absolutely the most exciting thing I'd heard in a long
> long while. It really did just blow me away. Most every musician
> I knew, and from teaching and working at the music store that was
> quite a few back then, thought it was utter garbage and definitely
> not music...)

Dan, I agree with you 100%! This album absolutely floored me
when I heard it back in 1989, and made me a rabid Public Enemy
fan for life.

I was gigging regularly in a band then, and again just like your
case, none of my bandmates shared my opinion. We were staying right
on top of covering all the latest Top-40 and R&B, but the other
band members unanimously refused to do any Public Enemy or Ice-T,
the two most prominent rap artists at that time, and the two
greatest rap artists who ever recorded (IMO).

Ice-T's accomplishments and PE's other albums notwithstanding,
_It Takes A Nation..._ was the real breakthru. If rap had
continued in that direction instead of selling out to commercial
interests I think it would be a lot more interesting today.

I've always wanted to do a detailed microtonal analysis of
*something* from that album!...

-monz
http://www.ixpres.com/interval/monzo/homepage.html

🔗John A. deLaubenfels <jdl@adaptune.com>

11/23/2000 2:09:22 PM

[Dan Stearns:]
>"Eleanor Dimoff is one of the "new amateur" composers, made possible
>in a large part by today's new technology. Now retired from her career
>as a psychologist and living in New York, she's found time to pursue
>again her earlier interest and love of classical music. The point is
>that now she's able to actually HEAR what she creates, thanks to the
>power of midi-based instruments and tools. And, thanks to the
>internet, she's also able to share the sound of her music with the
>world. I'm sure that Eleanor would be one of the first to admit that
>she's not even close to trying to become another Beethoven! Yet that's
>hardly the point; what IS important is that this kind of way to pursue
>your dreams exists in today's world. That she now has the chance to
>offer this kind of gift to others, and receive real returns for her
>effort, is a truly remarkable and wonderful thing."
>
>Pretty damn hard to argue with that... and I certainly find myself in
>a position where I want to be supportive, but most of the time I just
>wish I could be half as enthusiastic about the end results.

Dan, unless I've missed it, no one has asked a simple question: what
quality sound box are you realizing MIDI on? The average PC sound card
puts out an awful sound - one instrument sounds almost like another, and
none are worth hearing twice, to my ears.

I don't have anything like a top-end module, but my Roland "Virtual
Sound Canvas" software does at least a fair job at realizing the range
of GM instruments.

The state of the art still has far to advance to rival acoustic
instruments - I don't think anybody disputes that - but the state of
the art is far from what most people hear. It'll be nice when everyone
has a decent sound module.

As for Eleanor Dimoff, her works seem to me to be interesting, but to
have made unfortunate use of the metronome ("correcting" every note's
timing to an exact even beat), a dangerous MIDI seduction, and so are
rhythmically stilted.

JdL

🔗D.Stearns <STEARNS@CAPECOD.NET>

11/24/2000 12:52:00 PM

John A. deLaubenfels wrote,

> Dan, unless I've missed it, no one has asked a simple question: what
quality sound box are you realizing MIDI on? The average PC sound
card puts out an awful sound - one instrument sounds almost like
another, and none are worth hearing twice, to my ears.

I've used a lot of different modules (including a couple of Sound
Canvases). Even once you're at a level where the quality is pretty
good, certainly tolerable, I still find the results to be wildly
inconsistent... things I think should "work" don't always cooperate...
things I think would never fly, based on past experience, sometimes
miraculously work out.

I've got over 50 disks of MIDI realizations -- several hundred pieces
in various states of completion -- and many many completed pieces I
just can't bring myself to let out of the box, simply because some
sonically/aesthetically bothersome feature or the other pisses me off
enough to shelve it, sometimes despite countless hours spent slaving
away at the realization.

> As for Eleanor Dimoff, her works seem to me to be interesting, but
to have made unfortunate use of the metronome ("correcting" every
note's timing to an exact even beat), a dangerous MIDI seduction, and
so are rhythmically stilted.

Yes, as I've said before, I really think that the strengths of MIDI
realizations are all lined up squarely behind convenience and ease, at
the direct expense of richly detailed, sonically nuanced results.
Interesting and satisfying results are not impossible, just difficult,
and more to the point as I see it anyway, often overlooked as a direct
byproduct of the medium.

For anyone who might be interested, here's a really nice article by
Nicolas Collins (it was originally published in conjunction with the 1993
STEIM festival) that engages many of these issues and quite a few
others as well...

EXPLODED VIEW: the musical instrument at twilight

Nicolas Collins

The twentieth century has witnessed a radical transformation in the
mechanics of music production, and of music's role in society. The
first half of the century saw the development of the recording
industry, and with it an elaboration of the path from composer to
listener, and a redistribution of power amongst new personnel and
technologies. The second half of the century has seen a similar
redefinition of the musical instrument: the physics of plucked strings
and vibrating reeds has been overtaken by electronic manipulation of
every link in the sonic chain that stretches between finger and ear.
Such dramatic changes have affected the way music is made and the way
music is heard. They have altered our very sense of what music is and
can be.

Thomas Edison did not invent the phonograph, contrary to the popular
American myth. Rather, as he had for so many other of his
"inventions", he recognized the consumer potential of an untried
machine. Various technologies for sound recording and reproduction had
been under development in several countries for several years --
Edison picked the one that seemed most practical and profitable,
tweaked it in his lab, and introduced it to the American public. With
a foresight worthy of a modern day computer mogul, he realized that
the key to financial success with the phonograph lay in controlling
both the hardware (the recorders and players) and the software (the
recordings), so he manufactured both. Partially deaf as a result of a
punch to the head as a child, Edison claimed to have no ear for, or
interest in, music. He saw the phonograph record as a sonic autograph,
and the player as a way to hear the speaking voices of famous persons.
Musical recordings were initially introduced as a novelty, and were
not taken seriously by Edison for years. It would seem to have been a
case of right technology, wrong vision.

Edison did, however, have a canny insight into the effect the
phonograph would have on the act of listening. By choosing to record
that most personal of sounds, the spoken voice, he anticipated a
fundamental change in the social role of music that took place in the
course of the twentieth century -- the shift from music as a
predominately public activity to music as a predominately private
activity. If the voice of President William McKinley did not become a
valued possession in the home of every American, the voice of Caruso
did. Professional music left the concert hall and entered the parlor.
Putting on one's own choice of a record, to listen to alone, replaced
attending a concert with the masses. And putting on a record in the
presence of friends replaced playing in a quartet with them.

The phonograph had a precedent in the 19th century popularity of the
piano, which linked the virtuoso on stage and the amateur at home: the
only difference was the time you spent practicing. The phonograph was
an even greater leveler: a seemingly infinite array of instruments and
ghost performers at your fingertips and no need to practice. Perhaps
this was too strong an affront to the late Victorian work ethic,
because the phonograph was immediately relegated to the status of
furniture rather than being treated as a musical instrument -- a
situation which remained unchallenged, with a few exceptions such as
John Cage's "Imaginary Landscape No. 1" (1939), until the rise of the
"turntable artist" in the 1980s. Society was split into two distinct
categories: a small group of professionals who made music and the
large mass of society that consumed it. The phonograph represented a
milestone in the gradual distancing of people from the act of making
music -- a process that had been taking place since the rise of art
music in Europe. Edison's invention effectively replaced the Victorian
amateur with the modern consumer.

Recording music is no more "natural" a process than taking a
photograph or making a film. Much effort and artifice goes into making
a recording seem effortless and artless. Each new generation of
recording technology is touted for its verisimilitude, its accuracy,
its transparency. In the CD era it is hard to believe that anybody
took these claims seriously at the time of the Edison cylinder. The
ever-savvy Edison, when introducing the phonograph, set up two
important support systems: the marketing forces required to convince
the public that a seat in front of this small box with a horn was
indistinguishable from one in a box at La Scala, and the recording
technicians who pushed feverishly at the limits of the technology in
pursuit of this ideal.

The technicians included design engineers who sought to improve the
existing technology, and recording engineers and producers who made
the best of what they had. The latter two rapidly rose in importance
and profoundly changed the distribution of power within the chain from
composer to consumer. They were to recorded music as the conductor was
to orchestral music, and not since the emergence of the conductor as a
charismatic figure had the singular authority of the composer been so
seriously undermined. The engineer and producer were responsible for
making the recording sound like the "original" music, the composer's
auditory vision, but they also influenced the kind of music that made
it to the record's surface, and many of its formal and structural
details. They became the orchestrators of recorded music -- they knew
what sounded good on record -- and for the same reason they became its
censors. And because the producer was also "the man who wrote the
checks", he soon became the single most important person in the
recording chain.

The film composers of the 1930s and 1940s were the first to learn the
technique of "studio scoring": they wrote music that was only heard
over loudspeakers. They were followed by the Tin Pan Alley
songwriters, who understood that a two and a half minute song on one
side of a 78 was a much more effective use of the medium than a
symphony chopped up on a dropping stack of ten. But it was Phil
Spector who perfected the art of making music for vinyl. By
positioning himself as a producer first and foremost, in the middle of
the recording chain, Spector extended his power and influence over the
entire production process: he wrote the tunes and lyrics, did the
arrangements, picked the musicians and singers, invented new recording
techniques, owned interest in the companies that pressed and
distributed the disks. Not until a record had proven itself a hit did
he bother to assemble, for the sake of live shows, a group to match
the name on the label. Total control.

Or was it? The recording age saw the emergence of two other
significant new powers in the musical machine: the disk jockey and the
consumer. Thanks to radio, the record may have been one of the first
products to act as its own advertisement, but records didn't play
themselves on air, disk jockeys did. One DJ was worth a thousand
listeners, more or less, depending on personality, wattage and
demographics, and thus he was a force to be reckoned with, courted,
and bought, if necessary. On radio, and later in a booth or on stage,
the DJ was acknowledged as the virtuoso of the turntable -- as much
for his encyclopedic knowledge of recorded repertoire as for his
physical touch. Any vestige of the musical instrument that remained in
the phonograph was appropriated by the DJ.

The downside of making records, record players and radios affordable
to the masses was that the masses could pick and choose with careless
ease amongst a myriad of musical offerings. Flipping off a record
midway through a side, or scanning the radio stations, had none of the
social stigma or economic recklessness of walking out on a concert.
The phonograph and radio may have been no match for the piano in terms
of musical expressiveness, but they did give the user an unprecedented
degree of control over his or her musical environment. Feedback from
listeners to record companies was quickly formalized into the "charts"
that continue to drive record marketing today. Baroque and Classical
composers survived by winning the patronage of a wealthy few; now the
fickle buying habits of the man and woman on the street held sway over
composers and steered musical style.

By 1960 the traditional, pre-recording model of musical production and
transmission had been exploded. The locus of power had shifted away
from the composer and the germ of the musical idea and was distributed
amongst specialist technicians and middle men (arrangers, producers,
engineers, disk jockeys, A&R men) and consumers. The effect of the
"British Invasion" of popular music in the early sixties was to kill
off the American Tin Pan Alley/Phil Spector tradition of
producer-centered authority, and to establish a new class of composers
who recognized the importance of recording studio literacy. The
Beatles and the Rolling Stones may have started out as bar bands but
they came of age in the studio under the guidance of gifted producers
such as George Martin. Production vision became as essential to a
songwriter as melodies and lyrics; studio technique as critical for a
band as instrumental competence. Recording technology evolved quickly,
and experimentation extended its application beyond the direct,
accurate transference of sound to tape. The recording studio became
both a musical instrument in its own right and a compositional tool.
After Sgt Pepper the challenge was not how to replicate a live
performance on record, but how to replicate the record in live
performance.

The notion of the studio as instrument had been pioneered years
earlier in European electronic music studios -- rooms full of abducted
electronic test equipment and spinoff radio technology. Pop music gave
the idea public visibility and commercial viability. The next step was
for RCA Laboratories to formalize the concept by transforming the room
full of electronic objects into an electronic object the size of a
room: the synthesizer. Taking its cue from the increasing influence of
electronic technology over the transformation of acoustic sound into
recorded product, RCA thought to dispense with the acoustic sources --
and the musician's hourly wage -- and to replace them with electronic
ones. This pursuit of the union musician's worst nightmare was
ultimately fruitless, but synthesizer design succeeded in altering our
understanding of the "musical instrument" in ways parallel to those by
which the phonograph had altered the social uses of music: it exploded
the chain of command.

Synthesizers -- from RCA to Buchla, Moog, Arp, and Serge to Oberheim
to Yamaha and Roland -- have been based on the premise that musical
sound could be analyzed, broken down into modular components, and
reconstructed (synthesized) by electronic circuits that emulated those
modules. It is a seductive idea: in place of dozens of different
instruments and skilled players, a single technician could arrange a
handful of electronic oscillators, filters, amplifiers, and control
circuits to conjure up any existing instrument or even some novel
hybrid combination of instruments. But the greatest implication of the
synthesizer lay not in how well it replicated sound -- some 35 years
later traditional synthesis methods have yet to mimic acoustic
instruments accurately enough to pass close scrutiny -- but in the
fact that for the first time in the history of instrument design the
causal properties of physical acoustics could be ignored.

Synthesizer modules were based not on modeling the physics of the
excitation and transmission of sound, but rather on a description of
the resulting sound itself. Without the mechanical causality built
into buzzing reeds on resonant tubes and bowed strings on soundboards,
synthesizers needed a system of interconnecting and controlling the
modules. The closest thing to a standard in the first generation of
commercially viable synthesizers was Voltage Control. Briefly, all
modules in a Voltage Controlled synthesizer were designed so that any
adjustable parameter, such as the pitch of an oscillator, could be
externally controlled by a variable DC voltage. In a Voltage
Controlled Oscillator, for example, increasing the voltage applied to
the frequency control input from one volt to two volts would raise
the pitch one octave. Thus a keyboard might put out a control voltage
that increases by one-twelfth of a volt with each key and that, when
applied to the Oscillator, would produce an equal tempered scale; or a second,
very slow sine wave oscillator could vary the pitch of an
audio-frequency oscillator just enough to produce something akin to
vibrato. Since the output of any module was a fluctuating voltage of
some kind, Voltage Control permitted any module's output to be
connected to any control input.
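As a numerical aside, the one-volt-per-octave convention described above is easy to sketch in a few lines. The 0 V = 440 Hz reference below is an assumption for illustration only; real synthesizers let you tune the reference pitch.

```python
# Sketch of the 1 V/octave convention: each added volt doubles the pitch.
# Assumption: 0 V is pinned to A440 (real hardware is tunable).

REF_FREQ_HZ = 440.0  # frequency at 0 V (assumed reference)

def vco_frequency(control_voltage: float) -> float:
    """Frequency of a Voltage Controlled Oscillator for a given CV."""
    return REF_FREQ_HZ * 2.0 ** control_voltage

def keyboard_cv(key_index: int) -> float:
    """A keyboard stepping 1/12 of a volt per key yields 12-tone equal temperament."""
    return key_index / 12.0

# Twelve keys up = 1 V = one octave: 440 Hz doubles to 880 Hz.
assert abs(vco_frequency(keyboard_cv(12)) - 880.0) < 1e-9
```

For a microtonalist, the appeal is obvious: dividing the volt into, say, 19 or 31 steps per key instead of 12 gives any equal temperament directly.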

More interesting than merely reversing the analysis process in order
to re-synthesize a known instrument was the possibility of rearranging
the modules with seemingly infinite variety, producing permutational
richness quite beyond the limitations of lips, strings, reeds, and
fingers. "Patchcord music" was born of throwing away the keyboard and
interconnecting modules according to systems that were no longer
imitative. The 1960s model of analysis/synthesis failed, but its
electronic building blocks acquired a structural meaning independent
of that original model.

While Voltage Control technology lay at the heart of most early
commercial synthesizers, there were variations in its implementations
that made it difficult to interconnect the machines of different
manufacturers. Voltage Control was accepted widely not as an "industry
standard" but simply because it made modular design possible within a
single instrument or a product line. Initially, company strategists
encouraged incompatibility: it could provide patent protection or fend
off infringement liability, and it was thought to insure customer
loyalty and repeat purchases. But in the early 1980s a handful of
manufacturers saw the potential benefits of implementing an
industry-wide control standard that would permit total compatibility,
and MIDI (Musical Instrument Digital Interface) was born.

The inspiration had come from the booming personal computer industry.
As microprocessors and memory became cheaper and more powerful,
synthesizer manufacturers started to incorporate them into instruments
for control and storage functions. For a synthesizer's microprocessor
to control another synthesizer's sound instead of only its own all
that was needed was an extra pair of jacks and a little bit of
programming -- communication protocols were already well established
in the computer world. Furthermore, the growth of the computer
industry had shattered long-held assumptions about the importance of
proprietary standards, and had demonstrated the profitability of open
architecture and third-party developers. Total compatibility meant
more sales for everyone, seemingly at the expense of no-one.

MIDI employs a serial communication protocol such as that used between
computers and peripherals. It requires a microprocessor in both the
controlling device (such as a keyboard) and the controlled device
(such as a sound generating module), but the cost of such a
microprocessor is negligible. The MIDI language permits the free
exchange of a large amount of information describing the articulation
of musical events over time. The language was designed by a committee
of representatives from synthesizer manufacturers -- not by musicians
and certainly not in consultation with any members of the avant
garde -- and it reflects a businessman's assumptions of what
information is musically important. Thus it is a bit like Esperanto:
not beautiful, not natural, but useful, logical and expandable.
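To make the committee's "useful, logical" language concrete: a minimal sketch of one MIDI channel voice message, as the MIDI 1.0 specification defines it. A Note On is three bytes: a status byte carrying the message type and channel, followed by two 7-bit data bytes.

```python
# Sketch of a MIDI Note On message (MIDI 1.0 channel voice message).
# Status byte: high nibble 0x9 (Note On), low nibble = channel 0-15.
# Data bytes: note number and velocity, each confined to 7 bits (0-127).

def note_on(channel: int, note: int, velocity: int) -> bytes:
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

# Middle C (note 60) on channel 1 at moderate velocity:
msg = note_on(0, 60, 64)
assert msg == bytes([0x90, 60, 64])
```

The 7-bit note number is precisely the "businessman's assumption" at issue in this thread: pitch is an integer index into a chromatic scale, and anything microtonal must be smuggled in through pitch bend or retuning.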

This expandability is important. The microprocessors required to
transmit, receive, and interpret MIDI information were initially
underutilized (rather like the VW Beetle engine block, which remained
externally unchanged between 1955 and 1966, but internally was stroked
and bored until the horsepower of that very conservative design had
doubled.) MIDI initially clumped the modules of electronic music into
two groups: controllers and sound modules. But by placing a
user-programmable computer between the controller and the controlled,
composers could turbocharge MIDI and introduce a whole new order of
musical possibilities.

The first commercial programs to take advantage of MIDI's
expandability were utilitarian tools that might best be described as
"organizers": sequencers (with which you record and edit "scores" for
controlling MIDI sound modules -- sort of digital piano rolls), voice
editing software (which lets you tweak sounds in the comparative
luxury of the computer screen rather than using the synthesizer's
knobs and curt displays), and librarian programs (for backing up and
restoring a synthesizer's programmable variables.) The second wave of
products included what might be called "re-organizers," programs that
interpreted and transformed the MIDI data in various ways:
arpeggiators, accompanists, variation generators, improvisors. These
represent the most recent stage in the dismantling and reconstruction
of the musical instrument.
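A hypothetical sketch of the simplest "re-organizer" Collins names, an arpeggiator: a program sitting between controller and sound module that transforms incoming note data rather than merely passing it through. The function and its pattern are illustrative inventions, not any particular product's behavior.

```python
# Hypothetical "re-organizer": unroll one held chord into a repeating
# ascending arpeggio of MIDI note numbers.

def arpeggiate(chord: list[int], cycles: int = 1) -> list[int]:
    """Sort the held notes low-to-high and repeat the pattern."""
    pattern = sorted(chord)
    return pattern * cycles

# C major triad, played as two ascending passes:
assert arpeggiate([64, 60, 67], cycles=2) == [60, 64, 67, 60, 64, 67]
```

Feeding the output to a sound module instead of the player's raw keystrokes is exactly the insertion of a decision-maker between controller and sound that the next paragraph describes.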

By placing a decision-making device between the player's controller
and his sound output, electronic instruments have progressed still
further from their acoustic antecedents. The composer can now re-enter
the musical chain post-score, post-player, and immediately preceding
actual sound, by programming the computer to filter out unwanted
actions by players. Or the computer can change or accompany the
player's activity according to algorithms specified by the composer or
performer but beyond his or her direct control. Or it can act like the
server in a computer network, co-ordinating the actions of several
players and controlling the sounds of multiple synthesizers. Inserting
a computer in the MIDI chain can infuse a performer's instrument with
the characteristics of a score, an orchestrator, an orchestra, an
accompanist, an improvisor, a conductor, a teacher, a censor, a
producer. In short, it opens up the musical instrument -- previously
thought of as the self-contained final link between player and
sound -- to incorporate any stage of the industrial chain of music
production.

MIDI's transformation of the musical instrument has had a profound
effect on the nature of both electronic music and the musical
instrument industry. A technology once intimately linked to the avant
garde has now become a mainstay of commercial pop. By creating an
instrument structure that parallels the production chain of a
recording, the locus of power can be shifted easily to suit the
musical ideology. Control over the end product can rest in the hand of
a virtuoso player touching a key, in the sequences prepared by a
producer, in software that improvises, in the studio automation
package controlled by the mixing engineer, in a technician's samples
or voice patches, or in a score embedded in a program. And the style
of the music reflects the center of authority. The charmingly
shambolic performances by the electronic music group the Hub result
from a collective computer network where no-one has direct control.
Composer and trombonist George Lewis shares the stage with a digital
alter ego, which improvises well because Lewis has taught it how, and
Lewis improvises brilliantly. Ed Tomney is woken up each morning by a
computer-generated voice that tells him what kind of music to write
that day. In a more commercial realm, the precision of House music is
built on power sequencing, while Rap makes extensive use of witty
sampler appropriation and the speaker-popping kick drum sound of the
venerable Roland TR-808 drum machine. After MIDI, electronic
instruments are no longer contained instruments, and electronic music
is no longer a contained style.

Starting with David Tudor in the 1950s, there has been a long
tradition of "homemade" electronic musical instruments, but before
MIDI successful commercial production of home designs was limited to
those with the financial backing needed to underwrite the considerable
tool-up costs of hardware development. MIDI, however, divided the
musical instrument and its manufacturing into three distinct parts:
controllers, software, and sound modules. Controllers and sound
modules incorporate both hardware and software, while software is just
software. Software isn't any easier to write than hardware is to
build, but it can be cheaper. It can be produced on a very modest
scale, free of the demands of investors who want a product's
eccentricities worked out along with its bugs. The 1980s saw several
composers market programs imbued with their own distinctive
personalities (most notably Joel Chadabe's M and Jam Factory, and
Laurie Spiegel's Music Mouse). Ironically, although MIDI was developed
as a convenience by and for large companies at the point when
electronic instruments began to acquire serious commercial value, it
also permitted the flourishing of artist-run cottage industries.

STEIM has, for 25 years, operated as a kind of subsidized cottage
industry. Our status as a government-supported foundation has given us
the freedom to extend non-commercial, artist-inspired design beyond
the software realm into the more cost-intensive, hardware-dependent
projects. Our primary function is serving composers whose needs are
not met by existing commercial products, or whose technical or
financial limitations place their musical vision beyond their grasp.
The emphasis has always been on live performance, and since the advent
of MIDI we have stressed the design of new controllers, which are not
based on the forms of existing instruments the way most commercial
ones are, and the development of software that extends the performer's
control over complex musical textures.

The "SensorLab" is the heart of STEIM's new controller design: a small
microcomputer that reads switches, pots, pressure pads and other
sensors, and translates that information into MIDI data; the
accompanying SPIDER control language lets the user program multiple,
dynamically re-configurable instruments based on a single, economical
hardware core. "Deviator" and "The Lick Machine" are software programs
for interrupting and transforming the MIDI data stream between the
controller and the sound modules. "Deviator" is essentially an effect
processor for MIDI data, producing delays, echoes, offsets, and other
transformations of incoming note and controller information. "The Lick
Machine" maps user defined sequences onto specific notes from the
controller for playback and manipulation by the performer. The "Big
Eye" software transforms a video camera into a contact-free controller
for translating movement and images into MIDI data.
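To make the "effect processor for MIDI data" idea concrete, here is a hypothetical miniature in the spirit of "Deviator" -- my own toy sketch, not STEIM's code. Note events, modeled as (time, pitch, velocity) tuples, gain delayed, transposed, decaying echoes:

```python
# Toy "Deviator"-style processor (illustrative only): each incoming
# note spawns echo taps at a fixed delay, with a pitch offset applied
# per repeat and the velocity decaying until it dies out.

def deviate(notes, delay=0.25, offset=-12, repeats=2, decay=0.6):
    """notes: list of (time_seconds, pitch, velocity) tuples."""
    out = list(notes)
    for t, pitch, vel in notes:
        v = vel
        for i in range(1, repeats + 1):
            v = int(v * decay)
            if v == 0:
                break                     # echo has decayed to silence
            out.append((t + i * delay, pitch + i * offset, v))
    return sorted(out)                    # merge echoes back into time order

stream = [(0.0, 60, 100)]                 # one middle C
print(deviate(stream))
# [(0.0, 60, 100), (0.25, 48, 60), (0.5, 36, 36)]
```

The interesting point is where this sits: between controller and sound module, transforming the data stream itself rather than the audio.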

There is a danger inherent in MIDI's modularity similar to one that
besets the recording industry: a lack of integration and feedback
between the parts. With the separation of modules it is easy to forget
one musical component while concentrating on another, to forget the
whole while indulging in the parts. Whereas commercial instrument
design usually focuses on sound at the expense of controllers, STEIM
has for many years emphasized controllers and interpretive software.
Mindful of the pitfall of segregation, we have recently embarked on an
ambitious Digital Signal Processing (DSP) project. We are designing a
compact DSP module that will serve not only as a MIDI sound generator,
sampler, and processor, but -- by incorporating elements of the
SensorLab -- will also have direct sensor interfaces allowing it to
serve as the core of a fully integrated electronic instrument. There
are aspects of acoustic instruments that have yet to be adequately
emulated in electronic ones, such as the tactile feedback between a
player's finger and a stopped vibrating string. We hope that by
combining delicate input sensors into the same hardware/software
package as the sound source we can begin to recreate this kind of
elusive musical touch. This return to a self-sufficient instrument may
seem like a step backward, but it is important to remember that while
this new module will serve as an integrated controller, software
interpreter, and sound module, it will also function as a hub for
communication with external hardware and software. After the
MIDI-based explosion of the instrument, STEIM is now creating
expandable music "kits" that have multiple options for re-assembly.

So where do we go from here? After the record's transformation of the
music industry, and electronic music's transformation of the identity
of the musical instrument, there are still links in the musical chain
that have remained relatively unchanged since the turn of the
century -- most notably the architecture and audience of the concert
hall, the audience at home, and the chamber ensemble. Are there ways
to open these as well, such that we might foster new musical forms?

Architecture is the final "instrument" in the acoustic chain from a
performer to our ears, imposing its own acoustic signature on the
music played within it, but unlike every other stage in that process
it is invariably inflexible. With the exception of the rather
underutilized performance space at IRCAM in Paris, there are no public
concert halls that permit the real-time transformation of
architectural acoustics. There are obvious economic reasons for this,
in addition to the fact that little music has been written for such
architectural instruments. But such music will not be written until
appropriate performance spaces are more readily available. If music
institutions were to encourage the construction of malleable concert
spaces -- with real-time remote control of shape, reverberation time,
frequency response, and other physical characteristics -- it might
provoke the creation of music of a truly monumental scale.

The development of architecture for music has closely paralleled the
gradual disappearance of participatory musical events within the
community. Accordingly, the modern concert hall places a passive
audience in fixed seating. Despite the fact that music is a
three-dimensional, moving medium (as Alvin Lucier has eloquently
demonstrated), "serious listening" has become a motionless activity.
Attempts to change this behavior, through interactive audio
installations and non-traditional concert hall design, have been
generally unsuccessful. We are faced with a fundamental attitudinal
difference between the passive and active consumption of music, with
the latter reserved for dance music, a small sector of the
experimental fringe, and musics outside the European classical
tradition. But just as the availability of halls with variable
acoustics might inspire new forms of "architectural music",
alternative listening environments might encourage more activity on
the part of audiences. Like the nightclubs of the 1980s, a concert
hall could be built not as a single "optimum" space, but as a sequence
of acoustically and electronically linked rooms, each with its own
character and social function. One space might cater to focused
listening and direct visual contact with the players (as in a
traditional concert hall), while another might present the music at an
ambient or even subliminal sound level. More active listeners could
interact directly with the music in "remix rooms", by adjusting
loudness, mix, and balance to suit individual taste, or wander through
a labyrinth of corridors and small rooms that would acoustically
transform the music more with every stage.

A similar passivity problem exists with "home audiences". There has
been little increase in listener activity since the advent of the
record and radio gave the consumer the power to select. Interactive
media such as CD-I are commercially insignificant compared to the
music CD, whose major selling point, I'm sure, is not sound quality
but the fact that you don't have to turn it over. Even music CDs boast
a degree of interactivity unknown with records or tapes, but what
percentage of listeners bother to program their own sequences or
listen in "shuffle" or random mode? Let's face it, once beyond passive
listening, we enter the realm of activities where satisfaction is
based on short-term competent task fulfillment. What fulfillment is
there in an alternative sequence of familiar songs? Interaction with
home electronics typically consists of scanning channels with a remote
control or playing video games. Video games emphasize the speed of
hand-eye co-ordination, usually in competition with the computer
itself rather than with other players. Scanning lets the viewer edit
broadcast material to the exact length of his attention span, while
pursuing a futile desire to miss nothing.

Neither of these two activities seems innately musical, but can we
develop a new form of parlor music based on their motivation? A task
that sits between the passive appreciation of music and the
accessible, if competitive, satisfaction of games? Scanning could be a
useful model. Multi-channel broadcast media carry tremendous amounts
of information that can be used directly in a musical work as sound
material, or can be transformed into structural elements -- for
example, translating the "value" of a given station's programming into
the amount of time you stay on it before moving on is not far removed
from certain practices of improvisational music.
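That last idea can be sketched in code -- a toy model of my own devising, with invented channel names and "value" ratings -- where each station's rating determines how long you dwell on it before moving on:

```python
# Illustrative channel-surfing "score" (hypothetical data): a station's
# value rating is translated directly into dwell time, cycling through
# the dial until the allotted listening time runs out.

import itertools

channels = {"news": 2, "static": 1, "opera": 5, "shopping": 1}  # invented ratings

def surf(ratings, seconds_per_point=3, total=30):
    """Return (channel, dwell_seconds) pairs until `total` seconds are spent."""
    clock, plan = 0, []
    for name in itertools.cycle(ratings):
        dwell = ratings[name] * seconds_per_point
        if clock + dwell > total:
            break                          # next dwell would overrun the piece
        plan.append((name, dwell))
        clock += dwell
    return plan

print(surf(channels))
# [('news', 6), ('static', 3), ('opera', 15), ('shopping', 3)]
```

Swap the deterministic cycle for a weighted random walk and you are very close to the improvisational practice the paragraph alludes to.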

Video games stress competition for its own sake, and only secondarily
(if at all) do they have any aesthetic content. A better game model
for music might be bridge. Although competitive, bridge has certain
characteristics that are similar to those of chamber music and
improvisation: it is a group activity (which sets it apart from that
other game commonly linked to music, chess); and it has a social value
beyond pure competition, with a tradition of a foursome playing
together on a regular basis. There are styles, strategies, and
"classic games" -- all contributing to a tradition of theory and
analysis. I could imagine a form of home music evolving like a weekly
bridge game. A cable scanner, an interactive CD, or a computer program
could provide elements of chance, topicality, score, and sound
material. Performances could take place at many different levels of
skill. Performances could be played back later for analysis or passive
listening. The game-like competitiveness could provide the initial
hook for pulling a listener off the couch and activating him or her as
a performer, while the social factor would encourage the
re-integration of musical performance into everyday life.

Whether this would ever happen depends, of course, not on the will of
any one composer or any four bridge players, but on the constellation
of technology, economy, social norms, and zeitgeist that governs all
cultural developments. Edison's genius, after all, lay not in
invention, but in a gift for being in the right place, at the right
time, with the right machine.

--Dan Stearns

🔗David J. Finnamore <daeron@bellsouth.net>

11/24/2000 7:20:09 PM

--- In tuning@egroups.com, "M. Edward Borasky" <znmeb@t...> wrote:

> Would I buy a completely
> synthesized performance of, say, the Shostakovich 4th Symphony? If the
> person or team that did the synthesis provided an equivalent musical
> experience to an orchestral performance, I would.

Wandering slightly OT, here, in hopes that it will inspire some
to produce more interesting sounding recordings of microtonal music,
which will help all of us, of course: Anyone remember Isao Tomita in
the late 1970s? Full orchestral scores meticulously realized with
*analog* synthesizers! And it was wonderful. Needless to say, it
sounded nothing like a real orchestra. Instead of trying to mimic
acoustic instrument sounds, he created *new sounds that filled the
same roles*. And he played them skillfully.

A decade later, I chased that ideal with a DX-100 and a 4-track
cassette recorder for a while, until I was finally sucked into
the seductive world of MIDI triggered samples around 1991. Now,
comparing my home recordings of 1990 with those of 1992, I can't
believe it's the same person. Things went way downhill way fast.
I had subjected my humanity to the soulless, unfeeling decisions
of machines. Bad idea.

If you must use a MIDI sequencer, do yourself a favor and forget
that it has a quantize feature. Play it over and over again until it
sounds right, just like you would if you were recording straight to
tape (or disk!). You will make better musical decisions than your
computer could ever dream of, even if it could dream.

--
David J. Finnamore
Nashville, TN, USA
http://personal.bna.bellsouth.net/bna/d/f/dfin/index.html
--

🔗John A. deLaubenfels <jdl@adaptune.com>

11/25/2000 2:55:06 AM

[David J. Finnamore:]
>If you must use a MIDI sequencer, do yourself a favor and forget that
>it has a quantize feature. Play it over and over again until it
>sounds right, just like you would if you were recording straight to
>tape (or disk!). You will make better musical decisions than your
>computer could ever dream of, even if it could dream.

Yes! This is the problem with MANY midi sequences (what I referred to
as using the "metronome" - same thing). Rigid note timing is the death
of musicality. Unfortunately, many sequencing programs are determined
to "help" by including it in some form or other. But I think all of
them allow quantization to a very small (short) note value; that,
plus a fast tempo setting, can in effect disable the "feature" and
allow for timing variation, breathing.

My own midi recordings are no great shakes technically, but I always
capture timing to .001 seconds, so at least THAT is not a problem.
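To illustrate the point with a toy quantizer (my own sketch, not any particular sequencer's algorithm): a coarse grid snaps every onset into rigidity, while a 1 ms "grid" is effectively no quantization at all.

```python
# Toy quantizer (illustrative): snap onset times, in seconds, to the
# nearest multiple of a grid value. A coarse grid kills the push and
# pull of a performance; a 1 ms grid leaves it audibly intact.

def quantize(onsets, grid):
    """Snap each onset time to the nearest multiple of `grid` seconds."""
    return [round(t / grid) * grid for t in onsets]

performed = [0.000, 0.487, 1.013, 1.492]    # a slightly breathing beat
print(quantize(performed, 0.5))             # rigid: [0.0, 0.5, 1.0, 1.5]
print(quantize(performed, 0.001))           # 1 ms grid keeps the human timing
```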

JdL

🔗graham@microtonal.co.uk

11/26/2000 10:58:00 AM

David J. Finnamore wrote:

> If you must use a MIDI sequencer, do yourself a favor and forget
> that it has a quantize feature.

Damn! I had until you reminded me. Now I have to start all over again.

🔗Joseph Pehrson <pehrson@pubmedia.com>

11/27/2000 7:54:34 AM

--- In tuning@egroups.com, "M. Edward Borasky" <znmeb@t...> wrote:

http://www.egroups.com/message/tuning/15760

> I don't write music
> for conventional instruments and ensembles because I haven't put in
> the study necessary to do it.

Well, although not requiring all THAT much study -- a good
orchestration book will do much -- it certainly requires EXPERIENCE.
Since so many people are doing this, perhaps a "more refreshing" path
might be for you to continue your original experiments with computers
and musique concrète. I, for one, would like to hear them if you ever
get them posted on the Web (maybe they are already, and I just don't
know about it!)

________ ___ __ _
Joseph Pehrson