Reverse Engineering, Pt. 2

🔗Jonathan M. Szanto <JSZANTO@...>

5/10/2004 6:59:38 PM

List,

I value the brain trust in this joint, and I've still been thinking. The version of Margo's EuroUnion piece that Gene did made me think again: if someone could find a way (and maybe it would entail the composer re-doing a midi file) to convert a piece from multi-track w/pitch bends to single track with .scl/.tun support, that would be great.

The fact that the rendered version with 'choir' that Gene did has constant re-attacks on the notes is not a fault of Gene's, but simply a limitation of the font/rendering process. It would be nice for someone to mock this up in another synth/sample/whatever format to get an even smoother phrasing.

But this is just an example, and I already realize that there are stumbling blocks that might prove insurmountable. OTOH, I can't believe that we've come to the point where we can create a microtonal piece, but have it locked into a convoluted form (i.e. doing pitch bends across multiple midi channels) that refuses to release its musical information for further development.
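[Editor's note: the "one note per channel, each with its own pitch bend" workaround Jon describes can be sketched in a few lines. This is a minimal illustration, not anyone's actual tool, and it assumes the common +/-2-semitone pitch-bend range; the event tuples are a made-up intermediate format, not a real MIDI file parser.]

```python
# Collapse the per-channel pitch-bend workaround back into a single
# annotated melodic line.  Assumes a +/-2-semitone (200 cent) bend
# range, where the 14-bit bend value 8192 means "no bend".

BEND_CENTER = 8192          # 14-bit pitch wheel center (no bend)
BEND_RANGE_CENTS = 200.0    # assumed +/-2 semitone bend range

def sounding_pitch_cents(note, bend):
    """Return the sounding pitch in cents above MIDI note 0."""
    offset = (bend - BEND_CENTER) / BEND_CENTER * BEND_RANGE_CENTS
    return note * 100.0 + offset

def merge_channels(events):
    """Merge per-channel (time, channel, note, bend) events into one
    time-ordered line of (time, cents) pairs -- the single musical
    line that the multi-channel form obscures."""
    line = [(t, sounding_pitch_cents(n, b)) for (t, ch, n, b) in events]
    return sorted(line)

# Three notes spread over three channels, each bent differently:
events = [
    (0.0, 0, 60, 8192),   # C4, no bend
    (1.0, 1, 64, 8878),   # E4 bent up  ~16.7 cents
    (2.0, 2, 67, 7506),   # G4 bent down ~16.7 cents
]
print(merge_channels(events))
```

Once the line exists in this form, retuning it to a .scl/.tun scale is a lookup per note rather than a per-channel bend bookkeeping problem.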

No need for immediate answers, let's just all think about this...

Cheers,
Jon

🔗Gene Ward Smith <gwsmith@...>

5/10/2004 8:04:43 PM

--- In MakeMicroMusic@yahoogroups.com, "Jonathan M. Szanto"
<JSZANTO@A...> wrote:

> The fact that the rendered version with 'choir' that Gene did has constant
> re-attacks on the notes is not a fault of Gene's, but simply a limitation
> of the font/rendering process. It would be nice for someone to mock this up
> in another synth/sample/whatever format to get an even smoother phrasing.

Because of Paul's comments, I've been pondering the question of
putting up a Timidity alternative to some of the music which has been
discussed and seeing what people think. Do you feel there would be
value in that?

> No need for immediate answers, let's just all think about this...

Midi serves as a kind of technical bottleneck and a lot of the
persistent problems come from there.

🔗Jonathan M. Szanto <JSZANTO@...>

5/10/2004 9:35:46 PM

Gene,

{you wrote...}
>Because of Paul's comments, I've been pondering the question of putting up
>a Timidity alternative to some of the music which has been discussed and
>seeing what people think. Do you feel there would be value in that?

Would that be simply another rendering? If so, I don't think much would change, read below...

>Midi serves as a kind of technical bottleneck and a lot of the persistent
>problems come from there.

In a way, yes, but not always. One of the reasons multi-channel things are difficult, *unless* there is a way to play/program in the individual musical lines first (and do inflection/editing then) is that you can't put it all back into one musical line that you could contour. You can't edit controller data to inflect either the color or volume or [insert synth parameter here] so that you have musical control over the line. And if Timidity is just going to render again with Soundfonts, you don't get anything to alter the repeated vocal attacks of the choir (unless you can come up with a layered, velocity-sensitive font, and go in and edit the midi velocity data).

But maybe it would fix the scoops.
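[Editor's note: the velocity editing Jon is describing -- going into the MIDI data and reshaping attack levels so a line has a musical contour -- can be sketched as below. This is an editor's illustration, not a feature of Timidity or any particular sequencer; the function name and contour values are invented for the example.]

```python
# Reshape a flat line of MIDI note velocities with a phrase contour,
# so repeated attacks at least rise and fall musically.

def shape_velocities(velocities, contour):
    """Scale each velocity by a 0.0-1.0 contour value (same length),
    clamping results to the valid MIDI velocity range 1-127."""
    assert len(velocities) == len(contour)
    shaped = []
    for v, c in zip(velocities, contour):
        shaped.append(max(1, min(127, round(v * c))))
    return shaped

# A flat choir line of eight identical attacks...
flat = [96] * 8
# ...given an arch-shaped phrase contour (swell toward the middle):
arch = [0.6, 0.7, 0.85, 1.0, 1.0, 0.85, 0.7, 0.6]
print(shape_velocities(flat, arch))
```

With a layered, velocity-sensitive font, this kind of edit changes timbre as well as loudness, which is exactly the per-line control the paragraph above says a plain re-render doesn't give you.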

MIDI is as much a bottleneck as one allows it to be, and this is an instance (unless you want to go straight to recording live, acoustic instruments, or program Csound like Prent) where there pretty much isn't an alternative. If people are going to work in the electronic world (and it *really* pains me that [currently] acoustic musics are taking a back seat, if a seat at all, on MMM), then one needs to look at the entire panorama of options. These, naturally, include playing/programming individual lines, mixing them how one wants, freezing/rendering to audio, and then bringing that audio back into the piece as one track. You can turn midi into multiple-track audio recording, and have a lot of control over the mix over a long period of time.

Yes, it is a lot of work, and some people don't like to make music that way (and I'd never force them). But I've spent a fair amount of time in the last week doing dedicated research on some of this, for my own selfish purposes :) and that includes buying recordings and magazines and a lot of time online. There is some hellaciously cool stuff going on, and the amount of control a composer can have right now - assuming you can live without hiring an A-list orchestra to record your next microtonal symphonic cycle - is getting near nirvana.

Especially mixing down to 5.1 surround...

More anon,
Jon

🔗Gene Ward Smith <gwsmith@...>

5/10/2004 11:08:55 PM

--- In MakeMicroMusic@yahoogroups.com, "Jonathan M. Szanto"
<JSZANTO@A...> wrote:

> Yes, it is a lot of work, and some people don't like to make music that way
> (and I'd never force them). But I've spent a fair amount of time in the
> last week doing dedicated research on some of this, for my own selfish
> purposes :) and that includes buying recordings and magazines and a lot of
> time online.

It sounds as if you have compositional plans of your own. Are you
planning also to tell us about how you go about the above in more
detail? My attempts to fix Bodacious Breed have so far sounded lame
to me.

🔗Jonathan M. Szanto <JSZANTO@...>

5/11/2004 12:07:38 AM

Gene,

{you wrote...}
>It sounds as if you have compositional plans of your own.

Yes.

>Are you planning also to tell us about how you go about the above in more
>detail?

Yes. But it would be either while or after I actually do it. Until I'm actually composing/performing/recording, it is hard to say what I'll do, much less *how* I'll do it. For the moment, let's say that what interests me is (currently) the creation of audio environments/works that are both more than, and less than, pieces of music.

I also note that while I spend a lot of my life in the acoustic realm, there are things that have long fascinated me, and given me fertile ground, in the realm of electronic and processed sound that simply don't exist in the real world. And while a lot of ground-breaking work has been done in academic electronic circles, what motivates me about a million times more are the current trends in popular electronica and related musics. Killing time between bass drum interruptions at "La Traviata", I was reading the latest issue of "Electronic Musician" and came across an article about BT (a young musician who not only was a seminal trance artist, but has crossed over to film, having recently scored "Monster").

He spoke about issues in creating music in this realm (not coincidentally talking about having ditched almost every piece of music hardware for software), and some of his ideas were so damned intriguing I had to check it out. I got his latest recording, "Emotional Technology". And that sucker is doing things with music, and with sound, that I have longed to do for years.

I haven't been blown away like this for years, but hey, it's just me. I don't know that it would hit other people this way at all, especially in our more classical audience.

>My attempts to fix Bodacious Breed have so far sounded lame to me.

I can't remember what state this is in. If, in order to achieve a given tuning, a single musical entity (i.e. melody line, counterpoint line, chordal progression) is split up onto multiple midi channels, I currently can't figure how to get that kind of stuff massaged into the kind of expression I happen to hear in pieces (re: my recent musing about "reverse engineering").

More on all this later. Time stretches before me...

Cheers,
Jon

🔗Gene Ward Smith <gwsmith@...>

5/11/2004 6:43:15 AM

--- In MakeMicroMusic@yahoogroups.com, "Jonathan M. Szanto"
<JSZANTO@A...> wrote:
> >My attempts to fix Bodacious Breed have so far sounded lame to me.
>
> I can't remember what state this is in. If, in order to achieve a given
> tuning, a single musical entity (i.e. melody line, counterpoint line,
> chordal progression) is split up onto multiple midi channels, I currently
> can't figure how to get that kind of stuff massaged into the kind of
> expression I happen to hear in pieces (re: my recent musing about "reverse
> engineering").

I haven't put up a new version, but your comments have given me some
ideas to try. One thing I would find helpful is if Scala could take a
midi file and convert it into a seq file while keeping the tuning
data; I haven't been able to convince Manuel to do this, but it seems
to me it would be obviously useful.

🔗Aaron K. Johnson <akjmicro@...>

5/11/2004 7:18:25 AM

On Monday 10 May 2004 11:35 pm, Jonathan M. Szanto wrote:

> One of the reasons multi-channel things are
> difficult, *unless* there is a way to play/program in the individual
> musical lines first (and do inflection/editing then) is that you can't put
> it all back into one musical line that you could contour. You can't edit
> controller data to inflect either the color or volume or [insert synth
> parameter here] so that you have musical control over the line. And if
> Timidity is just going to render again with Soundfonts, you don't get
> anything to alter the repeated vocal attacks of the choir (unless you can
> come up with a layered, velocity-sensitive font, and go in and edit the
> midi velocity data).
>
> But maybe it would fix the scoops.
>
> MIDI is as much a bottleneck as one allows it to be, and this is an
> instance (unless you want to go straight to recording live, acoustic
> instruments, or program Csound like Prent) where there pretty much isn't an
> alternative. If people are going to work in the electronic world (and it
> *really* pains me that [currently] acoustic musics are taking a back seat,
> if a seat at all, on MMM), then one needs to look at the entire panorama of
> options. These, naturally, include playing/programming individual lines,
> mixing them how one wants, freezing/rendering to audio, and then bringing
> that audio back into the piece as one track. You can turn midi into
> multiple-track audio recording, and have a lot of control over the mix over
> a long period of time.

I assume there are economic and practical reasons for this. Electronics makes
it a hell of a lot easier to explore a *wide* variety of tunings very
accurately and easily. Fixed-pitch metallophones, refretted guitars, etc. are
fantastic, but require $$$, plus the skills of possibly learning a new
instrument. Not to mention requiring extra space for storage, etc., which is
more $$$...

I find the physical modelling trend promising, because like you, Jon, I find
most synthetic timbres tied to MIDI very limiting (unless one can play a live
line with some expression, but even then...). Hardware physical modelling
synths are still quite 'spensive, though, so for me, I'm using RTSynth for
Linux. Its range of timbres is small, but it makes up for it by being so
potentially expressive sounding.

Yes, Csound is cool, but probably the hardest way to go, unless you have
developed a macro system, or front end, like Prent Rodgers. I found myself
frustrated by the hand-coding of Csound, and really have sworn it off, even
though I'm amazed from time to time at what Prent and others are able to do
with it. To me, it's far too slow a process, and I get really batty waiting
for the results---type, compile, listen, type, compile, listen, type,
compile, listen ----- *AAAAAAHHHHHHHHHHH* !!!!!!!!

One of the reasons I favor 12-note xentonal scales is not being stuck with
MIDI that I have to 'massage' into expressivity--I can play it myself live,
now, with my current setup, and without a new investment of cash. Of course,
I'd love a 'Continuum' or whatever, but right now my wife and I have plenty
of more pressing expenses on our plate!

BTW, one of the directions I want to go is similar to what you described,
Jon: a particular kind of ambient music not yet heard, but that I hear in my
head. It would be realized at this point by multitracking my Korg MS-2000,
hand-picking and/or morphing the best timbres of that instrument.

OTOH, interesting things can happen if you hand-code/algorithmize MIDI
microtonal works. As long as there are some volume level (attack) changes and
subtle timbres, good things *can* come of it. I coded a step sequencer that
became ten times more interesting once I coded attack level control into
each step. It makes it sound much, much more human.
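[Editor's note: the per-step attack control Aaron mentions can be sketched as below. This is an illustration of the general idea, not his actual sequencer; the event format -- (start_time, pitch, velocity) tuples -- is invented for the example.]

```python
# A minimal step sequencer where each step carries its own attack
# (velocity) level, the detail that keeps the output from sounding
# mechanical even when the timing grid is perfectly steady.

def sequence(steps, step_dur=0.25):
    """Turn (pitch, attack_level) steps into timed note events.
    A pitch of None is a rest; attack_level runs 0.0-1.0."""
    events = []
    for i, (pitch, attack) in enumerate(steps):
        if pitch is not None:
            events.append((i * step_dur, pitch, round(attack * 127)))
    return events

# Accenting the first and last sounded steps of a 4-step pattern:
pattern = [(60, 1.0), (62, 0.5), (None, 0.0), (65, 0.8)]
print(sequence(pattern))
```

Note that the grid timing stays rigid; only the attack levels vary, which matches the point below that steadiness of time is not what makes MIDI sound sterile.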

I have a feeling that MIDI sounds sterile more because of this than because
the time is so steady. In fact, unless it's at the very subtlest level, I get
very annoyed hearing an unsteady tempo (and I don't mean expressive things
like rubato--those I like--I mean a performance with 'bad time').

> Yes, it is a lot of work, and some people don't like to make music that way
> (and I'd never force them). But I've spent a fair amount of time in the
> last week doing dedicated research on some of this, for my own selfish
> purposes :) and that includes buying recordings and magazines and a lot of
> time online. There is some hellaciously cool stuff going on, and the amount
> of control a composer can have right now - assuming you can live without
> hiring an A-list orchestra to record your next microtonal symphonic cycle -
> is getting near nirvana.
>
> Especially mixing down to 5.1 surround...

Looking forward to your future contributions, Jon!

And you are right: if one has the patience to be anally retentive, one gets
the best results...

Best,
Aaron Krister Johnson
http://www.dividebypi.com
http://www.akjmusic.com

🔗Paul Erlich <perlich@...>

5/11/2004 1:40:29 PM

--- In MakeMicroMusic@yahoogroups.com, "Gene Ward Smith"
<gwsmith@s...> wrote:
> --- In MakeMicroMusic@yahoogroups.com, "Jonathan M. Szanto"
> <JSZANTO@A...> wrote:
>
> > The fact that the rendered version with 'choir' that Gene did has constant
> > re-attacks on the notes is not a fault of Gene's, but simply a limitation
> > of the font/rendering process. It would be nice for someone to mock this up
> > in another synth/sample/whatever format to get an even smoother phrasing.
>
> Because of Paul's comments, I've been pondering the question of
> putting up a Timidity alternative to some of the music which has been
> discussed and seeing what people think. Do you feel there would be
> value in that?

Why not try the test I proposed with your current setup? It can't
hurt to get to the bottom of the difficulty . . .

🔗Jonathan M. Szanto <JSZANTO@...>

5/11/2004 2:07:05 PM

Paul,

{you wrote...}
>Why not try the test I proposed with your current setup? It can't hurt to
>get to the bottom of the difficulty . . .

I think we are starting to mix two differing topics here, but Paul's experiment might shed light on at least one of them.

Cheers,
Jon

🔗Paul Erlich <perlich@...>

5/11/2004 2:33:03 PM

--- In MakeMicroMusic@yahoogroups.com, "Jonathan M. Szanto"
<JSZANTO@A...> wrote:
> Paul,
>
> {you wrote...}
> >Why not try the test I proposed with your current setup? It can't hurt to
> >get to the bottom of the difficulty . . .
>
> I think we are starting to mix two differing topics here,

I realize that, Jon, but thanks.

Gene wrote,

> Because of Paul's comments, I've been pondering the question of
> putting up a Timidity alternative to some of the music which has been
> discussed and seeing what people think. Do you feel there would be
> value in that?

so I thought it made sense to bring up what seems to me to be a more
direct response to my comments. Sorry if it seemed I was trampling
yours in the process.

🔗Jonathan M. Szanto <JSZANTO@...>

5/11/2004 3:05:26 PM

P,

{you wrote...}
>so I thought it made sense to bring up what seems to me to be a more
>direct response to my comments. Sorry if it seemed I was trampling yours
>in the process.

No problem at all. I'm sure Gene can suss out the differences as well. Anyhow, as I mentioned, the idea of reversing the pitch-bend/multi-channel paradigm is a low-priority item, at least for me.

Cheers,
Jon