
Re: detail

🔗Robert C Valentine <BVAL@IIL.INTEL.COM>

1/8/2001 11:46:58 PM

Graham said :

> > I know deep down this could work, but in the end I don't have the
> > patience to build a universe atom by atom.

This has been my exact problem with doing computer and electronic
music, despite being an enthusiast for that late fifties, early
sixties 'beep and boop' style.

The problem is that composition should be at a much 'higher'
hierarchical level than one seems to get with any of the current
interfaces. For instance, if you have a sequence of notes and want
them in a scratchy but lyrical timbre, with the volume and tempo
increasing during their production, drawing a horizontal 'v' under
them with the word accelerando and specifying 'violin molto
expressivo' says in five seconds of writing what may take a whole
day to realize 'tolerably' in MIDI or other methods. (Of course,
you DO have a 'tolerable' sound with the electronic realisation,
with only the promise of a better sound with the more composerly
version).

One could sum up some of the problem as: electronic music makes the
composer into the performer. The performance is what people hear.
Therefore, you should put AN AWFUL LOT OF EFFORT into the
performance. Do you have time to both come up with something of
worth to perform and then to perform it at the 'antlike' level of
electronic music interfaces?

Bob Valentine

🔗graham@microtonal.co.uk

1/9/2001 7:10:00 AM

This is now decidedly off-topic.

Bob Valentine wrote:

> Graham said :
>
> > > I know deep down this could work, but in the end I don't have the
> > > patience to build a universe atom by atom.
>
> This has been my exact problem with doing computer and electronic
> music, despite being an enthusiast for that late fifties, early
> sixties 'beep and boop' style.

That actually cuts out my reply, but I'll answer anyway. Late 50s, early
60s? I suppose that must be Stockhausen and the Dr Who theme. The more
mainstream 'boom and bleep' would come later.

> The problem is that composition should be at a much 'higher'
> hierarchical level than one seems to get with any of the current
> interfaces. For instance, if you have a sequence of notes and want
> them in a scratchy but lyrical timbre, with the volume and tempo
> increasing during their production, drawing a horizontal 'v' under
> them with the word accelerando and specifying 'violin molto
> expressivo' says in five seconds of writing what may take a whole
> day to realize 'tolerably' in MIDI or other methods. (Of course,
> you DO have a 'tolerable' sound with the electronic realisation,
> with only the promise of a better sound with the more composerly
> version).

In that case your interface appears to be a violinist. A violinist will
still be more intelligent than a sequencer until we crack the strong-AI
problem. Fortunately, sequencers are much cheaper, and won't leave pizza
crumbs on the carpet.

So how would your day be spent in a MIDI studio?

The volume increase could be achieved by key velocity, volume(!) or
expression controllers. Key velocity would be the most obvious, and part
of the keyboard performance. If that isn't good enough, as may be the
case, you can get the expression messages in there by using an expression
pedal while playing whatever you're playing, drawing the envelope into the
sequencer, or recording the expression as a separate performance.
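
For instance, here's a minimal sketch of the "drawing the envelope in"
option, assuming Python and the mido library (an arbitrary choice for
illustration, not what any studio mentioned above actually runs):

from mido import Message, MidiFile, MidiTrack

TICKS_PER_BEAT = 480
mid = MidiFile(ticks_per_beat=TICKS_PER_BEAT)
track = MidiTrack()
mid.tracks.append(track)

# Hold one note and ramp CC11 (expression) from quiet to full over
# four beats: a crescendo drawn in by hand rather than performed.
track.append(Message('note_on', note=60, velocity=80, time=0))
steps = 32
for i in range(steps + 1):
    value = 32 + (127 - 32) * i // steps       # linear crescendo
    delta = (4 * TICKS_PER_BEAT) // steps if i else 0
    track.append(Message('control_change', control=11,
                         value=value, time=delta))
track.append(Message('note_off', note=60, velocity=0, time=0))

mid.save('crescendo.mid')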

I don't have a MIDI tempo knob. I suppose it could be done, but the Phat
Boy only supports continuous controllers. So that'd have to be drawn in.
I expect tempo could be controlled via Kyma, and I plan to look at this
one day. It might be able to handle "swing rhythms" as explained in the
New Scientist last year. Now where was I?

Oh yes, now you have to set your MIDI "scratchy but lyrical" knob to the
right position. In reality, you'd have to spend a few minutes playing
with the overdrive and EQ settings, and it still wouldn't be right. But
that's the implementation at fault, not the interface.

Which leaves plenty of the day left to go outside and listen to the birds
in the park.

The {controller, sequencer} interface is still lower-level than the
{manuscript paper, skilled musician} one. There are conventions built
into traditional notation to cover what people have wanted to do in the
past. MIDI hasn't been around long enough for these conventions to be
enshrined in the interface. But that means you have plenty of flexibility
where you need it. Quite useful, what with sequencers not being as smart
as skilled musicians (see above).

> One could sum up some of the problem as: electronic music makes the
> composer into the performer. The performance is what people hear.
> Therefore, you should put AN AWFUL LOT OF EFFORT into the
> performance. Do you have time to both come up with something of
> worth to perform and then to perform it at the 'antlike' level of
> electronic music interfaces?

Making the composer the performer is neither an inevitable offshoot of
electronic music, nor a problem when it occurs. There are plenty of ways
the work could be divided among a team. The most obvious is {instrument
designer, composer, performer}. Very like the traditional breakdown,
except that the instrument designer would be more visible. Violinists
rarely have to collaborate with violin makers, because off-the-shelf
violins don't suck nearly as badly as MIDI presets.

Coming up with something worth performing is certainly a problem for me.
But it's also a problem for guitar pieces, where nobody quibbles with the
interface. Unfortunately, there are dumb laws that stop me buying
refugees at affordable prices to compose and perform using the virtual
instruments I design. So I've got more chance of realizing an electronic
piece than a string quartet, but it's slow progress.

So in what way do an X5D or a Phat Boy resemble an ant? These are the
interfaces I use most often for pure electronic music. Although they're
not perfect, they're more comparable to other musical instruments than
social insects. You do have to put a lot of effort into a performance.
But then the only time I ever played a violin, it made a "scratchy but not
at all lyrical" sound. Apparently it takes a great deal of effort to get
the "lyrical" bit working.

Now, an ant-like implementation, I can see the possibilities in that!
Hundreds of subtle parameters, all taking their lead from each other. You
could get a satisfyingly complex sound out. But it would take some time
to get right, and a fair bit of hardware to implement in real time. And
not at all how I understand existing synthesizers to work. Still, nice
idea.
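
Something like this toy sketch, say, with the coupling rule and the
parameter count invented purely for illustration (Python):

import random

# A few dozen coupled parameters, e.g. partial amplitudes. Each one
# drifts toward the average of its two neighbours plus a little
# noise, taking its lead from the others, not from a master envelope.
params = [random.random() for _ in range(48)]

def step(p, coupling=0.2, jitter=0.02):
    n = len(p)
    return [min(1.0, max(0.0,
            v + coupling * ((p[i - 1] + p[(i + 1) % n]) / 2 - v)
              + random.uniform(-jitter, jitter)))
            for i, v in enumerate(p)]

for _ in range(100):     # let the colony settle into something complex
    params = step(params)
print([round(v, 2) for v in params[:8]])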

Graham

🔗Robert C Valentine <BVAL@IIL.INTEL.COM>

1/10/2001 5:50:03 AM

>
> From: graham@microtonal.co.uk
> This is now decidedly off-topic.
>

Perhaps, although microtonalists are wont to turn to machine
realisations sooner than people who don't need to find accordions
(and players) cognizant of 17.3tet.

> Bob Valentine wrote:
>
> That actually cuts out my reply, but I'll answer anyway. Late 50s, early
> 60s? I suppose that must be Stockhausen and the Dr Who theme. The more
> mainstream 'boom and bleep' would come later.

And Otto Luening and Babbitt and the whole Columbia thing... I guess
Morton Subotnick came closest to finding the commercial potential.

< snip my description of the simplicity of writing an impassioned passage
for violin which, WHEN REALISED by a competent player, will have a degree
of complexity in timing, dynamics and timbre that would take a REALLY
LONG TIME to realize in midi/csound and would likely not result in as
'musical' a rendering as that of the competent violinist. >

> Graham :
>
> In that case your interface appears to be a violinist. A violinist will
> still be more intelligent than a sequencer until we crack the strong-AI
> problem. Fortunately, sequencers are much cheaper, and won't leave pizza
> crumbs on the carpet.

Yes, there is a cost and reward that have to be balanced here. I can hear
my music, rendered as well as my software/soundcard allow, given that I
am willing to spend a large amount of time supervising that rendering
(massaging each note, which I thought you were referring to as building
a universe atom by atom and which I look at as an ant-like labor). Or
I can pay a violinist to read through the thing and tape it.

If I was just doing a trad sort of thing, I know which way I'd go. But
if having the violinist "read through" means explaining a tuning system
as well, then the computer realisation is probably more reliable.

>
> So how would your day be spent in a MIDI studio?
>
> The volume increase could be achieved by key velocity, volume(!) or
> expression controllers. Key velocity would be the most obvious, and part
> of the keyboard performance. If that isn't good enough, as may be the
> case, you can get the expression messages in there by using an expression
> pedal while playing whatever you're playing, drawing the envelope into the
> sequencer, or recording the expression as a separate performance.
>

All true, and if I am not a competent pianist then I will spend 20~2000x
the time drawing these envelopes (or massaging each note) compared to
drawing the 'two lines and three words' on the handwritten score.

> I don't have a MIDI tempo knob. I suppose it could be done, but the Phat
> Boy only supports continuous controllers. So that'd have to be drawn in.
> I expect tempo could be controlled via Kyma, and I plan to look at this
> one day. It might be able to handle "swing rhythms" as explained in the
> New Scientist last year. Now where was I?
>

"tempo maps" are a pretty old concept and should be able to be done in
a sequencer.
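
For instance, a sketch of a tempo map written straight into a standard
MIDI file, assuming Python and the mido library (an arbitrary choice):
an accelerando from 60 to 120 BPM, one set_tempo meta message per beat.

import mido
from mido import MetaMessage, MidiFile, MidiTrack

TICKS_PER_BEAT = 480
mid = MidiFile(ticks_per_beat=TICKS_PER_BEAT)
track = MidiTrack()
mid.tracks.append(track)

beats = 8
for i in range(beats):
    bpm = 60 + (120 - 60) * i / (beats - 1)    # ramp 60 -> 120 BPM
    track.append(MetaMessage('set_tempo',
                             tempo=mido.bpm2tempo(bpm),
                             time=TICKS_PER_BEAT if i else 0))

mid.save('accelerando.mid')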

> Oh yes, now you have to set your MIDI "scratchy but lyrical" knob to the
> right position. In reality, you'd have to spend a few minutes playing
> with the overdrive and EQ settings, and it still wouldn't be right. But
> that's the implementation at fault, not the interface.
>

Actually, the interface in common music notation is a mess here. If you want
'lyrical' you write 'lyrical' and hope the player does what you want. If
you want a "wheezy lyrical", you write "arco col legno". Guess what. That
doesn't work on trombone! This is a place where a synthesizer / computer
music language should push things UP the hierarchy, so that specifying
"more wheeze" would have an intelligent interpretation no matter what
'instrument' it was asked of.

You might say "turn the wheeze knob". Unfortunately, this really means
'find a synthesis system with a wheeze knob, figure out a sysex to assign
it to a controllable parameter, insert that into the file, enter the
wheeze amounts with the mouse and see how they fit'.

All the way to the other end of the hierarchy.
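
What the "UP the hierarchy" version might look like, as a sketch in
Python: one abstract knob, with a per-instrument table deciding which
low-level controls it drives. The CC numbers and mappings below are
invented for illustration; a real rig would need each synth's actual
assignments.

from mido import Message

def wheeze_violin(amount, channel=0):
    # Hypothetical: wheeze maps to a bow-noise control (CC 71 here,
    # arbitrarily) plus slightly reduced expression (CC 11).
    return [Message('control_change', channel=channel, control=71,
                    value=int(127 * amount)),
            Message('control_change', channel=channel, control=11,
                    value=int(127 * (1 - 0.3 * amount)))]

def wheeze_trombone(amount, channel=1):
    # A trombone has no col legno, so wheeze becomes breath noise
    # (CC 2) instead. Same request, different interpretation.
    return [Message('control_change', channel=channel, control=2,
                    value=int(127 * amount))]

WHEEZE = {'violin': wheeze_violin, 'trombone': wheeze_trombone}

def more_wheeze(instrument, amount):
    # One high-level request, interpreted per instrument.
    return WHEEZE[instrument](amount)

print(more_wheeze('trombone', 0.5))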

> Which leaves plenty of the day left to go outside and listen to the birds
> in the park.

But that's where I want to do my composing in the first place.

Oh well... I don't mean to sound like I'm arguing about anything. I
certainly don't know how to improve it all, other than write my
own programs to do things my way. I've done this in the past, but
this too forces a period of atom assmbling "I want to write music
close to CMN, I also want very free microtonality, how should the
program intelligently interpret C#4 if the tuning is 88cet?". So,
now I'm in play-with-people mode. Oh well, next years HW/SW will
change everything, I'm sure.
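
One arbitrary answer to the C#4-in-88cet question, sketched in Python:
read note names as successive steps of an equal-cents tuning anchored
at C4. The anchor pitch and the mapping itself are assumptions here,
not a standard.

NAMES = {'C': 0, 'C#': 1, 'D': 2, 'D#': 3, 'E': 4, 'F': 5,
         'F#': 6, 'G': 7, 'G#': 8, 'A': 9, 'A#': 10, 'B': 11}

def freq(name, octave, step_cents=88.0, c4_hz=261.63):
    # Frequency of a named note, reading names as tuning steps from C4.
    steps = NAMES[name] + 12 * (octave - 4)
    return c4_hz * 2 ** (steps * step_cents / 1200)

print(round(freq('C#', 4), 2))   # one 88-cent step above middle C
print(round(freq('C', 5), 2))    # 12 steps = 1056 cents: not an octave!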

[Oh, I'll probably be starting a new program using some of your
code, so thanks for putting it out there.]

Bob Valentine