odeion1-003

🔗Peter Frazer <paf@easynet.co.uk>

12/9/2003 1:05:06 PM

Hello Peter Wakefield Sault,

I listened to Odeion1-003 and I was impressed by what I heard,
particularly for 20 years ago! I have recently experimented with
the use of genetic algorithms for musical composition and had
rather less success than that.

It is possible that you may be interested in my software synthesizer
at www.midicode.com. This has a dynamic re-tuning facility which
makes it possible to modulate just intonation (or other tunings) to
a new key by supplying a new key note either on-screen or from a
midi channel assigned to that purpose.

Paul Erlich and others have pointed out that a problem with this
approach is that it leads to a shift of absolute pitch with each
modulation which is difficult to reverse.

I liked the diagram Figure 1-4. Natural Intervals and Regular Polygons
in your Keys of Atlantis which I hope to read more thoroughly when
I have time. This is a thing which I noticed about 20 years ago
and briefly allude to in the introduction to my essay on historical
tunings at www.midicode.com/tunings.

Peter Frazer,
www.midicode.com

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/9/2003 2:34:23 PM

--- In tuning@yahoogroups.com, Peter Frazer <paf@e...> wrote:
> Hello Peter Wakefield Sault,
>
> I listened to Odeion1-003 and I was impressed by what I heard,
> particularly for 20 years ago! I have recently experimented with
> the use of genetic algorithms for musical composition and had
> rather less success than that.
>
> It is possible that you may be interested in my software synthesizer
> at www.midicode.com This has a dynamic re-tuning facility which
> makes it possible to modulate just intonation (or other tunings) to
> a new key by supplying a new key note either on-screen or from a
> midi channel assigned to that purpose.
>
> Paul Erlich and others have pointed out that a problem with this
> approach is that it leads to a shift of absolute pitch with each
> modulation which is difficult to reverse.
>
> I liked the diagram Figure 1-4. Natural Intervals and Regular Polygons
> in your Keys of Atlantis which I hope to read more thoroughly when
> I have time. This is a thing which I noticed about 20 years ago
> and briefly allude to in the introduction to my essay on historical
> tunings at www.midicode.com/tunings.
>
> Peter Frazer,
> www.midicode.com

Well now that looks very pretty. Lovely bit of design. Thing is -
what I need to be able to do is
(a) to stream data from the generator to the synth (not necessarily
as a MIDI datastream - but as whatever is most appropriate to the
task).
(b) to be able to specify the definitive set of vibration ratios
(i.e. to only be able to select a 'pythagorean' preset is no good).
(c) to retune relative to the bridge note, *not* relative to the new
tonic.

Now will your program do all three of those?

Why is a shift of absolute pitch with each modulation a problem? I
don't see it as a problem.

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/9/2003 2:57:01 PM

--- In tuning@yahoogroups.com, "Peter Wakefield Sault" <sault@c...>
wrote:
> --- In tuning@yahoogroups.com, Peter Frazer <paf@e...> wrote:
> > Hello Peter Wakefield Sault,
> >
> > I listened to Odeion1-003 and I was impressed by what I heard,
> > particularly for 20 years ago! I have recently experimented with
> > the use of genetic algorithms for musical composition and had
> > rather less success than that.
> >
> > It is possible that you may be interested in my software synthesizer
> > at www.midicode.com This has a dynamic re-tuning facility which
> > makes it possible to modulate just intonation (or other tunings) to
> > a new key by supplying a new key note either on-screen or from a
> > midi channel assigned to that purpose.
> >
> > Paul Erlich and others have pointed out that a problem with this
> > approach is that it leads to a shift of absolute pitch with each
> > modulation which is difficult to reverse.
> >
> > I liked the diagram Figure 1-4. Natural Intervals and Regular Polygons
> > in your Keys of Atlantis which I hope to read more thoroughly when
> > I have time. This is a thing which I noticed about 20 years ago
> > and briefly allude to in the introduction to my essay on historical
> > tunings at www.midicode.com/tunings.
> >
> > Peter Frazer,
> > www.midicode.com
>
> Well now that looks very pretty. Lovely bit of design. Thing is -
> what I need to be able to do is
> (a) to stream data from the generator to the synth (not necessarily
> as a MIDI datastream - but as whatever is most appropriate to the
> task).
> (b) to be able to specify the definitive set of vibration ratios
> (i.e. to only be able to select a 'pythagorean' preset is no good).
> (c) to retune relative to the bridge note, *not* relative to the new
> tonic.
>
> Now will your program do all three of those?
>
> Why is a shift of absolute pitch with each modulation a problem? I
> don't see it as a problem.

Alternatively, the generator could be configured as a plugin, could
it not...

That looks like a treasure-trove of tuneriana under 'tunings'. I'm
going to read that right now.

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/9/2003 3:17:07 PM

--- In tuning@yahoogroups.com, Peter Frazer <paf@e...> wrote:
> Hello Peter Wakefield Sault,
>
> I listened to Odeion1-003 and I was impressed by what I heard,
> particularly for 20 years ago! I have recently experimented with
> the use of genetic algorithms for musical composition and had
> rather less success than that.
>
> It is possible that you may be interested in my software synthesizer
> at www.midicode.com This has a dynamic re-tuning facility which
> makes it possible to modulate just intonation (or other tunings) to
> a new key by supplying a new key note either on-screen or from a
> midi channel assigned to that purpose.
>
> Paul Erlich and others have pointed out that a problem with this
> approach is that it leads to a shift of absolute pitch with each
> modulation which is difficult to reverse.
>
> I liked the diagram Figure 1-4. Natural Intervals and Regular Polygons
> in your Keys of Atlantis which I hope to read more thoroughly when
> I have time. This is a thing which I noticed about 20 years ago
> and briefly allude to in the introduction to my essay on historical
> tunings at www.midicode.com/tunings.
>
> Peter Frazer,
> www.midicode.com

Peter

How hard could it be for you to provide a switch to make your synth
vibration number addressable? That's what I'm really after. It is the
most rational, not to say simple, solution to a lot of problems.

Peter

🔗Peter Frazer <paf@easynet.co.uk>

12/11/2003 3:35:26 PM

In tuning digest 2853 Peter Sault wrote ...

>Well now that looks very pretty. Lovely bit of design. Thing is -
>what I need to be able to do is
>(a) to stream data from the generator to the synth (not necessarily
>as a MIDI datastream - but as whatever is most appropriate to the
>task).

Hi Peter,

Midicode Synthesizer will only accept a midi data stream - this can
be from your software if it acts as a midi output device.
I do not have time to customize it for a specific application,
however, I am happy to consider general features to incorporate
when I find time to write a new version.

>(b) to be able to specify the definitive set of vibration ratios
>(i.e. to only be able to select a 'pythagorean' preset is no good).

I am not yet sure what you mean by that as I have only partially
read your web site and skimmed your other posts. Please bear
with me. You can create whatever tunings you want, including
non-dodecaphonic ones.

>(c) to retune relative to the bridge note, *not* relative to the new
>tonic.

If you know which key you are going to you should be able to
determine the new key note from the bridge note.

>Now will your program do all three of those?

>Why is a shift of absolute pitch with each modulation a problem? I
>don't see it as a problem.

It is a problem if you want to modulate round a series of keys and
end up back in the original key at the original pitch. If your music
has no definite end then it is not a problem.

>Alternatively, the generator could be configured as a plugin, could
>it not...

I have no plans to configure it as a plug-in at the moment but I will
bear this possibility in mind. Plug-in to what? VST? DLL?

>That looks like a treasure-trove of tuneriana under 'tunings'. I'm
>going to read that right now.

Enjoy!

>How hard could it be for you to provide a switch to make your synth
>vibration number addressable? That's what I'm really after. It is the
>most rational, not to say simple, solution to a lot of problems.

As before, excuse me if I have some catching up to do. What is
vibration number addressable?

Peter Frazer,
www.midicode.com

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/11/2003 8:02:48 PM

[SNIP]

> As before, excuse me if I have some catching up to do. What is
> vibration number addressable?
>
> Peter Frazer,
> www.midicode.com

MIDI provides absolute pitch number addressing, where pitch number 0
= C0. The synthesizer is expected to translate the pitch number into
a vibration number (i.e. a specific frequency in Hz).

The concept of vibration number addressing is that the synthesizer
has only to respond with a note of the requested vibration number.
Vibration numbers are reals, unlike pitch numbers which are naturals.
So, I ask the synth to give me 123.4567Hz and that is what I should
get. A task which MIDI is not up to, so far as I know, since it does
not encode reals.
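The translation Peter describes can be sketched in a few lines. This is a hypothetical illustration (the function name is invented, not anyone's actual code), and it assumes the conventional 12-tone equal temperament mapping with A4 = MIDI note 69 = 440 Hz:

```python
def midi_note_to_hz(note: int, a4_hz: float = 440.0) -> float:
    """Translate an integer MIDI pitch number into a vibration
    number (a frequency in Hz), assuming 12-tone equal temperament
    with A4 = MIDI note 69."""
    return a4_hz * 2.0 ** ((note - 69) / 12.0)

print(midi_note_to_hz(69))            # 440.0
print(round(midi_note_to_hz(60), 2))  # middle C, 261.63
```

Pitch numbers can only select from a fixed grid of frequencies; vibration-number addressing would instead pass the real-valued frequency itself.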
---------------------------------------------
On another note, I had to devise my own pitch numbering file system
for ODEION because of the inadequacies of MIDI. Since I wish to know,
for each and every note, its particular key I store pitch numbers as
a combination of octave number (0-11), root pitch (0-11, where 0 = C)
and offset from that root (0-11). Thus octave 4, root 5 and offset 3
==> absolute pitch 56 (i.e. Ab4). This is far more information than
is available with MIDI.
---------------------------------------------
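The octave/root/offset arithmetic above can be checked with a short sketch. This is a hypothetical reconstruction of the calculation only, not ODEION's actual file format:

```python
def absolute_pitch(octave: int, root: int, offset: int) -> int:
    """Absolute pitch number from the octave/root/offset triple:
    12 semitones per octave, root 0 = C, offset counted in
    semitones above the root."""
    return 12 * octave + root + offset

# Octave 4, root 5 (F) and offset 3 give absolute pitch 56, i.e. Ab4,
# while still recording that the note functions as three semitones
# above an F root - information a bare MIDI note number discards.
print(absolute_pitch(4, 5, 3))  # 56
```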
As for ending up a dynamically retuned piece in exactly the same key
at the original root vibration number - well if we all set
unfulfillable conditions for our music then we would not make very
much of it, would we? The only question here is which imperfection
one is prepared to accept for a particular piece, ET mini-wolves or
JI pitch drift.
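The JI pitch drift mentioned here can be made concrete with the classic syntonic-comma pump (a standard textbook illustration, not an example from this thread): follow the roots of a I-vi-ii-V-I progression by just intervals and you return to the tonic a comma flat.

```python
from fractions import Fraction

# Root motion of I-vi-ii-V-I with every interval just:
# down a minor third (5/6), up a fourth (4/3),
# up a fourth (4/3), down a fifth (2/3).
steps = [Fraction(5, 6), Fraction(4, 3), Fraction(4, 3), Fraction(2, 3)]

drift = Fraction(1)
for step in steps:
    drift *= step

print(drift)  # 80/81: the final "tonic" is a syntonic comma flat
```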

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/11/2003 8:07:34 PM

> >Alternatively, the generator could be configured as a plugin, could
> >it not...
>
> I have no plans to configure it as a plug-in at the moment but I will
> bear this possibility in mind. Plug-in to what? VST? DLL?
>

By "generator" I meant ODEION - and by extension any other
algorithmic composer. I tend to think of composers as being real
people, so I have always referred to ODEION as a 'generator'.

Configured as a plugin to your synth. (BTW - does it have a short,
catchy name that is easy to remember and distinguishes it, in the way
that Reaktor or FruityLoops (silly name, but memorable!) does?)

Peter.

🔗Carl Lumma <ekin@lumma.org>

12/12/2003 12:37:54 AM

>On another note, I had to devise my own pitch numbering file system
>for ODEION because of the inadequacies of MIDI. Since I wish to know,
>for each and every note, its particular key I store pitch numbers as
>a combination of octave number (0-11), root pitch (0-11, where 0 = C)
>and offset from that root (0-11). Thus octave 4, root 5 and offset 3
>==> absolute pitch 56 (i.e. Ab4). This is far more information than
>is available with MIDI.

Thanks for explaining. This does sound convenient. I'm no expert
on MIDI, but it does have a Key message, which could probably be
sent before every note-on. Not as elegant, but it might provide
root information, and the MIDI note number would then provide offset.
Finally, the MIDI Tuning Standard (abbreviated MTS around here)
provides for, is it 2 bytes, of exact tuning data, which can be
interpreted by the synth any way you like.

>As for ending up a dynamically retuned piece in exactly the same key
>at the original root vibration number - well if we all set
>unfulfillable conditions for our music then we would not make very
>much of it, would we? The only question here is which imperfection
>one is prepared to accept for a particular piece, ET mini-wolves or
>JI pitch drift.

All very true and well put, except there is a third sort of option,
around here known as "adaptive tuning". It makes mini-shifts by
tempering *melodic* intervals (such as between roots). Paul Erlich
pointed out that by rooting to meantone in the proper key, the
root could return to concert pitch for most "diatonic" music,
while the shifts between adjacent common tones are kept at a minimum.
If you don't know the key in advance you can root to ET. And if
you know all the notes in the piece in advance you can root to
COFT (Calculated Optimum Fixed Temperament), per John deLaubenfels'
idea. You might refer to his web page, adaptune.com, and enjoy
many midi samples there.

-Carl

🔗Peter Frazer <paf@easynet.co.uk>

12/12/2003 5:30:56 AM

In tuning digest 2861 Peter Sault wrote ...

[>SNIP]

>> As before, excuse me if I have some catching up to do. What is
>> vibration number addressable?
>>
>> Peter Frazer,
>> www.midicode.com

>MIDI provides absolute pitch number addressing, where pitch number 0
>= C0. The synthesizer is expected to translate the pitch number into
>a vibration number (i.e. a specific frequency in Hz).

>The concept of vibration number addressing is that the synthesizer
>has only to respond with a note of the requested vibration number.
>Vibration numbers are reals, unlike pitch numbers which are naturals.
>So, I ask the synth to give me 123.4567Hz and that is what I should
>get. A task which MIDI is not up to, so far as I know, since it does
>not encode reals.

You are right. A protocol which allowed frequency to be specified as
a real number rather than note number would greatly facilitate microtonal
sequencing and other software.

Monz, are you reading this thread? I know you are working on a
microtonal sequencer.

Of course, a real number in floating point is typically
8 bytes as opposed to 1 for note number so there would be much more
data to send. But then midi is now a very old standard so speed is
probably not an issue. What may be an issue is that midi short messages
are small enough to fit in a Windows message structure whereas something
containing a real would not. This may have implications for the fast
transfer of data from one application to another.
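The size comparison is easy to verify: a MIDI short message is three bytes (a status byte plus two data bytes), small enough to pack into a 32-bit Windows message parameter, while an IEEE-754 double is eight bytes. The note-on values below are arbitrary examples:

```python
import struct

# A MIDI note-on short message: status byte, note number, velocity.
note_on = struct.pack('<BBB', 0x90, 60, 100)

# A frequency expressed as an IEEE-754 double-precision real.
freq = struct.pack('<d', 123.4567)

print(len(note_on), len(freq))  # 3 8
```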

>> >Alternatively, the generator could be configured as a plugin, could
>> >it not...
>
>> I have no plans to configure it as a plug-in at the moment but I will
>> bear this possibility in mind. Plug-in to what? VST? DLL?
>

>By "generator" I meant ODEION - and by extension any other
>algorithmic composer. I tend to think of composers as being real
>people, so I have always referred to ODEION as a 'generator'.

Good distinction (but I thought you meant sound generator).

>Configured as a plugin to your synth. (BTW - does it have a short
>catchy name that is easy to remember that distinguishes, in the way
>that Reaktor or FruityLoops (silly name! but memorable) does?

At the moment there is no way to do that; it just takes midi
messages from a midi source via Windows.

Midicode Synth is the best I have as a name for now.
Any suggestions? :-)

Peter Frazer.
www.midicode.com

🔗Peter Frazer <paf@easynet.co.uk>

12/12/2003 4:53:38 AM

In tuning digest 2862 Carl replied

>>On another note, I had to devise my own pitch numbering file system
>>for ODEION because of the inadequacies of MIDI. Since I wish to know,
>>for each and every note, its particular key I store pitch numbers as
>>a combination of octave number (0-11), root pitch (0-11, where 0 = C)
>>and offset from that root (0-11). Thus octave 4, root 5 and offset 3
>>==> absolute pitch 56 (i.e. Ab4). This is far more information than
>>is available with MIDI.

>Thanks for explaining. This does sound convenient. I'm no expert
>on MIDI, but it does have a Key message, which could probably be
>sent before every note-on. Not as elegant, but it might provide
>root information, and the MIDI note number would then provide offset.
>Finally, the MIDI Tuning Standard (abbreviated MTS around here)
>provides for, is it 2 bytes, of exact tuning data, which can be
>interpreted by the synth any way you like.

Good point Carl. I'm not sure if 2 bytes is really enough to provide
exact tuning; I would have to think about that some more.

>>As for ending up a dynamically retuned piece in exactly the same key
>>at the original root vibration number - well if we all set
>>unfulfillable conditions for our music then we would not make very
>>much of it, would we? The only question here is which imperfection
>>one is prepared to accept for a particular piece, ET mini-wolves or
>>JI pitch drift.

>All very true and well put, except there is a third sort of option,
>around here known as "adaptive tuning". It makes mini-shifts by
>tempering *melodic* intervals (such as between roots). Paul Erlich
>pointed out that by rooting to meantone in the proper key, the
>root could return to concert pitch for most ""diatonic"" music,
>while the shifts between adjacent common tones is kept at a minimum.
>If you don't know the key in advance you can root to ET. And if
>you know all the notes in the piece in advance you can root to
>COFT (Calculated Optimum Fixed Temperament), per John deLaubenfels'
>idea. You might refer to his web page, adaptune.com, and enjoy
>many midi samples there.

I agree that John deLaubenfels' adaptive tuning (and the more recent
Hermode tuning) are superior in many respects to my "dynamic
re-tuning" as implemented in Midicode Synthesizer. I was working
in isolation at the time and it seemed to me an obvious step forward
to use the capabilities of computers to shift Just Intonation into a new
key at the apposite time. I still believe that this type of re-tuning is
appropriate in some instances (like Peter Sault's algorithmic composition)
and I hope that I have made a contribution here.

BTW Midicode Synth was built around a microtonal tuning system to
which I then added the sound synthesizer. I accept that there are
some limitations to the sound quality and hope to develop a new
synthesis engine when I get the chance. If more people bought my
software then I could spend more time working on it. :-)

Peter Frazer.
www.midicode.com

🔗Manuel Op de Coul <manuel.op.de.coul@eon-benelux.com>

12/12/2003 6:17:56 AM

Peter Frazer wrote:
>You are right. A protocol which allowed frequency to be specified as
>a real number rather than note number would greatly facilitate microtonal
>sequencing and other software.

Frequency numbers are not the best way of specifying a tuning via midi,
because the resolution in logarithmic terms (the way people perceive pitch)
varies with the frequency.
That's why the Midi Tuning Standard and all other protocols for hardware
synths that I know use logarithmic numbers.
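Manuel's point can be quantified: a fixed step in Hz corresponds to an ever-smaller musical interval as frequency rises, so uniform frequency steps waste resolution at the top of the range and lack it at the bottom. A small sketch (the helper name is invented for illustration):

```python
import math

def hz_step_in_cents(f_hz: float, step_hz: float = 1.0) -> float:
    """Size in cents of a fixed step_hz increment taken at f_hz."""
    return 1200.0 * math.log2((f_hz + step_hz) / f_hz)

# The same 1 Hz step is a tenth of a semitone wide at 100 Hz
# but only a fiftieth of a semitone at 1000 Hz.
print(round(hz_step_in_cents(100.0), 2))   # 17.23 cents
print(round(hz_step_in_cents(1000.0), 2))  # 1.73 cents
```

A logarithmic unit such as cents gives the same perceptual resolution everywhere, which is why MTS encodes fractions of a semitone rather than Hz.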

>Good point Carl. I'm not sure if 2 bytes is really enough to provide
>exact tuning, I would have to think about that some more.

There are different parts in the Midi Tuning Standard but the most used
bulk dump is 3 bytes.
So Peter, isn't it a good time to implement it in Midicode as I have
suggested to you before?
Best,

Manuel

🔗Carl Lumma <ekin@lumma.org>

12/12/2003 7:33:27 AM

>I agree that John deLaubenfels adaptive tuning ( and the more recent
>Hermode tuning ) are superior in many respects to my "dynamic
>re-tuning" as implemented in Midicode Synthesizer. I was working
>in isolation at the time and it seemed to me an obvious step forward
>to use the capabilities of computers to shift Just Intonation into a
>new key at the apposite time. I still believe that this type of re-
>tuning is appropriate in some instances (like Peter Saults algorithmic
>composition) and I hope that I have made a contribution here.

As I understand it your "dynamic retuning" allows the composer to
specify roots on a dedicated MIDI channel. This is in a totally
different league than automatic retuning, in my book. The additional
choice may take a lifetime to master for different scales, but it
also brings a world of compositional opportunities that automatic
retuning cannot. Apples and oranges.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

12/12/2003 10:35:34 AM

--- In tuning@yahoogroups.com, Peter Frazer <paf@e...> wrote:
> In tuning digest 2862 Carl replied

> >Finally, the MIDI Tuning Standard (abbreviated MTS around here)
> >provides for, is it 2 bytes, of exact tuning data, which can be
> >interpreted by the synth any way you like.
>
> Good point Carl. I'm not sure if 2 bytes is really enough to provide
> exact tuning, I would have to think about that some more.

MTS uses *three* digits in base 128, not two. Each digit in base 128
is equivalent to 2.1072 digits in base 10, so we are talking six-
digit floating point numbers. That should suffice.
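Gene's figures check out directly: each base-128 (7-bit) digit carries log10(128) ≈ 2.1072 decimal digits of information, and three of them span 2^21 distinct values.

```python
import math

digits_per_b128 = math.log10(128)     # decimal digits per 7-bit digit
print(round(digits_per_b128, 4))      # 2.1072
print(2 ** 21)                        # 2097152 values in 3 digits
print(round(3 * digits_per_b128, 2))  # 6.32 decimal digits in all
```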

🔗Gene Ward Smith <gwsmith@svpal.org>

12/12/2003 10:30:14 AM

--- In tuning@yahoogroups.com, Peter Frazer <paf@e...> wrote:

> You are right. A protocol which allowed frequency to be specified as
> a real number rather than note number would greatly facilitate microtonal
> sequencing and other software.

That is, more or less, what the midi tuning standard does. It
describes the note in terms of three digits in base 128.

🔗Peter Frazer <paf@easynet.co.uk>

12/12/2003 2:01:50 PM

In tuning Digest 2863 Manuel wrote ...

>Frequency numbers are not the best way of specifying a tuning via midi,
>because the resolution in logarithmic terms (the way people perceive pitch)
>varies with the frequency.
>That's why the Midi Tuning Standard and all other protocols for hardware
>synths that I know use logarithmic numbers.

Good point.

>There are different parts in the Midi Tuning Standard but the most used
>bulk dump is 3 bytes.
>So Peter, isn't it a good time to implement it in Midicode as I have
>suggested
>to you before?
>Best,

>Manuel

Yes indeed Manuel, but I shall probably concentrate on trying to
improve the sound quality first.

Peter.
www.midicode.com

🔗Peter Frazer <paf@easynet.co.uk>

12/12/2003 3:08:26 PM

In tuning digest 2863 Carl wrote

>>I agree that John deLaubenfels adaptive tuning ( and the more recent
>>Hermode tuning ) are superior in many respects to my "dynamic
>>re-tuning" as implemented in Midicode Synthesizer. I was working
>>in isolation at the time and it seemed to me an obvious step forward
>>to use the capabilities of computers to shift Just Intonation into a
>>new key at the apposite time. I still believe that this type of re-
>>tuning is appropriate in some instances (like Peter Saults algorithmic
>>composition) and I hope that I have made a contribution here.

>As I understand it your "dynamic retuning" allows the composer to
>specify roots on a dedicated MIDI channel. This is in a totally
>different league than automatic retuning, in my book. The additional
>choice may take a lifetime to master for different scales, but it
>also brings a world of compositional opportunities that automatic
>retuning cannot. Apples and oranges.

>-Carl

Thanks Carl.

The basic idea of dynamic re-tuning is that you could have a midi
pedal board on the re-tuning channel and hit the new key note
typically during a modulation pivot chord. I wouldn't have thought
that was too difficult to master.

(Also works from sequencer)

Peter.
www.midicode.com

🔗Peter Frazer <paf@easynet.co.uk>

12/12/2003 3:26:34 PM

In tuning digest 2863 Gene wrote ...

>MTS uses *three* digits in base 128, not two. Each digit in base 128
>is equivalent to 2.1072 digits in base 10, so we are talking six-
>digit floating point numbers. That should suffice.

Hi Gene,

I'm not quite sure what you mean - could you point me
to a reference?

By base 128 I assume you mean 7 bit numbers so 21 bits,
2 million or so combinations, but why 2.1072 digits?

I agree that it should suffice.

In tuning digest 2864 Gene wrote ...

>> You are right. A protocol which allowed frequency to be specified as
>> a real number rather than note number would greatly facilitate microtonal
>> sequencing and other software.

>That is, more or less, what the midi tuning standard does. It
>describes the note in terms of three digits in base 128.

But doesn't the midi tuning standard simply enable an entire tuning
table to be downloaded?

The original idea that Peter put forward was to specify the individual
frequency of each note rather than as a note number, i.e. in the note-on
message (or equivalent).

Peter
www.midicode.com

🔗Carl Lumma <ekin@lumma.org>

12/12/2003 3:44:36 PM

>The basic idea of dynamic re-tuning is that you could have a midi
>pedal board on the re-tuning channel and hit the new key note
>typically during a modulation pivot chord. I wouldn't have thought
>that was too difficult to master.
>
>(Also works from sequencer)
>
>Peter.
>www.midicode.com

YMMV. -Carl

🔗Carl Lumma <ekin@lumma.org>

12/12/2003 3:49:11 PM

>But doesn't the midi tuning standard simply enable an entire tuning
>table to be downloaded?
>
>The original idea that Peter put forward was to specify the individual
>frequency of each note rather than as a note number, i.e. in the note-on
>message (or equivalent).

There are several different types of messages defined by the spec.

http://www.midi.org/about-midi/tuning.shtml

http://www.midi.org/about-midi/tuning_extens.shtml

Note esp. the "single note" messages on the second link.

-Carl

🔗kraig grady <kraiggrady@anaphoria.com>

12/12/2003 5:12:25 PM

Hello Peter!
You might be interested in the work of Boomsliter and Creel on extended reference, which you can access
here: http://www.anaphoria.com/BC1.PDF
and
http://www.anaphoria.com/BC2A.PDF
http://www.anaphoria.com/BC2B.PDF
http://www.anaphoria.com/BC2C.PDF

>
> From: Peter Frazer <paf@easynet.co.uk>
> Subject: Re: Re: odeion1-003
>
>
> I agree that John deLaubenfels adaptive tuning ( and the more recent
> Hermode tuning ) are superior in many respects to my "dynamic
> re-tuning" as implemented in Midicode Synthesizer. I was working
> in isolation at the time and it seemed to me an obvious step forward
> to use the capabilities of computers to shift Just Intonation into a new
> key at the apposite time. I still believe that this type of re-tuning is
> appropriate in some instances (like Peter Saults algorithmic composition)
> and I hope that I have made a contribution here.
>
>
> Peter Frazer.
> www.midicode.com
>
>

-- -Kraig Grady
North American Embassy of Anaphoria Island
http://www.anaphoria.com
The Wandering Medicine Show
KXLU 88.9 FM WED 8-9PM PST

🔗Robert Walker <robertwalker@ntlworld.com>

12/12/2003 6:56:46 PM

Hi Peter,

> The basic idea of dynamic re-tuning is that you could have a midi
> pedal board on the re-tuning channel and hit the new key note
> typically during a modulation pivot chord. I wouldn't have thought
> that was too difficult to master.

> (Also works from sequencer)

I've been working on this very idea too in FTS, independently of
you. My approach was based on Carl Lumma's "xenharmonic moving windows"
specification for a GUI to do tonic shifts which he posted
to MakeMicroMusic (I think it was), maybe a year or so ago -
not that I followed it exactly but used ideas from it.
Then I added various ideas suggested by users of FTS - it does get
used, though I think only rather occasionally so far.

It really comes into its own for sequencer use as you
just need to add a silent retuning channel - you have to make sure
the tuning-changing notes in that channel happen just before the chords
that they are affecting.

Interesting to compare notes. My one is in
FTS | View | Midi Keyboard Retuning | Tonic Shifts
with various options there.

Main ones relevant here are M.W. scale + Tonic shifts,
and + Tonic Drift.

If you click on the button then it says what it does
before you click OK. By M.W. scale I mean the
Main Window scale - by the main window I mean
the one you see when you start the program and
that you close to exit it (called different things
depending on the view you are using)

The tonic drift option there lets each tonic-shifted
scale dovetail to the next at the tonic for the new
scale so that you get tonic drift. The other one
has tonic shifts - well both do but the dovetail
one is useful for melodic lines that pivot at the
tonic.

What happens then is that you have a part that you
use to do the tonic shifts, and you can either map that to a region of
the music keyboard, or to one of the input channels
or the controller etc. You then have fifteen
other polyphonic parts which can be tuned
individually to any scale you like and in this
case they are tuned to tonic-shifted copies of the main window scale,
so the tonic shifting then just chooses which of those
parts to play.

In fact you can hear a comparison of my tunings using
this tonic shifting and JdLs tunings of the same
piece on-line at
http://www.tunesmithy.netfirms.com/tunes/tunes.htm#7_limit_adaptive_puzzle

Robert

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/12/2003 7:23:43 PM

--- In tuning@yahoogroups.com, Peter Frazer <paf@e...> wrote:
> In tuning digest 2861 Peter Sault wrote ...
>
> [>SNIP]
>
> >> As before, excuse me if I have some catching up to do. What is
> >> vibration number addressable?
> >>
> >> Peter Frazer,
> >> www.midicode.com
>
> >MIDI provides absolute pitch number addressing, where pitch number 0
> >= C0. The synthesizer is expected to translate the pitch number into
> >a vibration number (i.e. a specific frequency in Hz).
>
> >The concept of vibration number addressing is that the synthesizer
> >has only to respond with a note of the requested vibration number.
> >Vibration numbers are reals, unlike pitch numbers which are naturals.
> >So, I ask the synth to give me 123.4567Hz and that is what I should
> >get. A task which MIDI is not up to, so far as I know, since it does
> >not encode reals.
>
> You are right. A protocol which allowed frequency to be specified as
> a real number rather than note number would greatly facilitate microtonal
> sequencing and other software.
>
> Monz, are you reading this thread? I know you are working on a
> microtonal sequencer.
>
> Of course, a real number in floating point is typically
> 8 bytes as opposed to 1 for note number so there would be much more
> data to send. But then midi is now a very old standard so speed is
> probably not an issue. What may be an issue is that midi short messages
> are small enough to fit in a Windows message structure whereas something
> containing a real would not. This may have implications for the fast
> transfer of data from one application to another.
>

Computers are a zillion times faster nowadays, making external
hardware synths obsolete for that reason. OK, so we still have to plug
in MIDI controller instruments for manual performance, but there is
now USB and Firewire. The problem is convincing the MIDI controller
instrument makers to catch up - and replace MIDI with something
slightly less horrible. For my purposes there is no external data
connexion needed anyway - it all goes via internal buffers. I take it
your program accepts MIDI streams from Cakewalk and suchlike.
Accepting plugins would be one way of escaping from MIDI and
specifying your own standard.

Peter S.

> >> >Alternatively, the generator could be configured as a plugin, could
> >> >it not...
> >
> >> I have no plans to configure it as a plug-in at the moment but I will
> >> bear this possibility in mind. Plug-in to what? VST? DLL?
> >
>
> >By "generator" I meant ODEION - and by extension any other
> >algorithmic composer. I tend to think of composers as being real
> >people, so I have always referred to ODEION as a 'generator'.
>
> Good distinction (but I thought you meant sound generator).
>
> >Configured as a plugin to your synth. (BTW - does it have a short
> >catchy name that is easy to remember that distinguishes, in the
way
> >that Reaktor or FruityLoops (silly name! but memorable) does?
>
> At the moment there is no way to do that, it just takes midi
> messages from a midi source via Windows.
>
> Midicode Synth is the best I have as a name for now.
> Any suggestions? :-)
>
> Peter Frazer.
> www.midicode.com

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/12/2003 7:30:40 PM

--- In tuning@yahoogroups.com, Peter Frazer <paf@e...> wrote:
> In tuning digest 2862 Carl replied
>
> >>On another note, I had to devise my own pitch numbering file
> >>system for ODEION because of the inadequacies of MIDI. Since I
> >>wish to know, for each and every note, its particular key I store
> >>pitch numbers as a combination of octave number (0-11), root
> >>pitch (0-11, where 0 = C) and offset from that root (0-11). Thus
> >>octave 4, root 5 and offset 3 ==> absolute pitch 56 (i.e. Ab4).
> >>This is far more information than is available with MIDI.
>
> >Thanks for explaining. This does sound convenient. I'm no expert
> >on MIDI, but it does have a Key message, which could probably be
> >sent before every note-on. Not as elegant, but it might provide
> >root information, and the MIDI note number would then provide
> >offset. Finally, the MIDI Tuning Standard (abbreviated MTS around
> >here) provides for, is it 2 bytes, of exact tuning data, which can
> >be interpreted by the synth any way you like.
>
> Good point Carl. I'm not sure if 2 bytes is really enough to
> provide exact tuning, I would have to think about that some more.
>
> >>As for ending up a dynamically retuned piece in exactly the same
> >>key at the original root vibration number - well if we all set
> >>unfulfillable conditions for our music then we would not make
> >>very much of it, would we? The only question here is which
> >>imperfection one is prepared to accept for a particular piece,
> >>ET mini-wolves or JI pitch drift.
>
> >All very true and well put, except there is a third sort of
> >option, around here known as "adaptive tuning". It makes
> >mini-shifts by tempering *melodic* intervals (such as between
> >roots). Paul Erlich pointed out that by rooting to meantone in the
> >proper key, the root could return to concert pitch for most
> >""diatonic"" music, while the shifts between adjacent common tones
> >are kept at a minimum. If you don't know the key in advance you
> >can root to ET. And if you know all the notes in the piece in
> >advance you can root to COFT (Calculated Optimum Fixed
> >Temperament), per John deLaubenfels' idea. You might refer to his
> >web page, adaptune.com, and enjoy many midi samples there.
>
> I agree that John deLaubenfels' adaptive tuning (and the more
> recent Hermode tuning) are superior in many respects to my "dynamic
> re-tuning" as implemented in Midicode Synthesizer. I was working
> in isolation at the time and it seemed to me an obvious step
> forward to use the capabilities of computers to shift Just
> Intonation into a new key at the apposite time. I still believe
> that this type of re-tuning is appropriate in some instances (like
> Peter Sault's algorithmic composition) and I hope that I have made
> a contribution here.

Personally I am prepared to dispense with compromise and end up
wherever dynamic retuning takes me, keeping the tuning 'perfect'
throughout. I have no particular attachment to this or that frequency
except in relation to other frequencies. I guess the 'perfect'
program solution would be an option switch.

Peter S.

>
> BTW Midicode Synth was built around a microtonal tuning system to
> which I then added the sound synthesizer. I accept that there are
> some limitations to the sound quality and hope to develop a new
> synthesis engine when I get the chance. If more people bought my
> software then I could spend more time working on it. :-)
>

It's a chicken and egg thing, Pete. Give me vibration number
addressing and I'll buy it.

> Peter Frazer.
> www.midicode.com
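[Editorial note, not part of the original exchange: the ODEION pitch-numbering scheme quoted earlier in this message (octave 0-11, root pitch 0-11 with 0 = C, offset 0-11) can be sketched in a few lines of Python. The formula 12*octave + root + offset is inferred from the single worked example in the post (octave 4, root 5, offset 3 ==> 56); the post does not state it explicitly.]

```python
# Editorial sketch of the ODEION pitch numbering quoted above.
# Each note carries its key context: (root, offset) survives, where
# a bare MIDI note number would collapse to a single integer.

def odeion_to_absolute(octave: int, root: int, offset: int) -> int:
    """Combine octave, root and offset (each 0-11) into an absolute
    pitch number, per the inferred rule 12*octave + root + offset."""
    for name, value in (("octave", octave), ("root", root), ("offset", offset)):
        if not 0 <= value <= 11:
            raise ValueError(f"{name} must be in 0..11, got {value}")
    return 12 * octave + root + offset

# The example from the post: octave 4, root 5 (= F), offset 3
# gives absolute pitch 56, i.e. Ab4.
print(odeion_to_absolute(4, 5, 3))  # 56
```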

🔗monz <monz@attglobal.net>

12/12/2003 7:50:38 PM

--- In tuning@yahoogroups.com, Peter Frazer <paf@e...> wrote:
> In tuning digest 2863 Gene wrote ...
>
> > MTS uses *three* digits in base 128, not two.
> > Each digit in base 128 is equivalent to 2.1072 digits
> > in base 10, so we are talking six-digit floating point
> > numbers. That should suffice.
>
> Hi Gene,
>
> I'm not quite sure what you mean here Gene, could
> you point me to a reference?
>
> By base 128 I assume you mean 7 bit numbers so 21 bits,
> 2 million or so combinations, but why 2.1072 digits?

a few months ago i and other theorists coined the term
"tetradekamu" to represent the smallest unit of
tuning resolution possible in MIDI, and it is the
unit used in MTS.

you can try to get something from these:

http://tonalsoft.com/enc/tetradekamu.htm

http://tonalsoft.com/monzo/miditune/miditune.htm

-monz
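[Editorial note, not part of the original post: the figures quoted in this subthread - 2.1072 decimal digits per base-128 digit, "2 million or so combinations" in 21 bits, and the size of one tetradekamu - all fall out of a few lines of Python.]

```python
import math

# One base-128 digit carries log10(128) decimal digits of information,
# which is where Gene's 2.1072 figure comes from:
digits_per_base128 = math.log10(128)   # ~2.1072

# Three base-128 digits give 128**3 = 2**21 distinct values,
# Peter Frazer's "2 million or so combinations":
values = 128 ** 3                      # 2097152

# MTS divides each 12edo semitone (100 cents) into 2**14 steps,
# so one tetradekamu is a little over six thousandths of a cent:
tetradekamu_cents = 100 / 2 ** 14      # ~0.0061 cents

print(round(digits_per_base128, 4), values, round(tetradekamu_cents, 4))
```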

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/12/2003 7:54:44 PM

--- In tuning@yahoogroups.com, "Manuel Op de Coul"
<manuel.op.de.coul@e...> wrote:
>
> Peter Frazer wrote:
> >You are right. A protocol which allowed frequency to be specified
> >as a real number rather than note number would greatly facilitate
> >microtonal sequencing and other software.
>
> Frequency numbers are not the best way of specifying a tuning via
> midi, because the resolution in logarithmic terms (the way people
> perceive pitch) varies with the frequency.
> That's why the Midi Tuning Standard and all other protocols for
> hardware synths that I know use logarithmic numbers.

You are missing the point entirely, Manuel. How I arrive at a
particular frequency for a particular note that I want is entirely my
own affair. If I calculate by whatever means I am using that the next
note I want is 123.4567Hz, then the synth should just simply give it
to me as requested. That's why MIDI is defective and inadequate to
the task of complete musical freedom. You keep thinking 'keyboard
keyboard keyboard'. It's very limiting. Everyone has attacked me for
being the Great Defender of Dodekaphony and rattles on about other
divisions of the octave but the simple fact is that MIDI is locked
into dodekaphony. Personally I am simply not prepared to deal with
the complexities of trying to play in, for example, a 19-pitch octave
by clunking about with 12 pitch numbers. Since MIDI is inadequate to
start with, anything based on MIDI is going to inherit the same
inadequacy and clunkiness. Ever heard of the KISS principle?

Peter

>
> >Good point Carl. I'm not sure if 2 bytes is really enough to
> >provide exact tuning, I would have to think about that some more.
>
> There are different parts in the Midi Tuning Standard but the most
> used bulk dump is 3 bytes.
> So Peter, isn't it a good time to implement it in Midicode as I
> have suggested to you before?
> Best,
>
> Manuel

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/12/2003 9:18:34 PM

--- In tuning@yahoogroups.com, Peter Frazer <paf@e...> wrote:
> In tuning digest 2863 Carl wrote
>
> >>I agree that John deLaubenfels' adaptive tuning (and the more
> >>recent Hermode tuning) are superior in many respects to my
> >>"dynamic re-tuning" as implemented in Midicode Synthesizer. I was
> >>working in isolation at the time and it seemed to me an obvious
> >>step forward to use the capabilities of computers to shift Just
> >>Intonation into a new key at the apposite time. I still believe
> >>that this type of re-tuning is appropriate in some instances
> >>(like Peter Sault's algorithmic composition) and I hope that I
> >>have made a contribution here.
>
> >As I understand it your "dynamic retuning" allows the composer to
> >specify roots on a dedicated MIDI channel. This is in a totally
> >different league than automatic retuning, in my book. The
> >additional choice may take a lifetime to master for different
> >scales, but it also brings a world of compositional opportunities
> >that automatic retuning cannot. Apples and oranges.
>
> >-Carl
>
> Thanks Carl.
>
> The basic idea of dynamic re-tuning is that you could have a midi
> pedal board on the re-tuning channel and hit the new key note
> typically during a modulation pivot chord. I wouldn't have thought
> that was too difficult to master.
>
> (Also works from sequencer)
>
> Peter.
> www.midicode.com

There's a problem in that too. The modulation bridge note
(or 'pivot') need not be the tonic of the new key. So if you do not
retune relative to the bridge note then you introduce an unwanted
dissonance into the melody immediately following the pivot.

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/12/2003 9:20:29 PM

--- In tuning@yahoogroups.com, Peter Frazer <paf@e...> wrote:
> In tuning digest 2863 Gene wrote ...
>
> >MTS uses *three* digits in base 128, not two. Each digit in base
> >128 is equivalent to 2.1072 digits in base 10, so we are talking
> >six-digit floating point numbers. That should suffice.
>
> Hi Gene,
>
> I'm not quite sure what you mean here Gene, could you point me
> to a reference?
>
> By base 128 I assume you mean 7 bit numbers so 21 bits,
> 2 million or so combinations, but why 2.1072 digits?
>
> I agree that it should suffice.
>
> In tuning digest 2864 Gene wrote ...
>
> >> You are right. A protocol which allowed frequency to be
> >> specified as a real number rather than note number would greatly
> >> facilitate microtonal sequencing and other software.
>
> >That is, more or less, what the midi tuning standard does. It
> >describes the note in terms of three digits in base 128.
>
> But doesn't the midi tuning standard simply enable an entire tuning
> table to be downloaded?
>
> The original idea that Peter put forward was to specify the
> individual frequency of each note rather than as a note number,
> i.e. in the note-on message (or equivalent).
>
>
> Peter
> www.midicode.com

Thereby avoiding a whole raft of arithmetic jiggery-pokery.

Peter S.

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/12/2003 9:29:49 PM

--- In tuning@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >But doesn't the midi tuning standard simply enable an entire tuning
> >table to be downloaded?
> >
> >The original idea that Peter put forward was to specify the
> >individual frequency of each note rather than as a note number,
> >i.e. in the note-on message (or equivalent).
>
> There are several different types of messages defined by the spec.
>
> http://www.midi.org/about-midi/tuning.shtml
>
> http://www.midi.org/about-midi/tuning_extens.shtml
>
> Note esp. the "single note" messages on the second link.
>
> -Carl

Who employs these MIDIots? Take something that's clunky and make it
even clunkier. Why is it so difficult for them to comprehend
SIMPLICITY? Why do they always make arrogant assumptions and always
the wrong ones? There is nothing in all of that which enables anyone
to request a particular vibration number. Talk about PERVERSE!

🔗Aaron K. Johnson <akjmicro@comcast.net>

12/12/2003 9:30:19 PM

On Friday 12 December 2003 11:20 pm, Peter Wakefield Sault wrote:
> --- In tuning@yahoogroups.com, Peter Frazer <paf@e...> wrote:
> > In tuning digest 2863 Gene wrote ...
> >
> > >MTS uses *three* digits in base 128, not two. Each digit in base
> > >128 is equivalent to 2.1072 digits in base 10, so we are talking
> > >six-digit floating point numbers. That should suffice.
> >
> > Hi Gene,
> >
> > I'm not quite sure what you mean here Gene, could you point me
> > to a reference?
> >
> > By base 128 I assume you mean 7 bit numbers so 21 bits,
> > 2 million or so combinations, but why 2.1072 digits?
> >
> > I agree that it should suffice.
> >
> > In tuning digest 2864 Gene wrote ...
> >
> > >> You are right. A protocol which allowed frequency to be
> > >> specified as a real number rather than note number would
> > >> greatly facilitate microtonal sequencing and other software.
> >
> > >That is, more or less, what the midi tuning standard does. It
> > >describes the note in terms of three digits in base 128.
> >
> > But doesn't the midi tuning standard simply enable an entire
> > tuning table to be downloaded?
> >
> > The original idea that Peter put forward was to specify the
> > individual frequency of each note rather than as a note number,
> > i.e. in the note-on message (or equivalent).
> >
> > Peter
> > www.midicode.com
>
> Thereby avoiding a whole raft of arithmetic jiggery-pokery.
>
> Peter S.

I fail to see what the fuss over midi is. It's perfectly accurate when using
pitch bend signals as well as note on signals. I have written functions for
microtonality that take all the work away from the user.

I do it all the time - and to avoid thinking in note numbers, it's
trivial to write a function that would translate a given pitch in
hertz to the nearest note number plus bend. Hell, my cat could do it
in 3 minutes.....

-Aaron.
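[Editorial note, not part of Aaron's post: the hertz-to-note-plus-bend translation he describes can be sketched as below. The A4 = MIDI note 69 = 440 Hz anchoring and the +/-2 semitone bend range with 14-bit values centred on 8192 are common defaults assumed here, not details the post specifies.]

```python
import math

def hz_to_note_and_bend(freq_hz, bend_range_semitones=2.0):
    """Translate a frequency in Hz into the nearest MIDI note number
    plus a 14-bit pitch-bend value (8192 = no bend).

    Assumes A4 = note 69 = 440 Hz and a +/-2 semitone bend range."""
    semitones_above_a440 = 12 * math.log2(freq_hz / 440.0)
    exact_note = 69 + semitones_above_a440
    note = int(round(exact_note))
    # Map the leftover fraction of a semitone onto the bend range.
    bend = 8192 + round((exact_note - note) / bend_range_semitones * 8192)
    return note, bend

# 495 Hz (a 9:8 above A-440) lands just above B4 (note 71):
print(hz_to_note_and_bend(495.0))  # (71, 8352)
```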

🔗Carl Lumma <ekin@lumma.org>

12/12/2003 9:55:27 PM

>Who employs these MIDIots? Take something that's clunky and make it
>even clunkier. Why is it so difficult for them to comprehend
>SIMPLICITY? Why do they always make arrogant assumptions and always
>the wrong ones? There is nothing in all of that which enables
>anyone to request a particular vibration number. Talk about PERVERSE!

Without going into the "MIDI sucks" issue, I can't understand why
you keep harping on the vibration number thing. If you want to
think in terms of vibration number, just write an interface to
MIDI/MTS. It shouldn't require more than a few lines of code.

-Carl

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/12/2003 11:22:19 PM

--- In tuning@yahoogroups.com, "Aaron K. Johnson" <akjmicro@c...>
wrote:
> On Friday 12 December 2003 11:20 pm, Peter Wakefield Sault wrote:
> > --- In tuning@yahoogroups.com, Peter Frazer <paf@e...> wrote:
> > > In tuning digest 2863 Gene wrote ...
> > >
> > > >MTS uses *three* digits in base 128, not two. Each digit in
> > > >base 128 is equivalent to 2.1072 digits in base 10, so we are
> > > >talking six-digit floating point numbers. That should suffice.
> > >
> > > Hi Gene,
> > >
> > > I'm not quite sure what you mean here Gene, could you point me
> > > to a reference?
> > >
> > > By base 128 I assume you mean 7 bit numbers so 21 bits,
> > > 2 million or so combinations, but why 2.1072 digits?
> > >
> > > I agree that it should suffice.
> > >
> > > In tuning digest 2864 Gene wrote ...
> > >
> > > >> You are right. A protocol which allowed frequency to be
> > > >> specified as a real number rather than note number would
> > > >> greatly facilitate microtonal sequencing and other software.
> > > >
> > > >That is, more or less, what the midi tuning standard does. It
> > > >describes the note in terms of three digits in base 128.
> > >
> > > But doesn't the midi tuning standard simply enable an entire
> > > tuning table to be downloaded?
> > >
> > > The original idea that Peter put forward was to specify the
> > > individual frequency of each note rather than as a note number,
> > > i.e. in the note-on message (or equivalent).
> > >
> > > Peter
> > > www.midicode.com
> >
> > Thereby avoiding a whole raft of arithmetic jiggery-pokery.
> >
> > Peter S.
>
> I fail to see what the fuss over midi is. It's perfectly accurate
> when using pitch bend signals as well as note on signals. I have
> written functions for microtonality that take all the work away
> from the user.
>
> I do it all the time - and to avoid thinking in note numbers, it's
> trivial to write a function that would translate a given pitch in
> hertz to the nearest note number plus bend. Hell, my cat could do
> it in 3 minutes.....
>
> -Aaron.

No doubt your cat can also sing in 3-part harmony but why are you
defending the defects of MIDI? Have you got shares in it?

🔗monz <monz@attglobal.net>

12/12/2003 11:26:31 PM

--- In tuning@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >Who employs these MIDIots? Take something that's clunky and make
> >it even clunkier. Why is it so difficult for them to comprehend
> >SIMPLICITY? Why do they always make arrogant assumptions and
> >always the wrong ones? There is nothing in all of that which
> >enables anyone to request a particular vibration number. Talk
> >about PERVERSE!
>
> Without going into the "MIDI sucks" issue, I can't understand why
> you keep harping on the vibration number thing. If you want to
> think in terms of vibration number, just write an interface to
> MIDI/MTS. It shouldn't require more than a few lines of code.
>
> -Carl

yeah, i don't get it either. it's simple to convert
from frequency-vibration-numbers to tetradekamus and back.

i'll illustrate by means of an example. let's say we want
the note which is an 8:9 ratio above A-440.

first you find out how many 12edo semitones above or below
the reference (A-440) your desired note is. in this case,
(9/8)*440 = 495 Hz. so it's log(9/8) * 12/log(2)
= ~2.039100017 12edo semitones above A-440.

so now you know that your nearest MIDI-note is the B which
is 2 semitones above A-440.

next you subtract the actual value you calculated from
the nearest 12edo approximation. ~2.039100017 - 2 =
~0.039100017 semitones discrepancy.

now multiply that number by 2^14, since there are 2^14
tetradekamus in each 12edo semitone. ~0.039100017 * 2^14
= ~640.6146836 tetradekamus. simply round that off to
the nearest integer value and you have the tetradekamu
correction necessary to tell MIDI to give you the 8:9 ratio,
which is 495 Hz.

so the answer for this example is A-440 * 9/8
= A-440 + 2 12edo semitones + 641 tetradekamus.

if you want to use any other MIDI ...mu value, simply
change the power of 2 in the last calculation to reflect
whichever ...mu you want. so a dodekamu would use 2^12,
a hexamu would use 2^6, etc.

MTS uses the tetradekamu convention, dividing every
12edo semitone into 2^14 tetradekamu units.

so, in algorithm format, let's call the reference frequency R
and the desired frequency F. to get the MIDI-note + tetradekamu:

midi_note = int(log(F/R) * 12/log(2))
tetradekamu = round((log(F/R) * 12/log(2) - midi_note) * 2^14)

-monz
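[Editorial note, not part of monz's post: his recipe can be wrapped up as a small Python function. The A-440 = MIDI note 69 anchoring is an assumed convention; the post itself only works relative to the reference frequency.]

```python
import math

def freq_to_mts(freq_hz, ref_hz=440.0, ref_note=69):
    """Express a frequency as (MIDI note, tetradekamu correction),
    following monz's recipe: measure the 12edo distance from the
    reference, split off the whole semitones below, and scale the
    remainder by 2**14 (the MTS tetradekamu convention)."""
    semitones = 12 * math.log2(freq_hz / ref_hz)
    whole = math.floor(semitones)
    tetradekamu = round((semitones - whole) * 2 ** 14)
    return ref_note + whole, tetradekamu

# monz's worked example: 9:8 above A-440 (495 Hz) comes out as
# 2 semitones above A (i.e. B) plus 641 tetradekamus.
print(freq_to_mts(440.0 * 9 / 8))  # (71, 641)
```

For any other MIDI ...mu unit, change the `2 ** 14` to the matching power of two (2**12 for dodekamus, 2**6 for hexamus), as the post notes.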

🔗peter_wakefield_sault <sault@cyberware.co.uk>

12/12/2003 11:50:52 PM

--- In tuning@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >Who employs these MIDIots? Take something that's clunky and make
> >it even clunkier. Why is it so difficult for them to comprehend
> >SIMPLICITY? Why do they always make arrogant assumptions and
> >always the wrong ones? There is nothing in all of that which
> >enables anyone to request a particular vibration number. Talk
> >about PERVERSE!
>
> Without going into the "MIDI sucks" issue, I can't understand why
> you keep harping on the vibration number thing. If you want to
> think in terms of vibration number, just write an interface to
> MIDI/MTS. It shouldn't require more than a few lines of code.
>
> -Carl

I keep "harping on" about it because everybody keeps arguing against
it. I'm not trying to force you or anyone else to do it the simple
way. I just want to be able to do it the simple way myself. What is
your problem with that?

🔗Kurt Bigler <kkb@breathsense.com>

12/12/2003 11:51:31 PM

on 12/12/03 7:23 PM, Peter Wakefield Sault <sault@cyberware.co.uk> wrote:

> --- In tuning@yahoogroups.com, Peter Frazer <paf@e...> wrote:
>> In tuning digest 2861 Peter Sault wrote ...
>>
>> [>SNIP]
>>
>>>> As before, excuse me if I have some catching up to do. What is
>>>> vibration number addressable?
>>>>
>>>> Peter Frazer,
>>>> www.midicode.com
>>
>>> MIDI provides absolute pitch number addressing, where pitch
>>> number 0 = C0. The synthesizer is expected to translate the pitch
>>> number into a vibration number (i.e. a specific frequency in Hz).
>>
>>> The concept of vibration number addressing is that the synthesizer
>>> has only to respond with a note of the requested vibration number.
>>> Vibration numbers are reals, unlike pitch numbers which are
>>> naturals. So, I ask the synth to give me 123.4567Hz and that is
>>> what I should get. A task which MIDI is not up to, so far as I
>>> know, since it does not encode reals.
>>
>> You are right. A protocol which allowed frequency to be specified
>> as a real number rather than note number would greatly facilitate
>> microtonal sequencing and other software.
>>
>> Monz, are you reading this thread? I know you are working on a
>> microtonal sequencer.
>>
>> Of course, a real number in floating point is typically
>> 8 bytes as opposed to 1 for note number so there would be much more
>> data to send. But then midi is now a very old standard so speed is
>> probably not an issue. What may be an issue is that midi short
>> messages are small enough to fit in a Windows message structure
>> whereas something containing a real would not. This may have
>> implications for the fast transfer of data from one application to
>> another.
>>
>
> Computers are a zillion times faster nowadays making external
> hardware synths obsolete for that reason. Ok so we still have to plug
> in MIDI controller instruments for manual performance but there is
> now USB and Firewire. The problem is convincing the MIDI controller
> instrument makers to catch up - and replace MIDI with something
> slightly less horrible. For my purposes there is no external data
> connexion needed anyway - it all goes via internal buffers. I take it
> your program accepts MIDI-streams from Cakewalk and the suchlike.
> Accepting plugins would be one way of escaping from MIDI and
> specifying your own standard.
>
> Peter S.

That convincing may take some time. First of all the majority of the market
has to become sophisticated enough to recognize the functional benefits of a
fully-flexible protocol. Secondly the market would be unlikely to
distinguish a hack to a bad protocol from a better protocol, and the former
would cost the industry less in making the adjustment. So it will probably
happen some day, when the cost of making the transition hits zero.

I brought up the problems with MIDI on the apple CoreAudio list and I was
treated somewhat like an alien. But it was pointed out to me that apple's
internal musical instrument interface (a kind of audio unit, I forget the
exact name) does not suffer the limitations of MIDI, while also supporting
it as one means of control. However apple has no current interest or
motivation to consider changes to the external protocol. This might be a
slight misrepresentation since the conversation was more than a year ago,
but I think the gist of it is correct. Meanwhile MTS is arriving to serve
our purposes relatively well, and we can welcome the day when even that is
implemented fully in all its flavors, and also accurately enough to satisfy
the tuning-sensitive contingent.

Finally, the very fact that so much customization and workarounds can be
done in software actually reduces the pressure for changes to MIDI.

-Kurt

🔗Kurt Bigler <kkb@breathsense.com>

12/12/2003 11:56:48 PM

on 12/12/03 7:54 PM, Peter Wakefield Sault <sault@cyberware.co.uk> wrote:

> --- In tuning@yahoogroups.com, "Manuel Op de Coul"
> <manuel.op.de.coul@e...> wrote:
>>
>> Peter Frazer wrote:
>>> You are right. A protocol which allowed frequency to be specified
>>> as a real number rather than note number would greatly facilitate
>>> microtonal sequencing and other software.
>>
>> Frequency numbers are not the best way of specifying a tuning via
>> midi, because the resolution in logarithmic terms (the way people
>> perceive pitch) varies with the frequency.
>> That's why the Midi Tuning Standard and all other protocols for
>> hardware synths that I know use logarithmic numbers.
>
>
> You are missing the point entirely, Manuel. How I arrive at a
> particular frequency for a particular note that I want is entirely my
> own affair. If I calculate by whatever means I am using that the next
> note I want is 123.4567Hz, then the synth should just simply give it
> to me as requested. That's why MIDI is defective and inadequate to
> the task of complete musical freedom. You keep thinking 'keyboard
> keyboard keyboard'. It's very limiting. Everyone has attacked me for
> being the Great Defender of Dodekaphony and rattles on about other
> divisions of the octave but the simple fact is that MIDI is locked
> into dodekaphony. Personally I am simply not prepared to deal with
> the complexities of trying to play in, for example, a 19-pitch octave
> by clunking about with 12 pitch numbers. Since MIDI is inadequate to
> start with, anything based on MIDI is going to inherit the same
> inadequacy and clunkiness. Ever heard of the KISS principle?
>
> Peter

The midi protocol itself is less limiting than the midi attachment to the
standard keyboard. You can take midi note numbers mod any octave size you
like. If the mainstream does not provide software synthesis to support
these needs, then someone outside the mainstream will. Many of us are
working on this for ourselves and this community right now.

How do the generalized keyboard(s) utilize midi?

-Kurt

🔗Carl Lumma <ekin@lumma.org>

12/13/2003 12:00:00 AM

>> >Who employs these MIDIots? Take something that's clunky and make
>> >it even clunkier. Why is it so difficult for them to comprehend
>> >SIMPLICITY? Why do they always make arrogant assumptions and
>> >always the wrong ones? There is nothing in all of that which
>> >enables anyone to request a particular vibration number. Talk
>> >about PERVERSE!
>>
>> Without going into the "MIDI sucks" issue, I can't understand why
>> you keep harping on the vibration number thing. If you want to
>> think in terms of vibration number, just write an interface to
>> MIDI/MTS. It shouldn't require more than a few lines of code.
>
>I keep "harping on" about it because everybody keeps arguing against
>it. I'm not trying to force you or anyone else to do it the simple
>way. I just want to be able to do it the simple way myself. What is
>your problem with that?

None. But it may take some time for the World to change to
accommodate your desires. In the meantime 5 lines of code should
solve your problem, and allow you to translate MIDI files into
ODEION files and vice versa.

I don't like MIDI any more than the next guy, but it has the
significant advantage of existing.

Incidentally Peter, since you mentioned it, there are now MIDI-over-
USB and MIDI-over-IEEE1394 (firewire) specs, for better or worse.

-Carl

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/13/2003 12:16:23 AM

--- In tuning@yahoogroups.com, "monz" <monz@a...> wrote:
> --- In tuning@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> > >Who employs these MIDIots? Take something that's clunky and make
> > >it even clunkier. Why is it so difficult for them to comprehend
> > >SIMPLICITY? Why do they always make arrogant assumptions and
> > >always the wrong ones? There is nothing in all of that which
> > >enables anyone to request a particular vibration number. Talk
> > >about PERVERSE!
> >
> > Without going into the "MIDI sucks" issue, I can't understand why
> > you keep harping on the vibration number thing. If you want to
> > think in terms of vibration number, just write an interface to
> > MIDI/MTS. It shouldn't require more than a few lines of code.
> >
> > -Carl
>
>
> yeah, i don't get it either. it's simple to convert
> from frequency-vibration-numbers to tetradekamus and back.
>
>
> i'll illustrate by means of an example. let's say we want
> the note which is an 8:9 ratio above A-440.
>
> first you find out how many 12edo semitones above or below
> the reference (A-440) your desired note is. in this case,
> (9/8)*440 = 495 Hz. so it's log(9/8) * 12/log(2)
> = ~2.039100017 12edo semitones above A-440.
>
> so now you know that your nearest MIDI-note is the B which
> is 2 semitones above A-440.
>
> next you subtract the actual value you calculated from
> the nearest 12edo approximation. ~2.039100017 - 2 =
> ~0.039100017 semitones discrepancy.
>
> now multiply that number by 2^14, since there are 2^14
> tetradekamus in each 12edo semitone. ~0.039100017 * 2^14
> = ~640.6146836 tetradekamus. simply round that off to
> the nearest integer value and you have the tetradekamu
> correction necessary to tell MIDI to give you the 8:9 ratio,
> which is 495 Hz.
>
> so the answer for this example is A-440 * 9/8
> = A-440 + 2 12edo semitones + 641 tetradekamus.
>
>
> if you want to use any other MIDI ...mu value, simply
> change the power of 2 in the last calculation to reflect
> whichever ...mu you want. so a dodekamu would use 2^12,
> a hexamu would use 2^6, etc.
>
> MTS uses the tetradekamu convention, dividing every
> 12edo semitone into 2^14 tetradekamu units.
>
>
> so, in algorithm format, let's call the reference frequency R
> and the desired frequency F. to get the MIDI-note + tetradekamu:
>
> midi_note = int(log(F/R) * 12/log(2))
> tetradekamu = round((log(F/R) * 12/log(2) - midi_note) * 2^14)
>
>
>
> -monz

Hi Joe

All that palaver to get a note which is 8:9 above 440Hz. Here's *my*
method:-

9/8 x 440Hz = 495Hz

Need I say more?

Peter

🔗Kurt Bigler <kkb@breathsense.com>

12/13/2003 12:22:51 AM

on 12/12/03 9:29 PM, Peter Wakefield Sault <sault@cyberware.co.uk> wrote:

> --- In tuning@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>>> But doesn't the midi tuning standard simply enable an entire tuning
>>> table to be downloaded?
>>>
>>> The original idea that Peter put forward was to specify the
>>> individual frequency of each note rather than as a note number,
>>> i.e. in the note-on message (or equivalent).
>>
>> There are several different types of messages defined by the spec.
>>
>> http://www.midi.org/about-midi/tuning.shtml
>>
>> http://www.midi.org/about-midi/tuning_extens.shtml
>>
>> Note esp. the "single note" messages on the second link.
>>
>> -Carl
>
> Who employs these MIDIots? Take something that's clunky and make it
> even clunkier. Why is it so difficult for them to comprehend
> SIMPLICITY? Why do they always make arrogant assumptions and always
> the wrong ones? There is nothing in all of that which enables
> anyone to request a particular vibration number. Talk about PERVERSE!

Get real Peter! ;) The world is indeed perverse by that definition. The
music equipment industry certainly is. Nonetheless from another
perspective, possibly from a fairly *embracing* perspective, the practical
and the ideal will meet in a place that will offend the idealist. This is
as true as the dirt on which we stand. I suspect people involved in the
practical while trying to have a life often find themselves appreciating the
beauty of what is not ideal. I certainly find this true for myself in many
areas.

-Kurt

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/13/2003 12:32:19 AM

--- In tuning@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
> on 12/12/03 7:23 PM, Peter Wakefield Sault <sault@c...> wrote:
>
> > --- In tuning@yahoogroups.com, Peter Frazer <paf@e...> wrote:
> >> In tuning digest 2861 Peter Sault wrote ...
> >>
> >> [>SNIP]
> >>
> >>>> As before, excuse me if I have some catching up to do. What
> >>>> is vibration number addressable?
> >>>>
> >>>> Peter Frazer,
> >>>> www.midicode.com
> >>
> >>> MIDI provides absolute pitch number addressing, where pitch
> >>> number 0 = C0. The synthesizer is expected to translate the
> >>> pitch number into a vibration number (i.e. a specific frequency
> >>> in Hz).
> >>
> >>> The concept of vibration number addressing is that the
> >>> synthesizer has only to respond with a note of the requested
> >>> vibration number. Vibration numbers are reals, unlike pitch
> >>> numbers which are naturals. So, I ask the synth to give me
> >>> 123.4567Hz and that is what I should get. A task which MIDI is
> >>> not up to, so far as I know, since it does not encode reals.
> >>
> >> You are right. A protocol which allowed frequency to be
> >> specified as a real number rather than note number would greatly
> >> facilitate microtonal sequencing and other software.
> >>
> >> Monz, are you reading this thread? I know you are working on a
> >> microtonal sequencer.
> >>
> >> Of course, a real number in floating point is typically
> >> 8 bytes as opposed to 1 for note number so there would be much
> >> more data to send. But then midi is now a very old standard so
> >> speed is probably not an issue. What may be an issue is that
> >> midi short messages are small enough to fit in a Windows message
> >> structure whereas something containing a real would not. This
> >> may have implications for the fast transfer of data from one
> >> application to another.
> >>
> >
> > Computers are a zillion times faster nowadays making external
> > hardware synths obsolete for that reason. Ok so we still have to
> > plug in MIDI controller instruments for manual performance but
> > there is now USB and Firewire. The problem is convincing the MIDI
> > controller instrument makers to catch up - and replace MIDI with
> > something slightly less horrible. For my purposes there is no
> > external data connexion needed anyway - it all goes via internal
> > buffers. I take it your program accepts MIDI-streams from
> > Cakewalk and the suchlike. Accepting plugins would be one way of
> > escaping from MIDI and specifying your own standard.
> >
> > Peter S.
>
> That convincing may take some time. First of all the majority of
> the market has to become sophisticated enough to recognize the
> functional benefits of a fully-flexible protocol. Secondly the
> market would be unlikely to distinguish a hack to a bad protocol
> from a better protocol, and the former would cost the industry less
> in making the adjustment. So it will probably happen some day, when
> the cost of making the transition hits zero.
>
> I brought up the problems with MIDI on the apple CoreAudio list and
> I was treated somewhat like an alien. But it was pointed out to me
> that apple's internal musical instrument interface (a kind of audio
> unit, I forget the exact name) does not suffer the limitations of
> MIDI, while also supporting it as one means of control. However
> apple has no current interest
or
> motivations to consider changes to the external protocol. This
might be a
> slight misrepresentation since the conversation was more than a
year ago,
> but I think the gist of it is correct. Meanwhile MTS is arriving
to serve
> our purposes relatively well, and we can welcome the day when even
that is
> implemented fully in all its flavors, and also accurately enough to
satisfy
> the tuning-sensitive contingent.
>
> Finally, the very fact that so much customization and workarounds
can be
> done in software actually reduces the pressure for changes to MIDI.
>
> -Kurt

I'm going to stick to my own solution. Since I am not trying to
achieve realtime performance there is much I can dispense with,
including MUDDY. I have written a program that precisely pitch-adjusts
WAV samples, always maintaining the proper slope of the waveform, and
that takes as input parameters either old and new frequencies or a
single coefficient. Then I paste the result into a track using SoundForge.
That's how I created 'Babylon' - my first using my software:-
http://www.odeion.org/music/pws-babylon-l.mp3 = 32kb/s 11kHz
http://www.odeion.org/music/pws-babylon-h.mp3 = 128kb/s 44kHz

It works for me.

Peter.
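
For readers curious what ratio-based pitch adjustment of WAV samples
involves, here is a toy Python sketch using plain linear interpolation.
The function name and interface are illustrative only; Peter's actual
program and its waveform-slope handling are not shown in this thread:

```python
def resample(samples, old_freq, new_freq):
    """Pitch-shift a mono sample buffer by resampling at the ratio
    new_freq/old_freq, with linear interpolation between samples.
    A toy illustration, not Peter's actual program."""
    ratio = new_freq / old_freq
    n_out = int(len(samples) / ratio)
    out = []
    for i in range(n_out):
        pos = i * ratio                     # read position in the input
        j = int(pos)
        frac = pos - j                      # fractional part for interpolation
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a + frac * (b - a))      # linear interpolation
    return out

# Doubling the frequency halves the length: an octave up.
print(len(resample([0.0] * 1000, 440.0, 880.0)))  # 500
```

A real implementation would use a band-limited interpolator rather than
linear interpolation, but the ratio arithmetic is the same.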

🔗Kurt Bigler <kkb@breathsense.com>

12/13/2003 12:41:52 AM

on 12/12/03 11:50 PM, peter_wakefield_sault <sault@cyberware.co.uk> wrote:

> --- In tuning@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>>> Who employs these MIDIots? Take something that's clunky and make
> it
>>> even clunkier. Why is it so difficult for them to comprehend
>>> SIMPLICITY? Why do they always make arrogant assumptions and
> always
>>> the wrong ones? There is nothing in all of that enables anyone to
>>> request a particular vibration number. Talk about PERVERSE!
>>
>> Without going into the "MIDI sucks" issue, I can't understand why
>> you keep harping on the vibration number thing. If you want to
>> think in terms of vibration number, just write an interface to
>> MIDI/MTS. It shouldn't require more than a few lines of code.
>>
>> -Carl
>
> I keep "harping on" about it because everybody keeps arguing against
> it. I'm not trying to force you or anyone else to do it the simple
> way. I just want to be able to do it the simple way myself. What is
> your problem with that?

Doing it the simple way yourself is exactly what Carl suggested in his final
sentence.

I didn't hear anyone arguing against it being a good idea. But when you
first brought it up (and each time again) you brought it up with such
disdain of the stupidity of the world. The rest of us are just dealing with
"what is" and making music. And we do the simple things ourselves, just as
you are doing, whenever that is relatively easy to arrange, which in fact it
is much of the time.

I am an idealist too in this regard and I'm pissed off at how stupid things
are, including MIDI. And then I get over it, particularly when I get a more
embracing picture of *why* things perpetuate in their current patterns,
sometimes for pretty good reasons. So many things are "wrong" with the
world that you have to pick what you think is most worth working on at a
given time. Replacing the midi protocol looks like a really bad choice for
the moment. But around the corner I have a need for which MIDI, with its
current standard bandwidth (much slower than what the hardware is capable
of), will become an obstacle, and at that point I will be spending big bucks
to not use MIDI. Meanwhile MIDI over firewire (or perhaps USB-2) may soon
eliminate some of the bandwidth problems.

Incidentally there is a MIDI alternative called OSC (Open Sound Control) and
there are ethernet-based interfaces that support this currently being
prototyped at CNMAT in Berkeley, CA with eventual technology transfer and
production planned. This includes support for piano-strips that were
previously MIDI-limited, and which in the new generation will be able to
report absolute key-position in real-time without the arpeggiation problems
which occur at the MIDI bandwidth. No doubt there will be OSC keyboards of
other sorts at some point, but I don't know about this yet.

It's good to moan about things at times, when we have things to moan about,
but eventually it takes too much time to continue in that mode. Hey, when I
arrive somewhere new, I find myself moaning again too. And I get away with
it when everyone knows less than me! But my propaganda has still not hurt
Bill Gates.

(Peter I'm being much lighter and less careful now, but I'm enjoying myself
more. I hope you can enjoy yourself too and that you don't take me too
seriously.)

-Kurt

🔗Kurt Bigler <kkb@breathsense.com>

12/13/2003 12:44:40 AM

on 12/13/03 12:16 AM, Peter Wakefield Sault <sault@cyberware.co.uk> wrote:

> --- In tuning@yahoogroups.com, "monz" <monz@a...> wrote:
>> --- In tuning@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
>>>> Who employs these MIDIots? Take something that's clunky and make
>> it
>>>> even clunkier. Why is it so difficult for them to comprehend
>>>> SIMPLICITY? Why do they always make arrogant assumptions and
>> always
>>>> the wrong ones? There is nothing in all of that enables anyone
> to
>>>> request a particular vibration number. Talk about PERVERSE!
>>>
>>> Without going into the "MIDI sucks" issue, I can't understand why
>>> you keep harping on the vibration number thing. If you want to
>>> think in terms of vibration number, just write an interface to
>>> MIDI/MTS. It shouldn't require more than a few lines of code.
>>>
>>> -Carl
>>
>>
>> yeah, i don't get it either. it's simple to convert
>> from frequency-vibration-numbers to tetradekamus and back.
>>
>>
>> i'll illustrate by means of an example. let's say we want
>> the note which is an 8:9 ratio above A-440.
>>
>> first you find out how many 12edo semitones above or below
>> the reference (A-440) your desired note is. in this case,
>> (9/8)*440 = 495 Hz. so it's log(9/8) * 12/log(2)
>> = ~2.039100017 12edo semitones above A-440.
>>
>> so now you know that your nearest MIDI-note is the B which
>> is 2 semitones above A-440.
>>
>> next you subtract the actual value you calculated from
>> the nearest 12edo approximation. ~2.039100017 - 2 =
>> ~0.039100017 semitones discrepancy.
>>
>> now multiply that number by 2^14, since there are 2^14
>> tetradekamus in each 12edo semitone. ~0.039100017 * 2^14
>> = ~640.6146836 tetradekamus. simply round that off to
>> the nearest integer value and you have the tetradekamu
>> correction necessary to tell MIDI to give you the 8:9 ratio,
>> which is 495 Hz.
>>
>> so the answer for this example is A-440 * 9/8
>> = A-440 + 2 12edo semitones + 641 tetradekamus.
>>
>>
>> if you want to use any other MIDI ...mu value, simply
>> change the power of 2 in the last calculation to reflect
>> whichever ...mu you want. so a dodekamu would use 2^12,
>> a hexamu would use 2^6, etc.
>>
>> MTS uses the tetradekamu convention, dividing every
>> 12edo semitone into 2^14 tetradekamu units.
>>
>>
>> so, in algorithm format, let's call the reference frequency R
>> and the desired frequency F. to get the MIDI-note + tetradekamu:
>>
>> (int(log(F/R)*((12/log(2))))
>> -((log(F/R)*((12/log(2)))-(int(log(F/R)*((12/log(2)))))*(2^14)
>>
>>
>>
>> -monz
>
> Hi Joe
>
> All that palaver to get a note which is 8:9 above 440Hz. Here's *my*
> method:-
>
> 9/8 x 440Hz = 495Hz
>
> Need I say more?

That's exactly what we are all asking ourselves. ;)

-Kurt

> Peter
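
For anyone who wants to try the conversion monz walks through above, here
is a minimal Python sketch. The function name and the signed-offset
convention are illustrative; actual MTS messages encode a non-negative
fraction above the floor note rather than a signed correction from the
nearest note:

```python
import math

def freq_to_note_and_mu(f_hz, ref_hz=440.0, ref_note=69):
    """Nearest 12edo MIDI note plus a tetradekamu (1/2^14 of a semitone)
    correction, following monz's walk-through. Illustrative only."""
    semis = 12 * math.log2(f_hz / ref_hz)   # signed distance from the reference
    nearest = round(semis)                  # nearest 12edo semitone
    mu = round((semis - nearest) * 2**14)   # leftover, in 2^14ths of a semitone
    return ref_note + nearest, mu

# 9/8 above A-440 is 495 Hz: B (MIDI note 71) plus 641 tetradekamus,
# matching monz's figures.
print(freq_to_note_and_mu(9 / 8 * 440))  # (71, 641)
```

Peter's one-line arithmetic gives the target frequency; this is the extra
step needed to express that frequency in MIDI/MTS terms.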

🔗Kurt Bigler <kkb@breathsense.com>

12/13/2003 12:50:16 AM

on 12/13/03 12:32 AM, Peter Wakefield Sault <sault@cyberware.co.uk> wrote:

> --- In tuning@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
>> on 12/12/03 7:23 PM, Peter Wakefield Sault <sault@c...> wrote:
>>
>>> --- In tuning@yahoogroups.com, Peter Frazer <paf@e...> wrote:
>>>> In tuning digest 2861 Peter Sault wrote ...
>>>>
>>>> [>SNIP]
>>>>
>>>>>> As before, excuse me if I have some catching up to do. What is
>>>>>> vibration number addressable?
>>>>>>
>>>>>> Peter Frazer,
>>>>>> www.midicode.com
>>>>
>>>>> MIDI provides absolute pitch number addressing, where pitch
>>> number 0
>>>>> = C0. The synthesizer is expected to translate the pitch number
>>> into
>>>>> a vibration number (i.e. a specific frequency in Hz).
>>>>
>>>>> The concept of vibration number addressing is that the
> synthesizer
>>>>> has only to respond with a note of the requested vibration
> number.
>>>>> Vibration numbers are reals, unlike pitch numbers which are
>>> naturals.
>>>>> So, I ask the synth to give me 123.4567Hz and that is what I
>>> should
>>>>> get. A task which MIDI is not up to, so far as I know, since it
>>> does
>>>>> not encode reals.
>>>>
>>>> You are right. A protocol which allowed frequency to be
> specified
>>> as
>>>> a real number rather than note number would greatly facilitate
>>> microtonal
>>>> sequencing and other software.
>>>>
>>>> Monz, are you reading this thread? I know you are working on a
>>>> microtonal sequencer.
>>>>
>>>> Of course, a real number in floating point is typically
>>>> 8 bytes as opposed to 1 for note number so there would be much
> more
>>>> data to send. But then midi is now a very old standard so speed
> is
>>>> probably not an issue. What may be an issue is that midi short
>>> messages
>>>> are small enough to fit in a Windows message structure whereas
>>> something
>>>> containing a real would not. This may have implications for the
>>> fast
>>>> transfer of data from one application to another.
>>>>
>>>
>>> Computers are a zillion times faster nowadays making external
>>> hardware synths obsolete for that reason. Ok so we still have to
> plug
>>> in MIDI controller instruments for manual performance but there is
>>> now USB and Firewire. The problem is convincing the MIDI
> controller
>>> instrument makers to catch up - and replace MIDI with something
>>> slightly less horrible. For my purposes there is no external data
>>> connexion needed anyway - it all goes via internal buffers. I
> take it
>>> your program accepts MIDI-streams from Cakewalk and the suchlike.
>>> Accepting plugins would be one way of escaping from MIDI and
>>> specifying your own standard.
>>>
>>> Peter S.
>>
>> That convincing may take some time. First of all the majority of
> the market
>> has to become sophisticated enough to recognize the functional
> benefits of a
>> fully-flexible protocol. Secondly the market would be unlikely to
>> distinguish a hack to a bad protocol from a better protocol, and
> the former
>> would cost the industry less in making the adjustment. So it will
> probably
>> happen some day, when the cost of making the transition hits zero.
>>
>> I brought up the problems with MIDI on the apple CoreAudio list and
> I was
>> treated somewhat like an alien. But it was pointed out to me that
> apple's
>> internal musical instrument interface (a kind of audio unit, I
> forget the
>> exact name) does not suffer the limitations of MIDI, while also
> supporting
>> it as one means of control. However apple has no current interest
> or
>> motivations to consider changes to the external protocol. This
> might be a
>> slight misrepresentation since the conversation was more than a
> year ago,
>> but I think the gist of it is correct. Meanwhile MTS is arriving
> to serve
>> our purposes relatively well, and we can welcome the day when even
> that is
>> implemented fully in all its flavors, and also accurately enough to
> satisfy
>> the tuning-sensitive contingent.
>>
>> Finally, the very fact that so much customization and workarounds
> can be
>> done in software actually reduces the pressure for changes to MIDI.
>>
>> -Kurt
>
> I'm going to stick to my own solution. Since I am not trying to
> achieve realtime performance there is much I can dispense with,
> including MUDDY. I have written a program that will precisely adjust
> WAV samples, always maintaining the proper slope of the waveform,
> that takes as input parameters either old and new frequencies or a
> single coefficient. Then I paste it into a track using SoundForge.
> That's how I created 'Babylon' - my first using my software:-
> http://www.odeion.org/music/pws-babylon-l.mp3 = 32kb/s 11kHz
> http://www.odeion.org/music/pws-babylon-h.mp3 = 128kb/s 44kHz
>
> It works for me.
>
> Peter.

Now why didn't you just say that in the first place? And that was almost
*humble*. ;)

(I'm probably getting carried away now Peter. I'll try to stop. Aren't you
glad I didn't accept absolute power?)

-Kurt

🔗alternativetuning <alternativetuning@yahoo.com>

12/13/2003 12:50:38 AM

I don't understand the objection to midi, going from frequency to
midi number offsets is just a short subroutine.

And if even that's too much work, try something like PD -- it's free
and open sourced, runs on most platforms and accepts frequencies as
control numbers to control anything: midi, dsp, samples, video, your
dishwasher...

"Ortstheorie" = local theory, the theory prevailing in a community.
Is this term from ethnology, or from philosophy of science?

Gabor Bernath

🔗Werner Mohrlok <wmohrlok@hermode.com>

12/13/2003 1:11:54 AM

-----Ursprüngliche Nachricht-----
Von: Peter Wakefield Sault [mailto:sault@cyberware.co.uk]
Gesendet: Samstag, 13. Dezember 2003 09:16
An: tuning@yahoogroups.com
Betreff: [tuning] Re: odeion1-003

--- In tuning@yahoogroups.com, "monz" <monz@a...> wrote:
> --- In tuning@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> > >Who employs these MIDIots? Take something that's clunky and make
> it
> > >even clunkier. Why is it so difficult for them to comprehend
> > >SIMPLICITY? Why do they always make arrogant assumptions and
> always
> > >the wrong ones? There is nothing in all of that enables anyone
to
> > >request a particular vibration number. Talk about PERVERSE!
> >
> > Without going into the "MIDI sucks" issue, I can't understand why
> > you keep harping on the vibration number thing. If you want to
> > think in terms of vibration number, just write an interface to
> > MIDI/MTS. It shouldn't require more than a few lines of code.
> >
> > -Carl
>
>
> yeah, i don't get it either. it's simple to convert
> from frequency-vibration-numbers to tetradekamus and back.
>
>
> i'll illustrate by means of an example. let's say we want
> the note which is an 8:9 ratio above A-440.
>
> first you find out how many 12edo semitones above or below
> the reference (A-440) your desired note is. in this case,
> (9/8)*440 = 495 Hz. so it's log(9/8) * 12/log(2)
> = ~2.039100017 12edo semitones above A-440.
>
> so now you know that your nearest MIDI-note is the B which
> is 2 semitones above A-440.
>
> next you subtract the actual value you calculated from
> the nearest 12edo approximation. ~2.039100017 - 2 =
> ~0.039100017 semitones discrepancy.
>
> now multiply that number by 2^14, since there are 2^14
> tetradekamus in each 12edo semitone. ~0.039100017 * 2^14
> = ~640.6146836 tetradekamus. simply round that off to
> the nearest integer value and you have the tetradekamu
> correction necessary to tell MIDI to give you the 8:9 ratio,
> which is 495 Hz.
>
> so the answer for this example is A-440 * 9/8
> = A-440 + 2 12edo semitones + 641 tetradekamus.
>
>
> if you want to use any other MIDI ...mu value, simply
> change the power of 2 in the last calculation to reflect
> whichever ...mu you want. so a dodekamu would use 2^12,
> a hexamu would use 2^6, etc.
>
> MTS uses the tetradekamu convention, dividing every
> 12edo semitone into 2^14 tetradekamu units.
>
>
> so, in algorithm format, let's call the reference frequency R
> and the desired frequency F. to get the MIDI-note + tetradekamu:
>
> (int(log(F/R)*((12/log(2))))
> -((log(F/R)*((12/log(2)))-(int(log(F/R)*((12/log(2)))))*(2^14)
>
>
>
> -monz

Hi Joe

All that palaver to get a note which is 8:9 above 440Hz. Here's *my*
method:-

9/8 x 440Hz = 495Hz

Need I say more?

Peter

Peter,

science likes to complicate simple things. Science of music is no
exception.

Werner Mohrlok

You do not need web access to participate. You may subscribe through
email. Send an empty email to one of these addresses:
tuning-subscribe@yahoogroups.com - join the tuning group.
tuning-unsubscribe@yahoogroups.com - unsubscribe from the tuning group.
tuning-nomail@yahoogroups.com - put your email message delivery on hold
for the tuning group.
tuning-digest@yahoogroups.com - change your subscription to daily digest
mode.
tuning-normal@yahoogroups.com - change your subscription to individual
emails.
tuning-help@yahoogroups.com - receive general help information.

Your use of Yahoo! Groups is subject to the Yahoo! Terms of Service.

🔗Kurt Bigler <kkb@breathsense.com>

12/13/2003 1:13:34 AM

on 12/12/03 3:08 PM, Peter Frazer <paf@easynet.co.uk> wrote:

> In tuning digest 2863 Carl wrote
>
>>> I agree that John deLaubenfels adaptive tuning ( and the more recent
>>> Hermode tuning ) are superior in many respects to my "dynamic
>>> re-tuning" as implemented in Midicode Synthesizer. I was working
>>> in isolation at the time and it seemed to me an obvious step forward
>>> to use the capabilities of computers to shift Just Intonation into a
>>> new key at the apposite time. I still believe that this type of re-
>>> tuning is appropriate in some instances (like Peter Saults algorithmic
>>> composition) and I hope that I have made a contribution here.
>
>> As I understand it your "dynamic retuning" allows the composer to
>> specify roots on a dedicated MIDI channel. This is in a totally
>> different league than automatic retuning, in my book. The additional
>> choice may take a lifetime to master for different scales, but it
>> also brings a world of compositional opportunities that automatic
>> retuning cannot. Apples and oranges.
>
>> -Carl
>
> Thanks Carl.
>
> The basic idea of dynamic re-tuning is that you could have a midi
> pedal board on the re-tuning channel and hit the new key note
> typically during a modulation pivot chord. I wouldn't have thought
> that was too difficult to master.
>
> (Also works from sequencer)
>
> Peter.
> www.midicode.com

It's not difficult as long as you sufficiently restrict your possibilities!

Carl and I have been working on this in our "spare time" on my software
organ, and before me, Carl was working with others on a design (called
"xenharmonic moving windows"). But without biasing the current conversation
with our experiences, what would you propose as a protocol for a musician
with a pedalboard to allow an arbitrary modulation with an arbitrary common
tone, or what Peter, I think, calls a "bridge note"?

So far I have dealt with the situation in which the common tone is
either the old tonic or the new tonic, based on an arbitrary protocol in
which upward and downward movements on the pedalboard select one or the
other. I have been thinking about other solutions involving using a
temporary change to an intermediate key which I think allows any note to
then be common, though I have not analyzed or tested this extensively. So
it requires 2 pedal key-presses per modulation in the general case. This is
awkward and interferes with performance. It can be improved upon by various
tricks, such as allowing the intermediate key to be recognized by a time
overlap with the previous pedal key and a kind of mono-mode functioning.
This is probably not brilliantly clear since I am in a rush.

But just to pose the question to you again, with a specific example. Let's
say you have a 12-tone scale based on the harmonic series. Carl gave me
this one, which is quite useful, perhaps in some sense optimal:

16:17:18:19:20:21:22:24:26:27:28:30

And suppose you have a 3:4:5 (12:16:20) chord at G-C-E, and you want the E
to remain at a fixed pitch as you hold the same 3 keys while the chord
becomes an 11:15:19, with the bottom 2 notes retuning to create this. (I
hope I got that right.)

This is not necessarily the most musically useful example, but that was too
much to come up with quickly here.

What protocol for pedal use would you suggest to achieve such freedom of
choice while playing?

I see that Robert Walker has posted something related to this, but have not
had the time to analyze it. Carl and I should probably be talking to Robert
too.

Thanks,
Kurt
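
Kurt's common-tone example can be checked numerically. The helper below is
purely illustrative (not from any existing tuning API): it rescales the new
chord so the voice at a chosen index keeps its absolute pitch:

```python
def retune_with_common_tone(old_ratios, new_ratios, common, base_hz):
    """Return frequencies for new_ratios, scaled so the voice at index
    `common` keeps the pitch it had under old_ratios. Illustrative only."""
    old_hz = [base_hz * r for r in old_ratios]
    # Scale factor that maps the new chord's common voice onto the old pitch:
    scale = old_hz[common] / (base_hz * new_ratios[common])
    return [base_hz * r * scale for r in new_ratios]

# A 3:4:5 chord on G-C-E with the E (top voice) held fixed while the chord
# becomes 11:15:19, as in Kurt's example. With a base of 100 Hz the top
# voice sits at 500 Hz and stays there; the lower two voices retune.
print(retune_with_common_tone([3, 4, 5], [11, 15, 19], 2, 100.0))
```

The arithmetic is trivial; the hard part, as the thread makes clear, is the
pedal protocol for *selecting* the common tone during performance.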

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/13/2003 1:17:43 AM

--- In tuning@yahoogroups.com, "alternativetuning"
<alternativetuning@y...> wrote:
> I don't understand the objection to midi, going from frequency to
> midi number offsets is just a short subroutine.
>
> And if even that's too much work, try something like PD -- it's
free
> and open sourced, runs on most platforms and accepts frequencies as
> control numbers to control anything: midi, dsp, samples, video,
your
> dishwasher...
>
> "Ortstheorie" = local theory, the theory prevailing in a community.
> Is this term from ethnology, or from philosophy of science?
>
> Gabor Bernath

Hi Gabor

I'm a bit of a stickler for precision. Since the only reason I want
to specify particular vibration numbers is to achieve perfect
harmony, I want the *exact* frequency that will do that for me.

Now you tell me. What exactly is the MIDI message for 440.0008Hz?

Peter

PS - I'll tell you now to save you the effort. There is no MIDI
message for 440.0008Hz.

🔗Werner Mohrlok <wmohrlok@hermode.com>

12/13/2003 2:02:05 AM

-----Ursprüngliche Nachricht-----
Von: alternativetuning [mailto:alternativetuning@yahoo.com]
Gesendet: Samstag, 13. Dezember 2003 09:51
An: tuning@yahoogroups.com
Betreff: [tuning] Re: odeion1-003, "Ortstheorie"

"Ortstheorie" = local theory, the theory prevailing in a community.
Is this term from ethnology, or from philosophy of science?

Gabor Bernath

From science.

I hope I can explain it with my poor English, and I hope there will be
someone who can explain it more precisely:

"Ortstheorie" is a German term for one of the various theories of hearing.
This theory says that our hearing works somewhat like a Fourier
transformation. In the abstract this means:
our ear splits complex musical tones into their partial tones, each of
them perceived at a different place in our ear, in the "Schnecke"
(the cochlea).
The problem with this theory is that it cannot explain why we hear
combination tones.
The explanation of the "Ortstheorie" is: these combination tones are
"somehow produced" in our brain.
I feel that is a funny theory: the combination tones physically exist
in the air, every tuner receives them, and it is possible to measure them.
Nevertheless this theory says the human ear cannot identify them, but a
hidden function in our brain restores them. This may be possible, but it
seems somehow fantastic to me.

There are also other theories that better explain the perception of
combination tones, but I hope there are other members who can explain them.

Werner Mohrlok


🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/13/2003 2:12:08 AM

--- In tuning@yahoogroups.com, "Werner Mohrlok" <wmohrlok@h...> wrote:
>
>
> -----Ursprüngliche Nachricht-----
> Von: alternativetuning [mailto:alternativetuning@y...]
> Gesendet: Samstag, 13. Dezember 2003 09:51
> An: tuning@yahoogroups.com
> Betreff: [tuning] Re: odeion1-003, "Ortstheorie"
>
> "Ortstheorie" = local theory, the theory prevailing in a community.
> Is this term from ethnology, or from philosophy of science?
>
> Gabor Bernath
>
> From science.
>
> I hope, I can explain it with my poor English. And I hope, there
will be
> someone
> who can it explain more precisely:
>
> "Ortstheorie" is a german term of one of the different "hearing"
theories.
> This theory
> says that our hearing is somehow working like a fourier
transformation.
> This means
> in abstract:
> Our ear splits the complex musical tones into their partial tones,
everyone
> of them
> perceived on a different place in our ear. In the "Schnecke"
(Snail???)
> The problem of this theory is that with it one cannot explain why
we hear
> the
> combination tones.
> The explanation of the "Ortstheorie" is: These combination tones are
> "somehow produced" in our brain.
> I feel that is a funny theory: The combination tones are physical
existing
> in the air,
> every tuner receives them, it is possible to measure them.
Neverless this
> theory
> says: The human ear cannot identify them, but a hidden function in
our
> brain restores them. This may be possible but it seems to me somehow
> fantastic.
>
> There exist, too, other theories explaining better the perception of
> combination
> tones. But I hope there exist other members who can explain them.
>
> Werner Mohrlok
>

Hi Werner

It's not that complex at all. Difference tones are actually amplitude
modulations of the combined component frequencies. The combined
components act as a carrier wave.

Would you like an image? I can put one together for you with my
little synth program.

Peter

🔗alternativetuning <alternativetuning@yahoo.com>

12/13/2003 2:13:54 AM

Werner:

I did not know if you were talking about a particular theory or doing
a meta-discussion about theory making.

In any case, the translation you need is "place theory". Maybe
Martin Braun can come onlist and update us on the status of place
theory, as well as on genetics of musical absolute pitch. There is
much work in this field and much is now out of date.

Gabor

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/13/2003 2:19:01 AM

--- In tuning@yahoogroups.com, "Peter Wakefield Sault" <sault@c...>
wrote:
> --- In tuning@yahoogroups.com, "Werner Mohrlok" <wmohrlok@h...>
wrote:
> >
> >
> > -----Ursprüngliche Nachricht-----
> > Von: alternativetuning [mailto:alternativetuning@y...]
> > Gesendet: Samstag, 13. Dezember 2003 09:51
> > An: tuning@yahoogroups.com
> > Betreff: [tuning] Re: odeion1-003, "Ortstheorie"
> >
> > "Ortstheorie" = local theory, the theory prevailing in a
community.
> > Is this term from ethnology, or from philosophy of science?
> >
> > Gabor Bernath
> >
> > From science.
> >
> > I hope, I can explain it with my poor English. And I hope, there
> will be
> > someone
> > who can it explain more precisely:
> >
> > "Ortstheorie" is a german term of one of the different "hearing"
> theories.
> > This theory
> > says that our hearing is somehow working like a fourier
> transformation.
> > This means
> > in abstract:
> > Our ear splits the complex musical tones into their partial
tones,
> everyone
> > of them
> > perceived on a different place in our ear. In the "Schnecke"
> (Snail???)
> > The problem of this theory is that with it one cannot explain why
> we hear
> > the
> > combination tones.
> > The explanation of the "Ortstheorie" is: These combination tones
are
> > "somehow produced" in our brain.
> > I feel that is a funny theory: The combination tones are physical
> existing
> > in the air,
> > every tuner receives them, it is possible to measure them.
> Neverless this
> > theory
> > says: The human ear cannot identify them, but a hidden function
in
> our
> > brain restores them. This may be possible but it seems to me
somehow
> > fantastic.
> >
> > There exist, too, other theories explaining better the perception
of
> > combination
> > tones. But I hope there exist other members who can explain them.
> >
> > Werner Mohrlok
> >
>
> Hi Werner
>
> It's not that complex at all. Difference tones are actually
amplitude
> modulations of the combined component frequencies. The combined
> components act as a carrier wave.
>
> Would you like an image? I can put one together for you with my
> little synth program.
>
> Peter

Which reminds me of a little experiment that I want to perform,
though I haven't advanced enough with my synth to be able to do it
yet.

Since a difference tone is an amplitude modulation, I should be able
to cancel it out by providing a separate equal amplitude modulation
at 90 degrees to the difference tone. Then I would be able to tell
whether detection of the interval that gave rise to the difference
tone is in fact dependent upon that difference tone.

Peter
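
The identity Peter is relying on, that two summed sine tones are equivalent
to a carrier at the mean frequency whose amplitude is modulated at the
difference frequency, can be checked numerically. This verifies only the
trigonometry, not the perceptual claim his experiment is meant to test:

```python
import math

# sin(2*pi*f1*t) + sin(2*pi*f2*t)
#   = 2 * cos(pi*(f1-f2)*t) * sin(pi*(f1+f2)*t)
# i.e. a carrier at the mean frequency, amplitude-modulated so that the
# envelope beats at the difference frequency (here 5 Hz).
f1, f2 = 440.0, 445.0
for t in (0.0, 0.0123, 0.2, 0.777):
    lhs = math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)
    rhs = 2 * math.cos(math.pi * (f1 - f2) * t) \
            * math.sin(math.pi * (f1 + f2) * t)
    assert abs(lhs - rhs) < 1e-9
print("sum-to-product identity holds")
```

Cancelling the envelope term, as Peter proposes, would then isolate whatever
part of interval perception does not depend on the beat pattern.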

🔗Werner Mohrlok <wmohrlok@hermode.com>

12/13/2003 2:29:24 AM

-----Ursprüngliche Nachricht-----
Von: Peter Wakefield Sault [mailto:sault@cyberware.co.uk]
Gesendet: Samstag, 13. Dezember 2003 10:18
An: tuning@yahoogroups.com
Betreff: [tuning] Re: odeion1-003, "Ortstheorie"

--- In tuning@yahoogroups.com, "alternativetuning"
<alternativetuning@y...> wrote:
> I don't understand the objection to midi, going from frequency to
> midi number offsets is just a short subroutine.
>
> And if even that's too much work, try something like PD -- it's
free
> and open sourced, runs on most platforms and accepts frequencies as
> control numbers to control anything: midi, dsp, samples, video,
your
> dishwasher...
>
Hi Gabor

I'm a bit of a stickler for precision. Since the only reason I want
to specify particular vibration numbers is to achieve perfect
harmony, I want the *exact* frequency that will do that for me.

Now you tell me. What exactly is the MIDI message for 440.0008Hz?

Peter

PS - I'll tell you now to save you the effort. There is no MIDI
message for 440.0008Hz.

Indeed, the smallest step in a two-byte tuning format is from 440 to
440.00155.. Hz.
And I know of only one instrument which supports such small steps: the
synthesizer "Virus" by Access. It supports the MIDI data format "Single
Note Tuning Change Real Time", a two-byte tuning format, and it supports
it with its internal physical precision too.
Information on all the MIDI tuning formats can be found at:

www.midi.org

Werner Mohrlok
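Werner's figure is easy to verify. A minimal sketch (the function name is mine), assuming the two-byte tuning word means a 14-bit fraction of a 12edo semitone, i.e. 196608 equal steps per octave:

```python
import math

def mts_step_hz(f: float, fraction_bits: int = 14) -> float:
    """Smallest frequency increment above f when pitch is quantized to
    2**fraction_bits steps per 12edo semitone (MTS two-byte fraction)."""
    steps_per_octave = 12 * 2 ** fraction_bits   # 196608 for 14 bits
    return f * (2 ** (1 / steps_per_octave) - 1)

# One step above 440 Hz lands near Werner's 440.00155 Hz.
print(440 + mts_step_hz(440.0))
```

So the resolution near 440 Hz is about 0.0016 Hz, and 440.0008 Hz indeed falls between two adjacent tuning words.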


You do not need web access to participate. You may subscribe through
email. Send an empty email to one of these addresses:
tuning-subscribe@yahoogroups.com - join the tuning group.
tuning-unsubscribe@yahoogroups.com - unsubscribe from the tuning group.
tuning-nomail@yahoogroups.com - put your email message delivery on hold
for the tuning group.
tuning-digest@yahoogroups.com - change your subscription to daily digest
mode.
tuning-normal@yahoogroups.com - change your subscription to individual
emails.
tuning-help@yahoogroups.com - receive general help information.

Your use of Yahoo! Groups is subject to the Yahoo! Terms of Service.

🔗Werner Mohrlok <wmohrlok@hermode.com>

12/13/2003 2:29:25 AM

-----Original Message-----
From: Peter Wakefield Sault [mailto:sault@cyberware.co.uk]
Sent: Saturday, 13 December 2003 11:12
To: tuning@yahoogroups.com
Subject: [tuning] Re: "Ortstheorie"

--- In tuning@yahoogroups.com, "Werner Mohrlok" <wmohrlok@h...> wrote:
>
>
> -----Original Message-----
> From: alternativetuning [mailto:alternativetuning@y...]
> Sent: Saturday, 13 December 2003 09:51
> To: tuning@yahoogroups.com
> Subject: [tuning] Re: odeion1-003, "Ortstheorie"
>
> "Ortstheorie" = local theory, the theory prevailing in a community.
> Is this term from ethnology, or from philosophy of science?
>
> Gabor Bernath
>
> From science.
>
> I hope I can explain it with my poor English, and I hope there will
> be someone who can explain it more precisely:
>
> "Ortstheorie" is a German term for one of the various "hearing"
> theories. This theory says that our hearing somehow works like a
> Fourier transformation. In the abstract, this means: our ear splits
> complex musical tones into their partial tones, each of them
> perceived at a different place in the ear, in the "Schnecke" (the
> snail, i.e. the cochlea).
> The problem with this theory is that it cannot explain why we hear
> combination tones.
> The explanation of the "Ortstheorie" is: these combination tones are
> "somehow produced" in our brain.
> I feel that is a funny theory: the combination tones physically
> exist in the air, every tuner receives them, and it is possible to
> measure them. Nevertheless this theory says the human ear cannot
> identify them, but a hidden function in our brain restores them.
> This may be possible, but it seems somehow fantastic to me.
>
> There also exist other theories which better explain the perception
> of combination tones. But I hope there are other members who can
> explain them.
>
> Werner Mohrlok
>

Hi Werner

It's not that complex at all. Difference tones are actually amplitude
modulations of the combined component frequencies. The combined
components act as a carrier wave.

Would you like an image? I can put one together for you with my
little synth program.

Peter

Hi Peter,

you are right. But we should all remember that mathematical models are
not "reality". They only describe physical effects.

Thank you for your offer. I would like to get it.
Thank you in advance.
Werner M.


🔗Werner Mohrlok <wmohrlok@hermode.com>

12/13/2003 2:39:27 AM

-----Original Message-----
From: alternativetuning [mailto:alternativetuning@yahoo.com]
Sent: Saturday, 13 December 2003 11:14
To: tuning@yahoogroups.com
Subject: [tuning] Re: "Ortstheorie"

Werner:

I did not know if you were talking about a particular theory or doing
a meta-discussion about theory making.

In any case, the translation you need is "place theory". Maybe
Martin Braun can come onlist and update us on the status of place
theory, as well as on the genetics of musical absolute pitch. There is
much work in this field and much is now out of date.

Gabor

I know - and the groups championing one theory or the other
fight hard against one another...

Thank you for the translation

Werner M.


🔗Werner Mohrlok <wmohrlok@hermode.com>

12/13/2003 2:44:11 AM

-----Original Message-----
From: Peter Wakefield Sault [mailto:sault@cyberware.co.uk]
Sent: Saturday, 13 December 2003 11:19
To: tuning@yahoogroups.com
Subject: [tuning] Re: "Ortstheorie"

--- In tuning@yahoogroups.com, "Peter Wakefield Sault" <sault@c...>
wrote:
> [earlier quoted messages snipped - Werner's "Ortstheorie" explanation
> and Peter's difference-tone reply, quoted in full above]

Which reminds me of a little experiment that I want to perform,
though I haven't advanced enough with my synth to be able to do it
yet.

Since a difference tone is an amplitude modulation, I should be able
to cancel it out by providing a separate equal amplitude modulation
at 90 degrees to the difference tone. Then I would be able to tell
whether detection of the interval that gave rise to the difference
tone is in fact dependent upon that difference tone.

Peter

Some years ago I did a little experiment with the "Microwave" synth
and headphones. With the Microwave it was, and is, possible to
separate the audio output to "left" and "right" precisely and
completely.
First I sent the two tones of a just-intonation third to both ears.
The difference tone(s) could be heard distinctly.
Then I separated the two tones to the left and the right ear, and the
difference tones disappeared.
At first I assumed this was proof against the place theory, but later
I concluded that it doesn't prove one theory or the other.

Werner M


🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/13/2003 3:56:46 AM

> >
> > There exist, too, other theories explaining better the
perception of
> > combination
> > tones. But I hope there exist other members who can explain
them.
> >
> > Werner Mohrlok
> >
>
> Hi Werner
>
> It's not that complex at all. Difference tones are actually
amplitude
> modulations of the combined component frequencies. The combined
> components act as a carrier wave.
>
> Would you like an image? I can put one together for you with my
> little synth program.
>
> Peter
>
> Hi Peter,
>
> you are right. But we all should not forget: All mathematic
models are not
> the "reality".
> They only describe physical effects by mathematic models.
>
> Thank you for your offer. I would like to get it.
> Thankyou in advance.
> Werner M.

Hi Werner

I have created a Photo Album called PWS and uploaded an image,
WaveMaster.jpg, into it. It shows 2 constituent sinewaves, one of
800Hz and another of 900Hz, which comprise the interval of a
wholetone of 8:9, and the additive mix of the two. The Difference
Tone of 100Hz is clearly visible as amplitude modulation.

Peter
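For readers without access to the album, the same picture can be reproduced numerically. A minimal sketch with NumPy (the sample rate and variable names are my own choices), using the identity sin A + sin B = 2 sin((A+B)/2) cos((A-B)/2):

```python
import numpy as np

sr = 48000                        # sample rate in Hz (arbitrary choice)
t = np.arange(sr) / sr            # one second of sample times
mix = np.sin(2*np.pi*800*t) + np.sin(2*np.pi*900*t)

# The mix is an 850 Hz carrier under a 2|cos(2*pi*50*t)| envelope,
# which peaks 100 times per second - the 100 Hz difference tone.
envelope = 2 * np.abs(np.cos(2*np.pi*50*t))
print(np.max(np.abs(mix) - envelope))   # never above 0, up to rounding
```

Plotting `mix` against `envelope` gives the same amplitude-modulation picture as WaveMaster.jpg.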

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/13/2003 4:08:39 AM

--- In tuning@yahoogroups.com, "Werner Mohrlok" <wmohrlok@h...> wrote:
>
> -----Original Message-----
> From: Peter Wakefield Sault [mailto:sault@c...]
> Sent: Saturday, 13 December 2003 11:19
> To: tuning@yahoogroups.com
> Subject: [tuning] Re: "Ortstheorie"
>
>
> --- In tuning@yahoogroups.com, "Peter Wakefield Sault" <sault@c...> wrote:
> > [earlier quoted messages snipped - Werner's "Ortstheorie"
> > explanation and Peter's difference-tone reply, quoted in full
> > earlier in the thread]
>
> Which reminds me of a little experiment that I want to perform,
> though I haven't advanced enough with my synth to be able to do it
> yet.
>
> Since a difference tone is an amplitude modulation, I should be able
> to cancel it out by providing a separate equal amplitude modulation
> at 90 degrees to the difference tone. Then I would be able to tell
> whether detection of the interval that gave rise to the difference
> tone is in fact dependent upon that difference tone.
>
> Peter
>
> Some years ago I did a little experiment with the "Microwave" synth
> and headphones. With the Microwave it was, and is, possible to
> separate the audio output to "left" and "right" precisely and
> completely.
> First I sent the two tones of a just-intonation third to both ears.
> The difference tone(s) could be heard distinctly.
> Then I separated the two tones to the left and the right ear, and
> the difference tones disappeared.

That's what I would expect.

> At first I assumed this was proof against the place theory, but
> later I concluded that it doesn't prove one theory or the other.
>
> Werner M

To measure the effect upon your ability to identify the interval
would involve having someone else select intervals in both ways and
measure your success rate in each.

Peter

🔗Werner Mohrlok <wmohrlok@hermode.com>

12/13/2003 5:14:04 AM

-----Original Message-----
From: Peter Wakefield Sault [mailto:sault@cyberware.co.uk]
Sent: Saturday, 13 December 2003 12:57
To: tuning@yahoogroups.com
Subject: [tuning] Re: "Ortstheorie" - Difference Tone Image

> [earlier quoted messages snipped]

Hi Werner

I have created a Photo Album called PWS and uploaded an image,
WaveMaster.jpg, into it. It shows 2 constituent sinewaves, one of
800Hz and another of 900Hz, which comprise the interval of a
wholetone of 8:9, and the additive mix of the two. The Difference
Tone of 100Hz is clearly visible as amplitude modulation.

Peter

Thanks

Werner M.


🔗Robert Walker <robertwalker@ntlworld.com>

12/13/2003 9:42:26 AM

Hi Kurt,

Interesting to hear you've been working on it too.

I see - when your player presses the foot board it
treats the current note played as the new tonic
to retune to - is that right? Perhaps I may
have a go at implementing that too; it sounds
like a useful idea.

In my program there is a retuning octave,
and the player chooses which note to use for the bridge note.
So you can choose any note for that. You have
one octave of the keyboard set aside for that
(in twelve-tone scales), so to tune to the current
note played, the player needs to play the same
note simultaneously in the retuning octave.
If you want the bridge note to be an Ab
play an Ab in the retuning octave.
Also if the music is beginning to drift
and you want to reset with a diesis shift
I do it so that you just play a note twice,
e.g. C twice, to reset to the original pitch
for the C.

That wouldn't be very convenient really for
more complex pieces though as you
have to keep playing notes over in the retuning
octave - which I normally put at leftmost
octave of keyboard but it can go anywhere
(even in the middle if one likes).

One could alternatively
have an extra small keyboard relaying it
on another midi channel in order to not
lose any of the area of ones main keyboard.

Alternatively you can also use a foot controller -
there the problem is that selecting among the 12 notes
requires rather fine control of the controller.
Though there is visual feedback to show
which note you selected for retuning, it still takes
rather fine control for twelve-note scales. Perhaps
one could learn to use that method.

It isn't so hard for e.g. pentatonic scales,
and all these methods are suitable for use from
a sequencer of course.

However, if one were to get a pedal keyboard
(what are they called? - the things that
organists use to play with their feet) - it only
needs to be one octave - then the player could
select the bridge note using the feet. One could
still have another pedal for your other method,
where the bridge note is the current note played,
as that is probably conceptually easier than
simultaneously playing the identical note on the
pedals.

I think with my program the problem has been
partly that the GUI is still a bit rough
- everything works but it looks more complex
at first sight than it really is when you
use it, and it isn't so immediately obvious
to a first time user how it works I think.

I'm going through the program
at present highlighting various of the things
it can do and doing a special GUI "view" for each one
designed to make them easier to use and when
I get to this one then it will make it easier to use.

Anyway maybe by sharing and pooling our ideas
we can come up with something better together.
Hope maybe some of these ideas I use may help,

Robert

🔗Gene Ward Smith <gwsmith@svpal.org>

12/13/2003 11:26:21 AM

--- In tuning@yahoogroups.com, Peter Frazer <paf@e...> wrote:

> I'm not quite sure what you mean here Gene, could you point me
> to a reference?

http://www.linet.gr.jp/~tamuki/timidity/mts/tuning.html

> By base 128 I assume you mean 7 bit numbers so 21 bits,
> 2 million or so combinations, but why 2.1072 digits?

128 is a bit larger than 100, and base 100 uses digits from 00 to 99,
or exactly two digits worth of base 10. Base 128 has digits equating
to log10(128) of base 10 digits, or 2.1072 base 10 digits.

> The original idea that Peter put forward was to specify the individual
> frequency of each note rather than as a note number, i.e. in the
> note-on message (or equivalent).

You can adjust individual notes with so-called "pitch-bends" if you
like.
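A hedged sketch of the pitch-bend approach Gene mentions (not anyone's published code): send a pitch-bend, then the note-on. The +/-2 semitone bend range is a common default but is receiver-dependent, so it is an assumption here:

```python
import math

def freq_to_note_and_bend(f, a4=440.0, bend_range=2.0):
    """Map a frequency to (MIDI note, 14-bit pitch-bend value), assuming
    the receiver bends +/- bend_range semitones over the 0..16383 range."""
    semitones = 69 + 12 * math.log2(f / a4)   # 69 = MIDI note for A4
    note = round(semitones)
    bend = round(8192 + 8192 * (semitones - note) / bend_range)
    return note, bend

print(freq_to_note_and_bend(123.4567))   # Peter's example frequency
print(freq_to_note_and_bend(440.0))      # A4: (69, 8192), i.e. no bend
```

The limitation is the one raised elsewhere in the thread: on most receivers a pitch-bend applies per channel, not per note.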

🔗Gene Ward Smith <gwsmith@svpal.org>

12/13/2003 11:36:34 AM

--- In tuning@yahoogroups.com, "Peter Wakefield Sault" <sault@c...>
wrote:

> You are missing the point entirely, Manuel. How I arrive at a
> particular frequency for a particular note that I want is entirely my
> own affair. If I calculate by whatever means I am using that the next
> note I want is 123.4567Hz, then the synth should just simply give it
> to me as requested.

That's how Csound works. Though you hate logarithmic measures, you
could equally well use those instead, however. Moreover, while midi
pitch bends are an ugly system, they can easily enough do this also.

🔗monz <monz@attglobal.net>

12/13/2003 11:49:33 AM

hi Peter,

--- In tuning@yahoogroups.com, "Peter Wakefield Sault" <sault@c...>
wrote:

> --- In tuning@yahoogroups.com, "monz" <monz@a...> wrote:

> > MTS uses the tetradekamu convention, dividing every
> > 12edo semitone into 2^14 tetradekamu units.
> >
> >
> > so, in algorithm format, let's call the reference frequency R
> > and the desired frequency F. to get the MIDI-note + tetradekamu:
> >
> > int(log(F/R)*(12/log(2)))
> > (log(F/R)*(12/log(2)) - int(log(F/R)*(12/log(2)))) * (2^14)
> >
> >
> >
> > -monz
>
> Hi Joe
>
> All that palaver to get a note which is 8:9 above 440Hz.
> Here's *my* method:-
>
> 9/8 x 440Hz = 495Hz
>
> Need I say more?
>
> Peter

and in another message:

> From: "peter_wakefield_sault" <sault@c...>
> Date: Sat Dec 13, 2003 7:50 am
> Subject: Re: odeion1-003
>
>--- In tuning@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >
> > Without going into the "MIDI sucks" issue, I can't
> > understand why you keep harping on the vibration number
> > thing. If you want to think in terms of vibration number,
> > just write an interface to MIDI/MTS. It shouldn't require
> > more than a few lines of code.
> >
> > -Carl
>
> I keep "harping on" about it because everybody keeps
> arguing against it. I'm not trying to force you or anyone
> else to do it the simple way. I just want to be able to do
> it the simple way myself. What is your problem with that?

don't take what i'm writing as an argument against your
admirably simple and elegant method. and anyway, of course
we've all already wished that a microtonalist could simply
use frequency-numbers instead of these more convoluted
calculations.

but unfortunately, if MIDI is your medium of choice, you'll
have to do the calculations, because MIDI was set up on
a 12edo basis.

if you're using a computer (which you obviously are), then
a basic tool like an electronic spreadsheet will do the job
for you with no hassle at all.

just input your ratio in one column, calculate frequency
numbers in the next column if you need to see those, then
set up two columns for the MIDI values, one for the MIDI-note

int(log(F/R)*(12/log(2)))

and another for the tetradekamu value.

(log(F/R)*(12/log(2)) - int(log(F/R)*(12/log(2)))) * (2^14)

after you've set up the formula, all you need to do is
input ratios. the spreadsheet will do the rest and spit
out the two MIDI answers.

-monz
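monz's two spreadsheet columns can also be written as one small function. This is my reconstruction of his expressions (the tetradekamu is the fractional semitone times 2^14; 8.1758 Hz is the conventional frequency of MIDI note 0):

```python
import math

def midi_note_and_tetradekamu(f, r=8.1758):
    """Split frequency f into whole 12edo semitones above reference r,
    plus a 14-bit 'tetradekamu' fraction of the remaining semitone."""
    semitones = 12 * math.log2(f / r)
    note = int(semitones)
    tetradekamu = round((semitones - note) * 2 ** 14)
    return note, tetradekamu

# 9/8 above 440 Hz, i.e. Peter's 495 Hz, is about 71.039 semitones up.
print(midi_note_and_tetradekamu(9 / 8 * 440))
```

So Peter's "9/8 x 440Hz = 495Hz" becomes MIDI note 71 plus a small 14-bit remainder.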

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/13/2003 12:02:06 PM

--- In tuning@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...> wrote:
> --- In tuning@yahoogroups.com, "Peter Wakefield Sault" <sault@c...>
> wrote:
>
> > You are missing the point entirely, Manuel. How I arrive at a
> > particular frequency for a particular note that I want is entirely
> > my own affair. If I calculate by whatever means I am using that the
> > next note I want is 123.4567Hz, then the synth should just simply
> > give it to me as requested.
>
> That's how Csound works. Though you hate logarithmic measures, you
> could equally well use those instead, however. Moreover, while midi
> pitch bends are an ugly system, they can easily enough do this also.

Hi Gene

Csound? Wossat?

I take your point about pitch bends. And I can see that the old-
timers here have worked overtime to find workarounds for MIDI and I
am grateful for the methods which have been shown to me.

I am averse only to complicated methods. I am aware that, so far as
software is concerned, the greater the complexity of a process, the
greater the chance of error in programming it and the greater the
difficulty in verifying the results.

Peter

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/13/2003 12:04:36 PM

--- In tuning@yahoogroups.com, "monz" <monz@a...> wrote:
> [monz's message, quoted in full above, snipped]

Thank you, Joe. I have saved the formula and will try it out.

Peter

🔗Gene Ward Smith <gwsmith@svpal.org>

12/13/2003 12:13:56 PM

--- In tuning@yahoogroups.com, "monz" <monz@a...> wrote:

> yeah, i don't get it either. it's simple to convert
> from frequency-vibration-numbers to tetradekamus and back.

It's conceptually even easier than this. Take the midi base frequency
f to be 8.1758 Hz, and now find cents(495/f)/100 = 12 log2(495/f),
which is about 71.0391. Now convert this into base 128, and get
(71).(5)(1) as your answer. It is log base 2^(1/12) of the frequency
ratio with the base frequency, expressed in base 128.
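Gene's conversion can be checked in a few lines (a sketch; the variable names are mine, and the last base-128 digit is rounded):

```python
import math

f0 = 8.1758                          # conventional MIDI note 0 frequency, Hz
steps = 12 * math.log2(495 / f0)     # ~71.039 semitones above note 0

note = int(steps)                    # integer part: 71
frac = steps - note
d1 = int(frac * 128)                 # first base-128 fractional digit
d2 = round((frac * 128 - d1) * 128)  # second digit, rounded
print(f"({note}).({d1})({d2})")      # Gene's (71).(5)(1)
```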

🔗Carl Lumma <ekin@lumma.org>

12/13/2003 12:23:33 PM

>In my program there is a retuning octave,
>and player chooses which note to use for the bridge note.
>So you can choose any note for that. You have
>one octave of the keyboard set aside for that
>(in twelve tone scales) so to tune to the current
>note played, player needs to play the same
>note simultaneously in the retuning octave.
>If you want the bridge note to be an Ab
>play an Ab in the retuning octave.

Robert, what is a "bridge note"? What is actually
happening here? Please give an example.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

12/13/2003 12:23:41 PM

--- In tuning@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:

> Incidentally there is a MIDI alternative called OSC (Open Sound
> Control) and there are ethernet-based interfaces that support this
> currently being prototyped at CNMAT in Berkeley, CA with eventual
> technology transfer and production planned.

Where is tuning to be found in the OSC spec? I just looked on the
CNMAT web page and didn't find it.

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/13/2003 12:34:48 PM

--- In tuning@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >In my program there is a retuning octave,
> >and player chooses which note to use for the bridge note.
> >So you can choose any note for that. You have
> >one octave of the keyboard set aside for that
> >(in twelve tone scales) so to tune to the current
> >note played, player needs to play the same
> >note simultaneously in the retuning octave.
> >If you want the bridge note to be an Ab
> >play an Ab in the retuning octave.
>
> Robert, what is a "bridge note"? What is actually
> happening here? Please give an example.
>
> -Carl

Carl

A modulation bridge is that part of a modulation where the notes are
common to both keys and which is therefore ambiguous. It may extend
over several notes. However, a composer has to indicate a change of
accidentals at a single point and the first note following this can
be taken as the bridge note, which is usually the first note of a
bar. The use of common notes makes the transition from one key to
another smoother and improves the effect for the listener.

Peter

🔗Carl Lumma <ekin@lumma.org>

12/13/2003 12:44:24 PM

>A modulation bridge is that part of a modulation where the notes are
>common to both keys and which is therefore ambiguous. It may extend
>over several notes. However, a composer has to indicate a change of
>accidentals at a single point and the first note following this can
>be taken as the bridge note, which is usually the first note of a
>bar. The use of common notes makes the transition from one key to
>another smoother and improves the effect for the listener.
>
>Peter

So if you're specifying a common tone, how do you specify which
change you're making?

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

12/13/2003 12:52:15 PM

--- In tuning@yahoogroups.com, "Peter Wakefield Sault" <sault@c...>
wrote:

> Csound? Wossat?

It's a powerful freeware program for sound synthesis. Googling
on "csound" brings up a lot of stuff; it's been around a long time
and has many devoted fans.

🔗Robert Walker <robertwalker@ntlworld.com>

12/13/2003 1:37:24 PM

Hi Carl,

> Robert, what is a "bridge note"? What is actually
> happening here? Please give an example.

Rightio - yes my bridge notes are just the new tonics
in the new scales. So if say you start at C and
you want to change to an E then the E remains at its
pitch whatever it is so far (say 5/4), but all the other notes get retuned to
make E the new tonic. Change now to say Ab as the tonic and
the Ab is now at perhaps 25/16, but the other notes all
get retuned so since our original tuning had
pure major thirds, now C is 125/64.
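Robert's chain of modulations can be traced with exact fractions. This is a toy sketch, not his program's actual code; the two-degree "scale" is just enough to show the drift he describes:

```python
from fractions import Fraction as F

MAJOR_THIRD = F(5, 4)   # the pure third in Robert's example tuning

def modulate(bridge_pitch):
    """The bridge note keeps its pitch and becomes the new tonic;
    other degrees (here just the third) are rebuilt relative to it."""
    return {"tonic": bridge_pitch, "third": bridge_pitch * MAJOR_THIRD}

c_key  = modulate(F(1, 1))          # C = 1/1, E = 5/4
e_key  = modulate(c_key["third"])   # E stays at 5/4, G# = 25/16
ab_key = modulate(e_key["third"])   # Ab = 25/16, and C is now 125/64
print(ab_key["third"])              # 125/64 - the drift Robert notes
```

Three pure-third modulations land C on 125/64 rather than 2/1, which is why he needs the double-press reset described below.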

But this isn't the most general one can have.

Maybe one wants to do a bridge somewhere else
other than the tonic.

E.g. set E = 5/4 to the 3/2 of the new scale.
I.e. change the key to A major, with E as the
bridge note.

My method can't handle it to that generality yet.
The bridge note is always set to the tonic of the
new scale. It means you can only change the tuning
when you play the tonic of the new scale.

I see that really you have two notes to set there
- the bridge note and the new tonic.

So that is more of a challenge to the User Interface.
I suppose you could do it by saying that
the bridge note is to be the current note played
in a melody line, or a best fit for the notes currently in play
for a chord, and the user just changes the tonic. Come to think
of it that is probably what Peter is describing.

That sounds like a far more natural system
to play with. User doesn't need to bother
about the bridge notes, so the music always
dovetails around at least one note, and just sets the
tonic to change to.

So if piece moves to a perceived new key,
user can simply set the
new tonic to that key. Or to the roots
of the chords perhaps in j.i. where you can't
even keep triads nicely tuned within a single
diatonic key.

Perhaps also have some optional way of setting the
bridge note too if the user wants to. Here a
natural idea is to use a second note played
in the tonic-shifting channel or octave - and
have a rule that the first note played there
is the desired new tonic; then, before
you play any new notes to be retuned,
you can optionally play a second
note as a broken chord, so the program
can tell it is second (and not a new tonic, because
the old tonic is still held down) and that then is
the desired bridge note.

With that system, the user can set the bridge note
to anything they like, not necessarily
to any note currently in play. But if not
set then it gets found by the software
by some heuristics to dovetail as nicely as possible.

Still retain the idea that a double press of the same
tonic in succession resets the tuning in case of
tonic drift. Or could have some other system there,
say holding down the new tonic + a special "reset" note
simultaneously.

Robert

🔗Peter Frazer <paf@easynet.co.uk>

12/13/2003 1:33:04 PM

On Sat, 13 Dec 2003 01:13:34 -0800 Kurt Bigler wrote

>on 12/12/03 3:08 PM, Peter Frazer <paf@easynet.co.uk> wrote:

>> In tuning digest 2863 Carl wrote
>>
>>>> I agree that John deLaubenfels adaptive tuning ( and the more recent
>>>> Hermode tuning ) are superior in many respects to my "dynamic
>>>> re-tuning" as implemented in Midicode Synthesizer. I was working
>>>> in isolation at the time and it seemed to me an obvious step forward
>>>> to use the capabilities of computers to shift Just Intonation into a
>>>> new key at the apposite time. I still believe that this type of re-
>>>> tuning is appropriate in some instances (like Peter Saults algorithmic
>>>> composition) and I hope that I have made a contribution here.
>>
>>> As I understand it your "dynamic retuning" allows the composer to
>>> specify roots on a dedicated MIDI channel. This is in a totally
>>> different league than automatic retuning, in my book. The additional
>>> choice may take a lifetime to master for different scales, but it
>>> also brings a world of compositional opportunities that automatic
>>> retuning cannot. Apples and oranges.
>>
>>> -Carl
>>
>> Thanks Carl.
>>
>> The basic idea of dynamic re-tuning is that you could have a midi

>> pedal board on the re-tuning channel and hit the new key note
>> typically during a modulation pivot chord. I wouldn't have thought
>> that was too difficult to master.
>>
> (Also works from sequencer)
>>
>> Peter.
>> www.midicode.com

Hi Kurt,

>Its not difficult as long as you sufficiently restrict your possibilities!

Absolutely. What I did was really quite simple so it was easy to get it
working.

>Carl and I have been working on this in our "spare time" on my software
>organ, and before me, Carl was working with others on a design (called
>"xenharmonic moving windows"). But without biasing the current conversation
>with our experiences, what would you propose as a protocol for a musician
>with a pedalboard to allow an arbitrary modulation with an arbitrary common
>tone, or as Peter I think calls the same thing "bridge note".

>So far I have dealt with the situation in which either the common tone is
>either the old tonic or the new tonic, based on an arbitrary protocol in
>which upward and downward movements on the pedalboard select one or the
>other. I have been thinking about other solutions involving using a
>temporary change to an intermediate key which I think allows any note to
>then be common, though I have not analyzed or tested this extensively. So
>it requires 2 pedal key-presses per modulation in the general case. This is
>awkward and interferes with performance. It can be improved upon by various
>tricks, such as allowing the intermediate key to be recognized by a time
>overlap with the previous pedal key and a kind of mono-mode functioning.
>This is probably not brilliantly clear since I am in a rush.

What you and Carl have been working on is more complex.

>But just to pose the question to you again, with a specific example. Lets
>say you have a 12-tone scale based on the harmonic series. Carl gave me
>this one, which is quite useful, perhaps in some sense optimal:

> 16:17:18:19:20:21:22:24:26:27:28:30

>And suppose you have a 3:4:5 (12:16:20) chord at G-C-E and you want the E to
>remain at fixed pitch as you hold the same 3 keys and you want the chord to
>become an 11:15:19, with the bottom 2 notes retuning to create this. (I
>hope I got that right.)

I have tried to analyse this and can not see what your new tuning would be.
Can you give a full set of ratios for your scale after modulation or tell me the
new key note, please?

>This is not necessarily the most musically useful example, but that was too
>much to come up with quickly here.

>What protocol for pedal use would you suggest to achieve such freedom of
>choice while playing?

I don't know Kurt.

Let me give a simple example of what dynamic re-tuning does in Just Intonation.
Suppose we are in C major

C 1/1
C# 16/15
D 9/8
Eb 6/5
E 5/4
F 4/3
F# 45/32
G 3/2
G# 8/5
A 5/3
Bb 9/5
B 15/8
C 2/1

You hold a chord of C-E-G and hit the G pedal to modulate. Shift the
original scale to G and multiply by 3/2 to get the new tuning.

C 1/1
C# 135/128
D 9/8
Eb 6/5
E 5/4
F 27/20
F# 45/32
G 3/2
G# 8/5
A 27/16
Bb 9/5
B 15/8
C 2/1

So in this example the notes of your pivot chord do not move but other scale
degrees are now tuned to the new tonic of G.
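A small script reproduces this re-tuning rule. This is only a sketch of the procedure as described in the message (the names are mine, not Midicode's): each chromatic degree's new pitch is its interval above the new key note times that key note's old pitch, octave-reduced.

```python
from fractions import Fraction as F

# The C major just scale from the table above, C = 1/1 ... B = 15/8.
SCALE = [F(1), F(16, 15), F(9, 8), F(6, 5), F(5, 4), F(4, 3),
         F(45, 32), F(3, 2), F(8, 5), F(5, 3), F(9, 5), F(15, 8)]

def retune(scale, key):
    # Shift the scale so chromatic degree `key` is the new tonic:
    # new pitch = (interval above new tonic) * (new tonic's old pitch),
    # reduced into the octave [1, 2).
    n = len(scale)
    out = []
    for j in range(n):
        r = scale[key] * scale[(j - key) % n]
        while r >= 2:
            r /= 2
        out.append(r)
    return out

g = retune(SCALE, 7)           # hit the G pedal: modulate C -> G
print(g[0], g[5], g[6], g[9])  # 1 27/20 45/32 27/16
```

The pivot notes C, E, G keep their pitches, while degrees like F, F# and A move, matching the second table.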

>I see that Robert Walker has posted something related to this, but have not
>had the time to analyze it. Carl and I should probably be talking to Robert
>too.

I have been conversing with Robert off-list and I know he has also looked at
this area.

>Thanks,
>Kurt

Thank you,
Peter.
www.midicode.com

🔗Kurt Bigler <kkb@breathsense.com>

12/13/2003 2:21:01 PM

on 12/13/03 9:42 AM, Robert Walker <robertwalker@ntlworld.com> wrote:

> Hi Kurt,
>
> Interesting to hear you've been working on it too.
>
> I see - when your player presses the foot board then it
> treats the current note played as the new tonic
> to retune to - is that right? Perhaps I may
> have a go at implementing that too, sounds
> a useful idea.

I don't have the time to reply to your full post right now, but I'll clarify
this point so you realize how I am really doing it. In fact xmw
(Xenharmonic Moving Windows) has had something called "crossfree" mode and I
fleshed that out by adding "diamondfree" (which I'm pretty sure is different
from "diamondrel" but I have not assimilated it all yet). Then I created 2
more modes:

crossfree-up/diamondfree-down

diamondfree-up/crossfree-down

Where up and down relate to the direction of motion on the control channel
(the pedalboard). I don't remember which of the two I like best. But there
are some things about these that can become somewhat intuitive even if they
seem arbitrary. And there are always ways to have the choice of either up
or down when playing, if you have at least a 2 octave pedalboard, because
you can substitute octaves at any time as an intermediate step which causes
no modulation itself but which affects the availability/convenience of the
up/down choice for the subsequent modulation.

In any case diamondfree is to crossfree as diamond is to cross, and you can
also say diamondfree is simply crossfree with the modulation scale inverted
in the octave (intervals reversed top-to-bottom). If you look at how
crossfree and diamondfree behave it might not be exactly as I stated things,
but it might. In crossfree I think the new tonic is the common tone. In
diamondfree the old tonic is the common tone. In any case I am sure about
the following, which is useful: If you do a modulation in crossfree and the
reverse modulation in diamondfree, you end up back where you started. This
reduces absolute drifting which otherwise becomes much too horrendous for
keyboard playing since within 60 seconds perhaps the keyboard range has
shifted out of the audible range!
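Kurt's round-trip property can be checked under one plausible reading of his terms (my reading, which may not match XMW's actual definitions): a crossfree modulation moves the reference pitch by the scale's own interval, while diamondfree uses the scale inverted in the octave. Forward in one mode and back in the other then cancels exactly, while forward and back in the same mode leaves a drift residue.

```python
from fractions import Fraction as F

# Carl's 12-tone harmonic scale, 16:17:...:30, as ratios over 16.
HARM = [F(n, 16) for n in (16, 17, 18, 19, 20, 21, 22, 24, 26, 27, 28, 30)]

def cf_factor(scale, k):
    # crossfree: reference-pitch factor for moving the tonic k steps
    # (k may be negative; whole octaves handled by divmod)
    oct_shift, idx = divmod(k, len(scale))
    return scale[idx] * F(2) ** oct_shift

def df_factor(scale, k):
    # diamondfree: the same, with the scale inverted in the octave
    # (intervals reversed top-to-bottom, as Kurt says)
    n = len(scale)
    inv = [F(1)] + [2 / scale[n - i] for i in range(1, n)]
    oct_shift, idx = divmod(k, n)
    return inv[idx] * F(2) ** oct_shift

up = cf_factor(HARM, 4)              # up a 20/16 = 5/4 major third
print(up * df_factor(HARM, -4))      # 1     -- mixed round trip cancels
print(up * cf_factor(HARM, -4))      # 65/64 -- same-mode round trip drifts
```

The 65/64 residue per round trip is the kind of accumulating shift that would soon push a keyboard's range out of audibility.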

That's all for now. Glad to have more thinkers. Sorry if once again this
was not brilliantly clear and also sorry that all the terms are not defined
well for those who haven't been following xmw.

-Kurt

🔗Peter Frazer <paf@easynet.co.uk>

12/13/2003 2:35:12 PM

On Fri, 12 Dec 2003 15:49:11 -0800 Carl wrote

>>But doesn't the midi tuning standard simply enable an entire tuning
>>table to be downloaded?
>>
>>The original idea that Peter put forward was to specify the individual
>>frequency of each note rather than as a note number, i.e. in the note-on
>>message (or equivalent).

>There are several different types of messages defined by the spec.

>http://www.midi.org/about-midi/tuning.shtml

>http://www.midi.org/about-midi/tuning_extens.shtml

>Note esp. the "single note" messages on the second link.

>-Carl

Thank you Carl, that makes it clear.

Peter.
www.midicode.com

🔗Peter Frazer <paf@easynet.co.uk>

12/13/2003 3:19:47 PM

On Sat, 13 Dec 2003 02:56:46 -0000 Robert wrote

>Hi Peter,

>> The basic idea of dynamic re-tuning is that you could have a midi
>> pedal board on the re-tuning channel and hit the new key note
>> typically during a modulation pivot chord. I wouldn't have thought
>> that was too difficult to master.

>> (Also works from sequencer)

>I've been working on this very idea too in FTS, independently of
>you. My approach was based on Carl Lumma's "xenharmonic moving windows"
>specification for a GUI to do tonic shifts which he posted
>to MakeMicroMusic (I think it was), maybe a year or so ago -
>not that I followed it exactly but used ideas from it.
>Then various ideas suggested by users of FTS - it does get
>used though I think only rather occasionally so far.

[SNIP]

>In fact you can hear a comparison of my tunings using
>this tonic shifting and JdLs tunings of the same
>piece on-line at
>http://www.tunesmithy.netfirms.com/tunes/tunes.htm#7_limit_adaptive_puzzle

>Robert

You guys are way ahead of me! It will take me some time to
absorb this.

Peter
www.midicode.com

🔗Peter Frazer <paf@easynet.co.uk>

12/13/2003 3:31:52 PM

On Sat, 13 Dec 2003 03:23:43 -0000 Peter wrote

>Computers are a zillion times faster nowadays making external
>hardware synths obsolete for that reason. Ok so we still have to plug
>in MIDI controller instruments for manual performance but there is
>now USB and Firewire. The problem is convincing the MIDI controller
>instrument makers to catch up - and replace MIDI with something
>slightly less horrible. For my purposes there is no external data
>connexion needed anyway - it all goes via internal buffers. I take it
>your program accepts MIDI-streams from Cakewalk and the suchlike.
>Accepting plugins would be one way of escaping from MIDI and
>specifying your own standard.

>Peter S.

I don't think the midi standard will change, particularly now that it is
possible to send re-tuning messages with a resolution of 0.0061
cents. I know that doesn't suit your way of working and that topic
has been well covered.
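The 0.0061-cent figure follows from the MIDI Tuning Standard's message layout: a pitch is a MIDI note number plus a 14-bit fraction of a semitone (two 7-bit data bytes), so the step size is 100/2^14 cents. The same 7-bit data bytes are behind Gene's 2.1072-digit remark elsewhere in the thread.

```python
import math

# MTS pitch = MIDI note number + 14-bit semitone fraction
step_cents = 100 / 2 ** 14
print(round(step_cents, 4))       # 0.0061

# one 7-bit MIDI data byte carries log10(128) decimal digits
print(round(math.log10(128), 4))  # 2.1072
```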

Yes, my program accepts midi streams. I will consider providing
programmatic access next time I update it but addressing by
frequency rather than midi note number is probably too big a
change.

Peter
www.midicode.com

🔗Peter Frazer <paf@easynet.co.uk>

12/13/2003 3:39:55 PM

On Sat, 13 Dec 2003 03:50:38 -0000 Monz wrote

>a few months ago i other theorists coined the term
>"tetradekamu" to represent the smallest unit of
>tuning resolution possible in MIDI, and it is the
>unit used in MTS.

>you can try to get something from these:

>http://tonalsoft.com/enc/tetradekamu.htm

>http://tonalsoft.com/monzo/miditune/miditune.htm

>-monz

Thanks for that Monz, and your useful formula.

Peter
www.midicode.com

🔗Peter Frazer <paf@easynet.co.uk>

12/13/2003 3:53:59 PM

On Sat, 13 Dec 2003 05:18:34 -0000 Peter wrote

>There's a problem in that too. The modulation bridge note
>(or 'pivot') need not be the tonic of the new key. So if you do not
>retune relative to the bridge note then you introduce an unwanted
>dissonance into the melody immediately following the pivot.

For Just Intonation I think that the relation of the new key note to
the bridge note would ( usually ) be such that dissonance does not
occur. Can you give me an example in which dissonance would
occur, please?

Peter
www.midicode.com

🔗Peter Wakefield Sault <sault@cyberware.co.uk>

12/13/2003 3:59:38 PM

--- In tuning@yahoogroups.com, Carl Lumma <ekin@l...> wrote:
> >A modulation bridge is that part of a modulation where the notes are
> >common to both keys and which is therefore ambiguous. It may extend
> >over several notes. However, a composer has to indicate a change of
> >accidentals at a single point and the first note following this can
> >be taken as the bridge note, which is usually the first note of a
> >bar. The use of common notes makes the transition from one key to
> >another smoother and improves the effect for the listener.
> >
> >Peter
>
> So if you're specifying a common tone, how do you specify which
> change you're making?
>
> -Carl

I'm sorry Carl but I don't understand the question. I can only take a
wild guess and answer that. In most modulations, at least two-thirds
of the notes of the scale in the old key are common to the scale of
the new key. E.g., in modulating from C to F:-

C D E F G A B ==> F G A Bb C D E

where the bridge would *never* involve melodic motion from B to Bb to
achieve the modulation. The last note of the old key is always some
or other note of the new key also - and vice versa. However, that
note need not be the tonic of either. In fact there can be several
notes in succession which are common to both keys - extending the
ambiguity. This should also apply if the modulation is achieved via a
chord progression where one can talk of common chords, such as Am7 in
the example, instead of common notes. Does that clarify?
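Peter's two-thirds figure is easy to check with pitch-class sets (a sketch; nothing here is specific to any of the software under discussion):

```python
# Pitch classes (0 = C) of a major scale on a given tonic.
MAJOR_STEPS = (0, 2, 4, 5, 7, 9, 11)

def major_key(tonic_pc):
    return {(tonic_pc + s) % 12 for s in MAJOR_STEPS}

common = major_key(0) & major_key(5)  # C major vs F major
print(sorted(common))                 # [0, 2, 4, 5, 7, 9] = C D E F G A
print(len(common))                    # 6 of the 7 notes are shared
```

For keys a fifth or fourth apart, six of seven notes are common; more distant modulations share fewer, but always at least two for any pair of major keys.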

Peter

🔗Peter Frazer <paf@easynet.co.uk>

12/13/2003 4:07:13 PM

On Fri, 12 Dec 2003 17:12:25 -0800 Kraig wrote

>Hello Peter!
> You might be interested in the work of Boomsliter and Creel on extended referance which you can access
> here http://www.anaphoria.com/BC1.PDF
> and
> http://www.anaphoria.com/BC2A.PDF
> http://www.anaphoria.com/BC2B.PDF
> http://www.anaphoria.com/BC2C.PDF

Thanks, Kraig. I have saved them to read later.

Peter
www.midicode.com

🔗Peter Frazer <paf@easynet.co.uk>

12/13/2003 4:45:46 PM

On Sat, 13 Dec 2003 19:26:21 -0000 Gene wrote

>--- In tuning@yahoogroups.com, Peter Frazer <paf@e...> wrote:

>> I'm not quite sure what you mean here Gene, could you point me
>> to a reference?

>http://www.linet.gr.jp/~tamuki/timidity/mts/tuning.html

>> By base 128 I assume you mean 7 bit numbers so 21 bits,
>> 2 million or so combinations, but why 2.1072 digits?

>128 is a bit larger than 100, and base 100 uses digits from 00 to 99,
>or exactly two digits worth of base 10. Base 128 has digits equating
>to log10(128) of base 10 digits, or 2.1072 base 10 digits.

Thanks Gene. I see.

>> The original idea that Peter put forward was to specify the individual
>> frequency of each note rather than as a note number, i.e. in the note-on
>> message (or equivalent).

>You can adjust individual notes with so-called "pitch-bends" if you
>like.

Yes, sure, but I can quite understand that Peter thinks it's easier
to ask for a frequency and get a frequency.

Peter
www.midicode.com

🔗Kurt Bigler <kkb@breathsense.com>

12/13/2003 5:45:01 PM

on 12/13/03 3:31 PM, Peter Frazer <paf@easynet.co.uk> wrote:

> On Sat, 13 Dec 2003 03:23:43 -0000 Peter wrote
>
>> Computers are a zillion times faster nowadays making external
>> hardware synths obsolete for that reason. Ok so we still have to plug
>> in MIDI controller instruments for manual performance but there is
>> now USB and Firewire. The problem is convincing the MIDI controller
>> instrument makers to catch up - and replace MIDI with something
>> slightly less horrible. For my purposes there is no external data
>> connexion needed anyway - it all goes via internal buffers. I take it
>> your program accepts MIDI-streams from Cakewalk and the suchlike.
>> Accepting plugins would be one way of escaping from MIDI and
>> specifying your own standard.
>
>> Peter S.
>
> I don't think the midi standard will change, particularly now it is
> possible to send re-tuning messages with a resolution of 0.0061
> cents. I know that doesn't suit your way of working and that topic
> has been well covered.
>
> Yes, my program accepts midi streams. I will consider providing
> programmatic access next time I update it but addressing by
> frequency rather than midi note number is probably too big a
> change.

There are also other problems with such a change in model. Not necessarily
problems, but possibly. The MIDI note number also serves as a unique
identifier for a playing note. This is important for keyboard playing
purposes (that is, when an actual keyboard is used for input) when retuning
can happen due to some protocol "on the side" which controls modulation.
Playing notes may or may not be retuned dynamically during modulations. In
any case the frequency does not uniquely identify the playing note in this
case, in a way that is helpful when keeping track of notes on and off. For
example if a modulation occurs and you don't keep track of the note number
*somewhere*, you will be unable to turn off the note playing at the previous
frequency.

In my current implementation of XMW I keep track of both note numbers and
frequency. Note numbers flow fairly transparently through the system and
frequencies change rather dynamically. But the note numbers keep everything
from going awry.

-Kurt

>
> Peter
> www.midicode.com

🔗Joseph Pehrson <jpehrson@rcn.com>

12/13/2003 6:17:30 PM

--- In tuning@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:

/tuning/topicId_49433.html#49748

>
> Finally, the very fact that so much customization and workarounds
can be
> done in software actually reduces the pressure for changes to MIDI.
>
> -Kurt

***My understanding, and this has been a big topic of conversation
over the last year or so on Jon Szanto's MakeMicroMusic list:

/makemicromusic/

is that most of the advancements in microtonality will come, not
through MIDI, but through *softsynths...* since the investment in
such alterations is not as much for the companies as in designing new
*hardware...* Besides, the sampled synths will sound better than
MIDI.

Not that I'm using one yet... I need a faster computer for starters...

J. Pehrson

🔗Kurt Bigler <kkb@breathsense.com>

12/13/2003 7:34:54 PM

on 12/13/03 12:23 PM, Gene Ward Smith <gwsmith@svpal.org> wrote:

> --- In tuning@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
>> Incidentally there is a MIDI alternative called OSC (Open Sound
> Control) and
>> there are ethernet-based interfaces that support this currently
> being
>> prototyped at CNMAT in Berkeley, CA with eventual technology
> transfer and
>> production planned.
>
> Where is tuning to be found in the OSC spec? I just looked on the
> CNMAT web page and didn't find it.

You may find OSC to be too flexible.

This PDF link might clarify it:

http://cnmat.cnmat.berkeley.edu/ICMC98/papers-pdf/OSC.pdf

but this excerpt may provide the essential piece of information:

> Unlike MIDI [MIDI 95], ZIPI's MPDL [McMillen 94] and other musical control
> languages, OSC does not enforce any model of "channels," "notes,"
> "orchestras," "velocity," etc. OSC's model is musically neutral and much
> more general: an OSC application consists of a dynamically changing set of
> objects arranged hierarchically, and each of these objects has a set of
> messages that can be sent to it to control its behavior. Thus, the goal of
> making an application OSC-addressable is not to come up with features that
> match predefined OSC messages, but to provide a set of OSC messages that
> match the features of the application and organize them into a meaningful
> hierarchy.

Keep in mind that people are doing such wacky things that the very concept
of a note is in question.

I'll bet protocol layers for note messages probably exist, but may not be
part of OSC proper. For now, this is all I know.

-Kurt

🔗Carl Lumma <ekin@lumma.org>

12/13/2003 7:39:28 PM

>> >A modulation bridge is that part of a modulation where the notes
>> >are common to both keys and which is therefore ambiguous. It may
>> >extend over several notes. However, a composer has to indicate a
>> >change of accidentals at a single point and the first note following
>> >this can be taken as the bridge note, which is usually the first
>> >note of a bar. The use of common notes makes the transition from
>> >one key to another smoother and improves the effect for the listener.
>>
>> So if you're specifying a common tone, how do you specify which
>> change you're making?
>
>I'm sorry Carl but I don't understand the question. I can only take a
>wild guess and answer that. In most modulations, at least two-thirds
>of the notes of the scale in the old key are common to the scale of
>the new key. E.g., in modulating from C to F:-
>
>C D E F G A B ==> F G A Bb C D E
>
>where the bridge would *never* involve melodic motion from B to Bb to
>achieve the modulation. The last note of the old key is always some
>or other note of the new key also - and vice versa. However, that
>note need not be the tonic of either. In fact there can be several
>notes in succession which are common to both keys - extending the
>ambiguity. This should also apply if the modulation is achieved via a
>chord progression where one can talk of common chords, such as Am7 in
>the example, instead of common notes. Does that clarify?

Not really. If a bunch of notes don't change in JI after a modulation,
they don't change, and you're done. But if you want one that *does*
change not to change, this can be done by adjusting the concert pitch
of the tuning immediately after the modulation. In this case one has
to specify which note he doesn't want to change, in addition to the
new tonic (if they are not the same). Unless your program tries to do
that automatically... ?

-Carl

🔗Kurt Bigler <kkb@breathsense.com>

12/13/2003 7:53:30 PM

on 12/13/03 1:33 PM, Peter Frazer <paf@easynet.co.uk> wrote:

> On Sat, 13 Dec 2003 01:13:34 -0800 Kurt Bigler wrote
>
>> on 12/12/03 3:08 PM, Peter Frazer <paf@easynet.co.uk> wrote:
>
>>> In tuning digest 2863 Carl wrote
>>>
>>>>> I agree that John deLaubenfels adaptive tuning ( and the more recent
>>>>> Hermode tuning ) are superior in many respects to my "dynamic
>>>>> re-tuning" as implemented in Midicode Synthesizer. I was working
>>>>> in isolation at the time and it seemed to me an obvious step forward
>>>>> to use the capabilities of computers to shift Just Intonation into a
>>>>> new key at the apposite time. I still believe that this type of re-
>>>>> tuning is appropriate in some instances (like Peter Saults algorithmic
>>>>> composition) and I hope that I have made a contribution here.
>>>
>>>> As I understand it your "dynamic retuning" allows the composer to
>>>> specify roots on a dedicated MIDI channel. This is in a totally
>>>> different league than automatic retuning, in my book. The additional
>>>> choice may take a lifetime to master for different scales, but it
>>>> also brings a world of compositional opportunities that automatic
>>>> retuning cannot. Apples and oranges.
>>>
>>>> -Carl
>>>
>>> Thanks Carl.
>>>
>>> The basic idea of dynamic re-tuning is that you could have a midi
>
>>> pedal board on the re-tuning channel and hit the new key note
>>> typically during a modulation pivot chord. I wouldn't have thought
>>> that was too difficult to master.
>>>
>> (Also works from sequencer)
>>>
>>> Peter.
>>> www.midicode.com
>
> Hi Kurt,
>
>> Its not difficult as long as you sufficiently restrict your possibilities!
>
> Absolutely. What I did was really quite simple so it was easy to get it
> working.
>
>> Carl and I have been working on this in our "spare time" on my software
>> organ, and before me, Carl was working with others on a design (called
>> "xenharmonic moving windows"). But without biasing the current conversation
>> with our experiences, what would you propose as a protocol for a musician
>> with a pedalboard to allow an arbitrary modulation with an arbitrary common
>> tone, or as Peter I think calls the same thing "bridge note".
>
>> So far I have dealt with the situation in which either the common tone is
>> either the old tonic or the new tonic, based on an arbitrary protocol in
>> which upward and downward movements on the pedalboard select one or the
>> other. I have been thinking about other solutions involving using a
>> temporary change to an intermediate key which I think allows any note to
>> then be common, though I have not analyzed or tested this extensively. So
>> it requires 2 pedal key-presses per modulation in the general case. This is
>> awkward and interferes with performance. It can be improved upon by various
>> tricks, such as allowing the intermediate key to be recognized by a time
>> overlap with the previous pedal key and a kind of mono-mode functioning.
>> This is probably not brilliantly clear since I am in a rush.
>
> What you and Carl have been working on is more complex.
>
>> But just to pose the question to you again, with a specific example. Lets
>> say you have a 12-tone scale based on the harmonic series. Carl gave me
>> this one, which is quite useful, perhaps in some sense optimal:
>
>> 16:17:18:19:20:21:22:24:26:27:28:30
>
>> And suppose you have a 3:4:5 (12:16:20) chord at G-C-E and you want the E to
>> remain at fixed pitch as you hold the same 3 keys and you want the chord to
>> become an 11:15:19, with the bottom 2 notes retuning to create this. (I
>> hope I got that right.)
>
> I have tried to analyse this and can not see what your new tuning would be.
> Can you give a full set of ratios for your scale after modulation or tell
> me the
> new key note, please?

Yes, I gave the absolute minimum information there.

The scale before and after modulation are the same scale with the 16..30
ratios shown above. The 16 corresponds to the "tonic" and the tonic is
shifted to a new position. In this case the tonic is shifted one note down
from C to B. Let's measure frequencies in units contrived to make the
frequency of C equal to 1 in the first chord.

Then the G-C-E 3:4:5 (or equally 12:16:20) chord is initially:

3/4 : 1 : 5/4

and after the modulation the G-C-E chord is, because I specified that it
have the ratios 11:15:19 and that the top note remain at constant pitch:

(5/4)*(11/19) : (5/4)*(15/19) : 5/4

which makes the calculations explicit but can be simplified to

55/76 : 75/76 : 5/4
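Kurt's arithmetic checks out; restated in a few lines (the variable names are mine):

```python
from fractions import Fraction as F

before = [F(3, 4), F(1), F(5, 4)]  # G-C-E as 3:4:5, with C taken as 1

# After the modulation the same keys must sound 11:15:19 with the
# top note (E = 5/4) held at fixed pitch:
top = before[2]
after = [top * F(11, 19), top * F(15, 19), top]
print(*after)                      # 55/76 75/76 5/4
```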

>> This is not necessarily the most musically useful example, but that was too
>> much to come up with quickly here.
>
>> What protocol for pedal use would you suggest to achieve such freedom of
>> choice while playing?
>
> I don't know Kurt.

Well, you needn't belabor these details in any case. I was mainly trying to
demonstrate the degree of freedom in controlling the exact results of a
modulation that might be useful, and maybe it will be, but if not, don't
worry.

No rush on this. Carl and I have been talking about it for 6 months or
more, and until we got down and dirty with practical use of prototyped
behavior on an instrument we could play, it was much harder to clarify.

So just think about modulations where you may want a certain tone in a chord
before modulation to match the pitch of a certain tone in a chord after
modulation. This is most motivated if you have something you are really
trying to do, musically. In my particular case it was a desire to be able
to reproduce things such as Toby Twining did (or at least what I was
hearing) in Chrysalid Requiem (followed by certain improvisational
"responses" to that music) that provided additional clarity as I was
approaching this problem.

Of course if the harmony is such that no common notes occur at a certain
place, then things are much freer and the issue disappears.

-Kurt

🔗Kurt Bigler <kkb@breathsense.com>

12/13/2003 8:11:16 PM

on 12/13/03 3:53 PM, Peter Frazer <paf@easynet.co.uk> wrote:

> On Sat, 13 Dec 2003 05:18:34 -0000 Peter wrote
>
>> There's a problem in that too. The modulation bridge note
>> (or 'pivot') need not be the tonic of the new key. So if you do not
>> retune relative to the bridge note then you introduce an unwanted
>> dissonance into the melody immediately following the pivot.
>
> For Just Intonation I think that the relation of the new key note to
> the bridge note would ( usually ) be such that dissonance does not
> occur. Can you give me an example in which dissonance would
> occur, please?

The example scale you considered as a 12-tone "Just Intonation" scale is
exactly the kind of scale that will minimize some problems, especially by
making it possible to retain tunings of more notes when cycle-of-5th
modulations are done.

However, what I have been recently interested in is using the harmonic
scale:

16:17:18:19:20:21:22:24:26:27:28:30

and doing things like modulating by major and minor thirds. From XMW (with
some conceptual modification) came the idea that the current state of
the tuning system includes a scale, a reference midi note #, and a reference
pitch.

In my current way of doing things I think of a modulation as consisting of
two aspects mathematically:

adding an offset (positive or negative) to the reference midi note #

this represents the movement of the tonic in MIDI note # space

multiplying a reference pitch (frequency actually) by a ratio

this represents the movement of the tonic in frequency space

(Personally I prefer to call this pitch but represent it in Hz. Others
object to that.)

Currently I am working with the assumption that all playing notes are
retuned to the new scale. XMW also allows the option that playing notes are
not retuned, but retain their previous pitch. Without retuning,
dissonances can occur. But even with retuning, if you are not careful what
you are doing musically, dissonances can occur. There are plenty of
dissonant-sounding triads you can pull out of the harmonic scale above. In
fact the very example I gave for the resultant chord 11:15:19 might not
sound that consonant.

Another rather rushed attempt at clarifying. I hope it helps.

-Kurt

>
> Peter
> www.midicode.com

🔗Werner Mohrlok <wmohrlok@hermode.com>

12/13/2003 10:30:13 PM

-----Original Message-----
From: Peter Frazer [mailto:paf@easynet.co.uk]
Sent: Saturday, 13 December 2003 22:33
To: tuning@yahoogroups.com
Subject: [tuning] Re: odeion1-003

On Sat, 13 Dec 2003 01:13:34 -0800 Kurt Bigler wrote

>on 12/12/03 3:08 PM, Peter Frazer <paf@easynet.co.uk> wrote:

>> In tuning digest 2863 Carl wrote
>>
>>>> I agree that John deLaubenfels adaptive tuning ( and the more recent
>>>> Hermode tuning ) are superior in many respects to my "dynamic
>>>> re-tuning" as implemented in Midicode Synthesizer. I was working
>>>> in isolation at the time and it seemed to me an obvious step forward
>>>> to use the capabilities of computers to shift Just Intonation into a
>>>> new key at the apposite time. I still believe that this type of
>>>> re-tuning is appropriate in some instances (like Peter Sault's
>>>> algorithmic composition) and I hope that I have made a contribution
>>>> here.
>>
>>> As I understand it your "dynamic retuning" allows the composer to
>>> specify roots on a dedicated MIDI channel. This is in a totally
>>> different league than automatic retuning, in my book. The additional
>>> choice may take a lifetime to master for different scales, but it
>>> also brings a world of compositional opportunities that automatic
>>> retuning cannot. Apples and oranges.
>>
>>> -Carl
>>
>> Thanks Carl.
>>
>> The basic idea of dynamic re-tuning is that you could have a midi
>> pedal board on the re-tuning channel and hit the new key note
>> typically during a modulation pivot chord. I wouldn't have thought
>> that was too difficult to master.
>>
> (Also works from sequencer)
>>
>> Peter.
>> www.midicode.com

Hi Kurt,

>It's not difficult as long as you sufficiently restrict your
>possibilities!

Absolutely. What I did was really quite simple so it was easy to get it
working.

>Carl and I have been working on this in our "spare time" on my software
>organ, and before me, Carl was working with others on a design (called
>"xenharmonic moving windows"). But without biasing the current
>conversation with our experiences, what would you propose as a protocol
>for a musician with a pedalboard to allow an arbitrary modulation with
>an arbitrary common tone, or as Peter I think calls the same thing,
>"bridge note"?

>So far I have dealt with the situation in which the common tone is
>either the old tonic or the new tonic, based on an arbitrary protocol in
>which upward and downward movements on the pedalboard select one or the
>other. I have been thinking about other solutions involving using a
>temporary change to an intermediate key which I think allows any note to
>then be common, though I have not analyzed or tested this extensively.
>So it requires 2 pedal key-presses per modulation in the general case.
>This is awkward and interferes with performance. It can be improved upon
>by various tricks, such as allowing the intermediate key to be recognized
>by a time overlap with the previous pedal key and a kind of mono-mode
>functioning. This is probably not brilliantly clear since I am in a rush.

What you and Carl have been working on is more complex.

>But just to pose the question to you again, with a specific example.
>Let's say you have a 12-tone scale based on the harmonic series. Carl
>gave me this one, which is quite useful, perhaps in some sense optimal:

> 16:17:18:19:20:21:22:24:26:27:28:30

>And suppose you have a 3:4:5 (12:16:20) chord at G-C-E and you want the
>E to remain at fixed pitch as you hold the same 3 keys, and you want the
>chord to become an 11:15:19, with the bottom 2 notes retuning to create
>this. (I hope I got that right.)

I have tried to analyse this and cannot see what your new tuning would be.
Can you give a full set of ratios for your scale after modulation, or tell
me the new key note, please?

>This is not necessarily the most musically useful example, but that was
>too much to come up with quickly here.

>What protocol for pedal use would you suggest to achieve such freedom of
>choice while playing?

I don't know Kurt.

Let me give a simple example of what dynamic re-tuning does in Just
Intonation.
Suppose we are in C major

C 1/1
C# 16/15
D 9/8
Eb 6/5
E 5/4
F 4/3
F# 45/32
G 3/2
G# 8/5
A 5/3
Bb 9/5
B 15/8
C 2/1

You hold a chord of C-E-G and hit the G pedal to modulate. Shift the
original scale to G and multiply by 3/2 to get the new tuning.

C 1/1
C# 135/128
D 9/8
Eb 6/5
E 5/4
F 27/20
F# 45/32
G 3/2
G# 8/5
A 27/16
Bb 9/5
B 15/8
C 2/1

So in this example the notes of your pivot chord do not move but other
scale
degrees are now tuned to the new tonic of G.
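
Peter's shift-and-multiply rule can be checked mechanically. The sketch below (with a hypothetical `modulate` helper, not Midicode's actual code) rotates the scale pattern to the new tonic, multiplies by the tonic's ratio, and octave-reduces; it reproduces the retuned scale (e.g. C# = 135/128, F = 27/20, A = 27/16, and F# = 45/32).

```python
from fractions import Fraction

# C-major JI scale, ratios relative to C.
c_major = [Fraction(*p) for p in
           [(1, 1), (16, 15), (9, 8), (6, 5), (5, 4), (4, 3),
            (45, 32), (3, 2), (8, 5), (5, 3), (9, 5), (15, 8)]]

def modulate(scale, degree):
    """Shift the scale pattern to `degree` and multiply by its ratio."""
    ratio = scale[degree]                  # 3/2 when the G pedal is hit
    out = []
    for i in range(len(scale)):
        r = scale[(i - degree) % len(scale)] * ratio
        while r >= 2:                      # octave-reduce relative to C
            r /= 2
        out.append(r)
    return out

g_scale = modulate(c_major, 7)             # modulate to G
# C, D, Eb, E, G, G#, Bb, B keep their old ratios; C#, F, F#, A move.
```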

But what are you doing at the following chord sequence in C major:
C-E-G, F-A-C, D-F-A, G-B-D-(F), C-E-G..

At the point D-F-A the fifth D-A shows a "wolf". If you accept this,
the idea of "just intonation" breaks down.
If you change the scale at this point to a D scale, you will drift 22
cents lower, and if you repeat this sequence four times, you will end
in B major.
Changing the key without any modulation...
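
Werner's numbers check out. In C-major JI the fifth D-A is (5/3)/(9/8) = 40/27, a syntonic comma (81/80, about 21.5 cents) flat of 3/2, so re-rooting the scale on D each time through the progression drifts the pitch down roughly a comma per cycle, close to a semitone after four cycles. A quick check:

```python
import math
from fractions import Fraction

cents = lambda r: 1200 * math.log2(float(r))  # ratio -> cents

# The D-A fifth inside the C-major JI scale above:
wolf = Fraction(5, 3) / Fraction(9, 8)        # 40/27, the "wolf" fifth
comma = Fraction(3, 2) / wolf                 # 81/80, the syntonic comma
drift = cents(comma)                          # ~21.5 cents per cycle
total = 4 * drift                             # ~86 cents after 4 cycles
```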

Werner Mohrlok


🔗Jon Szanto <JSZANTO@ADNC.COM>

12/13/2003 10:34:22 PM

--- In tuning@yahoogroups.com, "Joseph Pehrson" <jpehrson@r...> wrote:
> ***My understanding, and this has been a big topic of conversation
> over the last year or so on Jon Szanto's MakeMicroMusic list:
>
> /makemicromusic/
>
> is that most of the advancements in microtonality will come, not
> through MIDI, but through *softsynths...* since the investment in
> such alterations is not as much for the companies as in designing new
> *hardware...*

I believe that is a fair statement. What many people seem to not realize is that MIDI is simply a communication protocol, and in that sense it isn't any more restricting (in fact, less so) than the damned 7-white, 5-black keyboard. If you want to step outside of 12tet, that is. Joe, you yourself could speak on how much work you went through setting up Blackjack on a kbd.

But these are simply minor impediments, easily overcome by a talented and driven musician. Can MIDI make most anything happen? Indeed it can. Is it a paradise? No it isn't.

> Besides, the sampled synths will sound better than MIDI.

That isn't correct, certainly syntax-wise. "MIDI" doesn't make sounds, it just communicates. And there are samplers, synths, and sampler/synth hybrids. Maybe you meant to say "sampler/synths will sound better than General MIDI (GM) patch sets". Which, to some extent, depends on the skill of the (electronic) orchestration, but all other things being equal, could potentially be a valid statement (gad, Jon, waffling on that magnitude would impress even Al Gore...).

> Not that I'm using one yet... I need a faster computer for
> starters...

Get your arse in gear, Pehrson - Monteith is tearing up things on the other side of the pond! :)

Cheers,
Jon

🔗Kurt Bigler <kkb@breathsense.com>

12/14/2003 12:06:17 AM

on 12/13/03 7:39 PM, Carl Lumma <ekin@lumma.org> wrote:

>>>> A modulation bridge is that part of a modulation where the notes
>>>> are common to both keys and which is therefore ambiguous. It may
>>>> extend over several notes. However, a composer has to indicate a
>>>> change of accidentals at a single point and the first note following
>>>> this can be taken as the bridge note, which is usually the first
>>>> note of a bar. The use of common notes makes the transition from
>>>> one key to another smoother and improves the effect for the listener.
>>>
>>> So if you're specifying a common tone, how do you specify which
>>> change you're making?
>>
>> I'm sorry Carl but I don't understand the question. I can only take a
>> wild guess and answer that. In most modulations, at least two-thirds
>> of the notes of the scale in the old key are common to the scale of
>> the new key. E.g., in modulating from C to F:-
>>
>> C D E F G A B ==> F G A Bb C D E
>>
>> where the bridge would *never* involve melodic motion from B to Bb to
>> achieve the modulation. The last note of the old key is always some
>> or other note of the new key also - and vice versa. However, that
>> note need not be the tonic of either. In fact there can be several
>> notes in succession which are common to both keys - extending the
>> ambiguity. This should also apply if the modulation is achieved via a
>> chord progression where one can talk of common chords, such as Am7 in
>> the example, instead of common notes. Does that clarify?
>
> Not really. If a bunch of notes don't change in JI after a modulation,
> they don't change, and you're done. But if you want one that *does*
> change not to change, this can be done by adjusting the concert pitch
> of the tuning immediately after the modulation. In this case one has
> to specify which note he doesn't want to change, in addition to the
> new tonic (if they are not the same). Unless your program tries to do
> that automatically... ?

I somewhat dislike the idea, but I am thinking of how useful it would be to
be able to interpret gestures of the hands on the keys, perhaps by wearing
one of those virtual-reality gloves (hopefully a *very* comfortable one)
while playing. Then we could do things like stroke keys in a certain way to
indicate a bridge note, etc.

-Kurt

>
> -Carl

🔗Carl Lumma <ekin@lumma.org>

12/14/2003 12:22:21 AM

>I somewhat dislike the idea, but I am thinking of how useful it would be
>to be able to interpret gestures of the hands on the keys, perhaps by
>wearing one of those virtual-reality gloves (hopefully a *very*
>comfortable one) while playing. Then we could do things like stroke
>keys in a certain way to indicate a bridge note, etc.

And I really dislike the idea. :)

-C.

🔗Kurt Bigler <kkb@breathsense.com>

12/14/2003 12:33:00 AM

on 12/14/03 12:22 AM, Carl Lumma <ekin@lumma.org> wrote:

>> I somewhat dislike the idea, but I am thinking of how useful it would be
>> to be able to interpret gestures of the hands on the keys, perhaps by
>> wearing one of those virtual-reality gloves (hopefully a *very*
>> comfortable one) while playing. Then we could do things like stroke
>> keys in a certain way to indicate a bridge note, etc.
>
> And I really dislike the idea. :)
>
> -C.

I know, but it would solve all our problems!

-Kurt

🔗Carl Lumma <ekin@lumma.org>

12/14/2003 12:33:47 AM

>>> I somewhat dislike the idea, but I am thinking of how useful it would be
>>> to be able to interpret gestures of the hands on the keys, perhaps by
>>> wearing one of those virtual-reality gloves (hopefully a *very*
>>> comfortable one) while playing. Then we could do things like stroke
>>> keys in a certain way to indicate a bridge note, etc.
>>
>> And I really dislike the idea. :)
>>
>> -C.
>
>I know, but it would solve all our problems!

Your problems maybe; I don't have any problems.

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

12/14/2003 1:08:57 AM

--- In tuning@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:

> Currently I am working with the assumption that all playing notes
are
> retuned to the new scale. XMW also allow the option that playing
notes are
> not retuned, but retain their previous pitch.

Sounds interesting, but what's XMW?

🔗Carl Lumma <ekin@lumma.org>

12/14/2003 1:18:05 AM

>> Currently I am working with the assumption that all playing notes
>> are retuned to the new scale. XMW also allow the option that
>> playing notes are not retuned, but retain their previous pitch.
>
>Sounds interesting, but what's XMW?

It's an acronym for Xenharmonic Moving Windows, a spec I wrote for
MIDI-relaying software in 2001, based on an idea worked up by
Denny Genovese and me in 1997, and I'm sure quite common throughout
the tuning literature from the past century. The original spec
is now well out of date thanks to feature creep, but may be found
here...

http://lumma.org/tuning/xmw.txt

...I expect a much more detailed and up-to-date spec and/or
software implementation within a few years. Here's a tease...

http://lumma.org/tuning/xmw.png

-Carl

🔗Kurt Bigler <kkb@breathsense.com>

12/14/2003 1:22:33 AM

on 12/14/03 1:08 AM, Gene Ward Smith <gwsmith@svpal.org> wrote:

> --- In tuning@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
>
>> Currently I am working with the assumption that all playing notes
> are
>> retuned to the new scale. XMW also allow the option that playing
> notes are
>> not retuned, but retain their previous pitch.
>
> Sounds interesting, but what's XMW?

Xenharmonic Moving Windows, referred to in a couple of other threads
recently. Originally developed by Carl and others before I was born into
this list. Carl can fill in more details about the history. The other
recent threads probably clarify the purpose of it somewhat as a mechanism
for permitting modulation within a fixed keyboard structure to allow dynamic
retuning. Well actually I guess this was one of those threads. Maybe you
just forgot what XMW stood for?

-Kurt

🔗Joseph Pehrson <jpehrson@rcn.com>

12/14/2003 11:32:01 AM

--- In tuning@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:

/tuning/topicId_49433.html#49840

> In my particular case it was a desire to be able
> to reproduce things such as Toby Twining did (or at least what I was
> hearing) in Chrysalid Requiem (followed by certain improvisational
> "responses" to that music) that provided additional clarity as I was
> approaching this problem.
>
> Of course if the harmony is such that no common notes occur at a
> certain place, then things are much freer and the issue disappears.
>
> -Kurt

***Since nobody has posted about this, I will: the new issue of the
magazine 1/1 came out (it's published, pretty much, "whenever...")
and it has a nice article by Bill Alves about the Chrysalid Requiem
in it, in case somebody doesn't know about it...

J. Pehrson

🔗Joseph Pehrson <jpehrson@rcn.com>

12/14/2003 11:44:52 AM

--- In tuning@yahoogroups.com, "Jon Szanto" <JSZANTO@A...> wrote:

/tuning/topicId_49433.html#49845

> I believe that is a fair statement. What many people seem to not
> realize is that MIDI is simply a communication protocol, and in that
> sense it isn't any more restricting (in fact, less so) than the
> damned 7-white, 5-black keyboard. If you want to step outside of
> 12tet, that is. Joe, you yourself could speak on how much work you
> went through setting up Blackjack on a kbd.
>

***Yes, it involves "renaming" the keys on the keyboard according to
the new set of notes (21 in the case of Blackjack) and trying to
forget everything one previously knew about playing music on a
keyboard... (not easy to do).

Then, there's a *translation* from a sequencer in 12-tET to the new
note names when they are written down on the staff.

So, to answer Peter's question, it *is* possible to work in non-12
with MIDI, but not easy. (Peter, please don't sue me because I think
12 notes per octave is not the only option... :)

> But these are simply minor impediments, easily overcome by a
> talented and driven musician. Can MIDI make most anything happen?
> Indeed it can. Is it a paradise? No it isn't.
>

***Yes, that's well put. Also, we should keep in mind that software,
such as Sibelius, *is* designed for 12-tET (a point for Peter!) and
anyone using it a different way is really going "against the
grain..." And, there are so many *different* ways of going "against
the grain" that we are pretty much on our own to figure out a way to
make things work... But, since that's what we want to do, we *will*
do it, one way or the other...

> > Not that I'm using one yet... I need a faster computer for
> > starters...
>
> Get your arse in gear, Pehrson - Monteith is tearing up things on
> the other side of the pond! :)
>
> Cheers,
> Jon

***Well, I appreciate the "tough love" but it really isn't a race,
and, at the moment, I am more interested in working with *acoustic*
instruments anyway. Besides, every couple of months the softsynths
get more and more sophisticated, so waiting doesn't hurt that much.
Additionally, I'm happy with my working methods I have established
when I *do* electronic music, and it's the *music* and not the
*equipment* that matters the most in the final accounting... (I
agree, though, that good, modern gear will *definitely* improve
things!)

I'm expecting that I'll be upgrading in the Spring. My first foray
will be the use of the new, improved Sibelius that uses an internal
softsynth. However, after that time I'm sure I'll be experimenting
with specifically microtonal softsynths as well, since it seems they
will ultimately offer the greatest possibilities...

J. Pehrson

🔗Gene Ward Smith <gwsmith@svpal.org>

12/14/2003 1:57:37 PM

--- In tuning@yahoogroups.com, "Joseph Pehrson" <jpehrson@r...> wrote:

> I'm expecting that I'll be upgrading in the Spring. My first foray
> will be the use of the new, improved Sibelius that uses an internal
> softsynth.

Is that out?

🔗Carl Lumma <ekin@lumma.org>

12/14/2003 2:12:14 PM

>> I'm expecting that I'll be upgrading in the Spring. My first foray
>> will be the use of the new, improved Sibelius that uses an internal
>> softsynth.
>
>Is that out?

Sibelius3, yes it's out. The softsynth is by Native Instruments.

-Carl

🔗Kurt Bigler <kkb@breathsense.com>

12/15/2003 12:07:41 PM

on 12/14/03 11:32 AM, Joseph Pehrson <jpehrson@rcn.com> wrote:

> --- In tuning@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
>
> /tuning/topicId_49433.html#49840
>
> In my particular case it was a desire to be able
>> to reproduce things such as Toby Twining did (or at least what I was
>> hearing) in Chrysalid Requiem (followed by certain improvisational
>> "responses" to that music) that provided additional clarity as I was
>> approaching this problem.
>>
>> Of course if the harmony is such that no common notes occur at a
>> certain place, then things are much freer and the issue disappears.
>>
>> -Kurt
>
>
> ***Since nobody has posted about this, I will: the new issue of the
> magazine 1/1 came out (it's published, pretty much, "whenever...")
> and it has a nice article by Bill Alves about the Chrysalid Requiem
> in it, in case somebody doesn't know about it...
>
> J. Pehrson

How does one get this magazine? I can't seem to find it among all the noise
in web-searching.

-Kurt

🔗Jon Szanto <JSZANTO@ADNC.COM>

12/15/2003 1:14:26 PM

--- In tuning@yahoogroups.com, Kurt Bigler <kkb@b...> wrote:
> How does one get this magazine? I can't seem to find it among all
> the noise in web-searching.

http://www.justintonation.net/

Cheers,
Jon

🔗Paul Erlich <paul@stretch-music.com>

12/30/2003 9:30:00 AM

--- In tuning@yahoogroups.com, "Werner Mohrlok" <wmohrlok@h...> wrote:

> "Ortstheorie" is a German term for one of the different "hearing"
> theories. This theory says that our hearing somehow works like a
> Fourier transformation. This means, in the abstract:
> Our ear splits the complex musical tones into their partial tones,
> each of them perceived at a different place in our ear, in the
> "Schnecke" (the cochlea).
> The problem of this theory is that one cannot explain with it why
> we hear the combination tones.

That is not really true. Helmholtz, Plomp and other place theorists
had no difficulty coming up with explanations for the combination
tones, namely nonlinear response. The mathematics explaining how
nonlinear response leads to combinational tones is not too complex
and is explained very nicely in the _Feynman Lectures on Physics_.
For the technically-minded, if the nonlinear response function is
expanded as a polynomial, terms of order n will lead to nth-order
combinational tones -- where the familiar difference (f2 - f1) and
sum (f2 + f1) tones are 2nd order, combinational tones like 2*f1 - f2
are 3rd order, etc.
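
Paul's order rule is easy to verify numerically: pass two pure tones through a pure power-law nonlinearity and inspect the spectrum. The frequencies and thresholds below are arbitrary choices for the demonstration, not anything from the thread's cited sources.

```python
import numpy as np

fs = 8000                          # 1 second at 8 kHz -> exact 1 Hz bins
t = np.arange(fs) / fs
f1, f2 = 800, 900
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

def has_line(sig, freq):
    """True if the spectrum has appreciable energy in the bin at `freq` Hz."""
    return np.abs(np.fft.rfft(sig))[freq] / len(sig) > 1e-3

assert not has_line(x, f2 - f1)    # linear signal: no difference tone
assert has_line(x**2, f2 - f1)     # 2nd order: difference tone at 100 Hz
assert has_line(x**2, f2 + f1)     # 2nd order: sum tone at 1700 Hz
assert has_line(x**3, 2*f1 - f2)   # 3rd order: 700 Hz
assert has_line(x**3, 2*f2 - f1)   # 3rd order: 1000 Hz
```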

> The explanation of the "Ortstheorie" is: These combination tones are
> "somehow produced" in our brain.

Whose "Ortstheorie" relies on this explanation??

> I feel that is a funny theory: the combination tones are physically
> existing in the air,

Over a century ago, Helmholtz thought so. But in fact, this is not
true below lethal sound levels of around 150dB. Instead, it is
primarily the bones of the middle ear that respond nonlinearly and
introduce combinational tones. This is why, as you or someone pointed
out in another message, combinational tones are nearly absent when
the two original tones are presented separately, one to each ear
("binaurally").

> every tuner receives them,

How do you come to this conclusion? Even if you have a tuner which
purports to measure pure frequency components and only pure frequency
components, which is pretty unlikely, you'd still have to prove that
the tuner's response introduces no nonlinearity -- most likely,
though, it does.

🔗Paul Erlich <paul@stretch-music.com>

12/30/2003 9:37:16 AM

--- In tuning@yahoogroups.com, "Peter Wakefield Sault" <sault@c...>
wrote:

> I have created a Photo Album called PWS and uploaded an image,
> WaveMaster.jpg, into it. It shows 2 constituent sinewaves, one of
> 800Hz and another of 900Hz, which comprise the interval of a
> wholetone of 8:9, and the additive mix of the two. The Difference
> Tone of 100Hz is clearly visible as amplitude modulation.

Amplitude modulation does not make a tone -- pressure modulation
does. It would require a nonlinear response to the signal you graph
before one would obtain actual tones (frequency components) at the
frequency of the amplitude modulation.

Wave traces like this can be misleading -- for example, two identical-
sounding signals can have completely different wave traces, due to
the relative phases of the frequency components being different. In
this case, the wave trace may seem to imply audible amplitude
modulation or "beating", though at this fast rate, beating is
inaudible -- only when two frequency components are within a critical
bandwidth (~250 cents or so in this range) will any beating between
them be audible -- the appearance of the wave trace is not a good
guide to what will be heard.
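
The phase point is also easy to demonstrate: build two signals from the same component magnitudes but different phases. Their magnitude spectra are identical (they sound essentially the same) while their wave traces differ substantially. A sketch, with arbitrarily chosen frequencies and phase offsets:

```python
import numpy as np

fs = 8000                               # 1 second, exact 1 Hz FFT bins
t = np.arange(fs) / fs
freqs = [800, 900, 1000]

# Same component amplitudes, different phases:
a = sum(np.sin(2 * np.pi * f * t) for f in freqs)
b = sum(np.sin(2 * np.pi * f * t + 0.7 * i) for i, f in enumerate(freqs))

mag = lambda s: np.abs(np.fft.rfft(s))
assert np.allclose(mag(a), mag(b), atol=1e-6)   # identical spectra ...
assert np.max(np.abs(a - b)) > 0.5              # ... different wave traces
```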

🔗Werner Mohrlok <wmohrlok@hermode.com>

12/30/2003 11:18:33 AM

> Over a century ago, Helmholtz thought so. But in fact, this is not
> true below lethal sound levels of around 150dB. Instead, it is
> primarily the bones of the middle ear that respond nonlinearly and
> introduce combinational tones. This is why, as you or someone pointed
> out in another message, combinational tones are nearly absent when
> the two original tones are presented separately, one to each ear
> ("binaurally").

> every tuner receives them,

> How do you come to this conclusion? Even if you have a tuner which
> purports to measure pure frequency components and only pure frequency
> components, which is pretty unlikely, you'd still have to prove that
> the tuner's response introduces no nonlinearity -- most likely,
> though, it does.

Hi Paul,

indeed, I have to correct this sentence into "My tuner does so",
as I don't know whether the intelligence of my tuner is higher
than that of your tuner or of others..
I possess a tool with which I am able to retune two frequencies
gliding, in the same manner as it is usual, for instance, to tune
the strings of a string instrument.
I start for example with a fifth interval and reduce it gliding
to unison.
You know the result: the combination tone starts an octave deeper
than the lower tone of the interval and sinks down gliding. As soon
as the frequency of the combination tone becomes less than about 18 Hz
the combination tone changes for my ear from a deep tone to beats
which become slower and slower. In the same moment I can watch
the LED of my tuner, blinking in the same rhythm.
I believe that the LED is still blinking at higher frequencies,
but my eye cannot see this; I am sure I need not explain why.

Do you believe that these "beats" are in principle different
in their character or physical reason than the combination tones?

Best
Werner

🔗Paul Erlich <paul@stretch-music.com>

12/30/2003 1:14:24 PM

--- In tuning@yahoogroups.com, "Werner Mohrlok" <wmohrlok@h...> wrote:
>
> The combinational tone starts an octave deeper
> than the lower tone of the interval and sinks down gliding. As soon
> as the frequency of the combination tone will be less than about 18 Hz
> the combination tone changes for my ear from a deep tone to beats
> which become slower and slower.

The continuity between the two phenomena is true enough in a
quantitative sense but qualitatively they are actually two different
phenomena. In fact if you're trained in very fast rhythms, you may
actually be able to hear both phenomena at the same time in the 20-30
Hz range.

> Do you believe that these "beats" are in principle different
> in their character or physical reason than the combination tones?

Yes. As I explained in my response to Peter W. S., beats are
*amplitude modulation*, which without a nonlinear response does not
imply the presence of any *pressure modulation*, or tone, at that
frequency. Perhaps you could demonstrate this to yourself if you
repeat the experiment with *very quiet* tones. It will then be likely
that the combinational tones will not be heard at all, or at least be
greatly attenuated, while the beating, when it comes in, will still
be just as prominent (relative to the loudness of the tones) as
before. Another thing that's fun to try, since I think you've
mentioned it before, is to listen to the tones binaurally -- one to
each ear, with as little cross-talk as possible -- then the vast
majority of the combinational tones should simply disappear, while
the beating will manifest itself as a very spatial "swirling"
called "binaural beats" that is a result of the signals from the two
ears being combined in the brain.

I'd be happy to go into the physical/mathematical explanation of
beats and combinational tones for you, if you wish, or alternatively
you could study the (separate) topics of beats and combination tones
in _The Feynman Lectures on Physics_.

🔗Werner Mohrlok <wmohrlok@hermode.com>

12/30/2003 2:04:50 PM

-----Original Message-----
From: Paul Erlich [mailto:paul@stretch-music.com]
Sent: Tuesday, 30 December 2003 22:14
To: tuning@yahoogroups.com
Subject: [tuning] Re: "Ortstheorie"

--- In tuning@yahoogroups.com, "Werner Mohrlok" <wmohrlok@h...> wrote:
> >
> > The combinational tone starts an octave deeper
> > than the lower tone of the interval and sinks down gliding. As soon
> > as the frequency of the combination tone will be less than
> > about 18 Hz
> > the combination tone changes for my ear from a deep tone to beats
> > which become slower and slower.

> The continuity between the two phenomena is true enough in a
> quantitative sense but qualitatively they are actually two different
> phenomena. In fact if you're trained in very fast rhythms, you may
> actually be able to hear both phenomena at the same time in the 20-30
> Hz range.

Paul, please forgive me. But I don't believe that these are
qualitatively different. And hearing both phenomena at the same
time is no proof to the contrary.

> > Do you believe that these "beats" are in principle different
> > in their character or physical reason than the combination tones?

> Yes. As I explained in my response to Peter W. S., beats are
> *amplitude modulation*, which without a nonlinear response does not
> imply the presence of any *pressure modulation*, or tone, at that
> frequency. Perhaps you could demonstrate this to yourself if you
> repeat the experiment with *very quiet* tones. It will then be likely
> that the combinational tones will not be heard at all, or at least be
> greatly attenuated, while the beating, when it comes in, will still
> be just as prominent (relative to the loudness of the tones) as
> before. Another thing that's fun to try, since I think you've
> mentioned it before, is to listen to the tones binaurally -- one to
> each ear, with as little cross-talk as possible -- then the vast
> majority of the combinational tones should simply disappear, while
> the beating will manifest itself as a very spatial "swirling"
> called "binaural beats" that is a result of the signals from the two
> ears being combined in the brain.

Yes, and I made in the early '90s a lot of such experiments, but
only for my own fun. But please reflect precisely:
all these experiments don't definitely prove
the truth of one or the other theory of hearing.
They only show that the incoming tones at the two different ears
are joined at a relatively late step of our hearing process.

> I'd be happy to go into the physical/mathematical explanation of
> beats and combinational tones for you, if you wish, or alternatively
> you could study the (separate) topics of beats and combination tones
> in _The Feynman Lectures on Physics_.

Maybe. And Aristotle said the sun turns around the earth...
But, seriously, you know more about the different hearing theories
than I do, but I have already followed discussions of professors who
fought for different hearing theories. These fights nearly ended in
brawls.
Frankly, the place theory seems to me a very problematic theory
(you know the arguments against it). But I agree, all the different
theories show specific weaknesses.
Maybe a combination of all these theories will be the truth...
I propose: we shouldn't extend the discussion of the
different hearing theories right now. It would be too much for
the other members and too much for my poor English.
I pay my respect to your opinion. But I keep mine for now.
This is: the place theory in its current formulations
cannot be the last word - and the term "nonlinear" as the main
explanation for combination tones is a very miserable invention.

Best,

Werner

🔗Paul Erlich <paul@stretch-music.com>

12/31/2003 1:09:42 PM

--- In tuning@yahoogroups.com, "Werner Mohrlok" <wmohrlok@h...> wrote:
>
> -----Original Message-----
> From: Paul Erlich [mailto:paul@s...]
> Sent: Tuesday, 30 December 2003 22:14
> To: tuning@yahoogroups.com
> Subject: [tuning] Re: "Ortstheorie"
>
>
> --- In tuning@yahoogroups.com, "Werner Mohrlok" <wmohrlok@h...> wrote:
> > >
> > > The combinational tone starts an octave deeper
> > > than the lower tone of the interval and sinks down gliding. As soon
> > > as the frequency of the combination tone will be less than
> > > about 18 Hz
> > > the combination tone changes for my ear from a deep tone to beats
> > > which become slower and slower.
>
> > The continuity between the two phenomena is true enough in a
> > quantitative sense but qualitatively they are actually two different
> > phenomena. In fact if you're trained in very fast rhythms, you may
> > actually be able to hear both phenomena at the same time in the 20-30
> > Hz range.
>
> Paul, please forgive me. But I don't believe that these are
> qualitatively different. And hearing both phenomena at the same
> time is no proof to the contrary.

If you can have either without the other, *or* both together, they
would seem not to be different aspects of the same phenomenon.

> > > Do you believe that these "beats" are in principle different
> > > in their character or physical reason than the combination tones?
>
> > Yes. As I explained in my response to Peter W. S., beats are
> > *amplitude modulation*, which without a nonlinear response does not
> > imply the presence of any *pressure modulation*, or tone, at that
> > frequency. Perhaps you could demonstrate this to yourself if you
> > repeat the experiment with *very quiet* tones. It will then be likely
> > that the combinational tones will not be heard at all, or at least be
> > greatly attenuated, while the beating, when it comes in, will still
> > be just as prominent (relative to the loudness of the tones) as
> > before. Another thing that's fun to try, since I think you've
> > mentioned it before, is to listen to the tones binaurally -- one to
> > each ear, with as little cross-talk as possible -- then the vast
> > majority of the combinational tones should simply disappear, while
> > the beating will manifest itself as a very spatial "swirling"
> > called "binaural beats" that is a result of the signals from the two
> > ears being combined in the brain.
>
> Yes, and I made in the early '90s a lot of such experiments, but
> only for my own fun. But please reflect precisely:
> all these experiments don't definitely prove
> the truth of one or the other theory of hearing.

I was certainly not attempting to defend the place theory of hearing
to the exclusion of all others, if that's what you thought! That's a
whole separate issue . . .

> They only show that the tones arriving at the two different ears
> are joined at a relatively late step of our hearing process.

They also show that combinational tones are primarily created by
nonlinearities in the mechanism *in each ear* . . .

> > I'd be happy to go into the physical/mathematical explanation
of
> > beats and combinational tones for you, if you wish, or
alternatively
> > you could study the (separate) topics of beats and combination
tones
> > in _The Feynman Lectures on Physics_.
>
> Maybe. And Aristotle said the sun turns around the
earth...
> But, seriously, you know more about the different hearing theories
than me,
> but I have already followed discussions among professors who fought
for
> different hearing theories. These fights nearly ended in brawls.

That was not my concern in replying to this message, Werner. Rather,
I wanted to discuss beats and combinational tones, both of which are
among the most well-understood and least controversial aspects of
hearing.

> and the term "nonlinear" as the main
> explanation for combination tones is a very miserable invention.

Please explain this statement. "Nonlinear" has a very precise
mathematical meaning -- put simply, the function relating input to
output is not a straight line -- and I see nothing "miserable" about
it.
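This distinction can be checked numerically. In the sketch below (NumPy, with arbitrary illustrative frequencies and an arbitrary 0.2 distortion coefficient, not anything from the thread), the linear sum of two sines shows beats in its amplitude envelope but has no spectral energy at the difference frequency; passing the same signal through a quadratic nonlinearity creates a genuine difference tone there:

```python
import numpy as np

fs = 8000                     # sample rate (Hz)
t = np.arange(fs) / fs        # 1 second of signal -> 1 Hz FFT bins
f1, f2 = 440.0, 550.0         # two pure tones; difference frequency 110 Hz

x = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)   # linear sum: beats, no 110 Hz tone
y = x + 0.2 * x**2                                # simple quadratic nonlinearity

def amp_at(sig, freq):
    """Spectral magnitude at `freq` (bins are exactly 1 Hz wide here)."""
    spec = np.abs(np.fft.rfft(sig)) / len(sig)
    return spec[int(round(freq))]

print(amp_at(x, f2 - f1))     # essentially zero: the beat envelope is not a spectral component
print(amp_at(y, f2 - f1))     # clearly nonzero: a difference tone created by the nonlinearity
```

The quadratic term turns the cross product of the two sines into cos(2π(f2−f1)t) and cos(2π(f2+f1)t) components, which is exactly why a nonlinear response implies pressure modulation at the difference frequency while a linear one does not.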

Meanwhile, in case there's any confusion, the phenomenon of "virtual
pitch" or "fundamental tracking" is completely distinct from that of
combinational tones -- for example, a set of pure frequencies 420,
520, 620, 720 when played loudly into one or both ears will of course
produce combinational tones at 100 (and 200 . . .), but when played more
softly, you will only hear the "virtual pitch", which will be at
roughly 108.
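The spectral side of this example can be illustrated with a short sketch (NumPy; the quadratic distortion and its 0.2 coefficient are an illustrative stand-in for the ear's nonlinearity, not a model of it). The undistorted mixture of the four pure tones contains no energy at all below 420 Hz, so any pitch heard down there when the tones are soft is "virtual"; the distorted version acquires real difference tones at 100 and 200 Hz:

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                        # 1 s of signal -> 1 Hz FFT bins
freqs = [420.0, 520.0, 620.0, 720.0]
x = sum(np.sin(2*np.pi*f*t) for f in freqs)   # linear mixture: nothing below 420 Hz
y = x + 0.2 * x**2                            # illustrative quadratic distortion

spec = lambda s: np.abs(np.fft.rfft(s)) / len(s)
sx, sy = spec(x), spec(y)

print(sx[100], sx[200])   # both essentially zero in the linear mixture
print(sy[100], sy[200])   # difference tones at 100 and 200 Hz after distortion
```

The three frequency pairs spaced 100 Hz apart all contribute to the 100 Hz component, and the two pairs spaced 200 Hz apart to the 200 Hz component, which is why both combinational tones appear at once.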

🔗Werner Mohrlok <wmohrlok@hermode.com>

1/1/2004 7:16:51 AM

A happy new year to all.

Paul,
thank you for your patient answers.
These subjects are worth discussing carefully,
and you are right, there may still be some confusion.
But as I already said: I am short of time, and
answering sloppily would be impolite.
Therefore I will store this message and answer later.

Best
Werner
-----Original Message-----
From: Paul Erlich [mailto:paul@stretch-music.com]
Sent: Wednesday, 31 December 2003 22:10
To: tuning@yahoogroups.com
Subject: [tuning] Re: "Ortstheorie"
