
There is an easy solution to this problem....

🔗PageWizard, Magician of the Caverns <PageWizard17@aol.com>

8/8/2001 11:10:01 PM

First of all, it is ridiculous to expect the computer to guess what
you are thinking about before you play it on a keyboard. This would
be extremely difficult, if not impossible, to implement. The software
does not need to do this in order for the user to be able to
modulate. The software would calculate the ratios once the key(s)
are physically struck. The delay between the physical action and the
actual sound produced would simply be the time the software needs to
work out the correct ratios. Here is an example:

Normally, let's say, the tuning tables in the software are aligned
with a fixed frequency at a C note of 100 Hz, for instance. With any
tuning table, some reference standard must be set before all
subsequent notes can be played in relation to it. Even in 12 ET, the
identity of a played note depends on its relation to a reference
note, even if the reference is never played.
The software will store an accumulation bank of all previously
played notes, so that it will be able to use the bank to look up
relations between the notes. When given a certain scale formula in
ratios, the software will only output signals corresponding to
specific combinations of those ratios. If false signals are input,
the software will be able to detect that a modulation is occurring
and it will be able to adjust all other notes accordingly.
First, I will start with a transition involving simultaneous
chords. After this, I will talk about transitions involving
individual note progressions. I am sure you will agree that chordal
progressions are a much simpler matter than single note
progressions. You are correct in this, but you must remember that
scalar (single note) progressions have an order which depends on the
notes previously played. With the knowledge of the previously played
notes and the scale ratio table, the software will have a very good
idea of what frequency to produce when a subsequent physical note is
sounded.
Remember, our fixed note on the keyboard is an arbitrary "C" at 100
Hz. At this time, I physically play the pattern on the keyboard
which represents a C Major. I will later show a different transition
which does not rely on chords which include the reference frequency.
The C Major is represented as 1/1-5/4-3/2, or, in Hz, 100-125-150.
There is, of course, much more information here that the software
must recognize before it produces the final tonal combination. The
software must recognize the ratios within the chord, that is, the
ratios between every pair of points in it. Here is a list.

1/1 base (100)
-d (distance) to 5/4= +5/4 (plus 25)
-d to 3/2= +3/2 (plus 50)

5/4 base (125 as compared to 1/1)
-d to 1/1= -5/4 (minus 25)
-d to 3/2= +6/5 (plus 25)

3/2 base (150 as compared to 1/1)
-d to 1/1= -3/2 (minus 50)
-d to 5/4= -6/5 (minus 25)

This is a standard example of chord data involving all of the
possible relations of the individual tones of a chord to each
other. Even when this chord is written out, all of the ratios are
tied to the reference. The reference here is the one on which we base
our whole tuning archive. If this chord were some distance away
from the reference, the software would calculate each tone's distance
from the global reference and from that construct the chord's
internal relationships. I will get into this later.
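
Here is a minimal sketch, in Python, of the distance table above. The
distance_table helper and the fixed 100 Hz reference constant are
illustrative assumptions, not part of any existing program, and
descending intervals simply come out as reciprocals (4/5 rather than
"minus 5/4").

from fractions import Fraction
from itertools import permutations

REFERENCE_HZ = 100  # the arbitrary "C"

def distance_table(chord_ratios):
    # For every ordered pair of chord tones, give the interval between
    # them and the signed distance in Hz at the 100 Hz reference.
    rows = []
    for base, other in permutations(chord_ratios, 2):
        interval = other / base
        hz_distance = (other - base) * REFERENCE_HZ
        rows.append((base, other, interval, hz_distance))
    return rows

c_major = [Fraction(1, 1), Fraction(5, 4), Fraction(3, 2)]
for base, other, interval, hz in distance_table(c_major):
    print(base, "->", other, ": interval", interval, ",", hz, "Hz")
# e.g. 5/4 -> 3/2 : interval 6/5 , 25 Hz, matching the table above
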
First, let's make the transition from C Major to C# Major, since it
will offer a very good contrast between an erroneous result and a
valid result. First I will show what a false C# Major would look
like using the ratios based off of 100 Hz (arbitrary C).

INCORRECT PHYSICAL INPUT (TRUNCATED FOR CONVENIENCE)
False C#
16/15 4/3 8/5
106 2/3 133 1/3 160

After analyzing the input, the software accepts the frequencies of
106 2/3 and 133 1/3 as correct. The false fifth is 40 Hz narrow, so
the software moves the new fifth up to 200 Hz. The final signal is
then sent for transmission.

I will submit further information in the future involving single
note progressions. You may, though, be wondering how the software
would decide whether the C# is itself or one of its inversions since
any could be just as likely. The software will be able to solve this
problem since it makes all of its calculations based on an arbitrary
reference, such as the physical C key on the keyboard representing
100 Hz. All corresponding ratios of inputs will be based off of this.
The software will be able to see that (in this case) I am playing
ratios of 16/15, 4/3, and 8/5 in relation to 100 Hz. It will have
tables of all of the possible combinations of three notes in a given
scale system. This will allow it to narrow down possibilities. Each
chord has a distinct identity. Since there are only a limited number
of possible chord identities within a specific scale, the software
will be able to narrow down the possibilities using its bank of
relations between notes in a chord. Since 106 2/3-133 1/3-160 is not
a correct chord in the software's lists, it begins to narrow down
possibilities. Firstly, it will see that 106 2/3 to 133 1/3 is
a "major third" distance. There is only a limited number of
possibilities left for the last false note within this specific scale
system. I will have to do additional work on this, but I believe
that by using its table of scalar relations, and having already
obtained half of the chord's information, it will be able to realize
that 200 is the relation implied instead of 160. In order to do
this, 200 must be the likeliest choice out of all of the other
possibilities. On its own, the software cannot (globally) know which
of an infinitude of possible tunings you intend for the third note
played (since no physical note is ever fixed except the arbitrary C
at 100 Hz). The following steps, theoretically, will allow the
software to make a good choice (a rough sketch in code follows the
list):

1) The number of pure ratios in the scale system and the number of
notes physically played give the software a good idea of where to
look. It knows there are only 12 notes in a set of this system
(within an octave), and it knows that only 3 entities are being
played at once.

2) The played entities' relation to the global reference of 100 Hz
will allow the software to construct a draft table of this new
chord's identity.

3) This draft table is searched for relations which are true within
the ratios allowed in this system. Any true ratios are saved.

4) The software uses the dyad (in this case) of a third distance to
narrow down its possibilities to a possible identity for the last
note.

5) Since the table only has a limited number of possible
alternatives when given the previous information, it will likely
transform the false 160 into the correct alternative of 200 Hz.
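
Below is the rough sketch in code promised above, in Python. The
12-note scale, the chord catalogue built from three-note
combinations, and the "shares a dyad" matching rule are all
assumptions made for illustration; this is not a description of any
existing adaptive-tuning program, and the estimation rule is
deliberately crude.

from fractions import Fraction as F
from itertools import combinations

# The set of pure ratios per octave (an assumed 12-note just scale).
SCALE = [F(1), F(16, 15), F(9, 8), F(6, 5), F(5, 4), F(4, 3),
         F(45, 32), F(3, 2), F(8, 5), F(5, 3), F(9, 5), F(15, 8)]

def interval_pattern(chord):
    # The successive ratios between the sorted tones -- the chord's "identity".
    chord = sorted(chord)
    return tuple(hi / lo for lo, hi in zip(chord, chord[1:]))

# A bank of chord identities: every three-note combination the scale allows.
CATALOGUE = {interval_pattern(c): c for c in combinations(SCALE, 3)}

def estimate(played):
    # Keep the input if its identity is already in the bank; otherwise
    # return a catalogued chord that shares a dyad with it (a crude
    # stand-in for the "estimation" of steps 4 and 5).
    if interval_pattern(played) in CATALOGUE:
        return sorted(played)
    played_dyads = {interval_pattern(d) for d in combinations(played, 2)}
    for pattern, chord in CATALOGUE.items():
        chord_dyads = {interval_pattern(d) for d in combinations(chord, 2)}
        if played_dyads & chord_dyads:
            return sorted(chord)
    return sorted(played)  # nothing better found; leave the input alone

print(estimate([F(16, 15), F(4, 3), F(8, 5)]))

Note that this sketch keeps the example chord unchanged, since
16/15 : 4/3 : 8/5 already has the 4:5:6 identity; that point is
taken up in the reply below.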

I will have to work out further details, especially in the last step
of "estimation." The effectiveness of this step will primarily depend
on the software's ability to narrow down possibilities in order to
make the correct guess every time. If there is any more information
that would prove helpful in this case, please let me know.
Single note progressions will be described in a later message. I
would appreciate your help in working together to design software
which can realize this revolutionary technology effectively. Thank you.

Sincerely,
PageWizard

🔗carl@lumma.org

8/9/2001 12:29:56 AM

> First of all, it is ridiculous to expect the computer to guess
> what you are thinking about before you play it on a keyboard.

Agreed.

> The software does not need to do this in order for the user to be
> able to modulate.

I am aware of the general concept of key-guessing. I have not
heard of any attempts to anticipate harmony before it is played.
When I said that I don't want the computer guessing what I
played, I meant this for the following reason: I want to choose
the modulations myself (believe it or not, there is more than
one type of modulation). I want to choose exactly everything
that happens, specify it in a score or with my hands... If the
computer does this according to some fixed algorithm, I haven't
gained any new musical resources, I've simply made my existing
music sound better... A great idea, but not the particular path
I'm on.

> The software would calculate the ratios once the key(s) are
> physically struck. The delay between the physical action and the
> actual sound produced would simply be the time the software needs
> to work out the correct ratios.

It's actually not easy to sit between a synth and a midi
controller without introducing a noticeable delay in an
instrument's response. You need to get it below 5 ms, really.
Doable, but not off-the-shelf easy. You can get away with
more delay, as they say, but I wouldn't enjoy playing that way.

> Even in 12 ET, the identity of a played note depends on its
> relation to a reference note, even if the reference is never
> played.

Sorry, I lost you here.

> The software will store an accumulation bank between all
> previously played notes, so that it will be able to use the
> bank to access relations between the notes. When given a
> certain scale formula in ratios, the software will only output
> signals corresponding to specific combinations of those ratios.
> If false signals are input, the software will be able to
> detect that a modulation is occurring and it will be able to
> adjust all other notes accordingly.
>
> First, I will start with a transition involving simultaneous
> chords. After this, I will talk about transitions involving
> individual note progressions. I am sure you will agree that
> chordal progressions are a much simpler matter than single note
> progressions.

By single note progressions, you mean melodies? I do agree.

> You are correct in this, but you must remember that scalar (single
> note) progressions have an order which depends on the notes
> previously played.

This is the part I don't get.

> First, let's make the transition from C Major to C# Major, since it
> will offer a very good contrast between an erroneous result and a
> valid result. First I will show what a false C# Major would look
> like using the ratios based off of 100 Hz (arbitrary C).
>
> INCORRECT PHYSICAL INPUT (TRUNCATED FOR CONVENIENCE)
> False C#
> 16/15 4/3 8/5
> 106 2/3 133 1/3 160
>
> After analyzing the input, the software accepts the frequencies of
> 106 2/3 and 133 1/3 as correct. The false fifth is 40 Hz narrow, so
> the software moves the new fifth up to 200 Hz. The final signal is
> then sent for transmission.

Actually, the ratios shown do form a major triad with consonant
proportions 4:5:6, just like the one on C major did. You've made
an error -- the interval from 106 2/3 Hz to 160 Hz is a just 3:2
"fifth". (Over a common denominator of 15, 16/15 : 4/3 : 8/5 is
16 : 20 : 24, that is, 4 : 5 : 6, and (8/5)/(16/15) = 3/2 exactly.)

> I will submit further information in the future involving single
> note progressions. You may, though, be wondering how the software
> would decide whether the C# is itself or one of its inversions
> since any could be just as likely.

Sounds like you're thinking about the famous "comma" problem?

> The software will be able to solve this problem since it makes
> all of its calculations based on an arbitrary reference, such as
> the physical C key on the keyboard representing 100 Hz. All
> corresponding ratios of inputs will be based off of this.
> The software will be able to see that (in this case) I am playing
> ratios of 16/15, 4/3, and 8/5 in relation to 100 Hz. It will have
> tables of all of the possible combinations of three notes in a
> given scale system. This will allow it to narrow down
> possibilities. Each chord has a distinct identity. Since there
> are only a limited number of possible chord identities within a
> specific scale, the software will be able to narrow down the
> possibilities using its bank of relations between notes in a
> chord.
/.../
> I would appreciate your help in working together to design
> software which can realize this revolutionary technology effectively.
> Thank you.

I'd love to help out, but I've already got my hands full with
my own projects. I encourage you to develop your ideas and
make them a reality!

-Carl

🔗jpehrson@rcn.com

8/9/2001 8:08:46 AM

--- In tuning@y..., "PageWizard, Magician of the Caverns"

/tuning/topicId_26809.html#26809

>
> I will have to work out further details, especially in the last
> step of "estimation." The effectiveness of this step will primarily
> depend on the software's ability to narrow down possibilities in
> order to make the correct guess every time. If there is any more
> information that would prove helpful in this case, please let me
> know. Single note progressions will be described in a later
> message. I would appreciate your help in working together to design
> software which can realize this revolutionary technology
> effectively. Thank you.
>
> Sincerely,
> PageWizard

I believe John deLaubenfels is really the person to discuss this
further... I think he's shown that "adaptive just intonation" is
really quite a bit more complex than some of the examples you have
been giving... it's really a rather involved problem, and he goes
quite a way in solving it...

However, John doesn't attempt to do this all in "real time..."

Is that really possible.... I would have my doubts...

______________ ____________ _________
Joseph Pehrson

🔗Robert Walker <robertwalker@ntlworld.com>

11/9/2001 7:48:50 AM

Hi PageWizard,

I'm sure John can give more detailed examples, but here is a simple
one (of my own) to show where software can easily get confused without
look ahead.

You've been playing consonant triads for a while appropriate for
j.i. scale of
1 16/15 9/8 6/5 5/4 4/3 45/32 3/2 8/5 5/3 9/5 15/8 2

with 1/1 = c
Here the 8/5 is the Gb, a major third below the C=2/1

Now without preparation you play a note in the vicinity of Gb or F#

How can the software tell if you intend a Gb = 8/5, or an F# = 25/16
which is 5/4 above the E?

If you play it simultaneously with an E then it will know that you mean
the 25/16. But suppose you play the E a little after it (intending it
to be sim. perhaps, but on a microsecond level it isn't).

Or, suppose you just play a sustained F# and then introduce the E later.

How can the software tell, without anticipating what you are going to
do next, whether you want an F# or a Gb?

Let's suppose it decides on an F#. What does it do if you were to play
a C next instead? It would have to slide the F# to a Gb at that point.
Or, play the C at a 5/4 above the F# at 125/64, and at some later point
slide that up to the 2/1.
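
As a quick check of the sizes involved, here are a few lines of
Python (the cents helper is just for illustration): the two candidate
tunings differ by 128/125, about 41 cents, and the later slide of the
C from 125/64 up to 2/1 is that same small interval.

from fractions import Fraction as F
from math import log2

def cents(ratio):
    return 1200 * log2(ratio)

print(cents(F(8, 5) / F(25, 16)))    # (8/5)/(25/16) = 128/125, about 41 cents
print(cents(F(2, 1) / F(125, 64)))   # the later slide of the C, also 128/125
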

So clearly it is going to make "mistakes" that a leisure time retuning
program wouldn't make.

To minimise those mistakes, you have to find a way to get the program
to try to anticipate what is most likely to be played next, which
is hard. It probably can be done, but chances are you'll always find a composer
who delights in breaking whatever rules it is using to do that.

Thinking this over a bit more thoroughly I see that even a half second
delay isn't really going to help. You'll want a two or three bars
delay prob. in many cases, between playing the notes and hearing them!

Do say if you see any way round this kind of an example.

Robert

🔗Robert Walker <robertwalker@ntlworld.com>

11/9/2001 7:56:28 AM

Hi PageWizard,

Of course, in that post, Gb should read Ab and F# should read G#,
sorry.

Robert

🔗carl@lumma.org

8/9/2001 11:52:53 AM

> I believe John deLaubenfels is really the person to discuss this
> further... I think he's shown that "adaptive just intonation" is
> really quite a bit more complex than some of the examples you have
> been giving... it's really a rather involved problem, and he goes
> quite a way in solving it...

True.

> However, John doesn't attempt to do this all in "real time..."

An early version of his software, called JI Relay, did.

> Is that really possible.... I would have my doubts...

It isn't possible to do drift control in the same way as
John's latest stuff does, since it uses knowledge of where
the music is going. But it is possible to sit between a
midi stream and a synth and do things, as PageWizard suggests.

-Carl

🔗carl@lumma.org

8/9/2001 1:29:03 PM

> You've been playing consonant triads for a while appropriate for
> j.i. scale of
> 1 16/15 9/8 6/5 5/4 4/3 45/32 3/2 8/5 5/3 9/5 15/8 2
>
> with 1/1 = c
> Here the 8/5 is the Gb, a major third below the C=2/1
>
> Now without preparation you play a note in the vicinity of Gb or F#
>
> How can the software tell if you intend a Gb = 8/5, or an F# = 25/16
> which is 5/4 above the E?

I can see two possibilities...

1. use the last tuning used for that note
2. use the tuning in the last key you were
known to be in (ie 8/5 in C)

...if there is no previous information -- the note is out of the
blue -- it can choose randomly, or perhaps the performer is to
specify a default key before he begins playing.
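
A bare-bones sketch of those two rules plus the fall-back, in
Python; the class name, the key-table format, and the note labels
are all made up for illustration.

from fractions import Fraction as F
import random

class InitialTuning:
    def __init__(self, key_table=None):
        self.last_used = {}                # rule 1: last tuning used per note
        self.key_table = key_table or {}   # rule 2: tuning in the last known key

    def choose(self, note_name, candidates):
        if note_name in self.last_used:
            return self.last_used[note_name]
        if note_name in self.key_table:
            return self.key_table[note_name]
        return random.choice(candidates)   # out of the blue: pick one at random

    def remember(self, note_name, ratio):
        self.last_used[note_name] = ratio

tuner = InitialTuning(key_table={"Gb/F#": F(8, 5)})   # "8/5 in C"
print(tuner.choose("Gb/F#", [F(8, 5), F(25, 16)]))    # 8/5, taken from the key table
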

> If you play it simultaneously with an E then it will know that you
> mean the 25/16. But suppose you play the E a little after it
> (intending it to be sim. perhaps, but on microsecond level it
> isn't).

The note can be bent into tune once the simultaneity is recognized,
or left as it is until it is replayed. Both options are cool --
bending sounds cool, and so do the "krunchy" (Keenan Pepper's term)
suspended sonorities you get if the old note is frozen until its
next note-on.

> How can the software tell, without anticipating what you are going
> to do next, whether you want an F# or a Gb?

It has a notion of what key you're in. Besides just making sure
chords are as in-tune as it can make them, it assumes you're composing
in the diatonic scale, more or less, and maintains a persistent notion
of key. It takes a certain amount of something to change this notion,
the exact amount and type of this something being up to the
programmer. I can get code from Stephen if anybody's interested.

I should also say I suspect software _without_ this feature would also
be worth having. It just bends chords, and has no persistent vision
at all.

> So clearly it is going to make "mistakes" that a leisure time
> retuning program wouldn't make.

Yes.

> To minimise those mistakes, you have to find a way to get the
> program to try to anticipate what is most likely to be played
> next, which is hard. It probably can be done, but chances are
> you'll always find a composer who delights in breaking whatever
> rules it is using to do that.

Sure.

> Thinking this over a bit more thoroughly I see that even a half
> second delay isn't really going to help. You'll want a two or
> three bars delay prob. in many cases, between playing the notes
> and hearing them!

That's leisure time. I wouldn't call something real-time until
it's quicker than .1 seconds, I wouldn't call something good
real-time until it's faster than .01 seconds, and I wouldn't call
something excellent real-time until it's faster than .005 seconds.
Kyma should be able to do this.

> Do say if you see any way round this kind of an example.

I'm sure there are ways around it that I haven't thought of, but
the thing I'd like to drive home is that they all result in
compositional resources not substantially greater than 12-tET.
Your music will sound great, and there will be all sorts of
microtonal action to follow, but this action is not precisely
under the control of the composer. For that, we need a guitar
with more frets, a keyboard with more keys, or a different
system of notation.

-Carl

🔗Robert Walker <robertwalker@ntlworld.com>

11/10/2001 2:09:24 AM

Hi PageWizard, and Carl,

PageWizard wrote:

....
> We need to realize that for two different things, there must be two
> different keys on the keyboard or apparatus altogether. We must
> design a new instrument of the future which allows for these
> different identities. I need to, first of all, figure out how many
> different identities are there. Equal temperament only disregarded
> the problem by cramping these dual identities into one for
> convenience and impurity. In this case the 8/5 and the 5/4 are the
> same note really. Two separate identities really are not the same,
> and until we realize that we will not have purity without each
> separate identity, then we will never have purity. ET is not purity,
> it is a compromise which minimizes note numbers.

I found Margo Schulter's article on the development of the 12-tone
system helpful, with its perspective that Bb came about as a result
of splitting the note B into two notes, one reached by a fifth from
above, and one by a fifth from below.

You can find her article in the faq linked to from
/tuning2

As of writing it is still available at nbci, but
will need to be moved soon, prob. to v3space.
At present, the url for the article is:
http://members.nbci.com/_XMCM/tune_smithy/tree/on_site_tree/margoschulter/Why_12_notes_as_one_attractive_arrangement.html

So now one is doing the same with thirds, and getting two notes for Ab / G#,
one reached by two thirds from below, and one by a single third from above.

I know that there are keyboards with split keys to help play, e.g.
19-tet, or 31-tet. Not sure, but think I remember reading that it
is also done for quarter comma meantone, skipping round the wolf fifth by
splitting a key.

Also know of a split-key Pythagorean 17-tone system, described by
15th-century theorists, that Margo writes about:
1/1 256/243 2187/2048 9/8 32/27 19683/16384 81/64 4/3 1024/729 729/512
3/2 128/81 6561/4096 27/16 16/9 59049/32768 243/128 2/1

See
http://www.medieval.org/emfaq/harmony/pyth4.html
section 4.5.
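
(For what it's worth, those seventeen ratios are simply a chain of
seventeen pure fifths reduced into the octave; a few lines of Python
will reproduce the list, though the particular range of fifths used
here is just chosen to match it.)

from fractions import Fraction as F

def reduce_to_octave(r):
    while r >= 2:
        r /= 2
    while r < 1:
        r *= 2
    return r

fifths = sorted(reduce_to_octave(F(3, 2) ** k) for k in range(-6, 11))
print(", ".join(str(r) for r in fifths))   # 1, 256/243, 2187/2048, 9/8, ...
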

However, and this is a question for the list as a whole too:

Has anyone devised, say, a 17 or 19 tone scale for a split
key arrangement with just intonation ratios, targeting the
simpler ratios for triads, like 5/4, 6/5, etc?

If so, would be nice to add to FTS midi in relaying presets
- I've added the C 15 pyth. one there. (It's set up so that one
can select either note of a "split key" by pressing the
sustain pedal, or alternatively, caps lock key on keyboard
etc, just before playing the note).

Seems to me this is somewhat the direction you are considering
at the moment.

I know also of a scale by Erv Wilson that is made by stacking
1/1 5/4 3/2 on top of itself until you get 17 notes and
reducing it into the octave, which we discussed on this
list a little while back. It's not quite the same thing
though, as one is stacking triads, rather than 5/4s.

wilson_17.scl | Wilson's 17-tone 5-limit scale

1/1 135/128 10/9 9/8 1215/1024 5/4 81/64 4/3 45/32 729/512 3/2
405/256 5/3 27/16 16/9 15/8 243/128 2/1
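
Here is one reading of that construction, sketched in Python: stack
the 1/1-5/4-3/2 triad on a chain of fifths running both below and
above the 1/1, then reduce everything into the octave. The particular
choice of two fifths down and five up is my own assumption, but it
happens to reproduce the wilson_17.scl ratios listed above exactly.

from fractions import Fraction as F

def reduce_to_octave(r):
    while r >= 2:
        r /= 2
    while r < 1:
        r *= 2
    return r

roots = [reduce_to_octave(F(3, 2) ** k) for k in range(-2, 6)]   # 16/9 up to 243/128
notes = {reduce_to_octave(root * step) for root in roots
         for step in (F(1), F(5, 4), F(3, 2))}
print(", ".join(str(n) for n in sorted(notes)))   # the 17 ratios listed above
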

Carl wrote:

> > You've been playing consonant triads for a while appropriate for
> > j.i. scale of
> > 1 16/15 9/8 6/5 5/4 4/3 45/32 3/2 8/5 5/3 9/5 15/8 2
> >
> > with 1/1 = c
> > Here the 8/5 is the Gb, a major third below the C=2/1
> >
> > Now without preparation you play a note in the vicinity of Gb or F#
> >
> > How can the software tell if you intend a Gb = 8/5, or an F# = 25/16
> > which is 5/4 above the E?
>
> I can see two possibilities...
>
> 1. use the last tuning used for that note
> 2. use the tuning in the last key you were
> known to be in (ie 8/5 in C)
>
> ...if there is no previous information -- the note is out of the
> blue -- it can choose randomly, or perhaps the performer is to
> specify a default key before he begins playing.
>
> > If you play it simultaneously with an E then it will know that you
> > mean the 25/16. But suppose you play the E a little after it
> > (intending it to be sim. perhaps, but on microsecond level it
> > isn't).
>
> The note can be bent into tune once the simultaneity is recognized,
> or left as it is until it is replayed. Both options are cool --
> bending sounds cool, and so do the "krunchy" (Keenan Pepper's term)
> suspended sonorities you get if the old note is frozen until its
> next note-on.
>

Yes, can sound nice. However, it can also sound just as if it is a
mistake being corrected. One would have to see which way it turned out.

> > Thinking this over a bit more thoroughly I see that even a half
> > second delay isn't really going to help. You'll want a two or
> > three bars delay prob. in many cases, between playing the notes
> > and hearing them!
>
> That's leisure time. I wouldn't call something real-time until
> it's quicker than .1 seconds, I wouldn't call something good
> real-time until it's faster than .01 seconds, and I wouldn't call
> something excellent real-time until it's faster than .005 seconds.
> Kyma should be able to do this.
>

Yes, I agree. I think, say, 0.2 seconds or even 0.5 secs delay
could be got used to if it is the only way to achieve j.i. chords.
If software needs to look ahead for notes of a chord perceived as simultaneous
(or very slightly arpeggiated) then there's no choice, as the time
is set by the amount of inevitable raggedness, or acceptable
arpeggiation that one gets in human performances, and isn't
a limitation of the software as such.

However, I think it most likely would be pretty hard to get used to playing
music and hearing it two bars later!

Maybe could be done,... I wonder.

Robert

🔗jpehrson@rcn.com

8/10/2001 6:29:05 AM

--- In tuning@y..., "Robert Walker" <robertwalker@n...> wrote:

/tuning/topicId_26809.html#26823

> Hi PageWizard,
>
> I'm sure John can give more detailed examples, but here is a simple
> one (of my own) to show where software can easily get confused
> without look ahead.
>
> You've been playing consonant triads for a while appropriate for
> j.i. scale of
> 1 16/15 9/8 6/5 5/4 4/3 45/32 3/2 8/5 5/3 9/5 15/8 2
>
> with 1/1 = c
> Here the 8/5 is the Gb, a major third below the C=2/1
>
> Now without preparation you play a note in the vicinity of Gb or F#
>
> How can the software tell if you intend a Gb = 8/5, or an F# = 25/16
> which is 5/4 above the E?
>
> If you play it simultaneously with an E then it will know that you
> mean the 25/16. But suppose you play the E a little after it
> (intending it to be sim. perhaps, but on a microsecond level it
> isn't).
>
> Or, suppose you just play a sustained F# and then introduce the E
> later.
>
> How can the software tell, without anticipating what you are going
> to do next, whether you want an F# or a Gb?
>
> Let's suppose it decides on an F#. What does it do if you were to
> play a C next instead? It would have to slide the F# to a Gb at
> that point. Or, play the C at a 5/4 above the F# at 125/64, and at
> some later point slide that up to the 2/1.
>
> So clearly it is going to make "mistakes" that a leisure time
> retuning program wouldn't make.
>
> To minimise those mistakes, you have to find a way to get the
> program to try to anticipate what is most likely to be played next,
> which is hard. It probably can be done, but chances are you'll
> always find a composer who delights in breaking whatever rules it
> is using to do that.
>
> Thinking this over a bit more thoroughly I see that even a half
> second delay isn't really going to help. You'll want a two or three
> bars delay prob. in many cases, between playing the notes and
> hearing them!
>
> Do say if you see any way round this kind of an example.
>
> Robert

Thanks, Robert for your clear example of this. I actually understood
the entire post (!) :)

Is this the reason that John deLaubenfels abandoned "real time" just
intonation tuning??

*Did* he totally abandon that?? John??

___________ ________ ________
Joseph Pehrson

🔗jpehrson@rcn.com

8/10/2001 6:34:31 AM

--- In tuning@y..., carl@l... wrote:

/tuning/topicId_26809.html#26827

> > I believe John deLaubenfels is really the person to discuss this
> > further... I think he's shown that "adaptive just intonation" is
> > really quite a bit more complex than some of the examples you have
> > been giving... it's really a rather involved problem, and he goes
> > quite a way in solving it...
>
> True.
>
> > However, John doesn't attempt to do this all in "real time..."
>
> An early version of his software, called JI Relay, did.
>
> > Is that really possible.... I would have my doubts...
>
> It isn't possible to do drift control in the same way as
> John's latest stuff does, since it uses knowledge of where
> the music is going. But it is possible to sit between a
> midi stream and a synth and do things, as PageWizard suggests.
>
> -Carl

Hi Carl...

But Robert Walker just said that THREE MEASURES were necessary...

One couldn't play an instrument in "real time" like that, could one??

____________ _______ _______
Joseph Pehrson

🔗carl@lumma.org

8/10/2001 11:24:37 AM

>> That's leisure time. I wouldn't call something real-time until
>> it's quicker than .1 seconds, I wouldn't call something good
>> real-time until it's faster than .01 seconds, and I wouldn't call
>> something excellent real-time until it's faster than .005 seconds.
>> Kyma should be able to do this.
>
> Yes, I agree. I think, say, 0.2 seconds or even 0.5 secs delay
> could be got used to if it is the only way to achieve j.i. chords.
> If software needs to look ahead for notes of chord perceived as
> simultaneous (or very slightly arpeggiated) then there's no
> choice, as the time is set by the amount of inevitable raggedness,
> or acceptable arpeggiation that one gets in human performances,
> and isn't a limitation of the software as such.
>
> However, I think it most likely would be pretty hard to get used
> to playing music and hearing it two bars later!
>
> Maybe could be done,... I wonder.

It could probably be done, but it wouldn't be easy, and the work
would just be to get as good as you would be normally. I'd
rather work to get somewhere better!

Fortunately, I think .005 should be quite possible. The thing I
think you're missing is that the program doesn't have to wait for
near simultaneous notes any more than it has to wait for distantly
simultaneous notes... it just keeps a record of the last used
pitch for each note, starts there, and then bends it as soon as
other notes come in, if necessary. It can also try key-guessing,
and take the initial value for a note from the key it thinks
you're in rather than just using the previous value of the pitch.
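
Here is a very small sketch, in Python, of that "start from the last
used pitch, then bend when other notes arrive" idea. The class, the
consonance list, and the nearest-ratio rule are all illustrative
assumptions, not JI Relay or any other existing program, and the
bending rule is deliberately crude.

from fractions import Fraction as F
from math import log2

CONSONANCES = [F(1), F(6, 5), F(5, 4), F(4, 3), F(3, 2), F(8, 5), F(5, 3), F(2)]

def cents(x):
    return 1200 * log2(x)

class BendRetuner:
    def __init__(self):
        self.last_pitch = {}   # keyboard key -> last frequency used for it
        self.sounding = {}     # keyboard key -> frequency currently sounding

    def note_on(self, key, default_hz):
        self.sounding[key] = self.last_pitch.get(key, default_hz)
        self._reconcile()                        # bend sounding notes if necessary
        self.last_pitch[key] = self.sounding[key]
        return self.sounding[key]

    def note_off(self, key):
        self.sounding.pop(key, None)

    def _reconcile(self):
        # Bend every sounding note so that its interval above the lowest
        # sounding note is the nearest ratio in CONSONANCES.
        if len(self.sounding) < 2:
            return
        root = min(self.sounding, key=self.sounding.get)
        root_hz = self.sounding[root]
        for key, hz in self.sounding.items():
            if key != root:
                nearest = min(CONSONANCES,
                              key=lambda r: abs(cents(hz / root_hz / r)))
                self.sounding[key] = root_hz * float(nearest)

retuner = BendRetuner()
retuner.note_on("C", 100.0)
print(retuner.note_on("E", 126.0))   # bent to 125.0, a just 5/4 above the C

A real interposer would also have to turn these frequency changes
into pitch-bend messages on separate MIDI channels, but the
bookkeeping would presumably look much like this.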

Re your question on whether anyone has come up with a JI scale with
17 or 19 notes... it takes 18 notes to play in every key of
the classical 5-limit diatonic scale, and I have a keyboard
mapping for this in the files section under "Carl". Notice that
each instance of the scale has uniform fingering.

-Carl