
Optimal octave stretching

🔗Graham Breed <graham@microtonal.co.uk>

4/12/2004 10:12:48 AM

I've been looking at the RMS weighted error of equal temperaments. A weighted RMS for an unbounded set of intervals is more appropriate than the worst weighted error used by TOP. The worst error is used to ensure that none of the intervals you plan to use fall outside an acceptable range of mistuning. The more complex and less frequently used an interval is, the more important that it lie within this range if it is to be heard as a consonance. Dissonance curves of whatever kind also show larger basins for simpler consonances, so that the range of mistuning is likely to be smaller for more complex intervals. These factors together conspire to make the most complex intervals the ones that dominate the result, so that if there's no cutoff the result will not converge. Hence you decide in advance where the cutoff should be.

The RMS error, on the other hand, gives the average pain associated with a mistuning. In this case, it is appropriate to give simple or common intervals (generally the two will be the same) a higher weighting because their mistuning will lead to greater pain. It doesn't matter if an interval is classed as a consonance or not, because its presence can still take pain away from a chord. Ideally, we'd only consider intervals that pass a worst-error threshold, but for simplicity's sake an unbounded set can be considered.

It would be nice if a particular error converged for some set of intervals as its size approached infinity -- say an integer limit containing only the primes we're interested in. Unfortunately, I can't find one so I'll use the simplest error for a set of primes -- the weighted RMS of the prime intervals. Adding or averaging the errors of the primes is usually the first thing people think of. We tell them they're wrong because they don't take account of things like 15:8 being more complex than 6:5. This objection doesn't apply when you consider 2 on a par with other prime numbers, because then 15:8 becomes naturally more complex than 6:5. A temperament, such as 19-equal in the 5-limit, where the errors of 5:4 and 3:2 cancel out in 6:5, will be a good fit for octave stretching. It will then have a naturally reduced prime error.

The obvious weighting to use is the size of the prime interval in octaves. That should give an indication of the average Tenney-weighted error for an arbitrary set of intervals. This weighting essentially ensures that prime and composite numbers are treated equally. If you like, you can set a weighting such that high primes have a much smaller weight than low ones, so that you don't need to specify the prime limit.

This is all a bit arbitrary, but so is any algorithm in the absence of sound, empirical data on the strength and tolerance of mistuning for each interval. This is a particular problem for octave-specific measures, because interval size and perceptual octave stretching come into play. So we may as well stick with the simplest method if we're going to bother at all.

The weighted, square error for an equally tempered interval is given by

[(km - p)w]**2

where

k is the size of a scale step
m is the number of scale steps to this tempered interval
p is the untempered pitch difference of this interval
w is the weight given to this interval
**2 is "squared"

For weighting by interval size, w=1/p, so

[(km - p)/p]**2 = (km/p - 1)**2

The mean squared error is then

Avg[(km/p - 1)**2]

over m and p for all primes. Setting x=m/p to be the ideal number of steps to a just octave for each prime, that becomes

Avg[(kx - 1)**2] = Avg[(kx)**2 - 2kx + 1]

The optimum value for k is found by setting the derivative with respect to k equal to zero, so

Avg[2k(x**2) - 2x] = 0
k = Avg(x)/Avg(x**2)

Then, rearranging the formula for the mean squared error, and plugging in this optimum step size

Avg[(kx)**2 - 2kx + 1]
= k**2 Avg(x**2) - 2k Avg(x) + 1
= [Avg(x)/Avg(x**2)]**2 Avg(x**2) - 2[Avg(x)/Avg(x**2)] Avg(x) + 1
= Avg(x)**2 / Avg(x**2) - 2[Avg(x)**2]/Avg(x**2) + 1
= 1 - Avg(x)**2/Avg(x**2)

So the RMS error is

Sqrt[1 - Avg(x)**2/Avg(x**2)]

This is quite similar to the population standard deviation of {x} (the n-divisor form, which is the one you shouldn't use for error estimation from a sample):

STD(x) = Sqrt[Avg(x**2) - Avg(x)**2]

So you could write the RMS error as STD(x)/Sqrt(Avg(x**2)) if you happen to have a convenient way of calculating the standard deviation. As each x will be close to n, the number of steps to a tempered octave, you can approximate the RMS error as STD(x)/n.
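To make the recipe concrete, here's a minimal standalone sketch of these formulas in Python. It isn't the temper.py code; the function name and the nearest-step mapping of each prime are my own assumptions.

from math import log, sqrt

def pormswe(n, primes=(2, 3, 5)):
    # Prime, optimum, RMS, weighted error of n-equal, plus the optimal
    # octave stretch in cents.  A sketch of the formulas above only;
    # the real calculation lives in temper.py.
    # x = m/p: steps to each prime (nearest-step mapping assumed)
    # divided by the prime's just size in octaves.
    x = [round(n * log(q, 2)) / log(q, 2) for q in primes]
    mean = sum(x) / len(x)
    mean_sq = sum(xi * xi for xi in x) / len(x)
    k = mean / mean_sq                       # optimal step size, in octaves
    error = sqrt(1 - mean * mean / mean_sq)  # RMS weighted error
    stretch = (n * k - 1) * 1200             # octave stretch, in cents
    return error, stretch

print(pormswe(12))   # should come out near (0.0026, -1.56), matching the 12-equal output below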

Anyway, I've adapted my python module at

http://x31eq.com/temper.py

to do these calculations. Here are some examples:

>>> temper.PrimeET(12, temper.primes[:2]).getPORMSWE()
0.0025886343681387008
>>> (temper.PrimeET(12, temper.primes[:2]).getPORMSWEStretch()-1)*1200
-1.5596534250319039

That means 5-limit 12-equal has a prime, optimum, RMS, weighted error of around 0.003. This is a dimensionless value hopefully comparable to the TOP error. The optimum octave is flattened by around 1.6 cents.

>>> temper.PrimeET(19, temper.primes[:2]).getPORMSWE()
0.0015921986407487665
>>> (temper.PrimeET(19, temper.primes[:2]).getPORMSWEStretch()-1)*1200
2.5780456079649738
>>> temper.PrimeET(22, temper.primes[:2]).getPORMSWE()
0.0022460185834616815
>>> (temper.PrimeET(22, temper.primes[:2]).getPORMSWEStretch()-1)*1200
-0.86081876412746894
>>> temper.PrimeET(29, temper.primes[:2]).getPORMSWE()
0.0025604733781234741
>>> (temper.PrimeET(29, temper.primes[:2]).getPORMSWEStretch()-1)*1200
1.6758871121345997
>>> temper.PrimeET(31, temper.primes[:2]).getPORMSWE()
0.0013562866803350085
>>> (temper.PrimeET(31, temper.primes[:2]).getPORMSWEStretch()-1)*1200
0.9757470533824808
>>> temper.PrimeET(50, temper.primes[:2]).getPORMSWE()
0.0013261119467051412
>>> (temper.PrimeET(50, temper.primes[:2]).getPORMSWEStretch()-1)*1200
1.5845318713727963

50-equal is probably close to the RMS meantone optimum, so the stretch of 1.6 cents is refreshingly close to the 1.7 cents Gene gave for TOP meantone on metatuning. In fact, that close agreement is something of a coincidence, because stretched 31-equal is closer to the TOP meantone, and 81 is closer to the meantone PORMSWE:

>>> temper.PrimeET(81, temper.primes[:2]).getPORMSWE()
0.0013189616858225524
>>> (temper.PrimeET(81, temper.primes[:2]).getPORMSWEStretch()-1)*1200
1.3515272079124507

But, anyway, the stretching is of the same order of magnitude.

I would do more comparisons, but I haven't implemented the TOP optimization yet. For that matter, I can only do it for 3-limit equal temperaments, 5-limit linear temperaments, 7-limit planar temperaments, etc. I'm sure I could work out how to do PORMSWE for linear temperaments, but I haven't done so yet. So for now, it's a case of finding a representative equal temperament.

Guessing the ennealimmal optimum is difficult, because it seems to lie close to the point where the octave goes from being sharp to flat. But it looks close to 612-equal, with the stretch stable either side.

>>> (temper.PrimeET(612, temper.primes[:3]).getPORMSWEStretch()-1)*1200
0.020981278370690859

That's the same order of magnitude as the TOP optimum Gene gave of 0.036 cents.

Graham

🔗Gene Ward Smith <gwsmith@svpal.org>

4/12/2004 11:26:51 AM

--- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:

> It would be nice if a particular error converged for some set of
> intervals as its size approached infinity -- say an integer limit
> containing only the primes we're interested in. Unfortunately, I can't
> find one so I'll use the simplest error for a set of primes -- the
> weighted RMS of the prime intervals.

Zeta tuning works along the lines you want here. A Python script
which found the Zeta tuning might be a bit of a pain to write,
though, and it only works for rank one (equal or "dimension zero")
temperaments.

> 50-equal is probably close to the RMS meantone optimum, so the stretch
> of 1.6 cents is refreshingly close to the 1.7 cents Gene gave for TOP
> meantone on metatuning.

It would be interesting to find the Zeta tunings for comparison; I'll
need to move to the Linux side to do that.

> Guessing the ennealimmal optimum is difficult, because it seems to
> lie close to the point where the octave goes from being sharp to flat.
> But it looks close to 612-equal, with the stretch stable either side.

You could try 441 also.

🔗Graham Breed <graham@microtonal.co.uk>

4/12/2004 11:53:53 AM

Gene Ward Smith wrote:

> Zeta tuning works along the lines you want here. A Python script
> which found the Zeta tuning might be a bit of a pain to write,
> though, and it only works for rank one (equal or "dimension zero")
> temperaments.

That's the thing you keep mentioning related to the zeta function, is it?

> You could try 441 also.

Oh, I did, and many more. It gives an octave flat by 0.012 cents. I've shown that the optimum lies between 5679- and 7173-equal.

Graham

🔗Gene Ward Smith <gwsmith@svpal.org>

4/12/2004 11:24:19 PM

--- In tuning-math@yahoogroups.com, Graham Breed <graham@m...> wrote:
> Gene Ward Smith wrote:
>
> > Zeta tuning works along the lines you want here. A Python script
> > which found the Zeta tuning might be a bit of a pain to write,
> > though, and it only works for rank one (equal or "dimension zero")
> > temperaments.
>
> That's the thing you keep mentioning related to the zeta function, is it?

One of several things I keep mentioning related to the Zeta function.

🔗Graham Breed <graham@microtonal.co.uk>

4/13/2004 3:40:55 AM

I wrote:

> Oh, I did, and many more. It gives an octave flat by 0.012 cents. I've
> shown that the optimum lies between 5679- and 7173-equal.

I've got it working with linear temperaments now. Here are the ennealimmal and 5-limit meantone results:

>>> enne = temper.Temperament(171,612,temper.limit9)
>>> enne.optimizePORMSWE()
>>> enne.getPRMSWError()
2.4769849465587193e-05
>>> (enne.mapping[0][0]*enne.basis[0] - 1)*1200
0.021691213712138335
>>> enne.basis[1]*1200
49.021363311937186
>>> meantone = temper.Temperament(19,31,temper.limit5)
>>> meantone.optimizePORMSWE()
>>> meantone.getPRMSWError()
0.001318517728382543
>>> (meantone.basis[0] - 1)*1200
1.3968513622916845
>>> meantone.basis[1]*1200
504.34774072728203
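For what it's worth, the rank-two case looks like an ordinary two-parameter weighted least squares over the primes, solving for the period and generator together. Here's a standalone sketch along those lines; the function and the particular meantone mapping convention (octave period, fourth-like generator) are my own assumptions, not temper.py's internals.

from math import log, sqrt

def optimize_linear(mapping, primes):
    # Find the period and generator (in octaves) minimizing the mean of
    # (tempered/just - 1)**2 over the given primes.  A sketch only, not
    # temper.py's optimizePORMSWE().
    p = [log(q, 2) for q in primes]
    # r_i = (period count, generator count) for prime i, divided by its just size.
    r = [(mapping[0][i] / p[i], mapping[1][i] / p[i]) for i in range(len(p))]
    # Normal equations for minimizing sum((a*r0 + b*r1 - 1)**2).
    s00 = sum(r0 * r0 for r0, r1 in r)
    s01 = sum(r0 * r1 for r0, r1 in r)
    s11 = sum(r1 * r1 for r0, r1 in r)
    c0 = sum(r0 for r0, r1 in r)
    c1 = sum(r1 for r0, r1 in r)
    det = s00 * s11 - s01 * s01
    a = (c0 * s11 - s01 * c1) / det          # period, in octaves
    b = (s00 * c1 - s01 * c0) / det          # generator, in octaves
    err = sqrt(sum((a * r0 + b * r1 - 1) ** 2 for r0, r1 in r) / len(r))
    return a, b, err

# 5-limit meantone with an octave period and a fourth-like generator
# (one way of writing the map; my assumption): 2 = P, 3 = 2P - g, 5 = 4P - 4g
a, b, err = optimize_linear([[1, 2, 4], [0, -1, -4]], [2, 3, 5])
print((a - 1) * 1200, b * 1200, err)   # roughly 1.40, 504.35, 0.00132, close to the meantone figures above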

Graham