
Graphical comparison of TOP error vs. unweighted superparticular prime error

Herman Miller <hmiller@IO.COM>

6/2/2008 5:47:41 PM

I did some graphical comparisons of a few 7-limit temperaments showing the TOP-weighted error and an unweighted error based on the superparticular primes (2/1 3/2 5/4 7/6). Hmm, from now on I think I'll say "USP error" for "unweighted superparticular prime".

/tuning-math/files/temp-err-eval/

The red is the TOP error, and the green is the USP error. For the blue channel, I calculated the error for the following set of intervals:

2/1 3/2 5/3 5/4 7/4 7/5 7/6 9/5 9/7 9/8 15/8
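(For concreteness, here is a minimal Python/NumPy sketch of how these three error measures could be computed from a tuning map given in cents per prime. Taking the maximum for the USP and interval-set errors, the helper names, and the example miracle tuning are assumptions for illustration, not necessarily Herman's exact setup.)

```python
import numpy as np

PRIMES = (2, 3, 5, 7)
LOG2 = np.log2(PRIMES)  # just sizes of the 7-limit primes, in octaves

def monzo(n):
    """Exponents of 2, 3, 5 and 7 in a positive 7-limit integer n."""
    exps = []
    for p in PRIMES:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        exps.append(e)
    return np.array(exps)

def interval_error(tuning_map, num, den):
    """Signed mistuning of num/den in cents, for a tuning map in cents per prime."""
    return (monzo(num) - monzo(den)) @ tuning_map - 1200 * np.log2(num / den)

def top_error(tuning_map):
    """Worst Tenney-weighted prime error (TOP-max): cents of error per octave of prime size."""
    return np.max(np.abs((tuning_map - 1200 * LOG2) / LOG2))

USP = [(2, 1), (3, 2), (5, 4), (7, 6)]
BLUE_SET = [(2, 1), (3, 2), (5, 3), (5, 4), (7, 4), (7, 5), (7, 6),
            (9, 5), (9, 7), (9, 8), (15, 8)]

def usp_error(tuning_map):
    """Unweighted error over the superparticular primes (here taken as the maximum)."""
    return max(abs(interval_error(tuning_map, n, d)) for n, d in USP)

def set_error(tuning_map, intervals=BLUE_SET):
    """Unweighted maximum error over an arbitrary interval list (the blue channel)."""
    return max(abs(interval_error(tuning_map, n, d)) for n, d in intervals)

# Example: miracle, mapping [<1 1 3 3], <0 6 -7 -2]>, at an illustrative
# (not exactly optimal) period and generator in cents.
P, g = 1200.6, 116.7
miracle = np.array([P, P + 6 * g, 3 * P - 7 * g, 3 * P - 2 * g])
print(top_error(miracle), usp_error(miracle), set_error(miracle))
```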

A really good temperament like miracle will have a bright white spot in the middle surrounded by a multicolored "halo". Orwell and magic are a little bit less even, but the results are similar. You can already start to see how the red area is aligned differently from the white. This becomes more apparent with sensi, meantone, and porcupine. Finally, keemun and pajara begin to show the limitations of both these approximations. I did some tests using different weights for the superparticular primes, and using a different set of primes (2/1 3/2 5/3 7/5), but none of them came out as well as the USP.
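(One way such an image could be rendered, reusing the error functions sketched above: scan a grid of period and generator tunings, evaluate the three measures at each point, and map lower error to a brighter colour channel. The mapping convention, scan ranges, and brightness scaling below are guesses, not Herman's actual parameters.)

```python
def error_image(prime_map, periods, generators, scale=10.0):
    """RGB image over a grid of tunings: R = TOP, G = USP, B = interval set.

    prime_map is the 2x4 integer mapping [<period row], <generator row]>;
    brightness 1 - error/scale is an arbitrary contrast choice.
    """
    img = np.zeros((len(periods), len(generators), 3))
    for i, P in enumerate(periods):
        for j, g in enumerate(generators):
            tmap = P * prime_map[0] + g * prime_map[1]
            errs = (top_error(tmap), usp_error(tmap), set_error(tmap))
            img[i, j] = [max(0.0, 1.0 - e / scale) for e in errs]
    return img

# e.g. miracle, scanning around an octave period and the ~116.7-cent secor:
miracle_map = np.array([[1, 1, 3, 3], [0, 6, -7, -2]])
img = error_image(miracle_map, np.linspace(1190, 1210, 200), np.linspace(110, 124, 200))
```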

Conclusion: although more testing is needed, it appears that both these error measures (TOP and USP) have some usefulness. The USP error may be a better evaluation for intervals smaller than an octave in some cases.

Graham Breed <gbreed@gmail.com>

6/2/2008 8:22:05 PM

Herman Miller wrote:
> I did some graphical comparisons of a few 7-limit temperaments showing the TOP-weighted error and an unweighted error based on the superparticular primes (2/1 3/2 5/4 7/6). Hmm, from now on I think I'll say "USP error" for "unweighted superparticular prime".
>
> /tuning-math/files/temp-err-eval/
>
> The red is the TOP error, and the green is the USP error. For the blue channel, I calculated the error for the following set of intervals:
>
> 2/1 3/2 5/3 5/4 7/4 7/5 7/6 9/5 9/7 9/8 15/8

They're continuous functions, presumably of the generators. I've already argued that Tenney-weighted prime errors aren't the best when the scale stretch isn't optimized. My solution, as a tweak to TOP-max, is to take the range of the weighted errors instead of the worst (which is also the max Kees-weighted error). That's also wrong when you use an unreasonable scale stretch. So for a general measure you could try a linear combination of TOP-max and the error range.
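(A hedged sketch of the quantities described here, reusing LOG2 and the weighting convention from the earlier sketch; the function names and the default combination weights are placeholders rather than Graham's definitions.)

```python
def weighted_errors(tuning_map):
    """Signed Tenney-weighted prime errors (cents per octave of prime size)."""
    return (tuning_map - 1200 * LOG2) / LOG2

def top_max(tuning_map):
    """Worst weighted error: the usual TOP-max."""
    return np.max(np.abs(weighted_errors(tuning_map)))

def error_range(tuning_map):
    """Spread (max minus min) of the weighted errors.  Stretching the whole
    tuning shifts all the weighted errors by roughly the same amount, so the
    range is largely unaffected by the scale stretch."""
    w = weighted_errors(tuning_map)
    return np.max(w) - np.min(w)

def combined_error(tuning_map, a=1.0, b=1.0):
    """Linear combination of TOP-max and the error range; a and b are arbitrary here."""
    return a * top_max(tuning_map) + b * error_range(tuning_map)
```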

> A really good temperament like miracle will have a bright white spot in the middle surrounded by a multicolored "halo". Orwell and magic are a little bit less even, but the results are similar. You can already start to see how the red area is aligned differently from the white. This becomes more apparent with sensi, meantone, and porcupine. Finally, keemun and pajara begin to show the limitations of both these approximations. I did some tests using different weights for the superparticular primes, and using a different set of primes (2/1 3/2 5/3 7/5), but none of them came out as well as the USP.

So the white spot is where the two measures give the same *optimum*? I can also see lines (usually white) in most (not pajara) graphs. Those must be the agreement on arbitrary generators provided you optimize the scale stretch.

TOP-max isn't very interesting for pajara because 7:5 dominates. Try TOP-RMS (and related) instead.
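(A minimal sketch of the RMS version, again building on weighted_errors above. Strictly, TOP-RMS involves optimizing this quantity over tunings; for a plot like these it would simply be evaluated at each grid point.)

```python
def top_rms(tuning_map):
    """RMS of the Tenney-weighted prime errors, evaluated at this tuning map."""
    w = weighted_errors(tuning_map)
    return float(np.sqrt(np.mean(w ** 2)))
```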

> Conclusion: although more testing is needed, it appears that both these error measures (TOP and USP) have some usefulness. The USP error may be a better evaluation for intervals smaller than an octave in some cases.

That's what ranges and standard deviations are for. But they still aren't fully explored.
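(And a sketch of the standard-deviation counterpart, an RMS analogue of the range above; the name is mine, not an established definition.)

```python
def error_std(tuning_map):
    """Standard deviation of the weighted prime errors: like the range,
    largely insensitive to an overall scale stretch."""
    return float(np.std(weighted_errors(tuning_map)))
```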

Graham

Herman Miller <hmiller@IO.COM>

6/3/2008 5:20:55 PM

Graham Breed wrote:

> So the white spot is where the two measures give the same *optimum*? I can also see lines (usually white) in most (not pajara) graphs. Those must be the agreement on arbitrary generators provided you optimize the scale stretch.

The white areas are places where all three error measures are in close agreement. You could be right about the scale stretch.

> TOP-max isn't very interesting for pajara because 7:5 dominates. Try TOP-RMS (and related) instead.

/tuning-math/files/temp-err-eval/
(click on error-pajara-rms.jpg)

This is interesting. There seems to be a pretty big cyan area where the USP is in better agreement with the selected intervals than the TOP, but also a big white spot where all three are in close agreement.