
Using adjusted error for subgroup badness calculations

Mike Battaglia <battaglia01@...>

1/5/2012 1:03:08 PM

This is a bad idea, right? Adjusted error is just TE error
multiplied by some constant that grows with the largest prime in the
limit, which is why this has ~14 cents of adjusted error

http://x31eq.com/cgi-bin/rt.cgi?ets=19&limit=2.3.5.128

and this has only 4.4 cents of adjusted error

http://x31eq.com/cgi-bin/rt.cgi?ets=19&limit=2.3.5

even though the TE error differs by only about a tenth of a cent.
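The multiplier alone accounts for the gap between those two figures. A quick numeric check (a sketch: the 14-cent and 4.4-cent values are read off the linked pages, and the multiplier is assumed to be log2 of the largest basis element, per Graham's definition quoted later in the thread):

```python
import math

# Assumption (per Graham's definition quoted later in the thread):
# adjusted error = TE error * log2(largest basis element).
def te_error_from_adjusted(adjusted, largest_basis):
    """Recover the underlying TE error from an adjusted error figure."""
    return adjusted / math.log2(largest_basis)

# Adjusted-error figures read off the two x31eq.com pages above:
te_wide = te_error_from_adjusted(14.0, 128)  # 19-ET on 2.3.5.128
te_narrow = te_error_from_adjusted(4.4, 5)   # 19-ET on 2.3.5

print(round(te_wide, 3))               # 2.0
print(round(te_narrow, 3))             # 1.895
print(round(te_wide - te_narrow, 2))   # 0.11 -- "about a tenth of a cent"
```

So the roughly threefold difference in adjusted error (14 vs. 4.4 cents) is almost entirely the log2(128)/log2(5) ≈ 3 multiplier, not the temperament itself.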

In short, the point of adjusted error is to be able to compare
temperaments of different limits, but the way it's currently defined
isn't useful for comparing things on different subgroups. You need a
better way to assign each subgroup a scalar that's consistent across
subgroups. I would suggest something like rms(weighted L2 complexity)
of all the intervals in the subgroup, but that's going to weight 5/3
and 15/1 the same, which you may not necessarily want. The magic
bullet might be rms(weighted triangular L2 complexity of each interval
in the subgroup).
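For what it's worth, here is a minimal sketch of the first idea, reading "weighted L2 complexity" as the Tenney log-product complexity log2(n*d) (an assumption on my part); it also demonstrates the 5/3 vs. 15/1 collision Mike points out:

```python
import math

def tenney_complexity(n, d):
    """Log-product (Tenney) complexity of the ratio n/d, in bits."""
    return math.log2(n * d)

def rms_complexity(intervals):
    """RMS of the complexities of a list of (n, d) ratios."""
    return math.sqrt(sum(tenney_complexity(n, d) ** 2
                         for n, d in intervals) / len(intervals))

# The problem Mike mentions: 5/3 and 15/1 get the same complexity,
# since log2(5 * 3) == log2(15 * 1).
print(tenney_complexity(5, 3) == tenney_complexity(15, 1))  # True
```

A triangular variant, as suggested above, would break that tie by measuring the two intervals differently.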

I'd recommend that Igs redo the results, but using the actual TE
error instead of the adjusted error, and then multiplying it by some
constant that's consistent with the weighting of the subgroup.

-Mike

Mike Battaglia <battaglia01@...>

1/5/2012 1:18:53 PM

I think Graham just confirmed this in another thread.

Graham wrote:
> The adjusted error is the TE error multiplied by the log of the largest prime. It matches the target error.

-Mike


cityoftheasleep <igliashon@...>

1/5/2012 6:54:42 PM

--- In tuning@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> I'd recommend that Igs redo the results again, but instead of looking
> at adjusted error, to look at the actual TE error, and then multiply
> it by some constant that's consistent with the weighting of the
> subgroup.

FFFFFFFFFFFFFFFFFFFFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU---!!!!!!!

If I give you the subgroups I'm looking at, can you calculate the constants by which I'll be multiplying TE error for each ET on that subgroup?

-Igs

Mike Battaglia <battaglia01@...>

1/5/2012 7:23:33 PM

On Thu, Jan 5, 2012 at 9:54 PM, cityoftheasleep <igliashon@...> wrote:
>
> --- In tuning@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> > I'd recommend that Igs redo the results again, but instead of looking
> > at adjusted error, to look at the actual TE error, and then multiply
> > it by some constant that's consistent with the weighting of the
> > subgroup.
>
> FFFFFFFFFFFFFFFFFFFFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU---!!!!!!!
>
> If I give you the subgroups I'm looking at, can you calculate the constants by which I'll be multiplying TE error for each ET on that subgroup?

I'm actually curious why we need to re-weight the error at all. Isn't
it already weighted? Why not just use TE error by itself?

One thing that's notable about TE error is that it goes -down- as more
accurate primes are added. For example, consider 31-EDO in the 5-limit:

http://x31eq.com/cgi-bin/rt.cgi?ets=31&limit=5

1.628 cents of error. Now look at it in the 7-limit:

http://x31eq.com/cgi-bin/rt.cgi?ets=31&limit=7

1.432 cents, which is less.

This is because TE error is a type of average prime error. If you add
another, really accurate prime into the mix, the "average" error goes
down. Multiplication by the log of the largest prime undoes some of
this, but so
would just looking at something like weighted sum-squared error
instead of weighted RMS error.
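A toy illustration of that point (the numbers here are made up for illustration, not actual 31-EDO prime errors): adding one small error makes the RMS fall even though the total squared error rises.

```python
import math

def rms(xs):
    """Root-mean-square of a list of errors."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def sum_squared(xs):
    """Sum-of-squares of a list of errors."""
    return sum(x * x for x in xs)

# Hypothetical weighted prime errors, in cents (illustrative only).
five_limit = [1.5, 1.7, 1.6]       # errors on primes 2, 3, 5
seven_limit = five_limit + [0.3]   # add a very accurate prime 7

print(round(rms(five_limit), 3))        # 1.602
print(round(rms(seven_limit), 3))       # 1.395 -- the average went DOWN
print(round(sum_squared(five_limit), 2))   # 7.7
print(round(sum_squared(seven_limit), 2))  # 7.79 -- but the total went UP
```

This mirrors the 31-EDO figures above: the RMS drops when an accurate prime joins the average, while a sum-squared measure would only ever grow.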

So what exactly is the behavior you're looking for? Should 7-limit
31-EDO be lower in error than 5-limit 31-EDO?

-Mike

Carl Lumma <carl@...>

1/5/2012 8:20:12 PM

Graham wrote:
> The adjusted error is the TE error multiplied by the log of
> the largest prime. It matches the target error.

Mike wrote:
> This is a bad idea, right?

Yes, sure sounds like it'll screw up Igs' particular search
big-time.

-Carl

cityoftheasleep <igliashon@...>

1/5/2012 9:38:28 PM

--- In tuning@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
> I'm actually curious why we need to re-weight the error at all. Isn't
> it already weighted? Why not just use TE error by itself?

I think I need a re-education on this whole "error" thing. Is it correct that error is weighted because, for a fixed amount of error, there is a larger relative increase in discordance for that error on a simple ratio vs. a more complex one? So error weighting was introduced to help maximize concordance in temperament optimization? What if someone just cares about accuracy, and not concordance? Wouldn't unweighted error make more sense then?

> One thing that's notable about TE error is that it goes -down- as more
> accurate primes are added. For example, consider 31-EDO in the 5-limit
>
> http://x31eq.com/cgi-bin/rt.cgi?ets=31&limit=5
>
> 1.628 cents of error. Now look at it in the 7-limit:
>
> http://x31eq.com/cgi-bin/rt.cgi?ets=31&limit=7
>
> 1.432 cents, which is less.

Okay, where do these error figures come from, or rather, what do they mean? What does it mean that 31-TET has 1.628 cents of error on the 5-limit?

> So what exactly is the behavior you're looking for? Should 7-limit
> 31-EDO be lower in error than 5-limit 31-EDO?

I hate that this is subjective. What should I be looking for? What, exactly, is the problem with not weighting the error? (And how would I even calculate unweighted error? Average the error for all the dyads on the n-odd-limit (or subgroup) tonality diamond?)

-Igs
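One possible reading of that last suggestion, sketched in Python: take an ET's best approximation to each dyad of the n-odd-limit tonality diamond and average the absolute errors, with no weighting at all. The choice of mean absolute error (rather than RMS or max) is mine, purely for illustration.

```python
from fractions import Fraction
import math

def odd_limit_diamond(limit):
    """Distinct octave-reduced dyads a/b for odd a, b <= limit (1/1 excluded)."""
    odds = range(1, limit + 1, 2)
    ratios = set()
    for a in odds:
        for b in odds:
            r = Fraction(a, b)
            while r >= 2:   # octave-reduce into [1, 2)
                r /= 2
            while r < 1:
                r *= 2
            if r != 1:
                ratios.add(r)
    return sorted(ratios)

def unweighted_error(edo, limit):
    """Mean absolute error, in cents, of an EDO's best approximations
    to the odd-limit diamond -- one possible 'unweighted' measure."""
    errs = []
    for r in odd_limit_diamond(limit):
        cents = 1200 * math.log2(r)
        step = round(cents * edo / 1200)          # nearest EDO step
        errs.append(abs(step * 1200 / edo - cents))
    return sum(errs) / len(errs)

print(round(unweighted_error(31, 9), 3))
```

A subgroup version would simply build the dyad list from the subgroup's intervals instead of the odd-limit diamond.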

Carl Lumma <carl@...>

1/7/2012 6:58:42 PM

Pfff... log(largest prime) is exactly the subgroup penalty
I came up with! The thing is, it should be used only in the
competition for best subgroup for a given ET, not in the
badness calc. The score for each subgroup in the competitions
is going to be

error * log(largest prime)/rank(subgroup)

The badness is going to be

error/rank(subgroup) * (notes/oct)^exponent

-Carl
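As a sketch, Carl's two formulas in Python. Assumptions on my part: "log" is log base 2 (matching the octave-based conventions elsewhere in the thread), rank(subgroup) is the number of basis elements of the subgroup, and the example inputs are hypothetical.

```python
import math

def subgroup_score(error, largest_basis, rank):
    """Score used only in the competition for best subgroup per ET:
    error * log2(largest basis element) / rank."""
    return error * math.log2(largest_basis) / rank

def badness(error, rank, notes_per_octave, exponent):
    """Badness: error / rank * (notes/oct)^exponent."""
    return error / rank * notes_per_octave ** exponent

# e.g. a hypothetical ET with 2.0 cents TE error on a rank-4 subgroup
# whose largest basis element is 128:
print(subgroup_score(2.0, 128, 4))   # 2.0 * 7 / 4 = 3.5
print(badness(2.0, 4, 19, 1.5))
```

Note the division by rank rewards subgroups with more basis elements, while the log(largest prime) factor penalizes reaching for high primes, which only affects the subgroup competition, not the badness ranking.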
