Inverse weighting

Mike Battaglia <battaglia01@...>

1/16/2012 12:39:14 PM

I never really thought about inverse weighting before, but it seems
interesting to try out. But what name do we give to this sort of
weighting? "Inverse Tenney?"

Let's call the analogue to TOP ITOP. Would it, in principle, still be
the one minimizing the max Inverse Tenney weighted error?

-Mike

cityoftheasleep <igliashon@...>

1/16/2012 1:12:02 PM

How would you implement this? Replace log(THD) with log(1/THD)?

-Igs

--- In tuning@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> I never really thought about inverse weighting before, but it seems
> interesting to try out. But what name do we give to this sort of
> weighting? "Inverse Tenney?"
>
> Let's call the analogue to TOP ITOP. Would it, in principle, still be
> the one minimizing the max Inverse Tenney weighted error?
>
> -Mike
>
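
A minimal sketch of what that might look like in Python, assuming "Tenney weighting" means dividing each prime's error by log2(p), as in TOP, and "inverse Tenney" means multiplying by it instead; the function names are just for illustration:

import math

def tenney_weighted(err_cents, p):
    # TOP-style weighting: errors of complex primes count for less
    return abs(err_cents) / math.log2(p)

def inverse_tenney_weighted(err_cents, p):
    # hypothetical inverse weighting: errors of complex primes count for more
    return abs(err_cents) * math.log2(p)

# 7-limit prime errors of 12-equal, in cents
errors = {2: 0.0, 3: -1.955, 5: 13.686, 7: 31.174}
for p, e in errors.items():
    print(p, round(tenney_weighted(e, p), 3), round(inverse_tenney_weighted(e, p), 3))

ITOP, as proposed above, would then be the tuning minimizing the max of the inverse-weighted errors.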

Mike Battaglia <battaglia01@...>

1/16/2012 1:21:57 PM

On Mon, Jan 16, 2012 at 4:12 PM, cityoftheasleep
<igliashon@...> wrote:
>
> How would you implement this? Replace log(THD) with log(1/THD)?
>
> -Igs

That seems like one sensible thing to try. It's going to be tricky,
though. Graham dealt with this in primerr.pdf:

http://www.x31eq.com/primerr.pdf

The simplest approach is to treat all prime errors equally. This gives unreasonable results in practice. For example, the intervals 2:1 and 7:1 are counted on an equal footing, so an error of 1 cent in 2:1 is treated as badly as an error of 1 cent in 7:1. But the error in 8:1 has to be three times as big as the error in 2:1 for any regular temperament. So 8:1 is allowed to be three times as out of tune as 7:1 when it’s only a little bit bigger! To give more flexibility, each prime error is given a different weighting when calculating the overall error for a temperament.

In many cases it’s certainly not appropriate to give complex intervals a higher weight. You may decide that more complex intervals are harder to hear, and so must be tuned more precisely to help the ear. But in that case you must consider a finite set of intervals. Unbounded prime-limit errors will end up being determined by the infinite number of infinitely complex – and therefore completely meaningless – ratios.
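
Graham's 8:1 observation follows from linearity: a regular tuning sends 8/1 to exactly three times wherever it sends 2/1, so its error is exactly three times the octave's. A minimal check, with an arbitrary 1195-cent octave standing in for some regular tuning:

tempered_octave = 1195.0                 # arbitrary regular tuning of 2/1, in cents
err_2 = abs(tempered_octave - 1200.0)    # 5 cents
err_8 = abs(3*tempered_octave - 3600.0)  # 15 cents: always exactly 3 * err_2
print(err_2, err_8)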

So scratch my ITOP idea. It'll definitely require a bit more thinking than that.

BTW, you should join tuning-math if you want to just discuss math and
not argue over music cognition anymore.

-Mike

genewardsmith <genewardsmith@...>

1/16/2012 1:33:53 PM

--- In tuning@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> The simplest approach is to treat all prime errors
> equally. This gives unreasonable results in practice.

In practice, the Frobenius tuning seems reasonable to me. What's a good example where it craps out?

Mike Battaglia <battaglia01@...>

1/16/2012 1:39:43 PM

On Mon, Jan 16, 2012 at 4:33 PM, genewardsmith
<genewardsmith@...> wrote:
>
> --- In tuning@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> > The simplest approach is to treat all prime errors
> > equally. This gives unreasonable results in practice.
>
> In practice, the Frobenius tuning seems reasonable to me. What's a good example where it craps out?

I dunno, this was copied verbatim from Graham's primerr.pdf. He'll
have to answer.

Maybe this is a good time to dive back into Frobenius maps. Is there
something like Frobenius error? That might be useful for what Igs is
looking for. (Stay tuned, Igs...)

-Mike

genewardsmith <genewardsmith@...>

1/16/2012 2:22:29 PM

--- In tuning@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> Maybe this is a good time to dive back into Frobenius maps. Is there
> something like Frobenius error? That might be useful for what Igs is
> looking for. (Stay tuned, Igs...)

Certainly. Frobenius error, Frobenius complexity, Frobenius badness are all easily defined and computed.

Mike Battaglia <battaglia01@...>

1/16/2012 4:26:00 PM

On Mon, Jan 16, 2012 at 5:22 PM, genewardsmith
<genewardsmith@...> wrote:
>
> --- In tuning@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> > Maybe this is a good time to dive back into Frobenius maps. Is there
> > something like Frobenius error? That might be useful for what Igs is
> > looking for. (Stay tuned, Igs...)
>
> Certainly. Frobenius error, Frobenius complexity, Frobenius badness are all easily defined and computed.

OK, so all of the Frobenius metrics are basically the same as TE, but
with unweighted coordinates instead of weighted coordinates?

-Mike

genewardsmith <genewardsmith@...>

1/16/2012 6:04:44 PM

--- In tuning@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> OK, so all of the Frobenius metrics are basically the same as TE, but
> with unweighted coordinates instead of weighted coordinates?

Right.
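
For reference, a minimal sketch of that relationship, assuming both tunings come from a least-squares (pseudoinverse) fit of the mapping to just intonation, with TE first rescaling each prime axis by 1/log2(p); 5-limit meantone is just an example:

import numpy as np

primes = np.array([2.0, 3.0, 5.0])
j = 1200.0 * np.log2(primes)           # just tuning map in cents
V = np.array([[1, 1, 0],               # 5-limit meantone: <1 1 0], <0 1 4]
              [0, 1, 4]], dtype=float)

# Frobenius: least-squares fit in plain, unweighted coordinates
g_frob = j @ np.linalg.pinv(V)

# TE: the same fit after weighting; note j @ W is just (1200, 1200, 1200)
W = np.diag(1.0 / np.log2(primes))
g_te = (j @ W) @ np.linalg.pinv(V @ W)

print("Frobenius period/generator:", g_frob)
print("TE period/generator:", g_te)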

Mike Battaglia <battaglia01@...>

1/16/2012 6:11:15 PM

OK, so this isn't inverse weighting then, but rather equal weighting.
I'm going to finish up these 15-limit diamond calculations and we'll
see how consistent it turns out to be with Frobenius error. Then Igs
might be happy to just use that from now on instead of TE error.

Have you developed any metrics that work specifically for inverse weighting?

Actually, is inverse weighting even consistent with "regular mapping"
at all? I don't see how it is. For example, we'd want 8/1 to have less
error than 2/1, which is impossible.

-Mike

On Mon, Jan 16, 2012 at 9:04 PM, genewardsmith
<genewardsmith@...> wrote:
>
> --- In tuning@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> > OK, so all of the Frobenius metrics are basically the same as TE, but
> > with unweighted coordinates instead of weighted coordinates?
>
> Right.
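
Concretely: under a regular mapping err(2^k) = k * err(2/1), so err(8/1) < err(2/1) would need 3e < e, impossible for e > 0. And on the multiplicative reading of inverse weighting sketched earlier, the weighted error of 2^k grows like k^2 * e, unbounded with complexity, which is Graham's warning in miniature:

import math

e = 5.0                                # hypothetical error of 2/1, in cents
for k in (1, 2, 3):
    err = k * e                        # unweighted error of 2^k
    weighted = math.log2(2**k) * err   # inverse-weighted: k^2 * e
    print(2**k, err, weighted)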

cityoftheasleep <igliashon@...>

1/16/2012 9:01:46 PM

--- In tuning@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> Actually, is inverse weighting even consistent with "regular mapping"
> at all? I don't see how it is. For example, we'd want 8/1 to have less
> error than 2/1, which is impossible.

Yeah, that seems problematic. Graham's paper mentioned the same issue.

I still want to know if there'd be any difference in weighting method if we wanted to reflect the Farey series version of HE (with the overall downward slope, and 3/1 more concordant than 2/1). I'm not sure, but it seems like the weighting would have to take interval width (in cents) into account, perhaps on top of Tenney Harmonic Distance. I have no idea how it would be implemented.

I also want to know if different weighting methods are more appropriate to different HE formulations--which curve does standard Tenney weighting best reflect? High s, Gaussian? Low s, Vos? Or does it matter?

-Igs