
Gene's 17 limit R2 temperaments

🔗Graham Breed <gbreed@gmail.com>

1/17/2006 9:34:50 AM

Sorry I'm a few months late replying to this. See:

/tuning-math/message/11104

So this is supposed to be a general search? Your standard vals (nearest-prime equal temperaments) are letting you down. The double 22 that comes at the top isn't at all special, but you're missing other temperaments of similar complexity. As you said that higher limit searches are harder, you've probably noticed this by now. To fix it you need to check alternative mappings. Also you can simplify the search by imposing a badness threshold on the ETs.

This is the first thing I got from a similar search:

13/41

1202.6 cents period
381.4 cents generator

mapping by period and generator:
[(1, 0), (0, 5), (2, 1), (-1, 12), (-1, 14), (4, -1), (-1, 16)]

mapping by steps:
[(19, 22), (30, 35), (44, 51), (53, 62), (65, 76), (70, 81), (77, 90)]

complexity measure: 4.545
RMS weighted error: 2.643 cents/octave
max weighted error: 3.966 cents/octave

That mapping of 19 differs from the nearest-primes in two different entries, so I'm guessing you're missing it.

Graham

🔗Gene Ward Smith <gwsmith@svpal.org>

1/17/2006 11:05:35 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> Sorry I'm a few months late replying to this. See:
>
> /tuning-math/message/11104
>
> So this is supposed to be a general search?

Obviously not, which is why I said it was limited by only using
standard vals. It was a preliminary look.

Your standard vals
> (nearest-prime equal temperaments) are letting you down. The double 22
> that comes at the top isn't at all special, but you're missing other
> temperaments of similar complexity. As you said that higher limit
> searches are harder, you've probably noticed this by now.

I was well aware I was likely to be missing interesting stuff, which
is why I made no claim for completeness. A complete survey would have
been a bigger job, and I didn't see anyone clamoring for the results.

To fix it you
> need to check alternative mappings.

Or use commas, or do both. One interesting thing about higher prime
limits is that in them, the interesting temperaments more and more
become those which have a comma basis consisting of superparticular
ratios only. Hence, a search at least through sets of superparticulars
where the ratio between the largest and the smallest comma is bounded
(to keep the search smaller) makes sense.
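For what it's worth, here is a minimal sketch of that filter in Python (my own illustration, not code from either survey; the helper names is_17_limit, superparticulars and within_bound are invented for the example). It enumerates the 17-limit superparticular commas in a range and tests whether a candidate set keeps the log-size ratio of its largest to smallest comma under a bound; the 1.807 figure quoted below for the {55/54, ..., 99/98} basis serves as a sanity check.

from fractions import Fraction
from math import log

PRIMES_17 = (2, 3, 5, 7, 11, 13, 17)

def is_17_limit(n):
    """True if n factors completely over the primes up to 17."""
    for p in PRIMES_17:
        while n % p == 0:
            n //= p
    return n == 1

def superparticulars(lo, hi):
    """17-limit superparticular commas n/(n-1) with lo <= n <= hi."""
    return [Fraction(n, n - 1) for n in range(lo, hi + 1)
            if is_17_limit(n) and is_17_limit(n - 1)]

def within_bound(commas, bound):
    """Is log(largest comma)/log(smallest comma) at most `bound`?"""
    sizes = sorted(log(float(c)) for c in commas)
    return sizes[-1] / sizes[0] <= bound

# The comma basis quoted below for the 13/41 temperament.
basis = [Fraction(n, n - 1) for n in (55, 65, 85, 91, 99)]
print(within_bound(basis, 2.0))   # log(55/54)/log(99/98) = 1.807, so True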

> mapping by period and generator:
> [(1, 0), (0, 5), (2, 1), (-1, 12), (-1, 14), (4, -1), (-1, 16)]
>
> mapping by steps:
> [(19, 22), (30, 35), (44, 51), (53, 62), (65, 76), (70, 81), (77, 90)]

Here's a case in point. The TM basis is
{55/54,65/64,85/84,91/90,99/98}, where all of the commas are
superparticular and where log(55/54)/log(99/98) = 1.807. This would
certainly be picked up from the commas.

🔗Graham Breed <gbreed@gmail.com>

1/18/2006 6:36:22 AM

Gene Ward Smith wrote:

> Obviously not, which is why I said it was limited by only using
> standard vals. It was a preliminary look.

A preliminary look at a problem that was solved years earlier?

> I was well aware I was likely to be missing interesting stuff, which
> is why I made no claim for completeness. A complete survey would have
> been a bigger job, and I didn't see anyone clamoring for the results.

A complete survey would have been trivial, and anybody who wanted the results could have produced them.

>>To fix it you need to check alternative mappings.
>
> Or use commas, or do both. One interesting thing about higher prime
> limits is that in them, the interesting temperaments more and more
> become those which have a comma basis consisting of superparticular
> ratios only. Hence, a search at least through sets of superparticulars
> where the ratio between the largest and the smallest comma is bounded
> (to keep the search smaller) makes sense.

Why the bound on ratio sizes? You need something to simplify it anyway. Another interesting thing about higher prime limits is that they have lots of superparticular ratios and so a search through them doesn't make sense from a point of view of computational efficiency -- unless you have more ways of simplifying it.

>>mapping by period and generator:
>>[(1, 0), (0, 5), (2, 1), (-1, 12), (-1, 14), (4, -1), (-1, 16)]
>>
>>mapping by steps:
>>[(19, 22), (30, 35), (44, 51), (53, 62), (65, 76), (70, 81), (77, 90)]
>
> Here's a case in point. The TM basis is
> {55/54,65/64,85/84,91/90,99/98}, where all of the commas are
> superparticular and where log(55/54)/log(99/98) = 1.807. This would
> certainly be picked up from the commas.

Of course, any temperament can be picked up from commas, like any temperament can be picked up from ETs, provided you feed the right commas/ETs in. If you happen to know you want this temperament, then yes, you only need to search superparticulars from 55:54 to 99:98. There are 10 of them, which by my calculations means you need to consider 120 combinations. But then it's easy to search for things you know how to find. That search would miss this almost as good temperament:

52:51, 56:55, 78:77, 91:90, 100:99
10/27

1196.1 cents period
442.8 cents generator

mapping by period and generator:
[(1, 0), (-1, 7), (-1, 9), (-2, 13), (2, 4), (0, 10), (3, 3)]

mapping by steps:
[(19, 8), (30, 13), (44, 19), (53, 23), (66, 28), (70, 30), (78, 33)]

complexity measure: 4.631
RMS weighted error: 2.730 cents/octave
max weighted error: 3.944 cents/octave

Oops!

By my reckoning there are 6 rank 2 temperaments with an RMS weighted prime error under 3 cents/octave and a Kees complexity of under 6. If I happen to know the result I want to get, I could find them all using superparticulars from 50:49 to 110:109. That's 6,435 combinations to check, or maybe fewer if I trust your extra constraint. It also happens that the first 20 equal temperaments with a badness cutoff of 0.08 will do the same job, and I only need to look at 380 combinations.

My advice, if you wanted to reproduce these surveys, is still to base them on equal temperaments (maybe finding generators instead of pairing off ETs). It would be nice if you could come up with some theory for deciding what badness cutoff and number of equal temperaments are likely to work. That's currently a weak spot in the method.

The "ET badness" I use is number of notes times optimal RMS Tenney weighted prime error. For precise definitions, check the source code:

http://x31eq.com/temper/regular.py
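For readers who don't want to dig through the source, here is a rough sketch of one plausible reading of that definition (the authoritative version is in regular.py above; et_badness is a hypothetical helper name and the normalization is an assumption): Tenney-weight the val, pick the scalar stretch that minimizes the RMS deviation from pure primes, and multiply the resulting error by the number of notes.

from math import log2, sqrt

PRIMES_17 = (2, 3, 5, 7, 11, 13, 17)

def et_badness(val, primes=PRIMES_17):
    """Notes per octave times optimal RMS Tenney-weighted error (in octaves).

    This only follows the verbal description above; the exact definition
    Graham uses is the one in regular.py."""
    weighted = [v / log2(p) for v, p in zip(val, primes)]
    # Optimal stretch s minimizing mean((s*w - 1)**2) over the weighted val.
    s = sum(weighted) / sum(w * w for w in weighted)
    rms = sqrt(sum((s * w - 1) ** 2 for w in weighted) / len(weighted))
    return val[0] * rms

print(et_badness((19, 30, 44, 53, 65, 70, 77)))
print(et_badness((22, 35, 51, 62, 76, 81, 90)))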

Graham

🔗Gene Ward Smith <gwsmith@svpal.org>

1/18/2006 10:29:34 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> Gene Ward Smith wrote:
>
> > Obviously not, which is why I said it was limited by only using
> > standard vals. It was a preliminary look.
>
> A preliminary look at a problem that was solved years earlier?

Years earlier someone did a complete survey of 17-limit linear
temperaments up to a defined cutoff?

> > I was well aware I was likely to be missing interesting stuff, which
> > is why I made no claim for completeness. A complete survey would have
> > been a bigger job, and I didn't see anyone clamoring for the results.
>
> A complete survey would have been trivial, and anybody who wanted the
> results could have produced them.

Why is it trivial? If it's so trivial, why don't you do it, and
explain how you know it is complete?

> Another interesting thing about higher prime limits is that they have
> lots of superparticular ratios and so a search through them doesn't make
> sense from a point of view of computational efficiency -- unless you
> have more ways of simplifying it.

But fortunately or unfortunately, the vast number of superparticulars
means that there are large numbers of possibilities out there which
make sense. By not considering them, or doing something equivalent,
you almost guarantee missing something.

> Of course, any temperament can be picked up from commas, like any
> temperament can be picked up from ETs, provided you feed the right
> commas/ETs in. If you happen to know you want this temperament, then
> yes, you only need to search superparticulars from 55:54 to 99:98.
> There are 10 of them, which by my calculations means you need to
> consider 120 combinations.

However, there aren't going to be 120 wedgies resulting from these 120
combinations, so the search will not be as long as that makes it sound.

> But then it's easy to search for things you
> know how to find. That search would miss this almost as good
> temperament:
>
> 52:51, 56:55, 78:77, 91:90, 100:99
> 10/27
>
> 1196.1 cents period
> 442.8 cents generator
>
> mapping by period and generator:
> [(1, 0), (-1, 7), (-1, 9), (-2, 13), (2, 4), (0, 10), (3, 3)]
>
> mapping by steps:
> [(19, 8), (30, 13), (44, 19), (53, 23), (66, 28), (70, 30), (78, 33)]
>
> complexity measure: 4.631
> RMS weighted error: 2.730 cents/octave
> max weighted error: 3.944 cents/octave
>
> Oops!

I don't see an oops here. By sticking in 52/51, you guarantee in
advance that the temperament cannot be beyond a specifiable degree
of accuracy. So, go to commas as low as the degree of accuracy you are
looking at, and stop.

> By my reckoning there are 6 rank 2 temperaments with an RMS weighted
> prime error under 3 cents/octave and a Kees complexity of under 6. If I
> happen to know the result I want to get, I could find them all using
> superparticulars from 50:49 to 110:109. That's 6,435 combinations to
> check, or maybe fewer if I trust your extra constraint.

Once again, taking the wedgie first before doing this will cut the
amount of computation down a great deal. Moreover, you end up with a
huge number of temperaments which, if your six interest you, might also
be of interest. After all, you've picked a cutoff in a certain way,
but there's nothing canonical about that way which makes these other
temperaments so much worse. Choosing a different method will give
different results; hence, a more robust list would be larger.

> It also happens
> that the first 20 equal temperaments with a badness cutoff of 0.08 will
> do the same job, and I only need to look at 380 combinations.

You also need to calculate badness, for equal temperaments which are
not necessarily standard, before you start. And you don't have any
guarantee of completeness either way, but my suspicion would be you
are more likely to miss something starting from the vals.

> My advice, if you wanted to reproduce these surveys, is still to base
> them on equal temperaments (maybe finding generators instead of pairing
> off ETs).

My advice is to do everything which seems to make sense, all at once.
What one method misses another might pick up.

> It would be nice if you could come up with some theory for
> deciding what badness cutoff and number of equal temperaments are likely
> to work. That's currently a weak spot in the method.

That is definitely a thought. Any proofs would require knowing
specifically how you are defining badness for both the ets and the
temperaments, which you are sort of defining below, I guess, though I
don't know what you are saying.

🔗Carl Lumma <ekin@lumma.org>

1/18/2006 1:16:43 PM

>By my reckoning there are 6 rank 2 temperaments with an RMS weighted
>prime error under 3 cents/octave and a Kees complexity of under 6.

What are they!?

>If I happen to know the result I want to get, I could find them all
>using superparticulars from 50:49 to 110:109.

??

-Carl

🔗Graham Breed <gbreed@gmail.com>

1/19/2006 6:56:48 AM

Gene Ward Smith wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
> >>A preliminary look at a problem that was solved years earlier?
> > Years earlier someone did a complete survey of 17-limit linear
> temperaments up to a defined cutoff?

Years earlier I solved the problem of how to do the searches. Although "solved" is a bit restricting -- it suggests I can't have any more fun in the future. I still have to solve the problem of giving it a friendly user interface, anyway.

>>>I was well aware I was likely to be missing interesting stuff, which
>>>is why I made no claim for completeness. A complete survey would have
>>>been a bigger job, and I didn't see anyone clamoring for the results.
>>
>>A complete survey would have been trivial, and anybody who wanted the
>>results could have produced them.
>
> Why is it trivial? If it's so trivial, why don't you do it, and
> explain how you know it is complete?

It's trivial because all you have to do is get my code and run the searches. You can even do simple searches without downloading anything, providing my website's online (which it probably wasn't when you posted the original message).

Doing a complete search and knowing it's complete are different things. I'm still working on the latter.

I don't do the search myself because I'm not interested in the results. I'm satisfied that anybody who wants them can get them.

> But fortunatly or unfortunately, the vast numbers of superparticulars
> means that there are large numbers of possibilities out there which
> make sense. By not considering them, or doing something equivalent,
> you almost guarantee missing something.

Yes, if you start out looking at superparticulars. But if you start out looking at equal temperaments you can easily find something that makes sense. If you missed something marginally better that's no big problem.

>>Of course, any temperament can be picked up from commas, like any
>>temperament can be picked up from ETs, provided you feed the right
>>commas/ETs in. If you happen to know you want this temperament, then
>>yes, you only need to search superparticulars from 55:54 to 99:98.
>>There are 10 of them, which by my calculations means you need to
>>consider 120 combinations.
>
> However, there aren't going to be 120 wedgies resulting from these 120
> combinations, so the search will not be as long as that makes it sound.

You need to find 120 wedgie-like-things before you can compare them to see how many distinct wedgies you have. So the order of complexity of the calculation is as I said.

> I don't see an oops here. By sticking in 52/51, you guarantee in
> advance that the temperament cannot be beyond a specififiable degree
> of accuracy. So, go to commas as low as the degree of accuracy you are
> looking at, and stop.

Yes, and how high do you go? Every example I've seen shows the complexity gets bigger than pairing off ETs for high limits.

>>By my reckoning there are 6 rank 2 temperaments with an RMS weighted
>>prime error under 3 cents/octave and a Kees complexity of under 6. If I
>>happen to know the result I want to get, I could find them all using
>>superparticulars from 50:49 to 110:109. That's 6,435 combinations to
>>check, or maybe fewer if I trust your extra constraint.
>
> Once again, taking the wedgie first before doing this will cut the
> amount of computation down a great deal. Moreover, you end up with a
> huge number of temperaments which if your six interest you might also
> be of interest. After all, you've picked a cutoff in a certain way,
> but there's nothing canonical about that way which makes these other
> temperaments so much worse. Choosing a different method will give
> different results; hence, a more robust list would be larger.

Taking the wedgie first won't affect the order of complexity of the calculation. With my implementation, it's a fairly significant part of the overall operation.

What use are a huge number of temperaments I didn't ask for?

>>It also happens
>>that the first 20 equal temperaments with a badness cutoff of 0.08 will
>>do the same job, and I only need to look at 380 combinations.
>
> You also need to calculate badness, for equal temperaments which are
> not necessarily standard, before you start. And you don't have any
> guarantee of completeness either way, but my suspicion would be you
> are more likely to miss something starting from the vals.

Yes, you need to produce a list of equal temperaments to do a search by equal temperaments. You also need to produce a list of commas to do a search by commas. With my latest implementation, the equal temperament search takes the minority of the time for the overall search for those cases where the number of equal temperaments gets large enough to worry about. That's what you'd expect from the overall search being quadratic in the number of equal temperaments.

My old, odd-limit code uses a less efficient algorithm to find the ETs. That means it does get very slow for high limits, but it's acceptable for 17.

>>It would be nice if you could come up with some theory for
>>deciding what badness cutoff and number of equal temperaments are likely
>>to work. That's currently a weak spot in the method.
>
> That is definitely a thought. Any proofs would require knowing
> specifically how you are defining badness for both the ets and the
> temperaments, which you are sort of defining below, I guess, though I
> don't know what you are saying.

It shouldn't matter too much what the badness is. I'd also like ways of relating different measures of complexity to each other.

The complexity of an equal temperament is the number of notes to the octave. That's easy enough.

The complexity of a rank 2 temperament can be the Kees complexity. Tenney weight the generator mapping, and take the difference between the maximum and minimum values (including zero for the octave).
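As a concrete check of that recipe (my own sketch, not Graham's library code; kees_complexity is an invented helper name), the generator mapping of the 13/41 temperament quoted earlier gives back its listed complexity of about 4.545:

from math import log2

PRIMES_17 = (2, 3, 5, 7, 11, 13, 17)

def kees_complexity(generator_mapping, primes=PRIMES_17):
    """Max minus min of the Tenney-weighted generator mapping.

    The octave's generator count is 0, so 0 always falls in the spread."""
    weighted = [g / log2(p) for g, p in zip(generator_mapping, primes)]
    return max(weighted) - min(weighted)

# Generator mapping of the 13/41 temperament listed earlier in the thread.
print(kees_complexity([0, 5, 1, 12, 14, -1, 16]))   # about 4.545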

There are different ways of calculating the errors, but they should be the same for equal and rank 1 temperaments. I'd like to use the Tenney-weighted prime RMS. I think the Tenney-weighted prime standard deviation will be easier to do proofs with because the square error is a quadratic function of the generator size. The two can be proven close to each other.

It happens that an RMS error leads to a standard deviation as the complexity, rather than max-min. Perhaps I would prefer that. But the std is always less than half the max-min so the proof can follow from that.

The badness for a rank 2 temperament can be anything sensible that makes the proof easy. By sensible, I mean that lower error or complexity should always reduce the badness.

The badness for an equal temperament can be anything that makes the proof easy, and restricts the number we need to find.

Graham

🔗Graham Breed <gbreed@gmail.com>

1/19/2006 6:57:18 AM

Carl Lumma wrote:
>>By my reckoning there are 6 rank 2 temperaments with an RMS weighted
>>prime error under 3 cents/octave and a Kees complexity of under 6.
>
> What are they!?

Number 1

55:54, 65:64, 85:84, 91:90, 99:98
13/41

1202.6 cents period
381.4 cents generator

mapping by period and generator:
[(1, 0), (0, 5), (2, 1), (-1, 12), (-1, 14), (4, -1), (-1, 16)]

mapping by steps:
[(19, 22), (30, 35), (44, 51), (53, 62), (65, 76), (70, 81), (77, 90)]

complexity measure: 4.545
RMS weighted error: 2.643 cents/octave
max weighted error: 3.966 cents/octave

Number 2

52:51, 56:55, 78:77, 91:90, 100:99
10/27

1196.1 cents period
442.8 cents generator

mapping by period and generator:
[(1, 0), (-1, 7), (-1, 9), (-2, 13), (2, 4), (0, 10), (3, 3)]

mapping by steps:
[(19, 8), (30, 13), (44, 19), (53, 23), (66, 28), (70, 30), (78, 33)]

complexity measure: 4.631
RMS weighted error: 2.730 cents/octave
max weighted error: 3.944 cents/octave

Number 3

50:49, 65:64, 78:77, 85:84, 105:104
5/13

601.2 cents period
231.5 cents generator

mapping by period and generator:
[(2, 0), (2, 3), (5, -1), (6, -1), (5, 5), (7, 1), (7, 3)]

mapping by steps:
[(16, 10), (25, 16), (37, 23), (45, 28), (55, 35), (59, 37), (65, 41)]

complexity measure: 4.647
RMS weighted error: 2.939 cents/octave
max weighted error: 4.968 cents/octave

Number 4

55:54, 66:65, 85:84, 99:98, 105:104
13/41

1201.4 cents period
381.1 cents generator

mapping by period and generator:
[(1, 0), (0, 5), (2, 1), (-1, 12), (-1, 14), (-2, 18), (-1, 16)]

mapping by steps:
[(19, 22), (30, 35), (44, 51), (53, 62), (65, 76), (70, 82), (77, 90)]

complexity measure: 4.864
RMS weighted error: 2.889 cents/octave
max weighted error: 5.206 cents/octave

Number 5

65:64, 78:77, 81:80, 85:84, 100:99, 105:104
11/26

1202.8 cents period
508.4 cents generator

mapping by period and generator:
[(1, 0), (2, -1), (4, -4), (-1, 9), (6, -6), (2, 4), (-1, 12)]

mapping by steps:
[(19, 7), (30, 11), (44, 16), (53, 20), (66, 24), (70, 26), (77, 29)]

complexity measure: 4.940
RMS weighted error: 2.802 cents/octave
max weighted error: 4.390 cents/octave

Number 6

50:49, 78:77, 81:80, 85:84, 99:98, 100:99, 105:104
2/13

601.2 cents period
92.7 cents generator

mapping by period and generator:
[(2, 0), (3, 1), (4, 4), (5, 4), (6, 6), (6, 9), (8, 1)]

mapping by steps:
[(14, 12), (22, 19), (32, 28), (39, 34), (48, 42), (51, 45), (57, 49)]

complexity measure: 4.864
RMS weighted error: 2.941 cents/octave
max weighted error: 4.660 cents/octave

>>If I happen to know the result I want to get, I could find them all
>>using superparticulars from 50:49 to 110:109.
>
> ??

Take all 17-limit superparticular ratios with a numerator from 50 to 110. Take all subsets of 5 of them. Each of the above temperaments is represented by at least one of those sets. I haven't checked that they're linearly independent so all I've shown is that you need at least this many superparticulars.
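As an illustration of that counting exercise (hypothetical helper code, not part of either survey), the sketch below enumerates the 17-limit superparticulars in that numerator range, forms the 5-element subsets, and keeps only those whose monzos are linearly independent, so that a subset really defines a rank 2 temperament rather than an equal one. The exact tally depends on which ratios one admits at the endpoints.

from fractions import Fraction
from itertools import combinations

PRIMES_17 = (2, 3, 5, 7, 11, 13, 17)

def strip_17_limit(n):
    """Divide out every prime factor up to 17; 17-limit numbers reduce to 1."""
    for p in PRIMES_17:
        while n % p == 0:
            n //= p
    return n

def monzo(ratio):
    """Prime-exponent vector of a ratio over the 17-limit primes."""
    exps = []
    for p in PRIMES_17:
        e, num, den = 0, ratio.numerator, ratio.denominator
        while num % p == 0:
            num //= p
            e += 1
        while den % p == 0:
            den //= p
            e -= 1
        exps.append(e)
    return exps

def rank(rows):
    """Rank of an integer matrix, by exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

commas = [Fraction(n, n - 1) for n in range(50, 111)
          if strip_17_limit(n) == 1 and strip_17_limit(n - 1) == 1]
rank2_sets = [s for s in combinations(commas, 5)
              if rank([monzo(c) for c in s]) == 5]
print(len(commas), "superparticulars;", len(rank2_sets), "independent 5-comma sets")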

Graham

🔗Gene Ward Smith <gwsmith@svpal.org>

1/19/2006 12:10:56 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> The complexity of a rank 2 temperament can be the Kees complexity.
> Tenney weight the generator mapping, and taking the difference between
> the maximum and minimum values (including zero for the octave).

What are you using as the complexity of a rank 3 temperament?

🔗Carl Lumma <ekin@lumma.org>

1/19/2006 12:19:04 PM

At 06:57 AM 1/19/2006, you wrote:
>Carl Lumma wrote:
>>>By my reckoning there are 6 rank 2 temperaments with an RMS weighted
>>>prime error under 3 cents/octave and a Kees complexity of under 6.
>>
>> What are they!?
>
>Number 1
>
>55:54, 65:64, 85:84, 91:90, 99:98
>13/41
>
>1202.6 cents period
> 381.4 cents generator
>
>mapping by period and generator:
>[(1, 0), (0, 5), (2, 1), (-1, 12), (-1, 14), (4, -1), (-1, 16)]
>
>mapping by steps:
>[(19, 22), (30, 35), (44, 51), (53, 62), (65, 76), (70, 81), (77, 90)]
>
>complexity measure: 4.545
>RMS weighted error: 2.643 cents/octave
>max weighted error: 3.966 cents/octave
>
>Number 2
>
>52:51, 56:55, 78:77, 91:90, 100:99
>10/27
>
>1196.1 cents period
> 442.8 cents generator
>
>mapping by period and generator:
>[(1, 0), (-1, 7), (-1, 9), (-2, 13), (2, 4), (0, 10), (3, 3)]
>
>mapping by steps:
>[(19, 8), (30, 13), (44, 19), (53, 23), (66, 28), (70, 30), (78, 33)]
>
>complexity measure: 4.631
>RMS weighted error: 2.730 cents/octave
>max weighted error: 3.944 cents/octave
>
>Number 3
>
>50:49, 65:64, 78:77, 85:84, 105:104
>5/13
>
> 601.2 cents period
> 231.5 cents generator
>
>mapping by period and generator:
>[(2, 0), (2, 3), (5, -1), (6, -1), (5, 5), (7, 1), (7, 3)]
>
>mapping by steps:
>[(16, 10), (25, 16), (37, 23), (45, 28), (55, 35), (59, 37), (65, 41)]
>
>complexity measure: 4.647
>RMS weighted error: 2.939 cents/octave
>max weighted error: 4.968 cents/octave
>
>Number 4
>
>55:54, 66:65, 85:84, 99:98, 105:104
>13/41
>
>1201.4 cents period
> 381.1 cents generator
>
>mapping by period and generator:
>[(1, 0), (0, 5), (2, 1), (-1, 12), (-1, 14), (-2, 18), (-1, 16)]
>
>mapping by steps:
>[(19, 22), (30, 35), (44, 51), (53, 62), (65, 76), (70, 82), (77, 90)]
>
>complexity measure: 4.864
>RMS weighted error: 2.889 cents/octave
>max weighted error: 5.206 cents/octave
>
>Number 5
>
>65:64, 78:77, 81:80, 85:84, 100:99, 105:104
>11/26
>
>1202.8 cents period
> 508.4 cents generator
>
>mapping by period and generator:
>[(1, 0), (2, -1), (4, -4), (-1, 9), (6, -6), (2, 4), (-1, 12)]
>
>mapping by steps:
>[(19, 7), (30, 11), (44, 16), (53, 20), (66, 24), (70, 26), (77, 29)]
>
>complexity measure: 4.940
>RMS weighted error: 2.802 cents/octave
>max weighted error: 4.390 cents/octave
>
>Number 6
>
>50:49, 78:77, 81:80, 85:84, 99:98, 100:99, 105:104
>2/13
>
> 601.2 cents period
> 92.7 cents generator
>
>mapping by period and generator:
>[(2, 0), (3, 1), (4, 4), (5, 4), (6, 6), (6, 9), (8, 1)]
>
>mapping by steps:
>[(14, 12), (22, 19), (32, 28), (39, 34), (48, 42), (51, 45), (57, 49)]
>
>complexity measure: 4.864
>RMS weighted error: 2.941 cents/octave
>max weighted error: 4.660 cents/octave

Do these have existing names?

>>>If I happen to know the result I want to get, I could find them all
>>>using superparticulars from 50:49 to 110:109.
>>
>> ??
>
>Take all 17-limit superparticular ratios with a numerator from 50 to
>110. Take all subsets of 5 of them. Each of the above temperaments is
>represented by at least one of those sets. I haven't checked that
>they're linearly independent so all I've shown is that you need at least
>this many superparticulars.

A limit of 110 on the numerator isn't very high for the 17-limit. Won't
you miss lots of stuff?

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

1/19/2006 3:05:38 PM

--- In tuning-math@yahoogroups.com, Carl Lumma <ekin@l...> wrote:

> A limit of 110 on the numerator isn't very high for the 17-limit. Won't
> you miss lots of stuff?

You'll miss every single low-error temperament, without exception.

🔗Gene Ward Smith <gwsmith@svpal.org>

1/19/2006 3:37:34 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

Carl wanted names, so at least we could try to relate these to
lower-limit names. However, that isn't always so easy.

> Number 1
>
> 55:54, 65:64, 85:84, 91:90, 99:98
> 13/41

This, for instance, might be called 17-limit magic, but below another
13/41 mapping is given. Moreover, the 41-et standard doesn't support
it, and we are talking another 41 val here. If you add 50/49 to the
above commas, you get the standard 22-et val, and 22 seems like an
obvious MOS for this, perhaps tuned to 20/63 as a generator.

> Take all 17-limit superparticular ratios with a numerator from 50 to
> 110. Take all subsets of 5 of them. Each of the above temperaments is
> represented by at least one of those sets. I haven't checked that
> they're linearly independent so all I've shown is that you need at least
> this many superparticulars.

Don't you mean at most?

🔗Gene Ward Smith <gwsmith@svpal.org>

1/19/2006 3:52:51 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> mapping by period and generator:
> [(1, 0), (0, 5), (2, 1), (-1, 12), (-1, 14), (4, -1), (-1, 16)]
>
> mapping by steps:
> [(19, 22), (30, 35), (44, 51), (53, 62), (65, 76), (70, 81), (77, 90)]

Incidentally, why do you insist on using this form for the mapping?
It's far better to show it as two vals, thusly:

[<1 0 2 -1 -1 4 -1|, <0 5 1 1 12 14 -1 16|]

and

[<19 30 44 53 65 70 77|, <22 35 51 62 76 81 90|]

That way you can see the information in a way which makes more sense.

🔗wallyesterpaulrus <perlich@aya.yale.edu>

1/19/2006 11:45:02 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> Doing a complete search and knowing it's complete are different
things.
> I'm still working on the latter.

But you've got the former covered? ;)

> > It would be nice if you could come up with some theory for
> >>deciding what badness cutoff an number of equal temperaments are
likely
> >>to work. That's currently a weak spot in the method.
> >
> >
> > That is definately a thought. Any proofs would require knowing
> > specifically how you are defining badness for both the ets and the
> > temperaments, which you are sort of defining below, I guess,
though I
> > don't know what you are saying.
>
> It shouldn't matter too much what the badness is. I'd also like
ways of
> relating different measures of complexity to each other.
>
> The complexity of an equal temperament is the number of notes to
the
> octave. That's easy enough.

I get interesting results for the complexity of, say, 12-equal in the
5-limit (without dividing by the number of primes, so everything is 3
times too large). It seems to be somewhere between 3 * 12 and 3 * the
number of notes per 2:1 in top-12-equal. I posted something to this
effect in '04 . . . In any case, I think it's important that
complexity be defined the same way no matter what the rank of the
temperament is, though comparing complexities across different ranks
(and/or different co-dimensions) may not be meaningful . . .

> The complexity of a rank 2 temperament can be the Kees complexity.
>
> Tenney weight the generator mapping, and taking the difference
between
> the maximum and minimum values (including zero for the octave).
>
> There are different ways of calculating the errors, but they should
be
> the same for equal and rank 1 temperaments.

I thought equal and rank 1 temperaments *were* the same.

> I'd like to use the
> Tenney-weighted prime RMS. I think the Tenney-weighted prime
standard
> deviation will be easier to do proofs with because the square error
is a
> quadratic function of the generator size. The two can be proven
close
> to each other.
>
> It happens that an RMS error leads to a standard deviation as the
> complexity,

Why does a certain choice for error lead to a certain choice for
complexity?

> rather than max-min. Perhaps I would prefer that. But the
> std is always less than half the max-min so the proof can follow
>from that.

🔗wallyesterpaulrus <perlich@aya.yale.edu>

1/19/2006 11:48:07 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> I haven't checked that
> they're linearly independent so all I've shown is that you need at least
> this many superparticulars.

Don't you mean "at most this many"?

🔗wallyesterpaulrus <perlich@aya.yale.edu>

1/20/2006 12:05:26 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...>
wrote:
>
> Carl wanted names, so at least we could try to relate these to
> lower-limit names. However, that isn't always so easy.
>
> > Number 1
> >
> > 55:54, 65:64, 85:84, 91:90, 99:98
> > 13/41
>
> This, for instance, might be called 17-limit magic, but below another
> 13/41 mapping is given. Moreover, the 41-et standard doesn't support
> it, and we are talking another 41 val here. If you add 50/49 to the
> above commas, you get the standard 22-et val, and 22 seems like an
> obvious MOS for this, perhaps tuned to 20/63 as a generator.

In this latter case, 50:49 is what I'd call a "chromatic unison
vector". In the 22-tone MOS here, 50:49 does not get tempered to a
perfect unison, but to the distance that one note changes when the
entire MOS is transposed by one generator. When the MOS is taken as
the diatonic scale instead, this distance is called an "augmented
unison" by musicians, and corresponds to the ratio 25:24 (or 135:128,
etc.).

🔗wallyesterpaulrus <perlich@aya.yale.edu>

1/20/2006 12:06:28 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
>
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...>
wrote:
>
> > mapping by period and generator:
> > [(1, 0), (0, 5), (2, 1), (-1, 12), (-1, 14), (4, -1), (-1, 16)]
> >
> > mapping by steps:
> > [(19, 22), (30, 35), (44, 51), (53, 62), (65, 76), (70, 81), (77,
90)]
>
> Incidentally, why do you insist on using this form for the mapping?
> It's far better to show it as two vals, thusly:
>
> [<1 0 2 -1 -1 4 -1|, <0 5 1 1 12 14 -1 16|]
>
> and
>
> [<19 30 44 53 65 70 77|, <22 35 51 62 76 81 90|]
>
> That way you can see the information in a way which makes more
>sense.

I would have said exactly the opposite.

🔗Gene Ward Smith <gwsmith@svpal.org>

1/20/2006 12:54:36 AM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus" <perlich@a...>
wrote:
>
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> wrote:
> >
> > --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...>
> wrote:
> >
> > > mapping by period and generator:
> > > [(1, 0), (0, 5), (2, 1), (-1, 12), (-1, 14), (4, -1), (-1, 16)]
> > >
> > > mapping by steps:
> > > [(19, 22), (30, 35), (44, 51), (53, 62), (65, 76), (70, 81), (77,
> 90)]
> >
> > Incidentally, why do you insist on using this form for the mapping?
> > It's far better to show it as two vals, thusly:
> >
> > [<1 0 2 -1 -1 4 -1|, <0 5 1 1 12 14 -1 16|]
> >
> > and
> >
> > [<19 30 44 53 65 70 77|, <22 35 51 62 76 81 90|]
> >
> > That way you can see the information in a way which makes more
> >sense.
>
> I would have said exactly the opposite.

Why? Graham's pairings do not appear to me to tell me anything anyone
wants to know. What is the point? If you do it my way, you know the
following:

(1) <1 0 2 -1 -1 4 -1| is the octave mapping

(2) <0 5 1 1 12 14 -1 16| is the generator mapping

(3) <19 30 44 53 65 70 77| is an et val supporting the temperament

(4) <22 35 51 62 76 81 90| is another et val supporting the temperament.

That's four pieces of information which are actually useful to know. I
know a lot about the temperament, just from looking at them. What does
the other way of doing things tell me?

🔗wallyesterpaulrus <perlich@aya.yale.edu>

1/20/2006 1:43:02 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:
>
> --- In tuning-math@yahoogroups.com, "wallyesterpaulrus"
<perlich@a...>
> wrote:
> >
> > --- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<gwsmith@s...>
> > wrote:
> > >
> > > --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...>
> > wrote:
> > >
> > > > mapping by period and generator:
> > > > [(1, 0), (0, 5), (2, 1), (-1, 12), (-1, 14), (4, -1), (-1,
16)]
> > > >
> > > > mapping by steps:
> > > > [(19, 22), (30, 35), (44, 51), (53, 62), (65, 76), (70, 81),
(77,
> > 90)]
> > >
> > > Incidentally, why do you insist on using this form for the
mapping?
> > > It's far better to show it as two vals, thusly:
> > >
> > > [<1 0 2 -1 -1 4 -1|, <0 5 1 1 12 14 -1 16|]
> > >
> > > and
> > >
> > > [<19 30 44 53 65 70 77|, <22 35 51 62 76 81 90|]
> > >
> > > That way you can see the information in a way which makes more
> > >sense.
> >
> > I would have said exactly the opposite.
>
> Why?

Because the numbers are being grouped by the thing they map to, which
is normally the thing you're going to be interested in calculating.
And it's a lot easier to do a calculation when the numbers you're
calculating with are all bunched together instead of spread far apart
among other numbers.

> Graham's pairings do not appear to me to tell me anything anyone
> wants to know. What is the point? If you do it my way, you know the
> following:
>
> (1) <1 0 2 -1 -1 4 -1| is the octave mapping

A monkey could read this off Graham's results. Beyond that, these
lists of seven numbers in (1) and (2) aren't going to be recognizable
by normal humans, and neither one of them seems to even tell you
anything meaningful by itself even if you did recognize it somehow.

> (2) <0 5 1 1 12 14 -1 16| is the generator mapping

If this doesn't line up perfectly underneath or above (1) -- and in
your method, it doesn't -- it's not going to be that easy to see
which element of the octave mapping goes with which element of the
generator mapping, and it seems to me that neither is too meaningful
without the other.

> (3) <19 30 44 53 65 70 77| is an et val supporting the temperament

A monkey could read this off Graham's results.

> (4) <22 35 51 62 76 81 90| is another et val supporting the
temperament.

Ditto.

> That's four pieces of information which are actually useful to know.

How is either (1) or (2) useful to know by itself?

> I
> know a lot about the temperament, just from looking at them. What
does
> the other way of doing things tell me?

The same darn thing, the same dang numbers, just formatted better.

🔗Gene Ward Smith <gwsmith@svpal.org>

1/20/2006 2:37:44 AM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus" <perlich@a...>
wrote:

> Because the numbers are being grouped by the thing they map to, which
> is normally the thing you're going to be interested in calculating.

Why do I care that one val maps 7 by 53, and the other by 62? It's only
by taking them as a whole I can get anything out of them.

Do you actually do calculations in this way, and if so, how? You sound
as if you don't know what you are talking about due to lack of
relevant experience.

> And it's a lot easier to do a calculation when the numbers you're
> calculating with are all bunched together instead of spread far apart
> among other numbers.

Oh, please! This makes NO sense. Of course it is harder to do
calculations, because what you want to do any calculations with are
the vals. Nothing else is even remotely as useful.

> A monkey could read this off Graham's results.

Well, I must be sub-simian, because I notice below I got something
wrong. You know why? Because it's hard to read this way! You want
vals, or at least you should want them if you know what the point of
these computations is, and Graham's method makes them hard to get to,
and replaces it with something next to useless.

> Beyond that, these
> lists of seven numbers in (1) and (2) aren't going to be recognizable
> by normal humans, and neither one of them seems to even tell you
> anything meaningful by itself even if you did recognize it somehow.

And you think Graham's numbers will make sense? They don't convey much
to me, whereas the vals immediately give me something I can compute
with, and allow comparison with what I think 19 and 22 ought to be.
The mapping to primes makes a little more sense in Graham's setup,
since they correspond to a generator pair, but the structural features
of the tuning are completely obscured, and the val pair is still much
better.

> > (2) <0 5 1 1 12 14 -1 16| is the generator mapping
>
> If this doesn't line up perfectly underneath or above (1) -- and in
> your method, it doesn't -- it's not going to be that easy to see
> which element of the octave mapping goes with which element of the
> generator mapping, and it seems to me that neither is too meaningful
> without the other.

Of course they are meaningful in isolation. That's the point. You
don't need the friggin pairs to make sense of them, and use them. You
don't need them to line up. They are *mappings* in and of themselves.

> > That's four pieces of information which are actually useful to know.
>
> How is either (1) or (2) useful to know by itself?

(1) isn't all that useful by itself, except for the first number,
telling you the period, but (2) is the mapping to generators, which is
of obvious utility. If I have a bunch of intervals I want to temper,
however, both (1) and (2) *are* useful, whereas the number pairs are
going to be a big fat pain to work with unless you do the sensible
thing and convert them a la monkey.

> > I
> > know a lot about the temperament, just from looking at them. What
> does
> > the other way of doing things tell me?
>
> The same darn thing, the same dang numbers, just formatted better.

You haven't given a single argument for this proposition which makes any
sense, you just assert it, based on nothing. It sounds as if you don't
ever apply mappings to monzos and see where they map to, which is
mind-boggling. What the hell have you been doing all these years?

🔗wallyesterpaulrus <perlich@aya.yale.edu>

1/20/2006 3:16:30 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
wrote:

> Do you actually do calculations in this way, and if so, how?

You multiply the period part of the mapping by the size of the
period, and then the generator part of the mapping by the size of the
generator, and then add these products. It's so much easier when the
numbers going into the calculation are right next to each other,
instead of far apart.
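In code, the calculation Paul describes is a one-liner per prime; a sketch using the 13/41 figures quoted earlier in the thread:

# Period/generator mapping and tuning of the 13/41 temperament above.
mapping = [(1, 0), (0, 5), (2, 1), (-1, 12), (-1, 14), (4, -1), (-1, 16)]
period, generator = 1202.6, 381.4   # cents

# Tempered size of each prime: periods * period + generators * generator.
tuning_map = [p * period + g * generator for p, g in mapping]
print([round(size, 1) for size in tuning_map])
# Prime 3, for instance, comes out as 0*1202.6 + 5*381.4 = 1907.0 cents.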

> You sound
> as if you don't know what you are talking about due to lack of
> relevant experience.

What I'm talking about is solely and utterly based on relevant
experience.

> > And it's a lot easier to do a calculation when the numbers you're
> > calculating with are all bunched together instead of spread far
apart
> > among other numbers.
>
> Oh, please! This makes NO sense. Of course it is harder to do
> calculations, because what you want to do any calculations with are
> the vals. Nothing else is even remotely as useful.

I'm talking about human calculations, not computer calculations where
the input might as well be punch cards. (1) and (2) are like punch
cards to me -- they are the inputs into complex calculations which
give meaningful results, but they each carry no readable meaning on
their own. If you're creating output for a computer to read, OK. But
if you're creating output for a human to read, then I beg to differ.

> > A monkey could read this off Graham's results.
>
> Well, I must be sub-simian, because I notice below I got something
> wrong. You know why? Because it's hard to read this way!

But it's even harder to do calculations with vals.

> You want
> vals, or at least you should want them if you know what the point of
> these computations are,

Wow.

> and Graham's method makes them hard to get to,
> and replaces it with something next to useless.

Wow^2.

> Beyond that, these
> > lists of seven numbers in (1) and (2) aren't going to be
recognizable
> > by normal humans, and neither one of them seems to even tell you
> > anything meaningful by itself even if you did recognize it
somehow.
>
> And you think Graham's numbers will make sense? They don't convey
much
> to me, whereas the vals immediately give me something I can compute
> with,

But what do they *convey* to you, to use your own language?

> and allow comparison with what I think 19 and 22 ought to be.

How do (1) and (2), which is what we're talking about here, allow
comparison with what you think 19 and 22 ought to be? I think you
changed the subject and are talking about (3) and (4).

> The mapping to primes makes a little more sense in Graham's setup,
> since they correspond to a generator pair, but the structural
features
> of the tuning are completely obscured,

To a human? How?

> and the val pair is still much
> better.

Define "better".

> > > (2) <0 5 1 1 12 14 -1 16| is the generator mapping
> >
> > If this doesn't line up perfectly underneath or above (1) -- and
in
> > your method, it doesn't -- it's not going to be that easy to see
> > which element of the octave mapping goes with which element of
the
> > generator mapping, and it seems to me that neither is too
meaningful
> > without the other.
>
> Of course they are meaningful in isolation. That's the point. You
> don't need the friggin pairs to make sense of them, and use them.
You
> don't need them to line up. They are *mappings* in and of
themselves.

I should have said that the period mapping is not too meaningful in
isolation. I have no idea what useful information it conveys about
the tuning, by itself.

> If I have a bunch of intervals I want to temper,
> however, both (1) and (2) *are* useful, whereas the number pairs are
> going to be a big fat pain to work with unless you do the sensible
> thing and convert them a la monkey.

Again, I seem to have a completely opposite experience. Once you've
gone through the pairs once and calculated each prime, you have a
list of tempered primes (you no longer have to deal with *two* sets
of numbers), and can then easily calculate any tempered interval if
you can do prime factorization.

> > > I
> > > know a lot about the temperament, just from looking at them.
What
> > does
> > > the other way of doing things tell me?
> >
> > The same darn thing, the same dang numbers, just formatted better.
>
> You haven't given a single argument for this proposition which makes any
> sense, you just assert it, based on nothing.

This is simply not true, but it actually applies to many of *your*
statements very well.

> It sounds as if you don't
> ever apply mappings to monzos and see where they map to,

Do you mean how the monzos are mapped to by the period and generator?
Or something else?

> which is
> mind-boggling. What the hell have you been doing all these years?

I'm not going to "grace" this insult with an answer.

🔗Graham Breed <gbreed@gmail.com>

1/20/2006 11:09:46 AM

Gene Ward Smith wrote:

>>Because the numbers are being grouped by the thing they map to, which
>>is normally the thing you're going to be interested in calculating. ...
>
> Do you actually do calculations in this way, and if so, how? You sound
> as if you don't know what you are talking about due to lack of
> relevant experience.

Touché! In fact, I show them that way because that's how I keep them internally, for ease of calculation. I fall at your knees, oh great and wise master!

Graham

🔗Graham Breed <gbreed@gmail.com>

1/20/2006 11:09:56 AM

wallyesterpaulrus wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
>>Doing a complete search and knowing it's complete are different
>>things. I'm still working on the latter.
>
> But you've got the former covered? ;)

According to my preliminary results, yes.

> I get interesting results for the complexity of, say, 12-equal in the
> 5-limit (without dividing by the number of primes, so everything is 3
> times too large). It seems to be somewhere between 3 * 12 and 3 * the
> number of notes per 2:1 in top-12-equal. I posted something to this
> effect in '04 . . . In any case, I think it's important that
> complexity be defined the same way no matter what the rank of the
> temperament is, though comparing complexities across different ranks
> (and/or different co-dimensions) may not be meaningful . . .

Do you? How do you get that?

The mean-abs Tenney-weighted wedgie method will give the complexity as very close to the number of notes to the octave for an equal temperament. I don't have a non-wedgie method for ranks over 2.

>>The complexity of a rank 2 temperament can be the Kees complexity.
>>Tenney weight the generator mapping, and take the difference between
>>the maximum and minimum values (including zero for the octave).
>>
>>There are different ways of calculating the errors, but they should be
>>the same for equal and rank 1 temperaments.
>
> I thought equal and rank 1 temperaments *were* the same.

Yes, sorry. "Equal and rank 2".

>>It happens that an RMS error leads to a standard deviation as the
>>complexity,
>
> Why does a certain choice for error lead to a certain choice for
> complexity?

To explain that I'd need to start explaining my method. I do intend that but I'll get through this email blizzard first.

Graham

🔗Graham Breed <gbreed@gmail.com>

1/20/2006 11:11:18 AM

wallyesterpaulrus wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>>I haven't checked that
>>they're linearly independent so all I've shown is that you need at least
>>this many superparticulars.
>
> Don't you mean "at most this many"?

No, I meant what I said, but I can see two of you misunderstood so I'll explain.

Above each R2 temperament I showed a list of commas which are unison vectors of those temperaments. I made sure there were at least 5 commas in each list. What I didn't do was to make sure those commas were linearly independent. So you may find that wedging those commas together defines an equal, not R2, temperament. If that's the case, then you'll need to start with more superparticulars. Hence the list I started with may not have been adequate.

Graham

🔗Graham Breed <gbreed@gmail.com>

1/20/2006 11:10:36 AM

Gene Ward Smith wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
>>mapping by period and generator:
>>[(1, 0), (0, 5), (2, 1), (-1, 12), (-1, 14), (4, -1), (-1, 16)]
>>
>>mapping by steps:
>>[(19, 22), (30, 35), (44, 51), (53, 62), (65, 76), (70, 81), (77, 90)]
>
> Incidentally, why do you insist on using this form for the mapping?
> It's far better to show it as two vals, thusly:
>
> [<1 0 2 -1 -1 4 -1|, <0 5 1 1 12 14 -1 16|]
>
> and
>
> [<19 30 44 53 65 70 77|, <22 35 51 62 76 81 90|]
>
> That way you can see the information in a way which makes more sense.

I don't insist on using it, I just use it. If you look at some of the other results I've given lately you'll see they use a different form.

Graham

🔗Graham Breed <gbreed@gmail.com>

1/20/2006 11:11:58 AM

Gene Ward Smith wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
>>The complexity of a rank 2 temperament can be the Kees complexity.
>>Tenney weight the generator mapping, and take the difference between
>>the maximum and minimum values (including zero for the octave).
>
> What are you using as the complexity of a rank 3 temperament?

The sum of the absolute values of the Tenney weighted wedgie, divided by the number of primes.
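A sketch of one reading of that definition, for any rank (my own code, with invented names; weighting each wedgie entry by the product of the logs of the primes it involves is an assumption that at least reduces to notes-per-octave for a single val, and the normalization in Graham's libraries may well differ):

from itertools import combinations
from math import log2

PRIMES_17 = (2, 3, 5, 7, 11, 13, 17)

def det(matrix):
    """Determinant of a small square matrix by cofactor expansion."""
    if len(matrix) == 1:
        return matrix[0][0]
    return sum((-1) ** j * matrix[0][j] *
               det([row[:j] + row[j + 1:] for row in matrix[1:]])
               for j in range(len(matrix)))

def wedgie_complexity(mapping, primes=PRIMES_17):
    """Sum of |Tenney-weighted wedgie| entries divided by the number of primes.

    `mapping` is a list of vals, one per generator; each wedgie entry is the
    minor over one set of prime columns, here divided by the product of the
    logs of those primes (an assumed weighting, not necessarily Graham's)."""
    r = len(mapping)
    total = 0.0
    for cols in combinations(range(len(primes)), r):
        minor = det([[val[c] for c in cols] for val in mapping])
        weight = 1.0
        for c in cols:
            weight *= log2(primes[c])
        total += abs(minor) / weight
    return total / len(primes)

# The 19 and 22 vals of the 13/41 temperament quoted earlier (rank 2 case).
# Note this wedgie-based figure is not the Kees complexity in the listings.
print(wedgie_complexity([(19, 30, 44, 53, 65, 70, 77),
                         (22, 35, 51, 62, 76, 81, 90)]))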

Graham

🔗Carl Lumma <ekin@lumma.org>

1/20/2006 1:10:03 PM

>>>The complexity of a rank 2 temperament can be the Kees complexity.
>>>Tenney weight the generator mapping, and taking the difference between
>>>the maximum and minimum values (including zero for the octave).
>>
>> What are you using as the complexity of a rank 3 temperament?
>
>The sum of the absolute values of the Tenney weighted wedgie, divided by
>the number of primes.

Why would you use a different complexity for R2 and R3 temperaments?

-Carl

🔗Gene Ward Smith <gwsmith@svpal.org>

1/20/2006 1:36:04 PM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus" <perlich@a...>
wrote:
>
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> wrote:
>
> > Do you actually do calculations in this way, and if so, how?
>
> You multiply the period part of the mapping by the size of the
> period, and then the generator part of the mapping by the size of the
> generator, and then add these products. It's so much easier when the
> numbers going into the calculation are right next to each other,
> instead of far apart.

I thought that might be it. You only need do this once, and then you
have a tuning map, which is what you *really* want.

Graham, I believe, thinks of what he's printing out as a matrix, in
which case printing it out either way makes sense and it is a question
of which way is more useful. As a matrix, you can do the operation you
want once, and then you're done. The more interesting stuff is matrix
multiplication on the other side, since this connects with the
structural features of the temperament, not with the precise values of
the tuning map. And that part is a list of vals. It's what tells you
things about the nature of the temperament.

> > You sound
> > as if you don't know what you are talking about due to lack of
> > relevant experience.
>
> What I'm talking about is solely and utterly based on relevant
> experience.

Doing what? Computing tuning maps?

> I'm talking about human calculations, not computer calculations where
> the input might as well be punch cards. (1) and (2) are like punch
> cards to me -- they are the inputs into complex calculations which
> give meaningful results, but they each carry no readable meaning on
> their own. If you're creating output for a computer to read, OK. But
> if you're creating output for a human to read, then I beg to differ.

The number of generator steps to reach a prime carries no readable
meaning on its own? What are you smoking?

> > > A monkey could read this off Graham's results.
> >
> > Well, I must be sub-simian, because I notice below I got something
> > wrong. You know why? Because it's hard to read this way!
>
> But it's even harder to do calculations with vals.

For heaven's sake, you have Matlab. How can computing with vals be
anything other than dead easy?

> But what do they *convey* to you, to use your own language?

They immediately (especially the period and generator map) tell me
things about the nature of the temperament.

> > The mapping to primes makes a little more sense in Graham's setup,
> > since they correspond to a generator pair, but the structural
> features
> > of the tuning are completely obscured,
>
> To a human? How?

You don't have a clue what the generator is, or how it maps to primes.

> > and the val pair is still much
> > better.
>
> Define "better".

More informative and easier to use.

> You
> > don't need them to line up. They are *mappings* in and of
> themselves.
>
> I should have said that the period mapping is not too meaningful in
> isolation. I have no idea what useful information it conveys about
> the tuning, by itself.

In isolation, the most important thing it conveys is the period.

> Again, I seem to have a completely opposite experience. Once you've
> gone through the pairs once and calculated each prime, you have a
> list of tempered primes (you no longer have to deal with *two* sets
> of numbers), and can then easily calculate any tempered interval if
> you can do prime factorization.

That's the tuning map. Yes, multiplying the matrix by the generators
on one side gives you the tuning map. You do that once, and once only,
and then you are done. The result tells you little about the structural
features of the tuning unless you compute where some commas map to, but
it is a very nice thing to have. In fact, I think it would be nice if
Graham simply gave it.

However, there's a lot more to this business than the tuning map!

🔗Gene Ward Smith <gwsmith@svpal.org>

1/20/2006 1:47:19 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> Touché! In fact, I show them that way because that's how I keep them
> internally, for ease of calculation. I fall at your knees, oh great and
> wise master!

Rubbish. You keep them internally as a matrix. Probably this is simply
how Python prints out your matrix, which of course is a very different
question. You could make it more legible simply by having Python print
out the transpose instead.

🔗Graham Breed <gbreed@gmail.com>

1/20/2006 1:40:54 PM

Carl Lumma wrote:
>>>>The complexity of a rank 2 temperament can be the Kees complexity.
>>>>Tenney weight the generator mapping, and take the difference between
>>>>the maximum and minimum values (including zero for the octave).
>>>
>>>What are you using as the complexity of a rank 3 temperament?
>>
>>The sum of the absolute values of the Tenney weighted wedgie, divided by
>>the number of primes.
>
> Why would you use a different complexity for R2 and R3 temperaments?

R2 temperaments in the generic regular temperament library will use wedgie complexity. There's still an option of a special LinearTemperament (sic) object that can do the Kees complexity, and maybe even a way of searching for them.

The new, slimline libraries that only do R1 and R2 temperaments don't have any wedgie code in them, so they can't calculate that version of complexity. But I think the Kees version makes more sense and if I could generalize it to higher ranks I would. Not that it makes a great deal of difference.

Graham

🔗Gene Ward Smith <gwsmith@svpal.org>

1/20/2006 2:12:50 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> The sum of the absolute values of the Tenney weighted wedgie,
divided by
> the number of primes.

And for error? I might try to see if I track you.

🔗Herman Miller <hmiller@IO.COM>

1/20/2006 8:37:49 PM

wallyesterpaulrus wrote:
> --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> wrote:
>
>>Do you actually do calculations in this way, and if so, how?
>
> You multiply the period part of the mapping by the size of the
> period, and then the generator part of the mapping by the size of the
> generator, and then add these products. It's so much easier when the
> numbers going into the calculation are right next to each other,
> instead of far apart.

Another benefit of this organization is that it can be useful for classification. You can group all the (1, 0) temperaments together (what used to be called "linear" temperaments), and under those, a subgroup of [(1, 0), (2, -1)] temperaments (all those generated by an octave and a fourth), and so on.

On the other hand, the other arrangement can allow you to recognize relationships between temperaments that don't share the same period mapping, such as meantone and injera.

<1, 2, 4, 7]
<0, -1, -4, -10]

vs.

<2, 3, 4, 5]
<0, 1, 4, 4]

So both arrangements of numbers have some use (and it's trivial to convert one to the other....)
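For the record, the conversion really is a one-liner; a sketch using the 13/41 mapping from earlier in the thread:

# Column-per-prime form, as Graham prints it...
by_prime = [(1, 0), (0, 5), (2, 1), (-1, 12), (-1, 14), (4, -1), (-1, 16)]

# ...and the row-per-val form Gene prefers: just transpose.
octave_map, generator_map = zip(*by_prime)
print(octave_map)      # (1, 0, 2, -1, -1, 4, -1)
print(generator_map)   # (0, 5, 1, 12, 14, -1, 16)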

🔗Graham Breed <gbreed@gmail.com>

1/21/2006 8:58:39 AM

Gene Ward Smith wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
>>The sum of the absolute values of the Tenney weighted wedgie,
>>divided by the number of primes.
>
> And for error? I might try to see if I track you.

The RMS of the optimal Tenney-weighted tuning map.

Graham

🔗Graham Breed <gbreed@gmail.com>

1/21/2006 9:00:14 AM

Gene Ward Smith wrote:

> Rubbish. You keep then internaly as a matrix. Probably this is simply
> how Python prints out your matrix, which of course is a very different
> question. You could make it more legible simply by having Python print
> out the transpose instead.

At least you must have studied my code, because you know it better than I do. Still, I'll talk about the code as I understood it before receiving your enlightenment.

The general library "regular_wedgie.py" keeps things as Numeric arrays, and uses the default way Numeric stringifies them. It shows them the way you want and doesn't have a period/generator mapping. The latest version also shows the tuning map because you asked for it.

The original odd-limit library "temper.py" stores both the period/generator and melodic mappings as lists of pairs. That's because it's easier to loop over one list than two. Older versions did it, and showed it, the other way round.

The newer weighted-prime library "regular.py" was already storing the period and generator mappings as different lists, as it happens. That's because a lot of calculations only require the generator mapping and it saves the trouble of splitting them up. When I made that change I also changed the stringification to look like it used to. As there was no particular reason to keep backwards compatibility I've changed that now so it's back in line with the internal representation and what you want.

On thinking about it I've changed the way the melodic mapping is shown as well. I think the output is clearer as a result so that's how it'll stay. It happens to be more consistent with the constructor anyway. I won't get it to line the numbers up right, or show things as bras and kets, because it adds bloat to the code. The same for the tuning map in this case -- the period and generator should tell you what you need to know about the tuning. I changed the precision of the period and generator while I was at it.

So next time I copy and paste temperament listings, that's what you'll see! Here's that magical 17-limit temperament as an example:

13/41

1202.597 cents period
381.442 cents generator

mapping by period and generator:
[1, 0, 2, -1, -1, 4, -1]
[0, 5, 1, 12, 14, -1, 16]

mapping by steps:
(19, 30, 44, 53, 65, 70, 77)
(22, 35, 51, 62, 76, 81, 90)

complexity measure: 4.545
RMS weighted error: 2.643 cents/octave
max weighted error: 3.966 cents/octave

Graham
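
As a cross-check, the figures in that listing follow from the printed mapping and tuning by the "multiply the period mapping by the period, the generator mapping by the generator, and add" recipe discussed earlier in the thread. A sketch (mine, not regular.py; the last digit of the max error differs slightly because the period and generator are printed to three decimals):

from math import log2, sqrt

period, generator = 1202.597, 381.442            # cents, from the listing
period_map    = [1, 0, 2, -1, -1, 4, -1]
generator_map = [0, 5, 1, 12, 14, -1, 16]
primes = [2, 3, 5, 7, 11, 13, 17]

# tuning map: tempered size of each prime in cents
tuning_map = [p * period + g * generator
              for p, g in zip(period_map, generator_map)]

# Tenney-weighted errors: (tempered - just) / log2(prime)
weighted_errors = [(t - 1200 * log2(p)) / log2(p)
                   for t, p in zip(tuning_map, primes)]

print(round(sqrt(sum(e * e for e in weighted_errors) / len(primes)), 3))
# 2.643, the quoted RMS weighted error
print(round(max(abs(e) for e in weighted_errors), 3))
# 3.968, vs the quoted 3.966 (rounding of the printed tuning)

# worst-Kees complexity: spread of the Tenney-weighted generator mapping
weighted_gens = [g / log2(p) for g, p in zip(generator_map, primes)]
print(round(max(weighted_gens) - min(weighted_gens), 3))
# 4.545, the quoted complexity measure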

🔗Gene Ward Smith <gwsmith@svpal.org>

1/21/2006 11:01:42 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> So next time I copy and paste temperament listings, that's what you'll
> see! Here's that magical 17-limit temperament as an example:

Great!

🔗Carl Lumma <ekin@lumma.org>

1/21/2006 12:04:52 PM

>So next time I copy and paste temperament listings, that's what you'll
>see! Here's that magical 17-limit temperament as an example:
>
>13/41
>
>1202.597 cents period
> 381.442 cents generator
>
>mapping by period and generator:
>[1, 0, 2, -1, -1, 4, -1]
>[0, 5, 1, 12, 14, -1, 16]
>
>mapping by steps:
>(19, 30, 44, 53, 65, 70, 77)
>(22, 35, 51, 62, 76, 81, 90)
>
>complexity measure: 4.545
>RMS weighted error: 2.643 cents/octave
>max weighted error: 3.966 cents/octave

FWIW, I prefer this way too.

-Carl

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/6/2006 10:26:51 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> wallyesterpaulrus wrote:
> > --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...>
> > wrote:
> >
> >>Doing a complete search and knowing it's complete are different
> >>things.
> >
> >> I'm still working on the latter.
> >
> >
> > But you've got the former covered? ;)
>
> According to my preliminary results, yes.
>
> > I get interesting results for the complexity of, say, 12-equal in the
> > 5-limit (without dividing by the number of primes, so everything is 3
> > times too large). It seems to be somewhere between 3 * 12 and 3 * the
> > number of notes per 2:1 in top-12-equal. I posted something to this
> > effect in '04 . . . In any case, I think it's important that
> > complexity be defined the same way no matter what the rank of the
> > temperament is, though comparing complexities across different ranks
> > (and/or different co-dimensions) may not be meaningful . . .
>
> Do you?

Do I what?

> How do you get that?

How do I get what?

> The mean-abs Tenney-weighted wedgie method will give the complexity as
> very close to the number of notes to the octave for an equal
> temperament.

Exactly.

> I don't have a non-wedgie method for ranks over 2.

Good thing you want to do away with wedgies, then :)

> >>The complexity of a rank 2 temperament can be the Kees complexity.
> >>
> >>Tenney weight the generator mapping, and taking the difference between
> >>the maximum and minimum values (including zero for the octave).
> >>
> >>There are different ways of calculating the errors, but they should
> >>be the same for equal and rank 1 temperaments.
> >
> >
> > I thought equal and rank 1 temperaments *were* the same.
>
> Yes, sorry. "Equal and rank 2".
>
> >>It happens that an RMS error leads to a standard deviation as the
> >>complexity,
> >
> > Why does a certain choice for error lead to a certain choice for
> > complexity?
>
> To explain that I'd need to start explaining my method.

I'm looking forward to that.

> I do intend
> that but I'll get through this email blizzard first.
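
The claim quoted above, that the mean-abs Tenney-weighted wedgie comes out very close to the number of notes per octave for an equal temperament, is easy to check in the rank-1 case, where the wedgie reduces to the val itself. A sketch of that reading of the definition (not Graham's code), with 12-equal in the 5-limit assumed as the example:

from math import log2

primes = [2, 3, 5]
val = [12, 19, 28]   # 12-equal in the 5-limit

# mean absolute value of the Tenney-weighted entries
complexity = sum(abs(v) / log2(p) for v, p in zip(val, primes)) / len(primes)
print(round(complexity, 3))   # about 12.016, close to 12 notes per octave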

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/6/2006 10:31:27 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> wallyesterpaulrus wrote:
> > --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...>
> > wrote:
> >
> >
> >>I haven't checked that
> >>they're linearly independent so all I've shown is that you need at
> >>least this many superparticulars.
> >
> >
> > Don't you mean "at most this many"?
>
> No, I meant what I said, but I can see two of you misunderstood so I'll
> explain.
>
> Above each R2 temperament I showed a list of commas which are unison
> vectors of those temperaments. I made sure there were at least 5 commas
> in each list. What I didn't do was to make sure those commas were
> linearly independent. So you may find that wedging those commas
> together defines an equal, not R2, temperament.

You get a "null wedgie", not an equal temperament, if you wedge
together linearly dependent commas. I don't see how you can possibly
get an equal temperament using only commas in the kernel of the R2
temperament.

> If that's the case,
> then you'll need to start with more superparticulars. Hence the list I
> started with may not have been adequate.

I'm confused.

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/6/2006 10:55:22 AM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> Gene Ward Smith wrote:
> > --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...>
> > wrote:
> >
> >
> >>The complexity of a rank 2 temperament can be the Kees complexity.
> >>Tenney weight the generator mapping, and taking the difference between
> >>the maximum and minimum values (including zero for the octave).
> >
> >
> > What are you using as the complexity of a rank 3 temperament?
>
> The sum of the absolute values of the Tenney weighted wedgie, divided by
> the number of primes.
>
>
> Graham

Why not use the latter for all ranks?

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/6/2006 11:47:43 AM

--- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@...>
wrote:
>
> --- In tuning-math@yahoogroups.com, "wallyesterpaulrus"
<perlich@a...>
> wrote:
> >
> > --- In tuning-math@yahoogroups.com, "Gene Ward Smith"
<gwsmith@s...>
> > wrote:
> >
> > > Do you actually do calculations in this way, and if so, how?
> >
> > You multiply the period part of the mapping by the size of the
> > period, and then the generator part of the mapping by the size of the
> > generator, and then add these products. It's so much easier when the
> > numbers going into the calculation are right next to each other,
> > instead of far apart.
>
> I thought that might be it. You only need do this once, and then you
> have a tuning map, which is what you *really* want.
>
> Graham, I believe, thinks of what he's printing out as a matrix, in
> which case printing it out either way makes sense and it is a question
> of which way is more useful. As a matrix, you can do the operation you
> want once, and then you're done. The more interesting stuff is matrix
> multiplication on the other side, since this connects with the
> structural features of the temperament, not with the precise values of
> the tuning map. And that part is a list of vals. It's what tells you
> things about the nature of the temperament.

That's great, but Graham was explicitly saying that he's telling you
*one* thing about the temperament, and we can't assume that all the
readers are going to be aware of all these other abstract related
things.

> For heaven's sake, you have Matlab.

I repeat -- human calculations, not computer calculations.

> > But what do they *convey* to you, to use your own language?
>
> They immediately (especially the period and generator map) tell me
> things about the nature of the temperament.

Such as?

> > > The mapping to primes makes a little more sense in Graham's setup,
> > > since they correspond to a generator pair, but the structural
> > > features of the tuning are completely obscured,
> >
> > To a human? How?
>
> You don't have a clue what the generator is,

Graham gives it higher up.

>or how it maps to
>primes.

You mean what ratio it represents?

> > > and the val pair is still much
> > > better.
> >
> > Define "better".
>
> More informative and easier to use.

You're not convincing me.

> > > You
> > > don't need them to line up. They are *mappings* in and of
> > > themselves.
> >
> > I should have said that the period mapping is not too meaningful in
> > isolation. I have no idea what useful information it conveys about
> > the tuning, by itself.
>
> In isolation, the most important thing it conveys is the period.

But Graham already gave the period higher up.

> > Again, I seem to have a completely opposite experience. Once you've
> > gone through the pairs once and calculated each prime, you have a
> > list of tempered primes (you no longer have to deal with *two* sets
> > of numbers), and can then easily calculate any tempered interval if
> > you can do prime factorization.
>
> That's the tuning map. Yes, multiplying the matrix by the generators
> on one side gives you the tuning map. You do that once, and once only,
> and then you are done. The result tells you little about the structural
> features of the tuning unless you compute where some commas map to, but
> it is a very nice thing to have. In fact, I think it would be nice if
> Graham simply gave it.
>
> However, there's a lot more to this business than the tuning map!

Graham gave some results which are explicitly stated to have a
certain meaning, and are formatted appropriately to correspond with
just that meaning. You came down pretty harshly on him because you
don't care about that explicitly stated meaning, you care about all
possible meanings, and you expect readers to have Matlab code and
computer hardware built into their brains. Thus I clearly needed to
defend Graham here. Little surprise, then, that you ended up painting
me as a dunce in response.

🔗Gene Ward Smith <genewardsmith@coolgoose.com>

2/6/2006 12:14:40 PM

--- In tuning-math@yahoogroups.com, "wallyesterpaulrus" <perlich@...>
wrote:

> You get a "null wedgie", not an equal temperament, if you wedge
> together linearly dependent commas. I don't see how you can possibly
> get an equal temperament using only commas in the kernel of the R2
> temperament.

Using only commas in the kernel of an R2 temperament, it's impossible.
They must be in the kernel of an R1 temperament.

🔗Graham Breed <gbreed@gmail.com>

2/6/2006 6:59:47 PM

Gene Ward Smith wrote:
> --- In tuning-math@yahoogroups.com, "wallyesterpaulrus" <perlich@...>
> wrote:
>
>>You get a "null wedgie", not an equal temperament, if you wedge
>>together linearly dependent commas. I don't see how you can possibly
>>get an equal temperament using only commas in the kernel of the R2
>>temperament.
>
> Using only commas in the kernel of an R2 temperament, it's impossible.
> They must be in the kernel of an R1 temperament.

I was wrong, because I was holding my ranks the wrong way up. It'll be a redundant definition of a higher rank temperament.

Graham
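
The point this exchange settles, that wedging linearly dependent commas gives a redundant definition of a higher-rank temperament rather than an equal temperament, can also be checked by stacking the comma monzos and computing the rank. A minimal sketch (not from anyone's library; the 7-limit commas below are assumed examples):

from fractions import Fraction

def monzo(n, d, primes=(2, 3, 5, 7)):
    """Exponent vector of the ratio n/d over the given primes."""
    exps = []
    for p in primes:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        while d % p == 0:
            d //= p
            e -= 1
        exps.append(e)
    assert n == 1 and d == 1, "ratio involves a prime outside the list"
    return exps

def rank(rows):
    """Rank of an integer matrix, by exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col]), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col]:
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# In the 7-limit (4 primes), n independent commas leave a rank (4 - n) temperament.
independent = [monzo(81, 80), monzo(126, 125)]            # rank 2: septimal meantone
redundant = independent + [monzo(81 * 126, 80 * 125)]     # third comma is the product

print(rank(independent))   # 2
print(rank(redundant))     # still 2 -- a redundant definition, not an equal temperament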

🔗Graham Breed <gbreed@gmail.com>

2/6/2006 6:59:58 PM

wallyesterpaulrus wrote:

>>>What are you using as the complexity of a rank 3 temperament?
>>
>>The sum of the absolute values of the Tenney weighted wedgie, divided by
>>the number of primes.
>
> Why not use the latter for all ranks?

I do use it for all ranks in the code that works with all ranks. I don't use it in the code that doesn't go above rank 2 because it saves defining wedgies. Also, the worst-Kees complexity makes more sense to me and works with the proven method I've since worked out.

Graham

🔗wallyesterpaulrus <perlich@aya.yale.edu>

2/10/2006 12:57:34 PM

--- In tuning-math@yahoogroups.com, Herman Miller <hmiller@...> wrote:
>
> wallyesterpaulrus wrote:
> > --- In tuning-math@yahoogroups.com, "Gene Ward Smith" <gwsmith@s...>
> > wrote:
> >
> >
> >>Do you actually do calculations in this way, and if so, how?
> >
> >
> > You multiply the period part of the mapping by the size of the
> > period, and then the generator part of the mapping by the size of the
> > generator, and then add these products. It's so much easier when the
> > numbers going into the calculation are right next to each other,
> > instead of far apart.
>
> Another benefit of this organization is that it can be useful for
> classification. You can group all the (1, 0) temperaments together (what
> used to be called "linear" temperaments),

You mean what are *now* called "linear" temperaments, not what *used*
to be called "linear" temperaments, right?