5-Dimensional Subgroup 13-Limit Minimum Badness Measures for ETs 5-36

🔗cityoftheasleep <igliashon@...>

1/4/2012 6:41:13 PM

/tuning/files/IgliashonJones/5D%20Min%20ET%20Badness%20.pdf

I'm not sure if I should call these 13-limit or 15-limit; they're 13-prime but 15-odd.

I don't know if I calculated badness correctly. I just did sqrt(complexity)*error. If someone can provide me a better formula, I'll run it again.

The "best subgroups" for each ET, the error of which was used to calculate each ET's badness, are as follows:

5-TET: 2.7.11.13.15
6-TET: 2.5.7.9.11
7-TET: 2.9.11.13.15
8-TET: 2.5/3.7/3.11/3.13/3
9-TET: 2.5.7/3.11/3.13/3
10-TET: 2.7.11.13.15
11-TET: 2.7.9.11.15
12-TET: 11-Limit
13-TET: 2.5.9.11.13
14-TET: 2.9/7.11/7.13/7.15/7
15-TET: 2.3.5.11.13
16-TET: 2.5.7.11.13
17-TET: 2.3.7.11.13
18-TET: 2.5.7/3.11/3.13/3
19-TET: 2.3.5.7.13
20-TET: 2.7.11.13.15
21-TET: 2.5.7.11.13
22-TET: 2.7.9.11.15
23-TET: 2.5/3.7/3.11/3.13/3
24-TET: 2.9.11.13.15
25-TET: 2.5.7.9.11
26-TET: 2.7.9.11.13
27-TET: 2.5.7.9.13
28-TET: 2.5.9.11.13
29-TET: 2.3.7/5.11/5.13/5
30-TET: 2.7.9.13.15
31-TET: 2.7.9.11.15
32-TET: 2.5.7/3.11/3.13/3
33-TET: 2.9.11.13.15
34-TET: 2.9.11.13.15
35-TET: 2.5.7.9.11
36-TET: 2.3.7.11.13

All of these subgroups allow for 15-odd-limit pentads of some variety, which should allow for plenty of versatility. It's interesting to note that 9, 11, and 13 all beat 12, and that 20 and 29 are the global best.

Also worth noting is that the most accurate 5D subgroup for 22-TET is the same as 11-TET.

-Igs

🔗Jake Freivald <jdfreivald@...>

1/4/2012 7:04:35 PM

Igs,

I just glanced at these, but I noted that 24 EDO is 2.9.11.13.15 by
your calcs. Why 9 instead of 3? The 3 is off by 2 whopping cents.

Thanks,
Jake

🔗Mike Battaglia <battaglia01@...>

1/4/2012 7:07:50 PM

Likewise with 7-EDO.

-Mike

On Wed, Jan 4, 2012 at 10:04 PM, Jake Freivald <jdfreivald@...> wrote:

> Igs,
>
> I just glanced at these, but I noted that 24 EDO is 2.9.11.13.15 by
> your calcs. Why 9 instead of 3? The 3 is off by 2 whopping cents.
>
> Thanks,
> Jake
>

🔗cityoftheasleep <igliashon@...>

1/4/2012 7:33:49 PM

Hi Jake,
I just realized I forgot to do 2.3.11.13.15. The error for 24 would be the same on that subgroup.

-Igs

--- In tuning@yahoogroups.com, Jake Freivald <jdfreivald@...> wrote:
>
> Igs,
>
> I just glanced at these, but I noted that 24 EDO is 2.9.11.13.15 by
> your calcs. Why 9 instead of 3? The 3 is off by 2 whopping cents.
>
> Thanks,
> Jake
>

🔗Carl Lumma <carl@...>

1/4/2012 11:57:47 PM

Igs wrote:

> /tuning/files/IgliashonJones
> /5D%20Min%20ET%20Badness%20.pdf

Nice graph. I'd like to see a 4-D version too.

> I don't know if I calculated badness correctly. I just did
> sqrt(complexity)*error. If someone can provide me a better
> formula, I'll run it again.

I recommend n^(5/4) * error

(n^(4/3) * error if you do a 4-D version)
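
For concreteness, here's a minimal sketch of both figures side
by side; the error values in the example are made up, just to
show how the heavier exponent can reorder a ranking:

def badness_sqrt(n, error_cents):
    return (n ** 0.5) * error_cents          # sqrt(complexity) * error

def badness_logflat_5d(n, error_cents):
    return (n ** 1.25) * error_cents         # n^(5/4) * error

def badness_logflat_4d(n, error_cents):
    return (n ** (4.0 / 3.0)) * error_cents  # n^(4/3) * error

# two ETs with made-up errors: sqrt ranks the second one better,
# the logflat exponent ranks the first one better
for n, err in [(20, 4.0), (29, 3.0)]:
    print(n, round(badness_sqrt(n, err), 1),
          round(badness_logflat_5d(n, err), 1))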

> I'm not sure if I should call these 13-limit or 15-limit;
> they're 13-prime but 15-odd.

I'd use 15-limit. I'd also try to think of something better
that doesn't use the term "limit" at all, but I did that
and I couldn't think of anything.

> The "best subgroups" for each ET, the error of which was
> used to calculate each ET's badness, are as follows:

One big problem with using weighted error for such a
search is that it favors complex subgroups showing up.
That's because larger errors are forgiven more on the
high primes. See
http://lumma.org/music/theory/subgroup/
where I ran afoul of this (you'll have to actually read
it this time to get to that part).

Possible Solutions:

1. Don't use weighted error. Not a great solution.

2. Report the best ET for each subgroup rather than the
other way around. This is what I went with, but it
would be nice to keep doing what you're doing and
actually solve the problem.

3. Use weighted complexity. A good but perhaps only
partial solution.

4. Include a penalty factor based on the concordance of
the subgroup (seen as a chord or something). For rank 1
this will be close to using weighted complexity (it'll
differ for subgroups that contain fractions like 7/3).
This is probably the best solution. I'd have to think
about the best penalty to use. Either that, or do what
all reasonable people do these days: ask Keenan Pepper.
The first thing that comes to mind is to multiply the
subgroup elements together and take the Tenney height
of the resulting ratio.
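
A minimal sketch of that last idea, treating the Tenney height
of p/q as log2(p*q) and working with exact fractions so entries
like 7/3 behave; this is just one possible reading of the
suggestion, not a settled formula:

from fractions import Fraction
from math import log2

def subgroup_penalty(basis):
    # basis: strings like '2', '7/3', '15'; multiply them exactly
    # and take log2(numerator * denominator) of the product
    product = Fraction(1)
    for element in basis:
        product *= Fraction(element)
    return log2(product.numerator * product.denominator)

print(round(subgroup_penalty(['2', '3', '5', '7', '11']), 2))           # ~11.17
print(round(subgroup_penalty(['2', '5/3', '7/3', '11/3', '13/3']), 2))  # ~19.63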

-Carl

🔗Carl Lumma <carl@...>

1/5/2012 12:01:42 AM

I wrote:

> 4. Include a penalty factor based on the concordance of
> the subgroup (seen as a chord or something). For rank 1
> this will be close to using weighted complexity (it'll
> differ for subgroups that contain fractions like 7/3).
> This is probably the best solution. I'd have to think
> about the best penalty to use. Either that, or do what
> all reasonable people do these days: ask Keenan Pepper.
> The first thing that comes to mind is to multiply the
> subgroup elements together and take the Tenney height
> of the resulting ratio.

If possible you should avoid simply undoing the error
weighting.... -C.

🔗cityoftheasleep <igliashon@...>

1/5/2012 10:37:56 AM

--- In tuning@yahoogroups.com, "Carl Lumma" <carl@...> wrote:
>
> Nice graph. I'd like to see a 4-D version too.

Thanks. But I don't have the time, and I'm considering 5-D to be the minimum size worth looking at to ensure a plenitude of harmonic versatility.

> > I don't know if I calculated badness correctly. I just did
> > sqrt(complexity)*error. If someone can provide me a better
> > formula, I'll run it again.
>
> I recommend n^(5/4) * error

So you'd penalize complexity more? I went with square root because I figured that the negative impact of further increasing complexity from a given n-ET decreases as the complexity of n increases. Or in other words, as n increases, the difference of rank-1 generator size between n and n+1 decreases.

> I'd use 15-limit. I'd also try to think something better
> that doesn't use the term "limit" at all, but I did that
> and I couldn't think of anything.

Okay, 15-limit.

> One big problem with using weighted error for such a
> search is that it favors complex subgroups showing up.
> That's because larger errors are forgiven more on the
> high primes. See
> http://lumma.org/music/theory/subgroup/
> where I ran afoul of this (you'll have to actually read
> it this time to get to that part).

I've read it, but many things didn't make much sense to me before and still don't now. Are you saying that 5 cents of adjusted error on the 2.7.11.13.15 subgroup is somehow more error than 5 cents of adjusted error on the 2.3.5.7.11 subgroup? From what Paul told me about optimization, I got the impression that higher complexity doesn't mean higher odd limit but rather greater lattice complexity, meaning that in the 5-limit, 2, 3, and 5 are all weighted equally, but intervals like 27/25 or 125/81 are forgiven more.

There was a big discussion between Paul and Michael S. over some temperament where 10/7 was equated with 13/9 but came out as almost exactly 10/7, and Michael thought that meant that ratios of 13 are weighted less in the optimization than ratios of 7. Paul said no, it's because of the push-and-pull effect of other commas in the temperament they were looking at, and demonstrated another temperament where the same two ratios were equated and the resultant interval was right between them in size.

Also, I take issue with a couple of aspects of your approach: 1) you allow for the inclusion of the 17th harmonic in all sizes of subgroup. I tried to be more conservative, putting the cutoff at 15 and looking only at pentads, because we can be reasonably certain that *all* pentads of the 15-odd-limit will be concordant in at least one voicing. A lot of the best subgroups you came up with for the various ETs strike me as not being very musically relevant. 2) In some of the pages you looked at nonoctave ETs rather than including non-integer basis intervals. I included subgroups like 2.5/3.7/3.11/3.13/3 to include chords like 3:5:7:11:13 but ensure that they only appeared in octave-repeating ETs.

> 2. Report the best ET for each subgroup rather than the
> other way around. This is what I went with, but it
> would be nice to stay doing what you're doing and
> actually solve the problem.

Well, I did this too; see my first post on the subject from a couple days ago. There are only a few ETs on this list--20, 25, 30, 31, 34, and 36. I also did a list of best ETs of 24 or fewer notes, which also had only a few ETs--9, 17, 19, 20, 21, 22, 23, and 24. Only 20 is on both lists.

> 4. Include a penalty factor based on the concordance of
> the subgroup (seen as a chord or something).

I thought about this, too. I'd be willing to try it out if you and/or Keenan can come up with a good way to implement it.

-Igs

🔗Carl Lumma <carl@...>

1/5/2012 12:05:51 PM

"cityoftheasleep" <igliashon@...> wrote:

> > Nice graph. I'd like to see a 4-D version too.
>
> Thanks. But I don't have the time, and I'm considering 5-D
> to be the minimum size worth looking at to ensure a plenitude
> of harmonic versatility.

That's a shame for a couple reasons. First because it
must mean you haven't automated your process -- you ought
to be able to spit it out by just changing one number.
Second, people spent 300 years on triads, I think tetrads
are worthy of consideration. Of course you can subset
from 5-D, but the search might miss some good systems by
forcing the extra basis element on during the search.

> > I recommend n^(5/4) * error
>
> So you'd penalize complexity more?

Yes, which you should like. This is logflat badness --
see my recent posts for why it's important.

> I went with square root because I figured that the
> negative impact of further increasing complexity from
> a given n-ET decreases as the complexity of n increases.
> Or in other words, as n increases, the difference
> of rank-1 generator size between n and n+1 decreases.

Your search returns a result for every ET, only
considers subgroups starting with 2, and uses notes/oct
as the complexity. All fine choices, but we need to be
aware that this means complexity plays no role in
which subgroup is selected for each ET.
However: badness figures, if you report them, can still
be used to compare the final ET results to one another,
and then the complexity term will be important again.

Your point about increasing complexity mattering less
is perfectly valid. Just be aware that it penalizes
simple systems more than you would probably like.

> > One big problem with using weighted error for such a
> > search is that it favors complex subgroups showing up.
> > That's because larger errors are forgiven more on the
> > high primes. See
> > http://lumma.org/music/theory/subgroup/
> > where I ran afoul of this (you'll have to actually read
> > it this time to get to that part).
>
> I've read it, but many things didn't make much sense to
> me before and still don't now.

Sorry. I'd appreciate any suggestions for improvement
(offlist perhaps).

> Are you saying that 5 cents of adjusted error on the
> 2.7.11.13.15 subgroup is somehow more error than 5 cents
> of adjusted error on the 2.3.5.7.11 subgroup?

All else being equal there's an equal chance of equal
error on any prime among the ETs. But if you use Tenney
weighting, it makes a given error on large primes seem
smaller. So the average ET will appear to do better at
subgroups with larger primes in them.

I don't know what adjusted error is.

> From what Paul told me about optimization, I got the
> impression that higher-complexity doesn't mean odd limit
> but rather lattice complexity, meaning that on the 5-limit,
> 2, 3, and 5 are all weighted equally, but intervals like
> 27/25 or 125/81 are forgiven more.

Sorry, I don't know what weighting you're actually using.
If you're using TOP damage or TE error or any other
function of Tenney-weighted prime errors, you'll have the
problem I describe above, as they do NOT weight the
primes equally. You seem to be describing a case when
Paul showed that TOP damage reflects the weighted error
of all JI intervals. That's true but not relevant here.
OTOH I could be misunderstanding from your description
what Paul was saying.

> Also, I take issue with a couple aspects of your approach:
> 1) you allow for the inclusion of the 17th harmonic in all
> sizes of subgroup. I tried to be more conservative, putting
> the cutoff at 15 and looking only at pentads, because we can
> be reasonably certain that *all* pentads of the 15-odd-limit
> will be concordant in at least one voicing.

I do check all 5-combinations, which means I include
everything you do except for subgroups with fractions in
them. I agree that subgroups like 2.15.17 etc are a bit
ridiculous -- hence my "penalty" suggestion.

> 2) in some of the pages you looked at nonoctave ETs,
> rather than including non-integer basis intervals.
> I included subgroups like 2.5/3.7/3.11/3.13/3, to include
> chords like 3:5:7:11:13 but ensure that they only appeared
> in octave-repeating ETs.

All pages include non-octave subgroups. The vals I give
always correspond to the subgroup, but to keep a common
basis for the complexity, I always report the nearest
steps/oct (calling it the "ET") even if 2 isn't in the
subgroup, the error calc, or the tuning optimization. You can
always think of these systems as ED2s if you want.
For example

> "ET" primes val
> 41 (5 13 17) <96 153 169]
> 41 (5 11 13 17) <96 143 153 169]
> 41 (5 7 11 13 17) <96 116 143 153 169]
> This looks 3 * 88-CET.

All of these are 96 ED5. All would have 41/oct if there
were an octave. Therefore they all have a step size near
29 cents, and therefore every 3rd note is like a note
of 88-CET.
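
A quick check of that arithmetic, as a sketch:

from math import log2

step = 1200 * log2(5) / 96       # one step of 96-ED5, in cents
print(round(step, 2))            # ~29.02
print(round(1200 / step, 2))     # ~41.35 steps to span an octave
print(round(3 * step, 2))        # ~87.07 cents for every 3rd note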

I always use the error that results from the optimal
octave stretch. That's more theoretically interesting
but I like your approach of forcing pure octaves since
that's what most instruments can handle more easily.

> > 4. Include a penalty factor based on the concordance of
> > the subgroup (seen as a chord or something).
>
> I thought about this, too. I'd be willing to try it out
> if you and/or Keenan can come up with a good way to
> implement it.

OK! If we do that I think you'll have a bona fide
contribution to music theory on your hands.

-Carl

🔗cityoftheasleep <igliashon@...>

1/5/2012 12:29:09 PM

--- In tuning@yahoogroups.com, "Carl Lumma" <carl@...> wrote:
> > Thanks. But I don't have the time, and I'm considering 5-D
> > to be the minimum size worth looking at to ensure a plenitude
> > of harmonic versatility.
>
> That's a shame for a couple reasons. First because it
> must mean you haven't automated your process -- you ought
> to be able to spit it out by just changing one number.

I'm not a programmer. All these numbers I got by going to Graham's temperament finder, putting in the subgroup, then putting in each ET manually, calculating, copying the read-out for "adjusted error" (which is different from TE error, but I'm not sure how), and pasting into a spreadsheet. It took me about three days of leisurely work to get it done. If I had known how to automate it, that would have made my life a lot easier.

> Second, people spent 300 years on triads, I think tetrads
> are worthy of consideration. Of course you can subset
> from 5-D, but the search might miss some good systems by
> forcing the extra basis element on during the search.

There's more to this than just chordal harmony. One of the things that makes the meantone diatonic work so well is that non-chord tones played melodically are still consonant. Insisting on at least a 5D system means that melodies over 4D or 3D chords should also be possible. It also means that all the subgroups should be reasonably competitive with the 11-limit, in terms of concordance and versatility.

> > > I recommend n^(5/4) * error
> >
> > So you'd penalize complexity more?
>
> Yes, which you should like. This is logflat badness --
> see my recent posts for why it's important.

I've been following most of your posts but haven't seen this. In any case, I'll try that weighting and see what it looks like.

> Your search returns a result for every ET and only
> considers subgroups starting with 2 and uses notes/oct
> as a complexity. All fine choices, but we need to be
> aware that it means that complexity plays no role in
> which subgroup is selected for each ET.

Huh? Why is that important?

> However: badness figures, if you report them, can still
> be used to compare final ETs results to one another,
> and then the complexity term will be important again.

...right...

> Your point about increasing complexity mattering less
> is perfectly valid. Just be aware that it penalizes
> simple systems more than you would probably like.

Well, I don't know that it does. I often feel that in most badness rankings I've seen, simple systems like 1-ET, 3-ET, 5-ET, etc. come up much higher than I think they should. I think my square-root-based ranking looks pretty good at the moment.

> > I've read it, but many things didn't make much sense to
> > me before and still don't now.
>
> Sorry. I'd appreciate any suggestions for improvement
> (offlist perhaps).

Will do when I get the chance.

> All else being equal there's an equal chance of equal
> error on any prime among the ETs. But if you use Tenney
> weighting, it makes a given error on large primes seem
> smaller.

How so?

> So the average ET will appear to do better at
> subgroups with larger primes in them.

Isn't this just because larger primes are often more accurately represented in the average ET <100? When I was off in high-harmonic la-la land, I remember finding some near-dead-on matches in the low ETs for harmonics above 17, much closer most of the time than harmonics <15.

> I don't know what adjusted error is.

Graham never gave me a straight answer about it, or how it differs from the TE error his app spits out. However, when I calculated the average errors of some ETs on a certain n-limit tonality diamond, the results I got were pretty close to the results I got from Graham's app if I checked those ETs on the same n-limit. So I just figured it was a similar but more elegant error measure.

> Sorry, I don't know what weighting you're actually using.

Neither do I! It's all from Graham's app. In any case, I thought weighting was only important in optimization, with error being a fixed quantity?

> You seem to be describing a case when
> Paul showed that TOP damage reflects the weighted error
> of all JI intervals. That's true but not relevant here.

Why not relevant?

> OTOH I could be misunderstanding from your description
> what Paul was saying.

This would be a lot easier if I knew how Graham's app was working.

> I do check all 5-combinations, which means I include
> everything you do except for subgroups with fractions in
> them. I agree that subgroups like 2.15.17 etc are a bit
> ridiculous -- hence my "penalty" suggestion.

I'd rather just ignore them, but your penalty suggestion might still be worth implementing. Paul and Keenan were both adamant about including fractional basis elements. ETs like 25 and 23 suddenly look a lot better when you do include them.

> All pages include non-octave subgroups. The vals I give
> always correspond to the subgroup, but to keep a common
> basis for the complexity, I always report the nearest
> steps/oct (calling it the "ET") even if 2 isn't in the
> subgroup, error calc, and tuning optimization. You can
> always think of these systems as ED2s if you want.

I think the fractional basis element approach is better, because it ensures that there's always a reasonably-accurate tuning of 2/1. Without that, your 8-ET (for instance) is just BP.

> I always use the error that results from the optimal
> octave stretch. That's more theoretically interesting
> but I like your approach of forcing pure octaves since
> that's what most instruments can handle easier.

To my knowledge, I'm not forcing pure octaves. But again, Graham's app is doing all my calculations and I don't know exactly how it works.

> OK! If we do that I think you'll have a bona fide
> contribution to music theory on your hands.

Great!

-Igs

🔗gbreed@...

1/5/2012 12:51:56 PM

The adjusted error is the TE error multiplied by the log of the largest prime. It matches the target error.
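
As a minimal sketch, assuming base-2 logs as in the TE weighting:

from math import log2

def adjusted_error(te_error_cents, largest_prime):
    return te_error_cents * log2(largest_prime)

# e.g. a hypothetical 13-limit TE error of 1.0 cents:
print(round(adjusted_error(1.0, 13), 2))   # ~3.70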

Graham

🔗Carl Lumma <carl@...>

1/5/2012 9:10:36 PM

Igs wrote:

> I'm not a programmer. All these numbers I got by going
> to Graham's temperament finder, putting in the subgroup, then
> putting in each ET manually, calculating, copying the read-out
> for "adjusted error" (which is different than TE error, but
> I'm not sure how), and pasting into a spreadsheet. It took me
> about three days of leisurely work to get it done.

Holy *. I'll code it up for you.

> There's more to this than just chordal harmony. One of
> the things that makes the meantone diatonic work so well
> is that non-chord tones played melodically are still
> consonant. Insisting on at least a 5D system means that
> melodies over 4D or 3D chords should also be possible.
> It also means that all the subgroups should be reasonably
> competitive with the 11-limit, in terms of concordance
> and versatility.

That's due to the fact that the 5-limit is a particularly
consonant subgroup, and the fact that meantone has low
complexity (so you get more 5-limit intervals per note,
hence you are likely to land on one). In my book, if you
get the subgroup penalty and complexity exponent right,
you'll replicate this effect even for smaller subgroups.
But anyway, you'll be able to type in any number you want.

> I've been following most of your posts but haven't seen this.
> In any case, I'll try that weighting and see what it looks
> like.

I see you did, but now we know the adjusted error will
throw your results off so we'll need to redo them. I'll
just make the complexity weighting another command-line
choice. Here's the input form I'm thinking of

Input: subgroup-srch D minET maxET err penalty comp

D = [number] dimensionality of subgroups
minET = [number] smallest ET considered
maxET = [number] largest ET considered
err = "TE", "TOP", or "RMS" (unweighted)
penalty = ?? subgroup penalty formula
comp = "logflat" or "root" (square root)

Each line of output: ET, best subgroup, error, badness
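
A rough sketch of the shape this could take (all names here are
hypothetical, and the error, penalty, and subgroup-generation
pieces are exactly the parts still being worked out):

def subgroup_search(min_et, max_et, error_fn, penalty_fn, comp_exponent,
                    candidate_subgroups):
    # The dimensionality D is implicit in the candidate subgroup list.
    results = []
    for n in range(min_et, max_et + 1):
        # The best subgroup for an ET depends only on its (penalized)
        # error; complexity only matters when comparing ETs afterward.
        best = min(candidate_subgroups,
                   key=lambda sg: error_fn(n, sg) * penalty_fn(sg))
        error = error_fn(n, best)
        badness = (n ** comp_exponent) * error * penalty_fn(best)
        results.append((n, best, error, badness))
    return results

# e.g.: subgroup_search(5, 36, my_error_fn, my_penalty_fn, 1.25, my_pentads)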

> > Your search returns a result for every ET and only
> > considers subgroups starting with 2 and uses notes/oct
> > as a complexity. All fine choices, but we need to be
> > aware that it means that complexity plays no role in
> > which subgroup is selected for each ET.
>
> Huh? Why is that important?

It's just to note that you can find the best subgroup for
a given ET by only looking at the error. You only need to
consider complexity if you want to compare the performance
of the ETs afterward.

> > All else being equal there's an equal chance of equal
> > error on any prime among the ETs. But if you use Tenney
> > weighting, it makes a given error on large primes seem
> > smaller.
>
> How so?

For a subgroup s1.s2.s3... with the error (in cents) of
s1 = e1, s2 = e2... the TE error is

RMS(e1/log(s1), e2/log(s2), ...)

We use base-2 logs, so log(2) = 1. Which is to say the
octave's error isn't weighted at all. The error on 3 is
weighted by something like 1.6. That means the error is
divided by 1.6 relative to the octave's error. So if both
2 and 3 were mistuned by the same amount it would appear
that 2 was worse. That's because the concordance of
octaves is generally thought to deteriorate faster with
mistuning than tritaves. 1.6 times faster in fact. This
works fine unless you're picking the subgroup based on
the error. Then complex ones will tend to look better.
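
A minimal sketch of just that weighting step; it assumes the
per-element errors are already known, and the exact normalization
may differ from what Graham's app does:

from math import log2, sqrt

def te_style_error(basis, errors_cents):
    # basis: e.g. [2, 3, 7, 11, 13]; errors_cents: matching errors
    weighted = [e / log2(s) for s, e in zip(basis, errors_cents)]
    return sqrt(sum(w * w for w in weighted) / len(weighted))

# equal 2-cent errors on 2 and on 3: the error on 3 counts for less
print(round(te_style_error([2, 3], [2.0, 2.0]), 2))   # ~1.67, not 2.0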

> > So the average ET will appear to do better at
> > subgroups with larger primes in them.
>
> Isn't this just because larger primes are often more
> accurately represented in the average ET < 100?

They're not! Any prime other than 2 is likely to have
the same unweighted error, on average.

> > You seem to be describing a case when
> > Paul showed that TOP damage reflects the weighted error
> > of all JI intervals. That's true but not relevant here.
>
> Why not relevant?

Why aren't zebras on rollerskates relevant? Kind of
hard to answer.

> > I do check all 5-combinations, which means I include
> > everything you do except for subgroups with fractions in
> > them. I agree that subgroups like 2.15.17 etc are a bit
> > ridiculous -- hence my "penalty" suggestion.
>
> I'd rather just ignore them,

That's fine, as long as you have a definite rule about
ignoring them, so people believe your search was fair.
I tried to come up with a rule, for instance that the
subgroup must have at least 3 consecutive primes or
something like that, but that won't work on subgroups with
fractions in them. Since I couldn't think of a good rule
I proposed a penalty factor. If you can suggest an
"ignore this subgroup" rule I'm all ears.

> Paul and Keenan were both adamant about including fractional
> basis elements. ETs like 25 and 23 suddenly look a lot
> better when you do include them.

Absolutely right; the only reason I didn't include them
before is because I was ignorant. By the way, I've been
channeling Graham this whole time. It's great!

> I think the fractional basis element approach is better,
> because it ensures that there's always a reasonably-
> accurate tuning of 2/1. Without that, your 8-ET (for
> instance) is just BP.

Well, including those subgroups does more than address
this particular issue.

> > I always use the error that results from the optimal
> > octave stretch. That's more theoretically interesting
> > but I like your approach of forcing pure octaves since
> > that's what most instruments can handle easier.
>
> To my knowledge, I'm not forcing pure octaves.

If you use TE error (or adjusted error) you are in fact
getting the error of the ET when octaves are optimally
stretched. Is that what you want? Because some systems
have significantly higher error if you don't allow that
(it's not possible with ordinary guitar tunings and
straight frets, for instance).

> > OK! If we do that I think you'll have a bona fide
> > contribution to music theory on your hands.
>
> Great!

Yeah, this is it. I can feel it!

Hang on, there's no way you could have entered every
possible 15-limit subgroup into Graham's website for
every ET, is there? Are you already using some rule to
weed them down?

-Carl

🔗cityoftheasleep <igliashon@...>

1/5/2012 10:14:47 PM

--- In tuning@yahoogroups.com, "Carl Lumma" <carl@...> wrote:

> Holy *. I'll code it up for you.

LOL, that would be awesome!

> That's due to the fact that the 5-limit is a particularly
> consonant subgroup, and the fact that meantone has low
> complexity (so you get more 5-limit intervals per note,
> hence you are likely to land on one).

This is true, because the 5-limit is the only 3D group of the 15-odd-limit that automatically gets you a full 15-odd-limit pentad--2:3:5:9:15, to be exact. Which is to say, you can link 5-limit triads to make larger consonant chords in a way that is not possible with, say, 7-limit triads.

> In my book, if you
> get the subgroup penalty and complexity exponent right,
> you'll replicate this effect even for smaller subgroups.
> But anyway, you'll be able to type in any number you want.

I'm curious about this. Can you give an example of how this effect could be replicated for smaller subgroups?

> Input: subgroup-srch D minET maxET err penalty comp
>
> D = [number] dimensionality of subgroups
> minET = [number] smallest ET considered
> maxET = [number] largest ET considered
> err = "TE", "TOP", or "RMS" (unweighted)
> penalty = ?? subgroup penalty formula
> comp = "logflat" or "root" (square root)
>
> Each line of output: ET, best subgroup, error, badness

Great. This'll make the minimal badness comparisons easier. Someday though it might be nice to see, I dunno, maybe the top 5 subgroups per ET, but I'm not being picky. If it's a pain to throw that in, forget it.

> > Huh? Why is that important?
>
> It's just to note that you can find the best subgroup for
> a given ET by only looking at the error. You only need to
> consider complexity if you want to compare the performance
> of the ETs afterward.

Oh, right. Duh.

> For a subgroup s1.s2.s3... with the error (in cents) of
> s1 = e1, s2 = e2... the TE error is
>
> RMS(e1/log(s1), e2/log(s2), ...)
>
> We use base-2 logs, so log(2) = 1. Which is to say the
> octave's error isn't weighted at all. The error on 3 is
> weighted by something like 1.6. That means the error is
> divided by 1.6 relative to the octave's error. So if both
> 2 and 3 were mistuned by the same amount it would appear
> that 2 was worse. That's because the concordance of
> octaves is generally thought to deteriorate faster with
> mistuning than tritaves. 1.6 times faster in fact. This
> works fine unless you're picking the subgroup based on
> the error. Then complex ones will tend to look better.

That answers a lot of questions I asked in a post I made right before I saw this one. Thanks!

> They're not! Any prime other than 2 is likely to have
> the same unweighted error, on average.

I'm not so sure about that. Do you have a proof? I'd think, for example, prime 19 is more likely to be represented accurately in any random ET, because it's so well represented in 4-ET (and therefore all the multiples of 4-ET). Surely some primes are more likely than others to have low unweighted error?

> > I'd rather just ignore them,
>
> That's fine, as long as you have a definite rule about
> ignoring them, so people believe your search was fair.

My rule is that the elements of the subgroup must be able to combine to form at least one otonal 15-limit pentad. That narrows the field considerably, especially when non-integer basis elements are used. I'm sure there are arguments for including, say, 17-limit or 19-limit intervals, but I'm not as convinced of the definitive concordance of all 19-limit otonal pentads as I am of all 15-limit ones. What do you think of that?

> Absolutely right; the only reason I didn't include them
> before is because I was ignorant. By the way, I've been
> channeling Graham this whole time. It's great!

Channeling him? Has he passed on the great beyond? I never knew you were a medium!

> Well, including those subgroups does more than address
> this particular issue.

Even better!

> If you use TE error (or adjusted error) you are in fact
> getting the error of the ET when octaves are optimally
> stretched. Is that what you want? Because some systems
> have significantly higher error if you don't allow that
> (it's not possible with ordinary guitar tunings and
> straight frets, for instance).

Why should it be impossible for guitars with straight frets? I only used TE error because I didn't have any other options available to me, and everyone's assured me that TE error is proportional to POTE error (or whatever you want to call it). But if you're correct about 19-ET beating 22-ET for TE tunings but not POTE tunings, then I'd prefer to deal with POTE. If you can give me a way to calculate that in the program you're coding up, that'd be rad.

> Hang on, there's no way you could have entered every
> possible 15-limit subgroup into Graham's website for
> every ET, is there? Are you already using some rule to
> weed them down?

See above. I ignored lots of rational-basis subgroups, like 2.5.9/7.11/3.13/7, which doesn't allow for otonal pentadic harmony between its basis elements.

-Igs

🔗Carl Lumma <carl@...>

1/6/2012 12:55:27 PM

"cityoftheasleep" <igliashon@...> wrote:

> > In my book, if you
> > get the subgroup penalty and complexity exponent right,
> > you'll replicate this effect even for smaller subgroups.
> > But anyway, you'll be able to type in any number you want.
>
> I'm curious about this. Can you give an example of how this
> effect could be replicated for smaller subgroups?

The piece you just linked me to in porcupine[8] had some
great examples of playing melodic passing tones over chords
and having it sound in tune! (And can we please agree this
is not possible in 8-ET or 10-ET using a single timbre like
the piano?) You can hear the same in many of Petr Parizek's
pieces, and in Glassic and Decatonic Swing by Erlich and
Sarkissian. And howabout this video for 3.5.7

http://www.youtube.com/watch?v=Lbv-26sMc6o

!

> Great. This'll make the minimal badness comparisons easier.
> Someday though it might be nice to see, I dunno, maybe the
> top 5 subgroups per ET, but I'm not being picky. If it's a
> pain to throw that in, forget it.

It's pretty easy.

> > They're not! Any prime other than 2 is likely to have
> > the same unweighted error, on average.
>
> I'm not so sure about that. Do you have a proof? I'd think,
> for example, prime 19 is more likely to be represented accurately
> in any random ET, because it's so well represented in 4-ET (and
> therefore all the multiples of 4-ET). Surely some primes are
> more likely than others to have low unweighted error?

There might be some of that going on but it should be
approximately random. I mean, if you go up to 19 and say
nothing over 5 counts as "simple", then yeah, there are
certainly more chances for non-simple primes to be
approximated well. But the erroneous effect of the
weighting is there regardless and should be dealt with.

> My rule is that the elements of the subgroup must be able to
> combine to form at least one otonal 15-limit pentad. That
> narrows the field considerably, especially when non-integer
> basis elements are used. I'm sure there are arguments for
> including, say, 17-limit or 19-limit intervals, but I'm not
> as convinced of the definitive concordance of all 19-limit
> otonal pentads as I am of all 15-limit ones. What do you
> think of that?

I think it's a pretty good rule. Do you have a list of
these subgroups somewhere? How did you generate it?

> > Absolutely right; the only reason I didn't include them
> > before is because I was ignorant. By the way, I've been
> > channeling Graham this whole time. It's great!
>
> Channeling him? Has he passed on the great beyond? I never
> knew you were a medium!

Why wait until people die to start channeling them?

> > If you use TE error (or adjusted error) you are in fact
> > getting the error of the ET when octaves are optimally
> > stretched. Is that what you want? Because some systems
> > have significantly higher error if you don't allow that
> > (it's not possible with ordinary guitar tunings and
> > straight frets, for instance).
>
> Why should it be impossible for guitars with straight frets?

I guess it is possible, sorry. But stretched-octave
tunings are still difficult on many synths. And with
acoustic instruments, pure octaves are convenient because
it's so easy to naturally zoom to 2:1 by ear, whereas
stretch often involves minute offsets to this. Stretch
doesn't make much difference for the more accurate
systems, but it can improve things like mavila a lot.
Your call whether you want to use it for the scoring.

-Carl

🔗cityoftheasleep <igliashon@...>

1/6/2012 3:40:57 PM

--- In tuning@yahoogroups.com, "Carl Lumma" <carl@...> wrote:
> The piece you just linked me to in porcupine[8] had some
> great examples of playing melodic passing tones over chords
> and having it sound in tune!

Thanks! It's the first thing I wrote in Porcupine that I actually liked.

> (And can we please agree this
> is not possible in 8-ET or 10-ET using a single timbre like
> the piano?)

8-ET has the same 5:6:11:13 harmonies you get in 24-ET. Listen to "Smoke in the Moonlight" on my soundcloud page. Chris V. thought it was JI when I did a "guess the tuning". So a run of successive 8-ET steps approximates 10:11:12:13 pretty well, you could even throw a 17 in there, too. 8-ET has to be the most underrated ET in the world. I agree about 10, though--it's hard to squeeze that kind of melodic versatility out of it.
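
As a rough numerical check of that claim, here's a sketch comparing consecutive 150-cent steps of 8-ET against the steps of 10:11:12:13 (the errors come out around 15 cents or less):

from math import log2

step = 1200 / 8                                   # 150 cents
for k, ratio in enumerate([11/10, 12/10, 13/10], start=1):
    just = 1200 * log2(ratio)
    print(k, round(just, 1), round(k * step - just, 1))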

> > Great. This'll make the minimal badness comparisons easier.
> > Someday though it might be nice to see, I dunno, maybe the
> > top 5 subgroups per ET, but I'm not being picky. If it's a
> > pain to throw that in, forget it.
>
> It's pretty easy.

Sweet. I look forward to getting my hands on this thing you're coding!

> There might be some of that going on but it should be
> approximately random. I mean, if you go up to 19 and say
> nothing over 5 counts as "simple", then yeah, there are
> certainly more chances for non-simple primes to be
> approximated well. But the erroneous effect of the
> weighting is there regardless and should be dealt with.

Fair enough.

> I think it's a pretty good rule. Do you have a list of
> these subgroups somewhere? How did you generate it?

LOL, the same way I generate everything--by hand! I think I missed a few in the spreadsheet I uploaded, though. I'll post them here tomorrow sometime, after I've had time to check my work. There are not too many of them, though.

> I guess it is possible, sorry. But stretched-octave
> tunings are still difficult on many synths. And with
> acoustic instruments, pure octaves are convenient because
> it's so easy to naturally zoom to 2:1 by ear, whereas
> stretch often involves minute offsets to this. Stretch
> doesn't make much difference for the more accurate
> systems, but it can improve things like mavila a lot.
> Your call whether you want to use it for the scoring.

I'd prefer to go with pure octaves, for the reasons you stated. Paul would be sad, but I'm a pragmatist. Ease of implementation on existing instruments is definitely a factor in a tuning's large-scale viability.

-Igs

🔗Mike Battaglia <battaglia01@...>

1/6/2012 5:06:25 PM

On Fri, Jan 6, 2012 at 6:40 PM, cityoftheasleep <igliashon@...> wrote:
>
> --- In tuning@yahoogroups.com, "Carl Lumma" <carl@...> wrote:
> > The piece you just linked me to in porcupine[8] had some
> > great examples of playing melodic passing tones over chords
> > and having it sound in tune!
>
> Thanks! It's the first thing I wrote in Porcupine that I actually liked.

Is this "Porcupiano?" That piece was frickin insane on the second or
third listen. If you go up two minor thirds and down a fourth, you are
now the same distance from the tonic as if you go up a major third and
down two fifths. My brain caught onto that after listening a few times
and it changed some aspect of my perception of the piece.

> > (And can we please agree this
> > is not possible in 8-ET or 10-ET using a single timbre like
> > the piano?)
>
> 8-ET has the same 5:6:11:13 harmonies you get in 24-ET. Listen to "Smoke in the Moonlight" on my soundcloud page. Chris V. thought it was JI when I did a "guess the tuning". So a run of successive 8-ET steps approximates 10:11:12:13 pretty well, you could even throw a 17 in there, too. 8-ET has to be the most underrated ET in the world. I agree about 10, though--it's hard to squeeze that kind of melodic versatility out of it.

And it also has sensamagic triads, like 245/243-tempered 1/1 9/7
5/3's. Those are great.

-Mike

🔗Carl Lumma <carl@...>

1/6/2012 6:43:00 PM

Igs wrote:

> > (And can we please agree this
> > is not possible in 8-ET or 10-ET using a single timbre like
> > the piano?)
>
> 8-ET has the same 5:6:11:13 harmonies you get in 24-ET. Listen
> to "Smoke in the Moonlight" on my soundcloud page. Chris V.
> thought it was JI when I did a "guess the tuning". So a run of
> successive 8-ET steps approximates 10:11:12:13 pretty well, you
> could even throw a 17 in there, too. 8-ET has to be the most
> underrated ET in the world. I agree about 10, though--it's hard
> to squeeze that kind of melodic versatility out of it.

Can we please agree it's not possible in 8-ET?

:) It's a great piece and the chords are plenty consonant,
but I don't hear many 'in tune' passing tones.

> > I guess it is possible, sorry. But stretched-octave
> > tunings are still difficult on many synths. And with
> > acoustic instruments, pure octaves are convenient because
> > it's so easy to naturally zoom to 2:1 by ear, whereas
> > stretch often involves minute offsets to this. Stretch
> > doesn't make much difference for the more accurate
> > systems, but it can improve things like mavila a lot.
> > Your call whether you want to use it for the scoring.
>
> I'd prefer to go with pure octaves, for the reasons you stated.
> Paul would be sad, but I'm a pragmatist. Ease of implementation
> on existing instruments is definitely a factor in a tuning's
> large-scale viability.

I think that's a good choice.

-Carl

🔗Chris Vaisvil <chrisvaisvil@...>

1/6/2012 7:36:09 PM

Seriously, making any theory determinations based upon my hearing is very
likely to be faulty and totally non-consensus.
My hearing is a bit weird. I almost failed ear training. (but that
mechanical course on tape was awful).

I suggest you don't use my results alone to determine anything.

Chris

On Fri, Jan 6, 2012 at 9:43 PM, Carl Lumma <carl@...> wrote:

> Igs wrote:
>
> > > (And can we please agree this
> > > is not possible in 8-ET or 10-ET using a single timbre like
> > > the piano?)
> >
> > 8-ET has the same 5:6:11:13 harmonies you get in 24-ET. Listen
> > to "Smoke in the Moonlight" on my soundcloud page. Chris V.
> > thought it was JI when I did a "guess the tuning". So a run of
> > successive 8-ET steps approximates 10:11:12:13 pretty well, you
> > could even throw a 17 in there, too. 8-ET has to be the most
> > underrated ET in the world. I agree about 10, though--it's hard
> > to squeeze that kind of melodic versatility out of it.
>
> Can we please agree it's not possible in 8-ET?
>

🔗Mike Battaglia <battaglia01@...>

1/6/2012 8:20:36 PM

On Fri, Jan 6, 2012 at 9:43 PM, Carl Lumma <carl@...> wrote:
>
> Can we please agree it's not possible in 8-ET?
>
> :) It's a great piece and the chords are plenty consonant,
> but I don't hear many 'in tune' passing tones.

That's because Igs wasn't using 245/243-tempered 1/1-9/7-5/3 chords.
I'm telling you, try it. You've got those for "Happy" and diminished
for "sad" - what else could you possibly need?

-Mike

🔗cityoftheasleep <igliashon@...>

1/7/2012 9:13:43 AM

--- In tuning@yahoogroups.com, "Carl Lumma" <carl@...> wrote:

> I think it's a pretty good rule. Do you have a list of
> these subgroups somewhere? How did you generate it?

What I did was first look at all the pentads made up of odd harmonics up to 15, noting which were actually 4D or otherwise redundant (which takes out a LOT of them, since 9 and 15 are not prime and are only co-prime with certain other basis elements):

2:3:5:7:9 (=2:3:5:7, omitted)
2:3:5:7:11 (=11-limit)
2:3:5:7:13
2:3:5:7:15 (=2:3:5:7, omitted)
2:3:5:9:11 (=2.3.5.11, omitted)
2:3:5:9:13 (=2.3.5.13, omitted)
2:3:5:9:15 (=2.3.5, omitted but left in by mistake in my original)
2:3:5:11:13
2:3:5:11:15 (=2.3.5.11, omitted)
2:3:5:13:15 (=2.3.5.13, omitted)
2:3:7:9:11 (=2.3.7.11, omitted)
2:3:7:9:13 (=2.3.7.13, omitted)
2:3:7:9:15 (=7-limit, omitted)
2:3:7:11:13
2:3:7:11:15 (=11-limit)
2:3:7:13:15 (=2.3.5.7.13)
2:3:9:11:13 (=2.3.11.13, omitted)
2:3:9:11:15 (=2.3.5.11, omitted)
2:3:9:13:15 (=2.3.5.13, omitted)
2:3:11:13:15 (=2.3.5.11.13)
2:5:7:9:11
2:5:7:9:13
2:5:7:9:15 (=2:3:5:7, omitted)
2:5:7:11:13
2:5:7:11:15 (=11-limit)
2:5:9:11:13
2:5:9:11:15 (=2.3.5.11, omitted)
2:5:9:13:15 (=2.3.5.13, omitted)
2:5:11:13:15 (=2.3.5.11.13)
2:7:9:11:13
2:7:9:11:15
2:7:9:13:15
2:7:11:13:15
2:9:11:13:15

2-less Pentads:
3:5:7:9:11 (**at first I thought this = 2.5/3.7/3.9/3.11/3, then realized 9/3=3, so thought 2.3.5/3.7/3.11/3...but then realized that if you have 2, 3, and 5/3, you have 5 as well, so this is actually just the 11-limit if you're only looking at ED2s**).
3:5:7:9:13 (=2.3.5.7.13, see above)
3:5:7:9:15 (=7-limit, see above)
3:5:7:11:13 = 2.5/3.7/3.11/3.13/3, kept
3:5:7:11:15 (=11-limit, see above)
3:5:7:13:15 (=2.3.5.7.13, see above)
3:7:9:11:13 (=2.3.7.11.13, see above)
3:7:9:11:15 (=2.3.5.7.11, see above)
3:7:11:13:15 = 2.5.7/3.11/3.13/3, kept
5:7:9:11:13 = 2.7/5.9/5.11/5.13/5, kept
5:7:9:11:15 (=11-limit, see above)
5:7:9:13:15 (=2.3.5.7.13, see above)
5:7:11:13:15 = 2.3.7/5.11/5.13/5, kept
5:9:11:13:15 (=2.3.5.11.13, see above)
7:9:11:13:15 = 2.9/7.11/7.13/7.15/7, kept

-------------------------------------------

So, if we weed out all the redundant pentads, we get the following 5D pentadic subgroups:

2:3:5:7:11 (=11-limit)
2:3:5:7:13
2:3:5:11:13
2:3:7:11:13
2:5:7:9:11
2:5:7:9:13
2:5:7:11:13
2:5:9:11:13
2:7:9:11:13
2:7:9:11:15
2:7:9:13:15
2:7:11:13:15
2:9:11:13:15
3:5:7:11:13 = 2.5/3.7/3.11/3.13/3
3:7:11:13:15 = 2.5.7/3.11/3.13/3
5:7:9:11:13 = 2.7/5.9/5.11/5.13/5
5:7:11:13:15 = 2.3.7/5.11/5.13/5
7:9:11:13:15 = 2.9/7.11/7.13/7.15/7

And there you have it! My master list of non-redundant 5D subgroups containing at minimum one 15-limit otonal pentad.
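
For anyone who'd rather check this by machine than by hand, here's a minimal sketch that automates the redundancy test for the octave-containing pentads: write each harmonic as a monzo over the primes 2, 3, 5, 7, 11, 13 and compute the rank of the resulting matrix; rank 5 means a genuinely 5D subgroup. This is only a sketch, not how the list above was generated (that was by hand), and it doesn't handle the further step of merging pentads that generate the same subgroup (e.g. 2:3:7:13:15 = 2.3.5.7.13), or the 2-less pentads:

from itertools import combinations
from sympy import Matrix

MONZOS = {                      # exponents of 2, 3, 5, 7, 11, 13
    2:  (1, 0, 0, 0, 0, 0),
    3:  (0, 1, 0, 0, 0, 0),
    5:  (0, 0, 1, 0, 0, 0),
    7:  (0, 0, 0, 1, 0, 0),
    9:  (0, 2, 0, 0, 0, 0),
    11: (0, 0, 0, 0, 1, 0),
    13: (0, 0, 0, 0, 0, 1),
    15: (0, 1, 1, 0, 0, 0),
}

for odds in combinations([3, 5, 7, 9, 11, 13, 15], 4):
    chord = (2,) + odds
    rank = Matrix([MONZOS[h] for h in chord]).rank()
    print(':'.join(str(h) for h in chord), 'rank', rank,
          '(redundant)' if rank < 5 else '')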

I would reckon that giving a decently accurate representation of any one of these subgroups would be a sufficient (but not necessary) condition for versatile and reasonably-concordant harmony using at least one higher prime (11 or 13).

Going by the (flawed) error calculations I used initially, it looks like the only tunings not suitable for this goal (adjusted error >15 cents) are 5, 6, 7, 8, 10, 12, 14, and 15. I'm sure if we looked at 3D or 4D pentadic subgroups as well (like the 5-limit, 7-limit, and 2.3.5.11 and 2.3.5.13 subgroups), most of these would go back on the "acceptable" list. I can virtually guarantee that at least 8, 12, and 15 would.

Now, if only we could agree on a good way to calculate error for pure octave tunings, I could re-do the calculations and give an accurate reflection of what ED2s are actually good for. Then we could extend that to badness rankings if we wanted to.

However, my real goal here was to make the point that concordant harmony is not rare among ETs (contrary to Paul's assertions that concordance is rare and therefore valuable), even if we demand chords as large as pentads with no identity beyond the 15th harmonic represented. I'm curious how a better error metric will change the spread, but I have doubts that it will make a significant difference in the overall picture.

-Igs