
It's all in the genes!

🔗Graham Breed <gbreed@gmail.com>

3/14/2008 6:31:49 AM

I've kept this to myself for various reasons. As I let the cat out of the bag in another place I may as well explain it to you.

Our story involves the character formerly known as "the val". It represents the mapping from prime intervals to steps in an equal temperament. There are arguments for giving it a special name:

- the putative equal temperament doesn't always make sense

- the dimensions are inconsistent with higher ranks.

The second point may not be obvious. But I write my regular temperament mappings as kets of bras. For example, miracle temperament by period and generator is

[< 1, 1, 3, 3, 2],
< 0, 6, -7, -2, 15]>

The logic is, you take a JI interval as a ket vector [X>. Then multiply it through.

[< 1, 1, 3, 3, 2][X>,
< 0, 6, -7, -2, 15][X>>

This gives a ket with two real numbers -- dimensions of an interval, not a mapping.
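For concreteness, a minimal Python sketch of multiplying each bra through a ket of prime exponents (the choice of 3/2 as the test interval is my own, not from the post):

```python
# Miracle mapping from the post, one bra per row; primes 2, 3, 5, 7, 11.
mapping = [
    [1, 1, 3, 3, 2],     # <1, 1, 3, 3, 2]    -- periods (octaves)
    [0, 6, -7, -2, 15],  # <0, 6, -7, -2, 15] -- generators (secors)
]

def apply_mapping(mapping, monzo):
    """Multiply each bra through the ket |X> of prime exponents."""
    return [sum(m * x for m, x in zip(bra, monzo)) for bra in mapping]

fifth = [-1, 1, 0, 0, 0]  # 3/2 = 2^-1 * 3^1 as a ket
print(apply_mapping(mapping, fifth))  # [0, 6]: 3/2 is six generators up
```

The result is a ket of two coordinates (period count, generator count), as the text says.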

Similarly, a planar temperament mapping is a ket with three bras, and so on. So, logically, an equal temperament mapping should be a ket of one bra

[<12, 19, 28]>

The bra itself is a different thing.
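As a sketch of where that particular bra comes from, assuming the usual nearest-step rounding (what this thread elsewhere calls the patent val):

```python
import math

def nearest_step_val(n, primes=(2, 3, 5)):
    """Round each prime's size, in steps of n-equal, to the nearest integer."""
    return [round(n * math.log2(p)) for p in primes]

print(nearest_step_val(12))  # [12, 19, 28]
```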

So, a name might be useful. There's already "val" and you may consider it accepted usage. I don't like it and I'm not sure exactly what it means. For example, last I remember it had to refer to integer ratios and not intervals taken from an inharmonic timbre. And as it was defined precisely I don't like to use it wrongly.

Now, to the point. A logical name is "gene". The reason is, these bras are the units of inheritance. The argument would have made more sense with my old meaning of "temperament family" but there's still a sense of the scale tree coding family relationships. Miracle temperament has "parents" of 31- and 41-equal. So why not say it inherits its genes from them?

This doesn't cover the whole meaning of "val" because they also cover other transformations. You'll have to judge whether introducing the new term is disrespectful to the mighty Gene Ward Smith who introduced "vals".

It goes further! A set of genes can be called a "genome". There are different ways this could be useful. The best one for me is that the genome could be either a matrix or a wedgie. There are some formulae, like scalar complexity, that can be written the same way for either kind of genome.

The next step is to embrace Poincaré duality. This is something I still don't have working but it should mean that you can always replace "vals" with "monzos" -- that is, intervals written as prime exponent vectors. Like with "val" the extended gene needn't conflict with "monzo" for people who are attached to that term as a monzo can be used in other ways. Rather, it's a replacement for "unison vector" which is a much contested term.

To distinguish the different kinds of genes, how about "m" for mapping and "u" for unison vector? That keeps Fokker's usage alive. So vals used to produce temperament classes are "m-genes" and monzos used equivalently are "u-genes" when you need to make the distinction. I especially like "u-gene" although I don't know anybody called Eugene to pretend to name it after.

You can think of temperament families as having the same genes, but with mutations. I think that's a useful and descriptive terminology.

So, that's how it lies. I don't have an immediate need for such terms, but when I want one I think "gene" is what I'll go for. At least provided I can sneak something out when the original Gene isn't looking.

Graham

🔗Carl Lumma <carl@lumma.org>

3/14/2008 10:01:21 AM

>So, a name might be useful. There's already "val" and you
>may consider it accepted usage. I don't like it and I'm not
>sure exactly what it means.

You remember that it refers to "valuation"?

>For example, last I remember it
>had to refer to integer ratios and not intervals taken from
>an inharmonic timbre.

????????????????????????????????????????????????????

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/14/2008 8:26:00 PM

Carl Lumma wrote:
>> So, a name might be useful. There's already "val" and you
>> may consider it accepted usage. I don't like it and I'm not
>> sure exactly what it means.
>
> You remember that it refers to "valuation"?

Yes.

>> For example, last I remember it
>> had to refer to integer ratios and not intervals taken from
>> an inharmonic timbre.
>
> ????????????????????????????????????????????????????

"A val is a homomorphism of Np[the p-limit rational numbers] to the integers Z."

http://web.archive.org/web/20070127163527/66.98.148.43/~xenharmo/intval.html
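A quick sketch of what that homomorphism condition means in practice, using the 12-equal bra <12, 19, 28] (the ratios are illustrative examples): v(a*b) must equal v(a) + v(b).

```python
from fractions import Fraction

val = [12, 19, 28]  # steps assigned to primes 2, 3, 5 in 12-equal

def monzo(ratio, primes=(2, 3, 5)):
    """Factor a p-limit ratio into a vector of prime exponents."""
    n, d, exps = ratio.numerator, ratio.denominator, []
    for p in primes:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        while d % p == 0:
            d //= p
            e -= 1
        exps.append(e)
    assert n == 1 and d == 1, "ratio is outside the prime limit"
    return exps

def v(ratio):
    return sum(a * b for a, b in zip(val, monzo(ratio)))

a, b = Fraction(81, 80), Fraction(16, 15)
print(v(a), v(b), v(a * b))  # 0 1 1 -- v(a*b) == v(a) + v(b)
```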

Graham

🔗Carl Lumma <carl@lumma.org>

3/14/2008 10:33:45 PM

>>> For example, last I remember it
>>> had to refer to integer ratios and not intervals taken from
>>> an inharmonic timbre.
>>
>> ????????????????????????????????????????????????????
>
>"A val is a homomorphism of Np[the p-limit rational numbers]
>to the integers Z."

What does this have to do with timbre? Let me answer that
for you: nothing. It maps primes to scale steps, and that's
all. If you care about primes, you've already decided on a
way to assign fundamentals to whatever timbre you're working
with. If you're not working with a pitched timbre, you
probably don't care about approximating consonances.

-Carl

🔗Herman Miller <hmiller@IO.COM>

3/15/2008 8:34:19 AM

Graham Breed wrote:

> So, a name might be useful. There's already "val" and you
> may consider it accepted usage. I don't like it and I'm not
> sure exactly what it means. For example, last I remember it
> had to refer to integer ratios and not intervals taken from
> an inharmonic timbre. And as it was defined precisely I
> don't like to use it wrongly.

I'm also not satisfied with the word "val"; unless we're talking to a mathematician and can explain that it's short for "valuation", it isn't of much use. And in that case it's only useful for the mathematician -- the rest of us don't have the foggiest idea of what a "valuation" is unless we go back and look it up again.

I also don't like "patent val" because the adjective "patent" with a long "a" is such an uncommon word that it's likely to be misinterpreted as having something to do with patents (in the legal sense of the word, with a short "a").

> The next step is to embrace Poincaré duality. This is
> something I still don't have working but it should mean that
> you can always replace "vals" with "monzos" -- that is,
> intervals written as prime exponent vectors. Like with
> "val" the extended gene needn't conflict with "monzo" for
> people who are attached to that term as a monzo can be used
> in other ways. Rather, it's a replacement for "unison
> vector" which is a much contested term.

But a new word that encompasses the meanings of both "val" and "monzo" is a poor replacement for either one, so we'll still need replacements for the original words. Establishing the bra-ket notation to distinguish one from the other was a step in the right direction; more confusion is likely to result if we start blurring the distinction.

As much as the pun on the name "Gene" is appealing in some ways, eponymous words typically use the surname -- the unit of electrical inductance is a "henry", but that's named after Joseph Henry, not someone with the first name Henry.

Part of the problem finding a word for "val" is that what it describes is a thing that makes sense only in the context of the larger structure. It tells you how many of one generator to chain together (and in which direction) to reach each of the prime intervals, but only when used in combination with the other generators. I've been wondering lately about the potential advantages of grouping the numbers in the mapping by generator pairs instead of vals. So for the miracle mapping you mentioned,

[< 1, 1, 3, 3, 2],
< 0, 6, -7, -2, 15]>

you could write it using the generator pairs as <[1, 0>, [1, 6>, [3, -7>, [3, -2>, [2, 15>]. The thing about these generator pairs is that they're actually useful by themselves; each pair tells you how to approximate a particular prime interval. All temperament thingies with an octave as the period and a fourth as the generator can be grouped together with the first two generator pairs as <[1, 0>, [2, -1>, ...]. So they're useful for higher level temperament thingy classification.
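Herman's generator pairs are just the columns of the mapping, so the rewrite is a transpose; a minimal sketch:

```python
# The miracle mapping as a list of vals (rows); primes 2, 3, 5, 7, 11.
vals = [
    [1, 1, 3, 3, 2],
    [0, 6, -7, -2, 15],
]

# Grouping by generator pairs is a transpose: one (period, generator)
# column per prime interval.
pairs = [list(col) for col in zip(*vals)]
print(pairs)  # [[1, 0], [1, 6], [3, -7], [3, -2], [2, 15]]
```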

The vals are still useful for some things -- e.g. recognizing the connection between meantone [<1, 2, 4, 7], <0, -1, -4, -10]> and injera [<2, 3, 4, 5], <0, 1, 4, 4]> from the similarity between <0, -1, -4] and <0, 1, 4], and combining ET's to form higher rank temperament thingies. And of course they're useful for searching the tuning-math archives (but generator pairs were more commonly seen in earlier messages).
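On combining ETs: two ET vals define a rank-2 temperament whose commas are whatever both map to zero steps. A sketch, taking 12- and 19-equal (my choice of example, on the assumption they are meantone's parents) and the syntonic comma 81/80:

```python
val_12 = [12, 19, 28]         # 12-equal mapping of primes 2, 3, 5
val_19 = [19, 30, 44]         # 19-equal mapping of primes 2, 3, 5
syntonic_comma = [-4, 4, -1]  # 81/80 = 2^-4 * 3^4 * 5^-1

def steps(val, monzo):
    return sum(v * m for v, m in zip(val, monzo))

# Both ETs temper out 81/80, so combining them yields meantone.
print(steps(val_12, syntonic_comma), steps(val_19, syntonic_comma))  # 0 0
```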

🔗Carl Lumma <carl@lumma.org>

3/15/2008 10:22:17 AM

Herman wrote...

>> So, a name might be useful. There's already "val" and you
>> may consider it accepted usage. I don't like it and I'm not
>> sure exactly what it means. For example, last I remember it
>> had to refer to integer ratios and not intervals taken from
>> an inharmonic timbre. And as it was defined precisely I
>> don't like to use it wrongly.
>
>I'm also not satisfied with the word "val"

Well, it's official. "Val" is the most hated term in the
history of music theory. Even Paul E. hated it. He called
them "breeds".

I'm happy to stand alone in liking it. It's a single syllable,
and it's in all my scheme code and e-mails. And it's on Gene's
site. He invented the concept, and that should also give him
some right to name it in my opinion.

> and unless we're talking to a
>mathematician and explain that it's short for "valuation" it isn't of
>much use.

Do you know the etymology of all the words you use?

>I also don't like "patent val" because the adjective "patent" with a
>long "a" is such an uncommon word that it's likely to be misinterpreted
>as having something to do with patents (in the legal sense of the word,
>with a short "a").

I almost thought this whole e-mail was facetious when I
read this.

We already changed this term once. Where was your objection then?
It was "standard". I suggested "immediate" but "patent" won out,
and... you think it should be changed because it could be confused
with legal patents??

I can't go on.

-Carl

🔗Herman Miller <hmiller@IO.COM>

3/15/2008 11:13:07 AM

Carl Lumma wrote:

>> and unless we're talking to a
>> mathematician and explain that it's short for "valuation" it isn't of
>> much use.
>
> Do you know the etymology of all the words you use?

The first thing that comes to mind regarding the word "val" is that it's short for "value", which is misleading. (I'm sure I've mentioned that before.) It's not of much use to know that it's short for something that isn't familiar in the first place. The point is that this is in the context of a message which was suggesting a replacement for the word "val". I think it's perfectly fair to point out the problems with the word "val" before criticizing the suggested replacement.

>> I also don't like "patent val" because the adjective "patent" with a
>> long "a" is such an uncommon word that it's likely to be misinterpreted
>> as having something to do with patents (in the legal sense of the word,
>> with a short "a").
>
> I almost thought this whole e-mail was facetious when I
> read this.
>
> We already changed this term once. Where was your objection then?
> It was "standard". I suggested "immediate" but "patent" won out,
> and... you think it should be changed because it could be confused
> with legal patents??

I don't recall ever approving of the term. I can search the archives, but I won't bother. And the last I recall, the suggested term was something like "nearest prime mapping" anyway. Whatever. What's annoying to me is when I try to write a message about tuning, and the only replies I get are people trying to correct my terminology. Most of the time I really don't care what these things are called, as long as we can agree on something.

I don't think I ever would have objected to "standard val". I don't even recall what the objection to "standard" might have been. The problem with the word "patent" is that it always needs clarification whenever someone sees it for the first time, or they'll end up saying it wrong.

🔗Carl Lumma <carl@lumma.org>

3/15/2008 12:12:18 PM

Hi Herman!

>>> and unless we're talking to a
>>> mathematician and explain that it's short for "valuation" it isn't
>>> of much use.
>>
>> Do you know the etymology of all the words you use?
>
>The first thing that comes to mind regarding the word "val" is that it's
>short for "value", which is misleading. (I'm sure I've mentioned that
>before.) It's not of much use to know that it's short for something that
>isn't familiar in the first place. The point is that this is in the
>context of a message which was suggesting a replacement for the word
>"val". I think it's perfectly fair to point out the problems with the
>word "val" before criticizing the suggested replacement.

It's fair but it's also a waste of time that distracts from
tuning theory and makes the list less desirable to newcomers.
What could be so wrong with a three-letter word that it's
worth all this?

I don't think terminology revision in general is a good idea.
Words have power given them by their users, and no more. We
can use any three-letter term and I'll like it as much, except
the first such term to be used has a large practical advantage
over any other.

The exception is when revising terms improves our understanding
of the identified thing. Gene's definition of "temperament"
did this. It *is* the same thing people meant throughout
history. They said "tuning" but that is not what they
would have said if they had the understanding we do. Revisions
like 'naming temperaments after their generators' and
'changing "val" to something else' fail to meet this standard.
They're usually justified instead on some vague notion of
'beginner friendliness', which usually is just a guess on the
part of the person advocating the change.

>>> I also don't like "patent val" because the adjective "patent" with a
>>> long "a" is such an uncommon word that it's likely to be misinterpreted
>>> as having something to do with patents (in the legal sense of the word,
>>> with a short "a").
>>
>> I almost thought this whole e-mail was facetious when I
>> read this.
>>
>> We already changed this term once. Where was your objection then?
>> It was "standard". I suggested "immediate" but "patent" won out,
>> and... you think it should be changed because it could be confused
>> with legal patents??
>
>I don't recall ever approving of the term. I can search the archives,
>but I won't bother.

I don't recall you approving of it, either. Do we need to track
down every member of this list and get their approval before
standardizing terminology? There was a *huge* thread on it.

>What's annoying
>to me is when I try to write a message about tuning, and the only
>replies I get are people trying to correct my terminology.

That's how I feel!

>I don't think I ever would have objected to "standard val". I don't
>even recall what the objection to "standard" might have been.

Paul E. threw a fit, over a period of more than a year. I must
be the only person on this list who reads every post.

Paul's objection was basically that "standard" implied a value
judgement that wasn't clear. He thought the "standard" val should,
if anything, be the one with the lowest TOP damage. Finally
Gene gave in (as he usually did) and asked for suggestions. I
suggested "immediate val". He suggested "patent val". In the
end I didn't care, since they *both mean the same thing*.

>The problem
>with the word "patent" is that it always needs clarification whenever
>someone sees it for the first time, or they'll end up saying it wrong.

I don't follow you here, but in general it's debatable whether
having terms that require looking up / asking about is better or
worse for the popularity of a field than having pedestrian terms.

-Carl

🔗Herman Miller <hmiller@IO.COM>

3/15/2008 2:58:42 PM

Carl Lumma wrote:
> Hi Herman!
>
>>>> and unless we're talking to a
>>>> mathematician and explain that it's short for "valuation" it isn't
>>>> of much use.
>>> Do you know the etymology of all the words you use?
>> The first thing that comes to mind regarding the word "val" is that it's
>> short for "value", which is misleading. (I'm sure I've mentioned that
>> before.) It's not of much use to know that it's short for something that
>> isn't familiar in the first place. The point is that this is in the
>> context of a message which was suggesting a replacement for the word
>> "val". I think it's perfectly fair to point out the problems with the
>> word "val" before criticizing the suggested replacement.
>
> It's fair but it's also a waste of time that distracts from
> tuning theory and makes the list less desirable to newcomers.
> What could be so wrong with a three-letter word that it's
> worth all this?

Let's discuss tuning theory, then. Why is it that the interminable arguments over terminology get all the attention? You may not have noticed, but my original reply had a bit about whether it might be more useful to go back to the older way we used to notate temperaments, with pairs of generators for each prime interval. I thought that would be the thing that was worth commenting on, but the bit about the word "val" got all the attention. Not to mention that I argued against replacing "val" with "gene" and used "val" myself without comment in that bit at the end. Sigh.

> I don't think terminology revision in general is a good idea.
> Words have power given them by their users, and no more. We
> can use any three-letter term and I'll like it as much, except
> the first such term to be used has a large practical advantage
> over any other.

I don't like it when terminology arguments get in the way of discussing tuning theory. Whatever we call the thing, at least we've settled on a notation for it, more or less -- some might write <0, -1, 3, -5, 1| and others <0, -1, 3, -5, 1], but there's little possibility for confusion. That said, "val" is one of those words that seems to generate a lot of unnecessary argument, and I wouldn't be sad to see it go if something better comes along.

> The exception is when revising terms improves our understanding
> of the identified thing. Gene's definition of "temperament"
> did this. It *is* the same thing people meant throughout
>> history. They said "tuning" but that is not what they
> would have said if they had the understanding we do. Revisions
> like 'naming temperaments after their generators' and
> 'changing "val" to something else' fail to meet this standard.
> They're usually justified instead on some vague notion of
> 'beginner friendliness', which usually is just a guess on the
> part of the person advocating the change.

Naming temperaments after their generators or their "unison vectors" (whatever we call them) was (I think) more of an older practice, before we started naming them things like "harry". (Would that be Partch, Houdini, Truman, or Potter?) One shortcoming is that generators aren't specific enough (think of all the minor-third based temperaments), and neither are unison vectors.

Actually, most of the time when I'm discussing temperaments it's in the context of something like a piece of music or a system of notation, in which case I have a particular scale or set of notes in mind -- I'll say something like miracle[21] or magic[19] -- the word "temperament" doesn't even get mentioned. For similar reasons the word "val" rarely comes up.

> Paul's objection was basically that "standard" implied a value
> judgement that wasn't clear. He thought the "standard" val should,
> if anything, be the one with the lowest TOP damage. Finally
> Gene gave in (as he usually did) and asked for suggestions. I
> suggested "immediate val". He suggested "patent val". In the
> end I didn't care, since they *both mean the same thing*.

I don't know, is there a value judgment with phrases like "standard temperature and pressure"? Not that it matters all that much, it was just a side comment to the issue about wanting to rename "val" to "gene", and the phrase can be avoided without much trouble. But the original usage seems better to me than the replacement.

>> The problem
>> with the word "patent" is that it always needs clarification whenever
>> someone sees it for the first time, or they'll end up saying it wrong.
>
> I don't follow you here, but in general it's debatable whether
> having terms that require looking up / asking about is better or
> worse for the popularity of a field than having pedestrian terms.

You're assuming they'll think to ask about it. I know it confused me at first -- it's an ambiguity that isn't immediately apparent. (It might not be such a problem if you pronounce both words the same way.) The thing is, it looks like a common word that doesn't need to be looked up or asked about.

🔗Carl Lumma <carl@lumma.org>

3/15/2008 4:02:36 PM

>Let's discuss tuning theory, then. Why is it that the interminable
>arguments over terminology get all the attention? You may not have
>noticed, but my original reply had a bit about whether it might be more
>useful to go back to the older way we used to notate temperaments, with
>pairs of generators for each prime interval.

I don't have much to say about it. It's a matter of whether you
like to read by rows or columns. The vals way makes for less
"administrative debris" (the brackets and commas). And what you
say about them being meaningless unless there's more than one
isn't true for ETs. For me, I've got code that works with vals
and I'm not going to change it for fun.

>> The exception is when revising terms improves our understanding
>> of the identified thing. Gene's definition of "temperament"
>> did this. It *is* the same thing people meant throughout
>> history. They said "tuning" but that is not what they
>> would have said if they had the understanding we do. Revisions
>> like 'naming temperaments after their generators' and
>> 'changing "val" to something else' fail to meet this standard.
>> They're usually justified instead on some vague notion of
>> 'beginner friendliness', which usually is just a guess on the
>> part of the person advocating the change.
>
>Naming temperaments after their generators or their "unison vectors"
>(whatever we call them) was (I think) more of an older practice, before
>we started naming them things like "harry".

Yes, but Dave even more recently argued all the cute names should
be thrown out in favor of naming after generators.

>(Would that be Partch,
>Houdini, Truman, or Potter?)

Are you kidding?

>One shortcoming is that generators aren't
>specific enough (think of all the minor-third based temperaments),

Yup, that's one reason why the proposal sucked. Another is that
the generators were going to be described as diatonic intervals.

>and neither are unison vectors.

I happen to think Gene's comma sequences are an excellent way to
identify temperaments (better for musicians than wedgies).
There are still some open problems with the method. Unfortunately
Gene was too busy fending off terminology attacks to solve them.

>Actually, most of the time when I'm discussing temperaments it's in the
>context of something like a piece of music or a system of notation, in
>which case I have a particular scale or set of notes in mind -- I'll say
>something like miracle[21] or magic[19] -- the word "temperament"
>doesn't even get mentioned.

Yes. But I just showed that when people say "miracle temperament",
they never say "miracle temperament class".

>> Paul's objection was basically that "standard" implied a value
>> judgement that wasn't clear. He thought the "standard" val should,
>> if anything, be the one with the lowest TOP damage. Finally
>> Gene gave in (as he usually did) and asked for suggestions. I
>> suggested "immediate val". He suggested "patent val". In the
>> end I didn't care, since they *both mean the same thing*.
>
>I don't know, is there a value judgment with phrases like "standard
>temperature and pressure"? Not that it matters all that much, it was
>just a side comment to the issue about wanting to rename "val" to
>"gene", and the phrase can be avoided without much trouble. But the
>original usage seems better to me than the replacement.

I believe I suggested at the time that the best course of action
would be for Paul to propose a new term for the val with lowest
TOP damage. Alas, we still don't have a term for that.

>>> The problem
>>> with the word "patent" is that it always needs clarification whenever
>>> someone sees it for the first time, or they'll end up saying it wrong.
>>
>> I don't follow you here, but in general it's debatable whether
>> having terms that require looking up / asking about is better or
>> worse for the popularity of a field than having pedestrian terms.
>
>You're assuming they'll think to ask about it. I know it confused me at
>first -- it's an ambiguity that isn't immediately apparent. (It might
>not be such a problem if you pronounce both words the same way.) The
>thing is, it looks like a common word that doesn't need to be looked up
>or asked about.

MOS looks like a common word?? I was immediately drawn in by it,
and wrote to John Chalmers asking him to explain it. Which he did.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/15/2008 6:05:14 PM

Herman Miller wrote:

> I don't think I ever would have objected to "standard val". I don't even
> recall what the objection to "standard" might have been. The problem
> with the word "patent" is that it always needs clarification whenever
> someone sees it for the first time, or they'll end up saying it wrong.

I did object to "standard" because it implies that my software, which returns a different default, is sub-standard. But mostly I objected to Gene inventing words and then using them without supplying a definition in the context.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/15/2008 6:17:56 PM

Herman Miller wrote:
> Graham Breed wrote:

>> The next step is to embrace Poincaré duality. This is
>> something I still don't have working but it should mean that
>> you can always replace "vals" with "monzos" -- that is,
>> intervals written as prime exponent vectors. Like with
>> "val" the extended gene needn't conflict with "monzo" for
>> people who are attached to that term as a monzo can be used
>> in other ways. Rather, it's a replacement for "unison
>> vector" which is a much contested term.
>
> But a new word that encompasses the meanings of both "val" and "monzo"
> is a poor replacement for either one, so we'll still need replacements
> for the original words. Establishing the bra-ket notation to distinguish
> one from the other was a step in the right direction; more confusion is
> likely to result if we start blurring the distinction.

Carl objects vehemently, and "val" is in use, so we may as well declare Gene the victor in the terminology dispute. I still think there is a need for a common term because they behave similarly to each other ... even if I can't work out the details.

The most useful concept is what I called "genome" -- the definition of a temperament class in any form -- as wedgie, mapping matrix, or set of unison vectors.

> As much as the pun on the name "Gene" is appealing in some ways,
> eponymous words typically use the surname -- the unit of electrical
> inductance is a "henry", but that's named after Joseph Henry, not
> someone with the first name Henry.

We prefer familiar names in these parts, as in "Kees metric". Also, "Smith" is a more common name than "Gene".

> Part of the problem finding a word for "val" is that what it describes
> is a thing that makes sense only in the context of the larger structure.
> It tells you how many of one generator to chain together (and in which
> direction) to reach each of the prime intervals, but only when used in
> combination with the other generators. I've been wondering lately about
> the potential advantages of grouping the numbers in the mapping by
> generator pairs instead of vals. So for the miracle mapping you mentioned,
>
> [< 1, 1, 3, 3, 2],
> < 0, 6, -7, -2, 15]>
>
> you could write it using the generator pairs as <[1, 0>, [1, 6>, [3,
> -7>, [3, -2>, [2, 15>]. The thing about these generator pairs is that
> they're actually useful by themselves; each pair tells you how to
> approximate a particular prime interval. All temperament thingies with
> an octave as the period and a fourth as the generator can be grouped
> together with the first two generator pairs as <[1, 0>, [2, -1>, ...].
> So they're useful for higher level temperament thingy classification.

I don't think that works, and it's something we had a few years to think about. It's easier to recognize a temperament class from a generator mapping if it's a single vector.

> The vals are still useful for some things -- e.g. recognizing the
> connection between meantone [<1, 2, 4, 7], <0, -1, -4, -10]> and injera
> [<2, 3, 4, 5], <0, 1, 4, 4]> from the similarity between <0, -1, -4] and
> <0, 1, 4], and combining ET's to form higher rank temperament thingies.
> And of course they're useful for searching the tuning-math archives (but
> generator pairs were more commonly seen in earlier messages).

Like that.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/15/2008 6:57:00 PM

Carl Lumma wrote:
>>>> For example, last I remember it
>>>> had to refer to integer ratios and not intervals taken from
>>>> an inharmonic timbre.
>>> ????????????????????????????????????????????????????
>> "A val is a homomorphism of Np[the p-limit rational numbers]
>> to the integers Z."
>
> What does this have to do with timbre? Let me answer that
> for you: nothing. It maps primes to scale steps, and that's
> all. If you care about primes, you've already decided on a
> way to assign fundamentals to whatever timbre you're working
> with. If you're not working with a pitched timbre, you
> probably don't care about approximating consonances.

I don't want to get into an argument about inharmonic timbres. The fact is there is a history of people caring about them and there's no good reason we can't accommodate them. It's certainly something I was allowing for before the word "val" came along, and I've referred to it in the series of articles I'm working on now, so the word "val" as strictly defined isn't correct for me.

It may be that Gene didn't intend it to be so restrictive. I think there are comments in the archives where he allowed for other timbres. But I haven't seen a revised definition and using it outside his definition is problematic.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/15/2008 7:06:19 PM

Carl Lumma wrote:

>> and neither are unison vectors.
>
> I happen to think Gene's comma sequences are an excellent way to
> identify temperaments (better for musicians than wedgies).
> There are still some open problems with the method. Unfortunately
> Gene was too busy fending off terminology attacks to solve them.

What's a comma sequence?

Graham

🔗Graham Breed <gbreed@gmail.com>

3/15/2008 7:15:41 PM

Carl Lumma wrote:
> I happen to think Gene's comma sequences are an excellent way to
> identify temperaments (better for musicians than wedgies).
> There are still some open problems with the method. Unfortunately
> Gene was too busy fending off terminology attacks to solve them.

Oh, they're on his website:

http://web.archive.org/web/20070317030821/66.98.148.43/~xenharmo/commaseq.htm

I never liked that idea because the commas can get so large.

What are the open problems? He says the Hermite comma sequence always works. Torsion, maybe? I worked out a normal form for mappings, which I posted here and got no comments on, that should also work for commas. I don't know if it has a standard name.
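For illustration only (this is not necessarily the normal form Graham means): Euclidean reduction on the leading entries of two ET vals brings a pair to a period-generator form, e.g. 12- and 19-equal reduce to the familiar 5-limit meantone mapping.

```python
def reduce_vals(a, b):
    """Euclidean reduction on the first entries of a pair of vals."""
    a, b = list(a), list(b)
    while b[0] != 0:
        q = a[0] // b[0]
        a, b = b, [x - q * y for x, y in zip(a, b)]
    return a, b

period, generator = reduce_vals([12, 19, 28], [19, 30, 44])
print(period, generator)  # [1, 2, 4] [0, -1, -4] -- 5-limit meantone
```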

Graham

🔗Herman Miller <hmiller@IO.COM>

3/15/2008 7:43:22 PM

Carl Lumma wrote:

>> You're assuming they'll think to ask about it. I know it confused me at
>> first -- it's an ambiguity that isn't immediately apparent. (It might
>> not be such a problem if you pronounce both words the same way.) The
>> thing is, it looks like a common word that doesn't need to be looked up
>> or asked about.
>
> MOS looks like a common word?? I was immediately drawn in by it,
> and wrote to John Chalmers asking him to explain it. Which he did.

Where do you get that I'm saying MOS looks like a common word? It's "patent" that looks like a common word. Ah, forget it.

🔗Herman Miller <hmiller@IO.COM>

3/15/2008 8:18:19 PM

Graham Breed wrote:
> Herman Miller wrote:
>>
>> [< 1, 1, 3, 3, 2],
>> < 0, 6, -7, -2, 15]>
>>
>> you could write it using the generator pairs as <[1, 0>, [1, 6>, [3,
>> -7>, [3, -2>, [2, 15>]. The thing about these generator pairs is that
>> they're actually useful by themselves; each pair tells you how to
>> approximate a particular prime interval. All temperament thingies with
>> an octave as the period and a fourth as the generator can be grouped
>> together with the first two generator pairs as <[1, 0>, [2, -1>, ...].
>> So they're useful for higher level temperament thingy classification.
>
> I don't think that works, and it's something we had a few
> years to think about. It's easier to recognize a
> temperament class from a generator mapping if it's a single
> vector.

By "that", do you mean the idea of classification, or just the notation using the generator pairs? As far as recognizing a temperament thingy, it does work nicely in most cases, but there are some complications. For example:

<0, 0, -1, 0] could be blacksmith [<5, 8, 12, 14], <0, 0, -1, 0]>, but it could also be...

[<17, 27, 40, 48], <0, 0, -1, 0]>
[<24, 38, 56, 67], <0, 0, -1, 0]>
[<27, 43, 63, 76], <0, 0, -1, 0]>
[<36, 57, 84, 101], <0, 0, -1, 0]>

etc.

For the classification, the idea is that all temperament thingies with e.g. a generator that divides the fifth into two equal parts would be in a <[1, 0>, [1, 2>, ...] class. That would include thingies like beatles, semififths, and hemififths. If you just go by the <0, 2], you'll also end up including thingies like shrutar (with a quarter-tone generator).

[<1, 1, 5, 4], <0, 2, -9, -4]> beatles
[<1, 1, 0, 6], <0, 2, 8, -11]> semififths
[<1, 1, -5, -1], <0, 2, 25, 13]> hemififths
[<2, 3, 5, 5], <0, 2, -4, 7]> shrutar
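Herman's grouping can be sketched in a few lines of Python. The helper name `generator_pairs` is hypothetical (not from anyone's actual software); the mappings are the four examples above. The first two generator pairs pick out the class with an octave period and a half-fifth generator, which keeps shrutar out:

```python
def generator_pairs(mapping):
    """Turn a [period map, generator map] pair of rows into
    per-prime (period, generator) pairs."""
    period_map, gen_map = mapping
    return list(zip(period_map, gen_map))

beatles    = [[1, 1,  5,  4], [0, 2, -9,  -4]]
semififths = [[1, 1,  0,  6], [0, 2,  8, -11]]
hemififths = [[1, 1, -5, -1], [0, 2, 25,  13]]
shrutar    = [[2, 3,  5,  5], [0, 2, -4,   7]]

# Class prefix <[1, 0>, [1, 2>, ...]: octave period, generator
# dividing the fifth into two equal parts.
prefix = [(1, 0), (1, 2)]

for name, m in [("beatles", beatles), ("semififths", semififths),
                ("hemififths", hemififths), ("shrutar", shrutar)]:
    print(name, generator_pairs(m)[:2] == prefix)
```

Run as-is, the first three mappings match the prefix and shrutar (whose first pairs are (2, 0), (3, 2)) does not, which is exactly the distinction Herman is after.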

🔗Carl Lumma <carl@lumma.org>

3/15/2008 8:24:38 PM

At 06:57 PM 3/15/2008, you wrote:
>Carl Lumma wrote:
>>>>> For example, last I remember it
>>>>> had to refer to integer ratios and not intervals taken from
>>>>> an inharmonic timbre.
>>>> ????????????????????????????????????????????????????
>>> "A val is a homomorphism of Np[the p-limit rational numbers]
>>> to the integers Z."
>>
>> What does this have to do with timbre? Let me answer that
>> for you: nothing. It maps primes to scale steps, and that's
>> all. If you care about primes, you've already decided on a
>> way to assign fundamentals to whatever timbre you're working
>> with. If you're not working with a pitched timbre, you
>> probably don't care about approximating consonances.
>
>I don't want to get into an argument about inharmonic
>timbres. The fact is there is a history of people caring
>about them and there's no good reason we can't accommodate
>them. It's certainly something I was allowing for before
>the word "val" came along, and I've referred to it in the
>series of articles I'm working on now, so the word "val" as
>strictly defined isn't correct for me.

What timbres don't you think we can accommodate with the
theory as it stands?

>It may be that Gene didn't intend it to be so restrictive.
>I think there are comments in the archives where he allowed
>for other timbres.

Gene's definition does not restrict timbre choice.
At least not any more (and probably less) than "limit" does.
How are you getting that?

-Carl

🔗Carl Lumma <carl@lumma.org>

3/15/2008 8:30:54 PM

Graham wrote...

>> I happen to think Gene's comma sequences are an excellent way to
>> identify temperaments (better for musicians than wedgies).
>> There are still some open problems with the method. Unfortunately
>> Gene was too busy fending off terminology attacks to solve them.
>
>What's a comma sequence?

A canonical way to complete the kernel of a temperament, such
that commas are only added to it as the mapping of the temperament
is extended to higher limits. From them we can categorize
temperaments into "family trees". This concept has been discussed
at intervals on this list, and Gene has a page devoted to it.

-Carl

🔗Carl Lumma <carl@lumma.org>

3/15/2008 8:36:40 PM

At 07:43 PM 3/15/2008, you wrote:
>Carl Lumma wrote:
>
>>> You're assuming they'll think to ask about it. I know it confused me at
>>> first -- it's an ambiguity that isn't immediately apparent. (It might
>>> not be such a problem if you pronounce both words the same way.) The
>>> thing is, it looks like a common word that doesn't need to be looked up
>>> or asked about.
>>
>> MOS looks like a common word?? I was immediately drawn in by it,
>> and wrote to John Chalmers to explain it. Which he did.
>
>Where do you get that I'm saying MOS looks like a common word? It's
>"patent" that looks like a common word. Ah, forget it.

Sorry. Patent *is* a common word that doesn't need to be
looked up. Why do you think they call the legal things "patents"
in the first place?

-Carl

🔗Carl Lumma <carl@lumma.org>

3/15/2008 8:53:49 PM

At 07:15 PM 3/15/2008, you wrote:
>Carl Lumma wrote:
>> I happen to think Gene's comma sequences are an excellent way to
>> identify temperaments (better for musicians than wedgies).
>> There are still some open problems with the method. Unfortunately
>> Gene was too busy fending off terminology attacks to solve them.
>
>Oh, they're on his website:
>
>http://web.archive.org/web/20070317030821/66.98.148.43/~xenharmo/commaseq.htm
>
>I never liked that idea because the commas can get so large.
>
>What are the open problems? He says the Hermite comma
>sequence always works.

Yes, but the commas are larger than the "bridge comma" method,
which doesn't always work.

>I worked out a
>normal form for mappings, which I posted here and got no
>comments on, that should also work for commas. I don't know
>if it has a standard name.

Can you use it to provide an alternate list to the one
published at

/tuning-math/message/11714?var=0&l=1

or any of the examples on the xenharmony page?

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/15/2008 9:01:54 PM

Herman Miller wrote:
> Graham Breed wrote:
>> Herman Miller wrote:
>>> [< 1, 1, 3, 3, 2],
>>> < 0, 6, -7, -2, 15]>
>>>
>>> you could write it using the generator pairs as <[1, 0>, [1, 6>, [3,
>>> -7>, [3, -2>, [2, 15>]. The thing about these generator pairs is that
>>> they're actually useful by themselves; each pair tells you how to
>>> approximate a particular prime interval. All temperament thingies with
>>> an octave as the period and a fourth as the generator can be grouped
>>> together with the first two generator pairs as <[1, 0>, [2, -1>, ...].
>>> So they're useful for higher level temperament thingy classification.
>> I don't think that works, and it's something we had a few
>> years to think about. It's easier to recognize a
>> temperament class from a generator mapping if it's a single
>> vector.
>
> By "that", do you mean the idea of classification, or just the notation
> using the generator pairs? As far as recognizing a temperament thingy,
> it does work nicely in most cases, but there are some complications. For
> example:

I mean the notation by generator pairs doesn't work.

> <0, 0, -1, 0] could be blacksmith [<5, 8, 12, 14], <0, 0, -1, 0]>, but
> it could also be...

I get [<5, 8, 11, 14], <0, 0, 1, 0]> as my "canonical mapping" (probably a term I'll have to change) for blacksmith :P

If you want to identify a rank 2 class with a single val, you should multiply through by the number of periods to the octave. So blacksmith can be identified with <0, 0, 5, 0]. This is also the octave equivalent part of the wedgie <<0, 5, 0]] with a redundant 0.

Given that, it's usually a practically unique identifier for a temperament class. There are other theoretical temperaments that will share it, but they're rubbish. Sometimes you may find a pair of marginal temperaments that get the same identifier. My test scripts come up with a few of these in higher limits. I take it to mean the search parameters are wrong. You also have to explicitly exclude temperament-like things with contorsion.
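Graham's single-val identifier can be sketched as a toy function (the name `rank2_identifier` is mine, not from any of the list's software; mappings are given as period and generator rows, with the number of periods to the octave as the first entry of the period row):

```python
def rank2_identifier(mapping):
    """Multiply the generator row through by the number of
    periods to the octave."""
    period_row, gen_row = mapping
    periods_per_octave = period_row[0]
    return [periods_per_octave * g for g in gen_row]

# Blacksmith, in the canonical form Graham gives above:
blacksmith = [[5, 8, 11, 14], [0, 0, 1, 0]]
print(rank2_identifier(blacksmith))  # [0, 0, 5, 0]
```

This reproduces the <0, 0, 5, 0] identifier for blacksmith, and all of Herman's alternatives with different period sizes would give different identifiers.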

Gene came up with a way of finding the full mapping from this identifier. Something to do with rounding a wedgie to the nearest integers. You can also use a mixed integer linear programming library to find the best one.

> [<17, 27, 40, 48], <0, 0, -1, 0]>
> [<24, 38, 56, 67], <0, 0, -1, 0]>
> [<27, 43, 63, 76], <0, 0, -1, 0]>
> [<36, 57, 84, 101], <0, 0, -1, 0]>
>
> etc.

Different numbers of periods to the octave.

> For the classification, the idea is that all temperament thingies with
> e.g. a generator that divides the fifth into two equal parts would be in
> a <[1, 0>, [1, 2>, ...] class. That would include thingies like beatles,
> semififths, and hemififths. If you just go by the <0, 2], you'll also
> end up including thingies like shrutar (with a quarter-tone generator).
>
> [<1, 1, 5, 4], <0, 2, -9, -4]> beatles
> [<1, 1, 0, 6], <0, 2, 8, -11]> semififths
> [<1, 1, -5, -1], <0, 2, 25, 13]> hemififths
> [<2, 3, 5, 5], <0, 2, -4, 7]> shrutar

This is fine as long as you have a unique way of choosing the generator. Usually the smallest positive one. But that means you need a standard tuning. Sometimes it isn't obvious until you optimize a temperament what the relative sizes of generator candidates will be. I don't know of any case where the same temperament class will be any good but entail different period-mappings for different optimizations. Still, I wouldn't rule it out.

The main reason I worked out a different normal form is that I can get it faster than calculating the optimal generator size. That's important because I don't need to optimize the tuning to get the optimal error. It's less useful as a human-readable identifier. But humans have to remember that different standards of period-generator mapping will be used.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/15/2008 9:43:56 PM

Carl Lumma wrote:
> At 07:15 PM 3/15/2008, you wrote:

>> What are the open problems? He says the Hermite comma
>> sequence always works.
>
> Yes, but the commas are larger than the "bridge comma" method,
> which doesn't always work.

He says "The Hermite comma sequence is uniquely defined in all cases, and can always be used to define family relationships for temperaments." The problem I found with something like Hermite reduction is that it can introduce torsion, but still what he said there is true of it.

>> I worked out a
>> normal form for mappings, which I posted here and got no
>> comments on, that should also work for commas. I don't know
>> if it has a standard name.
>
> Can you use it to provide an alternate list to the one
> published at
>
> /tuning-math/message/11714?var=0&l=1
>
> or any of the examples on the xenharmony page?

Sorry, I still can't reach Yahoo! Groups. The example at the bottom of Gene's page comes out as

>>> invariant.lattice_reduction([[-4, 4, -1, 0, 0],[1,2,-3,1,0],[-7,-1,1,1,1]])
[[1, 0, 11, -7, -2], [0, 1, 64, -40, -11], [0, 0, 71, -44, -12]]

That is

[[1 0 11 -7 -2>, [0 1 64 -40 -11>, [0 0 71 -44 -12>]

which is completely different to Gene's result. Oh, of course, he's simplifying in the higher primes. So I get

>>> invariant.lattice_reduction([[0, 0, -1, 4, -1], [0, 1, -3, 2, 1], [1,1,1,-1,-7]])
[[1, 0, 0, 13, -12], [0, 1, 0, -10, 4], [0, 0, 1, -4, 1]]

The right way round and in ket form:

[[-12 13 0 0 1>, [4 -10 0 1 0>, [1 -4 1 0 0>]

c.f.

[<-4 4 -1 0|, <-13 10 0 -1 0|, <-24 13 0 0 1|]

(Why is Gene writing them as bras?)

If you want the "invariant" module, I put it temporarily at http://x31eq.com/invariant.py (which is to say it's there, but I won't update it or move it if I change hosts). It has dependencies on other Python modules I wrote which you can probably work around if all you want is the reduction algorithm.
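The reduction in the transcript above can be illustrated with a toy integer row reduction. This is not Graham's `invariant.py`; it is a simplified sketch that only handles the easy case where a ±1 pivot exists in each column (the general case needs a proper algorithm, e.g. to avoid introducing torsion), but that suffices to reproduce the example:

```python
def lattice_reduction(rows):
    """Toy integer row reduction: drive the leading square block
    toward the identity, assuming a +/-1 pivot can be found in each
    column.  A sketch only, not the real invariant.py."""
    rows = [list(r) for r in rows]
    r = 0
    for col in range(len(rows[0])):
        if r == len(rows):
            break
        # Find a row at or below r with a unit entry in this column.
        pivot = next((i for i in range(r, len(rows))
                      if abs(rows[i][col]) == 1), None)
        if pivot is None:
            continue  # general case not handled in this sketch
        rows[r], rows[pivot] = rows[pivot], rows[r]
        if rows[r][col] < 0:
            rows[r] = [-x for x in rows[r]]
        # Clear this column from every other row.
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return rows

print(lattice_reduction([[0, 0, -1, 4, -1],
                         [0, 1, -3, 2, 1],
                         [1, 1, 1, -1, -7]]))
# [[1, 0, 0, 13, -12], [0, 1, 0, -10, 4], [0, 0, 1, -4, 1]]
```

With the inputs the right way round, the identity matrix ends up in the leading square block and the result matches the transcript above.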

Graham

🔗Carl Lumma <carl@lumma.org>

3/15/2008 10:52:07 PM

At 09:43 PM 3/15/2008, you wrote:
>Carl Lumma wrote:
>> At 07:15 PM 3/15/2008, you wrote:
>
>>> What are the open problems? He says the Hermite comma
>>> sequence always works.
>>
>> Yes, but the commas are larger than the "bridge comma" method,
>> which doesn't always work.
>
>He says "The Hermite comma sequence is uniquely defined in
>all cases, and can always be used to define family
>relationships for temperaments."

It's the other method which isn't uniquely defined.

>The problem I found with
>something like Hermite reduction is that it can introduce
>torsion, but still what he said there is true of it.

The other problem is that the commas tend to be unnatural.

>>> I worked out a
>>> normal form for mappings, which I posted here and got no
>>> comments on, that should also work for commas. I don't know
>>> if it has a standard name.
>>
>> Can you use it to provide an alternate list to the one
>> published at
>>
>> /tuning-math/message/11714?var=0&l=1
>>
>> or any of the examples on the xenharmony page?
>
>Sorry, I still can't reach Yahoo! Groups.

Blast.

>The example at
>the bottom of Gene's page comes out as
//
>The right way round and in ket form:
>
>[[-12 13 0 0 1>, [4 -10 0 1 0>, [1 -4 1 0 0>]
>
>c.f.
>
>[<-4 4 -1 0|, <-13 10 0 -1 0|, <-24 13 0 0 1|]

Hmm. It doesn't look like you're reducing around the 5-limit
comma. Do you know if the sequences produced with your method
uniquely define a temperament?

>(Why is Gene writing them as bras?)

Probably a mistake.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/15/2008 11:33:51 PM

Carl Lumma wrote:
> At 09:43 PM 3/15/2008, you wrote:

>> The problem I found with
>> something like Hermite reduction is that it can introduce
>> torsion, but still what he said there is true of it.

Strictly speaking, this was a kind of reduced row echelon form. One source I found said Hermite reduction was the same as reduced row echelon form but I'm sure I've read different definitions and I really don't know what's going on.

The Mathworld definition of "Hermite Normal Form" archived along with Gene's page says the transformation should be unimodular, which means no torsion gets added.

> The other problem is that the commas tend to be unnatural.

I think that's inevitable with this approach.

>> The example at
>> the bottom of Gene's page comes out as
> //
>> The right way round and in ket form:
>>
>> [[-12 13 0 0 1>, [4 -10 0 1 0>, [1 -4 1 0 0>]
>>
>> c.f.
>>
>> [<-4 4 -1 0|, <-13 10 0 -1 0|, <-24 13 0 0 1|]
>
> Hmm. It doesn't look like you're reducing around the 5-limit
> comma. Do you know if the sequences produced with your method
> uniquely define a temperament?

I thought about this over lunch, so now I can check it and see what's going on. Sure enough, there's a mistake: I entered the 5-limit comma wrongly. So what I get is:

>>> invariant.lattice_reduction([[0, 0, -1, 4, -4], [0, 1, -3, 2, 1], [1,1,1,-1,-7]])
[[1, 0, 0, 13, -24], [0, 1, 0, -10, 13], [0, 0, 1, -4, 4]]

That agrees with Gene, as it should, as there's a unique way of getting the identity matrix in the right-hand square bit. If our methods ever differ it's in what happens when that isn't possible with integers.

You can certainly go from these unison vectors and get a temperament class. I believe the reduction is unique for any given matrix but I can't prove it. Maybe it is, indeed, equivalent to Hermite normal form. (The historical Mathworld definition says certain things I make positive should be negative. But then it also insists on square matrices so I don't trust it.)

Graham

🔗Graham Breed <gbreed@gmail.com>

3/16/2008 4:46:14 AM

Carl Lumma wrote:

> What timbres don't you think we can accommodate with the
> theory as it stands?

The theory as it stands is fine. The theory emasculated to fit the definition of "val" archived from Gene's website would have trouble with Tony Salinas's conical bells, for example.

>> It may be that Gene didn't intend it to be so restrictive.
>> I think there are comments in the archives where he allowed
>> for other timbres.
>
> Gene's definition does not restrict timbre choice.
> At least not any more (and probably less) than "limit" does.
> How are you getting that?

From the part I quoted before. What do you mean by "limit"?

On a more positive note, I think I was wrong about vals allowing for other kinds of transformations. The definition involving primes makes it fairly strict. So maybe there's a closer match to what he said and what I might want to say.

Graham

🔗Carl Lumma <carl@lumma.org>

3/16/2008 11:17:27 AM

At 04:46 AM 3/16/2008, you wrote:
>Carl Lumma wrote:
>
>> What timbres don't you think we can accommodate with the
>> theory as it stands?
>
>The theory as it stands is fine. The theory emasculated to
>fit the definition of "val" archived from Gene's website

How does this definition "emasculate" whatever you think
the theory as it stands is?

>would have trouble with Tony Salinas's conical bells, for
>example.

All it says is you care about primes. If Tony cares about
primes, it works. If he doesn't, it doesn't.

>>> It may be that Gene didn't intend it to be so restrictive.
>>> I think there are comments in the archives where he allowed
>>> for other timbres.
>>
>> Gene's definition does not restrict timbre choice.
>> At least not any more (and probably less) than "limit" does.
>> How are you getting that?
>
> From the part I quoted before. What do you mean by "limit"?

Harmonic limit.

-Carl

🔗Herman Miller <hmiller@IO.COM>

3/16/2008 4:13:01 PM

Carl Lumma wrote:
> At 07:43 PM 3/15/2008, you wrote:
>> Carl Lumma wrote:
>>
>>>> You're assuming they'll think to ask about it. I know it confused me at
>>>> first -- it's an ambiguity that isn't immediately apparent. (It might
>>>> not be such a problem if you pronounce both words the same way.) The
>>>> thing is, it looks like a common word that doesn't need to be looked up
>>>> or asked about.
>>> MOS looks like a common word?? I was immediately drawn in by it,
>>> and wrote to John Chalmers to explain it. Which he did.
>> Where do you get that I'm saying MOS looks like a common word? It's
>> "patent" that looks like a common word. Ah, forget it.
>
> Sorry. Patent *is* a common word that doesn't need to be
> looked up. Why do you think they call the legal things "patents"
> in the first place?

I don't know if you're deliberately being obtuse or what, but the word "patent" meaning "obvious" (an adjective) looks like the common word "patent" meaning "limited monopoly" (a noun). (Indeed, if an idea is obvious, in theory you're not supposed to be able to get a patent on it.) In my experience, the adjective "patent" is hardly ever used except in a few fixed expressions like "patent nonsense" (and recently, "patent val").

I wonder, though, do we yet have a word for a val that isn't the nearest prime mapping? Most temperaments can be built from patent vals, but a few require one of the primes to be mapped to the second-best mapping. We could then call it an "obscure val" or "covert val" or something of the sort, making the contrast with "patent" more apparent.....

Take the temperament "octokaidecal" [<2, 3, 4, 5, 7], <0, 1, 3, 3, 0]> for example. You can't build this from patent vals; you need to use an "obscure" val for one of the ET's.

8-ET: [<8, 13, 19, 23, 28]> (obscure)
10-ET: [<10, 16, 23, 28, 35]> (patent)
wedgie: <<2, 6, 6, 0, 5, 4, -7, -3, -21, -21]]
generator mapping: [<2, 3, 4, 5, 7], <0, 1, 3, 3, 0]>

The patent val for 8-ET is [<8, 13, 19, 22, 28]>.
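The nearest-prime ("patent") vals here are easy to compute: round n times the base-2 logarithm of each prime. A minimal sketch (the function name is mine):

```python
import math

def patent_val(n, primes=(2, 3, 5, 7, 11)):
    """Nearest-prime mapping for n equal steps to the octave."""
    return [round(n * math.log2(p)) for p in primes]

print(patent_val(10))  # [10, 16, 23, 28, 35]
print(patent_val(8))   # [8, 13, 19, 22, 28]
```

This reproduces the patent vals quoted above; the "obscure" 8-ET val for octokaidecal differs only in taking the second-best mapping of prime 7 (23 steps instead of 22).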

🔗Carl Lumma <carl@lumma.org>

3/16/2008 4:40:47 PM

>>>>> You're assuming they'll think to ask about it. I know it confused me at
>>>>> first -- it's an ambiguity that isn't immediately apparent. (It might
>>>>> not be such a problem if you pronounce both words the same way.) The
>>>>> thing is, it looks like a common word that doesn't need to be looked up
>>>>> or asked about.
>>>> MOS looks like a common word?? I was immediately drawn in by it,
>>>> and wrote to John Chalmers to explain it. Which he did.
>>> Where do you get that I'm saying MOS looks like a common word? It's
>>> "patent" that looks like a common word. Ah, forget it.
>>
>> Sorry. Patent *is* a common word that doesn't need to be
>> looked up. Why do you think they call the legal things "patents"
>> in the first place?
>
>I don't know if you're deliberately being obtuse or what,

The "sorry" by the way was sincere, for misunderstanding there.

>but the word
>"patent" meaning "obvious" (an adjective) looks like the common word
>"patent" meaning "limited monopoly" (a noun). (Indeed, if an idea is
>obvious, in theory you're not supposed to be able to get a patent on
>it.)

Patent means 'manifest'. That's why the legal things are
called "patents". And they long predate their use for protecting
inventions.

>In my experience, the adjective "patent" is hardly ever used except
>in a few fixed expressions like "patent nonsense" (and recently, "patent
>val").

"Patently obvious" is very common.

>I wonder, though, do we yet have a word for a val that isn't the nearest
>prime mapping? Most temperaments can be built from patent vals, but a
>few require one of the primes to be mapped to the second-best mapping.

Usually it's better in some other sense ("all the intervals"), right?

>We could then call it an "obscure val" or "covert val" or something of
>the sort, making the contrast with "patent" more apparent.....

How about the "optimal val". Like "optimal generator".

>Take the temperament "octokaidecal" [<2, 3, 4, 5, 7], <0, 1, 3, 3, 0]>
>for example. You can't build this from patent vals; you need to use an
>"obscure" val for one of the ET's.
>
>8-ET: [<8, 13, 19, 23, 28]> (obscure)
>10-ET: [<10, 16, 23, 28, 35]> (patent)

I think the 8-ET val has a TOP damage of 14.967. The patent
val (8 13 19 22 28) has the much higher TOP damage of 27.349.
So this does look like a case of needing the optimal val.

-Carl

🔗Herman Miller <hmiller@IO.COM>

3/16/2008 4:47:07 PM

Graham Breed wrote:
> Herman Miller wrote:
>> <0, 0, -1, 0] could be blacksmith [<5, 8, 12, 14], <0, 0, -1, 0]>, but
>> it could also be...
>
> I get [<5, 8, 11, 14], <0, 0, 1, 0]> as my "canonical
> mapping" (probably a term I'll have to change) for blacksmith :P

I like to keep the generator less than half the size of the period (using TOP or TOP-RMS as test cases). There are difficult cases like jamesbond [<7, 11, 16, 19], <0, 0, 0, 1]> where the generator is really close to half the size of the period, but in most cases the result is pretty clear. Of course, any temperament has many possible mappings, but in most cases like this, if you don't find what you're looking for you can negate the signs (e.g., the other common mapping of jamesbond is [<7, 11, 16, 20], <0, 0, 0, -1]>).
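Herman's sign-negation trick can be sketched as a toy function (hypothetical name). It relies on the identity a*P + b*g = (a+b)*P - b*(P - g), so swapping the generator g for its complement P - g adds the generator row into the period row and negates the generator row:

```python
def flip_generator(mapping):
    """Replace the generator g with period - g.  Each prime mapped as
    a*P + b*g becomes (a+b)*P - b*g'."""
    period_row, gen_row = mapping
    return [[a + b for a, b in zip(period_row, gen_row)],
            [-b for b in gen_row]]

jamesbond = [[7, 11, 16, 19], [0, 0, 0, 1]]
print(flip_generator(jamesbond))  # [[7, 11, 16, 20], [0, 0, 0, -1]]
```

That converts between the two common mappings of jamesbond given above; keeping whichever generator is smaller than half the period gives a canonical choice.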

> If you want to identify a rank 2 class with a single val,
> you should multiply through by the number of periods to the
> octave. So blacksmith can be identified with <0, 0, 5, 0].
> This is also the octave equivalent part of the wedgie <<0,
> 5, 0]] with a redundant 0.
>
> Given that, it's usually a practically unique identifier for
> a temperament class. There are other theoretical
> temperaments that will share it, but they're rubbish.

Good point. When I've run across those kinds of temperaments, I never give them a second examination. And most of the other cases of mistaken identity have different period sizes.

>> For the classification, the idea is that all temperament thingies with
>> e.g. a generator that divides the fifth into two equal parts would be in
>> a <[1, 0>, [1, 2>, ...] class. That would include thingies like beatles,
>> semififths, and hemififths. If you just go by the <0, 2], you'll also
>> end up including thingies like shrutar (with a quarter-tone generator).
>>
>> [<1, 1, 5, 4], <0, 2, -9, -4]> beatles
>> [<1, 1, 0, 6], <0, 2, 8, -11]> semififths
>> [<1, 1, -5, -1], <0, 2, 25, 13]> hemififths
>> [<2, 3, 5, 5], <0, 2, -4, 7]> shrutar
>
> This is fine as long as you have a unique way of choosing
> the generator. Usually the smallest positive one. But that
> means you need a standard tuning. Sometimes it isn't
> obvious until you optimize a temperament what the relative
> sizes of generator candidates will be. I don't know of any
> case where the same temperament class will be any good but
> entail different period-mappings for different
> optimizations. Still, I wouldn't rule it out.
>
> The main reason I worked out a different normal form is that
> I can get it faster than calculating the optimal generator
> size. That's important because I don't need to optimize the
> tuning to get the optimal error. It's less useful as a
> human-readable identifier. But humans have to remember that
> different standards of period-generator mapping will be used.

I haven't measured how long it takes to calculate the TOP generators vs. just getting the TOP error, but for my purposes it's fast enough. I have so many temperaments already that I can't even begin to use a tiny fraction of them for music. Even basic things like evaluating how to notate them with Sagittal or map them to a keyboard are time-consuming. Word takes forever to sort a table.

🔗Carl Lumma <carl@lumma.org>

3/16/2008 4:54:07 PM

>I haven't measured how long it takes to calculate the TOP generators vs.
>just getting the TOP error, but for my purposes it's fast enough. I have
>so many temperaments already that I can't even begin to use a tiny
>fraction of them for music. Even basic things like evaluating how to
>notate them with Sagittal or map them to a keyboard are time-consuming.

Just use a more stringent badness limit.

>Word takes forever to sort a table.

Use Excel instead.

-Carl

🔗Herman Miller <hmiller@IO.COM>

3/16/2008 6:07:07 PM

Carl Lumma wrote:

> "Patently obvious" is very common.

Of course, that's an adverb. No ambiguity there.

>> I wonder, though, do we yet have a word for a val that isn't the nearest
>> prime mapping? Most temperaments can be built from patent vals, but a
>> few require one of the primes to be mapped to the second-best mapping.
>
> Usually it's better in some other sense ("all the intervals"), right?

I don't have any hard numbers to go on, but I'd guess that some intervals are improved at the cost of others. For making a good temperament, the important thing is that the ETs are aligned so that the line between them passes near JI. I'd guess that in cases where an "obscure" val is of use in making temperaments, the prime interval falls close to midway between two steps of one of the ET's.

>> We could then call it an "obscure val" or "covert val" or something of
>> the sort, making the contrast with "patent" more apparent.....
>
> How about the "optimal val". Like "optimal generator".

I don't know that it's always the optimal val. It depends on the other ET.

>> Take the temperament "octokaidecal" [<2, 3, 4, 5, 7], <0, 1, 3, 3, 0]>
>> for example. You can't build this from patent vals; you need to use an
>> "obscure" val for one of the ET's.
>>
>> 8-ET: [<8, 13, 19, 23, 28]> (obscure)
>> 10-ET: [<10, 16, 23, 28, 35]> (patent)
>
> I think the 8-ET val has a TOP damage of 14.967. The patent
> val (8 13 19 22 28) has the much higher TOP damage of 27.349.
> So this does look like a case of needing the optimal val.

In this case it does seem to be optimal. I tried adjusting the other numbers and couldn't find anything better. Temperaments using suboptimal vals do exist, but whether any of them are of much use is another question.

8-ET: [<8, 13, 19, 22, 27]> (obscure)
11-ET: [<11, 18, 26, 31, 38]> (obscure)
wedgie: <<1, -1, 6, 7, -4, 7, 8, 17, 20, -1]]
generator mapping: [<1, 2, 2, 5, 6], <0, -1, 1, -6, -7]>
TOP error: 19.403638

🔗Carl Lumma <carl@lumma.org>

3/16/2008 7:33:24 PM

>>> We could then call it an "obscure val" or "covert val" or something of
>>> the sort, making the contrast with "patent" more apparent.....
>>
>> Howabout the "optimal val". Like "optimal generator".
>
>I don't know that it's always the optimal val. It depends on the other ET.
>
>>> Take the temperament "octokaidecal" [<2, 3, 4, 5, 7], <0, 1, 3, 3, 0]>
>>> for example. You can't build this from patent vals; you need to use an
>>> "obscure" val for one of the ET's.
>>>
>>> 8-ET: [<8, 13, 19, 23, 28]> (obscure)
>>> 10-ET: [<10, 16, 23, 28, 35]> (patent)
>>
>> I think the 8-ET val has a TOP damage of 14.967. The patent
>> val (8 13 19 22 28) has the much higher TOP damage of 27.349.
>> So this does look like a case of needing the optimal val.
>
>In this case it does seem to be optimal. I tried adjusting the other
>numbers and couldn't find anything better. Temperaments using suboptimal
>vals do exist, but whether any of them are of much use is another question.

It's why we did away with consistency. You just say what val
you're using.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/16/2008 7:42:33 PM

Carl Lumma wrote:
> At 04:46 AM 3/16/2008, you wrote:
>> Carl Lumma wrote:
>>
>>> What timbres don't you think we can accommodate with the
>>> theory as it stands?
>> The theory as it stands is fine. The theory emasculated to
>> fit the definition of "val" archived from Gene's website
>
> How does this definition "emasculate" whatever you think
> the theory as it stands is?

The definition itself does nothing. Following the definition strictly would reduce the range of things the theory could talk about.

>> would have trouble with Tony Salinas's conical bells, for
>> example.
>
> All it says is you care about primes. If Tony cares about
> primes, it works. If he doesn't, it doesn't.

Yes. Specifically, prime numbers.

>>>> It may be that Gene didn't intend it to be so restrictive.
>>>> I think there are comments in the archives where he allowed
>>>> for other timbres.
>>> Gene's definition does not restrict timbre choice.
>>> At least not any more (and probably less) than "limit" does.
>>> How are you getting that?
>> From the part I quoted before. What do you mean by "limit"?
>
> Harmonic limit.

What do you mean by "harmonic limit"?

Graham

🔗Graham Breed <gbreed@gmail.com>

3/16/2008 8:06:23 PM

Herman Miller wrote:

> I wonder, though, do we yet have a word for a val that isn't the nearest
> prime mapping? Most temperaments can be built from patent vals, but a
> few require one of the primes to be mapped to the second-best mapping.
> We could then call it an "obscure val" or "covert val" or something of
> the sort, making the contrast with "patent" more apparent.....

Then I'll chime in once again and say that this isn't the best way of categorizing vals (equal temperament mappings). If you want one true val, take the best one (the one with the lowest error). Let's call that the "optimal val" today. But I'd rather say "ambiguous" if there isn't one obvious mapping for the given number of steps to the octave. So either the "standard val" (nearest primes) and the optimal vals for different error measures should all agree, or you specify the mapping. If you invent a rule to specify the default, somebody *will* forget it and make a mistake.

Most temperaments can be built from optimal vals -- certainly everything in my Prime Based Errors and Complexities PDF. It was a while ago now, but my experience of the temperament searches was that I got better results with optimal vals than patent vals. Which is why I've been arguing against patent vals (the concept, not the name) all this time. The best thing, however, is to get *all* equal temperaments within your badness cutoff. There's no magical rule that says there should be a one-to-one mapping between steps to the octave and vals.

I've got another PDF that makes this rule mathematically strict:

http://x31eq.com/complete.pdf

I know you've seen it, at least an older version, because you supplied some temperament class names. I still have barely any feedback on the substance, though, which is why I gave the link.

Another thing, from Prime Based Errors and Complexities

http://x31eq.com/primerr.pdf

There's a badness function with a free parameter which has a geometrical interpretation. The badness of a rank 2 temperament is twice the area of the triangle formed by the two defining vals in "badness space". And similarly for the volume of a rank 3 temperament, and so on. From this you can conclude that the best vals to produce a given temperament class will be the optimal ones (taken individually) by the same badness measure. They'll also be nearly orthogonal. Given that, it's very unlikely that a better mapping of one of them with the same number of steps to the octave will give a better temperament class.

I think this is interesting. It follows from some things Gene and Paul E. were interested in. I've posted more on this list than I included in the PDF but still didn't get many comments. I'm planning to write more on it next month.

> Take the temperament "octokaidecal" [<2, 3, 4, 5, 7], <0, 1, 3, 3, 0]>
> for example. You can't build this from patent vals; you need to use an
> "obscure" val for one of the ET's.
>
> 8-ET: [<8, 13, 19, 23, 28]> (obscure)
> 10-ET: [<10, 16, 23, 28, 35]> (patent)
> wedgie: <<2, 6, 6, 0, 5, 4, -7, -3, -21, -21]]
> generator mapping: [<2, 3, 4, 5, 7], <0, 1, 3, 3, 0]>
>
> The patent val for 8-ET is [<8, 13, 19, 22, 28]>.

That is the optimal val for TOP-RMS.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/16/2008 8:17:13 PM

Herman Miller wrote:

> I like to keep the generator less than half the size of the period
> (using TOP or TOP-RMS as test cases). There are difficult cases like
> jamesbond [<7, 11, 16, 19], <0, 0, 0, 1]> where the generator is really
> close to half the size of the period, but in most cases the result is
> pretty clear. Of course, any temperament has many possible mappings, but
> in most cases like this, if you don't find what you're looking for you
> can negate the signs (e.g., the other common mapping of jamesbond is
> [<7, 11, 16, 20], <0, 0, 0, -1]>).

That's fine. In some cases I take whatever comes out of the extended Euclidean algorithm.

>> If you want to identify a rank 2 class with a single val,
>> you should multiply through by the number of periods to the
>> octave. So blacksmith can be identified with <0, 0, 5, 0].
>> This is also the octave equivalent part of the wedgie
>> <<0, 5, 0]] with a redundant 0.
>>
>> Given that, it's usually a practically unique identifier for
>> a temperament class. There are other theoretical
>> temperaments that will share it, but they're rubbish.
>
> Good point. When I've run across those kinds of temperaments, I never
> give them a second examination. And most of the other cases of mistaken
> identity have different period sizes.

That's how my software's been identifying them all this time. Except for the "wedgie" search where I, naturally, use the wedgie. That's how I know how rare the collisions are (maybe you'll get one in the top 100 in the higher limits).

> I haven't measured how long it takes to calculate the TOP generators vs.
> just getting the TOP error, but for my purposes it's fast enough. I have
> so many temperaments already that I can't even begin to use a tiny
> fraction of them for music. Even basic things like evaluating how to
> notate them with Sagittal or map them to a keyboard are time-consuming.
> Word takes forever to sort a table.

It's something I think about because I do large searches, and I'm also interested in optimizing the algorithm as an abstract problem. Once you have your short-list you can optimize them however you like. It's a fraction of a second to optimize one temperament (by TOP-max), which only becomes important when you string a million of them together. The linear programming library I use (wrapping GLPK) returns the optimal generators along with their error. I think in principle the simplex calculation will take longer than reporting the results.

The RMS calculations are much faster than the minimax ones. I don't think it's calculating the optimal generators that slows things down but deducing the period/generator mapping.

Graham

🔗Carl Lumma <carl@lumma.org>

3/16/2008 9:21:38 PM

>>>> What timbres don't you think we can accommodate with the
>>>> theory as it stands?
>>>
>>> The theory as it stands is fine. The theory emasculated to
>>> fit the definition of "val" archived from Gene's website
>>
>> How does this definition "emasculate" whatever you think
>> the theory as it stands is?
>
>The definition itself does nothing. Following the
>definition strictly would reduce the range of things the
>theory could talk about.

I'd love to see an example.

>>>>> It may be that Gene didn't intend it to be so restrictive.
>>>>> I think there are comments in the archives where he allowed
>>>>> for other timbres.
>>>> Gene's definition does not restrict timbre choice.
>>>> At least not any more (and probably less) than "limit" does.
>>>> How are you getting that?
>>> From the part I quoted before. What do you mean by "limit"?
>>
>> Harmonic limit.
>
>What do you mean by "harmonic limit"?

Either prime or odd limit.

-Carl

🔗Carl Lumma <carl@lumma.org>

3/16/2008 9:26:08 PM

Graham wrote...

>Most temperaments can be built from optimal vals.
>Everything in my Prime Based Errors and Complexities PDF certainly.
> It's a while ago now, but my experience of the temperament
>searches was that I got better results with optimal vals
>than patent vals. Which is why I've been arguing against
>patent vals (the concept, not the name) all this time.

Do you have evidence Gene was missing temperaments
because he was searching only patent vals? I never
understood his interest in them, but I assumed he had
a good reason. Maybe I was being too generous.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/16/2008 9:26:34 PM

Carl Lumma wrote:
>>>>> What timbres don't you think we can accommodate with the
>>>>> theory as it stands?
>>>> The theory as it stands is fine. The theory emasculated to
>>>> fit the definition of "val" archived from Gene's website
>>> How does this definition "emasculate" whatever you think
>>> the theory as it stands is?
>> The definition itself does nothing. Following the
>> definition strictly would reduce the range of things the
>> theory could talk about.
>
> I'd love to see an example.

Ideal tubulongs have always been in the test suite. You've got the Scheme code. Tony's conical bells are for an exhibition at the Science Museum in London next year, I think.

>>>>>> It may be that Gene didn't intend it to be so restrictive.
>>>>>> I think there are comments in the archives where he allowed
>>>>>> for other timbres.
>>>>> Gene's definition does not restrict timbre choice.
>>>>> At least not any more (and probably less) than "limit" does.
>>>>> How are you getting that?
>>>> From the part I quoted before. What do you mean by "limit"?
>>> Harmonic limit.
>> What do you mean by "harmonic limit"?
>
> Either prime or odd limit.

Then you're right. Rational numbers in general are less restrictive than odd or prime limits.

Graham

🔗Carl Lumma <carl@lumma.org>

3/16/2008 9:27:32 PM

>> The patent val for 8-ET is [<8, 13, 19, 22, 28]>.
>
>That is the optimal val for TOP-RMS.

Really?

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/16/2008 9:29:03 PM

Carl Lumma wrote:
>>> The patent val for 8-ET is [<8, 13, 19, 22, 28]>.
>> That is the optimal val for TOP-RMS.
>
> Really?

No, not that one. The one in the example you snipped.

Graham

🔗Carl Lumma <carl@lumma.org>

3/16/2008 9:36:22 PM

>Then you're right. Rational numbers in general are less
>restrictive than odd or prime limits.

The restrictions come from the human hearing system.
If you can't assign a sound a pitch, you can't have
consonance and a temperament is completely useless
to you.

If you can assign a pitch, you can either find an
approximate harmonic series in the timbre or you'll
have one partial that is much louder than others.

If you can find an approximate harmonic series, you
can use Gene's vals. They don't specify a tuning,
just a regular mapping.

If you can only find a single partial, you can still
use Gene's vals. Because then in your music you'll
still have harmonic serii, and hence pitch assignment,
and hence consonance (albeit 1st-order, rather than
the 2nd-order consonance you have with music made
from complex timbres).

Harmonic limits, on the other hand, are a misplaced
concept that we should do away with. We can keep
Gene's vals though (he may say "consecutive" somewhere,
but that's a minor modification... the important part
is that you have a subgroup that can uniquely generate
your gamut and therefore that all pitches have one
mapping through the val).

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/16/2008 9:47:53 PM

Carl Lumma wrote:
> Graham wrote...
>
>> Most temperaments can be built from optimal vals.
>> Everything in my Prime Based Errors and Complexities PDF certainly.
>> It's a while ago now, but my experience of the temperament
>> searches was that I got better results with optimal vals
>> than patent vals. Which is why I've been arguing against
>> patent vals (the concept, not the name) all this time.
>
> Do you have evidence Gene was missing temperaments
> because he was searching only patent vals? I never
> understood his interest in them, but I assumed he had
> a good reason. Maybe I was being too generous.

I was never clear about what he was doing or what he expected to find, so I can't say if he was missing anything. A search for rank 2 temperaments using patent vals with up to 100 steps to the octave is bound to find all rank 2 temperaments generated by patent vals with up to 100 steps to the octave.

Gene did argue that patent val searches were either incomplete or unreliable, but somehow this was a weakness in my method after I stopped using them. Gene also did searches by unison vectors and I'm even less clear about how they worked.

When I did searches by what we're now calling patent vals, I know I missed things, including one temperament class (the details of which escape me) that was deemed interesting by the list. Or perhaps that was a consistent ET search... I'm sure the optimal mappings worked better than patent vals, anyway.

The difference between my approach and Gene's is that he used all ETs within a given number of notes, but I only used the best ones. That should make my search generally more efficient. But maybe Gene will fill in the gaps with more marginal vals.

The very best temperaments will come out of a patent val search, because the ETs that produce them will be so good their worst errors will be less than half a scale step. Maybe you could take my provable search rule and show what cut-off guarantees completeness with patent vals.

Maybe you can't get some of the Middle Path lists with only patent vals. That's something I could check but not right now.

Graham

🔗Carl Lumma <carl@lumma.org>

3/16/2008 11:08:28 PM

At 09:47 PM 3/16/2008, you wrote:
>Carl Lumma wrote:
>> Graham wrote...
>>
>>> Most temperaments can be built from optimal vals.
>>> Everything in my Prime Based Errors and Complexities PDF certainly.
>>> It's a while ago now, but my experience of the temperament
>>> searches was that I got better results with optimal vals
>>> than patent vals. Which is why I've been arguing against
>>> patent vals (the concept, not the name) all this time.
>>
>> Do you have evidence Gene was missing temperaments
>> because he was searching only patent vals? I never
>> understood his interest in them, but I assumed he had
>> a good reason. Maybe I was being too generous.
>
>I was never clear about what he was doing or what he
>expected to find, so I can't say if he was missing anything.
>A search for rank 2 temperaments using patent vals with up
>to 100 steps to the octave is bound to find all rank 2
>temperaments generated by patent vals with up to 100 steps
>to the octave.
>
>Gene did argue that patent val searches were either
>incomplete or unreliable, but somehow this was a weakness in
>my method after I stopped using them.

This post may be helpful

/tuning-math/message/12185

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/17/2008 7:08:37 AM

Carl Lumma wrote:
> At 09:47 PM 3/16/2008, you wrote:

>> Gene did argue that patent val searches were either
>> incomplete or unreliable, but somehow this was a weakness in
>> my method after I stopped using them.
>
> This post may be helpful
>
> /tuning-math/message/12185

Well, not really. Although I managed to load it, which is good, so I picked up some other pages while it lasts.

What he says is that he's invented a new term based on an old term that wasn't important.

The geometric ideas are interesting, and he refers to this page:

/tuning-math/message/12188

Generally, that shows that he was on the right lines but didn't quite get there. He talks about a "JIP" which will be "just intonation point". And in this case an "n*JIP". The thing is there is no unique just intonation point in what he calls "val space".

JI vals form a line. To get a JIP you need a projection, which means there isn't a unique "val space" or "Tenney space". There are two different spaces: complexity space (no projection) and badness space (orthogonal components to the JI line). If he used the right projection (which he got pretty close to doing) and used a weighted Euclidean metric, the result would have been scalar badness. Hence the nearest val to the JIP is the optimal TOP-RMS one. And a similar argument *may* work for TOP-max if you get the geometry right. Paul Erlich's error-error plots are something to do with this.

The "standard val" approach is really about taking distances in complexity space and calling them badnesses. Of course, I didn't know that back when he wrote the message in question. It's a shame he hasn't commented since I tied down these geometric ideas. And it isn't that he wasn't around, because he was around when I discovered them. For some reason he isn't interested any more :(

In one of the replies he mentions searches: "searching all pairs of semistandard vals over some limit is much more likely to be a complete search than simply confining ourselves to pairs of standard vals."

/tuning-math/message/12188

Well, yes. And isn't searching over all equal temperaments with an error less than some function of the number of steps to the octave more likely to work? He never comments on it.

Graham

🔗Herman Miller <hmiller@IO.COM>

3/17/2008 6:48:55 PM

Graham Breed wrote:
> Most temperaments can be built from optimal vals.
> Everything in my Prime Based Errors and Complexities PDF certainly.

I'm sure this must have come up before, but do you know a simple
algorithm for finding optimal vals? The nice thing about using nearest
prime vals as a common point of reference is that they're trivial to
calculate.

> It's a while ago now, but my experience of the temperament
> searches was that I got better results with optimal vals
> than patent vals. Which is why I've been arguing against
> patent vals (the concept, not the name) all this time. The
> best thing, however, is to get *all* equal temperaments
> within your badness cutoff. There's no magical rule that
> says there should be a one-to-one mapping between steps to
> the octave and vals.
>
> I've got another PDF that makes this rule mathematically strict:
>
> http://x31eq.com/complete.pdf
>
> I know you've seen it, at least an older version, because
> you supplied some temperament class names. I still have
> barely any feedback on the substance though which is why I
> gave the link.

I haven't taken the time to follow all the math. Time is something I don't have a lot of these days.

> Another thing, from Prime Based Errors and Complexities
>
> http://x31eq.com/primerr.pdf
>
> There's a badness function with a free parameter which has a
> geometrical interpretation. The badness of a rank 2
> temperament is twice the area of the triangle formed by the
> two defining vals in "badness space". And similarly for the
> volume of a rank 3 temperament, and so on. From this you
> can conclude that the best vals to produce a given
> temperament class will be the optimal ones (taken
> individually) by the same badness measure. They'll also be
> nearly orthogonal. Given that, it's very unlikely that a
> better mapping of one of them with the same number of steps
> to the octave will give a better temperament class.
>
> I think this is interesting. It follows from some things
> Gene and Paul E. were interested in. I've posted more on
> this list than I included in the PDF but still didn't get
> many comments. I'm planning to write more on it next month.

The thing about this list is that everyone has their own areas of interest that don't always overlap, so I've come not to expect many comments. I'm not an expert on mathematical subjects so I don't often comment on things like derivations and formulas.

🔗Herman Miller <hmiller@IO.COM>

3/17/2008 6:56:11 PM

Graham Breed wrote:

> Maybe you can't get some of the Middle Path lists with only > patent vals. That's something I could check but not right now.

Hedgehog is the only Middle Path temperament you can't get with only patent vals.

🔗Graham Breed <gbreed@gmail.com>

3/17/2008 8:56:47 PM

Herman Miller wrote:
> Graham Breed wrote:
>
>> Maybe you can't get some of the Middle Path lists with only
>> patent vals. That's something I could check but not right now.
>
> Hedgehog is the only Middle Path temperament you can't get with only
> patent vals.

Okay.

Note that in "primerr.pdf" I distinguish TOP-RMS mappings from other ones (the "O" is for "optimal"). So you can see that you can get all the Middle Path temperaments with TOP-RMS mappings. Indeed, it looks like there are at least three TOP-RMS ETs belonging to each of the 7-limit families (or rank 2 classes, depending on the definition).

You may notice that, of the 11-limit temperaments you supplied, Keemun has only one TOP-RMS val. It also works with the TOP-RMS val for 4 steps to the octave, but I excluded that because I thought people would dislike calling things with fewer than 5 notes to the octave "equal temperaments".

Graham

🔗Graham Breed <gbreed@gmail.com>

3/17/2008 8:57:17 PM

Herman Miller wrote:
> Graham Breed wrote:
>> Most temperaments can be built from optimal vals.
>> Everything in my Prime Based Errors and Complexities PDF certainly.
>
> I'm sure this must have come up before, but do you know a simple
> algorithm for finding optimal vals? The nice thing about using nearest
> prime vals as a common point of reference is that they're trivial to
> calculate.

No. Doing it right is, unfortunately, relatively difficult. I think it's worth a whole PDF to explain and I'm planning to write such a PDF this year.

Nearest prime vals are easy to calculate, but so are nearest approximations to any other linearly independent set of intervals. You could define standard 5-limit vals according to the nearest approximations to 5:4 and 6:5, for example.
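As a sketch of that alternative (my own illustration, not code from Graham's software): fixing the octave at n steps, nearest approximations to 5:4 and 6:5 determine the maps of 3 and 5, since 5:4 is |-2 0 1> and 6:5 is |1 1 -1>:

```python
from math import log2

def val_from_thirds(n):
    """5-limit val for n-ET defined by the nearest approximations
    to 5:4 and 6:5, with the octave fixed at n steps."""
    steps_54 = round(n * log2(5 / 4))   # major third
    steps_65 = round(n * log2(6 / 5))   # minor third
    v5 = steps_54 + 2 * n               # 5:4 maps to v5 - 2n steps
    v3 = steps_65 + v5 - n              # 6:5 maps to n + v3 - v5 steps
    return [n, v3, v5]

print(val_from_thirds(12))   # [12, 19, 28]
```

For most n this agrees with the nearest-primes val, but the two definitions can disagree when the thirds and the primes round in different directions.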

Generally I'm more interested in finding all equal temperaments within a given error or badness constraint than the single optimal one. The best code for that is now in "complete.py" for STD errors.

A method that'll get you most good mappings is to round each prime both up and down, and take all combinations. This is guaranteed to give an optimal val for TOP-max (there isn't always a unique optimum). It's also exponential in the number of primes, so it becomes very inefficient if you care about such things.
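The round-both-ways method can be sketched directly (my own wording of it); the candidate count grows as 2 to the power of the number of primes, which is the exponential cost mentioned above:

```python
from itertools import product
from math import floor, ceil, log2

def candidate_vals(n, primes=(2, 3, 5, 7)):
    """All vals obtained by rounding each prime's ideal size in steps
    both down and up, then taking every combination."""
    choices = [sorted({floor(n * log2(p)), ceil(n * log2(p))})
               for p in primes]
    return [list(v) for v in product(*choices)]

# 5-limit 12-ET: the octave is exact, so only 2 * 2 = 4 candidates
for val in candidate_vals(12, (2, 3, 5)):
    print(val)
```

You would then score each candidate with your chosen error measure and keep the best.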

Another approach is to calculate the optimal scale stretch as you go along and take the nearest approximations in the light of that scale stretch. As a slight simplification of that (or for octave-equivalent errors) you can take the mapping of each prime with the nearest approximation to the current average weighted error. This is certainly not guaranteed to get the optimal mapping, because sometimes the best mapping of 7 will change when you introduce 11, for example. But you could call this a feature because you don't have to specify the prime limit. You can generate them lazily as you need them.

Where the optimal val isn't the patent val, it's because the best approximations all tend to be in the same direction -- sharp or flat. That's the logic behind taking approximations relative to the average. Also, starting with the lowest primes is best. If the error in 3 is half a scale step it means the error in 9 is a whole scale step. So it's very unlikely that 3 won't have its nearest mapping, and you can assume this when calculating higher primes.
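The octave-equivalent simplification described above can be sketched like this (my interpretation; the scale-stretch version would track the stretch instead of the average error, and as the text warns this greedy pass is not guaranteed optimal):

```python
from math import floor, ceil, log2

def greedy_val(n, primes=(2, 3, 5, 7, 11)):
    """Build a val one prime at a time, choosing the mapping whose
    weighted error is closest to the average weighted error so far."""
    val, errors = [], []
    for p in primes:
        size = log2(p)                       # ideal size in octaves
        avg = sum(errors) / len(errors) if errors else 0.0
        m = min((floor(n * size), ceil(n * size)),
                key=lambda m: abs((m / n - size) / size - avg))
        val.append(m)
        errors.append((m / n - size) / size)
    return val

print(greedy_val(12, (2, 3, 5)))   # the patent val [12, 19, 28]
print(greedy_val(8))               # [8, 13, 19, 23, 28]
```

For 8-ET this picks 23 steps for 7, because 13 and 19 both came out sharp: the "same direction" effect described above.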

Graham

🔗Carl Lumma <carl@lumma.org>

3/17/2008 9:05:16 PM

Herman wrote...
>I'm sure this must have come up before, but do you know a simple
>algorithm for finding optimal vals?

Paul suggests brute force:
>Start with the JI val. Multiply it by every possible positive real
>number (in practice you just need to find a small enough step size to
>increment this constant through) and round the result to the nearest
>integers. Use the TOP-et algorithm on all of these, and calculate
>their TOP damage. Then group the results by the octave part of the
>val and pick out the lowest-damage one in each group.
>
>I think any val that doesn't arise through this process can't be the
>minimum-TOP-damage one (when TOP-tuned) for a given octave
>cardinality.

But in the message from Gene I quoted

/tuning-math/message/12185

it looks like there may be a better way. Graham recently said
Gene didn't know what he was doing there, but what he said made
no sense to me, whereas Gene's page makes some sense.

http://tinyurl.com/2nohve

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/17/2008 9:11:36 PM

Carl Lumma wrote:
>> Then you're right. Rational numbers in general are less
>> restrictive than odd or prime limits.
>
> The restrictions come from the human hearing system.
> If you can't assign a sound a pitch, you can't have
> consonance and a temperament is completely useless
> to you.
>
> If you can assign a pitch, you can either find an
> approximate harmonic series in the timbre or you'll
> have one partial that is much louder than others.

This is exactly where I don't have the energy to argue with you. Suffice to say that people have been interested in such things and my theory supports them.

> If you can find an approximate harmonic series, you
> can use Gene's vals. They don't specify a tuning,
> just a regular mapping.

Then they differ from my "m-genes" which assume the tuning of the just intonation they're mapping from. You need that to calculate the errors.

> If you can only find a single partial, you can still
> use Gene's vals. Because then in your music you'll
> still have harmonic serii, and hence pitch assignment,
> and hence consonance (albeit 1st-order, rather than
> the 2nd-order consonance you have with music made
> from complex timbres).
>
> Harmonic limits, on the other hand, are a misplaced
> concept that we should do away with. We can keep
> Gene's vals though (he may say "consecutive" somewhere,
> but that's a minor modification... the important part
> is that you have a subgroup that can uniquely generate
> your gamut and therefore that all pitches have one
> mapping through the val).

He specifically says "p-limit". Why not do away with prime numbers altogether? Allow for any linearly independent prime intervals of minimal complexity. There aren't any other properties of the rational numbers that we need.

Graham

🔗Carl Lumma <carl@lumma.org>

3/17/2008 9:34:58 PM

At 09:11 PM 3/17/2008, you wrote:
>Carl Lumma wrote:
>>> Then you're right. Rational numbers in general are less
>>> restrictive than odd or prime limits.
>>
>> The restrictions come from the human hearing system.
>> If you can't assign a sound a pitch, you can't have
>> consonance and a temperament is completely useless
>> to you.
>>
>> If you can assign a pitch, you can either find an
>> approximate harmonic series in the timbre or you'll
>> have one partial that is much louder than others.
>
>This is exactly where I don't have the energy to argue with
>you. Suffice to say that people have been interested in
>such things and my theory supports them.

Of course it doesn't suffice. It might suffice to name
such people. And maybe to explain your "theory", which
you haven't done.

>> If you can find an approximate harmonic series, you
>> can use Gene's vals. They don't specify a tuning,
>> just a regular mapping.
>
>Then they differ from my "m-genes" which assume the tuning
>of the just intonation they're mapping from. You need that
>to calculate the errors.

Sure. The tuning of just intonation is a variable
which can come from the timbre if you like.

>> If you can only find a single partial, you can still
>> use Gene's vals. Because then in your music you'll
>> still have harmonic serii, and hence pitch assignment,
>> and hence consonance (albeit 1st-order, rather than
>> the 2nd-order consonance you have with music made
>> from complex timbres).
>>
>> Harmonic limits, on the other hand, are a misplaced
>> concept that we should do away with. We can keep
>> Gene's vals though (he may say "consecutive" somewhere,
>> but that's a minor modification... the important part
>> is that you have a subgroup that can uniquely generate
>> your gamut and therefore that all pitches have one
>> mapping through the val).
>
>He specifically says "p-limit". Why not do away with prime
>numbers altogether? Allow for any linearly independent
>prime intervals

Any coprime (the same as linearly independent?) numbers.

>of minimal complexity.

?

>The aren't any other
>properties of the rational numbers that we need.

I think I said that.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/17/2008 9:39:15 PM

Carl Lumma wrote:

>> of minimal complexity.
>
> ?

Any ratio involving a factor of 3 will have a higher complexity than 3:1. In general terms, this is the property that defines which intervals are primes. Looked at another way, you're free to choose your prime intervals, but the complexity follows from that choice.
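One standard way to make "minimal complexity" concrete (an assumption on my part — Graham doesn't name a measure here) is Tenney height, log2(numerator * denominator), under which 3:1 beats every other ratio containing a factor of 3:

```python
from math import log2

def tenney_height(n, d):
    """Tenney complexity of the ratio n:d, in octaves."""
    return log2(n * d)

print(tenney_height(3, 1))   # ~1.585
print(tenney_height(3, 2))   # ~2.585
print(tenney_height(9, 8))   # ~6.170
```

Under any such multiplicative measure the primes (or whatever intervals you choose as generators) are the cheapest carriers of their own factors, which is the property described above.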

Graham

🔗Graham Breed <gbreed@gmail.com>

3/17/2008 10:38:08 PM

Carl Lumma wrote:

> Paul suggests brute force:
>> Start with the JI val. Multiply it by every possible positive real
>> number (in practice you just need to find a small enough step size to
>> increment this constant through) and round the result to the nearest
>> integers. Use the TOP-et algorithm on all of these, and calculate
>> their TOP damage. Then group the results by the octave part of the
>> val and pick out the lowest-damage one in each group.
>>
>> I think any val that doesn't arise through this process can't be the
>> minimum-TOP-damage one (when TOP-tuned) for a given octave
>> cardinality.

Ah, yes. If you expand "patent val" to cover real numbers of notes to the octave you can identify a lot more mappings -- and all the best ones.
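Paul's procedure can be sketched as follows (my reconstruction; I've used Erlich's closed form for the TOP-max damage of an ET val in place of a full "TOP-et algorithm", and a fixed step size for the grid of multipliers):

```python
from math import log2

PRIMES = (2, 3, 5)
JIP = [log2(p) for p in PRIMES]   # the "JI val"

def top_damage(val):
    """TOP-max damage of an ET val: with the optimal stretch, the worst
    sharp and flat weighted errors balance (Erlich's formula)."""
    w = [v / log2(p) for v, p in zip(val, PRIMES)]
    return (max(w) - min(w)) / (max(w) + min(w))

def best_vals(max_octave, step=0.001):
    """Round c * JIP for a fine grid of real c, keeping the
    lowest-damage val for each octave cardinality."""
    best = {}
    c = 1.0
    while c < max_octave + 0.5:
        val = tuple(round(c * x) for x in JIP)
        n = val[0]
        if 1 <= n <= max_octave:
            if n not in best or top_damage(val) < top_damage(best[n]):
                best[n] = val
        c += step
    return best

print(best_vals(12)[12])   # (12, 19, 28)
```

Sweeping c continuously is what "expanding the patent val to real numbers of notes" means: every val of the form round(c * JIP) appears for some stretch c.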

> But in the message from Gene I quoted
>
> /tuning-math/message/12185
>
> it looks like there may be a better way. Graham recently said
> Gene didn't know what he was doing there, but what he said made
> no sense to me, whereas Gene's page makes some sense.

I said what? Show me the reference and I'll apologize.

> http://tinyurl.com/2nohve

That's about optimal tunings and errors, not optimal vals.

Graham

🔗Carl Lumma <carl@lumma.org>

3/17/2008 10:50:10 PM

Graham wrote:
>Herman Miller wrote:
>> I'm sure this must have come up before, but do you know a simple
>> algorithm for finding optimal vals? The nice thing about using
>> nearest prime vals as a common point of reference is that they're
>> trivial to calculate.
//
>Where the optimal val isn't the patent val, it's because the
> best approximations all tend to be in the same direction
>-- sharp or flat. That's the logic behind taking
>approximations relative to the average. Also, starting with
>the lowest primes is best. If the error in 3 is half a
>scale step it means the error in 9 is a whole scale step.
>So it's very unlikely that 3 won't have its nearest mapping,
>and you can assume this when calculating higher primes.

Isn't this equivalent to whether the corresponding ET is
consistent over the given primes? IOW, if

1/2(max error - min error) >= 1

the standard val is minimal (when the errors are expressed as
fractions of a step). Otherwise, it isn't.

When it isn't, I'd just test all combinations of raising and
lowering each position in the val by one. In the 17-limit
that's only 3^7 = 2,187 tests.

Thoughts?

-Carl

🔗Carl Lumma <carl@lumma.org>

3/17/2008 10:52:19 PM

Graham wrote...

>> But in the message from Gene I quoted
>>
>> /tuning-math/message/12185
>>
>> it looks like there may be a better way. Graham recently said
>> Gene didn't know what he was doing there, but what he said made
>> no sense to me, whereas Gene's page makes some sense.
>
>I said what? Show me the reference and I'll apologize.
>
>> http://tinyurl.com/2nohve
>
>That's about optimal tunings and errors, not optimal vals.

/tuning-math/message/17051

-C.

🔗Graham Breed <gbreed@gmail.com>

3/17/2008 10:58:01 PM

Carl Lumma wrote:

> Isn't this equivalent to whether the corresponding ET is
> consistent over the given primes? IOW, if
>
> 1/2(max error - min error) >= 1
>
> the standard val is minimal (when the errors are expressed as
> fractions of a step). Otherwise, it isn't.

Consistency is only defined for odd limits. In that case, yes, it's the same as saying the val is unambiguous.

> When it isn't, I'd just test all combinations of raising and
> lowering each position in the val by one. In the 17-limit
> that's only 3^7 = 2,187 tests.
>
> Thoughts?

Yes but 2,187 is still quite a lot. You can prune the search by giving up when you've already exceeded the limit.

Graham

🔗Carl Lumma <carl@lumma.org>

3/18/2008 12:48:39 AM

At 10:58 PM 3/17/2008, you wrote:
>Carl Lumma wrote:
>
>> Isn't this equivalent to whether the corresponding ET is
>> consistent over the given primes? IOW, if
>>
>> 1/2(max error - min error) >= 1
>>
>> the standard val is minimal (when the errors are expressed as
>> fractions of a step). Otherwise, it isn't.
>
>Consistency is only defined for odd limits.

Where do you come up with these things?

>> When it isn't, I'd just test all combinations of raising and
>> lowering each position in the val by one. In the 17-limit
>> that's only 3^7 = 2,187 tests.
>>
>> Thoughts?
>
>Yes but 2,187 is still quite a lot. You can prune the
>search by giving up when you've already exceeded the limit.

Sure; good point. Just an upper bound.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/18/2008 2:50:23 AM

Carl Lumma wrote:
> At 10:58 PM 3/17/2008, you wrote:
>> Carl Lumma wrote:
>>
>>> Isn't this equivalent to whether the corresponding ET is
>>> consistent over the given primes? IOW, if
>>>
>>> 1/2(max error - min error) >= 1
>>>
>>> the standard val is minimal (when the errors are expressed as
>>> fractions of a step). Otherwise, it isn't.
>> Consistency is only defined for odd limits.
>
> Where do you come up with these things?

http://tonalsoft.com/enc/c/consistent.aspx

Graham

🔗Graham Breed <gbreed@gmail.com>

3/18/2008 3:27:23 AM

Carl Lumma wrote:
> Graham wrote...
> >>> But in the message from Gene I quoted
>>>
>>> /tuning-math/message/12185
>>>
>>> it looks like there may be a better way. Graham recently said
>>> Gene didn't know what he was doing there, but what he said made
>>> no sense to me, whereas Gene's page makes some sense.
>> I said what? Show me the reference and I'll apologize.
>>
>>> http://tinyurl.com/2nohve

For the record, that's the page I meant to link to in the message you quoted next.

>> That's about optimal tunings and errors, not optimal vals.
>
> /tuning-math/message/17051

What of that message? Is something not clear to you? You could reply to it directly.

Graham

🔗Carl Lumma <carl@lumma.org>

3/18/2008 9:34:30 AM

>>> Consistency is only defined for odd limits.
>>
>> Where do you come up with these things?
>
> http://tonalsoft.com/enc/c/consistent.aspx

That's ancient. There's nothing systemic about
this, it's just Paul's crusade for odd limits.
I've been calculating consistency over primes
since 1998.

-Carl

🔗Carl Lumma <carl@lumma.org>

3/18/2008 9:36:06 AM

>>> That's about optimal tunings and errors, not optimal vals.
>>
>> /tuning-math/message/17051
>
>What of that message? Is something not clear to you? You
>could reply to it directly.

Lots isn't clear, but I'm not sure where to start just yet.

-Carl

🔗Herman Miller <hmiller@IO.COM>

3/18/2008 7:00:17 PM

Graham Breed wrote:
> Herman Miller wrote:
>> Graham Breed wrote:
>>> Most temperaments can be built from optimal vals.
>>> Everything in my Prime Based Errors and Complexities PDF certainly.
>>
>> I'm sure this must have come up before, but do you know a simple
>> algorithm for finding optimal vals? The nice thing about using nearest
>> prime vals as a common point of reference is that they're trivial to
>> calculate.
>
> No. Doing it right is, unfortunately, relatively difficult.
> I think it's worth a whole PDF to explain and I'm planning
> to write such a PDF this year.
>
> Nearest prime vals are easy to calculate, but so are nearest
> approximations to any other linearly independent set of
> intervals. You could define standard 5-limit vals according
> to the nearest approximations to 5:4 and 6:5, for example.

Yes, another example that could be of interest is a set of primes that skips one or more of them (e.g. 3, 5, 7 or 2, 5, 7, 13).

> Generally I'm more interested in finding all equal
> temperaments within a given error or badness constraint than
> the single optimal one. The best code for that is now in
> "complete.py" for STD errors.

I don't have a particular badness constraint that I'm happy with, so

> A method that'll get you most good mappings is to round each
> prime both up and down, and take all combinations. This is
> guaranteed to give an optimal val for TOP-max (there isn't
> always a unique optimum). It's also exponential in the
> number of primes, so it becomes very inefficient if you care
> about such things.

Do you need both up and down, or just the second-best? I.e., down if it's too high, up if it's too low.

> Another approach is to calculate the optimal scale stretch
> as you go along and take the nearest approximations in the
> light of that scale stretch. As a slight simplification of
> that (or for octave-equivalent errors) you can take the
> mapping of each prime with the nearest approximation to the
> current average weighted error. This is certainly not
> guaranteed to get the optimal mapping, because sometimes the
> best mapping of 7 will change when you introduce 11, for
> example. But you could call this a feature because you
> don't have to specify the prime limit. You can generate
> them lazily as you need them.
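A minimal Python sketch of that incremental method, as I read it. The greedy rule and the Tenney weighting here are an interpretation of the paragraph above, not code from the thread:

```python
from math import log2

def greedy_val(n, primes=(2, 3, 5, 7, 11)):
    """Build an n-note equal-temperament mapping one prime at a time,
    nudging each rounding by the running average weighted error so that
    systematically sharp or flat mappings stay coherent."""
    val = []
    weighted_errors = []  # error of each prime in steps, divided by log2(p)
    for p in primes:
        target = n * log2(p)  # exact (unrounded) step count for prime p
        bias = (sum(weighted_errors) / len(weighted_errors)
                if weighted_errors else 0.0)
        # nearest approximation relative to the current average error
        steps = round(target + bias * log2(p))
        val.append(steps)
        weighted_errors.append((steps - target) / log2(p))
    return val

print(greedy_val(12))  # prints [12, 19, 28, 34, 42]
```

For 12-equal over the 11-limit primes this yields [12, 19, 28, 34, 42], agreeing with the patent val.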

I was thinking that the best mapping might change as you go to higher limits. But I did a test with different limits, trying all combinations of nearest and second-best mapping, and found something that seems very strange: the higher limits are consistent with the lower ones. It's likely that I'm doing something wrong, but if this turns out to work in general, you could find the optimal val by adding one at a time. If there's ever a case where you need the third-best approximation, this probably won't work.

12: <12, 19, 28] 3.557008
12: <12, 19, 28, 34] 6.143684
12: <12, 19, 28, 34, 42] 7.612148
12f: <12, 19, 28, 34, 42, 45] 8.599415
12f: <12, 19, 28, 34, 42, 45, 49] 8.599415
12f: <12, 19, 28, 34, 42, 45, 49, 51] 8.599415
12fi: <12, 19, 28, 34, 42, 45, 49, 51, 55] 8.599415
12fij: <12, 19, 28, 34, 42, 45, 49, 51, 55, 59] 8.599415
12fijk: <12, 19, 28, 34, 42, 45, 49, 51, 55, 59, 60] 8.599415

Here are a few more 31-limit results.

12fijk: <12, 19, 28, 34, 42, 45, 49, 51, 55, 59, 60] 8.599415
13dk: <13, 21, 30, 37, 45, 48, 53, 55, 59, 63, 65] 15.096538
14cf: <14, 22, 32, 39, 48, 51, 57, 59, 63, 68, 69] 9.431411
15gk: <15, 24, 35, 42, 52, 56, 62, 64, 68, 73, 75] 8.269701
16j: <16, 25, 37, 45, 55, 59, 65, 68, 72, 77, 79] 9.662600
17cghk: <17, 27, 40, 48, 59, 63, 70, 73, 77, 83, 85] 7.960800
18ehijk: <18, 29, 42, 51, 63, 67, 74, 77, 82, 88, 90] 9.817775
19: <19, 30, 44, 53, 66, 70, 78, 81, 86, 92, 94] 6.441058
20cei: <20, 32, 47, 56, 70, 74, 82, 85, 91, 97, 99] 8.784279
21: <21, 33, 49, 59, 73, 78, 86, 89, 95, 102, 104] 8.085219
22fh: <22, 35, 51, 62, 76, 82, 90, 94, 100, 107, 109] 5.303891
23dehjk: <23, 36, 53, 64, 79, 85, 94, 97, 104, 111, 113] 7.521517
24d: <24, 38, 56, 68, 83, 89, 98, 102, 109, 117, 119] 6.143684

> Where the optimal val isn't the patent val, it's because the
> best approximations all tend to be in the same direction
> -- sharp or flat. That's the logic behind taking
> approximations relative to the average. Also, starting with
> the lowest primes is best. If the error in 3 is half a
> scale step it means the error in 9 is a whole scale step.
> So it's very unlikely that 3 won't have its nearest mapping,
> and you can assume this when calculating higher primes.

That could explain why the sequence seems to be consistent in different limits. If you started with a 3, 7, 11 without a 2 or a 5, it might end up being inconsistent with the full 2, 3, 5, 7, 11 mapping.

🔗Carl Lumma <carl@lumma.org>

3/18/2008 7:38:04 PM

Herman wrote...

>I was thinking that the best mapping might change as you go to higher
>limits. But I did a test with different limits, trying all combinations
>of nearest and second-best mapping, and found something that seems very
>strange: the higher limits are consistent with the lower ones. It's
>likely that I'm doing something wrong, but if this turns out to work in
>general, you could find the optimal val by adding one at a time.

I don't think this can be the case.

-Carl

🔗Herman Miller <hmiller@IO.COM>

3/18/2008 8:15:25 PM

Carl Lumma wrote:
> Paul suggests brute force:
>> Start with the JI val. Multiply it by every possible positive real
>> number (in practice you just need to find a small enough step size to
>> increment this constant through) and round the result to the nearest
>> integers. Use the TOP-et algorithm on all of these, and calculate
>> their TOP damage. Then group the results by the octave part of the
>> val and pick out the lowest-damage one in each group.
>>
>> I think any val that doesn't arise through this process can't be the
>> minimum-TOP-damage one (when TOP-tuned) for a given octave
>> cardinality.

This can be improved. Instead of calculating each point with a fixed increment between points, you can predict when the nearest prime mapping will change. This gives you a range of step sizes with a fixed nearest prime approximation, and you can find the best TOP damage within that range.
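Paul's brute-force scan, paired with the standard closed form for TOP damage, might look like this in Python. The function names and the scan resolution are illustrative; Herman's refinement would replace the fixed increment with predicted breakpoints where the rounding changes:

```python
from math import log2

def top_damage(val, primes=(2, 3, 5)):
    """TOP damage in cents: minimax Tenney-weighted error over all
    tunings of the step, via the (max-min)/(max+min) closed form."""
    ratios = [v / log2(p) for v, p in zip(val, primes)]
    return 1200 * (max(ratios) - min(ratios)) / (max(ratios) + min(ratios))

def best_vals(primes=(2, 3, 5), max_octave=30, resolution=20000):
    """Scan scale stretches with a fixed increment, round the JI point
    to the nearest val, and keep the lowest-damage val for each octave
    cardinality -- Paul's suggestion above, unoptimized."""
    jip = [log2(p) for p in primes]
    best = {}
    for k in range(1, resolution):
        scale = k * (max_octave + 0.5) / resolution
        val = [round(scale * j) for j in jip]
        n = val[0]  # the octave part of the val
        if not 1 <= n <= max_octave:
            continue
        d = top_damage(val, primes)
        if n not in best or d < best[n][1]:
            best[n] = (val, d)
    return best

print(best_vals()[12])  # lowest-damage 5-limit val with 12 octave steps
```

For the 5-limit this picks out <12, 19, 28] as the lowest-damage val with twelve octave steps.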

🔗Graham Breed <gbreed@gmail.com>

3/18/2008 9:53:10 PM

Herman Miller wrote:
> Graham Breed wrote:
>> A method that'll get you most good mappings is to round each
>> prime both up and down, and take all combinations. This is
>> guaranteed to give an optimal val for TOP-max (there isn't
>> always a unique optimum). It's also exponential in the
>> number of primes, so it becomes very inefficient if you care
>> about such things.
>
> Do you need both up and down, or just the second-best? I.e., down if
> it's too high, up if it's too low.

That's what I mean by "round both up and down". Round to floor and ceiling. You may need more, and certainly TOP-RMS can be outside this range, but this does a lot better than rounding all primes to the nearest integer.
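In Python, the floor/ceiling combinations could be enumerated along these lines (a sketch; the function name is invented here):

```python
from itertools import product
from math import ceil, floor, log2

def candidate_vals(n, primes=(2, 3, 5, 7)):
    """All mappings obtained by rounding each prime both down (floor)
    and up (ceiling) for an n-note equal temperament.  The list grows
    exponentially with the number of primes, as noted above."""
    choices = []
    for p in primes:
        exact = n * log2(p)  # exact step count for prime p
        lo, hi = floor(exact), ceil(exact)
        choices.append((lo,) if lo == hi else (lo, hi))
    return [list(v) for v in product(*choices)]

print(len(candidate_vals(12)))  # prints 8
```

For 12-equal in the 7-limit this gives 8 candidates, since 2 maps exactly and 3, 5 and 7 each round two ways.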

> I was thinking that the best mapping might change as you go to higher
> limits. But I did a test with different limits, trying all combinations
> of nearest and second-best mapping, and found something that seems very
> strange: the higher limits are consistent with the lower ones. It's
> likely that I'm doing something wrong, but if this turns out to work in
> general, you could find the optimal val by adding one at a time. If
> there's ever a case where you need the third-best approximation, this
> probably won't work.

It doesn't work in general; here are some TOP-RMS examples where it fails:

<5, 8, 12, 14, 17] <5, 8, 12, 14, 18, 19]
<7, 11, 16, 19, 24] <7, 11, 16, 20, 24, 26]
<11, 17, 25, 30] <11, 17, 25, 31, 38]
<11, 17, 25, 31, 38] <11, 17, 25, 30, 38, 40]
<13, 21, 30] <13, 21, 31, 37]
<13, 21, 31, 37] <13, 21, 30, 37, 45]
<32, 51, 75] <32, 51, 74, 90]
<32, 51, 74, 90, 111] <32, 51, 75, 90, 111, 119]
<36, 57, 84, 101, 125] <36, 57, 83, 101, 124, 133]
<48, 76, 111, 135, 166] <48, 76, 112, 135, 166, 178]

>> Where the optimal val isn't the patent val, it's because the
>> best approximations all tend to be in the same direction
>> -- sharp or flat. That's the logic behind taking
>> approximations relative to the average. Also, starting with
>> the lowest primes is best. If the error in 3 is half a
>> scale step it means the error in 9 is a whole scale step.
>> So it's very unlikely that 3 won't have its nearest mapping,
>> and you can assume this when calculating higher primes.
>
> That could explain why the sequence seems to be consistent in different
> limits. If you started with a 3, 7, 11 without a 2 or a 5, it might end
> up being inconsistent with the full 2, 3, 5, 7, 11 mapping.

It looks fairly consistent for good ETs.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/18/2008 9:54:46 PM

Carl Lumma wrote:
>>>> Consistency is only defined for odd limits.
>>> Where do you come up with these things?
>> http://tonalsoft.com/enc/c/consistent.aspx
>
> That's ancient. There's nothing systemic about
> this, it's just Paul's crusade for odd limits.
> I've been calculating consistency over primes
> since 1998.

Would you care to supply a definition?

Graham

🔗Carl Lumma <carl@lumma.org>

3/19/2008 12:32:06 AM

At 09:54 PM 3/18/2008, you wrote:
>Carl Lumma wrote:
>>>>> Consistency is only defined for odd limits.
>>>> Where do you come up with these things?
>>> http://tonalsoft.com/enc/c/consistent.aspx
>>
>> That's ancient. There's nothing systemic about
>> this, it's just Paul's crusade for odd limits.
>> I've been calculating consistency over primes
>> since 1998.
>
>Would you care to supply a definition?

A val is said to be *consistent* with respect to a
basis of natural numbers if it is both patent and
minimal with respect to that basis given y, its
mapping of the smallest number in the basis.

It is further said to be n-level consistent over
this basis if n is the integer part of

1/2 | max error - min error |

where errors are distances, in val space, from the
elements of the val to y*JIP.

That may not quite be right.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/19/2008 2:18:56 AM

Carl Lumma wrote:

> A val is said to be *consistent* with respect to a
> basis of natural numbers if it is both patent and
> minimal with respect to that basis given y, its
> mapping of the smallest number in the basis.

That's similar to "unambiguous" then.

> It is further said to be n-level consistent over
> this basis if n is the integer part of
>
> 1/2 | max error - min error |
>
> where errors are distances, in val space, from the
> elements of the val to y*JIP.
>
> That may not quite be right.

If y is a free parameter, I believe it's the same as saying the simple TOP badness is less than half. That is "simple badness" as error*complexity with complexity as the number of notes. As such it isn't quite right. 1/2 is a very high cutoff for TOP error. But what I say may not be quite right...

Graham

🔗Graham Breed <gbreed@gmail.com>

3/19/2008 5:24:11 AM

This post certainly isn't clear but as nobody's replied to it I'll have to talk to myself.

Graham Breed wrote:
> Carl Lumma wrote:
>> At 09:47 PM 3/16/2008, you wrote:
>
>>> Gene did argue that patent val searches were either incomplete or
>>> unreliable, but somehow this was a weakness in my method after I
>>> stopped using them.
>>
>> This post may be helpful
>>
>> /tuning-math/message/12185
>
> Well, not really. Although I managed to load it, which is good, so I
> picked up some other pages while it lasts.
>
> What he says is that he's invented a new term based on an old term that
> wasn't important.
>
> The geometric ideas are interesting, and he refers to this page:

The correct link I'm using is:

http://web.archive.org/web/20070220074528/66.98.148.43/~xenharmo/top.htm

> Generally, that shows that he was on the right lines but didn't quite
> get there. He talks about a "JIP" which will be "just intonation
> point". And in this case an "n*JIP". The thing is there is no unique
> just intonation point in what he calls "val space".

This is muddled. Partly I had the definition of "val space" wrong. Let's talk about a "val lattice" instead (except I won't). In the tuning-math message, he talked about the distance between a val and an n*JIP. In the web page he talks about a JIP and uses it correctly.

To try and clarify this, I'll introduce the concept of "tuning spaces". In a tuning space, each point represents the tunings of a set of prime intervals. Hence it's a regular tempered tuning. One unique point in such a space will be the untempered tuning of just intonation, so you can call it the JIP.

In a tuning space you can use a suitable metric to make the distance from a tempered tuning to the JIP be the error in that tempered tuning. Gene's web page does this correctly for TOP-max errors.

Vals are the points in a tuning space with integer coordinates. The line from the origin that passes through the val defines an equal temperament class. Each point on that line represents a specific tuning of the octave. The optimal tuning of the equal temperament is represented by the point on this line that gets closest to the JIP given whatever metric you imposed before. Gene gets this right on his web page as well.
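With a Euclidean metric on Tenney-weighted coordinates (the TOP-RMS case rather than Gene's TOP-max), the closest point on the val's line to the JIP is an ordinary orthogonal projection. A sketch, with an invented function name:

```python
from math import log2

def top_rms_step(val, primes=(2, 3, 5)):
    """Optimal step size in cents for an equal temperament, found by
    projecting the JIP onto the line through the val in Tenney-weighted
    Euclidean tuning space (so this is the TOP-RMS tuning)."""
    w = [v / log2(p) for v, p in zip(val, primes)]  # weighted val
    # the JIP is 1200*(1,...,1) in these coordinates; project it onto w
    s = sum(w) / sum(x * x for x in w)
    return 1200 * s

step = top_rms_step([12, 19, 28])
print(12 * step)  # TOP-RMS octave of 12-equal, slightly flat of 1200
```

For <12, 19, 28] this gives a step of about 99.87 cents, i.e. an octave slightly flat of 1200 cents, as expected for a TOP-style tuning of 12-equal.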

Now, in his e-mail Gene defines an "n*JIP" which is presumably the just intonation point multiplied by the number of notes to the octave. His patent val will indeed be the closest val to this point. But this fact doesn't have any special meaning because the "n*JIP" isn't special.

> JI vals form a line. To get a JIP you need a projection. Which means
> there isn't a unique "val space" or "Tenney space". There are two
> different spaces: complexity space (no projection) and badness space
> (orthogonal components to the JI line). If he used the right projection
> (which he got pretty close to doing) and used a weighted Euclidean
> metric the result would have been scalar badness. Hence the nearest val
> to the JIP is the optimal TOP-RMS one. And a similar argument *may*
> work for TOP-max if you get the geometry right. Paul Erlich's
> error-error plots are something to do with this.

Now I start talking about my own model without any introduction. I prefer to think about a lattice of equal temperament mappings (vals) without considering their tunings. I believe this is a lattice, but just intonation doesn't exist on it, so let's talk about points in a tuning space anyway. The distance from the origin to a val in tuning space gives the complexity of the equal temperament defined by the val. Hence I call a tuning space inhabited by vals a "complexity space".

In a complexity space, the badness of an equal temperament is measured by the distance from its val to the JI line I talk about. The JI line is the line from the origin passing through the JIP. This distance is proportional to the distance from the temperament line to the JIP that I talked about earlier. There's nothing wrong with Gene's way of looking at things as he explains in his web page and it's essentially equivalent to my way. No argument but a different perspective.

Now, to projections. If you want the badness of an equal temperament to be the distance between its val and a unique point in space, you need to project complexity space onto what I call "badness space".

Here's a page on projections:

http://www.reference.com/search?q=orthogonal%20projection

Unfortunately that doesn't actually cover the terminology I used. My reference is: D C Lay "Linear Algebra and Its Applications", Third Edition 2003, Pearson. It's the best introduction I've seen to linear algebra in connection with the way I use it, and also happens to be one I found in a shop. On page 386 there's an equation

y = y^ + z

where "y^" is a y with a hat on it and means an orthogonal projection of y onto an implicit "u" as explained in my wikipedia mirror. The z is then called the "component of y orthogonal to u". You can define it as

z = y - y^

and it is in itself a projection. It happens that in

http://x31eq.com/primerr.pdf

I came up with a badness formula which is really the size of the components orthogonal to the weighted prime intervals in complexity space.

The geometric logic is that we want to project complexity space such that the JI line becomes a point. That means we want the components of tuning space orthogonal to it. So badness space is this specific projection of complexity space.

Now, Gene's val space is correct in so far as it goes. It isn't as useful a geometry as a complexity space with a Euclidean metric though because it only measures equal temperaments, not higher ranks. I don't know if it's possible to extend it to higher ranks or not. That's why I said "it *may* work". I apologize if I gave the impression that his val space might not work the way he used it.

Taking the distance from a val to an n*JIP is essentially projecting onto the pure-octaves plane (or whatever) rather than projecting orthogonally to the JI line. Hence you get a uniquely defined result but not a useful one because you don't transform complexity space into badness space.

In Paul Erlich's "Middle Path" paper there's a diagram where equal temperaments are represented by points and rank 2 temperaments by the lines between them. This is also a projection of tuning space giving the components orthogonal to the JI line. It's an octave-equivalent tuning space. The Euclidean distance from the origin will be the STD error of the tempered tuning (assuming the rank 2 temperaments really are straight lines) which approximates the unoptimized TOP-RMS error.

Now, there's also a connection with wedge products. The wedge product of a multivector with a vector gives something like the components orthogonal to that vector. See here:

http://www.av8n.com/physics/clifford-intro.htm

So if the badness of a val is the measure of the weighted components orthogonal to the JI vector (as in scalar badness) it should also be something like

B = |T^V|/|V|

in exterior algebra where T is the wedge product of weighted vals, V is the weighted JI vector and |...| is the inner product that gives lengths, areas, and so on. The |V| is so that the size of the JI vector doesn't matter, only its orientation. I already said that the scalar complexity is related to the length of the val in complexity space, so

k = |T|

In practice, this k isn't quite scalar complexity the same way as the B above isn't quite scalar badness. But they could have been defined like this. And the TOP-RMS error is the badness divided by the complexity, so

E = |T^V|/|T||V|

Geometrically, this is the sine of the angle between the equal temperament line and the JI line, with generalizations to higher ranks. That's intuitively correct because the closer a val gets to the JI line, the lower the error. I don't know if there's a similar rule for TOP-max.

I also happened to notice that for an equal temperament this angle is also related to the statistical correlation of the weighted val to the JI vector. The higher the correlation the lower the error in the optimal temperament.
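The sine-of-the-angle picture is easy to check numerically. This sketch uses my own normalization, so the raw number need not match the scalar error in the PDF, but the ordering of temperaments should:

```python
from math import log2, sqrt

def sin_angle_to_ji(val, primes=(2, 3, 5)):
    """Sine of the angle between a Tenney-weighted val and the JI
    vector (a vector of ones in weighted coordinates).  A smaller
    angle means a better equal temperament."""
    w = [v / log2(p) for v, p in zip(val, primes)]
    # cos^2 from the dot product with the all-ones JI vector
    cos2 = sum(w) ** 2 / (sum(x * x for x in w) * len(w))
    return sqrt(max(0.0, 1.0 - cos2))

# 12-equal hugs the JI line more closely than 13-equal in the 5-limit:
print(sin_angle_to_ji([12, 19, 28]) < sin_angle_to_ji([13, 21, 30]))
```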

> The "standard val" approach is really about taking distances in
> complexity space and calling them badnesses. Of course, I didn't know
> that back when he wrote the message in question. It's a shame he hasn't
> commented since I tied down these geometric ideas. And it isn't about
> not being around because he was around when I discovered them. For some
> reason he isn't interested any more :(

Gene was certainly looking at Euclidean spaces back when he discovered geometric complexity (which I still don't understand but I'm making progress). Unfortunately if he ever talked about the geometry I outlined above I missed it.

> In one of the replies he mentions searches: "searching all pairs of
> semistandard vals over some limit is much more likely to be a complete
> search than simply confining ourselves to pairs of standard vals."
>
> /tuning-math/message/12188

Maybe that link's correct ;-)

> Well, yes. And isn't searching over all equal temperaments with an
> error less than some function of the number of steps to the octave more
> likely to work? He never comments on it.

I really think a general cutoff is better than taking the patent and any better vals. And I really don't remember Gene talking about it.

Graham

🔗Carl Lumma <carl@lumma.org>

3/19/2008 9:46:40 AM

At 02:18 AM 3/19/2008, you wrote:
>Carl Lumma wrote:
>
>> A val is said to be *consistent* with respect to a
>> basis of natural numbers if it is both patent and
>> minimal with respect to that basis given y, its
>> mapping of the smallest number in the basis.
>
>That's similar to "unambiguous" then.
>
>> It is further said to be n-level consistent over
>> this basis if n is the integer part of
>>
>> 1/2 | max error - min error |
>>
>> where errors are distances, in val space, from the
>> elements of the val to y*JIP.
>>
>> That may not quite be right.
>
>If y is a free parameter, I believe it's the same as saying
>the simple TOP badness is less than half. That is "simple
>badness" as error*complexity with complexity as the number
>of notes. As such it isn't quite right. 1/2 is a very high
>cutoff for TOP error. But what I say may not be quite right...
>

y is free but what is the number of notes? For an ET val,
it will be y. For a val basis with no 2 identity, I don't
know.

-Carl

🔗Carl Lumma <carl@lumma.org>

3/19/2008 11:07:16 AM

[Sorry if you got an early version of this, sent accidentally.
I've deleted it from the website.]

Graham wrote...

>> Generally, that shows that he was on the right lines but didn't quite
>> get there. He talks about a "JIP" which will be "just intonation
>> point". And in this case an "n*JIP". The thing is there is no unique
>> just intonation point in what he calls "val space".
>
>This is muddled. Partly I had the definition of "val space"
>wrong. Let's talk about a "val lattice" instead (except I
>won't).

I agree "val space" is a bit misleading, since vals exist
only in a lattice there.

>In the tuning-math message, he talked about the
>distance between a val and an n*JIP. In the web page he
>talks about a JIP and uses it correctly.

He must have, of course, because he uses it to correctly
calculate TOP tunings.

>To try and clarify this, I'll introduce the concept of
>"tuning spaces". In a tuning space, each point represents
>the tunings of a set of prime intervals. Hence it's a
>regular tempered tuning. One unique point in such a space
>will be the untempered tuning of just intonation, so you can
>call it the JIP.

There will be other points, n*JIP, which are equivalent,
but have the correct distance from the val you're trying
to get the error of. I think.

>In a tuning space you can use a suitable metric to make the
>distance from a tempered tuning to the JIP be the error in
>that tempered tuning. Gene's web page does this correctly
>for TOP-max errors.

Yes, you can see that he basically winds up with Paul's
formula for TOP-tuning vals.

>Vals are the points in a tuning space with integer
>coordinates. The line from the origin that passes through
>the val defines an equal temperament class. Each point on
>that line represents a specific tuning of the octave.
>The optimal tuning of the equal temperament is represented by
>the point on this line that gets closest to the JIP given
>whatever metric you imposed before. Gene gets this right on
>his web page as well.

This I don't get. What do you mean by equal temperament
class? In the 5-limit, for example, the line from the origin
through the 12-ET val defines a series of ETs... if you
want something with 12 notes to the approximate octave you
need to find the point on the line closest to 12*JIP. I think.

>Now, in his e-mail Gene defines an "n*JIP" which is
>presumably the just intonation point multiplied by the
>number of notes to the octave.

He clearly defines n in that message as the number of notes
to the octave.

>His patent val will indeed
>be the closest val to this point. But this fact doesn't have
>any special meaning because the "n*JIP" isn't special.

n*JIP is special, and so is the patent val. It's just
not uniquely special, as he later pointed out. It's clear
that whoever wants an efficient way to calculate the val
with least TOP damage would do well to first calculate
the semi-standard vals, since it is apparently among them.

>> JI vals form a line. To get a JIP you need a projection. Which means
>> there isn't a unique "val space" or "Tenney space". There are two
>> different spaces: complexity space (no projection) and badness space
>> (orthogonal components to the JI line). If he used the right projection
>> (which he got pretty close to doing) and used a weighted Euclidean
>> metric the result would have been scalar badness. Hence the nearest val
>> to the JIP is the optimal TOP-RMS one. And a similar argument *may*
>> work for TOP-max if you get the geometry right. Paul Erlich's
>> error-error plots are something to do with this.
>
>Now I start talking about my own model without any
>introduction.

You noticed that, did you.

>I prefer to think about a lattice of equal
>temperament mappings (vals) without considering their
>tunings. I believe this is a lattice,

Gene even says it is a bona fide lattice.

>however just
>intonation doesn't exist on it so let's talk about points in
>a tuning space anyway. The distance from the origin to a
>val in tuning space gives the complexity of the equal
>temperament defined by the val.

OK, that is interesting!

>Hence I call a tuning space inhabited by vals a "complexity space".

OK.

>In a complexity space, the badness of an equal temperament
>is measured by the distance from its val to the JI line I
>talk about.

You mean the error?

>The JI line is the line from the origin passing
>through the JIP.

And I think the closest point (assuming the size metric) on
the JIP line will be n*JIP.

>This distance is proportional to the
>distance from the temperament line to the JIP that I talked
>about earlier. There's nothing wrong with Gene's way of
>looking at things as he explains in his web page and it's
>essentially equivalent to my way. No argument but a
>different perspective.

It doesn't seem different at all.

OK, I've got to tackle the rest later. Got to go to work!
(Heck, it's only noon and all.) I'm sending this now in case
I can get some answers before I tackle the next bit.

And I'm still genuinely interested why you think my timbre
argument is wrong.

-Carl

🔗Herman Miller <hmiller@IO.COM>

3/19/2008 6:41:49 PM

Graham Breed wrote:

> To try and clarify this, I'll introduce the concept of
> "tuning spaces". In a tuning space, each point represents
> the tunings of a set of prime intervals. Hence it's a
> regular tempered tuning. One unique point in such a space
> will be the untempered tuning of just intonation, so you can
> call it the JIP.
>
> In a tuning space you can use a suitable metric to make the
> distance from a tempered tuning to the JIP be the error in
> that tempered tuning. Gene's web page does this correctly
> for TOP-max errors.
>
> Vals are the points in a tuning space with integer
> coordinates. The line from the origin that passes through
> the val defines an equal temperament class. Each point on
> that line represents a specific tuning of the octave. The
> optimal tuning of the equal temperament is represented by
> the point on this line that gets closest to the JIP given
> whatever metric you imposed before. Gene gets this right on
> his web page as well.

Now, this is much clearer than Gene's web page. I could've saved a lot of time back when I was trying to understand TOP.

> Now, to projections. If you want the badness of an equal
> temperament to be the distance between its val and a unique
> point in space, you need to project complexity space onto
> what I call "badness space".
>
> Here's a page on projections:
>
> http://www.reference.com/search?q=orthogonal%20projection
>
> Unfortunately that doesn't actually cover the terminology I
> used. My reference is: D C Lay "Linear Algebra and Its
> Applications", Third Edition 2003, Pearson. It's the best
> introduction I've seen to linear algebra in connection with
> the way I use it, and also happens to be one I found in a
> shop. On page 386 there's an equation
>
> y = y^ + z
>
> where "y^" is a y with a hat on it and means an orthogonal
> projection of y onto an implicit "u" as explained in my
> wikipedia mirror. The z is then called the "component of y
> orthogonal to u". You can define it as
>
> z = y - y^
>
> and it is in itself a projection. It happens that in
>
> http://x31eq.com/primerr.pdf
>
> I came up with a badness formula which is really the size of
> the components orthogonal to the weighted prime intervals in
> complexity space.

I like these geometrical interpretations. I could be a little bit biased from my background, but I'm guessing that a more general audience that's not very mathematically inclined will have an easier time with something that can be represented as geometry, even if a few things such as distance metrics might need explanation.

> The geometric logic is that we want to project complexity
> space such that the JI line becomes a point. That means we
> want the components of tuning space orthogonal to it. So
> badness space is this specific projection of complexity space.
>
> Now, Gene's val space is correct in so far as it goes. It
> isn't as useful a geometry as a complexity space with a
> Euclidean metric though because it only measures equal
> temperaments, not higher ranks. I don't know if it's
> possible to extend it to higher ranks or not. That's why I
> said "it *may* work". I apologize if I gave the impression
> that his val space might not work the way he used it.

Rank 2 temperaments are planes in this space. They can be defined by two rank 1 temperaments which they pass through, or by their normal vectors, which represent the vanishing intervals (unison vectors). The TOP error is the distance to the closest point on the plane. (The difficulty in calculating it is based on the taxicab nature of the distance measurement). My breakthrough in calculating TOP for rank 2 temperaments came when I realized how to simplify the problem by projecting it into a lower dimension.

> Taking the distance from a val to an n*JIP is essentially
> projecting onto the pure-octaves plane (or whatever) rather
> than projecting orthogonally to the JI line. Hence you get
> a uniquely defined result but not a useful one because you
> don't transform complexity space into badness space.
>
> In Paul Erlich's "Middle Path" paper there's a diagram where
> equal temperaments are represented by points and rank 2
> temperaments by the lines between them. This is also a
> projection of tuning space giving the components orthogonal
> to the JI line. It's an octave-equivalent tuning space.
> The Euclidean distance from the origin will be the STD error
> of the tempered tuning (assuming the rank 2 temperaments
> really are straight lines) which approximates the
> unoptimized TOP-RMS error.

It can also be seen as a slice through tuning space (at a point where the octave is just). I've made other similar sorts of diagrams from different slices through tuning space.

> Now, there's also a connection with wedge products. The
> wedge product of a multivector with a vector gives something
> like the components orthogonal to that vector. See here:
>
> http://www.av8n.com/physics/clifford-intro.htm

Yes, the geometric algebra looks pretty interesting for our purposes. I'd like to learn more about it one of these days when I get the time.

> So if the badness of a val is the measure of the weighted
> components orthogonal to the JI vector (as in scalar
> badness) it should also be something like
>
> B = |T^V|/|V|

With that introduction, this makes a lot more sense.

> in exterior algebra where T is the wedge product of weighted
> vals, V is the weighted JI vector and |...| is the inner
> product that gives lengths, areas, and so on. The |V| is so
> that the size of the JI vector doesn't matter, only its
> orientation. I already said that the scalar complexity is
> related to the length of the val in complexity space, so
>
> k = |T|
>
> In practice, this k isn't quite scalar complexity the same
> way as the B above isn't quite scalar badness. But they
> could have been defined like this. And the TOP-RMS error is
> the badness divided by the complexity, so
>
> E = |T^V|/|T||V|

Cool!
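Graham's formulas can be checked numerically in the rank-1 case, where the wedge product satisfies |T^V|² = |T|²|V|² − (T·V)². Here's a minimal Python sketch under that assumption; the function names and the 5-limit examples are illustrative, not code from the thread:

```python
import math

def weighted(val, primes):
    """Weight each mapping entry by 1/log2(p) so the JI vector
    becomes (1, 1, ..., 1)."""
    return [v / math.log2(p) for v, p in zip(val, primes)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sine_error(val, primes):
    """E = |T^V| / (|T||V|): the sine of the angle between the weighted
    val T and the JI vector V.  For a single val the wedge product
    satisfies |T^V|^2 = |T|^2 |V|^2 - (T.V)^2."""
    T = weighted(val, primes)
    V = [1.0] * len(T)
    cos2 = dot(T, V) ** 2 / (dot(T, T) * dot(V, V))
    return math.sqrt(max(0.0, 1.0 - cos2))

primes = (2, 3, 5)
e12 = sine_error((12, 19, 28), primes)   # 5-limit 12-equal
e13 = sine_error((13, 21, 30), primes)   # 5-limit 13-equal
```

As expected, 12-equal comes out with a smaller error than 13-equal in the 5-limit.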

🔗Herman Miller <hmiller@IO.COM>

3/19/2008 6:52:19 PM

Graham Breed wrote:

> It doesn't work in general and here are some TOP-RMS examples where it doesn't work:
>
> <5, 8, 12, 14, 17] <5, 8, 12, 14, 18, 19]
> <7, 11, 16, 19, 24] <7, 11, 16, 20, 24, 26]
> <11, 17, 25, 30] <11, 17, 25, 31, 38]
> <11, 17, 25, 31, 38] <11, 17, 25, 30, 38, 40]
> <13, 21, 30] <13, 21, 31, 37]
> <13, 21, 31, 37] <13, 21, 30, 37, 45]
> <32, 51, 75] <32, 51, 74, 90]
> <32, 51, 74, 90, 111] <32, 51, 75, 90, 111, 119]
> <36, 57, 84, 101, 125] <36, 57, 83, 101, 124, 133]
> <48, 76, 111, 135, 166] <48, 76, 112, 135, 166, 178]

That could be one difference between TOP-RMS and TOP-MAX, then. For TOP-MAX I get:

<5, 8, 12, 14, 18, 19]
<7, 11, 16, 20, 24, 26]
<11, 17, 25, 30, 37, 40]
<13, 21, 30, 37, 45, 49]
<32, 51, 75, 90, 111, 119]
<36, 57, 84, 101, 125, 133]
<48, 76, 111, 134, 166, 177]

🔗Graham Breed <gbreed@gmail.com>

3/19/2008 10:26:30 PM

Herman Miller wrote:

> That could be one difference between TOP-RMS and TOP-MAX, then. For TOP-MAX I get:
>
> <5, 8, 12, 14, 18, 19]
> <7, 11, 16, 20, 24, 26]
> <11, 17, 25, 30, 37, 40]
> <13, 21, 30, 37, 45, 49]
> <32, 51, 75, 90, 111, 119]
> <36, 57, 84, 101, 125, 133]
> <48, 76, 111, 134, 166, 177]

Yes, the primes act more independently in TOP-max. But still

[51, 81, 118, 143] [51, 81, 119]
[70, 111, 163, 197, 242, 259] [70, 111, 163, 197, 243]
[91, 144, 211, 255, 314, 336] [91, 144, 211, 255, 315]

Graham

🔗Graham Breed <gbreed@gmail.com>

3/19/2008 10:35:25 PM

Herman Miller wrote:

> I like these geometrical interpretations. I could be a little bit biased from my background, but I'm guessing that a more general audience that's not very mathematically inclined will have an easier time with something that can be represented as geometry, even if a few things such as distance metrics might need explanation.

Different people prefer different explanations. Personally I think the geometry comes out very well, and I can keep the ideas in my head, but I trust the linear algebra more. So I verify the geometry by checking it with the linear algebra.

>> Now, Gene's val space is correct in so far as it goes. It isn't as useful a geometry as a complexity space with a Euclidean metric though because it only measures equal temperaments, not higher ranks. I don't know if it's possible to extend it to higher ranks or not. That's why I said "it *may* work". I apologize if I gave the impression that his val space might not work the way he used it.
>
> Rank 2 temperaments are planes in this space. They can be defined by two rank 1 temperaments which they pass through, or by their normal vectors, which represent the vanishing intervals (unison vectors). The TOP error is the distance to the closest point on the plane. (The difficulty in calculating it is based on the taxicab nature of the distance measurement). My breakthrough in calculating TOP for rank 2 temperaments came when I realized how to simplify the problem by projecting it into a lower dimension.

The difficulty is that there aren't supposed to be angles when you use the taxicab metric. How do you define the complexity of the plane? With a Euclidean metric it's the area of the parallelogram described by the vals of the two equal temperaments. But how do you abstract that to a space without angles?

Can you even talk of projections without angles?

>> In Paul Erlich's "Middle Path" paper there's a diagram where equal temperaments are represented by points and rank 2 temperaments by the lines between them. This is also a projection of tuning space giving the components orthogonal to the JI line. It's an octave-equivalent tuning space. The Euclidean distance from the origin will be the STD error of the tempered tuning (assuming the rank 2 temperaments really are straight lines) which approximates the unoptimized TOP-RMS error.
>
> It can also be seen as a slice through tuning space (at a point where the octave is just). I've made other similar sorts of diagrams from different slices through tuning space.

Maybe, or it's a projection onto a slice.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/19/2008 10:55:02 PM

Carl Lumma wrote:

>> To try and clarify this, I'll introduce the concept of "tuning spaces". In a tuning space, each point represents the tunings of a set of prime intervals. Hence it's a regular tempered tuning. One unique point in such a space will be the untempered tuning of just intonation, so you can call it the JIP.
>
> There will be other points, n*JIP, which are equivalent, but have the correct distance from the val you're trying to get the error of. I think.

No, not in a "tuning space". JI has a 2:1 of 1200 cents, 3:1 of 1901.955 cents, and so on.

>> Vals are the points in a tuning space with integer coordinates. The line from the origin that passes through the val defines an equal temperament class. Each point on that line represents a specific tuning of the octave. The optimal tuning of the equal temperament is represented by the point on this line that gets closest to the JIP given whatever metric you imposed before. Gene gets this right on his web page as well.
>
> This I don't get. What do you mean by equal temperament class? In the 5-limit, for example, the line from the origin through the 12-ET val defines a series of ETs... if you want something with 12 notes to the approximate octave you need to find the point on the line closest to 12*JIP. I think.

An equal temperament class is an equal temperament with allowance for different octave stretches. Each point on the line has the same mapping (the first val the line passes through).

>> Now, in his e-mail Gene defines an "n*JIP" which is presumably the just intonation point multiplied by the number of notes to the octave.
>
> He clearly defines n in that message as the number of notes to the octave.

Does it have to be an integer?

>> His patent val will indeed be the closest val to this point. But this fact doesn't have any special meaning because the "n*JIP" isn't special.
>
> n*JIP is special, and so is the patent val. It's just not uniquely special, as he later pointed out. It's clear that whoever wants an efficient way to calculate the val with least TOP damage would do well to first calculate the semi-standard vals, since it is apparently among them.

Sure you could do that, but why do you need the geometry to tell you?

>> I prefer to think about a lattice of equal temperament mappings (vals) without considering their tunings. I believe this is a lattice,
>
> Gene even says it is a bona fide lattice.

But the JI line isn't part of it. Also the "badness space" isn't a true vector space of tunings because each point on the JI line is an identity. I think that stops it being a lattice as well.

>> In a complexity space, the badness of an equal temperament is measured by the distance from its val to the JI line I talk about.
>
> You mean the error?

No, I mean the badness. The distance from the ET line to the JI point is the error. The val is roughly n times further out for n notes to the octave (exactly so if n is the number of steps to a 2:1 in the optimal tuning). The ET line and the JI line both pass through the origin. So the further out you get the further apart they get. Hence the distance from the val to the JI line is about n times the error, giving a badness.

Strictly speaking, even if n is a real number this relationship isn't exact. The shortest line from a point to a line is always perpendicular to that line. So the difference between badness and n*error is the difference between a sine and a tangent, which sounds about right for STD vs TOP-max error.
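The badness ≈ n × error relationship is easy to verify numerically. A hedged Python sketch, assuming Tenney weighting (divide each mapping entry by log2 of its prime) so the JIP becomes the all-ones vector; the variable names are mine, not from the thread:

```python
import math

primes = (2, 3, 5)
val = (12, 19, 28)                                    # 5-limit 12-equal

T = [v / math.log2(p) for v, p in zip(val, primes)]   # weighted val
V = [1.0] * len(T)                                    # weighted JIP

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

# Both distances are a length times sin(theta), for the same angle theta
# between the ET line and the JI line (both pass through the origin).
sin_theta = math.sqrt(max(0.0, 1.0 - dot(T, V) ** 2 / (dot(T, T) * dot(V, V))))
error = norm(V) * sin_theta     # distance from the JIP to the ET line
badness = norm(T) * sin_theta   # distance from the val to the JI line

ratio = badness / error         # comes out close to n = 12
```

The ratio is |T|/|V|, which is the number of steps to a 2:1 up to the octave stretch, hence very close to 12 here.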

>> The JI line is the line from the origin passing through the JIP.
>
> And I think the closest point (assuming the size metric) on the JIP line will be n*JIP.

Only if n is a real number including the optimal scale stretch.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/19/2008 11:12:18 PM

Back to this...

Herman Miller wrote:
> Rank 2 temperaments are planes in this space. They can be defined by two rank 1 temperaments which they pass through, or by their normal vectors, which represent the vanishing intervals (unison vectors). The TOP error is the distance to the closest point on the plane. (The difficulty in calculating it is based on the taxicab nature of the distance measurement). My breakthrough in calculating TOP for rank 2 temperaments came when I realized how to simplify the problem by projecting it into a lower dimension.

A rank 2 temperament class is also a line in "complexity space". (Hence I still think of them as "linear temperaments".) You can then define the complexity as the shortest distance from the line to the origin and the badness as the shortest distance from the
temperament class line to the JI line. Maybe it works for a taxicab metric.

Graham

🔗Carl Lumma <carl@lumma.org>

3/20/2008 12:09:30 AM

[I'm answering this and then going back to the previous.]

Graham wrote...

>>> To try and clarify this, I'll introduce the concept of
>>> "tuning spaces". In a tuning space, each point represents
>>> the tunings of a set of prime intervals. Hence it's a
>>> regular tempered tuning. One unique point in such a space
>>> will be the untempered tuning of just intonation, so you can
>>> call it the JIP.
>>
>> There will be other points, n*JIP, which are equivalent,
>> but have the correct distance from the val you're trying
>> to get the error of. I think.
>
>No, not in a "tuning space". JI has a 2:1 of 1200 cents,
>3:1 of 1901.955 cents, and so on.

All the good vals are very close together in this space, then.
This can't be what you mean, and isn't what Gene means by
val space. That space has units of generators, not cents.

>>> Vals are the points in a tuning space with integer
>>> coordinates. The line from the origin that passes through
>>> the val defines an equal temperament class. Each point on
>>> that line represents a specific tuning of the octave.
>>> The optimal tuning of the equal temperament is represented by
>>> the point on this line that gets closest to the JIP given
>>> whatever metric you imposed before. Gene gets this right on
>>> his web page as well.
>>
>> This I don't get. What do you mean by equal temperament
>> class? In the 5-limit, for example, the line from the origin
>> through the 12-ET val defines a series of ETs... if you
>> want something with 12 notes to the approximate octave you
>> need to find the point on the line closest to 12*JIP. I think.
>
>An equal temperament class is an equal temperament with
>allowance for different octave stretches. Each point on the
>line has the same mapping (the first val the line passes
>through).

Each point on the line in val space has a different mapping.
In the 3-limit, the line through 12-ET nearly hits < 5 8 |
and < 7 11 |.

>>> Now, in his e-mail Gene defines an "n*JIP" which is
>>> presumably the just intonation point multiplied by the
>>> number of notes to the octave.
>>
>> He clearly defines n in that message as the number of notes
>> to the octave.
>
>Does it have to be an integer?

He doesn't say.

>>> His patent val will indeed
>>> be the closest val to this point. But this fact doesn't have
>>> any special meaning because the "n*JIP" isn't special.
>>
>> n*JIP is special, and so is the patent val. It's just
>> not uniquely special, as he later pointed out. Its clear
>> that whoever wants an efficient way to calculate the val
>> with least TOP damage would do well to first calculate
>> the semi-standard vals, since it is apparently among them.
>
>Sure you could do that, but why do you need the geometry to
>tell you?

I don't know about geometry, but presumably it's a helluva
lot easier to find all the semistandard vals and compare
their TOP damage than the approaches suggested in this thread
so far. If Gene used geometry to figure that out, then
that's why geometry.

>>> I prefer to think about a lattice of equal
>>> temperament mappings (vals) without considering their
>>> tunings. I believe this is a lattice,
>>
>> Gene even says it is a bona fide lattice.
>
>But the JI line isn't part of it.

So?

>Also the "badness space"
>isn't a true vector space of tunings because each point on
>the JI line is an identity. I think that stops it being a
>lattice as well.

I guess this is where I should go back to the previous
message.

>>> In a complexity space, the badness of an equal temperament
>>> is measured by the distance from its val to the JI line I
>>> talk about.
>>
>> You mean the error?
>
>No, I mean the badness. The distance from the ET line to
>the JI point is the error. The val is roughly n times
>further out for n notes to the octave (exactly so if n is
>the number of steps to a 2:1 in the optimal tuning)

Why do you think he multiplies the JIP by n?

>The ET
>line and the JI line both pass through the origin. So the
>further out you get the further apart they get. Hence the
>distance from the val to the JI line is about n times the
>error, giving a badness.

That seems a roundabout way to get badness.

-Carl

🔗Carl Lumma <carl@lumma.org>

3/20/2008 12:43:37 AM

>>>> Now, in his e-mail Gene defines an "n*JIP" which is
>>>> presumably the just intonation point multiplied by the
>>>> number of notes to the octave.
>>>
>>> He clearly defines n in that message as the number of notes
>>> to the octave.
>>
>>Does it have to be an integer?
>
>He doesn't say.

Actually, in the post where he mentions n*JIP, I believe
he's using the classic def. of "ET" where octaves must be
pure, and therefore it is n*JIP. On the TOP page, though,
he grows the ball out from the JIP until it touches the
val line. This will give a non-integer n most often.
So he's right in both places.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/20/2008 3:08:52 AM

I wrote:
> A rank 2 temperament class is also a line in "complexity space".

Which is surely not the case :(

🔗Graham Breed <gbreed@gmail.com>

3/20/2008 4:35:50 AM

Carl Lumma wrote:
> Graham wrote...

>>> There will be other points, n*JIP, which are equivalent,
>>> but have the correct distance from the val you're trying
>>> to get the error of. I think.
>> No, not in a "tuning space". JI has a 2:1 of 1200 cents, 3:1 of 1901.955 cents, and so on.
>
> All the good vals are very close together in this space, then.
> This can't be what you mean, and isn't what Gene means by
> val space. That space has units of generators, not cents.

All the good tunings are very close together. The vals are way out according to their complexities as measured in "complexity space". What Gene described clearly gives a tempered tuning to each point. Otherwise how can the error be the closest approach to the JI point? The units are weighted prime intervals, but you can still associate intervals in cents with each point.

>> An equal temperament class is an equal temperament with allowance for different octave stretches. Each point on the line has the same mapping (the first val the line passes through).
>
> Each point on the line in val space has a different mapping.
> In the 3-limit, the line through 12-ET nearly hits < 5 8 |
> and < 7 11 |.

It has a different tuning map. Only the lattice points are mappings from JI. Other points can inherit a mapping from the line they sit on. The 12-ET line doesn't hit those points so it doesn't have those mappings.

> I don't know about geometry, but presumably it's a helluva
> lot easier to find all the semistandard vals and compare
> their TOP damage than the approaches suggested in this thread
> so far. If Gene used geometry to figure that out, then
> that's why geometry.

No because finding the semistandard vals is as difficult as the general problem.

>>>> I prefer to think about a lattice of equal temperament mappings (vals) without considering their tunings. I believe this is a lattice,
>>> Gene even says it is a bona fide lattice.
>> But the JI line isn't part of it.
> So?

So it stops being a lattice when you add the JI line.

>>>> In a complexity space, the badness of an equal temperament is measured by the distance from its val to the JI line I talk about.
>>> You mean the error?
>> No, I mean the badness. The distance from the ET line to the JI point is the error. The val is roughly n times further out for n notes to the octave (exactly so if n is the number of steps to a 2:1 in the optimal tuning)
>
> Why do you think he multiplies the JIP by n?

I don't see why I should be speculating on anybody's motives.

>> The ET line and the JI line both pass through the origin. So the further out you get the further apart they get. Hence the distance from the val to the JI line is about n times the error, giving a badness.
>
> That seems a roundabout way to get badness.

How else do you propose to do it?

Graham

🔗Carl Lumma <carl@lumma.org>

3/20/2008 9:40:49 AM

Graham wrote...
>>>> There will be other points, n*JIP, which are equivalent,
>>>> but have the correct distance from the val you're trying
>>>> to get the error of. I think.
>>> No, not in a "tuning space". JI has a 2:1 of 1200 cents,
>>> 3:1 of 1901.955 cents, and so on.
>>
>> All the good vals are very close together in this space, then.
>> This can't be what you mean, and isn't what Gene means by
>> val space. That space has units of generators, not cents.
>
>All the good tunings are very close together. The vals are
>way out according to their complexities as measured in
>"complexity space".

The tunings are in units of generators, so they're very near
the integers of the val lattice. The JIP is in size units,
but can be moved into position with n*JIP. Or alternatively,
the size functional can be imposed on every point in val
space, which seems to be what you're thinking.

>The units are weighted prime intervals, but you can still
>associate intervals in cents with each point.

Unless we can figure out how we're completely missing
each other on sentences like this, we're doomed. Can we
first stick to talking about Gene's stuff before moving
on to yours?

>> I don't know about geometry, but presumably it's a helluva
>> lot easier to find all the semistandard vals and compare
>> their TOP damage than the approaches suggested in this thread
>> so far. If Gene used geometry to figure that out, then
>> that's why geometry.
>
>No because finding the semistandard vals is as difficult as
>the general problem.

I never saw Gene give his method for this, but he described
the geometric condition. How do you think he did it?

>>>>> In a complexity space, the badness of an equal temperament
>>>>> is measured by the distance from its val to the JI line I
>>>>> talk about.
>>>>
>>>> You mean the error?
>>>
>>> No, I mean the badness. The distance from the ET line to
>>> the JI point is the error. The val is roughly n times
>>> further out for n notes to the octave (exactly so if n is
>>> the number of steps to a 2:1 in the optimal tuning)
>>
>> Why do you think he multiplies the JIP by n?
>
>I don't see why I should be speculating on anybody's motives.

That was a rhetorical question. Look at what you had
just written.

>>> The ET
>>> line and the JI line both pass through the origin. So the
>>> further out you get the further apart they get. Hence the
>>> distance from the val to the JI line is about n times the
>>> error, giving a badness.
>>
>> That seems a roundabout way to get badness.
>
>How else do you propose to do it?

If the justification for finding this distance is that it
approximates n, why not just use n?

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/20/2008 7:01:22 PM

Carl Lumma wrote:
> Graham wrote...
>>>>> There will be other points, n*JIP, which are equivalent,
>>>>> but have the correct distance from the val you're trying
>>>>> to get the error of. I think.
>>>> No, not in a "tuning space". JI has a 2:1 of 1200 cents, 3:1 of 1901.955 cents, and so on.
>>> All the good vals are very close together in this space, then.
>>> This can't be what you mean, and isn't what Gene means by
>>> val space. That space has units of generators, not cents.
>> All the good tunings are very close together. The vals are way out according to their complexities as measured in "complexity space".
>
> The tunings are in units of generators, so they're very near
> the integers of the val lattice. The JIP is in size units,
> but can be moved into position with n*JIP. Or alternatively,
> the size functional can be imposed on every point in val
> space, which seems to be what you're thinking.

The tunings are all over the place. Every point in the space is a tuning. Naturally some of them will be near the integers but some won't.

I brought in the terms "tuning space" and "complexity space" to highlight different ways of interpreting the geometry. The way Gene talks about it is a tuning space. A val defines an equal temperament class as the line going through the origin and the val. Each point on that line is a specific tuning of the octave. The optimal tuning is the nearest point on the line (or plane or so on for higher ranks) to the unique just intonation point. Once you find that tuning, the distance from it to the JIP is the optimal error for that temperament class.
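The projection described here can be sketched in code, assuming weighted coordinates where the JIP is (1, 1, 1, ...): the nearest point on the ET line {c·T} to the JIP V is at c = (T·V)/(T·T), and the first coordinate of that point gives the retuned octave. An illustrative Python sketch, not code from the thread:

```python
import math

primes = (2, 3, 5)
val = (12, 19, 28)   # 5-limit 12-equal

T = [v / math.log2(p) for v, p in zip(val, primes)]   # weighted val
V = [1.0] * len(T)                                    # weighted JIP

# The nearest point on the ET line {c*T : c real} to the JIP is at
# c = (T.V)/(T.T), by orthogonal projection.
c = sum(t * v for t, v in zip(T, V)) / sum(t * t for t in T)

# The first coordinate of c*T is the retuned octave in octaves
# (the weight on prime 2 is 1/log2(2) = 1); convert to cents.
octave_cents = c * T[0] * 1200.0
```

For 12-equal in the 5-limit this gives an optimal octave slightly flatter than 1200 cents, as you'd expect from the sharp fifths and major thirds.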

In this context it makes no sense to move the JIP around. You could move it closer to the origin and get an arbitrarily small error for any temperament class.

The other interpretation is a complexity space. Here, the lattice points are vals representing an equal temperament class with a specific mapping from JI. Points off the lattice have no meaning. The distance of a val from the origin is the complexity of the temperament. The optimal badness of a temperament class is the distance from its val to the JI line, which will include your n*JIP points. But as the line doesn't belong to the lattice, points on it have no special meaning.

Whatever interpretation you use there's only one point that has the correct distance to give optimal error or badness (at least for Euclidean metrics where there's a single optimum).

>> The units are weighted prime intervals, but you can still
>> associate intervals in cents with each point.
>
> Unless we can figure out how we're completely missing
> each other on sentences like this, we're doomed. Can we
> first stick to talking about Gene's stuff before moving
> on to yours?

You said before there was no essential difference.

Gene says "We want to find the value of t giving the closest point on this line to the JIP; this will be the size of the retuned octave". So why do we want to find this t and if it gives the size of an octave why not describe it in cents?

It only makes sense if each point on the line is a regular temperament (by my definition): a specific tuning of prime intervals with a mapping from just intonation. Arbitrary points in the space are literally tempered tunings: systematic deviations from just intonation without a mapping from just intonation to a simpler set of generators.

>>> I don't know about geometry, but presumably it's a helluva
>>> lot easier to find all the semistandard vals and compare
>>> their TOP damage than the approaches suggested in this thread
>>> so far. If Gene used geometry to figure that out, then
>>> that's why geometry.
>> No because finding the semistandard vals is as difficult as the general problem.
>
> I never saw Gene give his method for this, but he described
> the geometric condition. How do you think he did it?

I don't know how he did it because he didn't tell me how he did it.

>>>>>> In a complexity space, the badness of an equal temperament is measured by the distance from its val to the JI line I talk about.
>>>>> You mean the error?
>>>> No, I mean the badness. The distance from the ET line to the JI point is the error. The val is roughly n times further out for n notes to the octave (exactly so if n is the number of steps to a 2:1 in the optimal tuning)
>>> Why do you think he multiplies the JIP by n?
>> I don't see why I should be speculating on anybody's motives.
>
> That was a rhetorical question. Look at what you had
> just written.

Then if you're trying to make a point, make it clearly.

>>>> The ET line and the JI line both pass through the origin. So the further out you get the further apart they get. Hence the distance from the val to the JI line is about n times the error, giving a badness.
>>> That seems a roundabout way to get badness.
>> How else do you propose to do it?
>
> If the justification for finding this distance is that it approximates n, why not just use n?

Use n to do what? This distance doesn't approximate anything. It *is* the badness of the temperament class. You can take n as the complexity, which implies the STD error -- which is an RMS-based way of measuring the error of a temperament when you keep the octaves pure.

Taking the distance from the val to the n*JIP gives you the badness according to an inferior error measure for pure octaves. It's the RMS or minimax of the deviations of the prime intervals assuming octaves are pure. This doesn't take proper account of intervals between primes. It isn't a good predictor of the true TOP error (see "Prime Based Errors and Complexities"). So that's why you don't use such a distance if you want the correct badness.

To get the badness as a distance between points, you need to project into a "badness space". To do this, you project each val relative to the JI line onto the hyperplane that's orthogonal to the JI line and passes through the origin. This is the x+y+z+...=0 hyperplane, and so all points on it have a zero mean, hence the RMS and STD are equal. The vals are still a lattice, but in a lower dimensional space. The JI line now becomes a JIP at the origin. The distance of a val from the JI line is the correct badness. Take a slice of the val lattice with n notes to the octave and the nearest point in that slice to the origin is the optimal mapping (by optimal badness, which implies optimal STD error for a Euclidean metric).
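The projection into badness space can be sketched numerically: subtracting the mean from a weighted val projects it onto the x+y+z+...=0 hyperplane, and the length of the result matches the direct distance from the val to the JI line. An illustrative Python sketch (my own, not code from the thread):

```python
import math

primes = (2, 3, 5)
val = (12, 19, 28)   # 5-limit 12-equal

w = [v / math.log2(p) for v, p in zip(val, primes)]   # weighted val

# Project onto the x+y+z = 0 hyperplane orthogonal to the JI line
# (1, 1, 1) by subtracting the mean from each coordinate.
mean = sum(w) / len(w)
p = [x - mean for x in w]
badness = math.sqrt(sum(x * x for x in p))

# Cross-check: the same number as the direct distance from w to the
# JI line, sqrt(|w|^2 - (w.V)^2 / |V|^2) with V = (1, 1, 1).
dist = math.sqrt(max(0.0, sum(x * x for x in w) - sum(w) ** 2 / len(w)))
```

Because all projected points have zero mean, their RMS and STD coincide, which is why this distance gives the badness implied by the STD error.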

Yes, this is more complicated than taking an arbitrary point and pretending it has a meaning, but it's also correct. I don't see why we should have terms for the wrong thing when we know the right thing.

Graham

🔗Herman Miller <hmiller@IO.COM>

3/20/2008 7:55:05 PM

Graham Breed wrote:

> The difficulty is that there aren't supposed to be angles when you use the taxicab metric. How do you define the complexity of the plane? With a Euclidean metric it's the area of the parallelogram described by the vals of the two equal temperaments. But how do you abstract that to a space without angles?

It wasn't clear that you were talking about complexity with your comment "it only measures equal temperaments, not higher ranks". I was thinking of distance. I'm not familiar with how areas would be measured.

> Can you even talk of projections without angles?

You can project along an axis.

🔗Graham Breed <gbreed@gmail.com>

3/20/2008 8:02:32 PM

Herman Miller wrote:
> Graham Breed wrote:

>> Can you even talk of projections without angles?
> You can project along an axis.

What does that mean?

Graham

🔗Carl Lumma <carl@lumma.org>

3/20/2008 10:39:37 PM

At 07:01 PM 3/20/2008, Graham wrote:
>The other interpretation is a complexity space. Here, the
>lattice points are vals representing an equal temperament
>class with a specific mapping from JI.

This is what Gene talks about.

>Points off the lattice have no meaning.

They represent tunings of the nearest val.

>The distance of a val from the
>origin is the complexity of the temperament.

It's an appealing kind of complexity. But since there's no
consensus on which complexity is best I don't think
statements like this are very constructive. I wasn't aware
that you alone had even settled on one kind of complexity.
Have you?

>The optimal
>badness of a temperament class is the distance from its val
>to the JI line, which will include your n*JIP points. But
>as the line doesn't belong to the lattice, points on it have
>no special meaning.

They allow the TOP tuning of the val to be quickly calculated.

>>> The units are weighted prime intervals, but you can still
>>> associate intervals in cents with each point.
>>
>> Unless we can figure out how we're completely missing
>> eachother on sentences like this, we're doomed. Can we
>> first stick to talking about Gene's stuff before moving
>> on to yours?
>
>You said before there was no essential difference.
>
>Gene says "We want to find the value of t giving the closest
>point on this line to the JIP; this will be the size of the
>retuned octave". So why do we want to find this t and if it
>gives the size of an octave why not describe it in cents?

Leaving it in generators lets you easily use it to calculate
the consistency level of the val, among other things.

>It only makes sense if each point on the line is a regular
>temperament (by my definition): a specific tuning of prime
>intervals with a mapping from just intonation.

Yes!

>Arbitrary
>points in the space are literally tempered tunings:
>systematic deviations from just intonation without a mapping
>from just intonation to a simpler set of generators.

It clearly makes sense to think of them as having the
mapping of the nearest val point (which will usually be
obtained simply by rounding each of the coordinates).

>>>> I don't know about geometry, but presumably it's a helluva
>>>> lot easier to find all the semistandard vals and compare
>>>> their TOP damage than the approaches suggested in this thread
>>>> so far. If Gene used geometry to figure that out, then
>>>> that's why geometry.
>>>
>>> No because finding the semistandard vals is as difficult as
>>> the general problem.
>>
>> I never saw Gene give his method for this, but he described
>> the geometric condition. How do you think he did it?
>
>I don't know how he did it because he didn't tell me how he
>did it.

It's not in his maple code either, that I can see. But I bet
you it's easier than the techniques Herman, you, and I were
just kicking around. I'll search the archives.

>>>>> The ET
>>>>> line and the JI line both pass through the origin. So the
>>>>> further out you get the further apart they get. Hence the
>>>>> distance from the val to the JI line is about n times the
>>>>> error, giving a badness.
>>>>
>>>> That seems a roundabout way to get badness.
>>> How else do you propose to do it?
>>
>> If the justification for finding this distance is that it
>> approximates n, why not just use n?
>
>Use n to do what? This distance doesn't approximate
>anything. It *is* the badness of the temperament class.

Once again, I wasn't aware anybody had decided on what badness
is best. If you have, can you say why and give a quick
comparison to other proposals that have been made, with examples?

>You can take n as the complexity, which implies the STD
>error -- which is an RMS-based way of measuring the error of
>a temperament when you keep the octaves pure.

This paragraph exemplifies paragraphs of yours I find
impossible to understand. You seem to make leap after leap
off things only you know about. How would I know what STD
error is and why should I believe in it? Please don't say
your pdf, because I find it impenetrable. Mainly because
it's made almost entirely of paragraphs like this one.

>Taking the distance from the val to the n*JIP gives you the
>badness

error

>It's the RMS or minimax of the deviations of the
>prime intervals assuming octaves are pure. This doesn't
>take proper account of intervals between primes.

Yes it does, if you do the thing with the minimum and
maximum errors and keep the signs. It's the same reason
you can compute consistency without looking at the
intervals between primes.

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/21/2008 3:02:36 AM

Carl Lumma wrote:
> At 07:01 PM 3/20/2008, Graham wrote:
>> The other interpretation is a complexity space. Here, the
>> lattice points are vals representing an equal temperament
>> class with a specific mapping from JI.
>
> This is what Gene talks about.

No. He says he's talking about that but then he introduces a just intonation point and talks about points off the lattice.

>> Points off the lattice have no meaning.
>
> They represent tunings of the nearest val.

Either it's a lattice or it isn't.

>> The distance of a val from the
>> origin is the complexity of the temperament.
>
> It's an appealing kind of complexity. But since there's no
> consensus on which complexity is best I don't think
> statements like this are very constructive. I wasn't aware
> that you alone had even settled on one kind of complexity.
> Have you?

Did I define a distance measure?

For an equal temperament I'd either use scalar complexity (the RMS of the weighted numbers of steps to the prime intervals) or the number of steps to the octave. Either way it doesn't make much difference. In this case a Euclidean metric gives you scalar complexity.

Strictly speaking it's also multiplied by the square root of the number of prime intervals. I forgot to mention that before.
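As a concrete check of the two complexity measures mentioned here, a sketch (assuming Tenney weighting, i.e. dividing each mapping entry by log2 of its prime; the function name is illustrative):

```python
import math

def scalar_complexity(val, primes):
    """RMS of the Tenney-weighted numbers of steps to the primes."""
    w = [v / math.log2(p) for v, p in zip(val, primes)]
    return math.sqrt(sum(x * x for x in w) / len(w))

# For 12-equal in the 5-limit the weighted entries all sit near 12,
# so scalar complexity is close to the other candidate measure,
# the number of steps to the octave:
c = scalar_complexity([12, 19, 28], [2, 3, 5])
print(c)   # ~12.02, against n = 12
```

Either way it indeed makes little difference for an equal temperament.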

>> The optimal
>> badness of a temperament class is the distance from its val
>> to the JI line, which will include your n*JIP points. But
>> as the line doesn't belong to the lattice, points on it have
>> no special meaning.
>
> They allow the TOP tuning of the val to be quickly calculated.

No, that'd be a tuning space.

>>>> The units are weighted prime intervals, but you can still
>>>> associate intervals in cents with each point.
>>> Unless we can figure out how we're completely missing
>>> each other on sentences like this, we're doomed. Can we
>>> first stick to talking about Gene's stuff before moving
>>> on to yours?
>> You said before there was no essential difference.
>>
>> Gene says "We want to find the value of t giving the closest
>> point on this line to the JIP; this will be the size of the
>> retuned octave". So why do we want to find this t and if it
>> gives the size of an octave why not describe it in cents?
>
> Leaving it in generators lets you easily use it to calculate
> the consistency level of the val, among other things.

When did I say you couldn't measure it in generators?

>> It only makes sense if each point on the line is a regular
>> temperament (by my definition): a specific tuning of prime
>> intervals with a mapping from just intonation.
>
> Yes!
>
>> Arbitrary
>> points in the space are literally tempered tunings:
>> systematic deviations from just intonation without a mapping
>> from just intonation to a simpler set of generators.
>
> It clearly makes sense to think of them as having the
> mapping of the nearest val point (which will usually be
> obtained simply by rounding each of the coordinates).

I don't think that does make sense.

>>>>> I don't know about geometry, but presumably it's a helluva
>>>>> lot easier to find all the semistandard vals and compare
>>>>> their TOP damage than the approaches suggested in this thread
>>>>> so far. If Gene used geometry to figure that out, then
>>>>> that's why geometry.
>>>> No because finding the semistandard vals is as difficult as
>>>> the general problem.
>>> I never saw Gene give his method for this, but he described
>>> the geometric condition. How do you think he did it?
>> I don't know how he did it because he didn't tell me how he
>> did it.
>
> It's not in his maple code either, that I can see. But I bet
> you it's easier than the techniques Herman, you, and I were
> just kicking around. I'll search the archives.

Oh, you have some maple code? Well, see if you can find anything.

>>>>>> The ET
>>>>>> line and the JI line both pass through the origin. So the
>>>>>> further out you get the further apart they get. Hence the
>>>>>> distance from the val to the JI line is about n times the
>>>>>> error, giving a badness.
>>>>> That seems a roundabout way to get badness.
>>>> How else do you propose to do it?
>>> If the justification for finding this distance is that it
>>> approximates n, why not just use n?
>> Use n to do what? This distance doesn't approximate
>> anything. It *is* the badness of the temperament class.
>
> Once again, I wasn't aware anybody had decided on what badness
> is best. If you have, can you say why and give a quick
> comparison to other proposals that have been made, with examples?

I didn't say anything about it being best or any context for it to be best in.

>> You can take n as the complexity, which implies the STD
>> error -- which is an RMS-based way of measuring the error of
>> a temperament when you keep the octaves pure.
>
> This paragraph exemplifies paragraphs of yours I find
> impossible to understand. You seem to make leap after leap
> off things only you know about. How would I know what STD
> error is and why should I believe in it? Please don't say
> your pdf, because I find it impenetrable. Mainly because
> it's made almost entirely of paragraphs like this one.

The badness is about n times the error. If you make it exactly n times the error, for a Euclidean metric, it implies the error must be the STD error. There's also the factor of the square root of the number of primes that I forgot to mention before.
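This factorization can be verified numerically: for a Euclidean metric on the Tenney-weighted val w, the distance from w to the line through the origin and the JIP (direction (1, 1, ..., 1)) works out to sqrt(k) times the standard deviation of w, i.e. n times an RMS-style pure-octave error times the square-root-of-primes factor. A sketch (the Tenney weighting and the population standard deviation are my assumptions about the intended quantities):

```python
import math

primes = [2, 3, 5]
val = [12, 19, 28]
n = val[0]
w = [v / math.log2(p) for v, p in zip(val, primes)]  # weighted val
k = len(w)

# Distance from w to the JI line (through the origin along (1,...,1)):
u = [1 / math.sqrt(k)] * k                 # unit vector along the line
t = sum(wi * ui for wi, ui in zip(w, u))   # length of the projection
dist = math.sqrt(sum((wi - t * ui) ** 2 for wi, ui in zip(w, u)))

# The same number factors as n * sqrt(k) * (std(w)/n), so the error
# term implied by this badness is the standard deviation of the
# weighted val divided by n:
mean = sum(w) / k
std = math.sqrt(sum((x - mean) ** 2 for x in w) / k)
print(dist, n * math.sqrt(k) * (std / n))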

>> Taking the distance from the val to the n*JIP gives you the
>> badness
>
> error
>
>> It's the RMS or minimax of the deviations of the
>> prime intervals assuming octaves are pure. This doesn't
>> take proper account of intervals between primes.

I should have said "worst-abs" not "minimax".

> Yes it does, if you do the thing with the minimum and
> maximum errors and keep the signs. It's the same reason
> you can compute consistency without looking at the
> intervals between primes.

The RMS or worst-abs doesn't take proper account of intervals between primes. If you "do the thing with the minimum and maximum errors" it can only work if the result is not the RMS or worst-abs.

Graham

🔗Carl Lumma <carl@lumma.org>

3/21/2008 9:33:02 AM

At 03:02 AM 3/21/2008, you wrote:
>Carl Lumma wrote:
>> At 07:01 PM 3/20/2008, Graham wrote:
>>> The other interpretation is a complexity space. Here, the
>>> lattice points are vals representing an equal temperament
>>> class with a specific mapping from JI.
>>
>> This is what Gene talks about.
>
>No. He says he's talking about that but then he introduces
>a just intonation point and talks about points off the lattice.

They're in the same space. It doesn't matter that the JIP
is off the val lattice.

>>> Points off the lattice have no meaning.
>>
>> They represent tunings of the nearest val.
>
>Either it's a lattice or it isn't.

The vals live on a lattice, the other points live in the space.

>>> The distance of a val from the
>>> origin is the complexity of the temperament.
>>
>> It's an appealing kind of complexity. But since there's no
>> consensus on which complexity is best I don't think
>> statements like this are very constructive. I wasn't aware
>> that you alone had even settled on one kind of complexity.
>> Have you?
>
>Did I define a distance measure?
>
>For an equal temperament I'd either use scalar complexity
>(the RMS of the weighted numbers of steps to the prime
>intervals) or the number of steps to the octave. Either way
>it doesn't make much difference. In this case a Euclidean
>metric gives you scalar complexity.
>Strictly speaking it's also multiplied by the square root of
>the number of prime intervals. I forgot to mention that before.

Can you demonstrate that the formulae are the same?

>>> The optimal
>>> badness of a temperament class is the distance from its val
>>> to the JI line, which will include your n*JIP points. But
>>> as the line doesn't belong to the lattice, points on it have
>>> no special meaning.
>>
>> They allow the TOP tuning of the val to be quickly calculated.
>
>No, that'd be a tuning space.

If you say so. I'll keep on calculating it with this method
for now.

>>>>> The units are weighted prime intervals, but you can still
>>>>> associate intervals in cents with each point.
>>>> Unless we can figure out how we're completely missing
>>>> each other on sentences like this, we're doomed. Can we
>>>> first stick to talking about Gene's stuff before moving
>>>> on to yours?
>>>
>>> You said before there was no essential difference.
>>>
>>> Gene says "We want to find the value of t giving the closest
>>> point on this line to the JIP; this will be the size of the
>>> retuned octave". So why do we want to find this t and if it
>>> gives the size of an octave why not describe it in cents?
>>
>> Leaving it in generators lets you easily use it to calculate
>> the consistency level of the val, among other things.
>
>When did I say you couldn't measure it in generators?

For the last three messages I've been trying to convince you
I can.

>> It clearly makes sense to think of them as having the
>> mapping of the nearest val point (which will usually be
>> obtained simply by rounding each of the coordinates).
>
>I don't think that does make sense.

What do you call this

(11.976740698521905 18.963172772659682 27.94572829655111)

?

>>>>>> I don't know about geometry, but presumably it's a helluva
>>>>>> lot easier to find all the semistandard vals and compare
>>>>>> their TOP damage than the approaches suggested in this thread
>>>>>> so far. If Gene used geometry to figure that out, then
>>>>>> that's why geometry.
>>>>>
>>>>> No because finding the semistandard vals is as difficult as
>>>>> the general problem.
>>>>
>>>> I never saw Gene give his method for this, but he described
>>>> the geometric condition. How do you think he did it?
>>> I don't know how he did it because he didn't tell me how he
>>> did it.
>>
>> It's not in his maple code either, that I can see. But I bet
>> you it's easier than the techniques Herman, you, and I were
>> just kicking around. I'll search the archives.
>
>Oh, you have some maple code? Well, see if you can find
>anything.

It's not in there, unfortunately.

>>>>>>> The ET
>>>>>>> line and the JI line both pass through the origin. So the
>>>>>>> further out you get the further apart they get. Hence the
>>>>>>> distance from the val to the JI line is about n times the
>>>>>>> error, giving a badness.
>>>>>>
>>>>>> That seems a roundabout way to get badness.
>>>>> How else do you propose to do it?
>>>>
>>>> If the justification for finding this distance is that it
>>>> approximates n, why not just use n?
>>>
>>> Use n to do what? This distance doesn't approximate
>>> anything. It *is* the badness of the temperament class.
>>
>> Once again, I wasn't aware anybody had decided on what badness
>> is best. If you have, can you say why and give a quick
>> comparison to other proposals that have been made, with examples?
>
>I didn't say anything about it being best or any context for
>it to be best in.

"It *is* the badness"...

>>> You can take n as the complexity, which implies the STD
>>> error -- which is an RMS-based way of measuring the error of
>>> a temperament when you keep the octaves pure.
>>
>> This paragraph exemplifies paragraphs of yours I find
>> impossible to understand. You seem to make leap after leap
>> off things only you know about. How would I know what STD
>> error is and why should I believe in it? Please don't say
>> your pdf, because I find it impenetrable. Mainly because
>> it's made almost entirely of paragraphs like this one.
>
>The badness is about n times the error. If you make it
>exactly n times the error, for a Euclidean metric, it
>implies the error must be the STD error. There's also the
>factor of the square root of the number of primes that I
>forgot to mention before.

I thought you were advocating STD error. Here it sounds
like you're not.

>>> Taking the distance from the val to the n*JIP gives you the
>>> badness
>>
>> error
>>
>>> It's the RMS or minimax of the deviations of the
>>> prime intervals assuming octaves are pure. This doesn't
>>> take proper account of intervals between primes.
>
>I should have said "worst-abs" not "minimax".
>
>> Yes it does, if you do the thing with the minimum and
>> maximum errors and keep the signs. It's the same reason
>> you can compute consistency without looking at the
>> intervals between primes.
>
>The RMS or worst-abs doesn't take proper account of
>intervals between primes. If you "do the thing with the
>minimum and maximum errors" it can only work if the result
>is not the RMS or worst-abs.

I don't know what "worst-abs" is, but it's how Paul calculated
TOP tunings for ETs.

-Carl

🔗Herman Miller <hmiller@IO.COM>

3/21/2008 4:47:40 PM

Graham Breed wrote:
> Herman Miller wrote:
>> Graham Breed wrote:
>
>>> Can you even talk of projections without angles?
>> You can project along an axis.
>
> What does that mean?

Essentially, dropping one of the coordinates. An orthogonal projection.

You might not think that's very useful with the taxicab metric, but the distance between two points in the original space is at least the distance between the two projected points, and possibly greater. If you project along each axis, one of those projections will have the correct distance.

🔗Graham Breed <gbreed@gmail.com>

3/21/2008 5:00:06 PM

Herman Miller wrote:
> Graham Breed wrote:
>> Herman Miller wrote:
>>> Graham Breed wrote:
>>>> Can you even talk of projections without angles?
>>> You can project along an axis.
>> What does that mean?
>
> Essentially, dropping one of the coordinates. An orthogonal projection.

So "orthogonal" means right angles which means angles.

> You might not think that's very useful with the taxicab metric, but the
> distance between two points in the original space is at least the
> distance between the two projected points, and possibly greater. If you
> project along each axis, one of those projections will have the correct
> distance.

It sounds like you're defining a notion of "parallel" which implies a definition of angles. Something like this:

http://www.reference.com/browse/wiki/Levi-Civita_connection

(Specifically that doesn't apply because the taxicab metric isn't differentiable.)

I'm not saying it won't work, but geometries without angles are tricky to talk about. You can have straight lines but they're only a property of the coordinate system. The measures should be independent of the coordinates.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/21/2008 7:49:25 PM

Carl Lumma wrote:
> At 03:02 AM 3/21/2008, you wrote:
>> Carl Lumma wrote:
>>> At 07:01 PM 3/20/2008, Graham wrote:
>>>> The other interpretation is a complexity space. Here, the
>>>> lattice points are vals representing an equal temperament
>>>> class with a specific mapping from JI.
>>> This is what Gene talks about.
>> No. He says he's talking about that but then he introduces
>> a just intonation point and talks about points off the lattice.
>
> They're in the same space. It doesn't matter that the JIP
> is off the val lattice.

I called them different spaces to try and make a distinction clear :-S

>>>> Points off the lattice have no meaning.
>>> They represent tunings of the nearest val.
>> Either it's a lattice or it isn't.
>
> The vals live on a lattice, the other points live in the space.

The vals live on a lattice, and measures defined on that lattice give errors and badnesses for the corresponding temperament classes, however they're tuned -- which means the optimal tuning. Taken as a tuning space, the optimal tunings don't correspond to the lattice points.

>>>> The distance of a val from the
>>>> origin is the complexity of the temperament.
>>> It's an appealing kind of complexity. But since there's no
>>> consensus on which complexity is best I don't think
>>> statements like this are very constructive. I wasn't aware
>>> that you alone had even settled on one kind of complexity.
>>> Have you?
>> Did I define a distance measure?
>>
>> For an equal temperament I'd either use scalar complexity
>> (the RMS of the weighted numbers of steps to the prime
>> intervals) or the number of steps to the octave. Either way
>> it doesn't make much difference. In this case a Euclidean
>> metric gives you scalar complexity.
>> Strictly speaking it's also multiplied by the square root of
>> the number of prime intervals. I forgot to mention that before.
>
> Can you demonstrate that the formulae are the same?

The Euclidean distance from the origin to x is

sqrt(x_1**2 + x_2**2 + ... + x_n**2)

the RMS of x is

sqrt[(x_1**2 + x_2**2 + ... + x_n**2)/n]
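The two formulas differ only by the factor sqrt(n), which is the square-root-of-primes factor mentioned above. A quick check (helper names are illustrative):

```python
import math

def euclidean_norm(x):
    """Euclidean distance from the origin to x."""
    return math.sqrt(sum(xi * xi for xi in x))

def rms(x):
    """Root mean square of the entries of x."""
    return math.sqrt(sum(xi * xi for xi in x) / len(x))

# The norm is sqrt(n) times the RMS, for any vector:
x = [12.0, 11.99, 12.06]
assert abs(euclidean_norm(x) - math.sqrt(len(x)) * rms(x)) < 1e-9
```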

>>>> The optimal
>>>> badness of a temperament class is the distance from its val
>>>> to the JI line, which will include your n*JIP points. But
>>>> as the line doesn't belong to the lattice, points on it have
>>>> no special meaning.
>>> They allow the TOP tuning of the val to be quickly calculated.
>> No, that'd be a tuning space.
>
> If you say so. I'll keep on calculating it with this method
> for now.

You can take a tuning space and multiply everything by n. But the logic is still that of a tuning space, so it'd be better to divide the val by n.

>>> It clearly makes sense to think of them as having the
>>> mapping of the nearest val point (which will usually be
>>> obtained simply by rounding each of the coordinates).
>> I don't think that does make sense.
>
> What do you call this
>
> (11.976740698521905 18.963172772659682 27.94572829655111)
>
> ?

Dunno. It isn't a weighted val, anyway.

Assuming it is an ET tuning, what would it become if you moved it off the ET line?

>>>>>>>> The ET
>>>>>>>> line and the JI line both pass through the origin. So the
>>>>>>>> further out you get the further apart they get. Hence the
>>>>>>>> distance from the val to the JI line is about n times the
>>>>>>>> error, giving a badness.
>>>>>>> That seems a roundabout way to get badness.
>>>>>> How else do you propose to do it?
>>>>> If the justification for finding this distance is that it
>>>>> approximates n, why not just use n?
>>>> Use n to do what? This distance doesn't approximate
>>>> anything. It *is* the badness of the temperament class.
>>> Once again, I wasn't aware anybody had decided on what badness
>>> is best. If you have, can you say why and give a quick
>>> comparison to other proposals that have been made, with examples?
>> I didn't say anything about it being best or any context for
>> it to be best in.
>
> "It *is* the badness"...

It's the badness that's defined by the geometry. Badness and complexity are the two primary quantities that the lattice gives. To get error you divide badness by complexity. But error can be derived another way (from an ET line in a tuning space) and maybe there are some lattices where the two definitions don't agree.

There could be a word for this kind of badness but I'm sure we won't be using it very much. For a Euclidean metric it's scalar badness. I'm not sure what it is for a taxicab metric.

Next week maybe I'll explain the geometry of a more useful badness measure.

>>>> You can take n as the complexity, which implies the STD
>>>> error -- which is an RMS-based way of measuring the error of
>>>> a temperament when you keep the octaves pure.
>>> This paragraph exemplifies paragraphs of yours I find
>>> impossible to understand. You seem to make leap after leap
>>> off things only you know about. How would I know what STD
>>> error is and why should I believe in it? Please don't say
>>> your pdf, because I find it impenetrable. Mainly because
>>> its made almost entirely of paragraphs like this one.
>> The badness is about n times the error. If you make it
>> exactly n times the error, for a Euclidean metric, it
>> implies the error must be the STD error. There's also the
>> factor of the square root of the number of primes that I
>> forgot to mention before.
>
> I thought you were advocating STD error. Here it sounds
> like you're not.

I'm just saying that STD error is what you get from scalar badness (Tenney weighting and a Euclidean space) and the number of notes to an equal temperament as complexity. It's some kind of octave-equivalent error. I'm sure it has a geometric meaning and it seems to be the straight line distances measured in Paul E's projective error space (or one like it where linear temperaments are straight lines).

>>>> Taking the distance from the val to the n*JIP gives you the
>>>> badness
>>> error
>>>
>>>> It's the RMS or minimax of the deviations of the
>>>> prime intervals assuming octaves are pure. This doesn't
>>>> take proper account of intervals between primes.
>> I should have said "worst-abs" not "minimax".
>>
>>> Yes it does, if you do the thing with the minimum and
>>> maximum errors and keep the signs. It's the same reason
>>> you can compute consistency without looking at the
>>> intervals between primes.
>> The RMS or worst-abs doesn't take proper account of
>> intervals between primes. If you "do the thing with the
>> minimum and maximum errors" it can only work if the result
>> is not the RMS or worst-abs.
>
> I don't know what "worst-abs" is, but it's how Paul calculated
> TOP tunings for ETs.

Worst-abs is the worst absolute value. Where did tunings come from? We were talking about errors assuming a tuning with pure octaves.

Graham

🔗Herman Miller <hmiller@IO.COM>

3/22/2008 2:42:52 PM

Graham Breed wrote:
> Herman Miller wrote:
>> Graham Breed wrote:
>>> Herman Miller wrote:
>>>> Graham Breed wrote:
>>>>> Can you even talk of projections without angles?
>>>> You can project along an axis.
>>> What does that mean?
>> Essentially, dropping one of the coordinates. An orthogonal projection.
> > So "orthogonal" means right angles which means angles.

"Orthogonal" means right angles in exactly the same way that "taxicab" means right angles!

🔗Graham Breed <gbreed@gmail.com>

3/24/2008 4:29:54 AM

Herman Miller wrote:

> "Orthogonal" means right angles in exactly the same way that "taxicab"
> means right angles!

No, a taxicab metric means you follow the lines. You can set whatever angle you like between axes and the distances are the same. 5-limit lattices are triangular with some extra connections, and they also support a taxicab metric.

In the past, Paul Erlich said he wanted a complexity measure that generalizes area with a taxicab metric. But then he also said that there shouldn't be angles defined in the space. I really don't know how you can have areas without angles and I can't think of any way to define a taxicab area that doesn't bring in a Euclidean metric by the back door.

Graham

🔗Carl Lumma <carl@lumma.org>

3/24/2008 8:05:14 AM

Graham wrote...

>>> For an equal temperament I'd either use scalar complexity
>>> (the RMS of the weighted numbers of steps to the prime
>>> intervals) or the number of steps to the octave. Either way
>>> it doesn't make much difference. In this case a Euclidean
>>> metric gives you scalar complexity.
>>> Strictly speaking it's also multiplied by the square root of
>>> the number of prime intervals. I forgot to mention that before.
>>
>> Can you demonstrate that the formulae are the same?
>
>The Euclidean distance from the origin to x is
>
>sqrt(x_1**2 + x_2**2 + ... + x_n**2)
>
>the RMS of x is
>
>sqrt[(x_1**2 + x_2**2 + ... + x_n**2)/n]

OK

>>>>> The optimal
>>>>> badness of a temperament class is the distance from its val
>>>>> to the JI line, which will include your n*JIP points. But
>>>>> as the line doesn't belong to the lattice, points on it have
>>>>> no special meaning.
>>>> They allow the TOP tuning of the val to be quickly calculated.
>>> No, that'd be a tuning space.
>>
>> If you say so. I'll keep on calculating it with this method
>> for now.
>
>You can take a tuning space and multiply everything by n.
>But the logic is still that of a tuning space, so it'd be
>better to divide the val by n.

Could do. That's what Gene's talking about with having
things 'all about the size of an octave, but that need not
concern us'.

>>>> It clearly makes sense to think of them as having the
>>>> mapping of the nearest val point (which will usually be
>>>> obtained simply by rounding each of the coordinates).
>>> I don't think that does make sense.
>>
>> What do you call this
>>
>> (11.976740698521905 18.963172772659682 27.94572829655111)
>>
>> ?
>
>Dunno. It isn't a weighted val, anyway.

It's the point in val space, not on the val lattice, representing
the optimal tuning of the <12 19 28] val, which is easily obtained
by rounding each of the coordinates above.
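The arithmetic here is easy to verify: each coordinate of the point, divided by the corresponding integer, gives the same scale factor, so the point lies on the ET line through <12 19 28], and rounding recovers the val. A sketch:

```python
point = (11.976740698521905, 18.963172772659682, 27.94572829655111)

# Rounding each coordinate recovers the val <12 19 28]:
val = tuple(round(x) for x in point)
print(val)   # (12, 19, 28)

# The ratios point[i] / val[i] all agree, so the point is a uniform
# scaling of the val, i.e. it lies on the line through the origin
# and the val (an ET line):
ratios = [p / v for p, v in zip(point, val)]
print(ratios)
```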

>>>>>>>>> The ET
>>>>>>>>> line and the JI line both pass through the origin. So the
>>>>>>>>> further out you get the further apart they get. Hence the
>>>>>>>>> distance from the val to the JI line is about n times the
>>>>>>>>> error, giving a badness.
>>>>>>>> That seems a roundabout way to get badness.
>>>>>>> How else do you propose to do it?
>>>>>> If the justification for finding this distance is that it
>>>>>> approximates n, why not just use n?
>>>>> Use n to do what? This distance doesn't approximate
>>>>> anything. It *is* the badness of the temperament class.
>>>> Once again, I wasn't aware anybody had decided on what badness
>>>> is best. If you have, can you say why and give a quick
>>>> comparison to other proposals that have been made, with examples?
>>> I didn't say anything about it being best or any context for
>>> it to be best in.
>>
>> "It *is* the badness"...
>
>It's the badness that's defined by the geometry.

There are many ways to define badness geometrically.

>>>>> You can take n as the complexity, which implies the STD
>>>>> error -- which is an RMS-based way of measuring the error of
>>>>> a temperament when you keep the octaves pure.
>>>> This paragraph exemplifies paragraphs of yours I find
>>>> impossible to understand. You seem to make leap after leap
>>>> off things only you know about. How would I know what STD
>>>> error is and why should I believe in it? Please don't say
>>>> your pdf, because I find it impenetrable. Mainly because
>>>> it's made almost entirely of paragraphs like this one.
>>> The badness is about n times the error. If you make it
>>> exactly n times the error, for a Euclidean metric, it
>>> implies the error must be the STD error. There's also the
>>> factor of the square root of the number of primes that I
>>> forgot to mention before.
>>
>> I thought you were advocating STD error. Here it sounds
>> like you're not.
>
>I'm just saying that STD error is what you get from scalar
>badness (Tenney weighting and a Euclidean space)

Again, if you've done a comparison of complexity/error/badness
measures and know which one is best and why, start a new
thread and explain it. Otherwise, this just sounds like more
of your frequent discoveries, which I'm sure are cool but I
can't tell if they're progressing to anything worth learning
about.

-Carl

🔗Herman Miller <hmiller@IO.COM>

3/24/2008 4:48:50 PM

Graham Breed wrote:
> Herman Miller wrote:
>
>> "Orthogonal" means right angles in exactly the same way that "taxicab"
>> means right angles!
>
> No, a taxicab metric means you follow the lines.

So you project along the lines. Where's the difficulty in that?

🔗Carl Lumma <carl@lumma.org>

3/25/2008 1:31:37 AM

I wrote...

> n*JIP is special, and so is the patent val. It's just
> not uniquely special, as he later pointed out. It's clear
> that whoever wants an efficient way to calculate the val
> with least TOP damage would do well to first calculate
> the semi-standard vals, since it is apparently among them.

Not true! See:

/tuning-math/message/12579

-Carl

🔗Graham Breed <gbreed@gmail.com>

3/25/2008 3:16:51 AM

Carl Lumma wrote:
> I wrote...
>
>> n*JIP is special, and so is the patent val. It's just
>> not uniquely special, as he later pointed out. It's clear
>> that whoever wants an efficient way to calculate the val
>> with least TOP damage would do well to first calculate
>> the semi-standard vals, since it is apparently among them.
>
> Not true! See:
>
> /tuning-math/message/12579

In that case I'm mystified as to what a "semi-standard val" is. The definition was "any val which is minimal". So what does "minimal" mean if not a minimum of something?

For that matter I never understood how standard vals could always be minimal.

Graham

🔗Graham Breed <gbreed@gmail.com>

3/25/2008 4:54:21 AM

Carl Lumma wrote:
> I wrote...
>
>> n*JIP is special, and so is the patent val. It's just
>> not uniquely special, as he later pointed out. It's clear
>> that whoever wants an efficient way to calculate the val
>> with least TOP damage would do well to first calculate
>> the semi-standard vals, since it is apparently among them.
>
> Not true! See:
>
> /tuning-math/message/12579

I think this message says what a semistandard val is:

/tuning-math/message/12195

"""
Given an r2 temperament, we can check for vals for n*period equal
divisions of the octave, and see which are minimal for val distance from
n*JIP. Very often there will be only one of them; in fact for most r2
temperaments there can be only one. These can be considered
semistandard vals, but in the typical case where it is unique, the
standard val for edo n*period and given r2 temperament. Of course this
idea applies to other temperament ranks also.
"""

So semistandard vals are minimally close to the n*JIP. That is, they give an equal temperament with the same max Tenney-weighted error as the standard val for a pure octave tuning. When the standard val isn't the TOP mapping, neither are the semistandard vals.
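Under that reading, the search can be sketched by brute force for the rank-1 case: for an n-note equal division, try every val whose entries are the floor or ceiling of n times log2 of each prime, and keep those at minimal distance from n*JIP. This ignores the restriction to vals supporting a given r2 temperament, and the Tenney-weighted Euclidean distance is my assumption; the quoted message doesn't pin the metric down.

```python
import itertools
import math

def semistandard_vals(n, primes=(2, 3, 5)):
    """Vals for n-edo at minimal weighted distance from n*JIP,
    by brute force over floor/ceiling candidates per prime."""
    logs = [math.log2(p) for p in primes]
    choices = [sorted({math.floor(n * L), math.ceil(n * L)}) for L in logs]
    best, best_d = [], None
    for val in itertools.product(*choices):
        # Tenney-weighted Euclidean distance from n*JIP = (n, ..., n):
        d = math.sqrt(sum((v / L - n) ** 2 for v, L in zip(val, logs)))
        if best_d is None or d < best_d - 1e-12:
            best, best_d = [val], d
        elif abs(d - best_d) <= 1e-12:
            best.append(val)    # a tie: another minimal val
    return best

print(semistandard_vals(12))   # a single minimal val here
```

In the typical case the minimum is unique and is the standard val, as the quoted message says.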

Graham

🔗Carl Lumma <carl@lumma.org>

3/25/2008 8:27:06 AM

At 03:16 AM 3/25/2008, you wrote:
>Carl Lumma wrote:
>> I wrote...
>>
>>> n*JIP is special, and so is the patent val. It's just
>>> not uniquely special, as he later pointed out. It's clear
>>> that whoever wants an efficient way to calculate the val
>>> with least TOP damage would do well to first calculate
>>> the semi-standard vals, since it is apparently among them.
>>
>> Not true! See:
>>
>> /tuning-math/message/12579
>
>In that case I'm mystified as to what a "semi-standard val"
>is. The definition was "any val which is minimal". So
>what does "minimal" mean if not a minimum of something?
>
>For that matter I never understood how standard vals could
>always be minimal.
>
>
> Graham

A standard/patent val is a val with the least distance in
val space from the n*JIP, where n is the integer mapping the
octave. There are sometimes other points on the val lattice
the same distance away (semi-standard).

But in certain cases, none of these vals are closest to the
m*JIP, where m is a real number describing the tuning of
the TOP octave.

So Gene's procedure is correct, but he's minimizing distance
from the wrong point.
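That description translates directly into code: in Tenney-weighted space the squared distance to n*JIP separates coordinate by coordinate, so the nearest val is found by rounding n times log2 of each prime. A sketch (the function name is illustrative):

```python
import math

def patent_val(n, primes=(2, 3, 5)):
    """The val for n-edo nearest to n*JIP: minimizing
    (v/log2(p) - n)^2 per coordinate gives v = round(n*log2(p))."""
    return tuple(round(n * math.log2(p)) for p in primes)

print(patent_val(12))                 # (12, 19, 28)
print(patent_val(31, (2, 3, 5, 7)))   # (31, 49, 72, 87)
```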

-Carl