A Different Kind of Badness

🔗cityoftheasleep <igliashon@...>

2/3/2012 10:12:28 AM

I'm leaving perception (and thus, reality) out of it this time, because that's just too thorny and sensitive to slog through. Instead, I propose this:

1. Define a finite set of intervals {j}, such that all intervals satisfy some arbitrary delimitation of JI (i.e., some n-odd-limit, or some arbitrary log(n*d) threshold, or some other arbitrary way of defining JI as a finite number of ratios over a given pitch range).
2. Define another finite set of intervals {t}, such that all members are arbitrarily chosen.
3. Given these, it will be possible to describe any interval in {t} according to its proximity to any and all intervals in {j}.
4. Given 3, there will be at least one interval in {j} to which any interval in {t} is closest.
5. We can define a new set of intervals {a} composed of all the intervals in {j} to which intervals in {t} are closest.
6. Given 3, we can alternatively define a set {b} composed of all the intervals in {j} to which intervals in {t} are within some arbitrary bound of proximity.
7. Define a set {c} as the intersection of sets {a} and {b}.

This is the "hard cutoff" approach to finding the "quantitatively best" interpretation of any tuning's approximation to JI, given some definition of JI.
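
For concreteness, steps 1-7 can be sketched in a few lines of Python. The particular choices here -- the contents of {j}, 12-EDO as {t}, and the 15-cent cutoff -- are illustrative assumptions, not part of the proposal itself:

```python
import math
from fractions import Fraction

def cents(ratio):
    """Size of a frequency ratio in cents."""
    return 1200 * math.log2(ratio)

# Step 1: {j} -- a small, arbitrary JI set (an illustrative choice).
J = {r: cents(Fraction(r)) for r in
     ["3/2", "4/3", "5/4", "6/5", "5/3", "8/5",
      "9/8", "16/9", "7/4", "9/5", "7/6", "7/5"]}

# Step 2: {t} -- here, the eleven non-unison intervals of 12-EDO, in cents.
T = [100 * k for k in range(1, 12)]

# Steps 3-5: the nearest member of {j} for each t; {a} collects them.
nearest = {t: min(J, key=lambda r: abs(J[r] - t)) for t in T}
a = set(nearest.values())

# Step 6: {b} -- members of {j} that some t approaches within a hard
# cutoff (15 cents is another arbitrary choice).
CUTOFF = 15.0
b = {r for r in J if any(abs(J[r] - t) <= CUTOFF for t in T)}

# Step 7: {c} is the intersection.
c = a & b
print(sorted(c))
```

With these choices, 6/5 and 5/3 land in {a} (they are somebody's nearest neighbor) but miss the 15-cent cutoff for {b}, so they drop out of {c}, which is exactly the filtering the intersection is for.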

I'm not entirely sure how to define a "soft cutoff" approach quite so formally, but it should be possible. Basically define not a finite set {j} but rather an infinite set {ji}, including all rational intervals. Then define an infinite number of sets of "mappings" from set {t} onto set {ji}, such that any number of members of {t} may be mapped to any number of members of {ji}. Then score each set of mappings according to three factors: a) the complexity of the intervals in {ji} onto which the intervals in {t} are mapped, b) the difference between the intervals in {t} and the intervals in {ji} onto which they are mapped, and c) the number of intervals in {t} included in the mapping. The relative weight given to each of these three criteria will be variable. These scores are a type of "badness of fit", and lower scores are better. Now, I'm not sure about this, but I suspect that there should be a finite number of mappings of {t} to {ji} whose badness scores are below a given cutoff, and that there should be at least one mapping with a lowest badness score. Is this correct?
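
A minimal sketch of such a badness score, assuming a Tenney-height (log2(n*d)) complexity measure and a simple linear weighting -- both placeholder choices, not a settled definition:

```python
import math
from fractions import Fraction

def badness(mapping, n_total, wc=1.0, we=1.0, wu=1.0):
    """Score one candidate mapping of tuning intervals (in cents) to
    JI ratios.  Lower is better.
      wc weights (a) complexity: log2(n*d) of each chosen ratio,
      we weights (b) mistuning: cents between each t and its ratio,
      wu weights (c) coverage: a penalty per unmapped interval of {t}.
    """
    complexity = sum(math.log2(r.numerator * r.denominator)
                     for r in mapping.values())
    error = sum(abs(t - 1200 * math.log2(r)) for t, r in mapping.items())
    unmapped = n_total - len(mapping)
    return wc * complexity + we * error + wu * unmapped

# Two candidate readings of the 12-EDO fifth and major third: reading
# 400 cents as 5/4 beats reading it as the far more complex 81/64.
simple = {700: Fraction(3, 2), 400: Fraction(5, 4)}
complex_ = {700: Fraction(3, 2), 400: Fraction(81, 64)}
print(badness(simple, 2), badness(complex_, 2))
```

With these default weights the 81/64 reading loses despite its smaller mistuning, because its complexity term dominates; shifting the weights can flip that, which is the point of leaving them variable.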

In any case, if there were some way to algorithmically formalize this, I can imagine a tool where you input a scala file, and it gives you the best ways to interpret the scale in terms of JI, depending on how much weight you gave to the three scoring criteria. I can also imagine linking this directly to the scala file outputs on Graham's temperament finder, such that it's possible to see how well the temperament mapping actually describes the scale, and what some "better" temperament mappings might be, based on the relative weights given to the scoring criteria.

-Igs

🔗genewardsmith <genewardsmith@...>

2/3/2012 12:30:13 PM

--- In tuning@yahoogroups.com, "cityoftheasleep" <igliashon@...> wrote:

> In any case, if there was some way to algorithmically formalize this, I can imagine a tool where you input a scala file, and it gives you the best ways to interpret the scale in terms of JI, depending on how much weight you gave to the three scoring criteria.

I've often wished for something simpler and more basic: a better way than "show locations" in Scala to find the approximate JI intervals of a scale.

🔗cityoftheasleep <igliashon@...>

2/3/2012 3:15:50 PM

For what it's worth, I think this is basically a formalization of what I do intuitively when I analyze a tuning, and also a generalization of what I've been trying to do with the subgroup ET approach. I just wish I had the math and programming skills to get this out of the conceptual stage. Does anyone else think this is a useful concept?

-Igs

🔗Mike Battaglia <battaglia01@...>

2/3/2012 11:24:45 PM

On Fri, Feb 3, 2012 at 1:12 PM, cityoftheasleep <igliashon@...> wrote:
>
> I'm leaving perception (and thus, reality)

LOL

> 1. Define a finite set of intervals {j}, such that all intervals satisfy some arbitrary delimitation of JI (i.e., some n-odd-limit, or some arbitrary log(n*d) threshold, or some other arbitrary way of defining JI as a finite number of ratios over a given pitch range).
> 2. Define another finite set of intervals {t}, such that all members are arbitrarily chosen.
> 3. Given these, it will be possible to describe any interval in {t} according to its proximity to any and all intervals in {j}.
> 4. Given 3, there will be at least one interval in {j} to which any interval in {t} is closest.
> 5. We can define a new set of intervals {a} composed of all the intervals in {j} to which intervals in {t} are closest.
> 6. Given 3, we can alternatively define a set {b} composed of all the intervals in {j} to which intervals in {t} are within some arbitrary bound of proximity.
> 7. Define a set {c} as the intersection of sets {a} and {b}.
>
> This is the "hard cutoff" approach to finding the "quantitatively best" interpretation of any tuning's approximation to JI, given some definition of JI.

Can you give an example of J and T and how your algorithm applies? If
J is the 9-odd-limit and T is the diatonic scale, do minor seconds get
tuned to 9/8 then?

-Mike

🔗cityoftheasleep <igliashon@...>

2/4/2012 10:51:58 AM

--- In tuning@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> Can you give an example of J and T and how your algorithm applies? If
> J is the 9-odd-limit and T is the diatonic scale, do minor seconds get
> tuned to 9/8 then?

Well, {c} is just the set of {j} that is approximated in {t}. I didn't bring tuning or mapping into that definition. Mapping would be pairing elements of {c} with elements of {t} in some way, probably "an element of {c} may be paired with an element of {t} IFF the element of {c} is closer to the element of {t} than it is to any other element of {t}, AND the element of {c} is within the proximity cut-off that defined {b} TO the element of {t}." This way, some elements of {t} may end up not being paired with elements of {c} (like the minor 2nd), but all the elements of {c} will be paired with at least one element of {t}. I dunno, I could probably refine that definition, but it's a starting-point.
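
That pairing rule translates almost directly into code; the function name and the 15-cent default cutoff are placeholders:

```python
def pair(c_cents, t_set, cutoff=15.0):
    """Pairing rule sketched above: each JI interval in {c} (given here
    in cents) is paired with the member of {t} it is closest to, and
    kept only if that member lies within the proximity cutoff that
    defined {b}."""
    pairs = {}
    for name, jc in c_cents.items():
        t_best = min(t_set, key=lambda t: abs(t - jc))
        if abs(t_best - jc) <= cutoff:
            pairs[name] = t_best
    return pairs

# 12-EDO example: 3/2 and 9/8 find partners; nothing in {c} claims the
# 100-cent minor 2nd, so it ends up unpaired, as intended.
T = [100 * k for k in range(1, 12)]
print(pair({"3/2": 701.955, "9/8": 203.910}, T))
```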

-Igs

🔗cityoftheasleep <igliashon@...>

2/5/2012 8:58:07 AM

So, over on XA, the problem of inconsistency came up. If we're going by the JI intervals closest to each interval of some arbitrary tuning, we might end up with inconsistent mappings where the two best 3/2's don't get the best 9/4 (for instance). This is because when we're doing the mapping in the way I described it, we are essentially treating every interval in our JI set as being its own "prime". This is clearly problematic, because 1) it forces us to choose a finite set of consonances, 2) it may lead to inconsistency, 3) it might make things confusing if we work with long chains of intervals. Bad bad bad, very naive on my part.
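
16-EDO gives a concrete instance of this inconsistency under a nearest-approximation mapping (the helper name is a placeholder):

```python
import math

def nearest_steps(ratio, edo):
    """Nearest EDO approximation of a JI ratio, in steps."""
    return round(edo * math.log2(ratio))

# In 16-EDO the best 3/2 is 9 steps (675 cents), but two of them make
# 18 steps, while the best 9/4 on its own is 19 steps: treating each
# JI interval as its own "prime" yields an inconsistent mapping.
edo = 16
fifth = nearest_steps(3 / 2, edo)
ninth = nearest_steps(9 / 4, edo)
print(fifth, 2 * fifth, ninth)  # 9 18 19
```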

Better idea, which is probably a lot harder to pull off in practice: check the scale against every possible regular temperament, including subgroups, within some arbitrary p-limit, and whose error does not exceed an arbitrary threshold, and score each temperament according to 1) how close the optimal tuning of the temperament is to the input scale, given the same interval of equivalence (specified on input, I'd imagine), and 2) how accurate the temperament itself is. These scores tell us the most accurate regular temperaments that can describe the scale input. We could also add some weighting for number of basis elements in the temperament, to reward "versatility".

Now, what would I have to do to automate this? I can't really expect anyone to do the hard work for me, but I have no idea where to begin or how hard it would actually be. I envision a web app like Graham's temperament finder some day, but until I figure out what I have to learn in order to create something like that, I can't even speculate on whether or not it's within the realm of possibility for me.

-Igs

🔗Mike Battaglia <battaglia01@...>

2/5/2012 9:42:19 AM

On Sun, Feb 5, 2012 at 11:58 AM, cityoftheasleep
<igliashon@...> wrote:
>
> So, over on XA, the problem of inconsistency came up. If we're going by the JI intervals closest to each interval of some arbitrary tuning, we might end up with inconsistent mappings where the two best 3/2's don't get the best 9/4 (for instance). This is because when we're doing the mapping in the way I described it, we are essentially treating every interval in our JI set as being its own "prime". This is clearly problematic, because 1) it forces us to choose a finite set of consonances, 2) it may lead to inconsistency, 3) it might make things confusing if we work with long chains of intervals. Bad bad bad, very naive on my part.
>
> Better idea, which is probably a lot harder to pull off in practice: check the scale against every possible regular temperament, including subgroups, within some arbitrary p-limit, and whose error does not exceed an arbitrary threshold, and score each temperament according to 1) how close the optimal tuning of the temperament is to the input scale, given the same interval of equivalence (specified on input, I'd imagine), and 2) how accurate the temperament itself is. These scores tell us the most accurate regular temperaments that can describe the scale input. We could also add some weighting for number of basis elements in the temperament, to award "versatility".

A good way to mix the two approaches is to allow for a select few
intervals to be inconsistent by simply mapping them twice. 9/8 is the
obvious choice here, because there are a lot of EDOs with a good 9'/8
which isn't the same as 3/2 * 3/2 / 2/1.

You could also try starting out with the 15-integer-limit diamond as
the set of starting consonances, and treat every single element in
that as its own basis. To simplify, you might make it so that powers
of 2 are always consistent, so that you end up instead using the union
of the 15-odd-limit tonality diamond and 2/1 as your basis - so that
you get 2.3.5.5/3'.7.7/3'.7/5'.9'.9'/3.etc as a basis. Then, for most
useful temperaments with halfway decent error, all of these
inconsistencies should vanish - which means you'll get "commas" like
9'/9 vanishing and so on, and it'll all work itself out.

To give yourself a good, intuitive understanding of what I mean - go
to 12, 19, 22, or 31-EDO and try mapping 9' separately from 3*3. OK,
big deal - look what happens: 9' and 9 end up being the same thing. In
fact, if you do the thing I said above, all of these duplicate,
faux-prime commas within the 11-odd-limit will end up vanishing,
because 31-EDO is consistent in the 11-odd-limit. You'll get a few
stragglers for the 13- and 15-odd-limits though. For temperaments like
16-EDO, you'll get a few more remaining; in this case, 9'/9 is mapped
to 1\16, not 0\16 if you use the patent val.
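
The 9'-vs-9 comparison is quick to verify numerically; `patent` is an assumed helper name for the patent-val rounding:

```python
import math

def patent(ratio, edo):
    """Patent-val mapping: the nearest EDO approximation, in steps."""
    return round(edo * math.log2(ratio))

# Steps of the faux-prime comma 9'/9: direct 9 minus two stacked 3s.
# 12, 19, 22, and 31 give 0 steps (the comma vanishes); 16-EDO gives
# 1 step, i.e. 9'/9 maps to 1\16, as described above.
for edo in (12, 19, 22, 31, 16):
    print(edo, patent(9, edo) - 2 * patent(3, edo))
```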

So you're still using regular mappings, just treating inconsistent
intervals as new regularly-mapped prime intervals.

> Now, what would I have to do to automate this? I can't really expect anyone to do the hard work for me, but I have no idea where to begin or how hard it would actually be. I envision a web app like Graham's temperament finder some day, but until I figure out what I have to learn in order to create something like that, I can't even speculate on whether or not it's within the realm of possibility for me.

Graham's finder does this sort of thing for whatever limit that you
put in, especially the temperament search part of it. He's written
stuff on how his algorithm works. If you just want to round things off
to the nearest match, you can do that as well. It's kind of like the
"patent val" vs "best val" approach, generalized to higher-dimensional
temperaments, and mapping things from this particular inconsistent-JI
space onto it.

-Mike

🔗cityoftheasleep <igliashon@...>

2/5/2012 11:33:43 AM

--- In tuning@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> A good way to mix the two approaches is to allow for a select few
> intervals to be inconsistent by simply mapping them twice. 9/8 is the
> obvious choice here, because there are a lot of EDOs with a good 9'/8
> which isn't the same as 3/2 * 3/2 / 2/1.

Well, if we're going to do this anyway, why is the inconsistency a problem in the first place? Or maybe I should ask, under what circumstances does inconsistency become problematic?

> You could also try starting out with the 15-integer-limit diamond as
> the set of starting consonances, and treat every single element in
> that as its own basis. To simplify, you might make it so that powers
> of 2 are always consistent, so that you end up instead using the union
> of the 15-odd-limit tonality diamond and 2/1 as your basis - so that
> you get 2.3.5.5/3'.7.7/3'.7/5'.9'.9'/3.etc as a basis. Then, for most
> useful temperaments with halfway decent error, all of these
> inconsistencies should vanish - which means you'll get "commas" like
> 9'/9 vanishing and so on, and it'll all work itself out.

Right, that's kind of what I'm thinking--error should control inconsistency, and I suspect that the best mapping for a given scale should end up being consistent...but I can't prove that yet.

> Graham's finder does this sort of thing for whatever limit that you
> put in, especially the temperament search part of it.

Huh? The problem, as I see it, is that his app always includes all the basis elements of the limit, whether or not removing some of them will lower the error. If you could just, say, plug in an ET (or some combination of ETs) and the 19-limit, and have the app sort through all the subgroups of the 19-limit and spit out the ones on which the resulting temperament has the lowest errors, we'd be in business. But I'm not going to ask/beg/demand that Graham make any modifications when he's already done so much work. If he thinks it'd be a worthwhile feature to add, I'd be ecstatic, but I'm not expecting it to happen.

> He's written
> stuff on how his algorithm works. If you just want to round things off
> to the nearest match, you can do that as well. It's kind of like the
> "patent val" vs "best val" approach, generalized to higher-dimensional
> temperaments, and mapping things from this particular inconsistent-JI
> space onto it.

What language did he program in? Maybe if I can learn the language, and get a bit of help with the math, I can take his code and modify it...but realistically, how much time and effort will that take me?

-Igs

🔗Mike Battaglia <battaglia01@...>

2/5/2012 11:46:34 AM

On Sun, Feb 5, 2012 at 2:33 PM, cityoftheasleep <igliashon@...> wrote:
>
> --- In tuning@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> > A good way to mix the two approaches is to allow for a select few
> > intervals to be inconsistent by simply mapping them twice. 9/8 is the
> > obvious choice here, because there are a lot of EDOs with a good 9'/8
> > which isn't the same as 3/2 * 3/2 / 2/1.
>
> Well, if we're going to do this anyway, why is the inconsistency a problem in the first place? Or maybe I should ask, under what circumstances does inconsistency become problematic?

It wasn't a problem; it was just something I wanted you to be aware
of. It seemed like you were envisioning it as though you could just
jump from this sort of thing to a regular temperament without dealing
with multiple mappings for the same ratio. I was just pointing out
that if you didn't do something like what I suggested above, you'd run
into problems, where although intervals end up making sense locally,
once you regularly map everything you get 1-EDO or something stupid
like that.

> > You could also try starting out with the 15-integer-limit diamond as
> > the set of starting consonances, and treat every single element in
> > that as its own basis. To simplify, you might make it so that powers
> > of 2 are always consistent, so that you end up instead using the union
> > of the 15-odd-limit tonality diamond and 2/1 as your basis - so that
> > you get 2.3.5.5/3'.7.7/3'.7/5'.9'.9'/3.etc as a basis. Then, for most
> > useful temperaments with halfway decent error, all of these
> > inconsistencies should vanish - which means you'll get "commas" like
> > 9'/9 vanishing and so on, and it'll all work itself out.
>
> Right, that's kind of what I'm thinking--error should control inconsistency, and I suspect that the best mapping for a given scale should end up being consistent...but I can't prove that yet.

I don't think that's right. What's the best mapping for 16-EDO? You
can not map 9'/8 at all, or you can map 9'/8 to 225 cents. Which one
is better, to ignore that interval or to have it?

> > Graham's finder does this sort of thing for whatever limit that you
> > put in, especially the temperament search part of it.
>
> Huh? The problem, as I see it, is that his app always includes all the basis elements of the limit, whether or not removing some of them will lower the error. If you could just, say, plug in an ET (or some combination of ETs) and the 19-limit, and have the app sort through all the subgroups of the 19-limit and spit out the ones on which the resulting temperament has the lowest errors, we'd be in business.

http://x31eq.com/cgi-bin/rt.cgi?ets=12&limit=19

There's all the 19-limit matches for 12-EDO, for instance. I tried to
put this limit in

http://x31eq.com/cgi-bin/rt.cgi?ets=12&limit=2.3.5.5/3.7.7/3.7/5.9.9/5.9/7.11.11/3.11/5.11/7.11/9.13.13/3.13/5.13/7.13/9.13/11

but it crashes. In fact, it crashes even for this limit, which is much smaller

http://x31eq.com/cgi-bin/rt.cgi?ets=12&limit=2.3.5.5/3.7.7/3.7/5.9.9/5.9/7

But if you change the 9/7 to an 11/7, it works again...?

http://x31eq.com/cgi-bin/rt.cgi?ets=12&limit=2.3.5.5/3.7.7/3.7/5.9.9/5.11/7

Not sure why, exactly.

-Mike

🔗gbreed@...

2/5/2012 11:46:14 AM

If a mapping is consistent it will be the best.

Yes, you've suggested an interesting feature but I probably won't have the time and inclination to implement it. The code's in python.

Graham

🔗Mike Battaglia <battaglia01@...>

2/5/2012 11:51:39 AM

On Sun, Feb 5, 2012 at 2:46 PM, gbreed@... <gbreed@...> wrote:
>
> If a mapping is consistent it will be the best.
>
> Yes, you've suggested an interesting feature but I probably won't have the time and inclination to implement it. The code's in python.

It'd already be mostly implemented, leaving a minimum of additional
work to the user, if not for the weird bug from my last post. I found
an even simpler version too - this crashes

http://x31eq.com/cgi-bin/rt.cgi?ets=12&limit=2.3.3.3.3.3.3.3.3.9/7

this works though

http://x31eq.com/cgi-bin/rt.cgi?ets=12&limit=2.3.3.3.3.3.3.3.3.9/5

and this works too, the original with one less 3

http://x31eq.com/cgi-bin/rt.cgi?ets=12&limit=2.3.3.3.3.3.3.3.9/7

These mappings themselves are pointless, but they do demonstrate the
same behavior that caused the crash for larger inconsistent
temperaments.

-Mike

🔗genewardsmith <genewardsmith@...>

2/5/2012 12:28:03 PM

--- In tuning@yahoogroups.com, "cityoftheasleep" <igliashon@...> wrote:

> What language did he program in?

Python, but it sometimes calls Pari routines. Or so I understand.

🔗cityoftheasleep <igliashon@...>

2/5/2012 1:51:34 PM

--- In tuning@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> I don't think that's right. What's the best mapping for 16-EDO? You
> can not map 9'/8 at all, or you can map 9'/8 to 225 cents. Which one
> is better, to ignore that interval or to have it?

Well, that's an over-simplification. We'd have to introduce proper error weighting that accounts for the number of "primes" that are mapped, and the complexity of primes being mapped. The "best" mapping would be the lowest error with the most and simplest "primes". I think we'd actually have to be able to run the calculation to know the answer to your question. But, I suspect 16 would be best mapped as something like a 2.5.7.11.13.19 temperament, rather than as a 2.3.5.7.9'.11 temperament. Especially with Tenney weighting. And to my knowledge, 16's consistent on the former.
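
A quick patent-val error check lends some support to that hunch, though it doesn't settle the weighting question; `patent_error` is an assumed helper:

```python
import math

def patent_error(prime, edo):
    """Signed patent-val error of a prime in an EDO, in cents."""
    just = 1200 * math.log2(prime)
    return round(edo * math.log2(prime)) * (1200 / edo) - just

# In 16-EDO, prime 3 carries the largest error of these (about -27
# cents, with 11 close behind), which is why a subgroup that skips 3
# looks attractive there.
for p in (3, 5, 7, 11, 13, 19):
    print(p, round(patent_error(p, 16), 1))
```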

> http://x31eq.com/cgi-bin/rt.cgi?ets=12&limit=19
>
> There's all the 19-limit matches for 12-EDO, for instance.

Demonstrating what?

-Igs

🔗cityoftheasleep <igliashon@...>

2/5/2012 1:56:10 PM

I found this:
http://www.greenteapress.com/thinkpython/thinkCSpy/

Is this a good way for a complete novice to start learning python?

-Igs

🔗genewardsmith <genewardsmith@...>

2/5/2012 2:01:09 PM

--- In tuning@yahoogroups.com, "cityoftheasleep" <igliashon@...> wrote:
>
> I found this:
> http://www.greenteapress.com/thinkpython/thinkCSpy/
>
> Is this a good way for a complete novice to start learning python?

Hellifiknow, but there is stuff out there like this:

http://wiki.python.org/moin/BeginnersGuide

🔗Carl Lumma <carl@...>

2/5/2012 5:05:12 PM

It's probably better to use these websites:

http://www.learnpython.org
http://www.trypython.org
http://facts.learnpython.org

-Carl
