
More TOP/prime RMS code

Graham Breed <gbreed@gmail.com>

11/27/2005 4:25:32 PM

I've got an algebraic solution to the rank 2 TOP now, using PuLP and
GLPK. First, though, here's the updated code for equal (rank 1)
temperaments. Usual caveats about indentation in Yahoo Groups apply.

# These are methods of the equal temperament class in temper.py;
# sqrt comes from the math module (from math import sqrt).

def getPORMSWE(self):
    """Return the prime, optimum, RMS, weighted error.

    This is the RMS of the prime intervals where octave stretching
    is allowed, with each prime interval weighted according to its size.
    """
    avgStretches, avgSquares = self.getPrimeStretching()
    return sqrt(1.0 - (avgStretches**2 / avgSquares))

def getPORMSWEStep(self):
    """Return the stretched step size
    for the prime, optimum, RMS, weighted error.
    """
    return self.getPORMSWEStretch() / self.basis[0]

def getPORMSWEStretch(self):
    """Return the stretch for the prime, optimum, RMS, weighted error."""
    avgStretches, avgSquares = self.getPrimeStretching()
    return avgStretches / avgSquares

def getPrimeStretching(self):
    """Used by getPORMSWE() and getPORMSWEStretch().
    Not likely to be much use on its own.
    """
    sumStretches = sumSquares = 0.0
    for stretch in self.weightedPrimes():
        sumStretches = sumStretches + stretch
        sumSquares = sumSquares + stretch**2
    return sumStretches/len(self.basis), sumSquares/len(self.basis)

def getTOPError(self, stretch, wPrimes=None):
    """TOP error for a given octave stretch (non-optimal)"""
    worst = 0.0
    for w in wPrimes or self.weightedPrimes():
        w = w*stretch
        if abs(1-w) > worst:
            worst = abs(1-w)
    return worst

def getTOP(self):
    """Return the TOP error and the optimum stretch"""
    best, bestStretch = 1e50, 1.0
    wPrimes = self.weightedPrimes()
    for prime1 in wPrimes:
        for prime2 in wPrimes:
            stretch = prime1/prime2
            error = self.getTOPError(stretch, wPrimes)
            if error < best:
                best = error
                bestStretch = stretch
    return best, bestStretch

def weightedPrimes(self):
    """Used for calculating and optimizing weighted prime errors"""
    result = [1.0]
    for i in range(1, len(self.basis)):
        result.append(self.basis[i]/self.primes[i-1]/self.basis[0])
    return result
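
To spell out the algebra behind those formulas (this isn't in the code,
just the reasoning): write w for the weighted primes. The mean squared
error for a stretch s is

mean((s*w - 1)**2) = s**2 * avgSquares - 2*s*avgStretches + 1

which is minimized at s = avgStretches/avgSquares, the value
getPORMSWEStretch() returns. Substituting that back in leaves a
residual of 1 - avgStretches**2/avgSquares, and its square root is
what getPORMSWE() returns.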

Now here's the rank 2 code:

# Methods of the rank 2 temperament class; as above, sqrt comes
# from the math module.

def optimizePORMSWE(self):
    """Set the generators for the prime, optimum, RMS, weighted error"""
    sx0 = sx1 = sx02 = sx12 = sx01 = 0.0

    primes = [1.0]+self.primes
    for i in range(len(primes)):
        m = self.mapping[i]
        x0 = m[0]/primes[i]
        x1 = m[1]/primes[i]
        sx0 = sx0 + x0
        sx1 = sx1 + x1
        sx02 = sx02 + x0**2
        sx12 = sx12 + x1**2
        sx01 = sx01 + x0*x1

    denom = sx02*sx12 - sx01**2
    self.basis = ((sx0*sx12 - sx1*sx01)/denom,
                  (sx1*sx02 - sx0*sx01)/denom)

def getPRMSWError(self):
    """Get the prime, RMS, weighted error"""
    primes = [1.0]+self.primes
    total = 0.0
    for i in range(len(primes)):
        m = self.mapping[i]
        error = (self.basis[0]*m[0] + self.basis[1]*m[1])/primes[i] - 1
        total = total + error*error
    return sqrt(total/len(primes))

def optimizeTOP(self):
    """Set the TOP generators

    Requires PuLP and GLPK
    """
    import pulp
    prob = pulp.LpProblem("top", pulp.LpMinimize)

    # set three variables: the generators and the thing to minimize
    # all of them have to be positive
    period = pulp.LpVariable("period", 0, None)
    generator = pulp.LpVariable("generator", 0, None)
    error = pulp.LpVariable("error", 0, None)

    # specify that it's the error we want to minimize
    prob.__iadd__((error, "obj"))
    # uses __iadd__() instead of += for syntax compatibility
    # with Python 1.5.2

    # now set the errors of the temperament as constraints
    primes = [1.0]+self.primes
    for i in range(len(primes)):
        weightedPrime = (period*self.mapping[i][0] +
                         generator*self.mapping[i][1])/primes[i]
        # set two error constraints for an overall absolute error
        prob.__iadd__(error >= weightedPrime - 1)
        prob.__iadd__(error >= 1 - weightedPrime)

    prob.solve(pulp.GLPK(msg=0))

    self.basis = period.varValue, generator.varValue

def getTOPError(self):
    """Get the TOP (not necessarily optimum) error"""
    primes = [1.0]+self.primes
    worst = 0.0   # renamed from "max" so the builtin isn't shadowed
    for i in range(len(primes)):
        m = self.mapping[i]
        error = (self.basis[0]*m[0] + self.basis[1]*m[1])/primes[i] - 1
        if abs(error) > worst:
            worst = abs(error)
    return worst
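
For reference, optimizePORMSWE() is solving the 2x2 least-squares
normal equations (again, not part of the posted code, just the algebra
behind it). Writing x0 = m[0]/prime and x1 = m[1]/prime for each
prime, minimizing

sum((a*x0 + b*x1 - 1)**2)

over the generators (a, b) requires

a*sx02 + b*sx01 = sx0
a*sx01 + b*sx12 = sx1

and Cramer's rule on that pair gives exactly the two expressions
assigned to self.basis.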

It happens that the optimization code has the same number of lines for
both TOP and prime RMS. The RMS doesn't return the optimal error as
a side effect, but then it doesn't rely on thousands of lines of
library code either. I've benchmarked the TOP optimization at a
shocking three -- count 'em -- three orders of magnitude slower than
the prime RMS. Here are the results of that.

5-limit ET RMS in 1.0 ms
5-limit ET TOP in 3.4 ms
5-limit R2 RMS in 14.5 ms
5-limit R2 TOP in 26.5 s
7-limit ET RMS in 1.1 ms
7-limit ET TOP in 6.6 ms
7-limit R2 RMS in 17.4 ms
7-limit R2 TOP in 35.8 s
11-limit ET RMS in 1.3 ms
11-limit ET TOP in 11.4 ms
11-limit R2 RMS in 20.4 ms
11-limit R2 TOP in 44.9 s
13-limit ET RMS in 1.4 ms
13-limit ET TOP in 18.1 ms
13-limit R2 RMS in 23.2 ms
13-limit R2 TOP in 54.2 s
17-limit ET RMS in 1.6 ms
17-limit ET TOP in 26.9 ms
17-limit R2 RMS in 26.1 ms
17-limit R2 TOP in 63.7 s
19-limit ET RMS in 1.8 ms
19-limit ET TOP in 38.4 ms
19-limit R2 RMS in 29.2 ms
19-limit R2 TOP in 73.6 s

I don't know what's going wrong, seeing that GLPK uses the simplex
algorithm, which everybody says is fast. Still, slow it is. Every time
I interrupted it, it was in the C-coded optimization, so that's where
the time must be going. I ran the test using 30 equal temperaments and
all the rank 2 temperaments generated from them. Here's the full test
code:

import temper, time

limits = 1, 3, 5, 7, 11, 13, 17, 19
for d in range(2, 8):
    ets = [temper.PrimeET(n, temper.primes[:d]) for n in range(30, 60)]
    # the 1000x loops print their total time in seconds, which reads
    # directly as milliseconds per single pass
    timestamp = time.time()
    for n in range(1000):
        for et in ets:
            et.getPORMSWE()
    print "%2i-limit ET RMS in %5.1f ms" % (limits[d], time.time()-timestamp)
    timestamp = time.time()
    for n in range(1000):
        for et in ets:
            et.getTOP()
    print "%2i-limit ET TOP in %5.1f ms" % (limits[d], time.time()-timestamp)

    # now for rank 2 temperaments (linear temperaments and their kin)
    temper.Temperament(7, 5, temper.limit5).optimizeTOP()  # initialize PuLP
    r2s = []
    for i in range(len(ets)-1):
        et1 = ets[i]
        for j in range(i+1, len(ets)):
            et2 = ets[j]
            try:
                r2 = et1 & et2
            except temper.TemperamentException:
                continue
            r2s.append(r2)
    timestamp = time.time()
    for n in range(1000):
        for r2 in r2s:
            r2.optimizePORMSWE()
    print "%2i-limit R2 RMS in %5.1f ms" % (limits[d], time.time()-timestamp)
    timestamp = time.time()
    for r2 in r2s:
        r2.optimizeTOP()    # only one pass: this is the slow one
    print "%2i-limit R2 TOP in %5.1f s" % (limits[d], time.time()-timestamp)

Graham

Paul Erlich <perlich@aya.yale.edu>

11/28/2005 4:47:59 PM

Here's my 2-line MATLAB code for calculating the TOP tuning for ETs
in the 11-limit:

%r contains the 'val' . . .

tmp=r./[1 log(3)/log(2) log(5)/log(2) log(7)/log(2) log(11)/log(2)];

tmp=r/((min(tmp)+max(tmp))/2);

%tmp contains the TOP tuning of the primes.

And the (minimized) damage is given by:

err(j)=1200*max((tmp-[1 log(3)/log(2) log(5)/log(2) log(7)/log(2) log(11)/log(2)])./[1 log(3)/log(2) log(5)/log(2) log(7)/log(2) log(11)/log(2)]);

The symbol "./" means element-by-element division.

I have to run now, unfortunately!
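
For readers without MATLAB, the same recipe in Python looks something
like this (a sketch only; top_et and its argument names are made up
for illustration and aren't from temper.py):

from math import log

def top_et(val, primes=(2, 3, 5, 7, 11)):
    logs = [log(p, 2) for p in primes]      # prime sizes in octaves
    w = [v/l for v, l in zip(val, logs)]    # the weighted primes
    scale = 2/(min(w) + max(w))             # the optimum stretch
    tuning = [v*scale for v in val]         # TOP tuning of the primes
    damage = 1200*max((t - l)/l for t, l in zip(tuning, logs))
    return tuning, damage                   # damage in cents

# e.g. top_et([22, 35, 51, 62, 76]) for 22-equal in the 11-limit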


Graham Breed <gbreed@gmail.com>

11/29/2005 7:49:50 PM

On 11/29/05, Paul Erlich <perlich@aya.yale.edu> wrote:
> Here's my 2-line MATLAB code for calculating the TOP tuning for ETs
> in the 11-limit:

This is good! Is this in your paper? It must have been so simple I
skipped over it. It certainly isn't what Gene had on his website.
The advantages over my algorithm are numerous: it's shorter, faster,
and gives the right answer!

> %r contains the 'val' . . .
>
> tmp=r./[1 log(3)/log(2) log(5)/log(2) log(7)/log(2) log(11)/log(2)];

Looks like this is the list of weighted primes, which I call w.

w=r./[1 log(3)/log(2) log(5)/log(2) log(7)/log(2) log(11)/log(2)];

> tmp=r/((min(tmp)+max(tmp))/2);
>
> %tmp contains the TOP tuning of the primes.
>
> And the (minimized) damage is given by:
>
> err(j)=1200*max((tmp-[1 log(3)/log(2) log(5)/log(2) log(7)/log(2) log(11)/log(2)])./[1 log(3)/log(2) log(5)/log(2) log(7)/log(2) log(11)/log(2)]);

That can be simplified to

err = (max(w)-min(w))/(max(w)+min(w));

I don't know what the j is for or why you multiply by 1200.

This is good, because it's an octave-equivalent formulation of the
error: it depends only on the weighted primes, whatever the octave
stretch. It means you can calculate the damage with only one pass over
the primes, and it may help reduce the rank 2 temperament problem to a
one-dimensional optimization (assuming you don't have a simple
solution to that as well).

It's similar to the RMS result in that it's a deviation divided by an
average. And it can be approximated as

err = (max(w) - min(w))/2;

for most purposes, because the average weighted prime will be close to
1. That means the error function for rank 2 temperaments is piecewise
linear and so may be easier to optimize. It may be possible to add
this constraint to the linear programming problem.
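
To make that one-dimensional reduction concrete, here's a rough sketch
(none of this is in temper.py; the names and the crude grid search are
only for illustration -- a real optimizer would exploit the piecewise
structure):

def r2_top_scan(mapping, primes, steps=10000):
    # mapping[i] = (periods, generators) reaching the i'th prime;
    # primes[i] = that prime's size in octaves.
    # Scaling period and generator together scales every weighted
    # prime equally, and the error formula is invariant under that,
    # so only the ratio g = generator/period matters.
    best = None
    for k in range(1, steps):
        g = k/float(steps)
        w = [(m0 + g*m1)/p for (m0, m1), p in zip(mapping, primes)]
        if min(w) <= 0:
            continue    # not a meaningful tuning for this generator
        err = (max(w) - min(w))/(max(w) + min(w))
        if best is None or err < best[0]:
            best = err, g
    return best    # (octave-equivalent TOP error, generator/period)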

That means the TOP optimization for equal temperaments really is
simpler and faster than the RMS now, so I'm impressed. But it looks
like this is another special case, like the optimization of one comma,
and the general regular temperament problem is still much more
difficult.

The Python code is updated at:

http://microtonal.co.uk/temper.py

Graham

Paul Erlich <perlich@aya.yale.edu>

11/30/2005 3:20:47 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 11/29/05, Paul Erlich <perlich@a...> wrote:
> > Here's my 2-line MATLAB code for calculating the TOP tuning for ETs
> > in the 11-limit:
>
> This is good! Is this in your paper?

See footnote xxx.

> It must have been so simple I
> skipped over it. It certainly isn't what Gene had on his website.

I hope that Gene feels better and then reconsiders this point, which
I've made before.

> The advantages over my algorithm are numerous: it's shorter, faster,
> and gives the right answer!
>
> > %r contains the 'val' . . .
> >
> > tmp=r./[1 log(3)/log(2) log(5)/log(2) log(7)/log(2) log(11)/log(2)];
>
> Looks like this is the list of weighted primes, which I call w.
>
> w=r./[1 log(3)/log(2) log(5)/log(2) log(7)/log(2) log(11)/log(2)];
>
> > tmp=r/((min(tmp)+max(tmp))/2);
> >
> > %tmp contains the TOP tuning of the primes.
> >
> > And the (minimized) damage is given by:
> >
> > err(j)=1200*max((tmp-[1 log(3)/log(2) log(5)/log(2) log(7)/log(2) log(11)/log(2)])./[1 log(3)/log(2) log(5)/log(2) log(7)/log(2) log(11)/log(2)]);
>
> That can be simplified to
>
> err = (max(w)-min(w))/(max(w)+min(w));

Huh! How do you derive that?

> I don't know what the j is

Whoops -- I actually deleted stuff from the code I use, and forgot to
get rid of the j.

> for or why you multiply by 1200.

Dave Keenan's units.

> This is good, because it's an octave-equivalent formulation of the
> error: the error depends on the weighted primes regardless of the
> octave stretch. It means you can calculate the damage with only one
> iteration over the primes, and may be helpful in reducing the rank 2
> temperament problem to a one dimensional optimization (assuming you
> don't have a simple solution to that as well).

Haven't thought about that yet . . .

> It's similar to the RMS result in that it's a deviation divided by an
> average. And it can be approximated as
>
> err = (max(w) - min(w))/2;
>
> for most purposes, because the average weighted prime will be close to
> 1. That means the error function for rank 2 temperaments is piecewise
> linear

That seemed obvious to me before. But how does this reasoning allow
you to come to that conclusion about rank 2 temperaments? I don't
follow your jump.


Graham Breed <gbreed@gmail.com>

11/30/2005 6:47:21 PM

On 12/1/05, Paul Erlich <perlich@aya.yale.edu> wrote:

> > err = (max(w)-min(w))/(max(w)+min(w));
>
> Huh! How do you derive that?

Oh Lordy -- I worked it out in my head one morning. Let's see ... the
2/(max(w)+min(w)) formula means you divide by the average of the largest
and smallest weighted prime. The result is that both errors are equal
(hence the minimax).

The largest weighted prime is 2*max(w)/(max(w) + min(w))

The error in it is 2*max(w)/(max(w) + min(w)) - 1
and this is one of the maximum errors.

That gives (2*max(w)-max(w)-min(w))/(max(w)+min(w))
or (max(w)-min(w))/(max(w)+min(w))

The smallest weighted prime gives the same error with the opposite
sign, which is why the two extremes balance and no other scaling can
do better.
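
A quick numeric check of that identity (throwaway code; any weighted
primes will do):

w = [1.0, 0.99, 1.02]           # made-up weighted primes
scale = 2/(max(w) + min(w))     # the optimum stretch
errors = [x*scale - 1 for x in w]
# max(errors) and -min(errors) both come out equal to
# (max(w)-min(w))/(max(w)+min(w)), up to rounding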

> > It's similar to the RMS result in that it's a deviation divided by an
> > average. And it can be approximated as
> >
> > err = (max(w) - min(w))/2;
> >
> > for most purposes, because the average weighted prime will be close to
> > 1. That means the error function for rank 2 temperaments is piecewise
> > linear
>
> That seemed obvious to me before. But how does this reasoning allow
> you to come to that conclusion about rank 2 temperaments? I don't
> follow your jump.

Any regular temperament can be written in terms of weighted primes,
and the relationship still holds.

Graham

Paul Erlich <perlich@aya.yale.edu>

12/1/2005 1:20:42 PM

--- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:
>
> On 12/1/05, Paul Erlich <perlich@a...> wrote:
>
> > > err = (max(w)-min(w))/(max(w)+min(w));
> >
> > Huh! How do you derive that?
>
> Oh Lordy -- I worked it out in my head one morning. Let's see ... the
> 2/(max(w)+min(w)) formula means you divide by the average of the largest
> and smallest weighted prime.
> The result is that both errors are equal
> (hence the minimax).
> The largest weighted prime is 2*max(w)/(max(w) + min(w))

After the scaling.

> The error in it is 2*max(w)/(max(w) + min(w)) - 1
> and this is one of the maximum errors.
>
> That gives (2*max(w)-max(w)-min(w))/(max(w)+min(w))
> or (max(w)-min(w))/(max(w)+min(w))

Wow.

> > That seemed obvious to me before. But how does this reasoning allow
> > you to come to that conclusion about rank 2 temperaments? I don't
> > follow your jump.
>
> Any regular temperament can be written in terms of weighted primes,

What exactly does that mean?

> and the relationship still holds.
>
>
> Graham

Paul G Hjelmstad <paul_hjelmstad@allianzlife.com>

12/1/2005 3:08:24 PM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
>
> > > And the (minimized) damage is given by:
> > >
> > > err(j)=1200*max((tmp-[1 log(3)/log(2) log(5)/log(2) log(7)/log(2) log(11)/log(2)])./[1 log(3)/log(2) log(5)/log(2) log(7)/log(2) log(11)/log(2)]);
> >
> > That can be simplified to
> >
> > err = (max(w)-min(w))/(max(w)+min(w));

Maybe I have no place butting in here, but I get 1408.7 for err(j)
and 0.37159 for err. This is using Octave. Just for fun, what are
your values, and I will enjoy reverse engineering your formulas.

Paul Hj

Paul Erlich <perlich@aya.yale.edu>

12/1/2005 3:16:18 PM

--- In tuning-math@yahoogroups.com, "Paul G Hjelmstad" <paul_hjelmstad@a...> wrote:
>
> Maybe I have no place butting in here, but I get 1408.7 for err(j)

The (j) part should have been deleted, as we discussed.

> and 0.37159 for err.

Which tuning did you plug in?

> This is using Octave. Just for fun, what are
> your values,

For which tuning?


Paul G Hjelmstad <paul_hjelmstad@allianzlife.com>

12/2/2005 8:27:40 AM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
>
> > Maybe I have no place butting in here, but I get 1408.7 for err(j)
>
> The (j) part should have been deleted, as we discussed.

So you're not using the err(j) formula at all?

> > and 0.37159 for err.
>
> Which tuning did you plug in?
>
> > This is using Octave. Just for fun, what are
> > your values,
>
> For which tuning?

Didn't really know what I was doing so I put r=5. What would
be a real ordinary value for r?

Paul Hj

Paul Erlich <perlich@aya.yale.edu>

12/2/2005 2:49:38 PM

--- In tuning-math@yahoogroups.com, "Paul G Hjelmstad" <paul_hjelmstad@a...> wrote:
>
> > > Maybe I have no place butting in here, but I get 1408.7 for err(j)
> >
> > The (j) part should have been deleted, as we discussed.
>
> So you're not using the err(j) formula at all?

I'm using it without the (j).

> > > and 0.37159 for err.
> >
> > Which tuning did you plug in?
> >
> > > This is using Octave. Just for fun, what are
> > > your values,
> >
> > For which tuning?
>
> Didn't really know what I was doing so I put r=5. What would
> be a real ordinary value for r?

r has to be a vector with five entries, representing the mapping of
the first five primes to steps in any ET, such as [22 35 51 62 76]. I
have no idea how Octave would or could do element-by-element division
of a scalar by a vector!

Graham Breed <gbreed@gmail.com>

12/3/2005 2:34:45 AM

On 12/2/05, Paul Erlich <perlich@aya.yale.edu> wrote:
> --- In tuning-math@yahoogroups.com, Graham Breed <gbreed@g...> wrote:

> > Any regular temperament can be written in terms of weighted primes,
>
> What exactly does that mean?

For a regular tuning, the size of each prime interval (a prime-number
ratio in JI) is always the same, and you can work out all other
intervals from the primes. (At least, if the temperament's defined by
primes; let's leave special cases aside for now.) The weighted primes
list is the prime sizes (in cents or octaves or whatever) scaled by the
weighting. With Tenney weighting, that means each prime in JI has a
weighted size of 1 (if the size and weight are in the same units). For
temperaments, the nearer to 1 the better. The formulae for equal
temperaments don't depend on the equal steps, only on the sizes of the
prime intervals and the weighting.

For a rank 2 (formerly linear) temperament, the weighted primes depend
on the generator. Any given octave-equivalent generator size gives a
list of weighted primes that you can apply the formulae to.
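
As a concrete example (a sketch with made-up names; the mapping below
is the standard 5-limit meantone mapping, octave period and fifth
generator):

from math import log

mapping = [(1, 0), (1, 1), (0, 4)]      # primes 2, 3, 5 as (periods, generators)
logs = [log(p, 2) for p in (2, 3, 5)]   # prime sizes in octaves

def weighted_primes(period, generator):
    return [(m0*period + m1*generator)/l
            for (m0, m1), l in zip(mapping, logs)]

w = weighted_primes(1.0, 696.6/1200)    # a near-optimal meantone fifth
err = (max(w) - min(w))/(max(w) + min(w))
# 1200*err comes out around 1.7 cents, the familiar TOP meantone damage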

Graham

Paul G Hjelmstad <paul_hjelmstad@allianzlife.com>

12/5/2005 9:55:01 AM

--- In tuning-math@yahoogroups.com, "Paul Erlich" <perlich@a...> wrote:
>
> r has to be a vector with five entries, representing the mapping of
> the first five primes to steps in any ET, such as [22 35 51 62 76]. I
> have no idea how Octave would or could do element-by-element division
> of a scalar by a vector!

Matlab did it too. But my trial ran out 11/30/05. It must have
treated r like 5 5 5 5 5!

wallyesterpaulrus <perlich@aya.yale.edu>

12/6/2005 1:25:28 PM

--- In tuning-math@yahoogroups.com, "Paul G Hjelmstad" <paul_hjelmstad@a...> wrote:
>
> Matlab did it too. But my trial ran out 11/30/05. It must have
> treated r like 5 5 5 5 5!

Yes, it did -- Octave and MATLAB expand a scalar to match a vector in
element-by-element operations, so r=5 behaved like [5 5 5 5 5].