
Relative error theorems

genewardsmith@juno.com

10/19/2001 7:02:54 PM

Relative Error Theorem:

Let l be an odd number, h a val, and f a note of the p-limit, where
p is the largest prime less than or equal to l. Define the error in
relative cents of the val h for the note f to be
e(h, f) = 100*(h(f) - h(2) log2(f)). Define the badness measures
e_inf(h, l) = max |e(h, r)|, e_2(h, l) = sqrt(sum e(h, r)^2), and
e_1(h, l) = sum |e(h, r)|, where in each case we take r to range
over the rational numbers n/d, reduced to lowest terms, with
1 < n/d and 1 <= n, d <= l. Let j and k be two vals such that
j+k = h; we then have
then have

e_inf(h, l) <= e_inf(j, l) + e_inf(k, l)

e_2(h, l) <= e_2(j, l) + e_2(k, l)

e_1(h, l) <= e_1(j, l) + e_1(k, l)

Proof: Since (j+k)(f) = j(f) + k(f) and (j+k)(2) = j(2) + k(2), it
follows from the definition that relative error is linear in the
val, ie that e(j+k, f) = e(j, f) + e(k, f). We may define error
vectors by fixing an ordering of the numbers r, so that if vj is the
vector of errors for j, vk the vector of errors for k, and vh the
vector for h = j+k, the linearity of relative error entails that
vh = vj + vk. We may consider these vectors to reside in a normed
vector space with norm L_inf (the maximum of the absolute values of
the coordinates), L_2 (the Euclidean norm), or L_1 (the sum of the
absolute values). These norms are respectively e_inf, e_2 and e_1;
from this and the triangle inequality for each of these norms, the
result follows.
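
As a quick numerical illustration (a sketch of my own, not part of
the original argument: it assumes the 5-limit, l = 5, and the patent
vals for 5-, 7- and 12-et), the linearity of relative error and the
three triangle inequalities can be checked directly:

```python
from fractions import Fraction
from math import gcd, log2

PRIMES = [2, 3, 5]  # 5-limit, so l = 5

# the ratios r: n/d in lowest terms with 1 < n/d and 1 <= n, d <= 5
RATIOS = [Fraction(n, d) for n in range(1, 6) for d in range(1, 6)
          if gcd(n, d) == 1 and n > d]

def monzo(f):
    """Prime-exponent vector of a 5-limit rational."""
    n, d, exps = f.numerator, f.denominator, []
    for p in PRIMES:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        while d % p == 0:
            d //= p
            e -= 1
        exps.append(e)
    assert n == d == 1, "not a 5-limit rational"
    return exps

def val_apply(h, f):
    """h(f), where the val h lists its values on the primes."""
    return sum(hi * ei for hi, ei in zip(h, monzo(f)))

def rel_error(h, f):
    """e(h, f) = 100*(h(f) - h(2) log2(f)), in relative cents."""
    return 100 * (val_apply(h, f) - h[0] * log2(f))

def e_inf(h):
    return max(abs(rel_error(h, r)) for r in RATIOS)

def e_2(h):
    return sum(rel_error(h, r) ** 2 for r in RATIOS) ** 0.5

def e_1(h):
    return sum(abs(rel_error(h, r)) for r in RATIOS)

j = [7, 11, 16]                        # 7-et patent val
k = [5, 8, 12]                         # 5-et patent val
h = [ji + ki for ji, ki in zip(j, k)]  # 12-et val, h = j + k

# linearity: e(j+k, f) = e(j, f) + e(k, f)
for r in RATIOS:
    assert abs(rel_error(h, r) - rel_error(j, r) - rel_error(k, r)) < 1e-9

# the three triangle inequalities
assert e_inf(h) <= e_inf(j) + e_inf(k)
assert e_2(h) <= e_2(j) + e_2(k)
assert e_1(h) <= e_1(j) + e_1(k)
```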

Definition: Associated generator

Let j and k be valid vals such that a = j(2) and b = k(2) are
distinct, so that j and k are 2-distinct; say a < b. We reduce to
lowest terms by dividing through by the gcd: let d = gcd(a, b),
q = a/d, s = b/d. We form the fraction s/q, and define r/p by the
condition that p is the least denominator for which we have
|rq - sp| = 1. Exchanging the names of j and k, q and s, and p and r
if necessary, we arrange matters so that rq - sp = 1. We then define
the associated generator val g as j(2) k - k(2) j, and the reduced
generator val as g/d.
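
A procedural sketch of this definition (again my own illustration;
the example vals in the tests are an assumption, not from the post):

```python
from math import gcd

def associated_generator(j, k):
    """Associated generator val g = j(2) k - k(2) j and reduced val
    g/d, with j, k, p, r, q, s arranged so that r*q - s*p = 1 as in
    the definition above. Vals are lists of values on the primes,
    so j[0] = j(2)."""
    a, b = j[0], k[0]
    assert a != b, "j and k must be 2-distinct"
    if a > b:                      # arrange a = j(2) < b = k(2)
        j, k, a, b = k, j, b, a
    d = gcd(a, b)
    q, s = a // d, b // d
    # find the least p >= 1 admitting an r with |r*q - s*p| = 1
    p = 1
    while (s * p + 1) % q != 0 and (s * p - 1) % q != 0:
        p += 1
    r = (s * p + 1) // q if (s * p + 1) % q == 0 else (s * p - 1) // q
    if r * q - s * p == -1:        # exchange names so r*q - s*p = 1
        j, k = k, j
        q, s, p, r = s, q, r, p
    g = [j[0] * ki - k[0] * ji for ji, ki in zip(j, k)]
    return g, [gi // d for gi in g]
```

For the 7-et and 12-et patent vals [7, 11, 16] and [12, 19, 28] this
gives g = g/d = [0, -1, -4], the familiar meantone generator mapping
(up to sign).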

Theorem: Let l, h, f, and p be as above. Define
com_inf(h, l) = max |h(r)|, com_2(h, l) = sqrt(sum h(r)^2) and
com_1(h, l) = sum |h(r)|, where r is defined as before. If j and k are
2-distinct valid vals, and if g is the associated generator val, then

com_inf(g, l) <= (k(2) e_inf(j, l) + j(2) e_inf(k, l))/100 (Graham's
complexity)

com_2(g, l) <= (k(2) e_2(j, l) + j(2) e_2(k, l))/100

com_1(g, l) <= (k(2) e_1(j, l) + j(2) e_1(k, l))/100
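
Before the proof, here is a numeric spot-check of the first
inequality (a sketch of mine, assuming the 5-limit and the 12-et and
7-et patent vals):

```python
from fractions import Fraction
from math import gcd, log2

PRIMES = [2, 3, 5]
RATIOS = [Fraction(n, d) for n in range(1, 6) for d in range(1, 6)
          if gcd(n, d) == 1 and n > d]

def monzo(f):
    """Prime-exponent vector of a 5-limit rational."""
    n, d, exps = f.numerator, f.denominator, []
    for p in PRIMES:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        while d % p == 0:
            d //= p
            e -= 1
        exps.append(e)
    return exps

def val_apply(h, f):
    return sum(hi * ei for hi, ei in zip(h, monzo(f)))

def rel_error(h, f):
    return 100 * (val_apply(h, f) - h[0] * log2(f))

j, k = [12, 19, 28], [7, 11, 16]   # 12-et and 7-et patent vals
g = [j[0] * ki - k[0] * ji for ji, ki in zip(j, k)]  # j(2) k - k(2) j

com_inf = max(abs(val_apply(g, r)) for r in RATIOS)
e_inf_j = max(abs(rel_error(j, r)) for r in RATIOS)
e_inf_k = max(abs(rel_error(k, r)) for r in RATIOS)
bound = (k[0] * e_inf_j + j[0] * e_inf_k) / 100

assert com_inf <= bound
```

Here com_inf comes out as 4 (the major third takes four generators
in meantone) against a bound of roughly 4.14, so the inequality is
nearly tight for this pair.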

Proof:

Defining p and f as before, we define a linear temperament
associated to j and k by sending f to A^j(f) B^k(f) for some fixed A
and B. If we set G = A^p B^r and E = A^q B^s we may also express the
linear temperament in terms of G and E, since the transformation
matrix [[p r] [q s]] has determinant -1, and hence is invertible as
an integral matrix. Inverting it, we find that in terms of G and E,
the approximation to f is given by G^(g/d)(f) E^e(f), where
g/d = q k - s j is the reduced generator val and e = r j - p k is
the interval of equivalence val. (In the particular case of the j+k
et, we have j(2) + k(2) = n and A = B = 2^(1/n), so that
G = 2^((p+r)/n) is the generator within an interval of equivalence
of E = 2^((q+s)/n) = 2^(1/d).)
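
The change of basis can be verified with integer arithmetic alone (a
sketch of mine; the 12-et/7-et pair with p = 5, r = 3, q = 12, s = 7
is an assumed example, chosen so that rq - sp = 1):

```python
# 12-et and 7-et patent vals, named so that q = j(2) and s = k(2)
# (here d = gcd(12, 7) = 1)
j, k = [12, 19, 28], [7, 11, 16]
p, r, q, s = 5, 3, 12, 7
assert r * q - s * p == 1          # so [[p r] [q s]] has determinant -1

gen = [q * ki - s * ji for ji, ki in zip(j, k)]  # reduced generator val q k - s j
eq = [r * ji - p * ki for ji, ki in zip(j, k)]   # interval of equivalence val r j - p k

# A^j(f) B^k(f) = G^gen(f) E^eq(f) with G = A^p B^r and E = A^q B^s:
# matching exponents of A and of B on each prime gives two identities.
for ji, ki, gi, ei in zip(j, k, gen, eq):
    assert gi * p + ei * q == ji   # exponent of A agrees
    assert gi * r + ei * s == ki   # exponent of B agrees
```

Here gen = [0, -1, -4] and eq = [1, 2, 4]; note eq(2) = 1, so the
octave is a single interval of equivalence, as it should be when
d = 1.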

We have

e(j, f) = 100(j(f) - j(2) log2(f)),

e(k, f) = 100(k(f) - k(2) log2(f)).

From this we get

j(f) = j(2) log2(f) + e(j, f)/100,

k(f) = k(2) log2(f) + e(k, f)/100

Then

g(f) = j(2) k(f) - k(2) j(f) = j(2) (k(2) log2(f) + e(k, f)/100) -
k(2) (j(2) log2(f) + e(j, f)/100) = (j(2) e(k, f) - k(2) e(j, f))/100.

Forming vectors as before and applying the triangle inequality,
together with the homogeneity of the norms (||c v|| = |c| ||v||), we
get the theorem.